Google’s Quantum Dream May Be Just Around The Corner

In 1981, Richard Feynman, probably the most famous physicist of his time, asked the question: “Can we simulate physics on a computer?” At the time the answer was “yes in theory, but not yet in practice.” Today, we may be on the verge of answering “yes” in practice to Feynman’s original question. Quantum computers operate in such a strange way, and are so radically different from today’s computers, that understanding them requires some grasp of quantum mechanics and of bizarre properties like “quantum entanglement.” Quantum computers promise performance orders of magnitude beyond today’s supercomputers, and their application to specific computational problems like cryptography, Big Data analysis, computational fluid dynamics (CFD), and subatomic physics will change our world. Canadian quantum computing company D-Wave Systems has been at the center of Google’s efforts to pioneer this technology.



Reblogged from New Scientist

Google’s Quantum Dream May Be Just Around the Corner


31 August 2016

Revealed: Google’s plan for quantum computer supremacy

The field of quantum computing is undergoing a rapid shake-up, and engineers at Google have quietly set out a plan to dominate

SOMEWHERE in California, Google is building a device that will usher in a new era for computing. It’s a quantum computer, the largest ever made, designed to prove once and for all that machines exploiting exotic physics can outperform the world’s top supercomputers.

And New Scientist has learned it could be ready sooner than anyone expected – perhaps even by the end of next year.

The quantum computing revolution has been a long time coming. In the 1980s, theorists realised that a computer based on quantum mechanics had the potential to vastly outperform ordinary, or classical, computers at certain tasks. But building one was another matter. Only recently has a quantum computer that can beat a classical one gone from a lab curiosity to something that could actually happen. Google wants to create the first.

The firm’s plans are secretive, and Google declined to comment for this article. But researchers contacted by New Scientist all believe it is on the cusp of a breakthrough, following presentations at conferences and private meetings.

“They are definitely the world leaders now, there is no doubt about it,” says Simon Devitt at the RIKEN Center for Emergent Matter Science in Japan. “It’s Google’s to lose. If Google’s not the group that does it, then something has gone wrong.”

We have had a glimpse of Google’s intentions. Last month, its engineers quietly published a paper detailing their plans (arxiv.org/abs/1608.00263). Their goal, audaciously named quantum supremacy, is to build the first quantum computer capable of performing a task no classical computer can.

“It’s a blueprint for what they’re planning to do in the next couple of years,” says Scott Aaronson at the University of Texas at Austin, who has discussed the plans with the team.

So how will they do it? Quantum computers process data as quantum bits, or qubits. Unlike classical bits, these can store a mixture of both 0 and 1 at the same time, thanks to the principle of quantum superposition. It’s this potential that gives quantum computers the edge at certain problems, like factoring large numbers. But ordinary computers are also pretty good at such tasks. Showing quantum computers are better would require thousands of qubits, which is far beyond our current technical ability.
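The superposition idea can be made concrete with a toy Python sketch (the names here are illustrative, not any real quantum library): a qubit is just a pair of complex amplitudes, and measuring it samples 0 or 1 with probabilities given by the squared magnitudes of those amplitudes.

```python
import math
import random

# A qubit's state is a pair of amplitudes (a, b) for |0> and |1>,
# with |a|^2 + |b|^2 = 1. Measuring yields 0 with probability |a|^2.
def measure(state):
    a, b = state
    return 0 if random.random() < abs(a) ** 2 else 1

# Equal superposition: (1/sqrt(2))|0> + (1/sqrt(2))|1>
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))

# Sampling many measurements gives roughly half 0s and half 1s.
samples = [measure(plus) for _ in range(10000)]
print(sum(samples) / len(samples))  # close to 0.5
```

Each individual measurement still returns a definite 0 or 1; the superposition shows up only in the statistics, which is part of why exploiting it for computation is subtle.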

Instead, Google wants to claim the prize with just 50 qubits. That’s still an ambitious goal – publicly, they have only announced a 9-qubit computer – but one within reach.


To help it succeed, Google has brought the fight to quantum’s home turf. It is focusing on a problem that is fiendishly difficult for ordinary computers but that a quantum computer will do naturally: simulating the behaviour of a random arrangement of quantum circuits.

Any small variation in the input into those quantum circuits can produce a massively different output, so it’s difficult for the classical computer to cheat with approximations to simplify the problem. “They’re doing a quantum version of chaos,” says Devitt. “The output is essentially random, so you have to compute everything.”

To push classical computing to the limit, Google turned to Edison, one of the most advanced supercomputers in the world, housed at the US National Energy Research Scientific Computing Center. Google had it simulate the behaviour of quantum circuits on increasingly large grids of qubits, up to a 6 × 7 grid of 42 qubits.

This computation is difficult because as the grid size increases, the amount of memory needed to store everything balloons rapidly. A 6 × 4 grid needed just 268 megabytes, less than found in your average smartphone. The 6 × 7 grid demanded 70 terabytes, roughly 10,000 times that of a high-end PC.

Google stopped there because going to the next size up is currently impossible: a 48-qubit grid would require 2.252 petabytes of memory, almost double that of the top supercomputer in the world. If Google can solve the problem with a 50-qubit quantum computer, it will have beaten every other computer in existence.
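The memory figures follow from simple arithmetic: a full statevector simulation must store one complex amplitude for every one of the 2^n basis states of n qubits, so memory doubles with each added qubit. A quick sketch (assuming 16 bytes per double-precision complex amplitude; the article's 48-qubit figure matches 8-byte single precision) reproduces the numbers above:

```python
# Full statevector simulation stores 2**n complex amplitudes for
# n qubits, so memory doubles with every added qubit.
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

print(statevector_bytes(24) / 1e6)   # 6 x 4 grid (24 qubits): ~268 megabytes
print(statevector_bytes(42) / 1e12)  # 6 x 7 grid (42 qubits): ~70 terabytes
# The 2.252-petabyte figure quoted for 48 qubits matches 8 bytes
# per amplitude (single-precision complex):
print(statevector_bytes(48, 8) / 1e15)  # ~2.25 petabytes
```

This exponential blow-up is exactly why a modest 50-qubit device can outrun any classical machine at this particular task.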

Eyes on the prize

By setting out this clear test, Google hopes to avoid the problems that have plagued previous claims of quantum computers outperforming ordinary ones – including some made by Google.

Last year, the firm announced it had solved certain problems 100 million times faster than a classical computer by using a D-Wave quantum computer, a commercially available device with a controversial history. Experts immediately dismissed the results, saying they weren’t a fair comparison.

Google purchased its D-Wave computer in 2013 to figure out whether it could be used to improve search results and artificial intelligence. The following year, the firm hired John Martinis at the University of California, Santa Barbara, to design its own superconducting qubits. “His qubits are way higher quality,” says Aaronson.

It’s Martinis and colleagues who are now attempting to achieve quantum supremacy with 50 qubits, and many believe they will get there soon. “I think this is achievable within two or three years,” says Matthias Troyer at the Swiss Federal Institute of Technology in Zurich. “They’ve showed concrete steps on how they will do it.”

Martinis and colleagues have discussed a number of timelines for reaching this milestone, says Devitt. The earliest is by the end of this year, but that is unlikely. “I’m going to be optimistic and say maybe at the end of next year,” he says. “If they get it done even within the next five years, that will be a tremendous leap forward.”

The first successful quantum supremacy experiment won’t give us computers capable of solving any problem imaginable – based on current theory, those will need to be much larger machines. But having a working, small computer could drive innovation, or augment existing computers, making it the start of a new era.

Aaronson compares it to the first self-sustaining nuclear reaction, achieved by the Manhattan project in Chicago in 1942. “It might be a thing that causes people to say, if we want a full-scalable quantum computer, let’s talk numbers: how many billions of dollars?” he says.

Solving the challenges of building a 50-qubit device will prepare Google to construct something bigger. “It’s absolutely progress to building a fully scalable machine,” says Ian Walmsley at the University of Oxford.

For quantum computers to be truly useful in the long run, we will also need robust quantum error correction, a technique to mitigate the fragility of quantum states. Martinis and others are already working on this, but it will take longer than achieving quantum supremacy.

Still, achieving supremacy won’t be dismissed.

“Once a system hits quantum supremacy and is showing clear scale-up behaviour, it will be a flare in the sky to the private sector,” says Devitt. “It’s ready to move out of the labs.”

“The field is moving much faster than expected,” says Troyer. “It’s time to move quantum computing from science to engineering and really build devices.”

D-Wave Quantum Machine Tested by NASA and Google Shows Promise


Researchers from Google’s AI Lab say a controversial quantum machine that it and NASA have been testing since 2013 resoundingly beat a conventional computer in a series of tests.

Source: Controversial Quantum Machine Tested by NASA and Google Shows Promise | MIT Technology Review

Inside this box is a superconducting chip, cooled to within a fraction of a degree of absolute zero, that might put new power behind artificial-intelligence software.

Google says it has proof that a controversial machine it bought in 2013 really can use quantum physics to work through a type of math that’s crucial to artificial intelligence much faster than a conventional computer.

Governments and leading computing companies such as Microsoft, IBM, and Google are trying to develop what are called quantum computers because using the weirdness of quantum mechanics to represent data should unlock immense data-crunching powers. Computing giants believe quantum computers could make their artificial-intelligence software much more powerful and unlock scientific leaps in areas like materials science. NASA hopes quantum computers could help schedule rocket launches and simulate future missions and spacecraft. “It is a truly disruptive technology that could change how we do everything,” said Rupak Biswas, director of exploration technology at NASA’s Ames Research Center in Mountain View, California.

Biswas spoke at a media briefing at the research center about the agency’s work with Google on a machine the search giant bought in 2013 from Canadian startup D-Wave Systems, which is marketed as “the world’s first commercial quantum computer.” The computer is installed at NASA’s Ames Research Center in Mountain View, California, and operates on data using a superconducting chip called a quantum annealer. A quantum annealer is hard-coded with an algorithm suited to what are called “optimization problems,” which are common in machine-learning and artificial-intelligence software.
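To make “optimization problems” concrete: an annealer is wired to find low-energy settings of binary variables under an Ising-style cost function. A classical cousin of that process, simulated annealing, can be sketched in plain Python. This is a toy illustration of the problem class, not D-Wave's hardware or API; all names are the author's own.

```python
import math
import random

# Toy Ising-style problem: choose spins s_i in {-1, +1} to minimize
# sum over i<j of J[i][j] * s_i * s_j. A quantum annealer is built to
# minimize exactly this kind of cost; simulated annealing is the
# classical heuristic analogue.
def energy(spins, J):
    n = len(spins)
    return sum(J[i][j] * spins[i] * spins[j]
               for i in range(n) for j in range(i + 1, n))

def simulated_annealing(J, n, steps=20000, t_start=5.0, t_end=0.01):
    spins = [random.choice([-1, 1]) for _ in range(n)]
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # cooling schedule
        i = random.randrange(n)
        old = energy(spins, J)
        spins[i] = -spins[i]                 # propose a single spin flip
        delta = energy(spins, J) - old
        if delta > 0 and random.random() >= math.exp(-delta / t):
            spins[i] = -spins[i]             # reject the uphill move
    return spins, energy(spins, J)

# Antiferromagnetic chain: neighbouring spins prefer to disagree.
n = 8
J = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]
spins, e = simulated_annealing(J, n)
print(spins, e)  # low energy; the ground state here is -7 (alternating spins)
```

A quantum annealer attacks the same cost landscape, but uses quantum tunnelling rather than thermal fluctuations to escape local minima.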

However, D-Wave’s chips are controversial among quantum physicists. Researchers inside and outside the company have been unable to conclusively prove that the devices can tap into quantum physics to beat out conventional computers.

Hartmut Neven, leader of Google’s Quantum AI Lab in Los Angeles, said today that his researchers have delivered some firm proof of that. They set up a series of races between the D-Wave computer installed at NASA against a conventional computer with a single processor. “For a specific, carefully crafted proof-of-concept problem we achieve a 100-million-fold speed-up,” said Neven.

Google posted a research paper describing its results online last night, but it has not been formally peer-reviewed. Neven said that journal publications would be forthcoming.

Google’s results are striking—but even if verified, they would only represent partial vindication for D-Wave. The computer that lost in the contest with the quantum machine was running code that had it solve the problem at hand using an algorithm similar to the one baked into the D-Wave chip. An alternative algorithm is known that could have let the conventional computer be more competitive, or even win, by exploiting what Neven called a “bug” in D-Wave’s design. Neven said the test his group staged is still important because that shortcut won’t be available to regular computers when they compete with future quantum annealers capable of working on larger amounts of data.

Matthias Troyer, a physics professor at the Swiss Federal Institute of Technology, Zurich, said making that come true is crucial if chips like D-Wave’s are to become useful. “It will be important to explore if there are problems where quantum annealing has advantages over even the best classical algorithms, and to find if there are classes of application problems where such advantages can be realized,” he said, in a statement with two colleagues.

Last year Troyer’s group published a high-profile study of an earlier D-Wave chip that concluded it didn’t offer advantages over conventional machines. That question has now been partially resolved, they say. “Google’s results indeed show a huge advantage on these carefully chosen instances.”

Google is competing with D-Wave to make a quantum annealer that could do useful work. Last summer the Silicon Valley giant opened a new lab in Santa Barbara, headed by a leading academic researcher, John Martinis (see “Google Launches Effort to Build Its Own Quantum Computer”).

Martinis is also working on quantum hardware that would not be limited to optimization problems, as annealers are. A universal quantum computer, as such a machine would be called, could be programmed to take on any problem and would be much more useful but is expected to take longer to perfect. Government and university labs, Microsoft (see “Microsoft’s Quantum Mechanics”), and IBM (see “IBM Shows Off a Quantum Computing Chip”) are also working on that technology.

John Giannandrea, a VP of engineering at Google who coördinates the company’s research, said that if quantum annealers could be made practical, they would find many uses powering up Google’s machine-learning software. “We’ve already encountered problems in the course of our products impractical to solve with existing computers, and we have a lot of computers,” he said. However, Giannandrea noted, “it may be several years before this research makes a difference to Google products.”

Reid Hoffman: Venture Capitalist Loser | MIT Technology Review



UPDATE: Since I wrote this post last week, on November 25th, events have swiftly unfolded to underscore the points I made in criticism of Reid Hoffman’s views on venture capital in his interview with the MIT Technology Review. Bill Gates and a host of global leaders, Silicon Valley industry leaders, and high-tech billionaires announced the Clean Tech Initiative at the opening of the UN COP21 Climate Change Conference. This initiative precisely makes my point that venture capitalists like Reid Hoffman fail to see their social responsibility, or to examine the ethics of their investments. At the time I wrote the opening paragraph of this post (below), I had absolutely no idea that my points would be validated by Bill Gates, President Obama, and high-tech industry leaders: Meg Whitman of HP, Facebook CEO Mark Zuckerberg, Alibaba Chairman Jack Ma, Amazon CEO Jeff Bezos, Ratan Tata, retired chairman of India’s Tata Sons (the holding company of the Tata group), and South African billionaire Patrice Motsepe of African Rainbow Minerals. I would now go so far as to say that Hoffman’s views are an embarrassment to himself in the face of the vision of others.


READ MORE: Bill Gates, Mark Zuckerberg, Jeff Bezos And A Host of Others Announce Clean Tech Initiative

An insightful interview with Reid Hoffman, venture capitalist and founder of LinkedIn. But to my mind, Hoffman seems blasé about Big Ideas and “deep tech” funding. I share the views of Startup Genome founder Max Marmer, and bemoan the limited focus of VCs on world-changing technologies, leaving it to billionaire angels. I also sense myopia about the ongoing intense debate over the distortion of the sharing economy by Uber, Airbnb, and others. Thanks to Gary Reischel for posting this article on his Facebook page.

My attention is focused on two privately funded Big Idea entrepreneurial ventures in Vancouver, B.C.: General Fusion and D-Wave. General Fusion and at least two other companies, in California and Germany, are competing against the two massively funded governmental nuclear fusion projects: ITER at Cadarache in France, and the National Ignition Facility at the U.S. Department of Energy’s Lawrence Livermore National Laboratory. D-Wave is pioneering quantum computing, having successfully sold two early quantum computers to Google and to Lockheed Martin/NASA in Silicon Valley.

Max Marmer…read more: Reversing The Decline In Big Ideas

Read More mayo615: Are Venture Capitalists and Big Ideas Converging Again?

 

Source: Venture Capital in Transition | MIT Technology Review

Reid Hoffman has worked the entire tech startup ecosystem: he cofounded LinkedIn in 2002, used the money he made there to become one of Silicon Valley’s most prolific angel investors, invested early in Facebook, Zynga, and many others, and is now a venture capitalist at Greylock Partners. At Greylock, which he joined in 2009, Hoffman has focused his investments on consumer Internet companies that use software to create networks of millions of users, such as the home-sharing site Airbnb.

Startup incubators that nurture entrepreneurs’ early ideas, super-angels who invest small amounts in large numbers of early-stage companies, and project crowdfunding via Internet sites such as Kickstarter are all presenting alternatives to traditional VCs. Hoffman thinks firms like his can compete by providing services such as dedicated teams that recruit engineers and holding dozens of networking and educational events to help startups get big faster. He’s currently teaching a Stanford University class for entrepreneurs in “blitzscaling,” his term for the rapid scaling up of startups.

Hoffman spoke with MIT Technology Review contributing editor Robert Hof about why that’s especially important today and whether enough investing is being done in core technologies such as computer science, networking, and semiconductors.

How have changes in technology altered the way you invest?
Starting a software company is now a lot cheaper and faster than it used to be, thanks to Amazon Web Services, open-source software, and the ability to build an app on iOS or Android. Speed to realizing a global opportunity is more critical competitively. I wanted to build out a [venture capital] platform that was appropriate to the modern age of entrepreneurship.

VCs have always provided help on networking and hiring. How is your platform different?

Think about how an application gets built on iOS. It calls up services on Apple’s platform, such as a graphics framework or how to create a dialog box. Similarly, a business gets built by hiring people, developing its product or service, growing its revenues. The modern venture firm needs to provide a set of services that the company can call upon. We have a dedicated team to recruit engineers and product people. We have more than a dozen communities of people from big Valley companies like Apple and Facebook focused on technical topics such as big data and user growth. They meet with our companies to teach things like growth hacking, the use of social media, and other low-cost alternatives for marketing.

“There are still billions of people coming online. Also, software is affecting almost every industry … And we’re just beginning to see how data informs everything.”

How long will these software-driven networks you’re focused on be good investing opportunities?

There are still billions of people coming online. Also, software is affecting almost every industry, from transportation, with Uber and self-driving cars, to personalized medicine, health, and genetics. And we’re just beginning to see how data informs everything. Those trends are in the very early innings, so they’re the ones that will have the macroeconomic impact over the next five to 10 years.

You’ve said you don’t think there’s a bubble in tech investing, but surely not all these upstarts are worth so much?

People are so exuberant about finding their way to the cutting-edge companies that valuations are going up across the board. Some companies are so massively valuable that even when you invest in them at an accelerated valuation, they’re still cheap in retrospect. But many companies are given [high] valuations when they actually shouldn’t be.

I don’t think higher valuations in private [venture capital fund-raising] rounds lead to a massive [public] market correction. A private down round [fund-raising that values the company at a lesser amount than the previous round] doesn’t destabilize the public capital markets. But it’s still pretty frothy. So when you’re seeing inflated valuations, you sit it out.

Have you been sitting out more often?
We’ve passed on many more deals in the past two years.

Is true innovation beyond slick apps being financed to the extent it should?
Markets tend to go toward realizable, short-term rewards that require little capital.

That tends to favor pure-play software companies like Airbnb, Dropbox, and Uber that have global reach and network effects [in which a service becomes much more valuable as more people use it]. If more capital naturally flowed toward deep tech, I think that would be a good thing for the world. But you do have SpaceX, you do have Tesla. Deep tech isn’t that starved for capital.

VC investing is way up, but the traditional exit, the IPO, often comes after a company has already grown quite large. As a result, public investors, as well as employees, don’t share as much of the increase in value. Is that a problem?
It used to be, back in 1993–’96, tech companies would go public and then public market shareholders would benefit from the huge growth in valuations. Now it’s more the private investors who benefit. I don’t think that’s necessarily a problem.

Doesn’t that go against the idea that employee stock options and so on will democratize wealth, or at least spread it more broadly?
Ideally, you’d like to make the capital returns available to everybody, not just to the folks who can participate in these elite private funds or elite private financings. I’d rather have it democratized. But on the other hand, it makes complete sense from a company perspective to delay liquidity, because they can run much more efficiently as a private company and get as much momentum as possible.

Moore’s Law at 50: At Least A Decade More To Go And Why That’s Important

Gordon Moore, now 86, is still spry and still given to the dry sense of humor for which he has always been known. In an Intel interview this year he said that he had Googled “Moore’s Law” and “Murphy’s Law,” and that “Moore’s beat Murphy’s by two to one,” demonstrating how ubiquitous Dr. Moore’s observation has become. This week we are commemorating the 50th anniversary of the April 19, 1965 issue of Electronics magazine, in which Dr. Moore first described his vision of doubling the number of transistors on a chip every year or so.




It may seem geeky to be interested in the details of 14-nanometer (a nanometer is a billionth of a meter) integrated-circuit design rules, 7-nanometer FinFET (transistor) widths, or 5-nanometer wire widths, but the fact of the matter is that these arcane topics are driving the future of technology applications, telecommunications, business, and economic productivity. As just one example, this week’s top telecommunications business news is the proposed merger of Nokia and Alcatel-Lucent, with the vision to deploy a 5G (fifth-generation) LTE (Long Term Evolution) mobile telephony network. Building out such a high-speed voice and data network is almost entirely dependent on the power of the microprocessors in the system, and ultimately on Moore’s Law. Nokia apparently believes that it can deploy this technology sooner rather than later and essentially leapfrog the competition. My UBC Management students will recall that in my first university teaching experience in Industry Analysis, I chose to expose them to the semiconductor industry for this exact reason. Semiconductors are in virtually every electrical device we use on a daily basis.

However, as we cross this milestone we are able to see that we are near the limits of the physics of Moore’s Law. International Business Strategies, a Los Gatos-based consulting firm, estimates that only a decade ago it cost just $16 million to design and test a new very-large-scale integrated circuit (VLSI), but that today the design and testing cost has skyrocketed to $132 million. Keep in mind that the cost of design, fabrication, and testing of bleeding-edge ICs has been reduced dramatically over the decades by automation, itself also driven by Moore’s Law. So we are seeing a horizon line. That said, entirely new technologies are already in the laboratories and may, in a way, extend Moore’s Law, and the dramatic improvements in cost and productivity that come with it, but through entirely new and different means.
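The doubling Dr. Moore described can be written as a one-line formula. The sketch below uses the Intel 4004's 2,300 transistors (1971) as a baseline and a two-year doubling period (the commonly cited revised figure; Moore's 1965 paper said roughly every year); the numbers are illustrative only.

```python
# Moore's observation as a simple exponential: transistor counts
# double roughly every two years. Baseline: the Intel 4004 (1971),
# with about 2,300 transistors.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

print(f"{transistors(2015):.2e}")  # on the order of 10 billion transistors
```

Even with a crude baseline, the formula lands within an order of magnitude of real 2015 chips, which is the sense in which Moore's Law has held for fifty years.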

 

 

Quantum tech is more than just crazy science: It’s good business from mobile payments to fighting the NSA,

Management students may ask why the title of this post claims that quantum technology is good business. So let me try to explain, and then read on to the PandoDaily post by David Holmes. The bottom line is that some basic understanding of quantum mechanics is going to be a valuable management skill going forward. Why? Read on



Yesterday, National Public Radio in the United States (which can be heard online) broadcast a fascinating discussion about Monday’s announcement of the long-awaited breakthrough: evidence of gravitational waves that carry the fingerprint of the original Big Bang. Featuring legendary astrophysicist Leonard Susskind of Stanford and a number of other leading physicists, the discussion inevitably drifted to quantum mechanics and the Big Bang itself, which Stanford physics professor Chao-Lin Kuo described as “mind scrambling.” Quantum entanglement is another area that defies common sense: measurements on entangled particles are correlated in ways that seem to require influences faster than the speed of light, which should be impossible. Einstein’s famous quote, “God does not play dice,” was his reaction to the non-deterministic nature of quantum events and quantum theory. It turns out that the random nature of quantum mechanics provides a superior approach to hideously complex problems, finding the best “probabilistic” solutions. Quantum mechanics is also providing a potential way forward in encryption and privacy.
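The correlations that troubled Einstein can be illustrated by sampling the quantum prediction for an entangled Bell pair measured in the same basis: each outcome on its own is a fair coin flip, yet the two particles always agree. This toy samples the predicted statistics only; it does not simulate the underlying physics.

```python
import random

# Toy sampler of the quantum prediction for a Bell pair measured in
# the same basis. Each result is individually random, but the two
# results are perfectly correlated.
def measure_bell_pair():
    outcome = random.choice([0, 1])  # each side looks like a fair coin...
    return outcome, outcome          # ...but the two sides always agree

pairs = [measure_bell_pair() for _ in range(1000)]
print(all(a == b for a, b in pairs))          # True: perfect correlation
print(sum(a for a, _ in pairs) / len(pairs))  # ~0.5: each side is random
```

The puzzle, of course, is that no signal passes between the particles: the correlation is built into the shared quantum state, not transmitted at measurement time.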

Read and listen on NPR: Scientists Announce Big Bang Breakthrough

However, all of this “mind scrambling” pure science is rapidly becoming applied science: science turned into useful technological innovation and applied to economic activity. Some of my students may recall our discussions of Moore’s Law in semiconductor design. As Moore’s Law reaches its finite limit, quantum “technology” is creating one path forward, providing new solutions for Internet security and supercomputing. David Holmes’s PandoDaily article today attempts to explain in greater detail why this is important for business.

Vern Brownell, CEO of D-Wave Systems, has written an excellent explanation, in layman’s terms, of the importance of quantum computing and how it differs from “deterministic” computing.

Read more:  Solving the unsolvable: a quantum boost for supercomputing

Best of all, there is an excellent book for those willing to devote the time and grey matter to quantum physics: “Quantum Physics: A Beginner’s Guide,” by Alastair Rae, available in paperback on Amazon or as a Kindle e-book.


Speedy qubits lead the quantum evolution


Understanding the applications of quantum computing



SUMMARY: Advances in quantum computing can have countless applications, from drug discovery to investment and health care. Lockheed Martin’s collaborations with the University of Southern California and D-Wave Systems continue to push science and technology boundaries, recently quadrupling qubit capacity in the D-Wave Two machine.

There are a few defining moments of innovation that we can point to that changed the future. Quantum computing may be that next big moment.

“Computationally, quantum computing is the equivalent of the Wright Brothers at Kitty Hawk,” said Greg Tallant, the program manager at Lockheed Martin.

Nearly everything around us, from cars and airplanes to smartphones and watches, has software. Debugging millions of lines of code for these increasingly complex systems is a big data problem that could cost big bucks.

“With quantum computing it’s not that we can solve problems that we cannot solve classically, it’s just that we can solve things faster,” said Daniel Lidar, the Scientific and Technical director at the University of Southern California Lockheed Martin Quantum Computation Center (QCC).

Unlike regular computers, quantum computers are not limited to definite zeroes and ones (i.e., bits). Quantum bits, or qubits, can exist in superpositions of zero and one, which lets a quantum computer explore many input combinations at once.

A joint effort of Lockheed Martin Corporation and USC, the QCC recently upgraded to the D-Wave Two quantum computer, designed with 512 qubits, up from 128 in the original D-Wave One, both built by D-Wave Systems. The D-Wave Two is the largest programmable quantum information processor ever built.

“The QCC is a perfect example of industry and science coming together to advance our knowledge and quantum capabilities, pushing the boundaries of information science and technology,” said Bo Ewald, President of D-Wave Systems U.S.

Gigaom


A breakthrough in quantum cryptography could make financial markets of the future cheat-proof


As I wrote in an earlier post, the world of quantum computing and cryptography shows great promise for the future, particularly in overcoming the current problems with encryption and Internet privacy.

Read more: Quantum encryption takes center stage in wake of NSA encryption cracking

Quantum Computing Takes Center Stage In Wake of NSA Encryption Cracking

In the late 1990s, while I was with Ascend Communications, I participated in the creation of the Point-to-Point Tunneling Protocol (PPTP) with engineers at Microsoft and Cisco Systems, now an Internet Engineering Task Force (IETF) industry standard. PPTP is the technical means for creating the “virtual private networks” we use at UBC, encrypting “open” Internet packets with 128-bit IPsec code buried inside public packets. It was an ingenious solution, enabling private Internet traffic that we assumed would last for a very long time. It was not to be, as we now know. Most disturbing, in the 1990s the US Congress debated giving the government the key to all encryption, a proposal that was resoundingly defeated. Now, the NSA appears to have illegally circumvented this prohibition and cracked encryption anyway. But this discussion is not about the political, legal and moral issues, significant as they are. In this post I am more interested in exploring the question: “So now what do we do?” There may be an answer on the horizon, and Canada is already a significant participant in the potential solution.

As it happens, Canada is already at the forefront of quantum computing, a critically important new area of research and development with significant future potential in both computing and cryptography. I have previously written about Vancouver-based D-Wave, which has produced commercial systems purchased by Google and Lockheed Martin Aerospace. The Institute for Quantum Computing in Waterloo, Ontario is the other major centre of quantum computing research in Canada. Without taking a major diversion to explain quantum mechanics and its applications in computing and cryptography, there is a great PBS Nova broadcast, available online, that provides a basic tutorial. The Economist article below also does an admirable job of making this area understandable, and of explaining the role the Waterloo research centre is playing in advancing cryptography to an entirely new level.

We need to ensure that Canada remains at the forefront of this critically important new technology.

Cryptography

The solace of quantum

Eavesdropping on secret communications is about to get harder

Cryptography is an arms race between Alice and Bob, and Eve. These are the names cryptographers give to two people who are trying to communicate privily, and to a third who is trying to intercept and decrypt their conversation. Currently, Alice and Bob are ahead—just. But Eve is catching up. Alice and Bob are therefore looking for a whole new way of keeping things secret. And they may soon have one, courtesy of quantum mechanics.

At the moment cryptography concentrates on making the decrypting part as hard as possible. The industry standard, known as RSA (after its inventors, Ron Rivest, Adi Shamir and Leonard Adleman, of the Massachusetts Institute of Technology), relies on two keys, one public and one private. These keys are very big numbers, each of which is derived from the product of the same two prime numbers. Anyone can encrypt a message using the public key, but only someone with the private key can decrypt it. To find the private key, you have to work out what the primes are from the public key. Make the primes big enough—and hunting big primes is something of a sport among mathematicians—and the task of factorising the public key to reveal the primes, though possible in theory, would take too long in practice. (About 40 quadrillion years with the primes then available, when the system was introduced in 1977.)
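The public/private key mechanics can be seen in a toy Python round trip using the classic textbook primes 61 and 53 (my own illustration; real RSA keys use primes hundreds of digits long, plus padding schemes omitted here).

```python
# Toy RSA with tiny textbook primes; real keys use primes hundreds of
# digits long, which is what makes factorising n impractical.
p, q = 61, 53
n = p * q                   # public modulus (3233), trivially factorable here
phi = (p - 1) * (q - 1)     # Euler's totient, 3120
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public (e, n)
plaintext = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert plaintext == message
print(d)  # 2753
```

Recovering `d` from the public key alone requires factorising `n` into `p` and `q`, trivial for 3233, but infeasible at real key sizes with classical machines.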

Since the 1970s, though, the computers that do the factorisation have got bigger and faster. Some cryptographers therefore fear for the future of RSA. Hence the interest in quantum cryptography.

Alice, Bob and Werner, too?

The most developed form of quantum cryptography, known as quantum key distribution (QKD), relies on stopping interception, rather than preventing decryption. Once again, the key is a huge number—one with hundreds of digits, if expressed in the decimal system. Alice sends this to Bob as a series of photons (the particles of light) before she sends the encrypted message. For Eve to read this transmission, and thus obtain the key, she must destroy some photons. Since Bob will certainly notice the missing photons, Eve will need to create and send identical ones to Bob to avoid detection. But Alice and Bob (or, rather, the engineers who make their equipment) can stop that by using two different quantum properties, such as the polarities of the photons, to encode the ones and zeros of which the key is composed. According to Werner Heisenberg’s Uncertainty Principle, only one of these two properties can be measured, so Eve cannot reconstruct each photon without making errors. If Bob detects such errors he can tell Alice not to send the actual message until the line has been secured.
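The detection logic can be sketched in a few lines of Python. This is a simplified BB84-style simulation (my own illustration, not the exact scheme any of these firms uses): positions where Alice’s and Bob’s randomly chosen bases match form the sifted key, and an intercept-and-resend eavesdropper pushes the error rate in that key toward 25%.

```python
import random

# Minimal BB84-style sketch: Alice encodes bits in random bases, Bob
# measures in random bases, and they keep only positions where the bases
# matched. An eavesdropper measuring in the wrong basis randomizes the
# bit, producing errors Alice and Bob can detect by comparing a sample.
def bb84(n_photons, eavesdrop=False, rng=random.Random(0)):
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_photons)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_photons)]
    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            if eve_basis != a_basis:      # wrong basis collapses the photon,
                bit = rng.randint(0, 1)   # so Eve resends a random outcome
        if b_basis == a_basis:
            bob_bits.append(bit)          # matching basis reads the bit
        else:
            bob_bits.append(rng.randint(0, 1))  # mismatch: random result
    # Sifting: keep only positions where Alice's and Bob's bases agree.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return errors / max(len(sifted), 1)

print(bb84(2000))                  # 0.0: no Eve, sifted key is error-free
print(bb84(2000, eavesdrop=True))  # roughly 0.25: Eve betrays herself
```

The 25% figure follows from Eve guessing the wrong basis half the time, and each wrong guess flipping Bob’s bit half the time.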

One exponent of this approach is ID Quantique, a Swiss firm. In collaboration with Battelle, an American one, it is building a 700km (440-mile) fibre-optic QKD link between Battelle’s headquarters in Columbus, Ohio, and the firm’s facilities in and around Washington, DC. Battelle will use this to protect its own information and the link will also be hired to other firms that want to move sensitive data around.

QuintessenceLabs, an Australian firm, has a different approach to encoding. Instead of tinkering with photons’ polarities, it changes their phases and amplitudes. The effect is the same, though: Eve will necessarily give herself away if she eavesdrops. Using this technology, QuintessenceLabs is building a 560km QKD link between the Jet Propulsion Laboratory in Pasadena, California, which organises many of NASA’s unmanned scientific missions, and the Ames Research Centre in Silicon Valley, where a lot of the agency’s scientific investigations are carried out.

A third project, organised by Jane Nordholt of Los Alamos National Laboratory, has just demonstrated how a pocket-sized QKD transmitter called the QKarD can secure signals sent over public data networks to control smart electricity grids. Smart grids balance demand and supply so that electricity can be distributed more efficiently. This requires constant monitoring of the voltage, current and frequency of the grid in lots of different places—and the rapid transmission of the results to control centres. That transmission, however, also needs to be secure in case someone malicious wants to bring the system down.

In their different ways, all these projects are ambitious. All, though, rely on local fixed lines to carry the photons. Other groups of researchers are thinking more globally. To do that means sending quantum-secured data to and from satellites.

At least three groups are working on this: Thomas Jennewein and his team at the Institute for Quantum Computing in Waterloo, Canada; a collaboration led by Anton Zeilinger at the University of Vienna and Jian-Wei Pan at the University of Science and Technology of China; and Alex Ling and Artur Ekert at the Centre for Quantum Technologies in Singapore.

Dr Jennewein’s proposal is for Alice to beam polarisation-encoded photons to a satellite. Once she has established a key, Bob, on another continent, will wait until the satellite passes over him so he can send some more photons to it to create a second key. The satellite will then mix the keys together and transmit the result to Bob, who can work out the first key because he has the second. Alice and Bob now possess a shared key, so they can communicate securely by normal (less intellectually exhausting) terrestrial networks. Dr Jennewein plans to test the idea, using an aircraft rather than a satellite, at some point during the next 12 months.
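The key-mixing step can be illustrated with a one-time-pad-style XOR (a hedged sketch of the idea, not Dr Jennewein’s actual protocol): the satellite broadcasts the XOR of the two keys, and only Bob, who holds the second key, can recover the first.

```python
import secrets

# Sketch of the satellite key-relay idea: the satellite XORs Alice's key
# k1 with Bob's key k2 and broadcasts the result. The broadcast reveals
# nothing on its own; only Bob, who knows k2, can undo the mix.
key_len = 32  # bytes
k1 = secrets.token_bytes(key_len)  # key established Alice <-> satellite
k2 = secrets.token_bytes(key_len)  # key established Bob   <-> satellite

mixed = bytes(a ^ b for a, b in zip(k1, k2))         # broadcast by satellite
recovered = bytes(m ^ b for m, b in zip(mixed, k2))  # Bob undoes the mix
assert recovered == k1  # Alice and Bob now share k1
```

With the shared key in hand, the actual message can travel over ordinary terrestrial networks under conventional symmetric encryption.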

An alternative, but more involved, satellite method is to use entangled photon pairs. Both Dr Zeilinger’s and Dr Ling’s teams have been trying this.

Entanglement is a quantum effect that connects photons intimately, even when they are separated by a large distance. Measure one particle and you know the state of its partner. In this way Alice and Bob can share a key made of entangled photon pairs generated on a satellite. Dr Zeilinger hopes to try this with a QKD transmitter based on the International Space Station. He and his team have been experimenting with entanglement at ground level for several years. In 2007 they sent entangled photon pairs 144km through the air across the Canary Islands. Dr Ling’s device will test entanglement in orbit, but not send photons down to Earth.
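As a loose classical analogy for why shared entangled pairs yield a shared key, here is a Python sketch (my own illustration): each pair carries correlated outcomes, so whenever Alice and Bob happen to measure in the same basis they read identical bits. Real entanglement does more than shared randomness, notably letting the parties detect eavesdropping via Bell tests, which no classical model like this one captures.

```python
import random

# Classical sketch of entangled-pair key sharing: each photon pair is
# pre-assigned one outcome per basis (a local hidden-variable model; real
# entanglement violates the Bell inequalities such models obey). Matching
# measurement bases always agree, and mismatches are discarded as in BB84.
rng = random.Random(7)
n_pairs = 512
outcomes = [(rng.randint(0, 1), rng.randint(0, 1)) for _ in range(n_pairs)]

alice_bases = [rng.randint(0, 1) for _ in range(n_pairs)]
bob_bases   = [rng.randint(0, 1) for _ in range(n_pairs)]
alice_bits  = [outcomes[i][alice_bases[i]] for i in range(n_pairs)]
bob_bits    = [outcomes[i][bob_bases[i]]   for i in range(n_pairs)]

# Keep only positions where both chose the same basis: a shared secret key.
alice_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
bob_key   = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
assert alice_key == bob_key
```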

If this sort of thing works at scale, it should keep Alice and Bob ahead for years. As for poor Eve, she will find herself entangled in an unbreakable quantum web.

From the print edition: Science and technology

Vancouver D-Wave’s Groundbreaking Quantum Computer Sale to Lockheed Martin Aerospace


D-Wave‘s Very Strange New Computing Technology Promises Great Speed and Capacity 

This is a very Big Deal, which also increases the likelihood that Big Data will be a very Big Deal.

While the Canadian economy is expected to languish in the doldrums for the foreseeable future, D-Wave, a Vancouver quantum computing company with e@UBC funding, is making big waves (pun intended). Only last month I sat with Todd Farrell, UBC’s venture fund manager, and we discussed D-Wave. How could an advanced technology like this thrive in Vancouver, without needing to be in Silicon Valley? Todd argued convincingly that Vancouver was a perfect location for D-Wave, and that there was no longer any need for companies to trudge south. So now, seemingly out of the blue, we have three Vancouver-based high tech companies that may be showing Canada the way out of its “natural resource curse”: D-Wave, General Fusion, and potentially also Hootsuite.

Read more about Canada’s “natural resource curse”: http://mayo615.com/2013/03/11/alberta-bitumen-bubble-and-the-canadian-economy-industry-analysis-case-study/

I will try to explain this in layman’s terms. Quantum effects are vital to modern electronics. They can also be a damnable nuisance. Make a transistor too small, for example, and electrons within it can simply vanish from one place and reappear in another, because their location is quantum-mechanically indeterminate. Currents thus leak away, and signals are degraded.

Other people, like D-Wave’s founders Geordie Rose and Vern Brownell, saw opportunity instead. Some of the weird things that go on at the quantum scale afford the possibility of doing computing in a new and faster way, and of sending messages that—in theory at least—cannot be intercepted. Several groups of such enthusiasts have been working to build quantum computers capable of solving some of the problems that stump today’s machines, such as finding the prime factors of numbers with hundreds of digits or trawling through large databases.

As recently as 2012, The Economist was reporting that quantum computing was in its infancy and years from commercial realization. At that time, I had also discussed quantum computing with our resident local expert on advanced semiconductors, Andrew Labun, who shared the view of The Economist. It now appears that D-Wave is at the forefront of this technology, having succeeded in selling one of its computers to Lockheed Martin Aerospace for hideously complex applications in space, bleeding-edge radar technology, and aerospace finite element analysis (FEA). FEA simulates the performance of airframes in high-stress, high-speed environments.

This kind of analysis has been done for years at facilities like NASA Ames Research Center in Mountain View, CA, home of the largest wind tunnel in the world, but its complexity required hours of supercomputer number crunching to show results. Silicon Graphics, which sat directly next door to NASA Ames, tried to sell its 3D visualization supercomputers to NASA with some limited success, but the technology at that time was not up to the task; Silicon Graphics is no longer in business. D-Wave’s sale to Lockheed Martin, which also sits on the NASA Ames site, suggests that D-Wave’s technology is ready for prime time.

This is a potentially huge leap forward, and a strong message on what is needed to lift the Canadian economy: technological innovation and basic research and development funding.

Read more about D-Wave in the New York Times: http://www.nytimes.com/2013/03/22/technology/testing-a-new-class-of-speedy-computer.html?pagewanted=all&_r=0

Read more about D-Wave in the Vancouver Sun: Metro Vancouver firm’s groundbreaking quantum computer wins confidence of U.S. aerospace giant.

Read more on quantum computing in The Economist: http://www.economist.com/node/21548151

Memristor Breakthrough Promises Dramatic 99% Reduction In Energy


We are all indebted to Intel’s great scientist Dov Frohman for the development of the original “floating gate” technology, which made possible EPROMs (electronically programmable read-only memories) and later E2PROM (electrically erasable and programmable memory), now known as “flash memory.” As with Gordon Moore’s Law, physics has pointed the way to the next generation of “flash,” which will dramatically reduce the energy required to operate flash memory. This dramatic reduction in required energy opens doors to new applications and reduces demand on battery technology, which is also pushing at its limits. The net of all this is potentially bright prospects for mobile devices and their capabilities well into the future.

Meanwhile, developments in quantum computing hold the potential to extend Moore’s Law even further.

http://www.smartplanet.com/blog/intelligent-energy/cut-pc-energy-use-by-99-use-a-memristor/16083?tag=search-river