Topology in play as Microsoft Research ups quantum computing ante

Microsoft has garnered two top boffins as it ‘doubles down’ on a quantum computing bet that is unique even in a field brimming with unique approaches. At the heart of the Microsoft effort is an approach known as topological quantum computing, a path different from the ones others are taking.

Among the topological qubit researchers now joining the company are Charles Marcus of the Niels Bohr Institute at the University of Copenhagen and Leo Kouwenhoven, a distinguished professor at Delft University of Technology. They have been deep in the innards of topology, but want to be mothers of actual invention.

The news was covered in the New York Times by the redoubtable John Markoff in “Microsoft spends big to build quantum computer out of science fiction.” That is a title made for Amazing Techno Tales!

A topological quantum computer is one that does not use the venerable trapped-quantum-particle approach. Instead, the topological type (according to Wikipedia):

“Employs two-dimensional quasiparticles called anyons, whose world lines pass around one another to form braids in a three-dimensional spacetime (i.e., one temporal plus two spatial dimensions). These braids form the logic gates that make up the computer.”

The Wikipedia entry goes on to suggest that the topological approach is more stable and, one might guess, in need of less error correction. (Ed. note: Hope we don’t have to make a correction!)
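
If that stability claim holds up, the practical payoff shows in error-correction overhead. As a rough back-of-the-envelope sketch of my own (the threshold, target, and scaling formula below are textbook ballpark figures for surface codes, not anything from Microsoft’s program), here is how steeply the physical-qubit budget per logical qubit drops as the underlying qubit gets more stable:

```python
# Illustrative back-of-envelope only: generic surface-code ballpark numbers,
# not figures from Microsoft's topological effort. Rough scaling used:
# p_logical ~ (p_physical / p_threshold) ** ((d + 1) // 2) for code distance d.

P_THRESHOLD = 1e-2       # commonly quoted ballpark threshold
TARGET_LOGICAL = 1e-12   # desired logical error rate per operation

def distance_needed(p_physical, p_threshold=P_THRESHOLD, target=TARGET_LOGICAL):
    """Smallest odd code distance whose estimated logical error rate beats target."""
    d = 3
    while (p_physical / p_threshold) ** ((d + 1) // 2) > target:
        d += 2
    return d

for p in (1e-3, 1e-4, 1e-6):   # progressively more stable physical qubits
    d = distance_needed(p)
    # physical qubits per logical qubit scale on the order of d**2
    print(f"physical error {p:.0e}: distance {d}, order of {d * d} physical qubits per logical qubit")
```

The exponential is the point: a markedly more stable qubit, if the topological approach delivers one, changes the engineering picture rather than just nudging it.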

Among the members of the Redmond, Wash., giant’s research team are principals who indicate in conversation that they are looking to the first days of the transistor to inform their approach to the qubit. – Jack Vaughan

November 29, 2016 at 2:52 am

I got the blues about Moore’s Law, baby

[Image: IBM quantum processor]

With Moore’s Law in retreat, the Qubit of Quantum vies to compete.

Fair to say this blog turned into “The Saturday Evening Review of John Markoff” a long time ago. Well, the news feeds are good – and we could do worse than to track John Markoff, who has been covering high tech for the NYTimes for lo these many years.

For your consideration: his May 5 article on Moore’s Law. He rightly points out that at its inception this was more an observation than a law, but Intel co-founder Gordon Moore’s 1965 eureka, that the number of components that could be etched onto the surface of a silicon wafer was doubling at regular intervals, has stood the test of what today passes for time.
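
As a quick sketch of the arithmetic (the 1965 baseline of 64 components and a clean two-year doubling are the usual round numbers, not figures from the article), the “law” is just compound doubling:

```python
# Compound doubling, which is the whole of the "law." Baseline and doubling
# period are commonly cited round numbers, used here only for illustration.

def components(year, start_year=1965, start_count=64, doubling_years=2):
    """Projected component count per chip assuming steady doubling."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

for year in (1965, 1975, 1995, 2016):
    print(f"{year}: ~{components(year):,.0f} components per chip")
```

Run it out fifty years and you land in the billions, which is roughly where leading-edge parts sit today; that is the train people got used to riding.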

The news hook is the Semiconductor Industry Assn.’s decision to discontinue its International Technology Roadmap for Semiconductors, based, I take it, on the closing of the Moore’s Law era. IEEE will take up where this leaves off, with a forecasting roadmap that tracks a wider swath of technology. Markoff suggests that Intel hasn’t entirely accepted the end of this line.

Possible parts of that swath, according to Markoff, are quantum computing and graphene. Chip heat has been the major culprit blocking Moore’s Law’s further run. Cost may be the next bugaboo. So far, parallelism has been the answer.

Suffice it to say, for some people at least, Moore’s Law has chugged on like a beautiful slow train of time. With the Law in effect, people at Apple, Sun, Oracle and the like could count on things being better tomorrow than they were today in terms of features and functionality. So the new future, being less predictable, is a bit more foreboding.

I had my aha moment on something like this in about 1983, when I was working on my master’s thesis on local area networks. This may not be entirely a story about Moore’s Law, but I think it has a point.

Intel was working at the time to place the better part of the Ethernet protocol onto an Ethernet controller (in total it was maybe a five-chip set). This would replace at least a couple of PC boards’ worth of circuitry, which at the time was the only way to build an Ethernet node.

I was fortunate enough to get a Mostek product engineer on the phone to talk about the effect the chip would have on the market – in those days it was pretty much required that important chips have alternative sources, in this case Mostek. The fella described to me the volume anticipated over five or so years, and the pricing of the chip over that time. I transcribed his data points onto graph paper, and, as the volume went up, the price went down. A very magical moment.
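
I no longer have that graph paper, but the shape was the classic learning curve. Here is a minimal sketch with made-up numbers (nothing below is from the Mostek conversation): unit price modeled as falling by a fixed percentage each time cumulative volume doubles.

```python
import math

# Hypothetical data points, purely illustrative -- not the Mostek engineer's figures.
volumes = [10_000, 50_000, 250_000, 1_000_000, 5_000_000]  # cumulative units shipped
prices = [90.0, 55.0, 34.0, 22.0, 14.0]                    # unit price in dollars

# Classic learning-curve model: price ~ volume ** -b.
# Fit the exponent b with a least-squares line in log-log space.
log_v = [math.log(v) for v in volumes]
log_p = [math.log(p) for p in prices]
n = len(volumes)
mean_v = sum(log_v) / n
mean_p = sum(log_p) / n
num = sum((lv - mean_v) * (lp - mean_p) for lv, lp in zip(log_v, log_p))
den = sum((lv - mean_v) ** 2 for lv in log_v)
b = -num / den

print(f"fitted exponent b ~ {b:.2f}")
print(f"price drops ~{(1 - 2 ** -b) * 100:.0f}% each time cumulative volume doubles")
```

Which is why the volume forecast and the price forecast the engineer rattled off were really two views of the same curve.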

http://www.nytimes.com/2016/05/05/technology/moores-law-running-out-of-room-tech-looks-for-a-successor.html

May 11, 2016 at 2:14 am

Quantum error correction

It is hard to say whether quantum computing has come very far since its inception in the 1990s. In recent years, Lockheed- and government-funded D-Wave efforts gave rise to the notion that commercialization was nearing, which is probably not the case. One issue is that the qubits forming the core memory elements are error prone. A recent advance in quantum error correction shows both that useful work is underway and that we still have a long way to go. Google’s interest hardly betokens looming commercialization. – Jack Vaughan
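
To make the error-proneness concrete, here is a toy sketch of my own (a classical analogue of the three-qubit bit-flip code, not the recent advance or anything Google is backing): redundancy plus majority voting turns three noisy bits into one better bit, but only once the underlying error rate is already low, which is the whole struggle.

```python
import random

# Toy classical analogue of the three-qubit bit-flip code: encode one bit as
# three copies, flip each independently with probability p, decode by majority
# vote. Real quantum codes must also handle phase errors and cannot simply
# copy a state, so treat this strictly as intuition.

def logical_error_rate(p, trials=200_000):
    """Monte Carlo estimate of the post-decoding error rate for physical rate p."""
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        if flips >= 2:      # majority vote fails when two or more copies flip
            errors += 1
    return errors / trials

for p in (0.2, 0.1, 0.01):
    est = logical_error_rate(p)
    exact = 3 * p**2 - 2 * p**3   # closed form for comparison
    print(f"p = {p}: simulated {est:.4f}, exact {exact:.4f}")
```

Below the break-even point the redundancy helps; above it, the extra bits just add more places for errors to land. Quantum thresholds work the same way, only with far less room to spare.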

Related Links
http://www.scottaaronson.com/blog/?p=1400

http://www.scottaaronson.com/blog/?p=2155

March 11, 2015 at 1:39 am

AI fever, catch it!

Galvanometer, circa 1935.

Able New York Times technology writer John Markoff (far and away the star of my RJ-11 blog) had two of the three (count ’em, three) AI articles in the Dec. 16 Times. One discusses the work of Paul Allen’s AI2 institute; the other discusses a study being launched at Stanford to look at how technology reshapes the roles of humans.

Dr. Eric Horvitz of Microsoft Research will lead a committee with Russ Altman, a Stanford professor of bioengineering and computer science. The committee will include Barbara J. Grosz, a Harvard University computer scientist; Yoav Shoham, a professor of computer science at Stanford; Tom Mitchell, the chairman of the machine learning department at Carnegie Mellon University; Alan Mackworth, a professor of computer science at the University of British Columbia; and Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley. The last, Mulligan, is the only one who, after some cursory Googling, appears ready to accept that there are potential downsides to AI’s re-emergence.

It looks like Horvitz has an initial thesis formed ahead of the committee’s work. Judging from a TED presentation (“Making friends with AI”), it is that, while he understands some people’s issues with AI, the methods of AI will come to support people’s decisions in a nurturing way. The theme is borne out further by the conclusion of an earlier study on AI’s ramifications that Horvitz organized (that advances were largely positive and progress relatively graceful). Let’s hope the filters the group implements tone down the rose-colored learning machine that enforces academics’ best hopes. – Jack Vaughan

December 18, 2014 at 2:52 am

Synaptic breakthrough?

Synaptic Semiconductor

The long history of neural networks took a new turn this week. It is another spin in the up-and-down fortunes of neural nets, which were first proposed as a computational model in the 1940s by Warren McCulloch and Walter Pitts.
Today’s NYTimes story, “A New Chip Functions Like a Brain, IBM says,” by able tech veteran John Markoff, describes the highly parallel TrueNorth processor, which IBM created and reported on in the journal Science.

According to Markoff, Google and others have turned to neural technology to improve speech recognition and photo classification. Neither of those applications has airtight, super-precise accuracy, I would guess, although speech recognition’s validity is something any person can judge for themselves.

The IBM work was sponsored by DARPA through its melodiously christened SyNAPSE program (“Systems of Neuromorphic Adaptive Plastic Scalable Electronics”), in some part to automate analysis of military drones’ surveillance images.

An earlier version of the chip had just one neurosynaptic core containing 256 neurons. With 5.4 billion transistors, the new chip is the biggest IBM has ever made, and its creators liken it to a “supercomputer the size of a postage stamp.” This chip has 4,096 such cores.
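
A quick bit of arithmetic on the numbers in the article (the transistors-per-neuron ratio in the last line is my own derived figure, not one IBM quotes):

```python
# Scaling from the earlier one-core test chip to the full TrueNorth part,
# using the figures reported above.

cores_per_chip = 4096
neurons_per_core = 256
transistors_per_chip = 5_400_000_000

neurons_per_chip = cores_per_chip * neurons_per_core
print(f"neurons per chip: {neurons_per_chip:,}")                                 # just over a million
print(f"transistors per neuron: {transistors_per_chip / neurons_per_chip:,.0f}")  # a few thousand apiece
```

Roughly a million spiking “neurons” built from a few thousand transistors apiece, which gives a feel for how far the postage-stamp supercomputer still sits from biology’s scale.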

Related
http://en.wikipedia.org/wiki/Neural_network
http://nyti.ms/1mq4qcg
http://www.nytimes.com/2014/08/08/science/new-computer-chip-is-designed-to-work-like-the-brain.html
http://www.eetimes.com/document.asp?doc_id=1323441

August 9, 2014 at 1:38 am

[Image: Moon rocket]

Although I respect John Markoff’s expertise and reporting on technical matters, the end-of-year story on things neural, “Brainlike Computers Learning from Experience,” is not too well supported by evidence. It seems to be about a new era of neuromorphic computing, and a good guess would be that the original story was longer than the one that ran. Anyone who went through the neural net mini-craze of the 1980s and 1990s would have to ask: why didn’t they change the world then, and what could be different this time? Are we talking about memristors? Are we talking about synapse chips? Carver Mead appears as the coiner of the term neuromorphic. These chips do seem to be non-von Neumann, but that covers a lot of ground. The online story does point to some useful sources.

Related
http://cbmm.mit.edu/
http://calit2.net/
http://www.stanford.edu/group/brainsinsilicon/index.html

March 11, 2014 at 12:07 am

I hate meeces to pieces

Had a friend in the biology trade who once fulminated, “I am so sick of mice.” He felt the hegemony of mice in biological research had, well, gone too far. Today, in “Mice Fall Short as Test Subjects for Deadly Illness,” evidence buttresses his view. A study now says that testing on mice misleads. Are mice the right species to serve as surrogates for the study of human disease? The notion is ingrained. At least for sepsis, burns and trauma, it seems questionable. This has ramifications for other assaults on the immune system, including cancer. The report’s Mass General authors had a long road to publication – rejected by Science and Nature, among others. Is it a surprise? They were even faulted for not showing that the gene response they described also occurred in mice. That was the point! To get funding you need experiments using mouse models. But what if mouse data is bad in this case, the human case?

February 16, 2013 at 5:23 pm
