Quantum ad impedimenta computing

RS232

A recent Wall Street Journal article doesn’t hold back on the hype, at least in its headline: quantum computing, it promises, will change the world as we know it, courtesy of Google. The story that follows is a bit more measured. The obstacles to successful quantum computing are discussed, and the murkiness of the applications is considered. There is a rundown of the activity of some players – D-Wave, IBM, and especially Google. Also noted: the NSA is building a quantum computer too. The conjecture (à la Google’s Hartmut Neven) is put forward that the nearest big opportunity for quantum computing is machine learning – presumably because probability is involved and the computational problems could eventually grow unmanageable. We will see.

The most obvious expectation is that the NSA is anticipating a point where quantum computers could break important codes, and thus disrupt the present workings of Internet commerce. Is that as big a threat as Hitler? I ask because parts of the quest for quantum computing bring to mind the atomic bomb program of the 20th century.

When you look at the obstacles to successful quantum computing at a reasonable scale, you are looking at a problem of physics, and more. The obstacles include error correction, the coherence time (lifetime) of the qubit, low-temperature wiring, verification of quantum algorithms, and more.
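To put one of those items in more concrete terms, consider coherence time. A qubit only holds its quantum state for so long, so every gate you run eats into a fixed budget. The sketch below is a back-of-the-envelope illustration – the T2 and gate-time numbers are assumptions picked for round figures, not specs from any real machine:

```python
import math

# Illustrative, assumed numbers -- not taken from any particular device.
T2_MICROSECONDS = 100.0        # assumed coherence (dephasing) time of one qubit
GATE_TIME_MICROSECONDS = 0.1   # assumed duration of a single gate operation

def retention_probability(num_gates: int) -> float:
    """Rough probability a qubit still holds its state after num_gates,
    modeling decoherence as simple exponential decay, exp(-t / T2)."""
    elapsed = num_gates * GATE_TIME_MICROSECONDS
    return math.exp(-elapsed / T2_MICROSECONDS)

for gates in (10, 100, 1_000, 10_000):
    print(f"{gates:>6} gates -> ~{retention_probability(gates):.3f} chance the state survives")
```

By a few thousand gates the odds of an uncorrected qubit surviving are essentially nil, which is why error correction sits at the top of every obstacle list.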

It brings to mind the massive obstacles that faced the teams working in World War II to create the atomic bomb.

In search of something that worked, three methods were used for uranium enrichment: electromagnetic, gaseous diffusion and thermal diffusion. Parallel efforts produced plutonium by irradiating uranium in graphite reactors, transmuting it and then chemically separating it out (God bless you). All of this is known collectively as the Manhattan Project.

In the end they built two types of bombs. The gun-type detonator proved impractical for plutonium, which required an implosion design; a simpler gun using the very rare uranium-235 (painstakingly separated from uranium-238) did the job in the final analysis. A couple of years later the emphasis was all about hydrogen bombs.

This is not to mention the German atomic bomb effort, which, as far as I know, looked more like a reactor program than a bomb program. Or the British effort, which predated the American one. Efforts are underway today by lesser powers, assuredly using methods honed by the great powers over time. If you are a lesser power: I cadged all this from the guy sitting next to me in class, so please address all correspondence to him.

It is hard to think — today anyway — that development of the quantum computer of the 21st century will benefit from the same type of impetus the Manhattan Project had (a world war). But that could change. Time will tell.

In the meantime, judging from the picture above, I think we are looking at RS-232 connectors in the innards of Google’s machine – why not RJ-11? – Juan Ignacio Vaughan

Related

https://www.wsj.com/articles/how-googles-quantum-computer-could-change-the-world-1508158847

Thank you, Wikipedia https://en.wikipedia.org/wiki/Manhattan_Project


October 19, 2017 at 1:38 am

On the same wave?

Project Brainwave from Microsoft was discussed at this summer’s Hot Chips conference. It’s claimed to be a major leap forward in both performance and flexibility for cloud-based serving of deep learning models. It’s about real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency.

 

Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users. The Project Brainwave system is built with three main layers (a rough sketch of what the latency requirement looks like from the client side follows the list):

  1. A high-performance, distributed system architecture;

  2. A hardware DNN engine synthesized onto FPGAs; and

  3. A compiler and runtime for low-friction deployment of trained models.
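To be clear about what the “real time” claim cashes out to, it is mostly a latency budget per request. Here is a generic client-side latency check – the endpoint URL and payload are hypothetical placeholders, not the actual Brainwave serving API:

```python
import json
import time
import urllib.request

# Hypothetical serving endpoint and payload, for illustration only --
# this is not Project Brainwave's actual API.
ENDPOINT = "http://localhost:8080/score"
PAYLOAD = {"inputs": [[0.1, 0.2, 0.3, 0.4]]}

def time_one_request() -> float:
    """Send one scoring request and return round-trip latency in milliseconds."""
    body = json.dumps(PAYLOAD).encode("utf-8")
    req = urllib.request.Request(ENDPOINT, data=body,
                                 headers={"Content-Type": "application/json"})
    start = time.perf_counter()
    with urllib.request.urlopen(req, timeout=1.0) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

latencies = sorted(time_one_request() for _ in range(100))
print(f"p50 = {latencies[49]:.2f} ms, p99 = {latencies[98]:.2f} ms")
```

The point of synthesizing the DNN engine onto FPGAs, as the layer list suggests, is to keep the tail of that latency distribution tight even under live load.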

https://www.microsoft.com/en-us/research/blog/microsoft-unveils-project-brainwave/

September 20, 2017 at 3:41 pm

Synchronous data parallel methodology said to make GPUs better learners

By Jack Vaughan

GPUs and deep learning. A marriage made in silicon heaven, right? The GPU has a memory-bandwidth advantage over its CPU sibling when it comes to the neural networks underlying deep learning AI. But nothing is easy, as Jethro Tull said. You may have large data sets or you may have large models – but maybe not both. Meanwhile, adding servers without end is counterproductive: faster GPUs can actually slow things down further. It’s conundrum time.

IBM Research sees a path to improvement, specifically in reducing the time it takes to train large models on large data sets. Their distributed deep learning software approach performs training synchronously, with low communication overhead. The boffins write:

“..as GPUs get much faster, they learn much faster, and they have to share their learning with all of the other GPUs at a rate that isn’t possible with conventional software. This puts stress on the system network and is a tough technical problem. Basically, smarter and faster learners (the GPUs) need a better means of communicating, or they get out of sync and spend the majority of time waiting for each other’s results. So, you get no speedup–and potentially even degraded performance–from using more, faster-learning GPUs.”

The secret sauce: synchronous data parallel methodology.
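Stripped of IBM’s actual software, the core recipe is easy to sketch: each worker computes a gradient on its own shard of the mini-batch, an all-reduce averages those gradients, and every replica applies the identical update, so nobody drifts out of sync. Below is a toy single-process simulation of that loop, assuming a simple linear model (purely illustrative; a real system would run the workers on separate GPUs and do the all-reduce over the network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data, split into shards -- one per simulated "worker".
NUM_WORKERS, SAMPLES_PER_WORKER, DIM = 4, 64, 8
true_w = rng.normal(size=DIM)
shards = []
for _ in range(NUM_WORKERS):
    X = rng.normal(size=(SAMPLES_PER_WORKER, DIM))
    y = X @ true_w + 0.01 * rng.normal(size=SAMPLES_PER_WORKER)
    shards.append((X, y))

w = np.zeros(DIM)        # every replica starts from identical weights
LEARNING_RATE = 0.05

def local_gradient(w, X, y):
    """Mean-squared-error gradient computed on a single worker's shard."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

for step in range(200):
    # Each worker computes a gradient on its own shard of the batch...
    grads = [local_gradient(w, X, y) for X, y in shards]
    # ...an all-reduce averages them so every worker sees the same value...
    avg_grad = np.mean(grads, axis=0)
    # ...and every replica applies the identical update, staying in lockstep.
    w -= LEARNING_RATE * avg_grad

print("distance from true weights:", np.linalg.norm(w - true_w))
```

The trade-off the IBM quote points at lives in the all-reduce step: the faster each worker produces its gradient, the more often that communication has to complete, and if it cannot keep up, the workers sit idle waiting for each other.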

What could go wrong: It is key that the community continue to extend demonstrations of large-scale distributed deep learning to other popular neural network types, in particular recurrent neural networks. The whole training process has to be made resilient and elastic, since it is very likely that some devices will malfunction when the number of learners increases. Automation and usability issues have to be addressed to enable more turnkey operation, especially in a cloud…


Related
https://www.ibm.com/blogs/research/2017/08/distributed-deep-learning/
https://www.youtube.com/watch?v=GDPDYltjXQM
https://arxiv.org/pdf/1708.02188.pdf
http://searchbusinessanalytics.techtarget.com/news/450424573/IBM-cracks-the-code-for-speeding-up-its-deep-learning-platform

September 2, 2017 at 3:48 am

Don’t throw away your tape backup again

IBM (NYSE: IBM) Research scientists have achieved a new world record in tape storage – their fifth since 2006. The new record of 201 Gb/in² (gigabits per square inch) in areal density was achieved on a prototype sputtered magnetic tape developed by Sony Storage Media Solutions. The scientists presented the achievement today at the 28th Magnetic Recording Conference (TMRC 2017).
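For a rough sense of what 201 Gb/in² buys, here is a back-of-the-envelope conversion to cartridge capacity. The tape length and usable width below are loose assumptions of mine, not figures from the announcement, so read the result as an order-of-magnitude estimate at best:

```python
# Back-of-the-envelope tape capacity estimate -- the length and usable-width
# figures are assumptions for illustration, not from the IBM/Sony announcement.
AREAL_DENSITY_GBIT_PER_SQ_IN = 201.0
TAPE_LENGTH_METERS = 1_000.0     # assumed length of tape in one cartridge
USABLE_WIDTH_INCHES = 0.5        # half-inch tape, assuming the full width is usable

INCHES_PER_METER = 39.37
area_sq_in = TAPE_LENGTH_METERS * INCHES_PER_METER * USABLE_WIDTH_INCHES
capacity_terabytes = AREAL_DENSITY_GBIT_PER_SQ_IN * area_sq_in / 8 / 1_000

print(f"~{capacity_terabytes:.0f} TB per cartridge under these assumptions")
```

Knock off a healthy fraction for servo tracks and formatting overhead and you still land in the hundreds of terabytes per cartridge – the neighborhood this sort of density is meant to enable.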

August 14, 2017 at 5:20 pm

Topology in play as Microsoft Research ups quantum computing ante

Microsoft has garnered two top boffins as it ‘doubles down’ on a quantum computing bet that is unique in a field already brimming with uniqueness. At the heart of the Microsoft effort is an approach known as topological quantum computing – a different path than the ones others are taking.

Among the topological qubit researchers now joining the company are Charles Marcus of the Niels Bohr Institute at the University of Copenhagen and Leo Kouwenhoven, a distinguished professor at Delft University of Technology. They have been deep in the innards of topology, but want to be mothers of actual invention.

The news was covered in the New York Times by the redoubtable John Markoff in “Microsoft spends big to build quantum computer out of science fiction.” That is a title made for Amazing Techno Tales!

A topological quantum computer is one that does not use the venerable trapped quantum particle approach. Instead the topological type (according to Wikipedia):

“Employs two-dimensional quasiparticles called anyons, whose world lines pass around one another to form braids in a three-dimensional spacetime (i.e., one temporal plus two spatial dimensions). These braids form the logic gates that make up the computer.”

The Wikipedia citation goes on to suggest that the topological approach is more stable and, one might guess, in need of less error correction. (Ed. note: Hope we don’t have to make a correction!)
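The braiding idea has a concrete algebraic core: the two elementary crossings of three strands obey the braid relation σ1·σ2·σ1 = σ2·σ1·σ2, and a computation amounts to composing long sequences of such crossings. Here is a toy numeric check of that relation using one standard matrix representation (the reduced Burau representation) – purely illustrative, and not a model of Microsoft’s anyons:

```python
import numpy as np

# Reduced Burau representation of the three-strand braid group B_3 at t = 2.
# A toy check of the braid relation only -- not a model of Microsoft's anyons.
t = 2.0
sigma1 = np.array([[-t, 1.0],
                   [0.0, 1.0]])
sigma2 = np.array([[1.0, 0.0],
                   [t, -t]])

lhs = sigma1 @ sigma2 @ sigma1   # cross strands 1-2, then 2-3, then 1-2
rhs = sigma2 @ sigma1 @ sigma2   # the same tangle built in the opposite order

print(lhs)
print(rhs)
print("braid relation holds:", np.allclose(lhs, rhs))
```

In a topological machine the crossings would be world lines of anyons rather than matrices, with the information stored in how the strands are braided rather than in any single, fragile particle – which is where the hoped-for stability comes from.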

Among the members of the Redmond, Wash., giant’s research team are principals who indicate in conversation that they are looking to the first days of the transistor to inform their approach to the qubit. – Jack Vaughan

November 29, 2016 at 2:52 am

I got the blues about Moore’s Law, baby


With Moore’s Law in retreat, the qubit of quantum vies to compete.

Fair to say this blog turned into “The Saturday Evening Review of John Markoff” a long time ago. Well, the news feeds are good – and we could do worse than to track John Markoff, who has been covering high tech at the New York Times for lo these many years.

For your consideration: his May 5 article on Moore’s Law. He rightly points out that at its inception this was more an observation than a law, but Intel’s Gordon Moore’s 1965 eureka – that the number of components that could be etched onto the surface of a silicon wafer was doubling at regular intervals – has stood the test of what today passes for time.
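To see what regular doubling implies, a back-of-the-envelope projection is enough. The starting point of roughly 2,300 transistors for the Intel 4004 in 1971 and a two-year doubling period are commonly cited approximations, used here as assumptions rather than anything from Markoff’s article:

```python
# Back-of-the-envelope Moore's Law projection.
# The starting count (~2,300 transistors, Intel 4004, 1971) and the two-year
# doubling period are commonly cited approximations, used here as assumptions.
START_YEAR, START_TRANSISTORS = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2.0

def projected_transistors(year: int) -> float:
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2.0 ** doublings

for year in (1971, 1985, 2000, 2016):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors per chip")
```

That crude projection lands in the right neighborhood for real high-end chips across those decades, which is exactly why the “law” worked as a planning tool for so long.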

The news hook is the Semiconductor Industry Association’s decision to discontinue its Technology Roadmap for Semiconductors, based, I take it, on the closing of the Moore’s Law era. The IEEE will take up where this leaves off, with a forecasting roadmap that tracks a wider swath of technology. Markoff suggests that Intel hasn’t entirely accepted the end of this line.

Possible parts of that swath, according to Markoff, are quantum computing and graphene. The heat of the chips has been the major culprit blocking Moore’s Law’s further run. Cost may be the next bugaboo. So far, parallelism has been the answer.

Suffice it to say, for some people at least, Moore’s Law has chugged on like a beautiful slow train of time. With the Law in effect, people at Apple, Sun, Oracle and the like could count on things being better tomorrow than they were today in terms of features and functionality. So the new future, being less predictable, is a bit more foreboding.

I had my a-ha moment on something like this in about 1983, when I was working on my master’s thesis on local area networks. This may not entirely be a story about Moore’s Law, but I think it has a point.

Intel was working at the time to place the better part of the Ethernet protocol onto an Ethernet controller (in total it was maybe a five-chip set). This would replace at least a couple of PC boards’ worth of circuitry, which was the only way at the time to build an Ethernet node.

I was fortunate enough to get a Mostek product engineer on the phone to talk about the effect the chip would have on the market – in those days it was pretty much required that important chips have alternative sources, in this case Mostek. The fella described the volume anticipated over five or so years, and the pricing of the chip over that time. I transcribed his data points onto graph paper, and as the volume went up, the price went down. A very magical moment.

http://www.nytimes.com/2016/05/05/technology/moores-law-running-out-of-room-tech-looks-for-a-successor.html

 

 

 

 

May 11, 2016 at 2:14 am

Quantum error correction

It is hard to say whether quantum computing has come very far since the field took off in the 1990s. In recent years, Lockheed- and government-funded D-Wave efforts gave rise to the notion that commercialization was nearing, which is probably not the case. One issue is that the qubits that form the core memory elements are error prone. A recent advance in quantum error correction shows both that useful work is underway and that we still have a long way to go. Google’s interest hardly betokens looming commercialization. – Jack Vaughan
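For a sense of what error correction buys and why it is hard, here is a toy Monte Carlo of the simplest possible scheme, a three-copy repetition code decoded by majority vote. It is a deliberately stripped-down, classical-style illustration; real quantum codes (surface codes and the like) are far more involved:

```python
import random

random.seed(42)

def logical_error_rate(p: float, trials: int = 200_000) -> float:
    """Fraction of trials where a 3-copy repetition code, decoded by majority
    vote, fails -- i.e., two or more of the three copies flip."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        if flips >= 2:
            failures += 1
    return failures / trials

for p in (0.01, 0.05, 0.10):
    observed = logical_error_rate(p)
    predicted = 3 * p**2 - 2 * p**3   # exact majority-vote failure probability
    print(f"physical p={p:.2f}  logical ~{observed:.4f}  (predicted {predicted:.4f})")
```

Below a break-even physical error rate, the encoded “logical” bit fails far less often than a raw one – and quantum codes chase the same payoff with much more overhead, which is why a demonstrated advance in quantum error correction counts as genuine progress even with commercialization nowhere in sight.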

Related Links
http://www.scottaaronson.com/blog/?p=1400

http://www.scottaaronson.com/blog/?p=2155

March 11, 2015 at 1:39 am
