AI resurgent: Boosting chip innovation

It has been interesting to see a re-emergence of interest in new chip architectures. Just when you think it has all been done and there is nothing new. Bam! For sure.

The driver these days is AI, and more particularly the machine learning side of AI. GPUs jumped out of the gamer console and into the Google and Facebook data centers. But there were more hardware tricks to come. The effort is to get around the tableau I have repeatedly cited here: the data scientist sitting there twiddling thumbs while the neural machine slowly does its learning.

When I saw that Google had created a custom ASIC for TensorFlow processing, I was taken aback. If new chips are what it takes to succeed in this racket, it will be a rich man’s game.

Turns out a slew of startups are on the case. This article by Cade Metz suggests that at least 45 startups are working on chips for AI-type applications such as speech recognition and self-driving cars. It seems the Nvidia GPU that has gotten us to where we are may not be enough going forward. Coprocessors for coprocessors, chips that shuttle data about in I/O roles for GPUs, may be the next frontier.

Metz names a number of AI chip startups: Cerebras, Graphcore, Wave Computing, Mythic, Nervana (now part of Intel). – Smiling Jack Vaughan


January 21, 2018 at 6:26 pm

Quantum fruition

It was a phenomenon discovered 100 years ago. Now, as technology based on quantum mechanics emerges from labs, Accenture and others are setting their sights on quantum machine learning, writes Jack Vaughan. As many years of quanta shuffling bear tutti-frutti fruit.

Click to read: Quantum machine learning may come knocking on analytics’ door – SearchDataManagement

December 23, 2017 at 1:59 am

Quantum ad impedimenta computing

[Image: RS232]

A recent Wall Street Journal article doesn’t hold back on the hype, at least in its headline. Quantum computing, it promises, will change the world as we know it. Courtesy of Google. The story that follows is a bit more measured. The obstacles to successful quantum computing are discussed, and the murkiness of the applications is considered. There is a discussion of the activity of some players – D-Wave, IBM and especially Google. Also noted: the NSA is building a quantum computer too. The conjecture (a la Google’s Hartmut Neven) is put forward that the nearest big opportunity for quantum computing relates to machine learning, presumably because probability is involved and the computational problems could eventually grow unmanageable. We will see.

The most obvious expectation is that the NSA is anticipating a point where quantum computers could break important codes, and thus disrupt the present state of Internet commerce. Is that as big a threat as Hitler? I ask because parts of the quest for quantum computing bring to mind the atomic bomb program of the 20th century.

When you look at the obstacles to successful quantum computing at a reasonable scale, you are looking at a problem of physics, and more. The obstacles include error correction, the coherence time (lifetime) of the qubit, low-temperature wiring, verification of quantum algorithms and more. (A toy calculation below gives a feel for how quickly a short coherence time bites.)
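To make the coherence-time obstacle concrete, here is a back-of-the-envelope sketch. The numbers are my own illustrative assumptions, not figures from the article, and the model (a simple exponential decay of coherence relative to a lifetime T2) is deliberately crude:

```python
import math

# Highly simplified model: a qubit's chance of surviving without decohering
# decays exponentially with elapsed time relative to its coherence time T2.
# Both numbers below are assumptions chosen only for illustration.

T2 = 50e-6        # assumed coherence time: 50 microseconds
t_gate = 100e-9   # assumed time per gate operation: 100 nanoseconds

for depth in (10, 100, 1_000, 10_000):
    p_ok = math.exp(-depth * t_gate / T2)
    print(f"{depth:>6} gates -> ~{p_ok:.1%} chance the qubit is still coherent")
```

Under these made-up numbers, usefulness falls off fast as circuits get deeper, which is why error correction sits at the top of everyone’s obstacle list.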

It brings to mind the massive obstacles that faced the teams working in World War II to create the atomic bomb.

In search of something that worked, three methods were used for uranium enrichment: electromagnetic, gaseous diffusion and thermal diffusion. Parallel efforts produced plutonium from uranium by irradiation and transmutation in graphite reactors, followed by chemical separation (God bless you). This is collectively known as the Manhattan Project.

In the end they built two types of bombs. A gun-type design proved impractical for plutonium, which required an implosion design instead; the simpler gun used very rare uranium-235 (painstakingly separated from uranium-238). Within a few years the emphasis had shifted to hydrogen bombs.

This is not to mention the German atomic bomb effort, which, as far as I know, looked more like a reactor than a bomb. Or the British effort, which predated the American effort. Efforts are underway today by lesser powers, assuredly using methods honed by the great powers over time. (I cadged that last bit from the guy sitting next to me in class, so please address all correspondence to him.)

It is hard to think — today anyway — that development of the quantum computer of the 21st century will benefit from the same type of impetus the Manhattan Project did (World War). But that could change. Time will tell.

In the meantime, looking at the picture above, I think we are looking at RS-232 connectors in the innards of Google’s machine – why not RJ-11?? – Juan Ignacio Vaughan

Related

https://www.wsj.com/articles/how-googles-quantum-computer-could-change-the-world-1508158847

Thank you Wikipedia https://en.wikipedia.org/wiki/Manhattan_Project

October 19, 2017 at 1:38 am

On the same wave?

Project Brainwave from Microsoft was discussed at this summer’s Hot Chips conference. It’s claimed to be a major leap forward in both performance and flexibility for cloud-based serving of deep learning models. It’s about real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency.

Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users. The Project Brainwave system is built with three main layers:

  1. A high-performance, distributed system architecture;

  2. A hardware DNN engine synthesized onto FPGAs; and

  3. A compiler and runtime for low-friction deployment of trained models.
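The “real-time” claim is essentially about serving at batch size one. Here is a toy, purely illustrative Python sketch (none of this is Brainwave code; the request rate, inference time and batch size are made up) of why waiting to fill a batch costs latency for interactive workloads:

```python
import random

# Toy latency comparison: serve each request on arrival vs. wait to fill a batch.
random.seed(0)
INFER_MS = 2.0        # assumed time to run the model once
BATCH_SIZE = 8        # how many requests a batching server waits to collect

# Simulate 32 requests arriving roughly every 5 ms on average.
arrivals, t = [], 0.0
for _ in range(32):
    t += random.expovariate(1 / 5.0)
    arrivals.append(t)

# Real-time style: each request is processed immediately on arrival.
realtime = [INFER_MS] * len(arrivals)

# Batched style: a request waits until the last member of its batch arrives.
batched = []
for i, arrived in enumerate(arrivals):
    last_in_batch = min((i // BATCH_SIZE + 1) * BATCH_SIZE - 1, len(arrivals) - 1)
    batched.append((arrivals[last_in_batch] - arrived) + INFER_MS)

print(f"mean latency, real-time serving: {sum(realtime) / len(realtime):.1f} ms")
print(f"mean latency, batched serving:   {sum(batched) / len(batched):.1f} ms")
```

The point of the FPGA-based DNN engine is to make the batch-of-one path fast enough that you do not need batching to keep the hardware busy.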

https://www.microsoft.com/en-us/research/blog/microsoft-unveils-project-brainwave/

September 20, 2017 at 3:41 pm

Synchronous data parallel methodology said to make GPUs better learners

By Jack Vaughan

GPUs and deep learning. A marriage made in silicon heaven. Right? The GPU has a memory bandwidth advantage over its CPU sibling when it comes to the neural networks underlying deep learning AI. But, well, nothing is easy, as Jethro Tull said. You may have large data sets or you may have large models. But maybe not both. Meanwhile, simply adding servers can be counterproductive, and faster GPUs can slow things down further. It’s conundrum time.

IBM Research sees a path for improvement, specifically in terms of reducing the time to train large models with large data sets. Its distributed deep learning software approach performs the training synchronously, with low communication overhead. The boffins write:

“..as GPUs get much faster, they learn much faster, and they have to share their learning with all of the other GPUs at a rate that isn’t possible with conventional software. This puts stress on the system network and is a tough technical problem. Basically, smarter and faster learners (the GPUs) need a better means of communicating, or they get out of sync and spend the majority of time waiting for each other’s results. So, you get no speedup–and potentially even degraded performance–from using more, faster-learning GPUs.”

The secret sauce: synchronous data parallel methodology.
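For the flavor of what “synchronous” buys you, here is a toy sketch in plain NumPy (my own illustration, not IBM’s DDL code): each simulated worker computes a gradient on its shard of the data, the gradients are averaged as an all-reduce would do, and every replica applies the identical update, so no learner ever runs ahead of the others.

```python
import numpy as np

# Toy synchronous data-parallel training on a linear regression problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=512)

N_WORKERS = 4
LR = 0.1
w = np.zeros(8)                      # every worker holds an identical copy

shards_X = np.array_split(X, N_WORKERS)
shards_y = np.array_split(y, N_WORKERS)

for step in range(200):
    # Each "GPU" computes a local gradient of mean squared error on its shard.
    grads = []
    for Xs, ys in zip(shards_X, shards_y):
        err = Xs @ w - ys
        grads.append(2.0 * Xs.T @ err / len(ys))
    # Synchronous all-reduce: average the gradients, then apply one shared update.
    g = np.mean(grads, axis=0)
    w -= LR * g

print("recovered weights close to truth:", np.allclose(w, true_w, atol=1e-2))
```

The hard engineering problem IBM describes is doing that averaging step fast enough, across many fast GPUs, that the synchronization does not become the bottleneck.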

What could go wrong: It is key that the community continue to extend demonstrations of large-scale distributed deep learning to other popular neural network types, in particular recurrent neural networks. The whole training process has to be made resilient and elastic, since it is very likely that some devices will malfunction as the number of learners increases. Automation and usability issues have to be addressed to enable more turnkey operation, especially in the cloud…

[Image: topology]

Related
https://www.ibm.com/blogs/research/2017/08/distributed-deep-learning/
https://www.youtube.com/watch?v=GDPDYltjXQM
https://arxiv.org/pdf/1708.02188.pdf
http://searchbusinessanalytics.techtarget.com/news/450424573/IBM-cracks-the-code-for-speeding-up-its-deep-learning-platform

September 2, 2017 at 3:48 am

Don’t throw away your tape backup again

IBM (NYSE: IBM) Research scientists have achieved a new world record in tape storage – their fifth since 2006. The new record of 201 Gb/in² (gigabits per square inch) in areal density was achieved on a prototype sputtered magnetic tape developed by Sony Storage Media Solutions. The scientists presented the achievement today at the 28th Magnetic Recording Conference (TMRC 2017) here.
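For a rough sense of what that areal density means, here is a back-of-the-envelope calculation. The tape length and usable width are my assumptions, not figures from the announcement, and real cartridge capacities are lower once format overhead is accounted for, so treat the result as an order-of-magnitude figure only:

```python
# Rough capacity estimate from areal density alone (illustrative assumptions).
areal_density_gbit_per_in2 = 201      # from the IBM/Sony announcement
tape_length_m = 1_000                 # assumed length of tape in a cartridge
usable_width_in = 0.5                 # assumed usable width (half-inch tape)

length_in = tape_length_m * 39.37     # meters to inches
raw_gbit = areal_density_gbit_per_in2 * length_in * usable_width_in
raw_terabytes = raw_gbit / 8 / 1_000  # gigabits -> gigabytes -> terabytes

print(f"~{raw_terabytes:.0f} TB of raw capacity under these assumptions")
```

Hundreds of terabytes in a palm-sized cartridge is the general neighborhood, which is why tape keeps refusing to die.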

August 14, 2017 at 5:20 pm

Topology in play as Microsoft Research ups quantum computing ante

Microsoft has garnered two top boffins as it ‘doubles down’ on a quantum computing bet that is unique in a field chock-full of uniqueness. At the heart of the Microsoft effort is an approach known as topological quantum computing – a different path from the one others are taking.

Among the topological qubit researchers now joining the company are Charles Marcus of the Niels Bohr Institute at the University of Copenhagen and Leo Kouwenhoven, a distinguished professor at Delft University of Technology. They have been deep in the innards of topology, but want to be mothers of actual invention.

The news was covered in the New York Times by the redoubtable John Markoff in “Microsoft spends big to build quantum computer out of science fiction.” That is a title made for Amazing Techno Tales!

A topological quantum computer is one that does not use the venerable trapped-quantum-particle approach. Instead, the topological type (according to Wikipedia):

“Employs two-dimensional quasiparticles called anyons, whose world lines pass around one another to form braids in a three-dimensional spacetime (i.e., one temporal plus two spatial dimensions). These braids form the logic gates that make up the computer.”

The Wikipedia citation goes on to suggest that the topological approach is more stable and, one might guess, in need of less error correction. (Ed. note: Hope we don’t have to make a correction!)

Among the members of the Redmond, Wash., giant’s research team are principals who, in conversation, indicate they are looking to the first days of the transistor to inform their approach to the qubit. – Jack Vaughan

November 29, 2016 at 2:52 am
