Google Introduces Next Generation TPU

At the Google I/O 2017 conference, Google announced the second generation of its Tensor Processing Units (TPUs). Google has long taken its own approach to machine learning and AI training, and TPUs have played an important role in accelerating its machine learning workloads.
 
 
Image Source: blog.google
 
The company has named the newly launched devices Cloud TPUs, and they are now available on Google Cloud via Google Compute Engine. Each Cloud TPU delivers up to 180 teraflops of floating-point performance along with high-speed networking.
 
In its official blog post, Google states:
 
“We’re bringing our new TPUs to Google Compute Engine as Cloud TPUs, where you can connect them to virtual machines of all shapes and sizes and mix and match them with other types of hardware, including Skylake CPUs and NVIDIA GPUs. You can program these TPUs with TensorFlow, the most popular open-source machine learning framework on GitHub, and we’re introducing high-level APIs, which will make it easier to train machine learning models on CPUs, GPUs or Cloud TPUs with only minimal code changes.”
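The promise of "only minimal code changes" is that the model and training code stay the same while only the target device changes. The sketch below is purely illustrative, in plain Python with hypothetical names (`train_step`, `train`, `device`), not the actual TensorFlow API, but it shows the shape of that idea: swapping CPU for GPU or Cloud TPU is a one-argument change, not a rewrite.

```python
# Illustrative sketch only, with hypothetical names; not the real TensorFlow
# API. The point: the model code is device-agnostic, and only the 'device'
# argument changes when moving between CPU, GPU, and Cloud TPU.

def train_step(weights, gradient, lr=0.01):
    """Device-agnostic update rule; the same code runs everywhere."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def train(weights, gradients, device="cpu"):
    """Run the same training routine on any supported accelerator."""
    if device not in ("cpu", "gpu", "tpu"):
        raise ValueError(f"unsupported device: {device}")
    # In a real framework, 'device' would control where the ops execute;
    # the training logic above is untouched.
    for g in gradients:
        weights = train_step(weights, g)
    return weights

# Switching accelerators is a one-argument change:
w_cpu = train([1.0, 2.0], [[0.1, 0.1]], device="cpu")
w_tpu = train([1.0, 2.0], [[0.1, 0.1]], device="tpu")
```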
 
The company says it has designed these TPUs to work together for even greater performance. By combining 64 TPUs, Google has built a machine learning supercomputer it calls a TPU Pod; a single TPU Pod delivers up to 11.5 petaflops.
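The pod-level figure follows directly from the per-device number: 64 devices at 180 teraflops each gives 11,520 teraflops, or roughly 11.5 petaflops. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the TPU Pod figure quoted above.
TFLOPS_PER_CLOUD_TPU = 180   # peak floating-point performance per device
DEVICES_PER_POD = 64         # Cloud TPUs combined into one TPU Pod

pod_tflops = TFLOPS_PER_CLOUD_TPU * DEVICES_PER_POD
pod_pflops = pod_tflops / 1000  # 1 petaflop = 1,000 teraflops

print(f"{pod_tflops} teraflops = {pod_pflops:.1f} petaflops")
# prints "11520 teraflops = 11.5 petaflops"
```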
 
Well, that’s a lot of computation. It seems Google really wants its machines to learn faster.