The new machine makes it possible to train machine learning models in under 30 seconds, with unprecedented efficiency and speed
Machine learning algorithms play a key role in fields such as engineering and the transport of goods and services. These algorithms build on past experience and improve themselves, mimicking the way human beings learn. Google recently announced on its site the creation of a new supercomputer with unrivalled performance. The machine is built from TPU (tensor processing unit) chips, AI accelerators designed in-house by Google. The first tests, carried out on machine learning models, produced remarkable results.
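To make that idea concrete, here is a minimal sketch, not Google's code, of what "building on past experience" means in practice: a model repeatedly adjusts a parameter to shrink its error on known examples. The toy data and learning rate are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy data ("past experience"): examples of the rule y = 3x.
xs = jnp.array([1.0, 2.0, 3.0, 4.0])
ys = 3.0 * xs

def loss(w):
    # Mean squared error between the model's predictions and the known answers.
    return jnp.mean((w * xs - ys) ** 2)

grad_loss = jax.grad(loss)

w = 0.0  # start with no knowledge
for step in range(100):
    w -= 0.01 * grad_loss(w)  # each step corrects the model using its past errors

print(w)  # approaches 3.0 as training proceeds
```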
The most powerful configuration managed to train a model in just 30 seconds. For comparison, in 2015 the same process took more than three weeks. In just five years, performance has therefore improved by roughly five orders of magnitude, making the process about 100,000 times faster. The supercomputer itself is far from small: it is four times the size of its predecessor and occupies an entire room, which has to be properly cooled. It contains 4,096 TPU chips and hundreds of CPU host machines, connected by ultra-fast interconnects built specifically for this purpose.
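A quick back-of-the-envelope check of the speed-up figures quoted above, using simple arithmetic rather than any new data:

```python
# Three weeks in 2015 versus roughly 30 seconds today.
weeks_2015 = 3
seconds_2015 = weeks_2015 * 7 * 24 * 3600  # ≈ 1,814,400 s
seconds_2020 = 30

speedup = seconds_2015 / seconds_2020      # ≈ 60,480x
print(f"speed-up ≈ {speedup:,.0f}x")       # close to the ~100,000x, five-orders-of-magnitude figure cited
```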
The total computing power of the supercomputer exceeds 430 petaFLOPS. The previous records belonged to IBM’s Summit (200 petaFLOPS) and to Fugaku (415 petaFLOPS on the HPL benchmark), so 430 petaFLOPS is a record for a single supercomputer. The fourth-generation TPU chips have proven to be 2.7 times more efficient than the previous generation, thanks in part to greater memory bandwidth and improvements in the interconnect technology. As the researchers put it: “Google’s MLPerf Training v0.7 submissions demonstrate our commitment to advancing machine learning research and engineering at scale and delivering those advances to users through open-source software, Google’s products, and Google Cloud.”
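The machine reaches that scale because training work is spread across all 4,096 chips at once. The sketch below, written with Google's open-source JAX library (one of the open-source tools mentioned in the quote), shows the general pattern of mapping one shard of data to each available accelerator; the shapes and the summing function are illustrative assumptions, not Google's actual training code. On hardware without TPUs it simply runs on whatever devices JAX can see.

```python
import jax
import jax.numpy as jnp

n = jax.device_count()          # 4,096 on the pod described above; 1 on a laptop CPU
print(f"{n} accelerator device(s) visible")

# One data shard per device; each device processes its piece in parallel.
shards = jnp.arange(n * 4.0).reshape(n, 4)
per_device_sum = jax.pmap(lambda x: jnp.sum(x))(shards)
print(per_device_sum)           # one partial result per chip
```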