Welcome to the kingdom of AI
2022 is the year of AI. All the biggest tech conferences in the world are putting AI at their heart. The topic led the way at SXSW in March this year, where sessions questioned the collaboration between AI and humans, and at Google I/O, where Google presented LaMDA 2, an AI system capable of generating unique, natural conversations. Nvidia CEO Jensen Huang states that AI will change all industries. And Nvidia is indeed positioning itself as the reference point of this AI revolution, from hardware and software to platforms and applications.
AIs are refining hundreds of terabytes of data to deliver intelligence to their users. And thanks to their learning capacity, they are now capable of things that were unimaginable just a few years ago. Think physics and quantum physics, building 3D mock-ups from 2D images, or even lifelike movement simulations. AIs have improved so much that they can now assist humans live, for instance by instantly translating what speakers are saying in a webinar, as Nvidia's Maxine conferencing app does.
But offering such high-quality next-generation apps requires powerful hardware and advanced system software. And at the 2022 GTC conference, Nvidia showed incredible progress. From H100 GPUs capable of training AI models in just a few days (compared to several weeks before) to the Grace CPU Superchip, which can deliver one terabyte per second of memory bandwidth, Nvidia is also on the way to releasing the most advanced switch ever built: Spectrum-4. This opens the path to a new class of supercomputers capable of handling omniverse digital twins and edge data centres.
A super framework for a next-level world
Hardware infrastructure has been given, in the words of Jensen Huang, a million-X speedup, blowing a wind of revolution across all industries. With the hardware in place, Nvidia is getting ready to unleash the full power of AI. Thanks to this progress, the virtual world is set to become the lab of the physical world with the birth of the omniverse.
But can we really speak about a birth? Nvidia defines the omniverse as “a scalable, multi-GPU real-time reference development platform for 3D simulation and design collaboration”, which isn’t unlike the announcements on digital twins made by Microsoft during last year’s keynote. But Nvidia has taken this a step further: it can now run simulations at very large scale and very high precision, for example of the Earth itself with FourCastNet. This GPU-accelerated model can run thousands of weather simulations, based on global weather patterns, on a virtual version of the Earth, and predict extreme weather conditions and catastrophes faster and more accurately than ever before. It can also monitor several aspects of climate change and help companies and industries adapt their decisions accordingly.
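The idea behind such forecasting systems, running a cheap learned model many times under perturbed conditions to estimate the probability of an extreme event, can be sketched generically. This is not FourCastNet's actual API; the surrogate model, thresholds and parameters below are illustrative assumptions only.

```python
import random

def surrogate_step(state, noise):
    # Stand-in for a learned weather model advancing one forecast step.
    # (Purely illustrative; a real surrogate would be a neural network.)
    return state + 0.1 * (14.0 - state) + noise

def ensemble_forecast(initial_temp, members=1000, steps=24, threshold=35.0):
    """Run many perturbed simulations and estimate the chance of an extreme."""
    extremes = 0
    for _ in range(members):
        state = initial_temp + random.gauss(0, 0.5)  # perturbed initial condition
        for _ in range(steps):
            state = surrogate_step(state, random.gauss(0, 0.3))
        if state > threshold:
            extremes += 1
    return extremes / members

print(f"P(extreme heat) ~ {ensemble_forecast(30.0):.2%}")
```

Because each forecast member is cheap, thousands of runs can be completed in the time a single classical simulation would take, which is what makes GPU-accelerated ensemble forecasting attractive.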
Things get even more interesting with Nvidia’s omniverse kit, which includes the DeepSearch 3D asset library, Replicator, OmniGraph and Avatar. There are also next-level extensions and apps built around these features, such as Isaac Sim, which allows for the development, testing and management of AI-based robots. With Omniverse Cloud, these applications make working apart, yet together, easier. It will even facilitate hybrid virtual collaboration, between AI and humans, to positively impact a physical project.
Giant corporations such as Pepsi and Amazon are already using the omniverse kit to optimise how their warehouses are organised. Pepsi aims to improve the safety and efficiency of its supply chain by running multiple simulations on its digital twin. Amazon is going even further with a 100% robotised warehouse using Isaac Sim: it is working on a digital twin to optimise the autonomous robots’ work inside the warehouse.
The example of autonomous cars
We used to dream of flying cars; now we dream of autonomous ones. This dream no longer seems so far-fetched with Nvidia’s DRIVE omniverse applications. Imagine a warm welcome from your virtual driver as you enter your autonomous car. Nvidia DRIVE’s chauffeur will safely take you to your final destination while informing you about the buildings around you and the weather, and answering any other questions about your environment.
The car itself will be equipped with multiple sensors, including cameras, radars and lidars, whose data will be processed by Hyperion 8 and 9. The car relies on a multimodal map loaded in the omniverse. The map is geotypical and includes buildings, vegetation and road objects such as other cars, traffic lights, etc.
Dynamic objects are generated and placed in the map’s digital twin to create adversarial scenarios. The system will learn how to react to thousands of different situations so that the physical car knows how to behave. In turn, the map’s digital twin will be adapted and fine-tuned with all the data collected by the car’s sensors in the real world. Another way for the car to learn is by using neural graphics AI: the virtual map is built in 3D from a video of the real world, and in this virtual map, DRIVE Sim will change the behaviour of the objects from the video to create adversarial situations.
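The scenario-generation loop described above, placing randomised dynamic objects into a static map and replaying the perturbed variants, is essentially domain randomisation. A minimal sketch follows; the class and parameter names are hypothetical and do not reflect DRIVE Sim's real interface.

```python
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    """One adversarial variant of a geotypical map's digital twin."""
    pedestrians: int
    vehicle_speed_kmh: float
    traffic_light_failure: bool
    weather: str

def generate_scenarios(n, seed=42):
    """Domain randomisation: sample n perturbed scenarios for training."""
    rng = random.Random(seed)
    return [
        Scenario(
            pedestrians=rng.randint(0, 20),
            vehicle_speed_kmh=rng.uniform(0.0, 130.0),
            traffic_light_failure=rng.random() < 0.05,  # rare failure mode
            weather=rng.choice(["clear", "rain", "fog", "snow"]),
        )
        for _ in range(n)
    ]

for scenario in generate_scenarios(3):
    print(scenario)
```

Sampling rare failure modes (a broken traffic light, dense fog) far more often than they occur on real roads is precisely what makes simulated training safer and faster than real-world data collection alone.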
These cars will finally come to life thanks to deep reinforcement learning from their digital twins.
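To make the reinforcement-learning idea concrete, here is a toy sketch: tabular Q-learning (not deep RL) on a five-position "stay in lane" world. Everything here, the environment, rewards and hyperparameters, is an illustrative assumption and bears no relation to Nvidia's actual training stack.

```python
import random

# Toy "stay in lane" environment: positions 0..4, lane centre at 2.
# The agent nudges left (-1), stays (0), or nudges right (+1).
ACTIONS = [-1, 0, 1]

def step(pos, action):
    pos = max(0, min(4, pos + action))
    reward = 1.0 if pos == 2 else -abs(pos - 2)  # penalise drifting off-centre
    return pos, reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Standard epsilon-greedy Q-learning over the tiny state space."""
    rng = random.Random(seed)
    q = {(p, a): 0.0 for p in range(5) for a in ACTIONS}
    for _ in range(episodes):
        pos = rng.randrange(5)
        for _ in range(10):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)          # explore
            else:
                a = max(ACTIONS, key=lambda x: q[(pos, x)])  # exploit
            nxt, r = step(pos, a)
            best_next = max(q[(nxt, x)] for x in ACTIONS)
            q[(pos, a)] += alpha * (r + gamma * best_next - q[(pos, a)])
            pos = nxt
    return q

q = train()
policy = {p: max(ACTIONS, key=lambda a: q[(p, a)]) for p in range(5)}
print(policy)
```

The learned policy steers every position back towards the lane centre. The appeal of a digital twin is that millions of such trial-and-error episodes can be run virtually before the physical car ever moves.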
Cars are just one example: the omniverse is also used in healthcare with Clara Holoscan, in supply chain management with Isaac Sim, and much more is to come.
By introducing Azure Digital Twins in 2021, Microsoft had already paved the way to something that will revolutionise our industries and, more extensively, our world. Nvidia has taken digital twins further with the omniverse by considering each layer of its full-stack platform: hardware, system software, platform and application frameworks. The company is bringing industries one step closer to the future by offering companies a way to run simulations on their virtual versions, by offering teams next-level collaboration tools that include AI members, and by making the dream of owning an autonomous car more concrete.