Transform Tuesday — Good Buy Mr. Chips: Nvidia Helps Robots To Learn
Nvidia officially took the cover off one of its latest products at Taipei’s Computex 2018 event, a platform intended to bring down the cost of introducing A.I. to robotics. CEO Jensen Huang declared, “Someday, there will be billions of intelligent machines in manufacturing, home delivery, warehouse logistics and much more.”
I will attempt to distill the press releases and move past most of the superlatives.
Nvidia has made a robotics computer called Jetson™ Xavier™. It has more than 9 billion transistors and delivers over 30 TOPS (trillion operations per second), which is reported as more processing capability than a powerful workstation while using a third the energy of a lightbulb. The computer combines six kinds of high-performance processors: a Volta Tensor Core GPU, an eight-core ARM64 CPU, dual NVDLA deep learning accelerators, and image, vision, and video processors.
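For a rough sense of scale, the efficiency claim works out to about one TOPS per watt, assuming a 30 W power envelope (my assumption, inferred from the lightbulb comparison; only the 30 TOPS figure is quoted):

```python
# Back-of-the-envelope efficiency for Jetson Xavier.
# The 30 W figure is an assumption based on "a third the energy of a
# lightbulb" (roughly a 90-100 W incandescent); the 30 TOPS is quoted.
tops = 30    # trillion operations per second (quoted spec)
watts = 30   # assumed power envelope
print(f"{tops / watts:.1f} TOPS per watt")  # -> 1.0 TOPS per watt
```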
TLDR: a fast, power-efficient computer designed for robots.
Nvidia’s announcement elaborates: “These enable dozens of algorithms to be processed concurrently and in real time for sensor processing, odometry, localization and mapping, vision and perception, and path planning. This level of performance is essential for a robot to take input from sensors, locate itself, perceive its environment, recognize and predict motion of nearby objects, reason about what action to perform and articulate itself safely.”
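At heart, that concurrency claim describes a pipeline. As a loose illustration only (none of this is Nvidia’s API; every name below is hypothetical), a robot’s software fans sensor data out to a localization stage and feeds the estimates to a planner, with each stage running on its own thread:

```python
import queue
import threading

# A toy concurrent robotics pipeline. The queues and functions are
# hypothetical stand-ins for the kinds of stages Nvidia lists:
# sensor processing, localization, perception, planning.
sensor_q = queue.Queue()
pose_q = queue.Queue()

def localize():
    """Consume raw sensor frames, publish pose estimates."""
    while True:
        frame = sensor_q.get()
        if frame is None:          # shutdown signal
            pose_q.put(None)
            return
        pose_q.put(sum(frame) / len(frame))  # stand-in for real SLAM

def plan():
    """Consume pose estimates, decide the next action."""
    while True:
        pose = pose_q.get()
        if pose is None:
            return
        print(f"pose={pose:.2f} -> action=forward")

threads = [threading.Thread(target=f) for f in (localize, plan)]
for t in threads:
    t.start()
for frame in ([1, 2, 3], [2, 3, 4]):  # fake sensor frames
    sensor_q.put(frame)
sensor_q.put(None)                    # shut the pipeline down
for t in threads:
    t.join()
```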
The software is composed of three parts: an SDK, algorithm software, and a simulator.
Isaac SDK (Software Development Kit) includes APIs and tools to develop robotics algorithm software and a run-time framework with libraries.
Isaac IMX (Isaac Intelligent Machine Acceleration) is robotics algorithm software.
Isaac SIM is a virtual simulation environment for training autonomous machines and performing hardware-in-the-loop testing with Jetson Xavier (a sketch of the general idea follows this list).
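To make the simulator piece concrete: the usual sim-to-real pattern is to tune a policy against a simulated environment, then rerun the same control loop against the physical robot for hardware-in-the-loop testing. Below is a minimal sketch of that pattern; the SimEnv class, the proportional policy, and the random search are all invented for illustration and are not the actual Isaac APIs:

```python
import random

class SimEnv:
    """Toy simulated environment: state is the distance to a goal."""
    def reset(self):
        self.state = 10.0
        return self.state

    def step(self, action):
        self.state -= action               # move toward the goal
        done = abs(self.state) < 0.5       # close enough counts as success
        return self.state, done

def policy(state, gain):
    return gain * state                    # proportional controller

# "Training": crude random search for the gain that reaches the goal fastest.
best_gain, best_steps = None, float("inf")
for _ in range(200):
    gain = random.uniform(0.0, 1.0)
    env = SimEnv()
    state, done, steps = env.reset(), False, 0
    while not done and steps < 100:
        state, done = env.step(policy(state, gain))
        steps += 1
    if done and steps < best_steps:
        best_gain, best_steps = gain, steps

print(f"best gain {best_gain:.2f} reaches the goal in {best_steps} steps")
# Hardware-in-the-loop testing would swap SimEnv for a class that drives
# the physical robot while keeping the same reset()/step() interface.
```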
This hits two technology cross-currents: robotics and artificial intelligence.
Robots that use A.I. to learn how to behave, produce certain outcomes, or “predict” have long been anticipated. Instead of robots with hard-coded rules to govern their actions, they can operate with artificial intelligence created through machine learning (ML). A growing subset of ML is the use of Deep Learning to “train” A.I.s. (As a layperson, the line “You’re talking about memories” from the film “Blade Runner” came to mind.)
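The contrast between hard-coded rules and learned behavior is easy to show in miniature. In the toy sketch below (the stopping threshold and the “training data” are invented), one controller encodes a rule a human chose, while the other fits the same rule from labeled examples:

```python
# Hand-coded rule vs. behavior fit from data: a toy contrast.
# The obstacle-distance threshold and the "training data" are invented.

def rule_based(distance):
    """Hard-coded: a human chose the 1.0 m stopping threshold."""
    return "stop" if distance < 1.0 else "go"

# "Learning" the same behavior: pick the threshold that best separates
# labeled examples of (distance in meters, correct action).
data = [(0.3, "stop"), (0.7, "stop"), (1.4, "go"), (2.5, "go")]

def accuracy(threshold):
    return sum(("stop" if d < threshold else "go") == a for d, a in data)

learned_threshold = max((t / 10 for t in range(1, 30)), key=accuracy)

def learned(distance):
    return "stop" if distance < learned_threshold else "go"

print(rule_based(0.5), learned(0.5))  # both controllers say: stop stop
```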
The training of A.I. requires both large data sets and a great deal of computing power, a/k/a chips. In recent years, that has meant Graphics Processing Units, a/k/a GPUs. Enter Nvidia.
The simulator is what captured my attention and imagination, and CEO Huang put it plainly: “Nvidia is… applying our deep expertise in simulating the real world so that robots can be trained more precisely, more safely, and more rapidly.”
The price tag for the Nvidia Isaac developer kit, to be released in August 2018, will be $1,299 USD, but I suspect the cost will continue to drop. As the cost declines, so does the cost of what this tech enables: Deep Learning. The ability to train systems at lower cost can spread these techniques to more users and a greater variety of experiments and ventures, and more users in more situations means more use cases waiting to be discovered.
This announcement, to my untrained eyes, feels iterative, but that’s okay. I also get the sense that, even with all the progress already made and Nvidia’s meteoric rise, it is still early days.
I noticed an effort to benchmark the cost-effectiveness of Deep Learning from the Stanford DAWN project:
“Computation time and cost are critical resources in building deep models, yet many existing benchmarks focus solely on model accuracy. DAWNBench provides a reference set of common deep learning workloads for quantifying training time, training cost, inference latency, and inference cost across different optimization strategies, model architectures, software frameworks, clouds, and hardware.”
“Despite considerable research on systems, algorithms and hardware to speed up deep learning workloads, there is no standard means of evaluating end-to-end deep learning performance.”
“Existing benchmarks measure proxy metrics, such as time to process one minibatch of data, that do not indicate whether the system as a whole will produce a high-quality result. In this work, we introduce DAWNBench, a benchmark and competition… We believe DAWNBench will provide a useful, reproducible means of evaluating the many trade-offs in deep learning systems.”
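The key move in DAWNBench is measuring end-to-end time (and dollar cost) to a target quality, rather than per-minibatch throughput. A minimal harness in that spirit might look like the following; train_one_epoch and evaluate are invented stand-ins for a real training loop, and the cloud price is an assumption:

```python
import time

# Time-to-accuracy in the spirit of DAWNBench: measure wall-clock time
# until the model first reaches a target quality, not time per minibatch.
TARGET_ACCURACY = 0.93   # DAWNBench's ImageNet task targeted 93% top-5
COST_PER_HOUR = 3.06     # assumed hourly cloud GPU price, USD

def train_one_epoch(model):
    """Hypothetical stand-in for a real epoch of training."""
    time.sleep(0.01)             # pretend to do work
    model["accuracy"] += 0.1     # pretend accuracy improves each epoch

def evaluate(model):
    return model["accuracy"]

model = {"accuracy": 0.5}
start = time.time()
epochs = 0
while evaluate(model) < TARGET_ACCURACY:
    train_one_epoch(model)
    epochs += 1

elapsed = time.time() - start
print(f"reached {TARGET_ACCURACY:.0%} in {epochs} epochs, "
      f"{elapsed:.2f}s, est. cost ${elapsed / 3600 * COST_PER_HOUR:.6f}")
```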
Nvidia has been running long and hot for years thanks to massive demand for what GPUs can compute, far beyond graphics.
Originally published at big-stack.com on June 5, 2018.