By Mark Priscaro
We’ve never made much noise about our emulation lab. After all, the multimillion-dollar collection of high-performance processors we use to verify and QA our chip designs is a little bit like a secret weapon. But after learning that the lab is now home to the world’s largest installation of Cadence Verification Computing Platform Systems, we decided to open up this amazing facility for a virtual tour. Hidden in an out-of-the-way spot at NVIDIA corporate headquarters is a warren of specialized machines that work around the clock as engineers test the GPUs and mobile processors of the future.
Emulators are massive with tons of wires. This one stands 7 feet tall, 6 feet wide and 10 feet deep.
Hardware emulators re-create a specific computing environment so design engineers can test the design and performance of a new processor after it has been designed but before it’s manufactured. It’s one thing to design a breakthrough architecture like Fermi; it’s another to make sure it works correctly in the real world. It’s simply not feasible to make physical prototypes of these chips and iterate for each design tweak. Other solutions – such as software simulators – are much too slow. Emulation speeds up the testing process a thousandfold.
Emulators are designed to provide an exact replica of actual hardware. (Software tools, in comparison, simulate or mimic what a particular piece of hardware will do.) When an emulator is plugged into a PC, it’s exactly like placing a physical chip on the motherboard. From then on, chip designers can test away.
This huge cable comes out of an emulator, carrying the pin signals of the GPU inside. We connect the cable to a graphics card in a test PC.
Due to the cost and complexity, not every company invests in emulation. But among those that do, we submit that we’re pretty intense about it. The simple reason is that having a world-class emulation lab means we can keep innovating ahead of our competition.
“Today’s GPUs, which are some of the world’s most complex devices, have billions of transistors,” said Narendra Konda, NVIDIA engineering and emulation lab director. “There’s no way around the fact that cutting-edge design tools like hardware emulators are essential for designing, verifying, developing software drivers and integrating software and hardware components of GPUs and mobile processors.”
Since 1995, NVIDIA has invested millions of dollars in emulation. Today, the lab covers a vast, roughly 6,000-square-foot space secreted away behind locked doors. Step inside and you’re immediately surrounded by racks of equipment. Cables and pipes snake along the floors and walls; vents and air conditioning units create a constant whir as they work to keep these gargantuan machines cool. The emulators themselves are sleek, water-cooled beasts, each named after a major river.
“Nile” is an 8-year-old emulator in our lab, still going strong.
Near the front is Tigris, a snowflake-shaped configuration of sixteen chassis that was built to emulate Fermi. It’s physically the biggest emulator in the lab, but no longer the most powerful. That title goes to Indus, a multimillion-dollar steel-blue piece of hardware a little longer than a minivan.
Three and a half years in the making, Indus was designed to handle Kepler, our next-generation chip architecture and the successor to Fermi. According to Nimish Modi, senior vice president for the System and Software Realization Group at Cadence, “Indus is the world’s largest installation of Cadence Verification Computing Platform systems, Palladium XP. It’s great working with a partner like NVIDIA to see how our technologies can work together to advance this industry.”
We worked closely with Cadence on Indus’s design – and although it’s smaller than Tigris, it’s more than twice as powerful. It’s stunning to look at Indus’s mass and complexity and realize that all that power represents one chip.
A space ship? No! It’s the “Tigris” emulator in all its glory.
Filling out the lab are Rhine, Nile and a host of other emulators that might be emulating any number of GPUs designed for uses from mobile to gaming to supercomputing to embedded. If a design needs more capacity, Konda and his team can daisy-chain emulators together, much the way gamers boost performance by running multiple graphics cards in SLI. The entire lab has an emulation capacity of 4 billion gates – the basic building blocks of a chip design.
“Indus”: the world’s largest emulator, based on Cadence’s Palladium emulator technology.
“Deploying and managing these complex tools requires a very skilled and committed engineering team,” Konda said. “The great work that the emulation team does keeps this state-of-the-art lab humming along.”
NVIDIA’s emulation engineering team in front of Indus – they’re one of our secret weapons.
Each emulator connects to a number of PCs, which are used for testing and can be accessed remotely. So, for example, an NVIDIA engineer in India can log on, boot up and start running tests at any time of day or night. Since all graphics processing comes down, in the end, to drawing triangles, the tests start there. Can this new chip draw a triangle? Can it draw a red triangle (not blue, not green)? Testing proceeds until everyone is satisfied that the chip can handle the most complex visual computing tasks and is compatible with all the necessary drivers, systems and so on. At any point, the designers might need to go back to the drawing board and repeat the process. Once a chip graduates from the emulation lab, it’s sent out to be “fabbed” by our manufacturing partners in Taiwan, and from there it’s released into the world.
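To make that first sanity check concrete, here’s a minimal sketch of what a “draw a red triangle” smoke test might look like from the test PC’s side, written in Python with PyOpenGL and GLUT. It’s purely illustrative – the window size, vertex positions and pass/fail rule are our own assumptions, and NVIDIA’s actual emulation test harness is far more elaborate and not public.

```python
# Hypothetical "red triangle" smoke test (illustrative sketch, not NVIDIA's harness).
# Clears the screen to black, draws one solid red triangle, reads back the pixel
# at the center of the framebuffer and checks that it came back pure red.
from OpenGL.GL import (
    GL_COLOR_BUFFER_BIT, GL_RGB, GL_TRIANGLES, GL_UNSIGNED_BYTE,
    glBegin, glClear, glClearColor, glColor3f, glEnd, glFlush,
    glReadPixels, glVertex2f,
)
from OpenGL.GLUT import (
    GLUT_RGB, GLUT_SINGLE,
    glutCreateWindow, glutInit, glutInitDisplayMode, glutInitWindowSize,
)

WIDTH, HEIGHT = 256, 256  # assumed test resolution


def draw_red_triangle():
    """Clear to black and draw a single red triangle covering the center."""
    glClearColor(0.0, 0.0, 0.0, 1.0)
    glClear(GL_COLOR_BUFFER_BIT)
    glColor3f(1.0, 0.0, 0.0)  # red, not blue, not green
    glBegin(GL_TRIANGLES)
    glVertex2f(-0.5, -0.5)
    glVertex2f(0.5, -0.5)
    glVertex2f(0.0, 0.5)
    glEnd()
    glFlush()


def center_pixel():
    """Read back the RGB value of the pixel in the middle of the window."""
    raw = bytes(glReadPixels(WIDTH // 2, HEIGHT // 2, 1, 1,
                             GL_RGB, GL_UNSIGNED_BYTE))
    return raw[0], raw[1], raw[2]


if __name__ == "__main__":
    glutInit()
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
    glutInitWindowSize(WIDTH, HEIGHT)
    glutCreateWindow(b"red triangle smoke test")

    draw_red_triangle()
    r, g, b = center_pixel()
    assert (r, g, b) == (255, 0, 0), f"expected pure red, got {(r, g, b)}"
    print("PASS: the chip under test drew a red triangle")
```

In practice a harness like this would render to an off-screen framebuffer object and compare whole images against golden references rather than peeking at a single pixel in a freshly created window, but even this toy version captures the spirit of that very first question: can the chip draw a red triangle?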
Today, NVIDIA GPUs are powering supercomputers, in-flight entertainment systems and everything in between. They represent some of the most complex technology on the planet. It gives you a new perspective to stand in the emulation lab and think about the advances in these chips – the millions and billions of dollars in R&D, the years of work – and realize each one starts out right here, trying to draw a red triangle.
Mark Priscaro is Senior Public Relations Manager at NVIDIA.