
Using NVIDIA-based Distributed Processing to Speed Mission-Critical AI at the Edge

May 16, 2023

Demand for data center-caliber technologies architected for higher performance and powerful centralized edge processing is growing rapidly. Ever-greater volumes of sensor data must be analyzed in real time to produce actionable insights for decision-making and competitive advantage.

For the first time, the market has an optimized, network-attached, rugged distributed GPU processing system purpose-built for demanding edge AI workloads. Join NVIDIA and Mercury to learn how they are:

  • Speeding low-latency, network-attached everything at the edge with disaggregated processing
  • Enabling GPU parallel computing resources via high-speed Ethernet networks without an x86 host
  • Pairing NVIDIA DPUs and GPUs for high-performance applications
  • Designing Rugged Distributed Processing (RDP) servers that reduce the SWaP, cost, and complexity of deploying GPU servers

