At a recent launch event, AMD claimed that its latest GPU for AI and high-performance computing outperforms Nvidia's comparable part. Nvidia has responded, asserting that its H100-based systems are actually faster when properly optimized. The company argued that high AI performance depends not only on strong hardware but also on optimized software, robust parallel computing, versatile tools, and refined algorithms, and it published performance metrics showing its H100-based servers completing inference tasks more quickly than AMD's machines. Nvidia also emphasized response time and overall efficiency, particularly when serving many inference requests in larger batches, and noted that industry benchmarks such as MLPerf weigh these factors heavily.
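The batching point Nvidia raises reflects a general serving trade-off: larger batches raise aggregate throughput but also raise the latency each request sees. The sketch below illustrates that trade-off with a purely hypothetical cost model; it is not vendor benchmark code, and the function name and timing constants are invented for illustration.

```python
# Illustrative sketch of the batch-size trade-off in inference serving.
# The cost model and all numbers are hypothetical, not measured results.

def serve_batch(batch_size, base_step_ms=5.0, per_request_ms=0.5):
    """Model one forward pass whose cost grows mildly with batch size."""
    # Hypothetical cost: a fixed per-step cost plus a small per-request term.
    step_ms = base_step_ms + per_request_ms * batch_size
    latency_ms = step_ms  # every request in the batch waits for the full step
    throughput = batch_size / (step_ms / 1000.0)  # requests per second
    return latency_ms, throughput

for bs in (1, 4, 16, 64):
    lat, thr = serve_batch(bs)
    print(f"batch={bs:3d}  latency={lat:6.1f} ms  throughput={thr:8.1f} req/s")
```

Under this toy model, throughput rises with batch size while per-request latency also rises, which is why benchmarks like MLPerf report both metrics rather than throughput alone.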