Nvidia’s Jensen Huang says AI hallucinations are solvable, artificial general intelligence is 5 years away

Key Points:

  • Artificial general intelligence (AGI) represents a significant future leap in artificial intelligence: systems able to perform a broad spectrum of cognitive tasks at or above human levels.
  • Concerns exist about the unpredictability of AGI’s decision-making processes and objectives, which may not align with human values or priorities, potentially leading to uncontrollable scenarios.
  • Nvidia CEO Jensen Huang suggests that defining AGI by specific, measurable tests would make its timeline predictable; by such well-defined criteria, he says, AGI could arrive within 5 years.


Artificial general intelligence, or AGI, stands as a pivotal advancement within the realm of artificial intelligence, offering capabilities beyond the specialized tasks performed by narrow AI. Often dubbed “strong AI” or “human-level AI,” AGI signifies a future where machines excel at a wide range of cognitive functions at levels comparable to or exceeding humans. Addressing the press at Nvidia’s recent GTC developer conference, CEO Jensen Huang said he has grown weary of the ubiquitous questions about AGI. That fascination stems from AGI’s profound implications for human existence: machine objectives may diverge from human values, and autonomous AI decision-making remains uncharted territory.


The uncertainty surrounding AGI’s development timeline invites sensationalism and apprehension, with stakeholders wary of predicting when technology will converge with human intellect. Huang delves into the difficulty of defining AGI, likening it to marking other milestones: we agree on when a new year begins or when we have reached a destination because the criteria are specified in advance. He proposes that achieving a functional AGI within five years is feasible if it is defined by concrete criteria, such as excelling at specific tests like legal examinations, logical reasoning, or medical assessments. Absent a clear understanding of the benchmarks that would characterize AGI, however, Huang declines to speculate.


A pertinent issue raised during the conference concerns AI hallucinations, where AI systems generate responses that seem plausible but lack factual grounding. Huang argues this challenge is readily surmountable, advocating an approach to answer generation termed “retrieval-augmented generation.” This methodology requires research and fact-checking for each response, mirroring principles of media literacy: verify information against sources and discard inaccuracies. Emphasizing the importance of cross-referencing multiple credible sources, Huang also underscores the value of AI acknowledging its limits by conveying uncertainty or admitting insufficient knowledge for certain inquiries, particularly in critical domains like healthcare.
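The idea behind retrieval-augmented generation can be sketched in a few lines. The toy below is an illustrative assumption, not Nvidia’s implementation: it scores documents by naive keyword overlap (real systems use vector embeddings) and builds a prompt that instructs the model to answer only from the retrieved sources, or to admit it does not know.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Retrieval here is naive keyword overlap; production systems
# typically use embedding similarity over a vector index.

def score(query: str, doc: str) -> int:
    """Count words shared between query and document (toy relevance score)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the answer in retrieved sources and allow admitting ignorance."""
    context = "\n".join(f"- {s}" for s in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. "
        "If they are insufficient, say you do not know.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "Nvidia held its GTC developer conference in 2024.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
    "AGI timelines depend heavily on how AGI is defined.",
]

prompt = build_prompt("What grounds retrieval-augmented generation answers?", corpus)
print(prompt)
```

The key point is the instruction wrapped around the retrieved context: the model is told to answer from sources or decline, which is exactly the “admit insufficient knowledge” behavior Huang describes.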


The discourse surrounding AGI encapsulates not only the technological advancements but also the ethical considerations and challenges in ensuring AI aligns with human values and remains accountable for its decision-making processes. As the boundaries between human and artificial intelligence blur, conversations led by industry leaders like Jensen Huang serve to navigate the evolving landscape of AI development responsibly and ethically.


©2024 The Horizon