
The Unsung Originator of 'AGI': A Deep Dive into the Term's Unexpected History

Artificial Intelligence · AGI · Dartmouth Conference · Technology History · Nanotechnology · DeepMind · Robotics
October 31, 2025
Source: Wired AI
Viqus Verdict: 8
Hidden Roots
Media Hype 7/10
Real Impact 8/10

Article Summary

The story of 'artificial general intelligence' (AGI) is surprisingly rooted in a cautionary tale about nanotechnology and the misuse of advanced technology. Mark Gubrud, a researcher obsessed with the dangers of nanotechnology and its potential as a weapon, coined the term in 1997. While attending a Foresight Conference, he needed a way to distinguish general AI (machines with broad cognitive abilities) from the more limited 'expert systems' then prevalent. Gubrud defined AGI as systems rivaling or surpassing the human brain in complexity and speed, capable of acquiring and manipulating general knowledge. This definition, presented in a paper focused on international security, was largely ignored at the time. As the field of AI gained traction in the early 2000s, however, the term 'AGI' took hold, largely thanks to its adoption by figures like Shane Legg, who later co-founded DeepMind. Gubrud's original warning about a potential arms race surrounding AGI, particularly regarding autonomous weapons, has proven prescient given the current focus on AI development and the immense wealth being invested in achieving true AGI. The story highlights a critical and often-overlooked element of the AI narrative: the ethical considerations and potential risks that are frequently sidelined in the pursuit of technological advancement.

Key Points

  • Mark Gubrud coined the term ‘artificial general intelligence’ (AGI) in 1997 while warning against the dangers of nanotechnology as a potential weapon.
  • Gubrud’s initial definition of AGI described systems with broad cognitive abilities, rivaling or surpassing the human brain in complexity and speed.
  • Despite the significance of his contribution, Gubrud’s work was initially overlooked; the term ‘AGI’ gained prominence later through the efforts of figures like Shane Legg.

Why It Matters

This story is important because it shifts the narrative surrounding AGI. We’ve been focused on the ‘big players’ – OpenAI, Google, Meta – and their trillion-dollar ambitions. However, this piece reveals that the fundamental concept of AGI originated with a researcher deeply concerned about the potential dangers of this technology. It serves as a crucial reminder that technological progress must be accompanied by careful consideration of ethical implications and potential risks. Furthermore, it highlights the often-unrecognized contributions of individuals working on the fringes of the field, whose warnings can prove invaluable as powerful forces shape the future of AI.
