Hobbyist AI 'Time Travels' to 1834 London, Unearthing Historical Facts
Impact Score: 8
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
While the incident itself is relatively contained, the underlying concept of an AI unexpectedly reconstructing and accurately representing a historical moment has significant potential. It has generated considerable buzz within both the AI research community and the broader public, which merits a high impact score.
Article Summary
A hobbyist developer, Hayk Grigorian, has created a small AI language model, TimeCapsuleLLM, trained exclusively on 6.25GB of texts from 1800-1875 London. The model, built using nanoGPT and Microsoft’s Phi 1.5 architecture, surprisingly generated a detailed account of the 1834 London protests, referencing Lord Palmerston and the Poor Law Amendment Act, even though the developer never explicitly trained the model on those events. The AI assembled these connections from scattered references within the Victorian-era data. This accidental reconstruction shows how period-restricted training data can shape a model’s outputs, suggesting a new approach to training language models. Grigorian’s experimentation with ‘Selective Temporal Training’ (STT), which involves training a model from scratch solely on period texts to avoid modern data contamination, is attracting attention within the AI research community. The model’s ability to ‘remember’ and connect disparate pieces of information from its training data mirrors a known effect of scaling training data in smaller AI models. Grigorian’s work contributes to the emerging field of ‘Historical Large Language Models’ (HLLMs), raising possibilities for interactive period linguistic models and offering novel methods for studying past eras through AI.
Key Points
- A hobbyist developer created an AI language model trained solely on Victorian-era texts.
- The AI model unexpectedly generated a historically accurate account of the 1834 London protests, referencing specific figures and events.
- This ‘factcident’ showcases how period-restricted historical data can shape AI outputs, particularly in small models trained from scratch; a minimal sketch of the idea follows this list.
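The underlying technique is simple to illustrate. The sketch below is a hypothetical Python example of the corpus-filtering step behind ‘Selective Temporal Training’: only documents published inside the 1800-1875 window are kept, so the resulting training file contains no modern text. The directory name, metadata fields (`year`, `text`), and output path are assumptions for illustration, not details of Grigorian’s actual TimeCapsuleLLM pipeline.

```python
import json
from pathlib import Path

# Hypothetical sketch of the corpus-filtering step behind
# "Selective Temporal Training": keep only documents whose publication
# year falls inside the target era, so no modern text leaks into training.

WINDOW = (1800, 1875)                    # target era, per the article
CORPUS_DIR = Path("victorian_scans")     # assumed folder of JSON records
OUT_FILE = Path("train_1800_1875.txt")   # plain-text file for a nanoGPT-style trainer


def in_window(year: int) -> bool:
    """Return True if the publication year falls inside the target era."""
    return WINDOW[0] <= year <= WINDOW[1]


def build_corpus() -> int:
    """Concatenate era-restricted documents into one training file."""
    kept = 0
    with OUT_FILE.open("w", encoding="utf-8") as out:
        for path in sorted(CORPUS_DIR.glob("*.json")):
            record = json.loads(path.read_text(encoding="utf-8"))
            # 'year' and 'text' are assumed metadata fields in each record
            if in_window(int(record["year"])):
                out.write(record["text"].strip() + "\n\n")
                kept += 1
    return kept


if __name__ == "__main__":
    count = build_corpus()
    print(f"Kept {count} documents from {WINDOW[0]}-{WINDOW[1]}")
```

Training from scratch on a file like this, rather than fine-tuning an existing model, is what keeps modern knowledge out of the weights; any period facts the model later ‘remembers’ can only have come from the era-restricted corpus.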