Thinking Machines Lab Tackles AI Determinism

AI LLM Reproducibility OpenAI Mira Murati Thinking Machines Lab Reinforcement Learning
September 10, 2025
Viqus Verdict: 8 ("Controlled Chaos")
Media Hype: 6/10
Real Impact: 8/10

Article Summary

Thinking Machines Lab’s research directly addresses a significant hurdle in the advancement of large language models (LLMs). The team is tackling ‘non-determinism’: the fact that LLMs like ChatGPT often produce different responses to the same prompt, primarily due to the stochastic nature of GPU kernel orchestration during inference. By gaining precise control over this process, Thinking Machines Lab aims to generate more consistent and reliable AI responses. The work has implications for several areas, including reinforcement learning (RL), where noisy responses can hinder training, and for building customized AI models tailored to specific business needs. The lab’s commitment to open research, through its ‘Connectionism’ blog series, positions it as a potentially pivotal player in the evolution of AI, contrasting with other companies’ increasingly closed approaches. Its focus on reproducibility is a critical step toward building trust and practical applications for these powerful models, particularly as the lab looks to justify its $12 billion valuation.

Key Points

  • Thinking Machines Lab is researching methods to eliminate the randomness in LLM responses.
  • The core issue is the stochastic nature of GPU kernel orchestration during AI model inference.
  • Successfully addressing non-determinism will improve reinforcement learning training and enable more consistent, reliable AI responses for enterprise applications.
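The core issue in the points above can be seen at the smallest scale with ordinary floating-point arithmetic. The sketch below is illustrative only (it is not Thinking Machines Lab's code): floating-point addition is not associative, so the order in which parallel GPU threads combine partial sums can change a result bit-for-bit, and that drift, accumulated across billions of operations in an LLM forward pass, can flip a token choice and produce a visibly different response.

```python
# Illustrative sketch: floating-point addition is not associative, so
# different reduction orders (as can occur when GPU kernel scheduling
# varies between runs) yield bitwise-different results.

a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # one reduction order
right = a + (b + c)   # another reduction order

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False: same inputs, different grouping
```

Eliminating this kind of run-to-run variation requires pinning down the exact order of operations inside inference kernels, which is the control the lab's research is after.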

Why It Matters

This research is significant because it tackles a fundamental problem limiting the practical application of state-of-the-art AI models. Addressing non-determinism is crucial for building AI systems that can be reliably used in industries like robotics, scientific research, and enterprise software development. It moves beyond theoretical advancements and toward tangible solutions, potentially unlocking the full potential of LLMs and validating the substantial investment in Thinking Machines Lab. The shift toward an open research model also signals a potential change in the AI landscape, contrasting with the growing trend of secrecy in the industry.
