
AI Hallucinations Plague Prestigious NeurIPS Conference Papers

AI GPTZero Hallucinations NeurIPS Startups Artificial Intelligence Research
January 21, 2026
Viqus Verdict: 8 (Verification Gap)
Media Hype: 6/10
Real Impact: 8/10

Article Summary

GPTZero’s analysis of papers accepted by the Conference on Neural Information Processing Systems (NeurIPS) revealed a concerning number of hallucinated citations. The startup identified 100 instances of fabricated citations across 51 papers, confirming them as entirely fake. This finding highlights a potential problem with the increasing reliance on Large Language Models (LLMs) for tasks like citation generation, even among the world’s foremost AI researchers. While NeurIPS argues that a small percentage of inaccurate references doesn’t invalidate a paper’s core findings, the sheer volume of LLM-generated citations presents a significant verification challenge. The discovery underscores the strain on conference review pipelines and raises questions about the rigor of academic publishing in the age of AI. The situation is further complicated by the role of citations as a metric of a researcher’s influence and impact within the field. The issue was flagged as early as 2025 in the paper ‘The AI Conference Peer Review Crisis’.

Key Points

  • 100 confirmed instances of fabricated citations were found across 51 papers submitted to NeurIPS, according to GPTZero.
  • Despite NeurIPS' assertion that a small percentage of inaccurate references doesn’t invalidate the research, the large volume of AI-generated citations poses a significant verification challenge.
  • The finding raises concerns about the accuracy of AI-generated content within leading academic research, particularly given the reliance on LLMs and the importance of citations as a career metric.
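The verification bottleneck described in the points above can be illustrated with a toy sketch. This is not GPTZero’s actual method; the index and function names here are hypothetical stand-ins for a real bibliographic database such as Crossref or Semantic Scholar:

```python
# Toy illustration of the citation-verification problem (hypothetical,
# not GPTZero's pipeline). A small trusted index stands in for a real
# bibliographic database.
TRUSTED_INDEX = {
    "attention is all you need",
    "deep residual learning for image recognition",
}

def flag_unverified(citations):
    """Return cited titles absent from the trusted index.

    A miss does not prove fabrication (indexes are incomplete), but
    every flagged entry requires manual checking, which is exactly the
    step that breaks down when LLMs emit citations at scale.
    """
    return [c for c in citations if c.strip().lower() not in TRUSTED_INDEX]

cited = [
    "Attention Is All You Need",
    "A Plausible-Sounding Paper That Does Not Exist",
]
print(flag_unverified(cited))  # only the unknown title is flagged
```

Even this naive check shows why scale matters: each flagged title costs human reviewer time, and a single conference cycle can involve tens of thousands of references.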

Why It Matters

This news matters for professionals in AI, research, and academia because it exposes a fundamental flaw in the current deployment of LLMs for research tasks. Widespread fabrication undermines the credibility of academic publications and raises serious questions about the reliability of research findings. It has significant implications for the future of AI research, demanding increased scrutiny and the development of better verification methods. It also highlights the need for a clearer understanding of LLM limitations and biases, especially when the models are used to generate critical information like citations.
