Key Points

OpenAI acknowledges that ChatGPT, despite its advancements, still makes mistakes and shouldn’t be a primary information source. GPT-5 continues to struggle with hallucinations, producing incorrect answers about 10% of the time. The company has integrated search functionality to help users verify responses externally. While OpenAI aims to solve these issues, executives admit it won’t happen in the near future.


Use ChatGPT as second opinion, not primary source: OpenAI executive

OpenAI executive Nick Turley advises treating ChatGPT as a second opinion due to persistent hallucinations and factual inaccuracies.

“Until we are provably more reliable than a human expert across all domains, we’ll continue to advise users to double-check the answers.” – Nick Turley

New Delhi, Aug 17

OpenAI’s latest language model, GPT-5, may be more powerful and accurate than its predecessors, but the company has warned users not to treat ChatGPT as their main source of information.

Nick Turley, Head of ChatGPT, said the AI chatbot should be used as a “second opinion” because it is still prone to mistakes, despite major improvements.

In an interview with The Verge, Turley admitted that GPT-5 continues to face the problem of hallucinations, where the system produces information that sounds believable but is factually wrong.

OpenAI says it has reduced such errors significantly, but the model still gives incorrect responses about 10 per cent of the time.

Turley stressed that achieving 100 per cent reliability is extremely difficult.

“Until we are provably more reliable than a human expert across all domains, we’ll continue to advise users to double-check the answers,” he said.

“I think people are going to continue to leverage ChatGPT as a second opinion, versus necessarily their primary source of fact,” he added.

Large language models like GPT-5 are trained to predict words based on patterns in huge datasets.

While this makes them excellent at generating natural responses, it also means they can provide false information on unfamiliar topics.
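To make the pattern-prediction idea concrete, here is a toy bigram model, a deliberately simplified illustration and not how GPT-5 works internally. It predicts the next word purely from co-occurrence counts in its training text, with no notion of whether a statement is true; real LLMs do essentially this at vastly larger scale, which is why fluent output can still be factually wrong. The corpus and function names are invented for this sketch.

```python
from collections import Counter, defaultdict

# Tiny "training data" for the toy model (illustrative only).
corpus = ("the capital of france is paris . "
          "the capital of italy is rome .").split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Prompted with a familiar pattern, it answers fluently:
print(predict_next("is"))       # picks a continuation seen in training
# Prompted with a word it never saw, it has nothing grounded to offer;
# a real LLM would still generate a fluent (possibly wrong) continuation.
print(predict_next("germany"))
```

The model never checks facts, only frequencies, so a question outside its training patterns yields either nothing or a plausible-sounding guess: the essence of a hallucination.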

To address this, OpenAI has connected ChatGPT to search, allowing users to verify results with external sources.

Turley expressed confidence that hallucinations will eventually be solved but cautioned that it will not happen in the near future.

“I’m confident we’ll eventually solve hallucinations, and I’m confident we’re not going to do it in the next quarter,” he said.

Meanwhile, OpenAI continues to expand its ambitions. Reports suggest the company is developing its own browser, and CEO Sam Altman has even hinted that OpenAI could consider buying Google Chrome if it were ever put up for sale.

- IANS


Reader Comments

Shreya B
As a medical student, I use ChatGPT to get quick summaries but never for diagnosis. The hallucinations can be dangerous when it comes to health advice! Better safe than sorry 😊
Aman W
10% error rate is still too high for serious use. I appreciate their transparency though. Indian tech companies should also be this honest about their AI limitations.
Priyanka N
I teach in a Delhi school and see students blindly trusting ChatGPT answers. This warning is much needed for our education system where critical thinking is already weak.
Vikram M
The search integration is helpful but I wish they'd prioritize accuracy over new features like browsers. Quality over quantity please!
Kavya N
For coding help, ChatGPT is a lifesaver! But yes, always test the code yourself. The AI sometimes suggests outdated methods for Indian tech stacks.
