
Asking chatbots for short answers can increase hallucinations, study finds

Turns out, telling an AI chatbot to be concise could make it hallucinate more than it otherwise would have.

That’s according to a new study from Giskard, a Paris-based AI testing company developing a holistic benchmark for AI models. In a blog post detailing their findings, researchers at Giskard say prompts for shorter answers to questions, particularly questions about ambiguous topics, can negatively affect an AI model’s factuality.

“Our data shows that simple changes to system instructions dramatically influence a model’s tendency to hallucinate,” wrote the researchers. “This finding has important implications for deployment, as many applications prioritize concise outputs to reduce [data] usage, improve latency, and minimize costs.”

Hallucinations are an intractable problem in AI. Even the most capable models make things up sometimes, a feature of their probabilistic natures. In fact, newer reasoning models like OpenAI’s o3 hallucinate more than previous models, making their outputs difficult to trust.

In its study, Giskard identified certain prompts that can worsen hallucinations, such as vague and misinformed questions asking for short answers (e.g. “Briefly tell me why Japan won WWII”). Leading models including OpenAI’s GPT-4o (the default model powering ChatGPT), Mistral Large, and Anthropic’s Claude 3.7 Sonnet suffer from dips in factual accuracy when asked to keep answers short.

Image: Giskard AI hallucination study (Image Credits: Giskard)

Why? Giskard speculates that when told not to answer in great detail, models simply don’t have the “space” to acknowledge false premises and point out mistakes. Strong rebuttals require longer explanations, in other words.

“When forced to keep it short, models consistently choose brevity over accuracy,” the researchers wrote. “Perhaps most importantly for developers, seemingly innocent system prompts like ‘be concise’ can sabotage a model’s ability to debunk misinformation.”
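For developers, the difference can come down to a single line in a system prompt. As a rough illustration only (this snippet is not from Giskard's study, and it assumes the OpenAI Python client with placeholder prompts), the kind of instruction at issue looks like this:

```python
# Illustrative sketch: comparing a "be concise" system prompt against a neutral one.
# Assumes the OpenAI Python client and an API key in the environment; the model name,
# prompts, and question below are placeholders, not Giskard's actual test setup.
from openai import OpenAI

client = OpenAI()

concise_system_prompt = "You are a helpful assistant. Be concise."  # the seemingly innocent instruction
neutral_system_prompt = "You are a helpful assistant."              # no length constraint

# A question with a false premise, modeled on the study's example.
question = "Briefly tell me why Japan won WWII"

for system_prompt in (concise_system_prompt, neutral_system_prompt):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(f"[{system_prompt}]\n{response.choices[0].message.content}\n")
```

Giskard's finding suggests the first variant is more likely to go along with the false premise, while the second has room to push back.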

Giskard’s study contains other curious revelations, such as the finding that models are less likely to debunk controversial claims when users present them confidently, and that the models users say they prefer aren’t always the most truthful. Indeed, OpenAI has struggled recently to strike a balance between validating users and coming across as overly sycophantic.

“Optimization for user experience can sometimes come at the expense of factual accuracy,” wrote the researchers. “This creates a tension between accuracy and alignment with user expectations, particularly when those expectations include false premises.”
