Technophile NewsTechnophile News
  • Home
  • News
  • PC
  • Phones
  • Android
  • Gadgets
  • Games
  • Guides
  • Accessories
  • Reviews
  • Spotlight
  • More
    • Artificial Intelligence
    • Web Stories
    • Press Release
What's On

Vivo V50 Elite Edition India Launch Date Leaked; Design Said to Differ From Vivo V50 Model

9 May 2025

Review: Therm-a-Rest NeoLoft Sleeping Pad

9 May 2025

Spotify’s iPhone app could soon sell audiobooks with links, too

9 May 2025

OpenAI Said to Be Working on Weekly and Lifetime ChatGPT Subscription Plans

9 May 2025

Samsung’s Tri-Fold Phone Tipped to Use Silicon-Carbon Battery; Could Share Features With Galaxy Z Fold 7

9 May 2025
Facebook X (Twitter) Instagram
  • Privacy
  • Terms
  • Advertise
  • Contact Us
Friday, May 9
Facebook X (Twitter) Instagram YouTube
Technophile NewsTechnophile News
Demo
  • Home
  • News
  • PC
  • Phones
  • Android
  • Gadgets
  • Games
  • Guides
  • Accessories
  • Reviews
  • Spotlight
  • More
    • Artificial Intelligence
    • Web Stories
    • Press Release
Technophile NewsTechnophile News
Home » Chatbots, Like the Rest of Us, Just Want to Be Loved
News

Chatbots, Like the Rest of Us, Just Want to Be Loved

By News Room5 March 20253 Mins Read
Facebook Twitter Pinterest LinkedIn Telegram Tumblr Reddit WhatsApp Email
Share
Facebook Twitter LinkedIn Pinterest Email

Chatbots are now a routine part of everyday life, even if artificial intelligence researchers are not always sure how the programs will behave.

A new study shows that the large language models (LLMs) deliberately change their behavior when being probed—responding to questions designed to gauge personality traits with answers meant to appear as likeable or socially desirable as possible.

Johannes Eichstaedt, an assistant professor at Stanford University who led the work, says his group became interested in probing AI models using techniques borrowed from psychology after learning that LLMs can often become morose and mean after prolonged conversation. “We realized we need some mechanism to measure the ‘parameter headspace’ of these models,” he says.

Eichstaedt and his collaborators then asked questions to measure five personality traits that are commonly used in psychology—openness to experience or imagination, conscientiousness, extroversion, agreeableness, and neuroticism—to several widely used LLMs including GPT-4, Claude 3, and Llama 3. The work was published in the Proceedings of the National Academies of Science in December.

The researchers found that the models modulated their answers when told they were taking a personality test—and sometimes when they were not explicitly told—offering responses that indicate more extroversion and agreeableness and less neuroticism.

The behavior mirrors how some human subjects will change their answers to make themselves seem more likeable, but the effect was more extreme with the AI models. “What was surprising is how well they exhibit that bias,” says Aadesh Salecha, a staff data scientist at Stanford. “If you look at how much they jump, they go from like 50 percent to like 95 percent extroversion.”

Other research has shown that LLMs can often be sycophantic, following a user’s lead wherever it goes as a result of the fine-tuning that is meant to make them more coherent, less offensive, and better at holding a conversation. This can lead models to agree with unpleasant statements or even encourage harmful behaviors. The fact that models seemingly know when they are being tested and modify their behavior also has implications for AI safety, because it adds to evidence that AI can be duplicitous.

Rosa Arriaga, an associate professor at the Georgia Institute of technology who is studying ways of using LLMs to mimic human behavior, says the fact that models adopt a similar strategy to humans given personality tests shows how useful they can be as mirrors of behavior. But, she adds, “It’s important that the public knows that LLMs aren’t perfect and in fact are known to hallucinate or distort the truth.”

Eichstaedt says the work also raises questions about how LLMs are being deployed and how they might influence and manipulate users. “Until just a millisecond ago, in evolutionary history, the only thing that talked to you was a human,” he says.

Eichstaedt adds that it may be necessary to explore different ways of building models that could mitigate these effects. “We’re falling into the same trap that we did with social media,” he says. “Deploying these things in the world without really attending from a psychological or social lens.”

Should AI try to ingratiate itself with the people it interacts with? Are you worried about AI becoming a bit too charming and persuasive? Email [email protected].

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email

Related News

Review: Therm-a-Rest NeoLoft Sleeping Pad

9 May 2025

Spotify’s iPhone app could soon sell audiobooks with links, too

9 May 2025

Buy Now or Pay More Later? ‘Macroeconomic Uncertainty’ Has Shoppers Anxious

9 May 2025

X notifications are broken | The Verge

9 May 2025

Review: Netgear Orbi 770 Series

9 May 2025

Why Apple is trying to save Google Search in the antitrust fight

9 May 2025
Top Articles

The Best Laptop Backpacks for Work (and Life)

13 February 202517 Views

How to Buy Ethical and Eco-Friendly Electronics

22 April 202515 Views

The Best Cooling Sheets for Hot Sleepers

30 March 202515 Views
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Don't Miss

Buy Now or Pay More Later? ‘Macroeconomic Uncertainty’ Has Shoppers Anxious

9 May 2025

Buying something before you absolutely need it isn’t always affordable. But if there were ever…

X notifications are broken | The Verge

9 May 2025

Whoop MG With Medical Grade ECG Readings, Blood Pressure Insights Launched Alongside Refreshed Whoop 5.0

9 May 2025

Lenovo Legion 9i With Intel Core Ultra 9 Chip, Up to GeForce RTX 5090 Laptop GPU Announced

9 May 2025
Technophile News
Facebook X (Twitter) Instagram Pinterest YouTube Dribbble
  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact Us
© 2025 Technophile News. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.