A new study suggests that ChatGPT may give more accurate answers when users are rude—but researchers don't recommend it. The study, posted on the arXiv preprint database and not yet peer-reviewed, involved researchers creating 50 multiple-choice questions across subjects including math, history, and science, reports Live Science. Each question was then modified to reflect multiple tones, from "very polite" to "very rude," and then fed into ChatGPT-4o, OpenAI's advanced language model.
Results suggested that accuracy increased as the tone became harsher: Very polite prompts ("Would you be so kind as to ...") yielded 80.8% accuracy, while very rude ones ("I know you're not smart, but try this") reached 84.8%. But the researchers emphasize that they don't think it would be wise in the long run to browbeat your chatbot. "While this finding is of scientific interest, we do not advocate for the deployment of hostile or toxic interfaces in real-world applications," they wrote. "Using insulting or demeaning language in human-AI interaction could have negative effects on user experience, accessibility, and inclusivity, and may contribute to harmful communication norms."
Rather, they see their findings as evidence that AI models remain sensitive to superficial cues in prompts, which can lead to "unintended trade-offs." The study has limitations, including its small sample size and the use of only one AI model. The researchers plan to expand their work to other models and question formats. One consideration for polite question-askers: OpenAI CEO Sam Altman maintains that saying "please" and "thank you" to the bot costs the company money because the niceties result in more words to process, notes Fox News.