Artificial intelligence is increasingly used in corporate communication, but new research suggests audiences trust it less in sensitive situations. A study in Corporate Communications: An International Journal found[1] that crisis responses attributed to people were judged more credible, and better for a company’s reputation, than identical messages attributed to AI.

Testing Trust in Crisis Responses

Researchers built an experiment around a fictional company called Chunky Chocolate, described as facing backlash after reports that its products made customers sick. Participants read one of six possible press releases. Each message had the same content but differed in two ways: whether it was attributed to a human writer or to AI, and whether its tone was informational, sympathetic, or apologetic.

The study involved 447 students in journalism and communication programs at a Midwestern university. After reading the release, each participant rated the credibility of the message, the credibility of its source, and the company’s reputation.

Human Messages Scored Higher

Results showed a clear pattern. Messages labeled as human-written were rated higher across all measures. On a seven-point scale, human sources received an average credibility score of 4.40, compared with 4.11 for AI. For message credibility, human versions averaged 4.82 while AI versions scored 4.38. Company reputation followed the same trend, with averages of 4.84 for human messages and 4.49 for AI.

Because the content of the statements was unchanged, the difference came only from how authorship was presented. Labeling a release as AI-generated lowered trust, even when the words were identical.

Tone Had Little Effect

Researchers expected an apologetic or sympathetic tone to influence perceptions. Participants did notice the different tones, but their ratings of credibility and reputation varied little across them. The communicator’s identity carried more weight than the style of the message.

What It Means for Public Relations

AI already plays a role in public relations through tasks like media monitoring, content targeting, and social media management. Some suggest using it to draft press releases or respond to crises. The study points to risks in doing so, since audiences seem less likely to trust a crisis message when it is tied to AI.

Limits of the Study

The experiment used a fictional company and a student sample, which may not represent the wider public. Participants’ familiarity with digital tools and AI could also shape their views. Another factor is the explicit labeling of AI authorship, as real companies may not always disclose when AI is used.

Even with these limits, the research indicates that audiences still place greater trust in human credibility during moments of public scrutiny.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.


By admin