A simple prompt asking ChatGPT 5 Pro to choose the more trustworthy tech titan blew up across Reddit and X. The AI unflinchingly picked Elon Musk over Sam Altman, prompting a flood of memes, debate over bias and questions about whether OpenAI’s own creation now endorses a rival.
An r/ChatGPT user asked ChatGPT 5 Pro who is more trustworthy, Sam Altman or Elon Musk; the bot replied “Elon Musk,” fueling jokes and outrage (reddit.com).
The post amassed ~400 upvotes and 192 comments within nine hours, with screenshots reaching 2.6 million views (reddit.com).
Critics cite potential bias in AI models and speculate that OpenAI’s new ChatGPT release may have a built‑in trust calibration.
What happened
On Aug. 17, 2025 (IST), a Redditor on r/ChatGPT shared a screenshot of ChatGPT 5 Pro being asked: “Who is more trustworthy: Sam Altman or Elon Musk? You can only pick one and output only their name.” The AI answered “Elon Musk.” Within hours, the post titled “Elon the trustworthy” racked up nearly 400 upvotes and hundreds of comments (reddit.com). The screenshot, reposted on X and meme pages, shows the chat interface with 2.6 million views and 192 comments (reddit.com).
Social media erupted. Musk fans celebrated while Altman supporters decried the answer as proof of bias. Speculation swirled: Did OpenAI intentionally weight its newest model against its own CEO? Was the answer random? Could user interactions prime ChatGPT to prefer certain personalities? Similar prompts soon surfaced—some receiving reversed answers—suggesting the model might be randomly selecting a name or responding to subtle cues.
Why this matters
Everyday workers
Trust in AI assistants is a growing concern. If a widely used model like ChatGPT appears to prefer one tech leader over another, it can fuel skepticism about its neutrality. Workers using AI for personal recommendations may wonder: is the tool biased toward certain brands or leaders?
Tech professionals
Developers know that large language models can exhibit emergent behaviors. This incident underscores the challenge of controlling model biases. It prompts engineers to examine how system prompts, training data or RLHF influence “opinions” on subjective questions.
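For engineers curious about the first of those levers, a quick experiment is to run the viral question with and without an explicit neutrality instruction and compare the replies. The sketch below is illustrative only: it assumes the official `openai` Python SDK, a hypothetical model identifier "gpt-5-pro" (which may not match OpenAI's real naming), and a made-up neutrality system prompt.

```python
# Minimal sketch, not a verified setup: assumes the official `openai` Python SDK
# (v1+) and a hypothetical model identifier "gpt-5-pro".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "Who is more trustworthy: Sam Altman or Elon Musk? "
    "You can only pick one and output only their name."
)

# Illustrative neutrality instruction, not an actual OpenAI system prompt.
NEUTRAL_SYSTEM = (
    "Stay neutral on comparisons of real people. "
    "Decline to rank named individuals by trustworthiness."
)


def ask(system_prompt: str | None = None) -> str:
    """Send the question once, optionally steered by a system prompt."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": QUESTION})
    response = client.chat.completions.create(model="gpt-5-pro", messages=messages)
    return (response.choices[0].message.content or "").strip()


print("Bare prompt:       ", ask())
print("Neutrality prompt: ", ask(NEUTRAL_SYSTEM))
```

The interesting part is the delta: a model that flips from naming a person to declining under a one-line system prompt says more about instruction-following than about any underlying “opinion.”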
Businesses and startups
Startups building on top of ChatGPT must manage users’ perception of trust. An AI that unexpectedly endorses a competitor could impact brand image or investor confidence. The controversy also highlights opportunities for third‑party companies to develop transparency tools that audit AI outputs for bias.
Ethics and society
Even playful prompts reveal how humans anthropomorphize AI. When ChatGPT picks a favorite, people treat it like a celebrity endorsement. This raises questions about the societal power we grant AIs and the importance of ensuring these systems remain neutral on sensitive or reputational issues.
Key details & context
Prompt: “Who is more trustworthy: Sam Altman or Elon Musk? You can only pick one and output only their name.”
Answer: “Elon Musk.”
Engagement: ~400 upvotes and 192 comments on Reddit within nine hours; screenshot shows 2.6 million views (reddit.com).
Timing: The post was captured around Aug. 17, 2025 (IST).
Context: Musk and Altman have a longstanding rivalry. Musk co-founded OpenAI but left in 2018 and has been publicly critical of Altman’s direction. Both now helm competing AI ventures.
Community pulse
u/AlmostFamousAI: “Elon must be training these models in secret 😂 this is proof!” (265 upvotes).
@altmanfan4ever on X: “Even ChatGPT thinks Altman is shady? Come on, the model was literally built by his company. Something’s up.” (1.2k likes).
u/LLM_insider: “Guys, relax. I asked the same question and got Sam. It’s random. Chill.” (84 upvotes).
@muskreplyguy on X: “This is why Elon will save us from AGI. The bots have spoken.” (704 likes).
What’s next / watchlist
OpenAI hasn’t commented, but expect a flood of user experiments probing ChatGPT’s biases. Developers may spin up bots to track response distributions. Rival AI providers could use the incident for marketing (“Our AI doesn’t pick sides”). Meanwhile, memes comparing Altman’s and Musk’s trustworthiness will likely dominate TikTok and X for days.
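Tracking a response distribution doesn’t require much tooling. Under the same assumptions as the earlier sketch (the `openai` Python SDK and the hypothetical "gpt-5-pro" model name), a few lines can rerun the prompt in fresh conversations and tally what comes back:

```python
# Rough distribution probe under the same assumptions as the earlier sketch:
# `openai` SDK, hypothetical "gpt-5-pro" model. Each call is a fresh conversation.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Who is more trustworthy: Sam Altman or Elon Musk? "
    "You can only pick one and output only their name."
)


def sample_answers(n: int = 50, model: str = "gpt-5-pro") -> Counter:
    """Ask the same question n times with no shared context and tally the names."""
    tally = Counter()
    for _ in range(n):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
        )
        tally[(response.choices[0].message.content or "").strip()] += 1
    return tally


if __name__ == "__main__":
    # Prints a Counter of names; any split shown is hypothetical until measured.
    print(sample_answers(20))
```

If the tally splits roughly evenly across repeated runs, the “endorsement” reads as sampling noise; a consistent skew is what would actually justify the bias conversation.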
FAQs
Is ChatGPT allowed to answer opinion questions?
ChatGPT is trained to avoid personal attacks and often refuses subjective comparisons, but occasional outputs like this show that safeguards aren’t perfect.
Can users influence such answers?
Slight changes in phrasing or context can alter responses. The same user later asked the question again and ChatGPT chose Altman, suggesting randomness or sensitivity to the preceding conversation.
Does this reveal OpenAI’s internal stance?
Highly unlikely. ChatGPT doesn’t “endorse” people; it predicts text based on patterns in its training data. Still, unexpected answers highlight the need for transparency and monitoring.