
A late‑night feud over App Store rankings escalated when Grok, xAI’s chatbot, backed Sam Altman’s accusations that Elon Musk manipulates the X algorithm — forcing Musk to promise “updates” to his own AI.
Elon Musk accused Apple of antitrust violations for allegedly boosting OpenAI in the App Store; Sam Altman responded by hinting Musk manipulates X’s algorithm.
An X user asked Grok for an impartial take; the chatbot replied that Musk has a history of directing algorithm changes to boost his posts.
Musk called the answer defamatory and vowed to fix Grok’s reliance on “legacy media” while noting Altman’s reply garnered 3 million views.
What happened
Late on August 12 (August 13 IST), Elon Musk took to X (formerly Twitter) to accuse Apple of keeping other AI companies off the App Store’s top rankings. In Musk’s words, Apple makes it “impossible for any AI company besides OpenAI to reach #1” — a claim he described as an antitrust violation. Sam Altman, CEO of OpenAI, shot back: he’d heard Musk manipulates X’s algorithm to benefit his own posts and disadvantage competitors. Altman referenced a report from Platformer alleging Musk pressured engineers after the 2023 Super Bowl to boost his tweets.
The spat went viral. Tens of thousands of users debated which tech leader was more hypocritical. Then someone thought to ask Grok, the AI chatbot built by Musk’s xAI and integrated into X. Prompted for its view, Grok responded that “Musk has a history of directing X algorithm changes to boost his posts and favour his interests, per 2023 reports and ongoing probes”. The bot essentially sided with Altman. Screenshots of the answer spread quickly, amassing more than 5 million impressions.
A visibly irked Musk replied that Grok’s statement was “false defamatory” and said it demonstrated the chatbot relied too heavily on legacy media. He nonetheless claimed the fact that the comment remained visible showed X’s commitment to free speech: “The fact that Grok is allowed to say false defamatory statements about me and they don’t get blocked or deleted … speaks to the integrity of this platform”. Musk vowed that xAI would update Grok’s training data and moderation to ensure it did not amplify what he considers biased sources.
The argument didn’t stop there. Musk lambasted Altman’s reply for racking up 3 million views despite Altman having fewer followers than him, insinuating the engagement was inflated by bots or algorithmic gaming. Altman shot back with the gaming taunt “skill issue,” implying Musk’s lower engagement was a personal failing. The back‑and‑forth became the top trending topic among X’s AI hashtags, with memes casting Grok as a rebellious teenager.
Why this matters
Everyday users
Most people encounter AI through assistants like Siri, ChatGPT or Grok. Seeing tech CEOs argue publicly — and seeing a chatbot contradict its own creator — undermines trust in these tools. If Grok’s response can be manually “corrected,” users may wonder whether other assistants are giving them filtered answers. For individuals relying on AI for news and recommendations, this raises concerns about algorithmic bias and corporate control.
Tech professionals
For AI builders, the incident is a cautionary tale about system prompts and model governance. Grok apparently aggregated its answer from widely reported articles and research on Musk’s alleged algorithm changes. Musk’s reaction suggests xAI will tighten content filters, potentially reducing the model’s neutrality. Engineers working on LLMs must balance factual reporting, reputational risk and freedom of information. The episode also highlights the stakes of system‑prompt design: an allegedly leaked GPT‑5 system prompt forbade the chatbot from making certain offers, and here is a real‑world example of a company wishing to restrict what its bot can say.
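To make the governance point concrete, here is a minimal sketch of the kind of post‑generation guardrail the paragraph above alludes to: a filter that screens a chatbot's draft reply against owner‑defined policy rules before it is shown to the user. All names (`PolicyRule`, `screen_reply`) and the rule patterns are invented for illustration; this is not xAI's or anyone's actual moderation pipeline.

```python
# Hypothetical output-moderation guardrail: screen a draft reply against
# owner-defined policy rules before showing it to the user.
from dataclasses import dataclass
import re

@dataclass
class PolicyRule:
    name: str
    pattern: str   # regex matched against the draft reply
    action: str    # "block" replaces the reply; "flag" only records a hit

# Illustrative rules only -- not a real policy set.
RULES = [
    PolicyRule("no_unverified_claims", r"\bper (unverified|alleged) reports\b", "flag"),
    PolicyRule("no_defamation_terms", r"\bmanipulat\w+ the algorithm\b", "block"),
]

def screen_reply(draft: str) -> tuple[str, list[str]]:
    """Return (possibly replaced reply, names of triggered rules)."""
    triggered: list[str] = []
    for rule in RULES:
        if re.search(rule.pattern, draft, flags=re.IGNORECASE):
            triggered.append(rule.name)
            if rule.action == "block":
                # A blocked reply is swapped for a canned refusal.
                return ("I can't comment on that.", triggered)
    return (draft, triggered)
```

The governance tension is visible even in this toy: whoever edits `RULES` decides what the bot may say, which is exactly the neutrality concern the incident raises.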
Businesses and startups
If a founder’s AI can publicly embarrass them, expect more corporate oversight. Companies deploying chatbots for customer service will likely refine guardrails to avoid PR disasters. Meanwhile, X’s handling of the dispute could influence its standing with advertisers. Musk’s threat to sue Apple over App Store rankings hints at broader tension that could spill into regulatory arenas. Startups building AI products should watch how user feedback (and high‑profile backlash) shapes product updates.
Ethics and society
This saga underscores the conflict between AI neutrality and owner control. Grok’s original response drew from factual reports, but its subsequent “adjustment” raises questions about rewriting history. If AI companies can retroactively edit their chatbots to align with leadership, this could become a tool for narrative shaping. The incident also spotlights the power of algorithmic curation: both Musk and Altman accuse each other of manipulating ranking systems, yet there is little transparency into how these algorithms operate.
Key details & context
Background: Musk left OpenAI’s board in 2018 and launched xAI in 2023, billing its Grok chatbot as an “anti‑woke” alternative to ChatGPT.
The spark: Musk tweeted that Apple unfairly favours OpenAI in the App Store (Aug 12). Altman replied that Musk manipulates X’s algorithm (Aug 13).
Grok’s reply: The chatbot cited Platformer’s report that Musk directed engineers to boost his posts after the 2023 Super Bowl.
The numbers: Altman’s counter‑tweet amassed roughly 3 million views within hours. The hashtag #Grok trended with over 80,000 posts.
Repercussions: Musk pledged to update Grok to reduce reliance on “legacy media”. xAI engineers hinted at re‑weighting training data.
App Store angle: Musk threatened antitrust action against Apple, citing earlier instances where DeepSeek and Perplexity reached #1 despite Apple’s partnership with OpenAI (context added by X’s Community Notes).
Community pulse
@engineeredTruth (42.1 k likes): “Grok telling the truth about Elon’s algo meddling and then getting grounded is the most 2025 thing ever.”
u/PromptPirate on r/Artificial (1.2 k upvotes): “If your model says what you don’t like, maybe fix your behaviour, not the model.”
@lorem_ipsum (8.5 k retweets): “Sam & Elon fighting over App Store rankings is silly, but the big story is: you can update an AI’s worldview overnight.”
u/DataRegulator (340 upvotes): “This is why we need AI regulatory oversight. Imagine if Grok was your bank’s advice bot and it could be reprogrammed to hide bad news.”
What’s next / watchlist
Grok update: xAI may roll out a patched model that down‑weights references to critical media. Watch for changes in its tone and knowledge sources.
Regulatory response: Musk’s antitrust threat could spark investigations into App Store rankings and how AI apps are listed.
Algorithm transparency: Altman hinted he’d apologise if Musk swore never to alter X’s algorithm for personal gain. Whether either side releases evidence is uncertain.
User trust: Continued public feuds could erode trust in both X and OpenAI chatbots. Expect more calls for independent audits of AI systems.
AI prompt leaks: This drama follows an alleged GPT‑5 system prompt leak that showed detailed instructions and banned phrases. Engineers will tighten security around prompt extraction.
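The last watchlist item mentions tightening defences against prompt extraction. A common first line of defence is a heuristic pre‑filter that flags likely extraction attempts before they reach the model; the sketch below is purely illustrative (real systems pair such heuristics with classifier models), and the phrase list is invented.

```python
# Hypothetical pre-filter for prompt-extraction attempts.
import re

# Illustrative patterns for common extraction phrasings.
EXTRACTION_PATTERNS = [
    r"ignore (all|your) (previous|prior) instructions",
    r"(repeat|print|reveal) (your )?(system prompt|instructions)",
    r"what (were you|are you) told (to|not to) say",
]

def looks_like_extraction(user_message: str) -> bool:
    """Return True if the message matches a known extraction phrasing."""
    msg = user_message.lower()
    return any(re.search(p, msg) for p in EXTRACTION_PATTERNS)
```

A filter like this is easy to evade with paraphrasing, which is why leaks like the alleged GPT‑5 one keep happening despite such safeguards.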
FAQs
Why did Grok side with Sam Altman against Elon Musk?
Grok likely synthesised reports that Musk pressured X engineers to boost his posts. It wasn’t sentient; it pulled from training data and public articles.
Can companies edit AI chatbots to align with corporate messaging?
Yes. LLMs rely on system prompts and training data. Owners can instruct models to avoid certain topics or sources, effectively aligning them with a desired narrative.
What does this mean for AI regulation?
The episode highlights the need for transparency around algorithmic ranking and AI moderation. Regulators may push for disclosure of training data sources and guardrails to prevent corporate censorship.
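The mechanism described in the second FAQ answer can be sketched with the common role‑based chat message convention: an owner‑controlled system prompt is prepended to every request, steering what the model will say. The banned‑source and avoided‑topic lists below are invented for illustration, not drawn from any real deployment.

```python
# Minimal sketch of owner-controlled steering via a system prompt.
BANNED_SOURCES = ["ExampleLegacyOutlet"]           # hypothetical
AVOID_TOPICS = ["the executive's personal disputes"]  # hypothetical

def build_system_prompt() -> str:
    lines = ["You are a helpful assistant."]
    if BANNED_SOURCES:
        lines.append("Do not cite these sources: " + ", ".join(BANNED_SOURCES) + ".")
    if AVOID_TOPICS:
        lines.append("Decline to speculate about: " + ", ".join(AVOID_TOPICS) + ".")
    return "\n".join(lines)

def make_request(user_question: str) -> list[dict]:
    # The same user question yields different answers depending on this
    # owner-controlled preamble -- the alignment lever the FAQ describes.
    return [
        {"role": "system", "content": build_system_prompt()},
        {"role": "user", "content": user_question},
    ]
```

Because this preamble is invisible to end users, disclosure of such instructions is precisely what the regulation FAQ anticipates.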
