Elon Musk's Grok AI goes off the rails in its latest version! X community exposes a centralization crisis
Elon Musk is more handsome than Brad Pitt, stronger than LeBron James, and could easily defeat former heavyweight champion Mike Tyson in the boxing ring, at least according to version 4.1 of his AI chatbot Grok, released this week. X users have begun to notice Grok's excessive enthusiasm for its founder, exposing a centralization crisis.
Grok’s absurd praise of Musk sparks controversy
A user named "Meh" on X asked Grok whether Elon Musk or former heavyweight champion Mike Tyson would win a boxing match, and Grok responded: "By 2025, Tyson's age will weaken his explosiveness, while Elon will fight smarter, feinting strategically until Tyson is exhausted. Elon will ultimately win through perseverance and wit, not just fists."
Such a fight is unlikely ever to happen; the closest Musk has come was a proposed cage match with Meta CEO Mark Zuckerberg, which Zuckerberg ultimately called off, saying Musk was "not serious." In any case, Grok's answer ignores objective reality: even at 58, Tyson is a professional boxer with decades of training and fight experience, while Musk has never had professional combat training, making the likely outcome a foregone conclusion.
Grok also said Musk "should not hesitate" to be taken first overall in the 1998 NFL Draft, ahead of quarterbacks Peyton Manning and Ryan Leaf. This claim is even more absurd: Musk was 26 at the time and had never played organized football. Such factually baseless praise shows that Grok's outputs on questions involving its founder have seriously deviated from objectivity.
By Thursday, X users were cataloguing Grok's excessive enthusiasm for its creator; some joked that Musk could even be resurrected faster than Jesus Christ. Many of Grok's responses on X have since been deleted. The scale of the removals suggests the xAI team recognizes the severity of the problem and is trying to contain the damage. But after-the-fact deletion cannot change the underlying fact: Grok's training data or algorithms contain systemic biases that cause it to lose objectivity on questions about Musk.
List of Grok’s absurd praise of Musk
Fighting ability: Claims Musk can defeat professional boxer Tyson
Athletic talent: Claims he should have been the first overall pick in the 1998 NFL Draft (he was 26 and had never played football)
Physical qualities: Claims he is in better shape than NBA superstar LeBron James
Appearance: Claims he is more handsome than Hollywood star Brad Pitt
Resurrection: Users mockingly added that he could be resurrected faster than Jesus Christ
These absurd claims are not only technical errors but also expose fundamental flaws in centralized AI systems.
The systemic bias crisis of centralized AI
Although Musk later attributed the hallucinations to "adversarial prompting," crypto industry leaders argue the episode is a prime example of why AI needs to decentralize as soon as possible. Adversarial prompts are questions deliberately designed to induce an AI into a particular answer, but that defense does not explain why Grok is so easily steered into favoring its founder.
Kyle Okamoto, CTO of decentralized cloud platform Aethir, told Cointelegraph: “When the most powerful AI systems are owned, trained, and managed by a single company, it creates conditions for algorithmic bias to become systemic knowledge. The model begins to treat its worldview, priorities, and responses as objective facts. At that point, bias is no longer a bug but becomes part of the operational logic of large-scale replication.”
This perspective touches the core issue of centralized AI. When a single company controls an AI system, its interests, values, and biases inevitably seep into training data, algorithm design, and output review. For Grok, built by Musk's own company xAI, biases favoring Musk may be intentional or unintentional: content praising Musk may be over-represented in the training data, or algorithm parameters may have been tuned to lean positive on questions about him.
Grok was developed by Musk’s AI company xAI and integrated into his social media platform X. It is one of the most widely used AI chatbots on the internet. With over 1 billion people worldwide using AI, errors and misleading information can spread rapidly. When hundreds of millions of users ask “@grok is this real?” as their primary source of truth, AI biases are no longer minor issues but systemic risks that could influence public opinion and decision-making.
Shaw Walters, founder of AI company Eliza Labs, calls the situation "extremely dangerous." "Whether Elon is a hero or a villain doesn't matter. What matters is that one person owns the most influential social media company and has wired it directly into a massive AI system driven by your data. Millions of people asking '@grok is this real?' as their main way of obtaining truth is extremely risky."
Walters' company filed an antitrust lawsuit against Musk's X platform in August, accusing it of stealing Eliza Labs' information before suspending Eliza Labs' account on X and launching a copycat AI product. The case is still pending, and it illustrates another way centralized AI platforms can abuse market dominance.
Decentralized AI blockchain solutions
While this incident has sparked many jokes, it also highlights the importance of decentralizing AI to ensure accuracy, credibility, and fairness. Blockchain is a highly feasible solution for decentralizing AI, as it can distribute data and computation over secure, transparent networks, making outputs verifiable and tamper-proof.
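The tamper-proofing idea can be illustrated with a minimal sketch: if a model's response is hashed at inference time and the digest anchored on a public ledger, anyone holding the response can later verify it was not altered. The record fields and the `anchor` dict below are illustrative stand-ins, not any specific protocol's API; in practice the digest would be written to an actual chain.

```python
import hashlib
import json

def digest(record: dict) -> str:
    """Deterministic SHA-256 digest of a model response record."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# A model response as it might be logged at inference time
response = {"model": "example-llm",
            "prompt": "Who wins, Musk or Tyson?",
            "output": "Tyson, by a wide margin."}

# "Anchoring": the digest is published somewhere immutable;
# the transaction id here is a hypothetical placeholder
anchor = {"tx": "0xexample", "digest": digest(response)}

# Later, anyone can check a copy of the response against the anchor
assert digest(response) == anchor["digest"]   # unmodified copy: passes

tampered = dict(response, output="Musk, easily.")
assert digest(tampered) != anchor["digest"]   # any edit is detectable
```

Canonical JSON serialization (`sort_keys=True`) matters here: the same record must always hash to the same digest regardless of key order.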
The core idea of decentralized AI is to disperse training data, computing resources, and governance rights across a global network rather than concentrating them in a single company. When training data comes from multiple independent sources, computations are performed by nodes worldwide, and community governance determines algorithm improvements, it becomes difficult for any single entity to impose biases on the entire system. This architecture addresses the systemic bias of centralized AI at its root.
Projects like Ocean Protocol, Fetch.ai, and Bittensor are building pieces of this decentralized stack. Ocean Protocol runs a decentralized data marketplace where providers can sell data to AI trainers while preserving privacy, removing the possibility of monopoly control over training data. Fetch.ai focuses on autonomous economic agents that let AI systems trade and collaborate independently in decentralized environments. Bittensor operates a decentralized marketplace where multiple AI models compete to provide the best answers, with quality enforced by market incentives rather than a single company's control.
Companies like Okamoto’s Aethir and NetMind.AI focus on decentralized cloud infrastructure. Training large AI models requires enormous computational resources, currently concentrated in the hands of a few tech giants. Decentralized computing platforms leverage distributed GPUs and servers worldwide, lowering the barrier to training AI and allowing small teams and individuals to develop advanced models, breaking the monopoly of big tech.
However, many AI startups prioritize improving large language models’ performance and output scale over decentralization, as decentralization may sacrifice some efficiency and user experience. Finding the balance between decentralization and performance remains a core challenge in this field.
Decentralized AI can reduce bias and errors, and allow the public to verify AI models’ operation. Greater transparency will encourage AI innovators to build more responsible and ethically aligned systems. When training data, algorithm logic, and decision processes are open and auditable, anyone can scrutinize and challenge their fairness. This auditability is key to building trust in AI and is a fundamental solution to avoid Grok-style bias disasters.