What Is AGI? The AI Goal Everyone Talks About But No One Can Clearly Define
Artificial general intelligence, or AGI, is one of the most cited milestones in the AI industry. Tech executives predict it, investors dump billions into funding research into it, and critics warn about its risks once it arrives. But what exactly AGI is remains unclear, and researchers still disagree on what counts as “general intelligence,” when it might arrive, and how anyone would recognize it once it does. “There’s a bunch of different definitions,” Malo Bourgon, CEO of the Machine Intelligence Research Institute, told Decrypt. “When we start to talk about, is this system AGI? Is that system AGI? What precisely qualifies as AGI by what definition? I think that’s kind of difficult to do.”
Prominent figures, including OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and xAI CEO Elon Musk, have made predictions about the emergence of AGI. “I think we’ll hit AGI in 2026,” Musk said in December during an interview with Peter Diamandis, executive chairman of the XPRIZE Foundation. “I’m confident by 2030, AI will exceed the intelligence of all humans combined.”

Unlike the generative AI most people know through ChatGPT, AGI generally refers to an AI system that can understand, learn, and apply knowledge across many different tasks at a human-like level, rather than performing a single specialized function. The concept dates back to the early days of AI research in the 1950s.
Beginning in the early 2000s, researchers such as Ben Goertzel, Shane Legg, and Peter Voss popularized the term “artificial general intelligence” to distinguish the original goal of human‑level, broadly capable AI from the increasingly successful but narrow AI systems being developed in research labs and universities.

However, Bourgon said that achieving “human-level intelligence” is not a one-size-fits-all goal. “There are a bunch of reasons from our evolutionary history, how our brains are structured, how slow neurons are, and the limits on our working memory and the speed at which our brains operate, that we should expect that if we can design AI systems that have this property that we have, there’s likely an enormous room above us,” he said.

AGI is already here, some say

Recent advances in large language models and powerful AI like Gemini, ChatGPT, Grok, and Claude, which can write essays, create images, generate code, and answer complex questions, have led many to argue that AGI has already been achieved. But what they lack, Bourgon said, is autonomy.

“Inherent in most people’s definitions of AGI is the sense of autonomy,” Bourgon said. “That these things aren’t necessarily just behaving as tools and chatbots, but that they have this agentic nature where they’re able to accomplish tasks in a wide variety of environments with a large amount of autonomy.”

Ben Goertzel, CEO of SingularityNET and one of the figures credited with popularizing the term AGI, said that interpretation stretches the concept. “The term has become rather confused now in the media,” Goertzel told Decrypt. “Tech CEOs find it convenient to say, ‘Hey, we’ve launched AGI already,’ and people sensationalize things.”

In theory, Goertzel explained, AGI refers to AI systems capable of learning and performing a wide range of tasks beyond those they were explicitly trained to do. Today’s models, he said, are powerful but fundamentally different from general intelligence.
“They get there not by learning to do all of it,” he said. “They get there by having the whole internet crammed into their knowledge base.”

While AI developers invest billions of dollars into building AI data centers to supply ever more compute for increasingly powerful models, a true general intelligence would need to generalize and generate genuinely novel insights that go beyond merely remixing its training data, he explained. “If you took current deep neural net systems and trained them on music up to the year 1900, they will not ever invent hip hop or grindcore,” Goertzel said.

Goertzel argued that the shift to AGI is unlikely to show up as a single, clean breakpoint. “There doesn’t have to be a completely crisp boundary between AGI and pre‑AGI,” he said, comparing it to biology’s gray areas around viruses and retroviruses. We still know a dog is alive and a rock is not, he added, even if some edge cases are “fuzzy,” as with viruses.

Development abroad

Kyle Chan, a researcher at Brookings who studies global AI policy, said the debate has expanded to cover several different scenarios. “There’s this whole range of what we mean by AGI,” Chan told Decrypt. “On one end, you have this idea of recursive self‑improvement and an intelligence explosion, and on the other, you have a more ‘mundane’ version—AI that can do many things humans can do, or AI as a normal technology like the internet or computers.”

While American AI labs debate the existential implications of AGI, Chan said, the conversation in China looks very different.
“AGI is not such a big thing in China, especially from the policymakers, the broader AI community, the broader tech industry,” he said. “Most people are focused on trying to make money on this thing, and especially on the physical side, which is one area where I think China and a lot of their tech companies feel like they have an edge over the US, where they can build out the robotics or autonomous systems, drones, whatever AI-powered, because they have the hardware supply chains that the US doesn’t have.”

Chan did acknowledge that while AI developers in China are not as focused on AGI as their American counterparts, it is still on their radar. “Some of the Chinese AI founders do talk about AGI, and some of them even talk about an ASI kind of thing,” he said. “But in general, AGI is really not such a big thing in China.”

Predictions about when AGI might arrive vary widely. For researchers studying the technology, the label itself may matter less than what the systems can do. “What are the effects and the capabilities of these systems?” Bourgon said. “That’s more the frame of mind we want to be in now.”