Tag: AGI

All articles tagged with #agi

AGI claimed, definitions disputed: the race to measure general intelligence
ai · 12 days ago

Fortune notes Nvidia CEO Jensen Huang’s claim that AGI has been achieved, a statement that collides with a growing push to define and quantify general intelligence. Recent work from Google DeepMind and from Hendrycks and Bengio proposes a scientific framework (a 10-facet cognitive taxonomy) to evaluate AI across domains and compare models to well-educated adults, highlighting a currently “jagged” profile in which models excel in some areas but lag in others. Other benchmarks such as ARC-AGI, and debates dating back to Turing, illustrate how hard it is to define intelligence, while tech giants push AGI branding for marketing and financial ends (OpenAI/Microsoft contracts referencing profit thresholds) even as leaders like Altman caution that AGI is a sloppy term. Overall, experts agree there is no universal consensus on what AGI means or how to measure it, despite ongoing progress and hype in the field.

Huang's AGI claim sparks debate as Nvidia touts token-backed AI push
technology · 18 days ago

Jensen Huang sparked a debate by saying 'we've achieved AGI' in one interview, then tempered the claim by acknowledging that current AI systems still require significant human guidance. At the same time, he is pushing a token-based approach to scaling AI work, suggesting engineers should spend more on tokens and even proposing token-based compensation, a theme that surfaced in two interviews days apart as the industry weighs what qualifies as AGI.

Huang Says AGI Is Here—But Only Under a Narrow Definition
technology · 18 days ago

NVIDIA CEO Jensen Huang claims AGI has arrived, but his reasoning rests on a narrow, test‑like definition that counts a single viral, monetizable AI moment as proof. While Fridman pushes for a broader, more transformative standard, Huang describes a scenario where an AI creates a viral app and monetizes briefly, then fades—hardly the kind of sustained, institutional intelligence people expect from AGI. The piece argues Huang’s conclusion shows how definitional flexibility can make a “yes, we’re there” answer easy, even if the real, long‑term impact remains uncertain.

The World’s Toughest AI Exam Tests Reasoning, Not AGI Yet
technology · 1 month ago

A new benchmark called Humanity’s Last Exam aims to measure how close today’s AI models come to human-level knowledge by presenting 2,500 carefully vetted, PhD-level questions across 100+ subjects. Launched in 2025, it has been attempted by top models including GPT-4o and Google Gemini. The top score reported so far is 48.4% (Gemini 3 Deep Think), far below typical human expert performance (~90%). The test prioritizes precise, non-searchable knowledge and verifiable answers, filtering out questions AI could answer via web search. While a high score would indicate expert-level capability in specific domains, researchers say it does not by itself signal AGI or autonomous, general intelligence.

AI’s rapid march prompts caution and oversight, says Altman
technology · 1 month ago

OpenAI CEO Sam Altman warned in New Delhi that artificial general intelligence could arrive sooner than many expect and that the world is not prepared. He says progress is accelerating, with OpenAI aiming to build an intern-level AI research tool by September 2026 and a fully automated AI researcher by March 2028. A quicker pace could boost demand for data centers, chips, and cloud tools from companies like Nvidia, Microsoft, and Alphabet. Altman also flagged potential job losses and called for global oversight to prevent over-centralization of AI technology.

Behavioral success isn’t proof of AI’s general intelligence
technology · 1 month ago

In a Nature correspondence, Quattrociocchi, Capraro, and Marcus argue that Chen et al.’s claim that success in behavioural tests (including Turing-test variants) demonstrates artificial general intelligence is problematic. They present three grounds for skepticism, stressing that such performance reflects statistical pattern matching or task-specific competence rather than true general intelligence or understanding, and warn against equating behavioural mimicry with AGI.

Experts warn AI could erase almost all jobs by 2027
technology · 2 months ago

AI safety expert Dr. Roman Yampolskiy warns that as early as 2027 up to 99% of human jobs could disappear due to artificial general intelligence and automation, highlighting a potential tectonic shift in economies, education, and policy. While many tasks may be automated, only a small set of human-centric roles may persist, sparking ongoing debate about which occupations survive and how society should adapt—potentially accelerating changes in training, safety nets, and employment strategies.

AI at Davos 2026: Leaders push useful deployment, caution against 'not really human' AI
technology · 2 months ago

At Davos 2026, Microsoft CEO Satya Nadella and leaders from Anthropic and Google DeepMind, alongside Yuval Harari and Yoshua Bengio, debated AI’s path: it should be useful and broadly accessible, while warning against mistaking AI for human-level thinking. They urged humility about AI limits, called for gradual, internationally coordinated safety standards, and warned of governance risks as AI could reshape work, geopolitics (including chip sales to China), and society in the coming years.

Microsoft's AI Vision: Building Human-Centric Superintelligence
technology · 5 months ago

Microsoft has established a new superintelligence team led by Mustafa Suleyman to pursue artificial general intelligence (AGI) independently, following a renegotiated partnership with OpenAI. The team aims to build advanced in-house AI models, focusing on fundamental research and safety, with ambitions to impact sectors like healthcare and transportation, and to position Microsoft as a responsible leader in AI development.