Tag

LLM

All articles tagged with #llm

AI Writing Footprint: Heavy AI Use Alters Meaning and Voice
technology · 22 days ago

A peer‑reviewed study from West Coast universities finds that heavy reliance on large language models (LLMs) reshapes both the meaning and the style of human writing. In experiments on the money–happiness question, essays written with heavy AI use took a neutral stance 69% more often, and their authors used 50% fewer pronouns and included fewer personal anecdotes. LLM edits also replaced more words than human edits did, often changing an essay's meaning. The researchers warn of long‑term effects on thought, language, and institutions, and argue that an ideal LLM would mirror a writer's voice rather than overwrite it.
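The study's exact text-analysis pipeline is not given in this summary; as a rough illustration of the kind of metric it reports (pronoun use as a proxy for personal voice), a first-person pronoun rate might be computed like this:

```python
import re

# First-person pronoun list is illustrative, not the study's actual lexicon.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}

def first_person_rate(text: str) -> float:
    """Fraction of word tokens that are first-person pronouns."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in FIRST_PERSON for t in tokens) / len(tokens)
```

Comparing this rate between AI-assisted and unassisted drafts of the same prompt would surface the kind of 50% drop the study describes.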

The Agent Wave: AI's Real Demand, Not a Bubble
technology · 25 days ago

AI is not in a bubble, the analysis argues: the rise of agentic AI, where a harness guides the model and verifies its results, drives sustained, higher compute demand and shifts value to integrated AI providers. Thompson traces three inflection points (ChatGPT, o1-style reasoning, and agent-capable models such as Opus 4.5, Codex, and Claude) and shows how enterprise adoption (e.g., Microsoft's Copilot Cowork) will amplify productivity and compute demand, while Apple leans on licensing. The result is lasting demand, with fewer people needed to unlock AI's impact, making the investment case for AI capex more durable than the hype suggests.

Adaptive drafting speeds up reasoning LLM training using idle compute
technology · 1 month ago

MIT researchers introduce Taming the Long Tail (TLT), an adaptive speculative-decoding framework that trains a lightweight “drafter” on idle processors to predict the outputs of large reasoning LLMs, with an adaptive rollout engine selecting the best strategy for each batch. This speeds reinforcement-learning–based training by 70–210% while preserving accuracy, and the drafter can also be reused for efficient deployment. The approach aims to reduce training cost and energy for complex AI models and has been tested across multiple models and datasets.
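The summary describes speculative decoding: a cheap drafter proposes several tokens at once, and the large model only verifies them. TLT's adaptive rollout engine is not detailed here, but the core greedy accept/reject loop can be sketched as follows (the token generators are toy deterministic stand-ins for real models, and verification is shown sequentially rather than in one batched pass):

```python
def greedy(target_next, prompt, max_len):
    """Pure target-model decoding, used as the reference output."""
    out = list(prompt)
    while len(out) < max_len:
        out.append(target_next(out))
    return out

def speculative_decode(target_next, draft_next, prompt, k=4, max_len=12):
    """Greedy speculative decoding: the drafter proposes k tokens,
    the target accepts the longest prefix matching its own choices."""
    out = list(prompt)
    while len(out) < max_len:
        # Drafter cheaply proposes a block of k candidate tokens.
        draft, ctx = [], list(out)
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # Target verifies: keep tokens while they match its own choice.
        accepted, ctx = 0, list(out)
        for t in draft:
            if target_next(ctx) == t:
                out.append(t)
                ctx.append(t)
                accepted += 1
            else:
                break
        # On a rejection, emit one target token so decoding always advances.
        if accepted < k:
            out.append(target_next(out))
    return out[:max_len]
```

The key invariant, which also holds in this sketch, is that the output matches what the target model alone would produce; drafter quality only affects speed, never the result.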

Boeing Unveils Space-Grade AI, Pushing BA Higher on Edge-Computing Breakthrough
business · 1 month ago

Boeing engineers demonstrate space-qualified edge AI by running a compact large language model on standard hardware to autonomously analyze satellite telemetry, a development that helped BA stock rise about 2%. The story also covers the Supreme Court's refusal to hear a Southwest pilots' union case, while analysts still rate BA a Strong Buy with roughly 18.8% upside based on a $278 price target after a year of gains.

AI-assisted Arkanix Stealer: a fleeting dark-web info-stealer experiment
technology · 1 month ago

Kaspersky researchers say Arkanix Stealer, promoted on dark-web forums in October 2025, was likely an AI-assisted, short-lived information-stealer project with Python and native C++ versions, a Discord community, and a referral scheme. It could harvest browser data (including OAuth2 tokens), cryptocurrency wallet data, and credentials from Telegram and Discord, and supported local-file exfiltration and modular plugins. The premium variant added anti-sandbox/anti-debugging features, RDP credential theft, and advanced post-exploitation tools such as ChromElevator to bypass browser protections. The operation's unclear purpose points to rapid, low-cost AI-driven malware development rather than a sustained campaign; Kaspersky has published IoCs.

Training AI on Low-Quality Data Causes Cognitive Decline
technology · 5 months ago

Researchers from Texas A&M, the University of Texas, and Purdue University have proposed the "LLM brain rot" hypothesis: training large language models on low-quality "junk" data, such as trivial or sensationalist tweets, can cause lasting cognitive decline in these models, analogous to the attention and memory problems that internet overuse causes in humans.

Apple research reveals LLMs gain from classic productivity techniques
technology · 7 months ago

A study by Apple researchers demonstrates that large language models (LLMs) can significantly improve their performance and alignment by using a simple checklist-based reinforcement learning method called RLCF, which scores responses based on checklist items. This approach enhances complex instruction following and could be crucial for future AI-powered assistants, although it has limitations in safety alignment and applicability to other use cases.
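RLCF's scoring setup is only summarized above; the core idea, rewarding a response by the fraction of checklist items it satisfies, can be sketched minimally. The checklist and keyword predicates here are hypothetical stand-ins for the LLM-judge scoring the paper uses:

```python
def checklist_reward(response, checklist):
    """Reward = fraction of checklist items the response satisfies."""
    if not checklist:
        return 0.0
    return sum(1 for check in checklist if check(response)) / len(checklist)

# Hypothetical checklist for an instruction like
# "reply briefly, in one sentence, and mention the deadline":
checklist = [
    lambda r: r.count(".") <= 1,        # at most one sentence
    lambda r: "deadline" in r.lower(),  # mentions the deadline
    lambda r: len(r.split()) <= 25,     # stays brief
]
```

During reinforcement learning, such a scalar reward is used to rank candidate responses, so the model is steered toward completions that tick more checklist items.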

Anthropic revokes OpenAI's access to Claude over unauthorized tool usage
technology · 8 months ago

Anthropic revoked OpenAI's access to its Claude large language models after discovering that OpenAI was using the models to benchmark and develop its own competing AI, violating the terms of service. While OpenAI can still perform safety evaluations, its ability to use Anthropic's tools for development has been cut off, highlighting tensions in AI model sharing and competition.