Intel’s new feature boosts AI performance by allocating more RAM to integrated graphics

Intel has introduced “Shared GPU Memory Override,” a new feature for its Core Ultra systems that lets users allocate additional system RAM to integrated graphics. The capability mirrors AMD’s earlier “Variable Graphics Memory” feature and targets compact laptops and mobile workstations that rely on integrated graphics rather than discrete GPUs, potentially improving AI workload performance in cases where memory availability, not compute, is the limiting factor.

What you should know: The feature requires the latest Intel Arc drivers to function and is specifically designed for systems without dedicated graphics cards.
• Bob Duffy, who leads Graphics and AI Evangelism at Intel, confirmed the update and emphasized its role in enhancing system flexibility for AI tools and memory-dependent workloads.
• Unlike AMD’s implementation, which was primarily marketed as a gaming enhancement, Intel’s approach appears more focused on AI and professional applications.

Mixed gaming results: Testing reveals that additional shared memory doesn’t universally improve gaming performance and can sometimes cause performance drops.
• Some games may load larger textures when more memory is available, which can actually reduce performance rather than enhance it.
• AMD’s earlier Variable Graphics Memory feature showed similarly mixed results, with benefits varying significantly depending on the specific software being used.

The AI advantage: Users running local AI models could see more substantial benefits from Intel’s memory allocation approach than gamers.
• Running large language models locally is becoming increasingly common, and these workloads are often constrained by available memory rather than processing power.
• By extending the RAM pool available to integrated graphics, Intel enables users to handle larger AI models that would otherwise be limited by memory constraints.
• This could allow more of an AI model’s layers to be kept in GPU-accessible memory (which, on integrated graphics, is carved out of system RAM), reducing bottlenecks and improving stability when running local AI tools.
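The memory arithmetic behind that claim can be sketched in a few lines. The formula and figures below are illustrative assumptions for a back-of-the-envelope estimate, not Intel specifications: a model’s footprint is roughly its parameter count times the bytes per parameter (set by quantization), plus runtime overhead for activations and caches.

```python
def model_memory_gb(params_billion: float,
                    bytes_per_param: float,
                    overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights * per-parameter size * runtime overhead.

    The 1.2x overhead factor is an assumption covering activations and
    KV-cache, not a measured Intel or vendor figure.
    """
    return params_billion * bytes_per_param * overhead


def fits(params_billion: float, bytes_per_param: float,
         gpu_memory_gb: float) -> bool:
    """Does the estimated footprint fit in the GPU-addressable pool?"""
    return model_memory_gb(params_billion, bytes_per_param) <= gpu_memory_gb


# A 7B-parameter model quantized to 4 bits (~0.5 bytes/param)
# needs roughly 4.2 GB by this estimate, so raising the shared
# pool from, say, 4 GB to 8 GB is the difference between the
# model fitting entirely in GPU memory or not.
need = model_memory_gb(7, 0.5)
print(f"~{need:.1f} GB needed; fits in 8 GB pool: {fits(7, 0.5, 8.0)}")
```

By this kind of estimate, the same machine that cannot hold a 4-bit 7B model in a default shared allocation could do so after raising the override, which is why the feature matters more for local LLM users than for gamers.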

Why this matters: The move signals Intel’s commitment to remaining competitive in the integrated graphics space while positioning its systems for the growing demand for local AI processing capabilities, particularly among researchers and developers without access to discrete GPUs.

