Adobe has unveiled “Corrective AI,” a new tool that alters the emotional tone and style of existing voice-over recordings without requiring a complete re-recording. Demonstrated at Adobe’s MAX Sneaks event, the technology lets creators turn a flat vocal performance into a confident, whispered, or other emotional style simply by highlighting transcript text and selecting a preset emotion, sidestepping the costly and time-consuming re-recording sessions that such fixes typically require.
How it works: Corrective AI builds on Adobe’s existing generative speech capabilities in Firefly but applies emotion modification to existing recordings rather than creating entirely AI-generated voices.
- Users can highlight transcript text and choose from preset emotions to instantly change the vocal performance from flat delivery to confident, whispered, or other styles.
- The tool represents a more practical workflow than fully AI-generated voices, letting creators enhance human performances rather than replace them.
Additional audio innovations: Adobe also demonstrated Project Clean Take, which uses AI to separate complex audio tracks into up to five distinct components.
- The system can isolate voices, ambient noise, sound effects, and background music from a single recording with surprising accuracy.
- In one demo, the AI successfully removed overwhelming drawbridge bell sounds while preserving the host’s voice, with the ability to individually adjust separated track levels.
- The technology can replace copyrighted background music with similar Adobe Stock tracks while maintaining the original’s reverb and ambient characteristics.
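Adobe has not detailed how Project Clean Take’s AI separation works, but the general idea of splitting one recording into component tracks can be illustrated with a toy frequency-masking sketch. Everything here is a simplified stand-in: pure tones play the role of “voice” and “bell,” and a hard spectral mask plays the role of the AI separator.

```python
import numpy as np

# Toy illustration of separating one mixed track into components
# (not Adobe's actual AI-based method): a "voice" band and a "bell"
# band are recovered from a single mixture by masking its spectrum.
sr = 8000                                   # sample rate in Hz (assumed)
t = np.arange(sr) / sr                      # one second of audio
voice = np.sin(2 * np.pi * 300 * t)         # stand-in voice tone (300 Hz)
bell = 0.5 * np.sin(2 * np.pi * 2000 * t)   # stand-in bell tone (2 kHz)
mix = voice + bell                          # the single recorded track

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), d=1 / sr)

# Zero out each component's band in turn, then transform back.
voice_est = np.fft.irfft(spectrum * (freqs < 1000), n=len(mix))
bell_est = np.fft.irfft(spectrum * (freqs >= 1000), n=len(mix))
```

Because the two stand-in sources occupy disjoint frequency bands, the mask recovers them almost exactly; real recordings overlap heavily in frequency, which is why tools like Clean Take rely on learned models rather than simple filtering.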
Automated sound design: Adobe showcased AI-powered sound effect generation that analyzes video content and automatically adds appropriate audio elements.
- The system breaks down videos into scenes, applies emotional tags, and generates commercially safe sound effects based on visual analysis.
- Examples included automatically creating alarm clock sounds and car door closing effects based on visual cues in the footage.
- A conversational interface allows users to request specific changes, such as adding car ambient sounds to driving scenes, which the AI locates and implements automatically.
Reality check: Adobe’s demonstrations revealed both the capabilities and the limitations of the current technology.
- Some generated sounds weren’t realistic, such as an unconvincing alarm clock effect, and the AI added unnatural clothing rustling during a hugging scene.
- The conversational interface successfully found specific scenes and placed requested sound effects accurately when given clear instructions.
Industry implications: These developments arrive amid ongoing tensions between AI advancement and creative professional protections.
- Video game voice actors recently ended a nearly year-long strike that secured consent and disclosure requirements when companies want to recreate voices through AI.
- Voice actors have been preparing for AI’s impact on their industry, and Adobe’s tools represent another shift toward AI-assisted creative workflows.
Timeline: Based on Adobe’s historical pattern, these experimental features typically migrate from Sneaks demonstrations to full creative suite integration within months.
- Photoshop’s Harmonize feature, which automatically places assets with accurate color and lighting, was shown at last year’s Sneaks and is now available in the full software.
- The new audio tools are expected to appear in Adobe’s suite sometime in 2026.