AI Briefing: Watermarking AI content doesn’t go far enough, researchers warn

By Marty Swant

As the tech industry rallies around commitments like watermarking AI-generated content, some experts warn that more work needs to be done.

In a new report released today, Mozilla researchers argue that popular methods for disclosing and detecting AI content aren't effective enough to curb the risks of AI-generated misinformation, and that the guardrails currently used by many AI content providers and social media platforms aren't strong enough to deter malicious actors. Along with "human-facing" methods, such as labeling AI content with visual or audible warnings, the researchers analyzed machine-readable watermarking techniques, including cryptographic signatures, embedded metadata, and statistical patterns.
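To make the "statistical patterns" category concrete, here is a toy sketch of one well-known approach: a generator biases its token choices toward a pseudo-random "green list" keyed on the previous token, and a detector recomputes the same partition and measures how often the text lands on it. Everything below (the function names, the SHA-256 seeding, the tiny vocabulary) is illustrative and is not drawn from the Mozilla report:

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token.

    A watermarking generator would nudge its sampling toward these "green"
    tokens; a detector recomputes the same partition without needing the model.
    """
    seed = int(hashlib.sha256(prev_token.encode("utf-8")).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(vocab) * fraction)
    return set(rng.sample(vocab, k))

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that fall in the green list keyed by their predecessor."""
    hits = sum(
        tokens[i] in green_list(tokens[i - 1], vocab)
        for i in range(1, len(tokens))
    )
    return hits / max(len(tokens) - 1, 1)
```

A detector would flag text whose green fraction sits statistically far above what unwatermarked text would produce (here, roughly 0.5). The report's point is that such signals are fragile: paraphrasing or re-tokenizing the text can erase the pattern, which is one reason the researchers consider these methods insufficient on their own.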

Scaled AI-generated content carries inherent risks when it intersects with the internet's current distribution dynamics. Focusing on technical solutions could distract from fixing broader systemic issues such as hyper-targeted political ads, according to Mozilla, which also noted that self-disclosure isn't enough.



Source: Digiday


Author: Aaron
