As AI regulations loom, tech companies add new ways to improve their standards

By Marty Swant

With government officials exploring ways to rein in generative AI, tech companies are looking for new ways to raise their own bar before regulators raise it for them.

In the past two weeks, several major tech companies focused on AI have added new policies and tools to build trust, avoid risks and improve legal compliance related to generative AI. Meta will require political campaigns to disclose when they use AI in ads. YouTube is adding a similar policy for creators who use AI in uploaded videos. IBM just announced new AI governance tools. Shutterstock recently debuted a new framework for developing and deploying ethical AI.

Those efforts aren’t stopping U.S. lawmakers from moving forward with proposals to mitigate the various risks posed by large language models and other forms of AI. On Wednesday, a group of U.S. senators introduced a new bipartisan bill that would create new transparency and accountability standards for AI. The “Artificial Intelligence Research, Innovation, and Accountability Act of 2023” is co-sponsored by three Democrats and three Republicans, including U.S. Senators Amy Klobuchar (D-Minn.) and John Thune (R-S.D.), along with four others.

Continue reading this article on digiday.com.


Source: Digiday


Author: Aaron
