AI giants pledge to introduce watermarks to protect content from fakes
The world’s leading artificial intelligence companies, including giants such as OpenAI, Alphabet and Meta Platforms, have pledged to the White House to voluntarily watermark AI-generated content to improve the security of the technology.
Other key AI players have joined the commitment, including Anthropic, Inflection, Amazon.com and Microsoft, an OpenAI partner. All of them confirmed their willingness to thoroughly test systems before release, to share information on risk-reduction methods, and to invest in cybersecurity.
“We appreciate the president’s efforts to engage the technology industry in developing concrete steps to make AI safer, more secure, and more socially beneficial,” Microsoft said in a blog post on Friday.
Generative AI, which creates new content from collected data and can mimic human-written text, has become a sensation this year. As a result, legislators around the world face the need to develop measures against the risks this new technology may pose to national security and national economies.
On artificial intelligence regulation, the US lags behind the European Union. In June of this year, EU lawmakers agreed on a draft set of rules that would require systems like ChatGPT to disclose information about AI-generated content, help distinguish so-called deepfake images from real ones, and provide protection from illegal content.
Democratic Senate Majority Leader Chuck Schumer called in June for “comprehensive legislation” to advance and ensure the safe use of artificial intelligence.
Congress is considering a bill that would require creators of political ads to disclose whether AI has been used to create images or other content.
President Joe Biden, who met at the White House with executives from seven key AI companies on Friday, is also working on an executive order and bipartisan legislation on AI technology.
As part of this effort, seven leading AI companies have pledged to develop a watermarking system for all forms of content, from text and images to AI-generated audio and video. This will allow users to know when the technology has been used.
A watermark embedded in content at a technical level is expected to make it easier for users to recognize deepfake images or audio recordings that may, for example, depict non-existent violence, facilitate fraud, or portray a politician in an unflattering light.
It remains unclear how the watermark will be preserved and displayed as content is shared.
The companies also pledged to focus on protecting user privacy as AI evolves and to ensure that the technology is free from bias and not used to discriminate against vulnerable groups.
Other commitments made by companies include developing AI solutions for scientific challenges, including medical research, and climate change mitigation.
Source: www.securitylab.ru