The United States government will let AI companies self-regulate for their own good

A hot potato: Since taking office, the Biden-Harris Administration has been developing a code of conduct for AI companies. The code, now signed by seven "leading" corporations in the generative AI business, is meant to give the entire AI market a safer, more secure, and more transparent development process.

The White House recently persuaded Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to sign a document outlining several voluntary commitments for ensuring "Safe, Secure, and Trustworthy" AI. Many things could change in the months to come, starting with a yet-to-be-detailed watermarking system that should give users an easy way to detect AI-generated content.

Companies developing AI systems have a responsibility to ensure that their products are safe, the White House said, and must uphold the "highest standards" to nurture innovation while preserving Americans' rights and safety. The commitments, which the companies have agreed to undertake "immediately," should mark a critical step toward responsible AI development.

To ensure safety, the US Administration said, AI companies commit to conducting internal and external security testing of their products before releasing them to the public. Companies will also share information across industry, academia, and government to manage AI risks, invest in cybersecurity, and facilitate "third-party discovery" and reporting of vulnerabilities in AI systems.

Furthermore, a "robust" watermarking system will be needed to earn the public's trust and reduce the dangers of fraud and deception, so there should be fewer convincing fake photos of prominent politicians being arrested circulating online. AI companies are also committing to develop and deploy their systems to help address society's greatest challenges, such as cancer research and climate change mitigation, the White House stated, rather than simply exploiting the technology to make shareholders even richer.

The organizations quoted on the White House's official website appear to be on board with the new commitments: on its corporate blog, OpenAI noted that it has agreed to create watermarking systems to easily identify AI-generated "audio or visual content." Google will also use metadata and other "innovative techniques" in addition to watermarks to flag such content, while Microsoft praised the Biden Administration's initiative as a foundation for a new era of AI.

According to a statement quoted by the Financial Times, making AI safer and more trustworthy is a "high priority" for the US President and his team. The flood of deepfakes, misinformation, and other misleading content that has invaded the internet in recent months has finally become a political issue in Washington DC and elsewhere in the world.
