Biden-Harris Administration Persuades Top Tech Companies to Manage AI-Related Risks
18 September 2023
Tech giants Adobe, IBM, Nvidia, and five others have signed voluntary commitments to manage the risks posed by Artificial Intelligence (AI). These agreements build on voluntary commitments announced earlier this summer: the eight new signatories join Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, which signed the original set of commitments in July.
What do these commitments entail? The Biden-Harris administration’s goal is the safe, secure, and transparent development of AI technology.
The safety aspect is covered by:
- Companies’ commitment to conduct thorough internal and external security testing of their AI systems before release. This testing, carried out in part by independent experts, covers risks such as biosecurity, cybersecurity, and broader societal harms.
- Companies’ commitment to share information on managing AI risks across the industry and with governments, civil society, and academia. This includes disseminating safety best practices, sharing information about attempts to circumvent safeguards, and fostering technical collaboration.
The security aspect is covered by:
- Companies’ commitment to invest in cybersecurity and insider-threat safeguards to protect their proprietary and unreleased AI model information. They agree that this information should be shared only when intended and only when security risks have been addressed.
- Companies’ commitment to facilitate third-party discovery and reporting of vulnerabilities in their AI systems. A robust reporting mechanism allows problems to be found and fixed quickly.
The transparency aspect is covered by:
- Companies’ commitment to develop robust technical mechanisms that let users know when content is AI-generated, such as a watermarking system.
- Companies’ commitment to publicly report their AI systems’ capabilities and limitations, as well as the areas of appropriate and inappropriate use.
- Companies’ commitment to prevent AI systems from unfairly favoring some people over others and to protect users’ privacy.
- Companies’ commitment to develop and deploy AI systems that help address society’s greatest challenges, from cancer research to climate change research and many more.
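To make the watermarking commitment above concrete, here is a deliberately simplified, hypothetical sketch of the idea: tagging text as AI-generated by embedding an invisible marker that tools can later detect. The function names and the zero-width-character scheme are illustrative assumptions only; real provenance systems (cryptographic watermarks, signed content metadata) are far more robust than this toy.

```python
# Illustrative toy watermark: encode a short tag (e.g. "AI") as invisible
# zero-width Unicode characters appended to the text. NOT any company's
# actual method -- just a sketch of the embed/detect concept.

ZW0 = "\u200b"  # zero-width space      -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1

def embed_watermark(text: str, tag: str = "AI") -> str:
    """Append the tag's bits as invisible zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + payload

def extract_watermark(text: str) -> str:
    """Recover the hidden tag, or return '' if no marker is present."""
    bits = "".join("1" if c == ZW1 else "0"
                   for c in text if c in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8  # whole bytes only
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))

marked = embed_watermark("This paragraph was machine-written.")
print(extract_watermark(marked))  # -> AI
```

The displayed text is unchanged to a human reader, while a detector scanning for the zero-width payload can flag it as machine-generated. Production-grade approaches embed the signal statistically in the model's output or attach cryptographically signed provenance metadata, so it cannot be stripped by simple copy-paste.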