Tech Giants Agree to Voluntary AI Safety Checks by Governments Before Public Release
- Top tech firms including Meta, Google, and OpenAI will allow governments to vet their AI tools before public release, UK PM Rishi Sunak announced at the AI Safety Summit.
- Sunak said AI poses a grave threat to humanity, likening its risks to those of nuclear war; the summit convened a diverse group of attendees, including Elon Musk.
- The firms agreed to test their AI models with governments against dangers such as national security risks and societal harms.
- Sunak announced international backing for an expert AI safety panel modeled on the IPCC, to be chaired by Yoshua Bengio.
- Sunak said the agreements reached reduce the threat from AI, but the testing remains voluntary because legislation would be too slow.
