CEOs from OpenAI, Google, and Microsoft to Join Federal AI Safety Panel Alongside Other Tech Leaders

Editor

The US government has turned to leading artificial intelligence companies for advice on using AI to defend critical infrastructure, such as airlines and utilities, from AI-powered attacks. The Department of Homeland Security announced the creation of a panel that will include CEOs from major companies and industries, including tech giants like Google and Microsoft, as well as defense contractors and air-carrier executives. This public-private collaboration reflects the urgent need to address both the benefits and risks of AI in the absence of targeted national AI legislation. The panel will advise critical infrastructure sectors on the responsible use of AI and help them prepare for AI-related disruptions.

DHS Secretary Alejandro Mayorkas emphasized AI's transformative potential while acknowledging the real risks it poses, stressing the importance of adopting best practices and taking concrete actions to mitigate those risks. The 22-member AI Safety and Security Board, established under a 2023 executive order from President Joe Biden, comprises industry leaders, government officials, academics, and civil rights groups. Its mandate is to recommend ways to improve security, resilience, and incident response for AI use in critical infrastructure. The same executive order also produced government-wide rules on the purchase and use of AI in federal agencies' systems.

The US government already uses machine learning and AI for purposes such as monitoring volcanic activity, tracking wildfires, and identifying wildlife in satellite imagery. However, officials are concerned about the rise of AI-generated deepfake audio and video. Such fakes pose a significant threat to the security of events like the 2024 US election: a fake robocall imitating President Biden's voice has already alarmed officials focused on election security, and there are fears that foreign adversaries like Russia, China, or Iran could exploit the technology to influence elections. Mayorkas highlighted the importance of countering efforts by adversarial nations to unduly influence US elections using AI-generated content.

The advisory board will play a central role in helping critical infrastructure sectors manage the risks of AI technology and navigate AI-related disruptions. By bringing together industry leaders, government officials, and AI experts, the board aims to provide practical guidance and recommendations on using AI responsibly while protecting national security.

Overall, the US government's decision to engage leading AI companies and establish the AI Safety and Security Board demonstrates a proactive approach to the evolving threats posed by AI. By harnessing the expertise of industry, government, and academia, the initiative seeks to strengthen security, resilience, and incident response in critical infrastructure, safeguarding national interests while combating emerging threats in the AI landscape.
