
To minimize AI’s cyber risks to energy infrastructure, start with the design phase



Leo Simonovich is the vice president and global head of industrial cyber and digital security at Siemens Energy.

A year ago, President Biden issued an executive order ambitiously aimed at “the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order attempts to set in motion the government agencies that must grapple with the reality of an AI revolution already in progress. Across a broad set of industries, AI poses an open question: will it lead to more opportunities, or more threats?

For the energy sector, this is a complex question, because AI’s impacts cut in several directions. It will demand more electricity, make energy production more efficient, escalate cyber threats, and – if left unprotected – create new vulnerabilities at the heart of energy infrastructure. Energy leaders need to address these challenges – and build the answers into all the new infrastructure that will be required to sustain AI’s productivity across every industry it touches.

AI data centers are power hungry, and their demand for energy is only growing. Companies like Google, Amazon, and Microsoft are investing in new power production, including new nuclear power. The clear demand for new electricity infrastructure heightens the need to think through AI’s potential boosts to productivity, and its double-edged impact on cybersecurity.

AI can make the energy sector more efficient and productive. Already, AI helps locate underground energy resources, inspect renewable energy facilities, and find new ways to optimize turbines. In the future, AI could automate tasks for power production and distribution – but only if we can protect and trust AI algorithms. In other words, strong cybersecurity will be essential to making energy resources more productive with AI.

AI can also contribute to stronger cybersecurity. Machine learning makes unprecedented cybersecurity monitoring techniques viable. A human analyst can’t look at every relevant datapoint generated in a power plant on a minute-by-minute basis, but AI can. Applying machine learning and other AI techniques to data analysis promises to improve cybersecurity monitoring capabilities.
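To make that concrete, here is a minimal sketch of the kind of anomaly detection such monitoring relies on, trained on simulated minute-by-minute plant telemetry. The sensor names, values, and model choice (scikit-learn’s IsolationForest) are illustrative assumptions, not a description of Siemens Energy’s tooling or any specific product.

```python
# Minimal sketch: unsupervised anomaly detection over simulated plant telemetry.
# Sensor names, values, and thresholds are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# One day of minute-by-minute "normal" readings: turbine vibration (mm/s),
# bearing temperature (deg C), and generator load (MW).
normal = np.column_stack([
    rng.normal(2.0, 0.2, 1440),
    rng.normal(65.0, 1.5, 1440),
    rng.normal(400.0, 10.0, 1440),
])

# Fit the model on a baseline of normal behavior.
model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Score new readings; a prediction of -1 flags a deviation from the baseline.
new_readings = np.array([
    [2.1, 64.8, 402.0],   # typical operation
    [2.0, 66.0, 395.0],   # typical operation
    [3.4, 71.0, 280.0],   # abrupt shift worth an analyst's attention
])
for reading, flag in zip(new_readings, model.predict(new_readings)):
    status = "ANOMALY" if flag == -1 else "normal"
    print(f"{reading} -> {status}")
```

A production system would of course ingest far more signals and combine this kind of statistical baseline with protocol-aware and threat-intelligence-driven detection, but the principle is the same: the model watches every datapoint, every minute, so human analysts only have to look at the exceptions.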

We fully expect that malicious actors — especially those backed by nation-states — will harness AI to launch cyberattacks. Generative AI can compose more plausible phishing attacks, spoof the voices of executives, and lower the level of technical skill needed to produce or disguise malicious code. Deployed with malicious intent, AI may also be able to penetrate and map critical infrastructure networks to reveal attack pathways.

Across the public and private sectors, cybersecurity researchers are working to ensure that AI’s security benefits outweigh the rising threats – with tangible progress over the past year.

Public sector efforts are grappling with the implications of AI for energy. An April report published by the U.S. Department of Energy outlines how AI can improve planning, permitting, operations and reliability, and resilience — key steps in getting more out of existing infrastructure and accelerating the deployment of new capacity. Faster permitting may be particularly important in the face of rising demand: there is speculation that if new power production and transmission lines cannot be added to the grid quickly enough, companies in the U.S. will choose “behind-the-meter” strategies to power data centers directly.

Behind-the-meter strategies could have a range of impacts, depending on the power production sources in question. Moving existing power production off the grid would change how the electric grid is managed day to day. Building new power production entirely behind the meter makes it less clear how cybersecurity standards and other federal regulations of the electricity sector — which are set at the bulk power system level — would or should apply. In both situations, investors would need to carefully consider how cyber risks change.

Proposed reporting requirements for AI models attempt to address AI’s complex impacts on cybersecurity — including both the possibility of weaponization and the need to build cybersecurity into the models themselves.

Perhaps most exciting for the energy sector are the Department of Energy’s new AI testbeds. Testing different chip and server configurations helps develop AI for industrial purposes — like managing the digital devices that run energy systems. But just as important, testbeds let engineers safely test AI capabilities — including red-team attacks on AI. The teams that must defend critical infrastructure like power plants and pipelines will better understand the risks that come with it.
