AI governance in the energy sector: Cybersecurity, bias, and black-box risks
Artificial intelligence (AI) is reshaping the global energy landscape in two critical ways: as a tool for utilities and retailers to optimise operations, and as a driver of unprecedented new demand for electricity. The US-based Franklin Templeton Institute estimates AI may account for 5% of global electricity use by 2035, with hyperscale data centres straining generation, transmission, and storage infrastructure. For operators and regulators alike, this makes the governance and resilience of AI tools inside the grid itself just as critical as securing new supply.
This dual role – AI as both an asset and a stressor – frames a central challenge: how to govern complex, opaque systems that promise efficiency and innovation, but also carry risks of bias, cybersecurity vulnerabilities, and accountability gaps.
Retail applications: Human-in-the-loop at Mercury
All along the energy value chain – from developers and generators to retailers – AI is already being integrated.
Mercury NZ, an electricity generator and multi-product utility retailer, is moving early to integrate AI into customer-facing and operational systems. Paulo Gottgtroy, Head of Decision Science and Analytics at Mercury, says the company applies techniques from machine learning to generative AI across channel management, compliance monitoring, and service optimisation.
Gottgtroy is speaking at the upcoming Future Grid Summit in December, where senior leaders will debate how energy companies are harnessing AI and what risks need to be managed.
“We use AI in multiple areas to support customer services and retail operations,” Gottgtroy explains. “This includes to support channel management, quality and compliance management, as well as operational processes to support service offering, cross-sell, among others. We apply different AI techniques from traditional machine learning algorithms to generative AI techniques.”
In one application, the company uses AI to identify customers in hardship earlier and offer them support. According to Gottgtroy, Mercury has had no post-pay disconnections since June 2024 – a result he describes as a significant benchmark of success for its customer care programme.
Yet Mercury has deliberately drawn a line at delegating decisions entirely to algorithms. Gottgtroy stresses the primacy of human oversight, particularly in areas that touch pricing, hardship, or service disconnection.
“I think the main risk is using AI without human oversight. At Mercury, our people validate and make decisions based on insights generated by AI techniques. These techniques have helped us to understand customer circumstances better and then find solutions that suit their needs,” Gottgtroy told Energy Insights.
Many AI systems are “black boxes”, meaning the logic behind the outcomes they produce for customers is opaque even to the teams that deploy them. To manage these risks, Mercury embeds “privacy by design”, using Privacy Impact Assessments (PIAs), anonymisation, and encryption, alongside audits and data quality frameworks. Gottgtroy notes that all models are treated as decision-support tools rather than autonomous managers, and no decisions are made without human oversight.
“We have a strong model management and data quality framework, which creates observability over aspects of our analytical models,” Gottgtroy said. “We make decision support tools, rather than decision management tools.”
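Mercury has not published implementation details, but the pattern Gottgtroy describes – a model that flags, and a person who decides – can be illustrated with a minimal sketch. Everything below (the field names, the scoring rule, the review threshold) is a hypothetical stand-in, not Mercury’s system.

```python
# Minimal sketch of a "decision support, not decision management" pattern.
# All names, thresholds, and the scoring rule are illustrative assumptions,
# not Mercury's implementation.

from dataclasses import dataclass


@dataclass
class Account:
    account_id: str
    missed_payments_90d: int
    avg_balance_trend: float   # negative = worsening
    contact_attempts: int


def hardship_risk_score(acc: Account) -> float:
    """Stand-in scoring function; a real system would use a trained model."""
    score = 0.0
    score += 0.25 * min(acc.missed_payments_90d, 4)
    score += 0.5 if acc.avg_balance_trend < 0 else 0.0
    score += 0.1 * min(acc.contact_attempts, 3)
    return min(score, 1.0)


def flag_for_review(accounts: list[Account], threshold: float = 0.6) -> list[dict]:
    """The model only *flags* accounts; a care specialist makes the decision.

    Nothing here changes an account's status, pricing, or connection state -
    it produces a work queue for a human.
    """
    queue = []
    for acc in accounts:
        score = hardship_risk_score(acc)
        if score >= threshold:
            queue.append({
                "account_id": acc.account_id,
                "risk_score": round(score, 2),
                "decision": None,       # to be filled in by a person
                "decided_by": None,     # audit trail of human accountability
            })
    return queue


if __name__ == "__main__":
    demo = [Account("A-1001", 2, -40.0, 1), Account("A-1002", 0, 15.0, 0)]
    for item in flag_for_review(demo):
        print(item)
```

The design choice that matters is the empty decision and decided_by fields: the model produces a work queue, while accountability for the outcome stays with a named person.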
Looking forward, Gottgtroy sees opportunity in AI-enabled personalisation – systems that listen, interpret, and design services around customer preferences to find solutions that meet their needs while generating value for businesses. As with any technology, the primary risk, he says, lies not in the technology itself but in delegating decision-making to systems without a comprehensive understanding of their impact on customer experience and operational outcomes.
“Artificial Intelligence will continue to evolve into a highly efficient support tool. Its role should remain that of an assistant – enhancing human decision-making, not replacing it.”
Regulatory lens: Data centres as critical loads
If retailers are experimenting with AI at the customer edge, system operators are already grappling with AI’s demand-side consequences.
Anna Collyer, Chair of the Australian Energy Market Commission (AEMC), highlighted the destabilising potential of large-scale data centres as they proliferate in response to the AI boom:
“The rise of artificial intelligence is driving unprecedented demand for data centres in Australia, with some facilities potentially requiring as much electricity as small cities.”
The AEMC’s comprehensive overhaul of the technical requirements for connecting to the national electricity grid aims to ensure such facilities can ride through system disturbances rather than compounding them – that “facilities can respond appropriately during power system disturbances and don't inadvertently make problems worse during system events”.
The need for reform was underscored by a recent incident in the United States, documented by the North American Electric Reliability Corporation (NERC), in which 60 data centres consuming 1,500 MW of power disconnected simultaneously during a system disturbance, compounding grid stability issues.
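The scale of such an event is easier to grasp with a rough frequency calculation. The sketch below applies the standard swing-equation approximation for the initial rate of change of frequency after a sudden power imbalance; the online capacity and inertia figures are illustrative assumptions, not values from the NERC report.

```python
# Back-of-the-envelope estimate of the initial frequency impact of a sudden
# loss of load, using the classic swing-equation approximation:
#     ROCOF ≈ f0 * ΔP / (2 * H * S_base)
# The system size (S_BASE_MW) and inertia constant (H_SECONDS) below are
# illustrative assumptions, not figures from the NERC incident report.

F0 = 60.0           # nominal frequency in the affected US interconnection (Hz)
DELTA_P_MW = 1_500  # load lost when the data centres disconnected (MW)
S_BASE_MW = 80_000  # assumed online generation base (MW) - illustrative
H_SECONDS = 4.0     # assumed aggregate inertia constant (s) - illustrative

rocof_hz_per_s = F0 * (DELTA_P_MW / S_BASE_MW) / (2 * H_SECONDS)

print(f"Power imbalance: {DELTA_P_MW / S_BASE_MW:.1%} of online capacity")
print(f"Initial rate of change of frequency: +{rocof_hz_per_s:.3f} Hz/s")
# A sudden loss of load pushes frequency *up*; layered on top of the original
# disturbance, an uncoordinated 1,500 MW step change is exactly the behaviour
# the proposed connection standards aim to prevent.
```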
Because of the complexity involved – particularly around large customer loads, power system security, and protection system requirements – the AEMC has formed a Technical Working Group to refine the standards, with meetings scheduled for late October and mid-November 2025 and draft rules expected in March 2026. The process illustrates how regulators are beginning to connect AI-driven demand and digitalisation trends with grid resilience governance.
Cybersecurity and adversarial AI
For critical infrastructure operators, AI can both respond to and exacerbate cyber risk. Adjunct Professor Shivaji Sengupta, Principal at CyberWorX Energy, argues that while AI strengthens detection and response, it also multiplies vulnerabilities.
“AI is transforming cybersecurity in the energy and oil and gas sector, offering unparalleled capabilities to detect, respond to, and predict cyber threats,” Sengupta said. “Real-world applications, from smart grid protection to ransomware defense, highlight its potential to safeguard critical infrastructure. However, challenges like data quality, legacy system integration, and ethical concerns require careful navigation.”
Sengupta points to adversarial AI as a growing risk: “Cybercriminals can use AI to develop sophisticated attacks, such as deepfake phishing or adversarial machine learning to evade detection,” Sengupta said.
“AI’s autonomous decision-making raises questions about accountability. For instance, who is responsible if an AI system mistakenly shuts down a critical pipeline?”
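What adversarial evasion can look like in practice is easier to see with a toy example. The sketch below shows how a naive change-detection rule on operational telemetry can be slipped past by an attacker who splits one large manipulation into many small steps; the readings, threshold, and attack are entirely illustrative, not drawn from any real SCADA system.

```python
# Toy illustration of adversarial evasion against a naive anomaly detector.
# Telemetry values, the threshold, and the attack strategy are illustrative.

import numpy as np


def change_detector(readings: np.ndarray, max_step: float = 5.0) -> bool:
    """Flag a trace if any single jump between readings exceeds max_step."""
    return bool(np.any(np.abs(np.diff(readings)) > max_step))


baseline = 100.0   # nominal pipeline pressure (arbitrary units)
target = 130.0     # unsafe value an attacker wants to reach

# A crude attack - one big jump - is caught by the detector.
crude = np.array([baseline, target])
print("Crude attack flagged:", change_detector(crude))        # True

# An adversary who understands the detector splits the same change into many
# small steps, each below the threshold, and slips through undetected.
stealthy = np.linspace(baseline, target, num=30)
print("Stealthy attack flagged:", change_detector(stealthy))  # False
```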
What good practice looks like
A recurring criticism is that most corporate governance models are too static for AI’s dynamic behaviour.
Responsible AI Australia argues the problem is systemic. The organisation warns that black box AI carries ethical risks (bias, unfairness), legal risks (privacy, discrimination, consumer law breaches), and operational risks (model drift, hidden vulnerabilities, cyber exposure).
“AI is quickly becoming a cornerstone of corporate strategy, but it brings complexity and risk that cannot be managed with static policies,” the certification provider said. “Australian organisations face significant ethical, legal and operational challenges if they continue to treat AI governance as a one-off checklist.”
Their governance framework embeds continuous monitoring, documentation, bias testing, and stakeholder engagement throughout the AI lifecycle. Importantly, it treats governance as an iterative, adaptive process rather than a compliance checklist, aligning with international standards such as the EU AI Act while being tailored to Australian conditions. The framework is supported by a broader national network, the Responsible AI Network (RAIN), a program of Australia’s National AI Centre.
“By embedding continuous oversight, accountability and adaptability into the AI lifecycle, companies can harness AI's benefits without sacrificing control or ethics. The choice is clear: adopt dynamic governance now to secure trust and stay ahead in the age of AI,” the certifier said.
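Continuous monitoring of this kind is often operationalised as drift checks on the data a model sees in production. The sketch below computes a Population Stability Index (PSI) between a training sample and recent live data; the bin count and alert thresholds are common rules of thumb, not values taken from Responsible AI Australia’s framework.

```python
# Minimal sketch of a model-drift check using the Population Stability Index
# (PSI). Bin count and alert thresholds are common rules of thumb, not values
# from any particular governance framework.

import numpy as np


def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """Compare the distribution a model was trained on with live data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) in empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_scores = rng.normal(0.4, 0.10, 10_000)  # what the model saw
    live_scores = rng.normal(0.5, 0.12, 2_000)       # what it sees now

    psi = population_stability_index(training_scores, live_scores)
    print(f"PSI = {psi:.3f}")
    if psi > 0.25:
        print("Significant drift - trigger review before trusting the model")
    elif psi > 0.10:
        print("Moderate drift - monitor closely")
    else:
        print("Distribution stable")
```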
Another framework, produced by CSIRO’s Data61 in partnership with Alphinity Investment Management and drawing on the AI Ethics Principles, calls for transparency, explainability, and clear governance structures to assess AI risks – principles that energy companies can adapt for retail operations or system planning.
The framework – The intersection of Responsible AI and ESG: A Framework for Investors – illustrates how investors are beginning to demand governance evidence, mirroring the trajectory seen in climate and ESG reporting. For energy companies, this means AI assurance will soon be part of both regulatory compliance and capital market expectations.
Looking ahead
The next five years will test the sector’s ability to capture AI’s benefits while mitigating its risks. On the demand side, AI-driven data centres threaten to become new “critical loads,” shaping connection standards and transmission investment. On the operational side, AI-enabled decision support is expanding into forecasting, digital twins, and retail automation – areas where governance must keep pace with innovation.
The opportunity lies in personalisation and customer empowerment, but only if human accountability remains central. Adversaries will also exploit AI, making cyber resilience and constant oversight non-negotiable.
For senior energy leaders, the imperative is clear: AI cannot be treated as a plug-and-play tool. It must be governed as a critical system of systems, with transparency, security, and trust built in from the start.