A Cool Idea: Northern Datacenters for Canadian AI

Dylan McKibbon details Canada’s path to global leadership in sustainable artificial intelligence.

__________________________________________

If you’ve been thinking about Artificial Intelligence (AI) lately, it’s safe to say you’re not alone. Global investment in cancer research from 2016 to 2023 was $24.5 billion, but global investment in AI? $35 billion in the first half of 2024 alone, projected to reach approximately $200 billion by 2025. Although it remains unclear just where we are headed, wherever we are going, we are going there very, very fast. In the context of this breakneck speed, there is an opportunity for Canada to firmly establish itself as a global leader in AI by embracing a broad and comprehensive national AI strategy that makes bioethical considerations a core feature of its deployment. I’ve touched on this briefly earlier this year and, since then, I’ve thought a great deal about the deployment of AI and its growing environmental impact. The wicked problem born of the convergence of water scarcity and AI, for example, will soon make it impossible to speak of one without the other.

This explosive growth in large language model (LLM) and generative pre-trained transformer (GPT) investment doesn’t even scratch the surface of what’s to come, and the capabilities we’ve witnessed thus far will pale in comparison to the introduction of AI “Agents” in 2025. Language models like ChatGPT function reactively and episodically: training is a massive one-time computational expense, and their ongoing energy usage is tied mostly to the cost of generating responses to individual user queries. Agentic AI systems, such as the OpenAI agent codenamed “Operator”, are instead designed to autonomously perform complex tasks and make decisions with minimal human intervention. The AI Scientist, for example, is a research agent that automates the entire research lifecycle: formulating hypotheses, writing code, designing and executing experiments, analyzing the data, and writing up the research manuscript. The eventual integration of agentic AI with other critical systems will require constant connectivity and processing, and with it significantly higher computational demands for continuous monitoring and decision-making. These systems will be deployed not only in critical infrastructure (such as power grids and transportation) but also in commercial applications like digital twins, and in professional roles such as therapist, doctor, and teacher.

Photo Credit: Mike MacKenzie/flickr. Image Description: An illustration symbolizing artificial intelligence (AI).

All of this is to say that we will require massive amounts of energy to power datacenters, and these datacenters in turn will require enormous amounts of water for cooling: a staggering 6.6 billion m³ by 2027, according to some estimates, with as much as 57% of that cooling water drawn directly from potable sources. The expansion of AI systems will accelerate this problem. IBM reports that 82% of large enterprises have either deployed AI or are experimenting with doing so. Companies like Microsoft, Amazon, and Google have struck deals for dedicated nuclear power to supply the energy their AI systems need, but this means that even more water (billions of gallons per year) will be required to cool those reactors. Herein lies one of the problems: the entire data center industry suffers from a lack of transparency. Large companies like Google and Microsoft are legally required to publish their greenhouse gas emissions, but there are no comparable requirements for water usage. Such a requirement could go a long way toward highlighting the intensifying crisis of water scarcity and the problem of thirsty data centers but, without bold action on corporate water stewardship, the depletion of one of our scarcest and most vital natural resources will continue unchecked.
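To put that 6.6 billion m³ figure in perspective, here is a rough back-of-envelope conversion; the only assumption beyond the estimate above is a world population of roughly eight billion:

\[
6.6 \times 10^{9}\,\mathrm{m^{3}} = 6.6 \times 10^{12}\,\mathrm{L},
\qquad
\frac{6.6 \times 10^{12}\,\mathrm{L}}{8 \times 10^{9}\,\text{people}} \approx 825\,\mathrm{L}\ \text{per person, per year}.
\]

On that estimate, AI-related cooling alone would soon consume on the order of 800 litres of water annually for every person on Earth.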

A network of global orbital datacenters, governed by international laws and treaties, could theoretically draw on effectively unlimited solar power and radiate waste heat into space. Efforts are already underway to make this a reality: the European Union’s $2.1 million ASCEND study concluded that launching data centers into orbit is technically, economically, and environmentally feasible. For now, though, solutions are earthbound. There have been efforts to regulate AI, but they consistently fall short, in part because they overlook the physical infrastructure upon which AI relies. A sufficiently comprehensive AI strategy should introduce legislation that not only treats that infrastructure as a primary focus, but also explicitly brings it under the strategy’s authority and purview. Canada is the coldest nation in the world, has more lakes than the rest of the world combined, and holds roughly 20% of the world’s freshwater resources. We therefore have a clear national interest in, and are well positioned to be a global leader of, the construction and stewardship of an ethical, sustainable, and sovereign AI infrastructure: one that leverages our unique climate and geography and is supported by Indigenous co-governance. Strategically situating datacenters in Canada’s northern regions would allow natural cooling and plentiful solar energy to significantly reduce resource demands and water consumption.

Meeting these challenges with bioethical thinking might awaken our collective moral duty toward each other and toward future generations, emboldening us to challenge our leaders, businesses, and policymakers to ensure that the technologies we use today do not irreparably harm the ecosystems and societies of tomorrow. We should take care to responsibly wield our tools, lest we be wielded by them; the opportunity is ours to seize, and the responsibility is thus ours to bear.

 __________________________________________

Dylan McKibbon is a Health Privacy Officer with a background in Philosophy and Research Ethics. He is also a member of the Health Sciences North Research Ethics Board, and a Clinical Ethics Intern in the Ethics Quality Improvement Lab at William Osler Health System.