About
This Climate Does Not Exist is an AI-driven experience grounded in empathy: it lets users visualize the environmental impacts of the current climate crisis, one address at a time.
How does the visualization tool work?
The visualizations you see are created using generative adversarial networks (GANs), a class of machine-learning frameworks that allow a computer to create and transform images. To find out more, please see the Science behind the project section below.
Origin of the name
We named this project This Climate Does Not Exist to emphasize that climate change is having dire consequences all around the world right now, even if you aren’t experiencing it in your own backyard.
It is also a reference to a trend that uses artificial intelligence to generate images of things that aren’t actually real, such as people, cats, butterflies and even feet.
We chose this name to illustrate that while the AI-generated images depicting the impacts of the climate crisis are not real, they serve to raise awareness and incite action.
Origin of the project
Many people contributed to this project. We are a group of AI scientists based at Mila, the Quebec AI Institute. We started working on this project in March 2019 and launched the website in 2021.
While we are not climate scientists, we have done our very best to gather credible sources from partners such as the Climate Action Network. We are computer scientists with a keen interest in climate change and environmental impacts, and we want to use AI to make the world a better place.
Connecting the dots: cognitive bias and climate change
Climate change is a major challenge for humanity. Preventing its catastrophic consequences will require changes in both policy-making and individual behaviours.
However, many cognitive biases prevent us from taking action, since climate change is an abstract phenomenon that can be hard to perceive as a direct threat to ourselves. This leads to psychological distancing: the perceived distance can be temporal (the effects of climate change seem to lie in the future) and spatial (many of its effects happen far away from us). Research suggests that showing people images of the impacts of climate change (flooded cities, fire-ravaged forests, smog-bound cities) can help reduce this psychological distance, especially if the places are familiar to the viewer.
We believe that harnessing AI to create images of personalized climate impacts will be especially powerful in overcoming the barriers to action and raising awareness of this important issue. Our hope is that this project will empower viewers to rethink their cognitive biases and take action, as individuals and members of the global community, to stop climate change.
For more information about the research on behavioral science and climate change, please see this short literature review that we prepared.
The science behind the project
Using AI to generate images
Generative adversarial networks, or GANs, were invented in Montreal in 2014, giving AI the ability to generate new content, such as images, text and even music. At first, GANs learned to generate new images from a set of examples: for instance, generating realistic faces after training on photos of people (e.g. This Person Does Not Exist). They were then improved to enable the transformation of images from one group into another. This process was pioneered by an architecture called CycleGAN, which can transform horses into zebras, apples into oranges and winter scenes into summer scenes.
We are using a new type of GAN (see our Publications) to generate the images of climate change that you see on our website.
To learn more about AI and GANs:
Online Courses:
Books:
Methodology
Generative adversarial networks, or GANs, are AI models composed of two neural networks: a generator and a discriminator.
These networks compete with each other. The generator tries to fool the discriminator by creating images that are as realistic as possible (for instance, flooded streets), while the discriminator tries to distinguish the images created by the generator from real photos of flooded scenes. This process leads the generator to gradually improve the quality of the images it creates, fooling the discriminator more and more often.
At the end of the GAN training process, the images created by the generator should be indistinguishable from the real images. This is called “convergence.” In practice, convergence is challenging to achieve, since images have many characteristics and attributes, so the generator has to learn a very complex mathematical function in order to generate realistic images. For this reason, our GAN took several months to design and train. We tried a variety of approaches before settling on the current one.
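The competition described above can be made concrete with the standard GAN objective. Below is a minimal sketch, assuming a sigmoid discriminator whose output is the probability that an image is real; the two scores are invented purely for illustration, not taken from the actual model:

```python
import math

def bce(p, label):
    # Binary cross-entropy of a predicted probability p against a 0/1 label.
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# Hypothetical discriminator outputs (probability that an image is real).
d_real = 0.9  # score on a real photo of a flooded street
d_fake = 0.2  # score on an image produced by the generator

# The discriminator is trained to push real images towards 1 and fakes towards 0.
d_loss = bce(d_real, 1) + bce(d_fake, 0)

# The generator is trained to make the discriminator score its fakes as real
# (the widely used "non-saturating" generator loss).
g_loss = bce(d_fake, 1)
```

In a real training loop, these two losses are minimized in alternating steps, each network improving against the other until the fakes become hard to tell apart from real images.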
More specifically, our approach splits the problem of creating the events into several parts. The first part focuses on learning a shared “representation,” which is a set of numbers condensing the information contained in the images. The second part uses this information to create the various events from the same representation. This enables an efficient processing pipeline where we do not need to process the input image several times to produce the visualizations of the various events. Instead, we reuse this intermediate step of “encoding” the content of a picture.
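As a toy illustration of this shared-encoder design (the linear layers, sizes and head names here are invented for the sketch; the real model is a deep convolutional network):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy stand-in for the encoder: one linear layer mapping a flattened
# "image" (64 numbers) to a 16-number shared representation.
W_encoder = rng.standard_normal((16, 64))

def encode(image_vec):
    # Computed once per input image, then reused by every event head.
    return np.tanh(W_encoder @ image_vec)

# One lightweight "head" per event, all reading the same representation.
heads = {event: rng.standard_normal((64, 16))
         for event in ("flood_mask", "sky_mask", "depth_map")}

image = rng.standard_normal(64)
shared = encode(image)                                        # encode once
outputs = {event: W @ shared for event, W in heads.items()}   # decode per event
```

The point is structural: `encode` runs once per picture, and each event-specific head reuses the same intermediate code instead of reprocessing the input image.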
To render smog, we use the intermediate representation to produce a depth map that predicts the distance of each pixel from the camera, so that we can properly scale the smog, with objects farther away being less clear than closer ones.
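This depth-based scaling matches the classic single-scattering haze model. Here is a minimal sketch, with made-up `beta` (haze density) and `airlight` constants:

```python
import numpy as np

def add_haze(image, depth, beta=1.0, airlight=0.8):
    # Single-scattering haze model: the observed colour is the scene
    # radiance attenuated by transmission t = exp(-beta * depth),
    # plus a uniform "airlight" term scaled by (1 - t).
    t = np.exp(-beta * depth)[..., None]   # HxW -> HxWx1 for broadcasting
    return image * t + airlight * (1.0 - t)

# Toy 2x2 scene: top row is close to the camera, bottom row is far away.
image = np.full((2, 2, 3), 0.2)
depth = np.array([[0.1, 0.1],
                  [3.0, 3.0]])
hazy = add_haze(image, depth)
```

Far pixels (large depth, low transmission) end up close to the airlight colour, while near pixels keep most of their original colour, which is exactly the "less clear when farther away" effect described above.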
To create an image of a wildfire, we use the shared representation to build a “sky mask” that predicts which pixels in the input image belong to the sky, so that we can turn them orange, apply some blur, and then tweak the overall contrast.
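A rough NumPy sketch of that recipe follows; the tint, blending weights and contrast factor are invented, and a mean-colour blend stands in for a real Gaussian blur:

```python
import numpy as np

def wildfire_filter(image, sky_mask, tint=(1.0, 0.5, 0.1)):
    # image: HxWx3 floats in [0, 1]; sky_mask: HxW booleans (True = sky).
    out = image.copy()
    # Blend an orange tint into the sky pixels.
    out[sky_mask] = 0.4 * out[sky_mask] + 0.6 * np.asarray(tint)
    # Very rough stand-in for a blur: pull sky pixels towards the mean colour.
    out[sky_mask] = 0.8 * out[sky_mask] + 0.2 * out.mean(axis=(0, 1))
    # Simple global contrast tweak around mid-grey, clipped to [0, 1].
    return np.clip(0.5 + 1.2 * (out - 0.5), 0.0, 1.0)

image = np.full((4, 4, 3), 0.3)
sky_mask = np.zeros((4, 4), dtype=bool)
sky_mask[:2] = True            # pretend the top half of the picture is sky
result = wildfire_filter(image, sky_mask)
```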
Finally, to simulate a flood, the shared representation is processed to output a flood mask that shows where water should be added in the original scene. This mask is then used by the last piece of our puzzle, another neural network, whose job is to paint water on the input image based on the flood mask and taking the input image’s context into account.
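The final compositing step can be sketched as simple mask-guided alpha blending. The water colour below is invented, and in the real system the "painting" is done by a neural network that accounts for reflections and scene context rather than a flat colour:

```python
import numpy as np

def paint_water(image, flood_mask, water_colour=(0.30, 0.40, 0.45)):
    # flood_mask: HxW values in [0, 1] saying how strongly each pixel floods.
    alpha = np.asarray(flood_mask, dtype=float)[..., None]
    water = np.broadcast_to(np.asarray(water_colour), image.shape)
    # Inside the mask, blend towards the water colour; outside it, the
    # original scene context is pasted back unchanged.
    return (1.0 - alpha) * image + alpha * water

image = np.full((2, 2, 3), 0.6)
flood_mask = np.array([[0.0, 0.0],
                       [1.0, 1.0]])   # flood only the bottom of the scene
flooded = paint_water(image, flood_mask)
```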
Diagram: the three image-generation pipelines.
- Flood: input image → compute flood mask → paint water → paste initial context
- Wildfire: input image → segment sky → increase seg map → add Gaussian blur → warm picture → darken picture → increase contrast
- Smog: input image → infer pseudo-depth map → compute transmission → scale by airlight → compute input irradiance → add 4 and 5 (the airlight and irradiance terms)
Related published work and documents
The Team
Project Leadership
Machine Learning and Programming
Communications and Website Content
Climate Science
Behavioral Science
Presented by
Acknowledgements
We would like to thank Google, the National Geographic Society, MIT Creative Commons, Climate Outreach and Borealis.AI for their support.
Contact Us
To contribute to our project or to obtain usage permissions, write to us at thisclimate@mila.quebec
About Mila
Founded in 1993 by Professor Yoshua Bengio of the Université de Montréal, Mila is a research institute in artificial intelligence that rallies over 500 researchers specializing in the field of machine learning. Based in Montreal, Mila’s mission is to be a global pole for scientific advances that inspire innovation and the development of AI for the benefit of all.
Mila, a non-profit organization, is internationally recognized for its significant contributions to machine learning, especially in the areas of language modelling, machine translation, object recognition and generative models.
Project sources
Curious to know where our information comes from? Here is a list of our sources.
Climate Change
- State of the Global Climate 2020 - Unpacking the indicators (World Meteorological Organization)
- 2020 on track to be one of the three warmest years on record (World Meteorological Organization) (In French)
- Smog: causes and effects (Government of Canada)
Floods
- Floods Overview (World Health Organization)
- Could Floating Cities Help People Adapt to Rising Sea Levels? (Floodlist)
- Global warming and population change both heighten future risk of human displacement due to river floods (IOP Science)
- 2021 The impact of disasters and crises on agriculture and food security (Food and Agriculture Organization of the United Nations)
Wildfires
- Drought and heat exacerbate wildfires (World Meteorological Organization) (In French)
- Economic Losses, Poverty & Disasters 1998-2017 (United Nations Office for Disaster Risk Reduction)
- 2021 The impact of disasters and crises on agriculture and food security (Food and Agriculture Organization of the United Nations)
- 6 Graphics Explain the Climate Feedback Loop Fueling US Fires (World Resources Institute)
- Wildfires contribute to global warming (Le Devoir) (In French)
- State of the Global Climate 2020 Provisional Report (World Meteorological Organization)
- It’s Not Just the West. These Places Are Also on Fire. (New York Times)
Smog and other forms of air pollution
- Smog: causes and effects (Government of Canada)
- Smog (National Geographic)
- Human Health (Chapter 8, IPCC)
- Smog and particulate matter: climate change and ambient smog levels (Institut national de santé publique du Québec) (In French)
- State of the Global Climate 2020 Provisional Report (World Meteorological Organization)
- Air Pollution (World Health Organization)
- Global Platform on Air Quality and Health (World Health Organization)
- Climate change and health (World Health Organization) (In French)
- Particle Pollution (American Lung Association)
- Causes and Consequences of Air Pollution in Beijing, China (Environmental ScienceBites)