
About

This Climate Does Not Exist is an AI-driven experience based on empathy, allowing users to imagine the environmental impacts of the current climate crisis, one address at a time.


How does the visualization tool work?

The visualizations you see are created using generative adversarial networks (GANs), a class of machine-learning frameworks that allow a computer to create and transform images. To find out more, please see the Science behind the project section below.

Origin of the name

We named this project This Climate Does Not Exist to emphasize that climate change is having dire consequences all around the world right now, even if you aren’t experiencing it in your own backyard.

It is also a reference to a trend that uses artificial intelligence to generate images of things that aren’t actually real, such as people, cats, butterflies and even feet.

We chose this name to illustrate that while the AI-generated images depicting the impacts of the climate crisis are not real, they serve to raise awareness and incite action.

Origin of the project

Many people contributed to this project. We are a group of AI scientists based at Mila, the Quebec AI Institute. We started working on this project in March 2019 and launched the website in 2021.

While we are not climate scientists, we have done our very best to gather credible sources from partners such as the Climate Action Network. We are computer scientists with a keen interest in climate change and environmental impacts, and we want to use AI to make the world a better place.

Connecting the dots: cognitive bias and climate change

Climate change is a major challenge for humanity. Preventing its catastrophic consequences will require changes in both policy-making and individual behaviours.

However, many cognitive biases prevent us from taking action, since climate change is an abstract phenomenon that can be hard to perceive as a direct threat to ourselves. This leads to psychological distancing: the perceived distance can be temporal (the effects of climate change lie in the future) and spatial (many of its effects happen far away from us). Research suggests that showing people images of the impacts of climate change, such as flooded streets, fire-ravaged forests and smog-bound cities, can help reduce this psychological distance, especially if the places are familiar to the viewer.

We believe that harnessing AI to create images of personalized climate impacts will be especially powerful in overcoming the barriers to action and raising awareness of this important issue. Our hope is that this project will empower viewers to rethink their cognitive biases and take action, as individuals and members of the global community, to stop climate change.

For more information about the research on behavioural science and climate change, please see this short literature review that we prepared.

The science behind the project

Using AI to generate images

Generative adversarial networks, or GANs, were invented in Montreal in 2014, giving AI the ability to generate new content, such as images, text and even music. At first, GANs learned to generate new images from a set of examples, such as convincing portraits of people who don't exist, produced after training on photos of real faces (e.g. This Person Does Not Exist). They were then improved to enable the transformation of one group of images into another. This process was pioneered by an architecture called CycleGAN, which can turn horses into zebras, apples into oranges and winter scenes into summer scenes.

We are using a new type of GAN (see our Publications) to generate the images of climate change that you see on our website.

To learn more about AI and GANs:

Online Courses:

Elements of AI
AI for Everyone
Deep Learning Essentials (requires some programming experience)
GANs specialization (requires previous AI knowledge)

Books:

Deep Learning (Goodfellow, Bengio and Courville, 2016)

Methodology

Generative adversarial networks, or GANs, are AI models composed of two neural networks: a generator and a discriminator.

These networks compete with each other. The goal of the generator is to fool the discriminator by creating images that are as realistic as possible, for instance, flooded streets, while the discriminator tries to distinguish the images created by the generator from real images of flooded scenes. This process leads the generator to gradually improve the quality of the images it creates and to fool the discriminator more and more often.

At the end of the GAN training process, the images created by the generator should be indistinguishable from the real images. This is called “convergence.” In practice, convergence is challenging to achieve, since images have many characteristics and attributes, so the generator has to learn a very complex mathematical function in order to generate realistic images. For this reason, our GAN took several months to design and train. We tried a variety of approaches before settling on the current one.
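To make this concrete, here is a minimal sketch of such an adversarial training loop in PyTorch. The architectures, dimensions and hyperparameters are illustrative placeholders, not the ones used in this project.

```python
# Minimal sketch of adversarial GAN training (PyTorch). All shapes and
# hyperparameters are illustrative, not this project's actual code.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 3 * 32 * 32

generator = nn.Sequential(                     # maps noise -> fake image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh())
discriminator = nn.Sequential(                 # maps image -> "real" logit
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):                   # real_images: (batch, img_dim)
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator step: label real images 1, generated images 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake_images), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```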

More specifically, our approach splits the problem of creating the events into several parts. The first part focuses on learning a shared “representation,” which is a set of numbers condensing the information contained in the images. The second part uses this information to create the various events from the same representation. This enables an efficient processing pipeline where we do not need to process the input image several times to produce the visualizations of the various events. Instead, we reuse this intermediate step of “encoding” the content of a picture.
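A minimal sketch of this shared-encoder design, with hypothetical layer sizes and head names (the project's actual architecture differs in its details):

```python
# Sketch of one shared encoder feeding several event-specific heads.
# Layer sizes and head names are hypothetical.
import torch
import torch.nn as nn

class MultiHeadModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared "representation": encode the input image once...
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # ...then reuse it for each event-specific prediction.
        self.depth_head = nn.Conv2d(64, 1, 1)   # smog: per-pixel depth
        self.sky_head = nn.Conv2d(64, 1, 1)     # wildfire: sky mask logits
        self.flood_head = nn.Conv2d(64, 1, 1)   # flood: water mask logits

    def forward(self, image):
        z = self.encoder(image)                 # computed only once
        return {
            "depth": self.depth_head(z),
            "sky_mask": torch.sigmoid(self.sky_head(z)),
            "flood_mask": torch.sigmoid(self.flood_head(z)),
        }

outputs = MultiHeadModel()(torch.randn(1, 3, 256, 256))
```

Because the encoder runs once per input image, adding a new event type only costs one extra decoder head rather than a full second pass over the image.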

To render smog, we use the intermediate representation to produce a depth map that predicts the distance of each pixel from the camera, so that we can properly scale the smog, with objects farther away being less clear than closer ones.
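As an illustration, this is how the standard atmospheric-scattering model often used for haze works: the observed image blends the scene with a uniform "airlight" according to a transmission that decays with depth. The attenuation coefficient and airlight colour below are assumed values, not the project's:

```python
# Depth-aware smog compositing with the standard haze model
# I = J * t + A * (1 - t), where transmission t = exp(-beta * depth).
import numpy as np

def add_smog(image, depth, beta=0.8, airlight=(0.76, 0.74, 0.70)):
    """image: (H, W, 3) floats in [0, 1]; depth: (H, W) distances in [0, 1]."""
    t = np.exp(-beta * depth)[..., None]        # farther pixels -> lower t
    A = np.asarray(airlight)                    # greyish smog colour (assumed)
    return image * t + A * (1.0 - t)            # blend scene with airlight

hazy = add_smog(np.random.rand(256, 256, 3), np.random.rand(256, 256))
```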

To create an image of a wildfire, we use the shared representation to build a “sky mask” that predicts which pixels in the input image belong to the sky, so that we can turn them orange, apply some blur, and then tweak the overall contrast.
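A sketch of that compositing step, assuming a predicted sky mask and illustrative colour, blur and contrast values:

```python
# Wildfire compositing sketch: tint sky pixels orange, soften the blend,
# then bump contrast. Colour, blur radius and contrast factor are assumed.
import numpy as np
from scipy.ndimage import gaussian_filter

def add_wildfire(image, sky_mask, sky_color=(0.9, 0.45, 0.1)):
    """image: (H, W, 3) in [0, 1]; sky_mask: (H, W) in [0, 1]."""
    m = sky_mask[..., None]
    out = image * (1 - m) + np.asarray(sky_color) * m   # turn sky orange
    out = gaussian_filter(out, sigma=(2, 2, 0))         # blur spatially only
    return np.clip(0.5 + 1.2 * (out - 0.5), 0, 1)       # tweak contrast

fiery = add_wildfire(np.random.rand(256, 256, 3), np.random.rand(256, 256))
```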

Finally, to simulate a flood, the shared representation is processed to output a flood mask that shows where water should be added in the original scene. This mask is then used by the last piece of our puzzle, another neural network, whose job is to paint water on the input image based on the flood mask and taking the input image’s context into account.
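A sketch of this final compositing step. The painter here is a stand-in convolutional stack, not the project's trained network; the mask-based blend at the end keeps the original context outside the flooded area:

```python
# Flood compositing sketch: a (stand-in) painter network renders water
# conditioned on the image and flood mask; the mask decides where the
# water replaces the original scene.
import torch
import torch.nn as nn

painter = nn.Sequential(                # stand-in for the painter network;
    nn.Conv2d(4, 16, 3, padding=1),     # input: image (3) + flood mask (1)
    nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
    nn.Sigmoid())

def add_flood(image, flood_mask):
    """image: (1, 3, H, W) in [0, 1]; flood_mask: (1, 1, H, W) in [0, 1]."""
    water = painter(torch.cat([image, flood_mask], dim=1))  # context-aware water
    return water * flood_mask + image * (1 - flood_mask)    # paste context back

flooded = add_flood(torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256))
```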

The three event pipelines share the same input image but differ in their processing steps:

Flood image processing: (1) input image, (2) compute flood mask, (3) paint water, (4) paste initial context.

Wildfire image processing: (1) input image, (2) increase contrast, (3) darken picture, (4) warm picture, (5) segment sky, (6) expand the sky segmentation map, (7) add Gaussian blur.

Smog image processing: (1) input image, (2) infer pseudo-depth map, (3) compute transmission, (4) scale by airlight, (5) compute input irradiance, (6) add the results of steps 4 and 5.

The Team

Project Leadership

Yoshua Bengio, Scientific director
Sasha Luccioni, Postdoc
Victor Schmidt, PhD

Machine Learning and Programming

Mélisande Teng, Intern
Tianyu Zhang, Intern
Alexia Reynaud, Intern
Sunand Raghupathi, Intern
Vahe Vardanyan, Postdoc
Nicolas Duchêne, Intern
Gautier Cosne, Intern
Adrien Juraver, Intern
Alex Hernandez-Garcia, Postdoc
Jason Ardel, Intern
Léopold Herlaud, Intern
Léonard Boussioux, App developer
Steven Bocco, App developer
Charles Guilles-Escuret, App developer
Mike Arpaia, Engineer
Shivam Patel, Intern
Sahil Bansal, Intern

Communications and Website Content

Marie-Claude Surprenant, Digital strategist
Brigitte Tousignant, Editor
Caroline Brouillette, Content Advisor
Silvie Harder, Content Advisor

Climate Science

S. Karthik Mukkavilli, Postdoc
Ata Madanchi, Intern
Hassan Akbari, Intern
Vitoria Barin Pacela, Intern
Yimeng Min, Intern

Behavioural Science

Erick Lachapelle, Collaborator
Thomas Bergeron, Collaborator
Ayesha Liaqat, Intern


Acknowledgements

We would like to thank Google, the National Geographic Society, MIT Creative Commons, Climate Outreach and Borealis.AI for their support.


Contact Us

To contribute to our project or to obtain usage permissions, write to us at thisclimate@mila.quebec


About Mila

Founded in 1993 by Professor Yoshua Bengio of the Université de Montréal, Mila is a research institute in artificial intelligence that brings together over 500 researchers specializing in machine learning. Based in Montreal, Mila aims to be a global hub for scientific advances that inspire innovation and the development of AI for the benefit of all.

Mila, a non-profit organization, is internationally recognized for its significant contributions to machine learning, especially in the areas of language modelling, machine translation, object recognition and generative models.


Project sources

Curious to know where our information comes from? Here is a list of our sources.