Computer Vision

Challenge 02 — Climate Observer

Problem Statement

The data needed to monitor our planet already exists. The problem is that nobody can read it fast enough.

Satellites photograph every corner of the Earth every few days. Cameras watch our cities around the clock. Thermal sensors measure heat our eyes can't see. But the sheer volume of imagery — from orbit to street corner — makes manual monitoring impossible. Illegal logging happens between satellite passes. Solar installations multiply faster than governments can count them. Urban neighbourhoods become heat traps while the measurements that could trigger action go unprocessed.
The challenge is to use Computer Vision to do what humans can't — automatically process raw imagery and sensor data, extract what matters, and turn it into something a conservationist, urban planner, or climate auditor can act on.

Data Available

The following datasets are suggested as starting points. They lean towards satellite and aerial imagery, but any visual or sensor data source is equally valid — ground-level camera feeds, thermal imaging, LiDAR, and more. Use whatever best supports your solution.

  • Hugging Face — Earth Observation Datasets — Labelled satellite and aerial imagery for land cover classification, deforestation detection, and flood mapping.
  • Microsoft Planetary Computer — Petabytes of environmental data including Sentinel-2, Landsat, and NAIP imagery, accessible via a unified API.
  • Sentinel Hub Open Data — Sentinel-1, Sentinel-2, and Landsat imagery. Sentinel-2 is the gold standard for global vegetation and land-use monitoring.
  • Google Earth Engine Catalog — Earth observation data including MODIS for monitoring thermal anomalies and urban heat.
  • Dynamic World — Near real-time, 10-metre resolution global land use/land cover dataset by Google and World Resources Institute.
  • Global Forest Watch — Forest change, tree cover loss, and satellite-based fire alert datasets.
  • DeepSolar Dataset — Solar installations identified via deep learning, useful for benchmarking solar adoption tracking.
  • FLIR Thermal Dataset — Thermal infrared imagery free for research use, covering environmental scenes for ground-level heat detection.
  • swisstopo — Free Geodata — High-resolution aerial imagery (SWISSIMAGE), digital surface models, and land-use data for Switzerland.
  • Swiss Territorial Data (opendata.swiss) — Swiss solar potential, forest boundaries, and building footprints across all cantons.
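Several of the datasets above (Sentinel-2, Dynamic World, Global Forest Watch) are built on vegetation indices. As a hedged, minimal sketch of the most common first step with Sentinel-2 imagery, the NDVI below is computed from synthetic stand-in arrays for the NIR (B08) and red (B04) bands; the band names and toy values are illustrative, not tied to any specific download:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values near +1 indicate dense vegetation; near 0, bare soil or
    built surfaces; negative values typically indicate water or cloud.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on no-data pixels
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Synthetic 2x2 reflectance patch standing in for Sentinel-2 B08 / B04
nir = np.array([[0.8, 0.6], [0.3, 0.0]])
red = np.array([[0.1, 0.2], [0.3, 0.0]])
print(ndvi(nir, red))
```

In a real pipeline the two arrays would come from a raster reader such as rasterio applied to downloaded Sentinel-2 tiles; the index itself is unchanged.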

Expected Output

  • Detection Pipeline — A functional Computer Vision model or script (e.g. PyTorch, TensorFlow, or OpenCV) that processes imagery or sensor data to identify a specific environmental feature or change.
  • Insight Dashboard — A visual output — map or chart — showing the results (e.g. "Solar panels detected in District X" or "Estimated forest loss in Region Y").
  • Approach Summary — A brief explanation of your method: what you are detecting, what data you used, and how you validated it. No specific architecture required.
  • User Scenario — A clear demonstration of who would use this tool and what decision they could make with the output.
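To make the "actionable metric" expectation concrete, here is a hedged sketch of a minimal detection pipeline: differencing two NDVI rasters to estimate forest loss in hectares. The thresholds (0.6 for "forested", a 0.3 drop for "lost") and the 10 m pixel size are illustrative assumptions, not validated values:

```python
import numpy as np

PIXEL_AREA_M2 = 10 * 10  # assumes Sentinel-2-style 10 m pixels

def estimate_loss_hectares(ndvi_before: np.ndarray,
                           ndvi_after: np.ndarray,
                           drop_threshold: float = 0.3,
                           forest_threshold: float = 0.6) -> float:
    """Count pixels that were forested before and lost significant NDVI after.

    Thresholds here are illustrative placeholders; a real pipeline would
    calibrate them against labelled reference data.
    """
    was_forest = ndvi_before >= forest_threshold
    lost = (ndvi_before - ndvi_after) >= drop_threshold
    n_pixels = int(np.count_nonzero(was_forest & lost))
    return n_pixels * PIXEL_AREA_M2 / 10_000  # m^2 to hectares

# Toy example: one of four pixels loses its vegetation cover
before = np.array([[0.8, 0.8], [0.5, 0.2]])
after  = np.array([[0.8, 0.3], [0.5, 0.2]])
print(f"Estimated forest loss: {estimate_loss_hectares(before, after):.2f} ha")
# One 10 m pixel -> 0.01 ha
```

The point is the output: a single number ("X hectares lost in Region Y") that a conservationist can act on, rather than a raw classification map.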

Demonstrating Reliability

Computer Vision models fail in predictable ways — clouds obscure imagery, shadows mimic objects, seasonal changes confuse classifiers. Your presentation must include a live demonstration of at least one of the following:

  • The tool correctly rejects or flags a known false positive (e.g. a skylight identified as a solar panel, a cloud misread as a deforested patch)
  • The tool displays a confidence score or uncertainty indicator alongside its output, rather than presenting results as absolute
  • The tool identifies a known limitation (e.g. "results in high-cloud-cover regions should be treated as estimates") and communicates this clearly
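One lightweight way to satisfy the second bullet is to report a confidence band rather than a binary verdict. The sketch below is a hedged illustration; the 0.5 threshold and the 0.15 review band are arbitrary assumptions, and `report_detection` is a hypothetical helper, not part of any named library:

```python
def report_detection(label: str, score: float, threshold: float = 0.5,
                     review_band: float = 0.15) -> str:
    """Turn a raw model score into a hedged, reviewable result.

    Scores within `review_band` of the threshold are flagged for
    human review rather than reported as firm detections.
    The threshold and band are illustrative placeholders.
    """
    if score < threshold - review_band:
        return f"{label}: not detected (score {score:.2f})"
    if score < threshold + review_band:
        return f"{label}: uncertain (score {score:.2f}), flag for manual review"
    return f"{label}: detected (score {score:.2f})"

print(report_detection("solar panel", 0.92))
print(report_detection("solar panel", 0.55))  # near the threshold, so flagged
```

Surfacing the "uncertain" band in the dashboard gives judges a direct, demonstrable answer to the false-positive question: borderline skylights land in the review queue instead of the detection count.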

This will be evaluated by judges during the 5-minute presentation.

Success Criteria

  • Detection Accuracy — The model correctly identifies its target feature and demonstrates awareness of where it may fail.
  • Actionability — The output provides a clear metric a real decision-maker could act upon — not just a visualisation.
  • Data Efficiency — Teams are judged on how effectively they used available data, not on model sophistication. A well-applied pre-trained model is valued as highly as a custom architecture.
  • Scalability — The solution demonstrates or credibly describes how it could be applied beyond the sample area used in the demo.
Register via Climate Week Zurich