The Planetary Computer: putting global-scale geospatial data to work for environmental sustainability
Webinar Speaker: Dan Morris, Microsoft AI for Earth, USA
About the Webinar
Environmental science depends on large geospatial data sets, particularly remotely sensed imagery and climate forecast data. However, conservation practitioners and environmental policymakers are often unable to use this type of data at a global scale, because doing so requires niche expertise in both geospatial analysis and distributed computing. Microsoft’s Planetary Computer aims to reduce this friction – and to put large geospatial data to work for sustainability – by combining a multi-petabyte data catalog, a set of APIs for querying and processing data from that catalog, and a managed data science environment that lets users scale their analyses without worrying about computing infrastructure. This talk will include an overview of the Planetary Computer platform, as well as a discussion of some of the applications our partners are building that put the platform to work for environmental sustainability and conservation decision-making.
About the Speaker
Dan Morris is a Principal Scientist with Microsoft’s AI for Earth program, focused on accelerating innovation at the intersection of machine learning and environmental sustainability, particularly through the Planetary Computer platform. When he’s not moving geospatial data around on the cloud, his work includes computer vision applications in wildlife conservation, for example the AI for Earth Camera Trap Image Processing API. Before joining AI for Earth, he worked in Microsoft’s Medical Devices Group, developing signal processing and machine learning techniques for cardiovascular health monitoring. His earlier work spanned signal processing and machine learning for input systems, making medical information more useful to hospital patients, automatic exercise analysis from wearable sensors, and generating musical accompaniment for vocal melodies (the “Songsmith” project). Before coming to Microsoft, he studied neuroscience at Brown and developed brain-computer interfaces for research and clinical environments. His PhD work at Stanford focused on haptics and physical simulation for virtual surgery.