Since the 2010s, the Earth science community has seen rapid growth in interest in, and the practice of, adopting machine learning in research and discovery. Fast-accumulating Earth science data and mature cloud technology have accelerated the growth of machine learning applications in the Earth sciences across academia, government agencies, and industry.
Until recently, however, there has been a lack of systematic strategies and community-driven standards to steward and coordinate machine learning applications for the Earth sciences. The NOAA Research Council recently released its strategy for artificial intelligence, outlining its vision to “dramatically expand the application of artificial intelligence in every NOAA mission area by improving the efficiency, effectiveness, and coordination of AI development and usage across the agency.” Similarly, other government agencies and the private sector have formulated their own visions and practices for adopting ML/AI to advance their missions and put data to work.
In this session, we invite representatives from various government agencies and organizations to share their perspectives on adopting machine learning for Earth sciences (ML4ES). The session will open with a set of brief presentations outlining the current landscape of ML4ES, followed by a panel discussion on how the ESIP community can contribute to and shape this landscape.
This stand-alone session will also serve to inform a follow-on session: a conversation about possible Cluster activities and outputs.
Panelists
Pete Doucette, Integrated Science and Applications Branch, Earth Resources Observation and Science Center, USGS
Eric Kihn, Director, NCEI’s Center for Coasts, Oceans, and Geophysics (CCOG), NOAA
Dan Morris, Program Director, Microsoft AI for Earth
Catherine Nakalembe, Department of Geographical Sciences, University of Maryland, NASA Harvest Project
Dan Pilone, CEO/CTO, Element 84
Takeaways
- Big need for analysis-ready data (ARD). But what does that mean? Except at the most basic level, readiness is relative to a specific problem.
- Maturity level criteria are needed for both datasets and models: ORL-type assessments, evaluation metrics.
- We need best practices and standards for putting data into a data lake so that datasets are interoperable. ESIP is the best place to develop cross-organizational standards!