
About the Global Education Observatory

Founded in 2018 at the William & Mary geoLab, the Global Education Observatory (GEO) consolidates and enriches global educational data, transforming it into actionable insights and tools for educators, policymakers, and researchers worldwide.

A Singular Repository of Diverse Educational Data

At GEO, we recognize the power of information in shaping the future of education. Our primary endeavor is to amalgamate data from open government databases and other authoritative sources into a single, comprehensive repository. This unique collection not only offers a panoramic view of global education but also serves as a valuable resource for comparative analysis and trend identification.

Student-Led

Our team comprises dedicated graduate and undergraduate students from the geoLab, bringing together diverse skills and perspectives. This vibrant academic environment fosters innovation and creativity, allowing us to approach challenges with fresh ideas.

Innovating with Machine Learning

We employ advanced machine learning techniques to uncover patterns and fill gaps in educational data. Our most recent work demonstrates our capability to estimate school test scores from publicly available imagery. Using convolutional neural networks (CNNs) and multi-source ensembles, we have achieved predictive accuracies of 76% to 80% for individual schools in countries such as the Philippines and Brazil. This approach underscores our commitment to leveraging technology for educational insight: our research not only adds to the academic discourse but also paves the way for operational applications of CNN-based methodologies in educational assessment.
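As an illustration only, and not the team's actual codebase, the sketch below shows the general shape of such a setup in PyTorch: a small convolutional network that regresses a test-score estimate from an image tile, with predictions from several imagery sources averaged as a simple ensemble. The class names, layer sizes, and the two imagery sources are hypothetical.

# Illustrative sketch (not GEO's actual pipeline): a small CNN that regresses
# a school-level test-score estimate from an image tile, plus a naive average
# ensemble over multiple image sources. All names here are hypothetical.

import torch
import torch.nn as nn

class ScoreRegressionCNN(nn.Module):
    """Maps a 3-channel image tile to a single predicted test score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 32, 1, 1)
        )
        self.head = nn.Linear(32, 1)          # single regression output

    def forward(self, x):
        x = self.features(x).flatten(1)       # (B, 32)
        return self.head(x).squeeze(1)        # (B,)

# One model per imagery source; predictions are averaged as a simple ensemble.
models = {"daytime": ScoreRegressionCNN(), "nighttime": ScoreRegressionCNN()}

def ensemble_predict(tiles: dict) -> torch.Tensor:
    """tiles maps source name -> (B, 3, H, W) image batch for the same schools."""
    preds = [models[src](imgs) for src, imgs in tiles.items()]
    return torch.stack(preds).mean(dim=0)     # average across sources

# Example: two sources, a batch of 4 schools, 64x64 tiles.
batch = {src: torch.randn(4, 3, 64, 64) for src in models}
print(ensemble_predict(batch).shape)          # torch.Size([4])

In practice, pretrained backbones, careful validation against held-out schools, and calibration across imagery sources matter far more than the specific architecture shown here; the sketch only conveys the overall structure of a CNN regression plus ensemble averaging.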
School Level Data: Explore data on resources, personnel, assessments, and more at the school level.

API: Download any of the data we publish through our API (a minimal example is sketched below).

Machine Learning: Our team trains a suite of machine learning models to fill gaps in education data globally. Get the resources for those models on GitHub below.
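For readers who want a programmatic starting point, here is a minimal, hypothetical sketch of pulling records from a REST-style API with Python's requests library. The base URL, endpoint path, and query parameters below are placeholders, not GEO's documented interface.

# Hypothetical example of fetching a dataset over HTTP with the "requests"
# library. The URL, path, and parameters are placeholders only.

import requests

BASE_URL = "https://example.org/api/v1"       # placeholder, not the real GEO endpoint

def fetch_school_data(country_code: str, year: int):
    """Request school-level records for one country and year, returning parsed JSON."""
    response = requests.get(
        f"{BASE_URL}/schools",
        params={"country": country_code, "year": year},
        timeout=30,
    )
    response.raise_for_status()               # fail loudly on HTTP errors
    return response.json()

# records = fetch_school_data("PHL", 2022)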