Impact AI: Evaluating the impact of AI for sustainability and public interest
Artificial intelligence (AI) presents significant opportunities to advance sustainability and serve the public interest. It can, for instance, assist in monitoring biodiversity, clearing the seabed of waste, and tracking carbon emissions. However, these promising capabilities come with challenges, including high energy and resource consumption. While AI holds great potential, fundamental questions remain: How sustainable are these systems? And how can their societal benefits be effectively assessed?
The research project Impact AI: Evaluating the social impact of AI systems for sustainability and the public interest seeks to address these questions by analysing AI-driven sustainability initiatives designed for the public interest. The project’s objective is to develop a transdisciplinary methodology for assessing AI projects in terms of their contribution to social transformation and environmental sustainability. The goal is not only to quantify the impact of AI systems but also to identify concrete measures for their responsible deployment.
The resulting evaluation methodology will be made publicly accessible, ensuring broad applicability—whether by trained experts or directly by organisations. Through this approach, Impact AI will not only contribute to tangible sustainability objectives but also foster the informed and responsible development of AI systems in the public interest.
This project is being carried out in collaboration with Greenpeace e.V. and Gemeinwohl-Ökonomie Deutschland e.V.
Project goals and approach
The project integrates scientific research with practical applicability to evaluate the social and environmental impact of AI initiatives. Its core objectives and approach can be outlined as follows:
The methodology will integrate quantitative measures, such as data and metrics, with qualitative assessments of ethical and social dimensions to ensure a comprehensive evaluation.
Method development
Developing a robust and reliable evaluation methodology is crucial for systematically assessing both the potential and limitations of AI projects. The methodology follows a structured approach based on the following steps:
Preliminary evaluation model
In collaboration with partner organisations, a tailored model will be developed specifically for AI projects focused on public interest and sustainability.
Integration of quantitative and qualitative methods
Data-driven analysis will be combined with assessments of ethical and social dimensions to provide a comprehensive evaluation.
Testing and refinement
Three case studies will be conducted each year to test and refine the methodology, ensuring its robustness and applicability.
Dr. Theresa Züger, Research Group Lead: Public Interest AI | AI & Society Lab; Co-Lead: Human in the Loop
Funding
Duration: 2025 to 2029
Funding: VolkswagenStiftung as part of the Change! Fellowship programme
Header image: David Clode via Unsplash