(Science|Business) EU Ombudswoman probes use of AI to evaluate research funding bids
- Feb 12
The European Ombudswoman, Teresa Anjinho, has opened an inquiry into the use of artificial intelligence by external experts evaluating EU funding proposals, to determine whether sufficient safeguards are in place. The probe, the first of its kind, follows a complaint submitted in October by a Polish company after it failed to obtain Horizon Europe funding.
The unnamed complainant applied in 2023 to the highly competitive European Innovation Council Accelerator programme, but was not awarded funding after the November 8 application cut-off. It claims that external experts assigned to the evaluation by the European Innovation Council and SMEs Executive Agency (EISMEA) used third-party AI systems “in a manner that had made the evaluation unfair.”
The company wants to see clearer rules governing the use of AI by evaluators, and greater transparency towards applicants about that use. It has also requested compensation, citing concerns that uploading details from its proposal to an AI system could lead to the disclosure of confidential business information.
In a letter addressed to EISMEA on February 4, Anjinho said that she did not have sufficient grounds to investigate the complainant’s proposal evaluation. However, she was opening an inquiry into “the systemic aspect of the complaint” pertaining to the rules that govern the use of AI systems by evaluators and their transparency under EISMEA programmes.
In addition to the EIC programmes, EISMEA manages the European Innovation Ecosystems programme under Horizon Europe and the Interregional Innovation Investments Instrument, a part of the European Regional Development Fund.
“This is the first inquiry that we have opened related to AI use by experts evaluating proposals for EU funding,” Honor Louise Mahony, deputy head of communication for the European Ombudswoman’s office, told Science|Business.
The probe will cover both EISMEA and the European Commission, since the EU executive is the origin of EISMEA’s rules on this issue. It is unclear at present whether this will also set a precedent for the use of AI in the evaluation of other Horizon Europe bids, the majority of which are not handled by EISMEA.
Anjinho intends to inspect the documents that the two institutions have on the use of AI by external experts by March 18, including the handling of the complainant’s concerns, and hold a meeting with their relevant representatives by April 20 before deciding on the next steps to take.
Ahead of this meeting, the Ombudswoman shared a series of questions that would be put to EISMEA and the Commission, including their assessment of the risks and opportunities that stem from the use of third-party AI systems by expert evaluators and external reviewers, as well as the rules in place and disclosure requirements surrounding this use.
While EISMEA has established that some of its evaluators were indeed using AI to edit their reviews and to research background information, generative AI tools are also increasingly popular among applicants, who use them to refine proposals, structure research questions, write impact statements and format submissions to match funding agency guidelines.
It is suspected that the use of AI is behind a surge in applications for EU research funding, driving Horizon Europe success rates for certain calls down as low as 2% in 2025.