Reflections from the selection panel: What AI is teaching us about fairness and the future of opportunity

Dec 8, 2025

AI is helping young Africans sound more polished, but it may also be quietly reshaping who gets recognised. If fluency becomes the filter for opportunity, fairness is no longer guaranteed.

Serving on the selection committee for the Class of 2026 of what is arguably Africa’s most prestigious scholarship was, as always, an inspiring experience. I am proud to be an alumnus of this programme, and it remains close to my heart. Each time I have the privilege to serve, I leave deeply energised by the passion, brilliance, and potential of the young African applicants. It is an encounter that consistently renews my hope in this continent and in our young people’s capacity to shape its future for the better.

But this year felt different. Something subtle yet significant had shifted.

Out of more than 2,500 applications, only about 90 candidates made it to the interview stage. I had the privilege of reviewing and interviewing roughly a quarter of these finalists. The remaining applicants were filtered out through three earlier rounds of review, each conducted by different alumni reviewers. It’s a rigorous process, and rightly so, but one that made me reflect deeply on who might be getting left behind, and why.

AI is levelling and tilting the playing field at the same time

As I read through applications, I began to notice something striking. The tone and structure of many essays carried a kind of polish and fluency that didn’t always align with the applicants’ spoken expression. In one case, for example, an applicant’s essay displayed remarkable coherence and eloquence that stood in stark contrast to how she articulated the same ideas during the interview.

By my estimation, more than 80% of the applications I reviewed were, to varying degrees, AI-augmented. That is understandable given how accessible AI tools have become, but it introduces a set of ethical and procedural challenges that deserve deliberate attention. AI presents both opportunity and risk. On one hand, it can help applicants express their ideas with greater clarity and confidence. On the other, it can mask authentic voices and amplify inequities between those who have access to AI tools and those who don’t.

And here lies the deeper concern: what about the 2,400 applicants who never reached the interview stage? If AI-influenced writing played a role, consciously or not, in how reviewers perceived quality, authenticity, or coherence, then we must ask whether talented candidates may have been filtered out not for any lack of potential, but because of their digital access or their choices around using AI.

Without clear guidance, reviewers may unconsciously reward polished essays over originality, or penalise authenticity that lacks linguistic sheen. What was once a level playing field is quietly becoming an uneven one, not because of intent, but because of how pervasive the use of these technologies has become.

Why foundations and scholarship programmes must pay attention

Around the world, universities and grant-making institutions are struggling with the same dilemma. Some have banned AI outright; others are experimenting with transparency statements or reviewer training to ensure fair assessment.

For many of these institutions rooted in fairness, access, and excellence, this is an important moment to pause and ask:

  • How do we define authenticity in an age when AI can help us sound more articulate than we are?
  • How do we maintain fairness when digital literacy and access vary so widely?
  • How might we use AI ethically to enhance, rather than distort, opportunity?

These are not just policy questions. They are ethical ones, and how we answer them will shape the future of access and opportunity. 

Moving from reaction to readiness

In 2023, I completed my PhD in AI Ethics, a journey that explored precisely these intersections between technology, fairness, and human judgment. Out of that work, I co-founded AlgoViva, a company dedicated to helping organisations unlock the value of AI and emerging technologies in ways that are ethical, safe, and human-centred.

At AlgoViva, we see again and again that the challenge is not the technology itself, but the alignment between people, processes, and values. Most data breaches, for instance, stem not from malicious code but from human behaviour. Similarly, most ethical failures in AI come not from bad algorithms but from systems that weren’t designed with care and foresight.

For foundations and institutions shaping the next generation of leaders, this is a chance to model digital maturity: to show that ethics and innovation are not competing goals but complementary ones, and that it is indeed possible to balance ethics with institutional mandates.

An invitation to thoughtful leadership

The question is no longer whether AI belongs in education and scholarship processes; it’s how to use it responsibly. The goal isn’t to police technology but to equip people (staff, reviewers, and applicants) with clarity, guidance, and confidence.

A clear AI policy, training for assessors, and open conversation about acceptable use can go a long way in ensuring that fairness keeps pace with innovation.

For me, being part of the selection process this year reaffirmed something simple but profound: technology doesn’t replace human judgment; it tests it.

And that’s a test we can pass, if we approach it with curiosity, humility, and the courage to lead in complexity.