Artificial intelligence (AI) systems can significantly impact democracy: on the one hand, they can analyse at scale the online expression of citizens' needs and expectations, while on the other hand, they can enable malpractice such as mass disinformation (e.g. through AI-based fake news generators). Conversely, democracy deeply affects which AI systems each society will implement and use. The EU carries the burden of helping its citizens reach a consensus, in the hope of achieving AI democratisation.

The polarised expectations surrounding the interaction between AI and democracy are a challenge for domain experts and everyday people alike. The discussions and mass media coverage cause information overload, mingling conspiracy theories with fact-supported claims. This significantly hinders the implementation of AI democratisation initiatives because it reduces trust in AI systems and experts alike. So how does the EU battle the pitfalls of this environment? It has chosen to contribute primarily through regulation. Is this enough?

This is, unfortunately, not the first time we have lived through such information-harsh times: the COVID pandemic produced polarised, populist phenomena across the socio-political board. Policies were undermined and misinterpreted, imposed and counter-attacked at different scales, all due to a lack of trust. Miscommunication of what medical practice could and could not do created false expectations and caused a barrage of (perceived) failures. This is precisely the kind of polarisation we now face with AI.

The EU actively confronts the darkest scenarios through regulation, aiming to prove that human vigilance can mitigate the misuse of AI and make it trustworthy. To this end, it has brought the AI Act into the regulatory landscape. This work builds upon a long line of working groups (e.g. the High-Level Expert Group on AI, the European AI Alliance) and related publications (the Ethics Guidelines for Trustworthy AI, the white paper on a European approach to excellence and trust, and more), including public consultations (e.g. on a European approach to excellence and trust in AI) and sandboxing (e.g. the AI regulatory sandbox in Spain).

Based on estimated levels of AI application risk, the EU prohibits dangerous uses of AI in specific cases, including "cognitive behavioural manipulation of people or specific vulnerable groups", a case also applicable to election scenarios. In this way, the EU aims to ensure trustworthy AI systems because the law will mandate them. However, is this (necessary) regulation the only missing piece for achieving trust? Or are we asking people (including legislators) to define what is socially and politically acceptable while they still lack an understanding of what AI is and is not, and of what it can and cannot achieve?

An ethics-by-design requirement for AI was outlined in Europe through a 2019 resolution of the European Parliament stating that "any AI model deployed should have ethics by design". This led to numerous initiatives such as ethics checklists, upcoming standards, related committees, and tool sets for the technically inclined. These initiatives offer cross-disciplinary contributions for evaluating AI systems and applications throughout their life-cycle, from ideation to final implementation and deployment. The Ethics by Design for AI (EbD-AI) methodology has become an integral part of the Horizon Europe ethics review framework, directly affecting both AI systems development and fundamental research in AI.

What is critical in such ethical design approaches and frameworks is to allow genuine co-creation with end users. This implies support for differing ethical expectations and rules across the multi-cultural landscape of the EU. It also implies a strong need for interaction between users and successive system versions, to achieve what the multi-million-user beta testing of LLM platforms achieved: identifying the unacceptable and pursuing the acceptable.

This iterative personal experience is needed at scale, possibly in sandbox environments such as the Testing and Experimentation Facilities (hybrid physical-digital testing spaces) or the Living Labs and related campuses across the EU. Such interaction would allow people to determine how to request and define a trustworthy system, instead of asking an expert to certify it through a (black-box) scientific process. Coupled with citizen-training initiatives, such as the Pioneers for AI in Greece, we can envision genuine democratisation from the ground up, which is the only way to reach a sustainable AI-powered future.

With the EU elections close at hand, significant effort has, quite ironically, been invested in using AI to counter the potential impact of AI on such deliberation events. From media observatories to EU-funded projects that support media professionals and policymakers or coach citizens, the arsenal against mass misinformation is growing. However, we need more than the AI Act and its "cognitive behavioural manipulation" use case.

History teaches us that misinformation calls for another line of defence: the citizens themselves. This line goes beyond information processing capacity, technical prowess, and an AI-aware attitude to pre-election communications. It rests on deeper, more fundamental roots: societal and personal values.

The EU's opportunity is to tap into the non-technical potential of its multi-cultural mosaic. We need a socio-anthropological understanding of the causes of citizens' spasmodic reactions to misinformation "thorns": a new paradigm of democratic awareness based on values. We must strive for a multi-disciplinary and historically aware systematic effort to engage with the grassroots of our EU communities, update our experience with emerging wisdom, and invest in qualitative research. Finally, we need to facilitate knowledge sharing across EU communities, increasing mutual understanding between peoples and respect for their idiosyncrasies, so that we learn which AI the EU democracy can raise and embrace.

All of the above highlights the critical role of policymakers in establishing this paradigm: as thought and action leaders, grounding their decisions in scientific evidence and shaping a future where AI supports EU democratic values.
