Department of Computer Science and Technology

Date: 
Wednesday, 9 July, 2025 - 09:00 to 10:30
Speaker: 
Dr. Nicola Palladino, University of Salerno
Venue: 
Computer Laboratory, William Gates Building, Room FW26

AI governance is increasingly shaped by a complex interplay of normative approaches. While high-level principles such as fairness, transparency, accountability, and safety are widely recognized across governance frameworks, their implementation varies significantly. The growing geopolitical significance of AI has driven governments to develop distinct strategies and policies, giving rise to three main models of AI governance.

The Neoliberal Model, championed by the United States, prioritizes market-driven innovation, industry self-regulation, and minimal government intervention. Digital Sovereignty, exemplified by China, reflects a state-controlled, security-driven approach that emphasizes data localization and algorithmic transparency tailored to government priorities, particularly information control and social stability. The European Union's Digital Constitutionalism model embeds fundamental rights and democratic oversight into AI regulation, aiming for human-centric, trustworthy, and accountable AI governance.

However, the boundaries between these governance paradigms are increasingly blurring. Under the Biden administration, the U.S. briefly moved closer to the EU model before reverting to a neoliberal stance, leveraging Big Tech companies as proxies of power and security actors. The EU struggles to balance its ambition to lead in Trustworthy AI against competitiveness and security concerns. China, while maintaining strict state control, has introduced selective innovation incentives and consumer rights protections with distinct "Chinese characteristics." Rather than fostering cross-fertilization of these models, these shifting boundaries appear to reflect escalating geopolitical tensions, making international consensus on AI governance increasingly difficult to achieve.

Seminar series: 
Foundation AI