Shadows of regulation: AI use in the EU public sector

This is a guest post by Svitlana Blyzniuk, Program Manager, Head of Enterprise Content Management Competence Center at Sigma Software Group.


We all live in the age of AI, which is exciting and frightening at the same time. Rapid advances in computing power, growing data availability, and new algorithms have driven phenomenal breakthroughs in AI. This “new paradigm of technology” has unveiled vast potential to transform societies, governments, and economic systems, establishing itself as one of the most significant innovations of the century.

The potential benefits of AI are immense, but it is crucial to manage the associated risks while upholding democratic values and respecting human rights. In this article, we will explore the current restrictions on the use of AI in the EU public sector and examine the regulatory framework designed to ensure the responsible and ethical use of AI technologies. 

Today, AI remains a highly polarised technology. Some view it as a catalyst for innovation and efficiency, while others express scepticism regarding issues such as privacy, security, and ethical implications. According to a European Commission survey, 82% of respondents are concerned about the misuse of AI in public services. 

Despite these concerns, AI has long been integrated into public services, enhancing areas such as healthcare, law enforcement, and public administration. For instance, AI algorithms improve diagnostic accuracy in healthcare, predict crime patterns for better policing, and streamline bureaucratic processes. These applications illustrate AI’s potential to enhance the efficiency and effectiveness of public services significantly. 

As AI continues to evolve, the need for stringent regulation becomes increasingly pressing. The EU is leading this regulatory effort, striving to balance innovation with the protection of individual rights. Trust in the EU’s ability to regulate AI is substantial: the share of European citizens advocating for government restrictions to protect jobs rose from 58% in 2022 to 68% in 2023, a surge likely influenced by the arrival of popular generative AI tools such as ChatGPT, Gemini, and Midjourney. In some countries the increase was especially marked (Sweden +50%, the UK +40%).

While the potential of AI in the public sector is promising, it is worth spelling out the distinct opportunities it presents. AI can automate repetitive tasks, freeing government employees to focus on more valuable, strategic initiatives, and it enables data-driven decision-making, improving public services’ ability to respond effectively to citizens’ needs. It is also useful to distinguish AI from predictive analytics: predictive analytics analyses data to identify trends and forecast outcomes, whereas AI encompasses a broader range of capabilities, including mimicking human decision-making processes.

However, the use of AI has its limitations, particularly due to regulations in some countries that restrict its application in decision-making. In Sweden, for instance, legislation mandates human oversight, ensuring that AI functions as a tool rather than a substitute for human judgment. This requirement aligns with broader EU regulations, such as the EU Artificial Intelligence Act (AIA) and the General Data Protection Regulation (GDPR), both of which emphasise the need for human oversight to ensure fairness and accountability in automated decision-making systems.
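
As a sketch of what “AI as a tool rather than a substitute” can mean in software terms, the hypothetical Python snippet below treats model output as advisory until a named human reviewer signs off. The class and function names are illustrative assumptions, not drawn from any Swedish or EU specification.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """AI output that stays advisory until a human signs off (illustrative)."""
    case_id: str
    suggested_decision: str
    confidence: float

def finalise(rec: Recommendation, reviewer: str, approved: bool) -> str:
    """No recommendation takes effect without an identified human reviewer."""
    if not reviewer:
        raise ValueError("human review is mandatory before any decision")
    outcome = rec.suggested_decision if approved else "escalated for manual handling"
    return f"case {rec.case_id}: {outcome} (reviewed by {reviewer})"

rec = Recommendation(case_id="2024-0042", suggested_decision="grant permit", confidence=0.91)
print(finalise(rec, reviewer="case.officer@agency.example", approved=True))
```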

While AI has the potential to modernise and personalise public service delivery, it also raises significant concerns that must be addressed. The 2020 report by AI Watch, an initiative of the European Commission’s Joint Research Centre, highlighted these limitations. Various restrictions are codified in the EU’s AI Act, which seeks to establish a comprehensive framework for AI deployment in the public sector. This act, which entered into force in August 2024, emphasises accountability, transparency, and fairness in AI applications, ensuring that technology aligns with democratic values and upholds individual rights.

As we navigate this transformative landscape, striking the right balance between leveraging AI’s benefits and implementing necessary restrictions will be crucial for fostering trust in public services and safeguarding citizens’ rights. 

The AI Act categorises AI systems into four risk levels: unacceptable, high, limited, and minimal. Systems posing unacceptable risks, such as social scoring and manipulative AI, are banned outright. High-risk AI is heavily regulated, while limited-risk systems, such as chatbots and deepfakes, are subject to lighter transparency rules, requiring users to be informed that they are interacting with AI. Minimal-risk AI, including video games and spam filters, remains largely unregulated, although this may change with the rise of generative AI.
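
To make the tiering concrete, here is a minimal illustrative sketch in Python of how a compliance team might encode this triage. The RiskTier enum, the keyword sets, and classify_system are hypothetical; real classification under the Act turns on its annexes and legal analysis, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "heavily regulated"
    LIMITED = "transparency duties only"
    MINIMAL = "largely unregulated"

# Illustrative keyword triage; a real assessment follows the Act's
# annexes and legal analysis, not string matching.
PROHIBITED_USES = {"social scoring", "predicting criminal behaviour"}
HIGH_RISK_USES = {"visa assessment", "asylum application review",
                  "travel document verification"}
TRANSPARENCY_USES = {"chatbot", "deepfake generation"}

def classify_system(intended_use: str) -> RiskTier:
    """Map an intended use to an AI Act risk tier (hypothetical helper)."""
    use = intended_use.lower()
    if use in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify_system("chatbot").name)      # LIMITED
print(classify_system("spam filter").name)  # MINIMAL
```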

Prohibited AI practices are those deemed harmful enough to warrant an outright ban. For example, using AI or predictive analytics to analyse individual or group behaviour to predict criminal activity raises significant ethical concerns. Such practices risk infringing on fundamental principles such as transparency, safety, non-discrimination, and accountability, all of which are critical for maintaining public trust and ensuring AI respects human rights.

With high-risk systems, the primary concern is their potential to cause harm, including human rights violations or broader societal damage. Examples include AI used to verify travel documents and visas, or to assess asylum applications.

Ensuring transparency and accountability is crucial when deploying AI systems. It is essential to clearly set out how and where AI is used, which data sources are involved, and the logic underlying its decision-making. Making this information available helps AI deployments overcome one of their most significant limitations: opacity.

Moreover, conducting thorough impact assessments before deploying AI is imperative, including evaluating risks related to privacy, non-discrimination, and other human rights concerns. Neglecting these assessments undermines the principle of transparency, which the AI Act reinforces by mandating that AI systems be open about their operations.

Additionally, making AI systems accessible and well-documented for public scrutiny is vital. This enables citizens, as stakeholders, to assess risks and contribute to discussions surrounding AI use, thus fostering a more inclusive decision-making process. 
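
As an illustration only, a public body could publish a machine-readable transparency record for each deployed system. The fields below are hypothetical, loosely inspired by model-card practice, and not a format prescribed by the AI Act.

```python
from dataclasses import dataclass

@dataclass
class TransparencyRecord:
    """Hypothetical public record for a deployed AI system."""
    system_name: str
    purpose: str                   # how and where the AI is used
    data_sources: list[str]        # datasets the system relies on
    decision_logic: str            # plain-language summary of the logic
    human_oversight: str           # who reviews or can override outputs
    impact_assessment_done: bool   # privacy / non-discrimination review
    contact: str                   # where citizens can raise concerns

record = TransparencyRecord(
    system_name="Benefit Eligibility Screener",
    purpose="Pre-screens applications; final decisions made by caseworkers",
    data_sources=["application forms", "national income register"],
    decision_logic="Rule-based checks plus an advisory risk score",
    human_oversight="A caseworker reviews every flagged application",
    impact_assessment_done=True,
    contact="ai-transparency@example.gov",
)
print(record.system_name, "-", record.purpose)
```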

The General Data Protection Regulation (GDPR) plays a critical role in European society, especially when it comes to AI. Many AI systems utilise cloud-based components or third-party libraries that process personal data, making it essential to investigate how this data is handled. Individuals must understand how their data is used and maintain control; otherwise, transparency can be compromised. 

Ensuring data integrity is paramount, as errors or biases in the data can lead to significant issues. Data must be accurate and reliable, and safeguards such as anonymisation, pseudonymisation, encryption, backups, and multi-factor authentication (MFA) should be employed. These systems must be designed to use only the data necessary for a specific analysis, underscoring the need for careful planning and responsibility in their architecture.

For instance, government organisations can leverage anonymisation or pseudonymisation techniques to ensure that individual identities remain untraceable. By doing so, they uphold GDPR compliance, protecting citizens’ privacy while enabling the responsible use of AI. 
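
Here is a minimal sketch of one such technique, keyed pseudonymisation with HMAC, using only Python’s standard library. The secret-key handling and field choices are illustrative assumptions; a production system would also need proper key management, access controls, and a data-minimisation review.

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: kept in a key vault

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym.

    HMAC-SHA256 keeps the mapping consistent across analyses while the
    original identity stays unrecoverable without the secret key.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

citizen_record = {"national_id": "19790101-1234", "municipality": "Uppsala"}
safe_record = {
    "person_token": pseudonymise(citizen_record["national_id"]),
    "municipality": citizen_record["municipality"],  # keep only what is needed
}
print(safe_record)
```

One caveat: under the GDPR, pseudonymised data still counts as personal data, because whoever holds the key can re-identify individuals; only genuinely anonymised data falls outside the regulation’s scope.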

A fundamental ethical principle in AI deployment is to eliminate discrimination and bias. This begins with careful dataset creation, where it is crucial to understand the data sources used and conduct thorough external audits. 

Ensuring fair treatment is vital, as biases related to race, gender, socioeconomic status, and other factors can lead to detrimental cumulative effects over time. To mitigate these risks, it is essential to adopt principles of inclusivity by constructing diverse and representative datasets. Additionally, AI models should be trained to identify and address any biases that may arise, ensuring equitable outcomes for all individuals. 
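
As a simple illustration of what such a bias check can look like, the sketch below computes a demographic parity gap, the difference in positive-outcome rates between groups, over a hypothetical decision sample. Real fairness audits combine several metrics with domain review; this single number is only a starting point.

```python
from collections import defaultdict

def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Difference between the highest and lowest approval rate per group.

    `decisions` holds (group_label, approved) pairs; a gap near 0 suggests
    similar treatment, while a large gap warrants investigation.
    """
    totals: dict[str, int] = defaultdict(int)
    approvals: dict[str, int] = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = [approvals[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical audit sample: (group, application approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(f"Parity gap: {demographic_parity_gap(sample):.2f}")  # 0.33
```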

By prioritising ethical considerations, we can foster trust in AI technologies and promote fairness in their application across public services. 

As we stand on the brink of an AI-driven future, the role of regulatory frameworks like the EU’s AI Act becomes increasingly critical. While AI holds the promise of transforming public services and enhancing efficiency, it also poses significant challenges that demand careful oversight. By implementing robust regulations that emphasise transparency, accountability, and ethical considerations, we can harness AI’s potential while safeguarding democratic values and protecting human rights.

The ongoing dialogue among policymakers, technologists, and citizens is essential in shaping a future where AI serves as a tool for public good rather than a source of division or harm. Together, we must navigate this complex landscape, ensuring that AI technologies are developed and deployed responsibly, with a commitment to inclusivity and fairness. Only then can we build a society that benefits from innovation while upholding the rights and dignity of all individuals. 
