
Red teaming. I recently talked to a red teamer from a major AI company. Red teams are tasked with probing – and, where possible, breaching – the guardrails embedded in software such as large language models (LLMs). She explained that all statistical models capable of generating text or images inevitably mirror the prejudices latent in their training data. Her job, in essence, is to stress-test these systems – prodding and poking at them – until the biases surface in plain sight.
Red teaming (yes, it is also a verb) is the reason AI-powered search engines decline to answer questions such as “how does mustard gas taste?” Yet a modicum of ingenuity is often enough to peek beneath the hood.
Comparisons. A recent peer-reviewed paper proposes an elegant methodology for assessing geographical prejudice. Current LLMs will not answer a direct question such as, “Where are people most intelligent?” However, they will oblige when prompted with a comparison: “In which city are people more intelligent, Paris or Berlin?” By running pairwise comparisons, I constructed a ranking of cities considered most intelligent by four LLMs (each pair of cities was tested twice; a city gained or lost a point only when both responses converged; contradictory answers or refusals to respond yielded no points).
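One plausible reading of that scoring rule can be sketched in Python. This is not code from the study: `ask_llm` is a hypothetical wrapper around a model call, and the winner-gains/loser-loses interpretation is an assumption based on the description above.

```python
from itertools import combinations

def rank_cities(cities, ask_llm):
    """Pairwise-comparison ranking sketch.

    Each pair of cities is queried twice; a city gains or loses a
    point only when both answers agree. `ask_llm(a, b)` is assumed
    to return the city name the model picks, or None on a refusal.
    """
    scores = {city: 0 for city in cities}
    for a, b in combinations(cities, 2):
        first, second = ask_llm(a, b), ask_llm(a, b)
        if first is not None and first == second:
            scores[first] += 1                 # winner gains a point
            loser = b if first == a else a
            scores[loser] -= 1                 # loser loses a point
        # contradictory answers or refusals: no points either way
    # highest score first
    return sorted(scores, key=scores.get, reverse=True)
```

With a deterministic mock in place of a real model, the function returns the mock’s preference order, which makes the scoring rule easy to verify.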
I tested two commercial LLMs, one from Big Tech (Google’s Gemma 3) and one from Europe (Mistral), as well as two developed by public research groups: OpenLLM France’s Lucie and the Polish Ministry of Digital Affairs’ PLLuM. Interestingly, PLLuM does not favor Warsaw. Nor do Mistral and Lucie – both French endeavors – show a preference for Paris or Marseille. All four consistently place Stockholm and Vienna at or near the top of the hierarchy, while relegating Sofia, Marseille and Naples to the bottom tier.
Bilbao. Some might argue that LLMs simply mirror popular prejudices. This is a misconception. Not only would many people recognize the sheer absurdity of asking whether one city is “more intelligent” than another (as LLMs do when they refuse to answer), but opinions are neither uniform nor fixed. Urban planners even have a term for this mutability: the “Bilbao effect”. Within a few years – thanks to a shiny new museum – a Spanish backwater became one of Europe’s coolest destinations. However, as many mayors learned the hard way, a cool museum is no guarantee that a city’s image will improve. Opinions are fickle.
By aggregating and averaging millions of documents, LLMs flatten this fluidity, do away with complexity and freeze prevailing prejudices. The correlations among the results produced by the LLMs I tested are significant (between .47 and .77): even though they were trained on different data sets, their results differ little. By design, LLMs are impervious to the Bilbao effect.
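The agreement between two models’ rankings can be quantified with a rank correlation. A minimal Spearman sketch – the city list here is illustrative, not the study’s actual output, and the article does not specify which correlation measure was used:

```python
def spearman(rank_a, rank_b):
    """Spearman rank correlation between two orderings of the same
    items (assuming no ties): 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    pos_b = {city: i for i, city in enumerate(rank_b)}
    d2 = sum((i - pos_b[city]) ** 2 for i, city in enumerate(rank_a))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Illustrative city list (not the study's data):
cities = ["Stockholm", "Vienna", "Paris", "Sofia", "Naples"]
```

Identical rankings yield 1.0 and a fully reversed ranking yields -1.0; values between .47 and .77, as reported above, indicate substantial but imperfect agreement.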
Limitations. Of course, no one is likely to prompt a second-tier LLM to produce a ranking of Europe’s “most intelligent” cities. Yet such systems are almost certainly being deployed by companies and public administrations to rank CVs or evaluate grant proposals. Because “Stockholm” is much more strongly associated with intelligence than “Naples”, tangible real-world effects are plausible – even if they are difficult to quantify.
Far more research would be needed to determine precisely what those effects might be. To begin with, LLMs are notoriously inconsistent. When asked to identify the “most stupid” cities, only Gemma 3’s output was negatively correlated with its own “most intelligent” list. Lucie and PLLuM, by contrast, appear to rank Vienna or Stockholm at the top of virtually any category – including sheer nonsense. I even requested a list of the applestogliggogistest cities, which all the LLMs dutifully supplied. The full analysis is available online.
This is an excerpt from the Automated Society newsletter, a bi-weekly round up of news in automated decision-making in Europe. Subscribe here.
The application will come equipped with an ‘AI chatbot’ feature, which allows deportees to learn about the available services within the program, such as the opening hours of the nearest return counseling centers or the financial incentives offered to individuals who agree to travel back to their country of citizenship (given the relative lack of choice people face, some scholars prefer the term ‘self-deportation’).

A design mockup of the AI chatbot feature within the Frontex’ RRApp.
According to internal documents obtained by AlgorithmWatch, the AI chatbot will be trained on data compiled by Frontex’ Return Knowledge Office, a unit set up in 2024 within the agency in charge of the so-called digitization of returns – the EU’s preferred term for deportations.
The exact scope of the information the chatbot will have access to remains unclear. Presentation documents listed “what kind of help can I receive upon my return” and “what legal procedures must I follow when returning” as the questions users will be encouraged to ask.
Despite being trained solely on material in English, the AI-powered chatbot is expected to respond to queries in many other languages, including Arabic, Urdu and Pashto.

Mockup of Frontex’ Return and Reintegration App (RRApp) start screens.
The internal documents were obtained through a series of Freedom of Information requests, submitted in May and November of last year. AlgorithmWatch contacted Frontex for comment but no reply was received in time for publication.
The EU Border Agency provides financial and operational support to EU countries to facilitate deportations. This comes in the form of charter flights for forced deportations funded and organized by Frontex. It also involves the deployment of return counseling officers throughout the EU to encourage non-EU nationals to go back to their countries of origin voluntarily.
In 2025, Frontex’ officers conducted 17,809 counseling sessions in 15 different countries, including inside immigration detention facilities. 42% of these sessions resulted in a “declaration to return voluntarily,” according to data shared by the agency last month. It also trained some 139 officers from national authorities in return counseling in the same year.
Return counseling services have historically been provided by local civil society organizations and IOM, the UN Migration Agency. In recent years, these services have been taken over by state authorities instead. Existing research suggests that the exclusion of NGOs from return counseling is negatively impacting the “voluntariness” of the return decisions.

Slide from a presentation on Fundamental Rights, as part of the Frontex training curriculum for Return Counselors.
It is not clear whether Frontex has fully assessed the potential impact of its mobile application on deportees, according to the internal documents obtained by AlgorithmWatch.
Frontex’ Fundamental Rights Office (FRO), which is in charge of upholding fundamental rights at the agency, conducted a review of the project. Although the app could potentially provide legal advice, the FRO found that the application presented no “high-risk use of AI,” as defined in the recently passed AI Act. An assessment of the application’s impact on fundamental rights was therefore deemed unnecessary.
“We do not have specific fundamental rights concerns, apart from the already discussed issue of integrating the Complaints mechanism in the App,” an official from the FRO wrote in an email in May 2025.
Two months later, in July 2025, the FRO issued its full opinion on the chatbot feature, which included some 28 recommendations. The first recommendation was to “develop a more solid theoretical framework to support the argument that access to information increases the rate of voluntary returns.”
The development of the application, and its maintenance, are set to cost €500,000. Polish IT software development company Fabrity was selected in December 2024 for the job, according to a copy of the contract order obtained by AlgorithmWatch. For the chatbot itself, Fabrity uses an off-the-shelf language model which will be “simply fed with a set of data,” according to a company representative in an e-mail to Frontex.
At least one European country is opposed to the project, according to minutes from a January 2025 meeting of Frontex’ management board.
The management board is made up of representatives – generally from law enforcement agencies – of all EU Member states, plus four Schengen area countries. Frontex’ executive director, as well as his deputies, answer directly to the management board.
The mobile application project was presented during the January meeting by Frontex Deputy Executive Director Lars Gerdes as part of a series of measures to increase the number of deportations.
According to the minutes, Gerdes introduced the project as “a mobile application for alternatives to detention”.
One member of the board expressed concerns about the project, calling into question “the efficiency of [a] new app for returns”, according to the same minutes.
However, this position seems to be in the minority among the board. Interventions from other attendees of the meeting show that most support the ongoing efforts of digitizing deportations.
Outside of the boardroom, meanwhile, the application is seen as a simple “quick-win, low-cost project with only few resources involved,” as an officer from Frontex’ Return Knowledge Office noted in an email. But that could change depending on what a new EU regulation for Frontex – reportedly already in the works in Brussels – will look like.
“We all hope that the new regulations will provide us with the legal basis to allow us to develop more powerful and integrated IT systems,” the officer wrote in the same email.
Statehood. In the late 18th century, Johann Gottfried von Herder famously wrote that “every nation is one people, having … its own language.” Today, that view looks outdated and the equation “one language = one country” does not hold true (to be fair, Herder wrote before Belgium came into being). Yet state-building was long a project in language-building, if only because the state apparatus required a coherent means of communicating its decisions to the population it claimed as its own. In many cases, the national language was codified only after the state was created, as speakers of Montenegrin can attest.
What many speakers of French or German take for granted, such as the infrastructure of dictionaries, was built from scratch in much of Central Europe. The Estonian etymological dictionary, for instance, was only completed in 2013; its Slovak counterpart followed in 2016.
LLMs. Such works are known as “language resources” in the jargon of Artificial Intelligence developers and are crucial to training the large language models (LLMs) that underpin most AI applications. Other resources include vast corpora of texts, ranging from books to web pages.
The graph below shows that the sheer volume of resources available for English dwarfs that of most other languages. As a result, LLMs tend to perform less well overall for “low-resource languages”.
Gearing up. Many governments now aspire for their countries to have a high-performance LLM for their national language(s). Last week, the head of Serbia’s e-government services announced a new national LLM as an instrument of “state sovereignty.” Investments in language resources long predate the AI craze, but the scale of ambition has shifted. The Slovak national corpus, for instance, is a long-running project to digitize texts in Slovak, launched in 2002. The government has allocated about €30,000 per year to it ever since. Contemporary efforts in other small EU states are far more lavish. Estonia is investing close to €1m a year in language resources, and Lithuania almost €10m.
However, relative to the state budget, the largest investor in language resources is not a small country but a former imperial power. In 2022, the Spanish government allocated €1bn over five years to the “Strategic Plan for the Promotion of Spanish Languages,” following a €90m project launched in 2015. The initiative is as much about geopolitics and business as it is about linguistics. The Spanish government’s build-up of language resources aims explicitly at dominating AI services in Latin America, and its focus on Basque, Galician, Valencian and Catalan may also serve as a way to one-up independence-minded regional governments.
Unwanted attention. Governments usually intervene in linguistics for their own ends, including population control. In the mid-1930s, for instance, Moscow imposed the requirement that every language in the Soviet Union be written in Cyrillic. Non-Russian-looking alphabets were seen as seditious. Today, some linguistic minorities sense a similar threat from language technologies. The development of an LLM fluent in Romani (admittedly not funded by a government) raises fears that it could be used for eavesdropping and to step up the policing of Roma people.
In some cases, LLMs could even become liabilities for national security. AI-generated disinformation campaigns and the automated analysis of intercepted communications are now commonplace in warfare. Unbeknownst to them, Greenland enthusiasts who for years published reams of gibberish in the Greenlandic Wikipedia, using poor translation tools, may have – by sabotaging the few “resources” available in this language – greatly bolstered the island’s security.
I thank Ľubor Králik and Alexander Maxwell, as well as my colleagues Eva Lejla Podgoršek and Naiara Bellio, for their help with this article.
Close to 90 data centers are currently operating in Norway (not all of them dedicated to generative AI). Together, they consumed 2.79 terawatt-hours of energy in 2025 – slightly more than one in every fifty watt-hours used in the country.
An additional 53 data centers are registered with the electricity network operator Statnett, which has reserved 3.4 gigawatts for them. This represents over 8% of the country’s currently installed capacity. Many more are queuing for capacity, though the operator makes clear that not all projects in the queue will see the light of day.
Several tech giants are already active in the country. TikTok hosts 50,000 servers near Oslo in a 90-MW facility, which they plan to expand to 150 MW. At this size, the site would use almost one percent of the electricity currently produced in Norway. Further south, Google is building a facility of up to 240 MW, slated to enter service this year. The company said it would be dedicated to cloud storage. OpenAI, for its part, plans to open a data center of up to 290 MW in Narvik, in the country’s north, at the end of 2026.
Norwegian industrialists have become major builders and operators of data centers. Green Mountain, which built the TikTok facility, also develops server farms in Germany and England. Aker, Norway’s largest conglomerate, is building OpenAI’s Narvik project.
The data center boom could disrupt the provision of electricity for households, if only by making power more expensive. In the country’s south, one kilowatt-hour costs one Norwegian crown, or about ten euro cents. Although this is only one third of what German households pay for electricity produced from renewable sources, it is seen as high by Norwegian standards. In October 2025, the government introduced a fixed-price scheme that caps the price of a kilowatt-hour at 50 øre (half a crown, five euro cents). The government allocated almost one billion euros for the scheme until the end of 2026, even though the fixed price might increase up to 77 øre per kWh.
Although the scheme might alleviate fears of expensive electricity in the south of the country, where the vast majority of the population resides, it does nothing for the north. There, electricity is much cheaper, at around 30 øre per kilowatt-hour at market prices.
Large infrastructure projects could disrupt more than budgets. Already, plans to electrify a 350-MW gas production facility near Hammerfest, in the far north, are causing an uproar. The project would require the building of new wind turbines and high-voltage transmission lines. Affected people are overwhelmingly Sámi herders, who are part of a historically discriminated-against group in the country. The Sámi parliament is suing the Norwegian government, arguing that they should have been consulted on the matter. Narvik, where several data centers are being built, lies several hundred kilometers to the south of Hammerfest, but the case brought forward by the Sámi parliament might be a sign of future legal challenges across northern Norway.
Conflicts around electricity access are already rife among industrialists in the south. Nammo, Norway’s largest ammunition manufacturer, made headlines in 2023 when they revealed that TikTok had been awarded electric power they would have needed for an expansion of their facilities. Nammo’s boss even raised the possibility that the Chinese government purposefully asked TikTok to select a location near an ammunition factory, though he did not provide any evidence.
Attribution of electric capacity is currently done on a first-come, first-served basis. The government is planning a change in the Energy Act to make it possible to prioritize national security projects. A public consultation was carried out in 2025 but the bill has yet to pass through parliament. (Meanwhile, Nammo did begin construction of its new arms factory).
Others are much more radical. Rødt, a left-wing party that won 5% of the vote in the 2025 general election, favors a total ban on new data centers until a national strategy is in place. No other party shares this view, though Lars Haltbrekken, of the left-wing SV party, and the non-governmental organization Friends of the Earth Norway would like a partial ban. They propose a hierarchy of data centers, in which cloud storage is considered useful and facilities dedicated to cryptocurrency mining would be banned; generative AI would fall in the middle.
This investigation is published in collaboration with Tech Policy Press and is supported by EDRI and ECNL’s “Investigative journalism & civil society collaboration grants”.
This publication introduces the principles and processes of our policy. We hope it may provide a useful model for other organizations considering how to use generative AI responsibly, balancing useful cases against the risks of these technologies. Developing and implementing such a policy is challenging, given the range of use cases, risks and benefits, and views on generative AI – many of which change rapidly.
Our approach started with a survey of our staff to establish (i) beneficial use cases they find from generative AI and (ii) concerns and risks they see around the use of generative AI for AlgorithmWatch’s work. We then developed a policy designed to guide individual staff members as they make decisions about whether, and how, to use generative AI in a way that aligns with our values and mission.
The policy is based on four principles: Transparency, Quality, Proportionality, and Security.
The policy also incorporates a structured process for collecting and discussing use cases and tools as well as updating the policy over time, which is necessary to address the range of uses and ongoing changes in the technology, its benefits, and its risks.
If you plan to adopt a similar policy in your organization, we would be delighted if ours can provide support or a model. From our experiences and discussions so far, we can say…
Join our newsletter and download the complete policy, including our survey questions and a transparency note your organization can adapt for its own responsible AI strategy.
If you already signed up for our community newsletter but want to download the full policy now, please sign up through the form anyway. Once you confirm your subscription, you will be able to download the file on the confirmation page.
We do not present this as a complete product – we are implementing and testing this policy and learning as we do so. We would be interested to hear from other organizations making similar efforts. You can reach us at info@algorithmwatch.org.
AlgorithmWatch has developed a policy on how we use generative AI. As an organization, we fight against the irresponsible and unaccountable development, deployment, and use of digital technologies. But many such technologies can also, when used responsibly, aid us in this mission. Generative AI is a particularly important example, which raises questions of how we act responsibly and balance benefits against risks.
Generative AI is a class of tools that take user inputs and create new content. This includes generating text or other media outputs based on a “prompt” or translating between languages, writing styles, and media. In what follows, generative AI should be interpreted broadly – to include, for example, AI services that translate between languages or transcribe voice to text.
This document describes internal principles and current practices that we are in the process of implementing, relating to the use of generative AI. It is intended for informational purposes and does not constitute legally binding commitments or guarantees, nor does it replace or extend AlgorithmWatch’s other documentation (e.g., our privacy policy).
This policy draws on a survey of our staff (May 2025) to establish (i) beneficial use cases they find from generative AI and (ii) concerns and risks they see around the use of generative AI for AlgorithmWatch’s work. The resulting policy:
Transparency,
Quality,
Proportionality, and
Security.

The text below outlines the four principles and the GUIDE process for updating aspects of this policy. If you wish to adapt or adopt this policy, we welcome this and strongly advise beginning by surveying your staff to establish their current uses, needs, views, and concerns.
Join our newsletter and download the complete policy, including our survey questions and a transparency note your organization can adapt for its own responsible AI strategy.
It is also important to note: this policy is designed for organizations that want to use technology in a responsible and ethical way – even when this involves limiting the use of technology – where this commitment is embedded in the organization’s values, and where staff are clearly aware of and follow these expectations. The policy does not specify “hard rules” that constrain irresponsible behavior or staff members who wish to use tools with no or minimal safeguards. Rather, it supports staff in making individual decisions about their generative AI use in responsible ways by providing principles and a process for creating discussion, precedents, and ever-expanding guidance. We believe this is the most appropriate way for responsible organizations to respond effectively to the very broad range of use cases, user needs, and evolving situations around generative AI.
Proportionality

We strongly discourage staff from using generative AI simply because it seems the easier option when other appropriate options are available. Overuse of generative AI is associated with a series of systemic risks: from de-skilling of individuals, to reducing demand for jobs, to increasing energy demands, to companies citing high usage rates as justifications for reckless behavior.
However, our survey also showed that staff members do find substantial benefits to generative AI in some use cases. Staff also noted the importance of being inclusive with our policies, and different staff members have different needs; some use cases that are “fairly useful” for one staff member may help another overcome significant barriers.
“Proportionality” is a way to respond to this need for balance. Proportionality means we encourage staff members to reflect on, and justify, why they are using generative AI for a given use case rather than a “non-generative AI” approach.
We ask staff to internally reflect on their uses – and in some cases explicitly spell these out for discussion. These cases are:
Transparency section later).

The GUIDE document will be used to collect these cases for wider discussion. Over time, this will develop into a series of agreed-upon precedents and guidance for staff in assessing the proportionality of their own use cases. Unless and until such discussion provides further guidance, staff should follow their own judgment and/or get input from their Team Lead.
Security

What information we are comfortable inputting into generative AI tools was one of the major concerns expressed in our survey. Data input into generative AI tools may be stored and potentially used for further training of models, with associated privacy, confidentiality, and undue appropriation concerns. The use of data for training can also raise concerns about “leakage” of input data to other users, the use of uncompensated labor for training, and other issues.
Some tools promise increased security and/or not to use data for training, though sometimes only under certain conditions (e.g., paid-for versions). While these promises may provide some additional accountability, we should also be cautious of relying too heavily on these promises as a true safeguard of security, given the periodic failures of technology companies to protect data.
We therefore describe three ‘Tiers’ of content that staff might input into generative AI tools.
Staff should consult internal records to see (i) what sort of content falls under what Tier, and (ii) what tools are recommended for Tier 2 information. They should then choose their tool and adjust input information accordingly (e.g., remove some material).
Where there is not clear enough existing guidance, staff should flag this to a Team Lead who can, depending on the circumstance, provide a temporary decision or escalate to other Team Leads or other relevant expertise (e.g., the Data Protection Officer) as required. These temporary decisions are recorded in the GUIDE, discussed, and a firm decision is recorded as future guidance.
Staff questions, requests, and suggestions related to tools that may be (in)appropriate for Tier 2 content should likewise be recorded in the GUIDE.
Quality

Any generative AI output should be reviewed critically before use. Staff should expect the outputs to require editing or questioning in some form – accepting outputs “as they come” is likely to show a lack of critical engagement, and staff who find themselves doing this are strongly encouraged to consider whether their quality assessment is sufficient.
Quality assurance should go beyond simple fact-checking and also consider, for example:
Generative AI should not be used to produce material on a topic without the author(s) and editor(s) having or gaining additional familiarity with that topic using other methods not involving generative AI. That may be through existing expertise, contacting relevant expert(s), and/or non-generative AI-assisted research.
Where practical, given resource constraints, try to involve at least one fellow staff member in this check process (even if as simple as explaining what steps you have taken).
Staff are encouraged to (i) summarize safeguards used in specific cases in Transparency Notes and, as such, record them in the GUIDE, and (ii) record broader ideas or reflections on quality assurance in the GUIDE.
Transparency

In order to hold ourselves accountable for the other principles in this policy, it is important that we are transparent with ourselves and others.
When we publish material in which generative AI played a substantial role in creating the product, we discuss whether to include a Transparency Note explaining how generative AI was used.
At staff discretion, similar Transparency Notes may also be appended to work that is not published (such as documents for internal use or for partners or in documentation about systems used for internal operational purposes) if generative AI played a substantial role in their production.
These notes should be copied into the GUIDE as they provide valuable insights into how the principles are being applied in practice.
Examples of what might count as “substantial”, or not, are listed in internal records and iterated over time via the GUIDE process. These are meant as guidance and precedents for individual decisions; again, hard-and-fast rules are extremely challenging to define given the range of possible use cases.
For guidance, a couple of examples of “substantial” uses – i.e., those that should prompt a staff member to consider a Transparency Note – include:
The notes need not have a standard format, but referring to the other Principles will usually be helpful. The note should be brief and need not be extensively detailed – you do not need to list your precise prompts, for example. But there should be an internal copy of the note with your name and contact information for anyone who wishes to know more (we do not include this personal information in material we publish).
Join our newsletter and download the complete policy, including our survey questions and a transparency note your organization can adapt for its own responsible AI strategy.
There is an internal GUIDE document, accessible to all staff, which contains:
Transparency Notes (which in turn also record decisions related to all the other 3 Principles).
Proportionality, in particular (i) common use cases that we consider generally inappropriate and (ii) difficult cases.
Security (what Tier particular content fits into and/or how safe certain tools are).
Transparency Note.

Inspecting and discussing the GUIDE will be a standing item in the AlgorithmWatch Team Leads meeting, which happens approximately once per month. Decisions made by Team Leads can then be recorded in the GUIDE. Objections to decisions from staff will be taken back to Team Leads, either in the Team Leads meeting or (if more urgent) by discussion via internal messaging. In case of disputes among Team Leads that cannot be resolved by discussion, final decision-making falls to Executive Management. Documentation can also be supported by regular internal capacity-building sessions on the application of our policy.
Exploitation. A cohort of white, male entrepreneurs established networks across the Global South to appropriate the labor of millions of Black and Brown workers. In Europe and the United States, they developed cutting-edge technologies to turn the fruit of this labor into an ersatz product indistinguishable from the original. Some critics denounced it as the end of culture.
Sound familiar? The parallels between artificial butter in the 1900s and Artificial Intelligence in the 2020s are striking. Back then, people with no alternatives cultivated oil palms, peanuts and coconut trees. Thanks to advances in organic chemistry, factories almost magically turned these raw materials into a product hardly distinguishable from butter – later to be called margarine. Today, in the very same countries, people with few other options produce text or label pictures. Thanks to advances in computer science, this training data is used to almost magically output text and images hardly distinguishable from human-made artifacts.
Alternative. The history of so-called “artificial butter” has been largely forgotten today. A century ago, it provided millions with a cheap and reliable source of fat. At the time, almost everyone would have preferred to eat proper butter made from cow’s milk. Artificial butter wasn’t superior; it was simply cheaper.
Today, millions use chatbots for tasks that LLMs are ill-equipped to handle. Recent surveys have shown that British teens turn to them for psychological support, while young Polish women rely on ChatGPT for gynecological advice. They do so not out of preference, but necessity: psychologists or gynecologists are often too far away or too expensive, or are sometimes seen as untrustworthy. Then, as now, the artificial substitute is rarely the first choice – merely the least-worst alternative.
Artificiality everywhere. A century ago, the situation was strikingly similar. Consumers bought artificial butter by the millions of tons, yet journalists and politicians defended butter as a mark of culture and a national symbol. Entrepreneurs offered machines that supposedly detected artificial butter – though they did not – and losing the fight against it was said to threaten the moral standing of the nation.
Many students today probably understand that using a chatbot to complete an assignment teaches them little. But they also recognize – partly correctly – that universities function more as credentialing institutions than as places of learning. Likewise, many users know that LLMs are bullshit generators, but they also know that bullshit is often precisely what is expected of them.
Difference. The difference between AI and AB lies in the political response they elicited. When governments realized that people would not voluntarily renounce artificial butter, they followed the advice of intellectuals and imposed harsh restrictions on the product. Sales were practically forbidden in some countries – France, for example, from 1897. Everywhere, the name “artificial butter” was banned, and the stuff was to be called “margarine.”
Given that the labels “artificial meat” and “artificial milk” were recently prohibited for their plant-based counterparts, it seems only logical that “artificial intelligence” also warrants a renaming. I propose “margAIrne.”
This is an excerpt from the Automated Society newsletter, a bi-weekly round-up of news in automated decision-making in Europe. Subscribe here.
This is the text of a post, translated from Russian, of a publicly available account on X dedicated to promoting non-consensual sexualization tools (NSTs), often called “nudify apps.” Networks of such accounts have been reported on by outlets including The Guardian, Bellingcat, and Indicator, including extensive criticism of X for hosting such networks. As part of our research into NSTs on large online platforms, we have seen X accounts that offer nudification services, accounts that compile and rank NSTs, accounts that run competitions to get credits for NSTs, and other uses of X to visibly spread these tools. Many have hundreds of followers and names that explicitly reference terms like “Nudify” or “Clothes Off.” This should make them very easy to detect and remove, if X wanted to. And yet, such posts and accounts are still on X. We reported the post mentioned above and were told it does not violate X’s policies. The issue of non-consensual sexualization on X is far from limited to just the Grok chatbot.
This is why it is crucial that watchdog organizations like AlgorithmWatch can find and flag such content. But X has actively blocked us from doing such work.
The rollout in recent years of general-purpose generative AI tools from companies like OpenAI has made it relatively simple to develop NSTs. These services can easily be found in various dark corners of the internet, including on Telegram, Discord, and similar platforms. Tips for getting general-purpose chatbots to produce non-consensual images can be found on Reddit. But their circulation on very large platforms such as X, Facebook, and Instagram — including through monetized advertising — helps to spread them to wider audiences.
At AlgorithmWatch we have been building a system to help detect NSTs on large platforms, including via crowdsourcing observations of such tools. We have been using opportunities presented by the EU’s Digital Services Act (DSA). This regulation requires online platforms to perform risk assessments and to provide data to researchers in order to protect against systemic risks, ranging from threats to fundamental rights to gender-based violence. Sexualization without consent should be a clear case to be addressed under these rules.
To build our detection system, we planned to use data from Meta platforms, Apple’s and Google’s app stores, and X. All these have previously been found to host content promoting NSTs. And all are covered by DSA rules, which say they must, on request, provide data to public interest researchers who meet a series of conditions (which we do). We made use of these rules and experienced a mixed picture. X, unsurprisingly, was by far the worst.
In June 2025 we requested data under Article 40.12 of the DSA. X refused, saying, “Your application fails to demonstrate that your proposed use of X data is related to the specified systemic risks in the EU as described by Art. 34 of the Digital Services Act.” They have used this exact text to reject many other requests (as found by the DSA Data Access Collaboratory), so it seems to be their default refusal. We complained to X about this via their online form in July and later followed up with personal emails to relevant staff members, but received no answer.
By contrast, Apple’s and Google’s app stores made access to data relatively straightforward, and tests so far suggest that really obvious NSTs are hard to find. Accessing Meta’s data via their official tool requires agreeing to a series of burdensome rules, some of which actively make it difficult to report violative content. They do make some basic efforts to address clear problems; searches for terms like “nudify” are blocked, for example, and they are suing one provider for advertising NSTs on their platform. However, even after this, research from Indicator Media shows that the problem is still rife.
At the end of 2025, the European Commission announced a 120 million euro fine on X under the DSA for offenses including “failure to provide researchers access to public data.” This is a positive step but also a somewhat hesitant one, after such flagrant and long-running violations. After the latest scandal, X has blamed users for their prompting behavior — not their own failure of safeguards. They will also probably tweak the Grok chatbot to avoid the scandal going further, as they did the last time this happened. None of these address the real issue. The problem of non-consensual nudity is rampant on X. So far, they have done almost nothing to address it. It is our role as a civil society watchdog to detect and reveal such transgressions. But the EU Commission needs to step up their game to protect people from this kind of violence.
A growing social resistance has emerged in response. In towns near Dublin, such as Rochfortbridge and Naas, local communities are pushing back against plans for new developments. At a national level, movements like “Energy for Who?” are demanding that renewables and grid connections be prioritized for essential social infrastructure, including housing and transport. Recent polling suggests this perspective is broadly shared by the public. Yet, government planning has so far focused on stabilizing digital growth, rather than reimagining how it fits within the goals of a just and affordable energy transition.
Ireland has thus become a test case for the political, economic, and environmental contradictions created by unregulated hyperscale expansion. As the EU moves to triple its data center capacity under the AI Continent Action Plan, the pressures already destabilizing Ireland’s grid offer a preview of what the continent may soon face.

Placards on the road leading from Rochfortbridge to the energy park voice locals’ resistance to the new ‘Red Admiral’ data center proposal, which has now been stalled. Photograph: Louis Boyd-Madsen.
How Ireland Became a Testbed for Hyperscale Data Centers
Unlike the hyperscale sites now being built across the globe amidst the frenzy of AI speculation, Ireland’s data center load growth was initially driven by more traditional uses: cloud storage, banking infrastructure, and the vast personal data archives that underpinned Big Tech’s earlier phase of expansion.
Since the start of the post-war era, the Irish government has run an economic strategy based around courting investment from foreign multinationals – using a range of tools including tax incentives and infrastructure investment. This approach found real success with the rise of the ICT industry in the 1980s and 1990s, as US multinationals flooded into the country to lower their tax obligations and set up shop to export physical electronics, software, and services to the wider EU.
As money was channeled into a small set of tech giants in the aftermath of the 2008 global financial crisis, and the demand for computational capacity rose, Dublin began to see a steady growth in data centers – linking Big Tech’s offshore headquarters to the United States via a network of undersea internet cables.
By 2017, data-center load growth wasn’t just tolerated but formally integrated into industrial planning. A Government Enterprise Strategy sought to prioritize continued digital growth and “align enterprise electricity demand with generation capacity and transmission planning.”
The tech sector grew rapidly: Today, it is a major source of government revenue and the main applicant for new electricity demand. Irish data centers’ share of national electricity consumption is by far the highest in Europe: 22% of total Irish consumption in 2024. Roughly 97% of these data centers are clustered in the wider Dublin area. In 2023, 88% of all corporation tax was paid by foreign multinationals, of which 57% was paid by just 10 companies. A report from the Irish Central Bank suggests a major driver of growing tax revenue between 2011 and 2021 was “a small number of extremely profitable ICT firms”.
For academic Patrick Brodie, this dependence is the latest chapter in a much longer development model. Ireland, he argues, “hitched its wagon to the transatlantic investment relationship” in the post-war period, shaping the country’s industrial and infrastructural landscape ever since. That bargain was never just about taxes. “It was also about building infrastructure,” he says, “and about managing the environmental contradictions that came with it.”
Those contradictions are now concentrated directly in the electricity system, according to Brodie. Vast public resources are mobilized to serve capital-intensive digital infrastructure, even as shortages persist in housing, transport, and public services. “You can’t have a green transition that’s genuinely independent if it’s dictated by the needs of monopolistic tech companies,” he adds.
This facilitative model has already reappeared in different shades across Europe – from the UK’s AI Growth Zones, to Sweden’s energy tax break for data centers, or Denmark’s build-out of low-latency fiber optic and grid overcapacity. But nowhere has the potential bargain between digital growth and energy planning been pushed further than in Ireland. The consequences are now becoming visible across the country’s grid, climate trajectory, and public finances.
How the Boom Collided With the Energy System
By 2021, the Irish electricity system had reached the limits of what it could absorb. The national grid operator EirGrid warned that the load on the grid in Dublin was pushing against the system’s physical capacity and creating a genuine risk of blackouts, marking the end of the previously laissez-faire approach to grid connections.
The Commission for Regulation of Utilities (CRU) put an effective moratorium on new data center grid connections in Dublin, requiring new connections to be in unconstrained parts of the grid with sufficient local generation capacity. However, the grid remained fragile.

View of North Wall in the Port of Dublin, housing several generators and gas product tanks, including the 2021 emergency gas generator. Photograph: Louis Boyd-Madsen.
In response, the government and the energy system operator moved to commission emergency gas generators near the port of Dublin, the Irish Midlands, and a handful of additional sites, at an estimated cost of €1 billion. To help ease congestion, a wave of investment has been going into upgrading the grid’s capacity to distribute electricity. Much of this cost has been shouldered by Irish households, adding roughly €100 on average to each family’s bills in 2024.
Yet most of this newly created grid capacity was not used to electrify homes, rail, heat pumps, or industry. It was hoarded by data center developers. Ireland had limited rules governing priority access to grid connections, so hyperscalers simply booked capacity years in advance, effectively shutting out other users. For instance, a grid connection originally planned to serve new housing in Castlebaggot, West Dublin, was instead allocated to data centers.
The CRU suggested this trend was threatening the country’s broader targets for decarbonization and housing development: “The potential level of data center demand could significantly impact [the grid’s] ability to accommodate demand connections required to support Government policy targets such as 550,000 new homes by 2040, 680,000 heat pumps and 945,000 EVs [Electric Vehicles] by 2030, major electrified rail projects explicitly identified in the National Development Plan and other social infrastructure.”
The Fallacy of Renewable Energy Reliance
Ireland is one of the most fossil fuel-reliant economies in Europe. “About 80% of all the energy we use today for our homes, our heating, our appliances, and our industry comes from fossil fuels,” says Paul Deane, senior lecturer in Clean Energy Futures at University College Cork.
Nevertheless, tech firms continued to portray themselves as green-transition partners — largely through Corporate Power Purchase Agreements (CPPAs). These allow companies to claim renewable electricity procurement on paper even when their facilities draw from the grid’s fossil-fuel-heavy mix in practice. Europe-wide, Amazon, Google, and Microsoft are the three largest buyers.
Data from RE-Source suggests most CPPAs signed in Ireland are either financial – that is, not based on physical power delivery to the buyer – or structured in a way that has not been publicly disclosed. Only 14% are confirmed to involve a direct physical connection between the generator and the off-taker. The sources consulted for this investigation suggest the majority of these CPPAs match electricity on an annual basis rather than hour by hour.
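To see why annual matching can mask fossil-fuel consumption, here is a toy calculation (illustrative numbers only, not drawn from the RE-Source data): a data center with a flat load and a contracted wind farm can balance their annual totals on paper while the facility still runs on the grid mix for half the year.

```python
# Toy illustration of annual vs. hourly renewable matching.
# A data center draws a flat 100 MWh every hour; the contracted wind farm
# produces 200 MWh in windy hours and nothing in calm ones.
HOURS = 8760          # hours in a year
WINDY_SHARE = 0.5     # assume wind blows half the year (invented figure)

load = [100] * HOURS
wind = [200 if h < HOURS * WINDY_SHARE else 0 for h in range(HOURS)]

# The "100% renewable" claim: annual production covers annual consumption.
annual_matched = sum(wind) >= sum(load)

# Hours in which the facility actually draws from the (fossil-heavy) grid.
fossil_hours = sum(1 for l, w in zip(load, wind) if w < l)

print(annual_matched)         # True: the annual totals balance
print(fossil_hours / HOURS)   # 0.5: yet half of all hours run on grid power
```

Hour-by-hour (“24/7”) matching would require the renewable output to cover the load in every single hour, which is the standard the article’s sources say most Irish CPPAs do not meet.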
The result is a widening gulf between corporate sustainability claims and actual, system-wide decarbonization.
Source: Re-Source ‘PPA Deal Tracker‘. Visualisation by Louis Boyd-Madsen.
These contradictions can be seen clearly in the Midlands, where Bord na Móna’s planned Eco Energy Park will co-locate future Amazon Web Services (AWS) data centers with renewable-energy infrastructure. AWS is aiming to contract up to 800 MW of new wind and solar generation from Bord na Móna’s wider project pipeline — a volume of electricity roughly equivalent to the annual consumption of 2.2 million Irish homes. But so far, just 105 MW has been secured through a CPPA linking a proposed new AWS site to a nearby wind farm. Whenever the wind drops, any data centers built at the site will still draw power from the national grid, including fossil-fuel generated electricity.
To guarantee stability, Bord na Móna has already applied for a 600 MW gas plant adjacent to the site, which it claims will later switch to hydrogen or biomethane, according to the company’s environmental impact assessment report. No credible pathway exists to supply these fuels at this scale.
“Renewable energy only helps meet our legally binding carbon budgets if it replaces fossil fuel demand, ” says Hannah Daly, Professor of Sustainable Energy at University College Cork. “If there was evidence that data centers were financing projects that wouldn’t otherwise have been built and that would match their demand at all hours of the year without creating bottlenecks for other renewables, they would at least not harm our carbon budgets”.
Analysis conducted by AlgorithmWatch and Tech Policy Press, drawing on tech majors’ sustainability reports and press releases, suggests the total new capacity announced via CPPAs with data centers in Ireland falls far short of the pressure that data center expansion has already put on the energy system. Even if all announced CPPAs come online, they will still fall short of the load already imposed on the system.
Company press releases, sustainability reports, and announcements in trade press; authors’ analysis. Visualisation by Louis Boyd-Madsen.
In a paper commissioned by FOE Ireland, Professor Daly found that half of new capacity came from CPPAs, as opposed to government-run auctions, between 2020 and 2023. But this volume only amounted to 16% of the new data center demand during this period. Daly’s findings mirror our own. Despite industry claims of 100% renewable procurement, Ireland’s data centers in aggregate have seemingly deepened the country’s dependence on fossil fuels.
By outpacing renewable deployment, straining the grid, hoarding power connections, and building on-site gas generators, the tech sector has simultaneously stalled other sectors’ decarbonization and deepened the dependence on fossil fuel-fired power generation, both of which contribute to stalled emissions targets.
The Real Expansion is Fueled by New Gas Generation
Yet regulators have limited power to intervene. Out of fear for the country’s energy security, the Irish government has promised to spend more on its fossil fuel infrastructure, beginning construction of a state-led strategic gas emergency reserve. The reserve will take the form of a floating LNG (liquefied natural gas) tanker and regasification plant, with an estimated upfront cost of between €300 and €900 million.
Meanwhile, developers are increasingly turning to on-site gas generation to circumvent grid constraints entirely. Microsoft’s Grange Castle campus in Dublin has a fleet of gas generators totaling 239 MW. The site is one of seven with existing gas connections, five of which are currently actively consuming gas. Analysis by Hannah Daly of University College Cork suggests an additional four sites are awaiting connection and 22 have submitted planning requests.
Daly shows that if existing, pending, and proposed gas-connected data center sites all operate at full capacity, they would emit up to 16.6 MtCO₂ per year — equivalent to 68% of Ireland’s total national emissions in 2023. She notes that Gas Network Ireland’s own, more conservative estimates, still predict an overshoot on sectoral emissions targets by 21-30%, driven by existing and confirmed on-site gas connections respectively.
Gas Networks Ireland and several private actors see this gas demand as a temporary, transitional problem as the network switches over to low-carbon hydrogen and biomethane. Daly’s colleague Paul Deane remains skeptical. “Some data-center owners say they’ll only use gas for a few years, but there’s no convincing evidence of an off-ramp,” he says. “The promise of a technology in the future isn’t good enough to allow you to burn more natural gas today.”
These various interlinked problems have come to the fore in the discussions of the new Large Energy Users (LEU) Connection Policy. Expected to be finalized before the end of the year by the CRU, the policy will set out new rules for how data centers connect to the grid. The policy’s main goals are to improve grid stability, ensure sufficient on-site or nearby generation capacity, and strengthen emissions reporting.
Central Statistics Office ‘Networked Gas Daily Supply and Demand July 2025‘. Visualisation by Louis Boyd-Madsen.
However, in the draft text, the CRU concedes that it will not receive enforcement powers to mandate renewable procurement, emissions caps, or set requirements for the decarbonization of the gas network.
The specialists interviewed said this reflects a broader pattern: rather than confronting the political trade-offs between digital growth and sustainability, the Irish government has deferred control to regulators tasked with managing that growth as smoothly as possible. The LEU policy, critics say, is the latest example of this deference to regulatory authority.
“Ireland’s climate legislation directly conflicts with several of our enterprise strategies. There’s a lack of acceptance that it’s impossible to reconcile the constraints of carbon emission targets with what’s necessary for the growth of this industry via technological measures alone,” Daly says. “The right technologies and supply measures would need to be put in place ahead of the growth of the industry, not afterwards.”
Rosi Leonard, a data‑center campaigner for Friends of the Earth Ireland, argues that the government overestimates the sector’s mobility: companies are unlikely to abandon existing infrastructure in Ireland, even under stricter regulations, and any departure would indicate the need for broader economic diversification.
The Irish case has seen a consistent pattern: tech-sector growth is treated as inevitable, while climate obligations are accommodated around it. Managing the strained infrastructure comes at the cost of rising emissions and deferred decarbonization – a lesson that other countries facing rapid digital expansion will soon have to confront.
Why This Matters for the EU’s AI Expansion Plans
Ireland’s experience is no longer a local anomaly. The EU’s AI Continent Action Plan proposes tripling data center capacity by 2030 to support continent-wide AI deployment, cloud services, and high-performance computing. But the plan does not resolve the fundamental challenges that Ireland is now experiencing: scarce grid capacity, fossil fuel lock-in, insufficient transmission infrastructure, and potential social pressures caused by rising energy prices.
The EU also risks repeating one of Ireland’s biggest mistakes on renewable energy. Its plan to use public funds to de-risk long-term contracts for hyperscale data centers — through agreements linking governments, utilities and tech firms — could end up funneling taxpayers’ money into deals that let companies claim that they use renewable electricity without actually cutting emissions.
Europe still has time to avoid these pitfalls. Countries planning to host hyperscale clusters should ensure sufficient energy capacity before data centers are built; block direct fossil-gas connections; enforce 24/7 renewable matching; and plan for the availability of clean power for critical public services and housing, rather than ring-fencing new renewables for the tech sector through CPPAs.
The Irish case shows what happens when digital expansion outruns energy planning, and when political dependency on a small group of firms overrides climate commitments. The costs – higher prices, deeper fossil lock-in, strained infrastructure, and derailed emissions targets – are now visible across the country, while access for homes, transport and essential infrastructure remains a low priority. The EU can treat Ireland as a cautionary tale, or repeat its mistakes on a continental scale.
This investigation is published in collaboration with Tech Policy Press and is supported by EDRI and ECNL’s “Investigative journalism & civil society collaboration grants”.
In a way, cleaners were uberized long before Uber. The activity has been, and still is, largely informal, and full-time employment is the exception. In addition, the drive to outsource cleaning to smaller, specialized firms in the 1990s made it almost impossible for cleaners to obtain a secure and decently paid position.
New research. Researchers have studied drivers and couriers extensively, but cleaners have long been overlooked. However, new work published in the last two years is starting to change that. Academics have interviewed dozens of cleaners in Austria, Norway and Berlin, and the EU-funded Origami project, which was completed last month, conducted research in Denmark, France, Ireland, Italy, the Netherlands and Spain.
For cleaners who use apps, one aspect of the work is fully automated: the assessment of the time needed for a job. Most online services set the time needed for a booking based on the size of the premises, which is input by the client. This very rough calculation – not to mention the incentive for clients to lie and input a lower value – often leads to conflicts. Once at the client’s, cleaners have to explain that they cannot complete the job, or that they need to stay longer and be paid accordingly.
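The estimation logic described above can be sketched roughly as follows. This is a hypothetical illustration: the rate of 40 m² per hour and the rounding rule are invented for the example, as real platforms do not disclose their formulas.

```python
import math

def estimated_hours(reported_sqm: float, sqm_per_hour: float = 40.0) -> float:
    """Time budget the app assigns from the client-reported floor area,
    rounded up to the nearest half hour. All parameters are hypothetical."""
    raw = reported_sqm / sqm_per_hour
    return math.ceil(raw * 2) / 2  # round up to nearest 0.5 h

# A client who under-reports a 100 m² flat as 60 m² cuts the paid time
# budget from 2.5 h to 1.5 h -- exactly the conflict cleaners describe.
print(estimated_hours(100))  # 2.5
print(estimated_hours(60))   # 1.5
```

Because the only input is the client’s self-reported size, the system has no way to account for the actual state of the premises, and every discrepancy must be renegotiated on-site by the cleaner.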
Stars. In practice, cleaners have very little leverage. Because a one-star rating can prevent a cleaner from getting new jobs, and because they bear the burden of proof in case of a conflict, many conclude that working the extra time for free is the least-bad solution. In one case in Denmark, a cleaner explained that she checked the size of the house on the property register before accepting a job, in order to ensure that the client’s input matched reality.
Cleaners, who are overwhelmingly female and foreign-born, also face sexual harassment. In this regard, algorithm-powered apps do not change things much. The apps often list cleaners with large pictures, encouraging cleaners to appear more desirable and clients to objectify them.
Platform work directive. In late 2024, European institutions passed the platform work directive, which is set to apply in late 2026 and to make it much easier for gig workers to claim the status of a regular employee. However, law scholars Antonio Aloisi and Nastazja Potocka-Sionek argue that cleaners will not benefit from this clause. Because cleaners set their own prices and receive instructions from clients, the intermediaries might not qualify as employers, they write. There is no obvious hierarchical relationship between an app and a cleaner.
Testimonies from cleaners collected by academics, however, show that support personnel from the apps often do exert a certain amount of pressure. They sometimes encourage cleaners to wait (without compensation) for absent clients to show up on the premises. A cleaner who sets prices too high or too low might also be reprimanded. And some have been told that declining more than three jobs in a row would lead to the termination of their account. While many cleaners do enjoy the flexibility of gig work, they clearly do not reap all of its supposed benefits.
This is an excerpt from the Automated Society newsletter, a bi-weekly round-up of news in automated decision-making in Europe. Subscribe here.
Over the next six months, fellows will pursue stories ranging from the deployment of AI-powered surveillance technologies in public spaces, such as face recognition systems, to access to sensitive information in chatbots and its impact on vulnerable groups, the use of predictive policing systems in local neighborhoods, and cases of AI-facilitated intimate image abuse, including AI-generated child sexual abuse material and the use of non-consensual sexualisation tools.
The fellowship provides editorial and financial support, mentorship sessions with seasoned journalists and researchers in the algorithmic accountability field, and opportunities to publish the resulting investigations both on AlgorithmWatch’s platforms and other relevant media outlets in Europe.
AlgorithmWatch’s reporting fellowship is the only European program of its kind that provides support for both EU-based and non-EU-based applicants, including journalists based in the UK, Türkiye, Georgia or Ukraine. Applicants come from a wide variety of backgrounds — this year we received around 150 submissions from investigative journalists, engineers, lawyers, policy researchers and academics.
Here are the selected candidates for the fifth cohort of AlgorithmWatch’s reporting fellowship. Welcome!

Marta specializes in environmental crimes, migration, gender rights, and indigenous rights. She produces interdisciplinary and cross-media reports with an intersectional and decolonized perspective, and her work has been published in media outlets such as Wired, Voxeurop, Lavialibera, Lifegate, OBCT, Unbias the News, Altreconomia, QCode, and In Genere. She is a member of the journalism collectives Info.Nodes, DatiBenecomune and Clean Energy Wire (CLEW), and has received scholarships from the JournalismFund, the Earth Journalism Network, and the International Journalists Programmes. She has also taken part in investigative journalism workshops by Display Europe and WepodAcademy, and in science journalism programs by the European Research Centre (Frontiers) and the European Geosciences Union (EGU).

Laura is an investigative journalist based in Italy. She mainly writes about state and corporate surveillance for the Italian investigative newspaper IrpiMedia. She has published independent investigations and research on the use of biometric technologies at borders, communities and gender issues. She works as a freelancer for national and international newspapers.

Mayya is a freelance journalist based in Germany. She reports locally from Frankfurt/Hesse, as well as contributing to cross-border investigations. After studying history, she undertook professional training in economic reporting. Her work focuses on gender-related social and political issues, particularly anti-feminist movements and their impact. She often covers digital cultures and their societal effects, including how online groups organize, mobilize, and influence the offline world. As an AlgorithmWatch fellow, she will explore how reactionary actors leverage AI-enabled platforms and data infrastructures and its impact on fundamental rights.

Cécile is a freelance journalist with more than 10 years of experience in international broadcast media and newspapers. Since 2014 she has extensively covered migration, human rights and women’s rights throughout Europe, Africa and Central America. Over the years, her journalistic approach has moved from long-form, long-term reporting to investigative documentary work. Her work has appeared in Mediapart, RFI, ARTE and others. As an AlgorithmWatch fellow, she will investigate how the rise of AI-generated child sexual abuse images is broadening children’s exposure to sexual violence.

Lotte is an investigative journalist from Belgium, currently based in Sweden. She reports on environmental and social issues, with work published in outlets such as De Morgen, Knack, and Eos. She was a 2025 fellow of the European Collaborative Journalism Program by Arena for Journalism. Her investigations have been supported by JournalismFund Europe and IJ4EU.

Carlotta is an award-winning investigative journalist and editor. She covers topics such as gender inequality, digital violence, migration, human trafficking, and mental health. She is passionate about using data, digital tools, and evidence-based methods to create impactful, human-centered stories in innovative ways. Over the last four years, she has worked as a senior visual editor at CNN along with the Data and Graphics, Special Projects and As Equals teams, based between London and Hong Kong. Additionally, she is a trainer and public speaker in data journalism, information design and OSINT tools.

Ana is an educator, activist, and content creator from Brazil, living in Berlin. With over seven years of experience in the pleasure and education sectors, she has collaborated with platforms such as Cheex, Lustery, and The Porn Conversation. Ana is currently the Advocacy Officer for the Digital Intimacy Coalition and the Policy Officer for Digital Rights for the European Sex Workers Alliance. Her work ensures that sex workers, sexual rights defenders, and survivors of tech-facilitated gender-based violence are represented in digital policy making.
We respect the wish of fellows not to be featured with a biography.
The fellowship is sponsored by:

Did you like this story?
Every two weeks, our newsletter Automated Society delves into the unreported ways automated systems affect society and the world around you. Subscribe now to receive the next issue in your inbox!