Sustainable peace requires international justice for all victims of all crimes in Israel and the OPT

States must demonstrate their commitment to international justice to ensure genuine accountability for all victims of war crimes, crimes against humanity and genocide in the Occupied Palestinian Territory (OPT) and Israel, said Amnesty International following the recent conclusion of the International Criminal Court’s (ICC) Assembly of States Parties in The Hague.

“The international justice system is under attack and faces existential threats. There is no greater litmus test for this than in Israel and the Occupied Palestinian Territory. States must demonstrate their commitment to international justice by supporting institutions such as the ICC and protecting their ability to pursue accountability,” said Agnès Callamard, Amnesty International’s Secretary General. 

Amnesty International has extensively documented how Israel has committed and is continuing to commit genocide against Palestinians in Gaza, despite the ceasefire, and how its ongoing system of apartheid amounts to crimes against humanity. Today the organization has also published in-depth research documenting war crimes and crimes against humanity committed by Hamas and other armed groups during and after the attacks launched on 7 October 2023.

“World leaders hailed last month’s UN Security Council resolution setting out a plan for Gaza as a blueprint for sustainable peace. But decades of international crimes cannot be swept under the carpet with deals that ignore accountability and entrench injustice. Truth, justice and reparations are the bedrocks of lasting peace,” said Agnès Callamard.

“Amnesty calls on all those in Israel and the Occupied Palestinian Territory, as well as the international community concerned about the evident flaws of the UN Security Council Resolution, to develop and commit to a roadmap for justice and reparations. This roadmap should aim to end Israel’s genocide, its system of apartheid and unlawful occupation of Palestinian territory, while also addressing crimes under international law by Hamas and other Palestinian armed groups.”

To guarantee genuine, effective and meaningful justice and non-recurrence, Amnesty International recommends that the roadmap be predicated on the complementarity of a variety of justice institutions and mechanisms.

These include ICC investigations into Israeli and Palestinian crimes, which must take place free from any obstruction and with access for investigators and other justice actors. Such investigations should consider Israel’s genocide and crime against humanity of apartheid, as well as crimes committed by Palestinian armed groups before, during and since the 7 October 2023 attacks, with a view to ensuring that all individuals, including – where they are still alive – those most responsible, are brought to justice.

The roadmap should commit states to support and fully cooperate with bodies such as the UN Commission of Inquiry and the ICC. They should enforce ICC arrest warrants and take all necessary steps to ensure the lifting of sanctions and restrictions imposed on Palestinian human rights organizations, which for decades have been documenting violations of international law and representing victims.

In parallel to international mechanisms, states can chart a new course for peace rooted in justice by exercising domestic, universal or other forms of extraterritorial criminal jurisdiction for international crimes committed in the Occupied Palestinian Territory and Israel.  

“Victims of atrocities in Israel and the Occupied Palestinian Territory deserve genuine justice. This does not just mean seeing perpetrators prosecuted and convicted but ensuring adequate and effective remedy and delivering guarantees of non-repetition. There is no escaping the reality that these are crucial steps towards lasting peace and security,” said Agnès Callamard.

Israel’s ongoing genocide, apartheid and unlawful occupation

Two months after the ceasefire was announced and all living Israeli hostages were released, Israeli authorities are still committing genocide against Palestinians in the occupied Gaza Strip with total impunity, continuing to deliberately inflict conditions of life calculated to bring about their physical destruction, without signalling any change in their intent.

Amnesty International recently published a legal analysis of the current situation showing how the crime of genocide continues, along with testimonies from local residents, medical staff and humanitarian workers highlighting the dire ongoing conditions for Palestinians in Gaza. The organization found that despite a reduction in the scale of Israeli attacks, and some limited improvements, there has been no meaningful change in the conditions Israel is inflicting on Palestinians in Gaza and no evidence to indicate that its intent has changed.

At least 370 people, including 140 children, have been killed in Israeli attacks since the ceasefire was announced on 9 October. As part of its genocide over more than two years, Israel has deliberately starved Palestinian civilians, restricting critical aid and relief provisions, including medical supplies and equipment necessary to repair life-sustaining infrastructure, despite some limited recent improvements. It has subjected them to wave after wave of inhumane forced displacement, compounding their catastrophic suffering. Overall, more than 70,000 Palestinians have been killed and over 200,000 injured, many of whom have sustained serious, life-changing injuries.

The objective probability that the current conditions would lead to the destruction of Palestinians in Gaza persists. Yet Israeli authorities have not signalled a change in their intent: they have ignored three sets of binding decisions by the International Court of Justice; they have failed to investigate or prosecute those suspected of responsibility for acts of genocide or hold accountable officials who have made genocidal statements. Israeli officials responsible for orchestrating and committing genocide remain in power, effectively granting them free rein to continue to commit atrocities.

Israel’s genocide against Palestinians in Gaza has taken place in the context of pervasive impunity for its ongoing crime against humanity of apartheid alongside decades-long unlawful occupation of Palestinian territory.

“It is against this backdrop of apartheid and unlawful occupation that Israel deliberately unleashed mass starvation, unprecedented bloodshed, apocalyptic levels of destruction and massive forced displacement, and placed a deliberate stranglehold on humanitarian aid – all illustrations of the ongoing crime of genocide,” said Agnès Callamard.

In the West Bank, including East Jerusalem, Israel’s cruel apartheid system and unlawful occupation have exacted a heavy toll on Palestinians. Israeli military operations, including aerial attacks, have killed at least 995 Palestinians, including at least 219 children, displaced tens of thousands and caused extensive damage to essential civilian infrastructure, homes and agricultural land. The last two years have been marked by an escalation in state-backed settler attacks, leading to killings, injuries and the displacement of Palestinians. OCHA has documented more than 1,600 settler attacks resulting in casualties and/or property damage since January 2025, with Palestinian herding communities in Area C particularly affected by this unrelenting wave of state-backed violence. Despite international condemnation and some restrictive measures adopted by third states against individual settlers and settler organizations, settler violence continues to increase due to Israeli government backing and virtually total impunity.

The Trump peace plan is the latest in a series of fatally flawed initiatives that propose ‘solutions’ sidelining international law, implicitly rewarding Israel for its unlawful occupation, illegal settlements and system of apartheid, which are the root causes of the continuous atrocities Israel inflicts upon Palestinians.

The conditions established during the current ceasefire further entrench Israel’s system of apartheid and its unlawful occupation and compound injustice. Israel’s imposition of a ‘security perimeter’ (buffer zone) in Gaza risks making Israel’s unlawful occupation permanent and deprives Palestinians of their most fertile land. It also risks perpetuating the territorial fragmentation that underpins Israel’s system of apartheid by failing to ensure Palestinians’ freedom of movement between Gaza and the rest of the occupied territory.

Similarly, impunity is enjoyed by Israeli forces responsible for arbitrarily detaining, forcibly disappearing and systematically torturing Palestinian detainees. In a recent review of Israel’s record the UN Committee against Torture described “a de facto state policy of organized and widespread torture and ill-treatment, which had gravely intensified since 7 October 2023” and expressed grave concerns over “widespread allegations of sexual abuse of Palestinian detainees, both men and women, amounting to torture and ill-treatment.”  

“The international community’s willful inaction towards holding Israel accountable for its crimes under international law and the failure to press it into adhering to the recommendations of UN mechanisms and international human rights organizations have entrenched Israel’s unlawful occupation and apartheid and have directly enabled Israel’s genocide against Palestinians in Gaza today,” said Agnès Callamard.

Crimes against humanity committed by Hamas and other armed groups

It is critical to also ensure accountability for crimes committed by Palestinian armed groups. More than two years after the Hamas-led attacks on southern Israel on 7 October 2023, accounts of the atrocities committed by Palestinian armed groups on that day and their subsequent treatment of those held in captivity in Gaza are still emerging. Survivors of the attacks, including former hostages, as well as their families, continue to shed light on their own experiences, while calling for justice and redress.

Amnesty International is publishing a report today that sets out how Hamas’s military wing, the Al-Qassam Brigades, and other Palestinian armed groups committed war crimes and crimes against humanity during their assault on southern Israel, and against hostages held in Gaza thereafter.

Amnesty International has documented how, in the early hours of 7 October 2023, Hamas forces and other Palestinian armed groups conducted a coordinated attack targeting mostly civilian locations. Around 1,200 people were killed – more than 800 of them civilians, including 36 children. The victims were primarily Jewish Israelis, but also included Bedouin citizens of Israel, and scores of foreign national migrant workers, students and asylum seekers. More than 4,000 people were injured, and hundreds of homes and civilian structures were destroyed or rendered uninhabitable. 

Through analysis of the patterns of the attack, evidence including the specific content of communications between fighters during the attack, and statements by Hamas and the leaders of other armed groups, the organization found that these crimes were committed as part of a widespread and systematic attack against a civilian population. The report found that fighters were instructed to carry out attacks targeting civilians.

“Our research confirms that crimes committed by Hamas and other Palestinian armed groups during their attacks on 7 October 2023 and against those they seized and held hostage were part of a systematic and widespread assault against the civilian population and amount to crimes against humanity,” said Agnès Callamard.

“Hamas and other Palestinian armed groups showed an abhorrent disregard for human life. They deliberately and systematically targeted civilians in locations such as their homes, or while at a music festival, with the apparent goal of taking hostages, which amounted to war crimes. They deliberately killed hundreds of civilians, including by using gunfire and grenades to drive terrified people, including families with young children, out of their safe rooms and hiding places or attacked them while they fled. Amnesty International also documented evidence that some Palestinian assailants beat or sexually assaulted people during the attack and mistreated the bodies of those they had killed.”

Hamas has claimed that its forces were not involved in the targeted killing, abduction or mistreatment of civilians during the 7 October 2023 attacks and that many civilians were killed by Israeli fire. However, based on extensive video, testimonial and other evidence, Amnesty International has concluded that, while some civilians were indeed killed by Israeli forces as they sought to repel the attack, the vast majority of those who died were intentionally killed by Hamas and other Palestinian fighters who targeted civilian locations far from any military objectives. Palestinian fighters, including Hamas forces, were likewise responsible for abducting civilians from multiple locations and committing physical, sexual and psychological abuse against people they captured.

Another 251 people – mostly civilians, including older people and young children – were taken to Gaza as hostages on 7 October 2023. Most were seized alive and held in captivity, but 36 of them were reportedly already dead when captured. They were held for weeks, months or, in some cases, over two years. Some hostages who returned alive have described, to Amnesty International or in public forums, being chained in underground tunnels for some or all of their captivity and enduring intense violence, deprivation and psychological abuse, including threats of execution. Some hostages were subjected to sexual violence, including sexual assault, threats of forced marriage or forced nudity. At least six hostages were killed by their captors.

Amnesty International interviewed 70 people, including 17 people who survived the 7 October 2023 attacks, victims’ family members, forensic experts, medical professionals, lawyers, journalists and other investigators. Researchers visited some of the sites of the attacks and reviewed over 350 videos and photos of scenes from the attacks and of people held in captivity in Gaza. 

Amnesty International’s investigation found that Hamas and other Palestinian armed groups committed the crimes against humanity of “murder”; “extermination”; “imprisonment or other severe deprivation of physical liberty in violation of fundamental rules of international law”; “enforced disappearance”; “torture”; “rape… or any other form of sexual violence of comparable gravity”; and “other inhumane acts”.

“Israel’s appalling record of violations against Palestinians including decades of unlawful occupation, apartheid against Palestinians and its ongoing genocide against Palestinians in Gaza, can in no way excuse these crimes. Nor does it relieve Palestinian armed groups of their obligations under international law. The violations by Palestinian armed groups in the context of the 7 October 2023 attacks must be recognized and condemned as the atrocity crimes that they are. Hamas must also unconditionally return the remaining body in Gaza of a person killed during the attacks as soon as it is located,” said Agnès Callamard.

In recent weeks, Prime Minister Benjamin Netanyahu announced the formation of a committee to examine the government decision-making surrounding the 7 October 2023 attacks. However, this move has been widely criticized, including by survivors of the attacks, and families of those killed, for a lack of independence and a failure to follow precedents of judge-led commissions of inquiry.

The authorities of the State of Palestine should publicly acknowledge and denounce the serious violations of international law committed by Palestinian armed groups. They should also conduct independent, impartial and effective investigations to identify those suspected of violations and crimes and fully cooperate with international investigative mechanisms, including by sharing evidence.

International justice needed for all victims

The ongoing ICC investigation into the “situation in Palestine” and the arrest warrants the court has issued against Prime Minister Netanyahu and former Defense Minister Yoav Gallant on charges of war crimes and crimes against humanity remain critical to the prospect of ensuring genuine accountability.

Holding senior Israeli officials accountable for their crimes under international law is an essential step towards bringing Israel’s genocide in Gaza to an end, restoring faith in international law and ensuring that all victims of war crimes and crimes against humanity are granted access to justice, truth and reparations.

The ICC should also continue to investigate crimes committed by Palestinian armed groups before, during and after the 7 October 2023 attacks, with a view to ensuring that individuals suspected of responsibility for crimes against humanity and war crimes are brought to justice.

“Accountability is non-negotiable. The perpetrators of international crimes must face justice and the institutions they represent must commit to a new path rooted in human rights and international law, including by adopting legislation to prevent recurrence of future violations,” said Agnès Callamard.

“All parties must acknowledge their responsibility and cooperate with investigative bodies and international justice mechanisms such as the UN Commission of Inquiry and the ICC by implementing their recommendations and allowing them to collect, preserve and analyse evidence for accountability. Victims must be heard, acknowledged, and granted effective remedy, including reparations. Without such concrete steps to ensure truth and justice there can be no lasting peace.”


Swe Win: “Photojournalist Sai Zaw should be able to report freely. He should not be in prison.”

In 2023, celebrated photojournalist Sai Zaw Thaike travelled to Rakhine State, determined to report on the widespread destruction caused by Cyclone Mocha. However, after a week he was arrested, interrogated and allegedly beaten. In September 2023 he was sentenced to 20 years in prison with hard labour after a trial that lasted just one day.

Sai Zaw’s friend and colleague, Swe Win, editor of Myanmar Now, is campaigning for his release, together with organizations like Amnesty International. Since 2021, more than 200 journalists have been imprisoned and at least seven have reportedly been killed in Myanmar. Media outlets have been banned – including Myanmar Now, which now operates from Australia – and journalists have been forced into exile.

In this piece, Swe Win describes the reality of being a journalist in a country under military control and shares insights into Sai Zaw’s life in prison.

I lead an independent news agency called Myanmar Now, where my team and I report on the most critical issues facing Myanmar, including politics, conflict and human rights abuses.  

Our team of professional journalists delivers accurate reporting at a time when our country is once again under a military dictatorship backed by powerful allies such as China, Russia and India.

I used to work closely with Sai Zaw – a well-known photojournalist in Myanmar. Brave, fearless and unafraid to defy authorities, Sai Zaw was at the forefront of a number of major news events in our country.

In 2021, as a result of the military coup, our country became more violent and journalism became an extremely dangerous profession. Journalists started fleeing the country, our newsroom was raided and we were all declared “terrorists”.

Sentenced to 20 years in prison

Things took a turn for the worse when the military came to power and Sai Zaw was one of the first people advised to leave the country after the coup. However, he decided to stay and document the junta’s violent crackdown, moving from one house to another, like a fugitive. He was living and working underground in Yangon as a photojournalist for our news outlet.

When Cyclone Mocha slammed into our country, he was determined to report on it, despite the scrutiny he was under. He travelled to Rakhine State, hundreds of miles from his hometown, and embedded himself with a relief team. However, someone tipped off military intelligence, and Sai Zaw – my colleague and my friend – was arrested on 28 May 2023 and charged with causing fear and spreading false news.

Sai Zaw was sentenced to 20 years in prison, with hard labour. His trial lasted one day. I was shocked. This was one of the longest known prison sentences handed down to a journalist since the 2021 military coup in Myanmar.

The prison conditions are horrific

The prison conditions for Sai Zaw are horrific. Earlier this year he was allegedly beaten. He has been targeted not only for his background in journalism, but for speaking out on behalf of all the fellow prisoners who are suffering abuse in front of him.

Under military rule, lawlessness prevails. And despite his ordeal, he refuses to remain silent.

It’s been incredibly difficult to see the impact Sai Zaw’s arrest has had on his family. His mother is older now and his younger brother is disabled, having contracted polio as a child. Sai Zaw is the breadwinner and primary carer for his family members.

Only family members are permitted to visit Sai Zaw in prison, which puts added pressure on them. As a friend and colleague, I am not allowed to go and see him, even though I desperately want to.

Sai Zaw wants to be able to report freely

You could say Sai Zaw’s defiant nature, coupled with his passion for journalism, is what brought him recognition. His aim was to defy the age-old power structure in our country through his camera, and that’s what drove him to become one of the bravest, best photographers in Myanmar.

Sai Zaw Thaike is a photographer for the independent Myanmar media outlet Myanmar Now. In 2023, he was sentenced to 20 years in prison for taking photographs of the aftermath of a cyclone in the country.

He started as a reporter, driven by curiosity, reporting on socio-economic issues affecting communities. He also reported on topics such as political prisoners, land confiscation by the military, and the struggles of factory workers.

Over the years he has worked as a photojournalist for almost all the major national news outlets, and he started getting recognized for his powerful coverage of major human rights issues, including the military crackdown on student protests and the rise of an ultra-nationalist movement targeting Muslim minorities in our country.

Understanding our reality

All Sai Zaw wants is to live in a free country, unfettered by military rule. He should be able to report freely. He should be at home, spending time with his family and doing the things he loves, like playing football, watching Manchester United and seeing friends. Sai Zaw should be with his family, whom he adores. Instead, he is being beaten and subjected to periods of solitary confinement.

As long as Sai Zaw and other journalists remain in prison, simply for doing their work, people around the world must understand that the regime we are under is not changing for the better. Some may think that a stable dictatorship is better than war, but that is a misguided assumption. We need people to understand the reality of what we’re living.

Signs of solidarity and hope

I am calling for the military to immediately release Sai Zaw and I hope others will join me. I am so pleased Sai Zaw is part of Amnesty International’s Write for Rights campaign this year. It really gives journalists in Myanmar hope. Every letter written and every petition signed makes me feel like we’re taking a step forward. Sai Zaw and others have been cut off from the outside world, from their families and loved ones, and these acts of solidarity mean so much for their psychological survival. I know that any sign of solidarity and hope boosts Sai Zaw’s morale.

As journalists, we deserve to have our right to report freely supported. We deserve to live in a just society, where we can do our jobs, protecting our communities and promoting truth and justice in a country that is free.

This story was originally published on The Diplomat.

Free Myanmar photojournalist Sai Zaw



Damisoa: we left our drought-stricken land and found new struggles

Damisoa is from the Androy region at the very southern tip of Madagascar. In 2021, he and his family were forced to leave their home due to droughts worsened by climate change, which meant there was not enough food for them to survive there.

People displaced by famine and now living in northern Madagascar urgently need humanitarian assistance. But aid is currently almost exclusively concentrated in drought-stricken southern Madagascar.

Damisoa tells his story of displacement and survival and calls for the government to take urgent steps to address the hunger, homelessness and poor healthcare faced by him and others displaced by drought in Madagascar.

I should not have left my ancestral land, in southern Madagascar, but we were forced to leave. Famine had attacked our land.

I didn’t have much to sell to afford the journey: no goat or zebu (cattle), so we sold the cooking pots and the furniture from our home. That made us enough money for our family of 10 to leave. But it didn’t get us far.

We stopped in Toliara and then again in Antananarivo. Each time, we found whatever work we could to raise money for the next bus fare: gem mining, menial work, cleaning and laundry. The whole family, including my wife and my children, worked hard to raise money.

Eventually we made it to Ambondromamy, in the Boeny region, in northern Madagascar. We were told we could earn a living in the forest by burning charcoal and growing corn and mung beans. Straight away, we began cultivating our crops and producing charcoal.

Then the authorities came. As newcomers, we were afraid: when we saw their guns, we ran away. Some of us were arrested while others were left behind.

Now we have a place to stay, but we are still suffering

Eventually the local government found a solution for vulnerable people by resettling us in some small huts in nearby Tsaramandroso. They built a place for people to stay. I did not bring my family this far for us to die, but to save our lives. So, we accepted the offer of a place to live.

However, once we were settled, we continued to struggle. Sleeping in the huts does not feel like being indoors. Especially during the rainy season (every December to April), it feels like a thunderstorm inside: the walls let the rain in, and our space is flooded.

Resettlement site in Tsaramandroso, a municipality located in the Boeny region of northwestern Madagascar.

The water around us is deadly

When the water is high, during the rainy season each year, it kills people. This water has a monster and invisible creatures in it: the river is infested with crocodiles. It is also very fast flowing, and people have died trying to cross, so we are afraid of passing through until the tide is lower.

We do not have a boat to cross the river, so we use yellow jerry cans as an alternative. We attach the jerry cans to a long rope held on the other side of the water and pull them across. We are never sure whether it will break or not. Several people help: some know how to swim and can help others cross by carrying them on their backs.

During the rainy season, Damisoa’s community is very isolated due to the fast-flowing and crocodile-infested seasonal river that surrounds the area where they were resettled.

When there is no more to share, we sleep hungry

We do not have any seeds or food to eat. Because of this poverty, we ignore the danger, and we try to cross the water because we need to look for food. I feel like we live in an abyss, not on earth. Where can we go when we have this water around us?

We would die if we did not help each other. Whenever one of us, from among the 33 households, has something, we share it. When there is no more to share, we sleep hungry. We take lalanda leaves (wild sweet potato leaves), boil them with water and salt, and that is what we eat to survive until the next day.

We are fearful of getting sick

My sister went into labour during the rainy season, when the water was high. We did not have enough money to bring her to the doctor. Instead, we walked three hours, crossing the deadly river to see the matron.

Sadly, my newborn niece died because her mother, weakened by hunger and thirst, could no longer breastfeed.

We are fearful of falling sick because we do not have any health insurance. We are poor, so we are careful to avoid complications.

We would only struggle more if we moved elsewhere

We stay here in the North, because we struggled more when we were back in our ancestral land, in the South. And if we leave this place, we will face more struggles. If we leave again, we will be displaced again to a new place, alone, with no government support or humanitarian aid.

We would rather suffer here. It is better to stay with people you are acquainted with. And the land we are staying on is the only place the government has made available for people in our situation.

So, we prefer to stay, but we struggle. We do not have a plough to till the land, we do not have oxen. But we stay here to avoid more struggle.

I am not ashamed to demand humanity 

As the head of the village, I represent the residents here. It’s important to me that I do them justice in this role and use this position to amplify my community’s voices. We are not ashamed of our poverty, which is due to the lack of government support given to us.

We are not ashamed to talk about our struggles. There is nothing to hide. If we let shame stop us from speaking out, all of our people could die.

This is where we live, this is our situation. We ask the government to consider our request for support. We look forward to their assistance.

Join Damisoa’s fight for climate-displaced people in Madagascar

Sign the petition and urge the government to act now to support Damisoa and others displaced by drought across Madagascar who are facing hunger, homelessness and poor healthcare.


Australia: Social media ban for children and young people an “ineffective quick fix” that will not prevent online harms 

Responding to a new Australian law prohibiting children and young people under 16 from using social media, Damini Satija, Programme Director at Amnesty Tech, said:

“A ban is an ineffective quick fix that’s out of step with the realities of a generation that lives both on and offline. The most effective way to protect children and young people online is by protecting all social media users through better regulation, stronger data protection laws and better platform design. Robust safeguards are needed to ensure social media platforms stop exposing users to harms through their relentless pursuit of user engagement and exploitation of people’s personal data.  

“While social media platforms’ practices are harmful to younger users, young people also have a right to express themselves online, access information and participate in the digital town square. Social media provides opportunities for inclusion, connection, creativity, learning, health information and entertainment, all of which are beneficial to their mental health.  

“Many young people will no doubt find ways to avoid the restrictions. A ban simply means they will continue to be exposed to the same harms but in secret, leaving them at even greater risk. The Australian government must empower young people with education and tools to navigate social media safely. It must also put pressure on social media platforms to stop putting profit over the safety of users. We must build a pathway towards a digitally safe society, relying on regulation as one of the tools at our disposal.” 

Background  

From 10 December, social media companies must prevent under-16s in Australia from opening accounts and remove existing accounts from their platforms.  

Other countries are considering similar measures. On 26 November, Members of the European Parliament (MEPs) announced their support for an EU-wide minimum age of 16 for access to social media, video-sharing platforms and AI companions. Last week Malaysia also announced plans to introduce a blanket ban for children under 16 years. 


Algorithmic Accountability Toolkit

A guide to uncovering and challenging state automation



Glossary

AI Lifecycle: AI systems rely on a common set of processes, such as model conceptualization (defining the task the model aims to address), data collection, data processing, model design, model implementation, and model evaluation. These different components also commonly constitute the sequential stages when developing an AI system and are called the “AI lifecycle”.
Algorithm: An algorithm is a list of mathematical rules which solve a problem. The rules must be in the right order – think of a recipe. Algorithms are the building blocks of Artificial Intelligence and Machine Learning. Algorithms enable AI and ML technologies to train on data that already exists about a problem to develop models which are able to solve problems when working with new data.
Algorithm Registers: Algorithm registers are “consolidated directories providing information about algorithmic systems used by public agencies in different jurisdictions”. They can take the form of webpages, databases, or datasets, available publicly.
Algorithmic Bias: Algorithmic bias refers to systematic and repeatable errors in an algorithmic system that create unfair outcomes, such as privileging one demographic group over another, due to biases embedded in data, model design, or deployment context.
Algorithmic decision-making system: An algorithmic system that is used in (support of) various steps of decision-making processes.
Artificial Intelligence (AI): There is no consensus definition of AI; it broadly describes any technique or system that allows computers to mimic human behaviour.
Automated decision-making system: An algorithmic decision-making system where no human is involved in the decision-making process. The decision is taken solely by the system.
Biometric (surveillance) technologies: Surveillance technologies used to identify human body characteristics using biologically unique markers such as fingerprints, eye retinas and irises, voice patterns, facial patterns, and hand measurements.
Black-box algorithm: An algorithmic system where the inputs and outputs can be viewed, but the internal workings are unknown. This terminology most readily applies to more complex ML algorithms.
Facial recognition technology (FRT): A computer vision technique used to identify the faces of humans on the basis of images used for the prior training of an algorithm. It is a type of biometric (surveillance) technology.
Fairness: There are numerous suggested methods, approaches and definitions for embedding fairness into AI systems in order to avoid algorithmic bias. These are all predicated on the idea of eliminating prejudice, discrimination or preference for certain individuals or groups based on a characteristic in the output of an AI system. Though fairness methods are an important element of debiasing AI systems, we generally consider them a limited tool in and of themselves.
Machine Learning (ML): A subfield of Artificial Intelligence. A technique to provide AI with the capacity to learn from data to perform a task (either specific or general), and when deployed, ingest new data and change over time.
Predictive algorithms: The use of AI techniques to make future predictions about a person, event or any other outcome.
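
To make some of these terms concrete, the short sketch below contrasts a hand-written, rule-based algorithm with a minimal “learned” model of the kind described above. It is purely illustrative: the features, thresholds and training data are invented for this toolkit and do not describe any real deployed system.

```python
# Illustrative only: invented features, rules and data; not a real system.

# 1. A rule-based algorithm: an explicit, human-written list of rules.
def rule_based_flag(claim):
    """Flag a claim for manual review using fixed, fully inspectable rules."""
    score = 0
    if claim["reported_income"] == 0:
        score += 1
    if claim["months_at_address"] < 6:
        score += 1
    return score >= 2  # the decision logic is visible and can be challenged


# 2. A (very) small "machine-learning" step: the decision rule is learned from
#    historical decisions rather than written by hand. Here we learn a single
#    income threshold that best reproduces past flagging decisions.
def train_threshold(history):
    """Pick the income threshold that best matches past flag decisions."""
    best_threshold, best_accuracy = None, -1.0
    for threshold in sorted({claim["reported_income"] for claim, _ in history}):
        correct = sum(
            (claim["reported_income"] <= threshold) == flagged
            for claim, flagged in history
        )
        accuracy = correct / len(history)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = threshold, accuracy
    return best_threshold


# Past decisions used as training data. If these past decisions were biased,
# the learned rule quietly reproduces that bias (see "Algorithmic Bias" above).
history = [
    ({"reported_income": 0}, True),
    ({"reported_income": 200}, True),
    ({"reported_income": 900}, False),
    ({"reported_income": 1500}, False),
]

threshold = train_threshold(history)
new_claim = {"reported_income": 150, "months_at_address": 3}

print("Rule-based flag:", rule_based_flag(new_claim))
print("Learned flag:   ", new_claim["reported_income"] <= threshold)
```

The contrast matters for investigators: the rule-based logic can be read and challenged line by line, whereas the learned threshold depends entirely on the historical decisions it was trained on, which is where bias and opacity enter real systems at far greater scale and complexity.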

Introduction

Over the past decade, most governments and some regional blocs have adopted and implemented digital transformation strategies, which aim to digitize state functions and public services. More recently, they have increasingly adopted data-driven technologies which introduce automated, algorithmic or AI-driven components into the many functions that sit within a government’s mandate. For example, as illustrated in the simple sketch after the list below, these could include automated systems which:

  • Assess or “predict” the risk that welfare claimants are committing fraud.
  • Predict the risk that a person commits a crime.
  • Triage visa applications.
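
To illustrate, in a purely hypothetical way, how such a system turns policy choices into automated outcomes, the sketch below triages a fictional application using a handful of invented risk factors and weights. None of the feature names, weights or thresholds are drawn from any real government system; they are assumptions made for demonstration only.

```python
# Hypothetical sketch of an automated triage score; every feature, weight and
# threshold here is invented for illustration and describes no real system.

WEIGHTS = {
    "incomplete_documents": 2.0,
    "previous_refusal": 3.0,
    "low_declared_income": 1.5,
    # A proxy variable like this can encode structural bias: applicants from
    # certain regions are scored as "riskier" regardless of their own conduct.
    "applicant_region_risk": 4.0,
}
REVIEW_THRESHOLD = 5.0


def triage_score(application: dict) -> float:
    """Sum the weights of all risk factors present in the application."""
    return sum(WEIGHTS[factor] for factor, present in application.items() if present)


def triage_decision(application: dict) -> str:
    """Route the application based on its score; higher means more scrutiny."""
    if triage_score(application) >= REVIEW_THRESHOLD:
        return "manual review / delay"
    return "fast track"


applicant_a = {
    "incomplete_documents": False,
    "previous_refusal": False,
    "low_declared_income": True,
    "applicant_region_risk": True,   # penalized for where they come from
}
applicant_b = {
    "incomplete_documents": False,
    "previous_refusal": False,
    "low_declared_income": True,
    "applicant_region_risk": False,
}

for name, application in [("A", applicant_a), ("B", applicant_b)]:
    print(name, triage_score(application), triage_decision(application))
```

Two applicants with identical personal circumstances receive different outcomes solely because of the proxy variable, which is the kind of disparate impact documented in the investigations referenced throughout this toolkit.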

These automated decision-making (ADM) systems, deployed ostensibly to deliver cost-efficiency and/or enhance human decision-making, have been widely reported to:

  • Exclude people from access to essential services.
  • Replicate inequities along racial, gender, migration status, disability and socio-economic lines.
  • Leave impacted people with limited ability to challenge a decision made about them, and little to no recourse to remedy.
  • Clamp down on the right to peaceful protest through deploying mass surveillance technologies at scale, which particularly impact already marginalized communities.

As AI use continues to become central to everyday life and the functioning of society, tensions are constantly emerging between the anticipated advantages of technology use and concerns about human rights violations. These tensions are exacerbated by the lack of proper oversight or regulation of technology use, the absence of clear red lines and safeguards where technology use violates rights, and the lack of clear definitions of the types of human rights violations that flow from technology use. Researching the impact of these systems is challenging for a myriad of reasons, ranging from the opacity of their deployment to the difficulty of accessing the people and communities that are ultimately impacted by the decisions they make.


About this toolkit 

This toolkit is designed for anyone looking to investigate and challenge the use of algorithmic systems in the public sector. It aims to synthesize the learnings of Amnesty International’s work in this area, particularly that of its Algorithmic Accountability Lab, which specializes in researching, campaigning and advocating on issues related to state use of automated systems. Amnesty International has conducted multiple investigations into such systems over the past three years, and this toolkit captures many of the learnings and tried-and-tested methods from those. The toolkit not only details how to research these opaque systems and the human rights violations that result from them, but also lays out comprehensive tactics for those working to end these abusive systems by seeking change and accountability via campaigning, advocacy or litigation.

Who can use this toolkit?

This toolkit is for civil society organizations (CSOs), journalists, and community organizations. In short, it aims to provide information to:

  • Anyone seeking greater transparency on, and seeking to uncover, AI/ADM systems that may be impacting their lives in key areas, or the lives of communities they directly work with.
  • CSOs/Investigators working with impacted individuals and community organizations to uncover these systems and seek accountability and an end to abusive systems. 

Many of our reflections and learnings relate to conducting algorithmic accountability work within an organization with a global remit; however, we have also included additional considerations for community organizations working directly with impacted communities based on our learnings from collaborating with them.

This toolkit takes a human-rights-based approach to algorithmic investigations and to seeking accountability and an end to abusive systems. It synthesizes Amnesty International’s own approach while also sharing lessons from other case studies. This approach has four core elements:

1. Human rights frameworks and research

Much of the work in the algorithmic accountability sphere has explained the risks and harms of AI systems through an AI ethics framework, which, whilst valuable in its own right, does not explicitly situate the harms within an International Human Rights Law (IHRL) framework. IHRL is legally binding on states, and drawing on IHRL means researchers and organizations do not have to reinvent the wheel when trying to challenge systems through campaigning, litigation and advocacy work, because states already have existing commitments under IHRL. Equally, where states partner with, or procure systems from, private companies, researchers can draw on the UN Guiding Principles on Business and Human Rights, which set out the responsibilities private companies have to respect human rights in their business operations.

Whilst IHRL can sometimes be complex and hard to understand, framing the risks and harms of algorithmic systems in human rights terms can be more accessible for many people who are not tech-savvy. It demonstrates that the risks and harms are not unique phenomena caused purely by modern technology, but continued reflections of deep-rooted societal issues that can be addressed through clear human rights standards which many have shed light on and challenged before. The human rights framework clearly outlines the outcomes we wish to see and prevent regardless of the technology in use (for example, everyone has the right to equality and non-discrimination), rather than getting caught up in discussions about new regulations and new protections, which can be tied to hype cycles around new technologies and distract from strong algorithmic accountability outcomes. Whilst AI and algorithmic systems may be technical in nature and innovative technical approaches can be used to examine them, these are strengthened when complemented by human rights research methods that highlight people’s experiences and testimonies, which are essential to fully understanding how systems have affected the rights of those subjected to them.

2. The importance of people’s stories

Centering the stories of people and communities helps demonstrate the issues caused by algorithmic systems in a way that is not obscured behind technical language and is easier to understand. Too often, tech accountability work has zeroed in solely on the technical aspects of a system. This fails to ground the research in the lived experience of those subjected to and impacted by the system, can distract from deeper financial or political motives at play, and undermines the power of affected people and communities to challenge the system. Additionally, research which focuses solely on the technology without meaningful participation of impacted people and communities can gesture towards false notions that any issues can be solved through technical fixes.

Understandably, sometimes impacted individuals and communities may be hesitant to speak publicly about their experience due to fear of reprisals or other security-related concerns. It is crucial that any research with affected people or communities incorporates participatory and consent-led approaches to ensure their experiences and stories are not gathered, documented or told in extractive ways.

This includes recognizing that many people we will work with have witnessed or experienced traumatic events. It is crucial that, in centering people’s stories, any research, advocacy, and campaigning takes a trauma-informed approach. For example, where it pertains to interviewing, this includes adapting techniques accordingly, and ensuring the physical, emotional and psychological safety and well-being of the interviewee, the interviewer and any other team members.

3. A sociotechnical framing

A “sociotechnical” approach to understanding an algorithmic system’s impacts means looking at the technology within the political, social, economic and cultural incentives and environmental factors that give rise to its development and deployment in the first place. The use of the word “system” when referring to AI or algorithmic tools is an intentional choice and is used to denote the sociotechnical aspect of the technology. It highlights that these tools are “intricate, dynamic arrangements of people and code” and cannot be reduced down to the technology component, divorced from the structural contexts in which they operate. Espousing a sociotechnical stance also means recognizing that technology is not neutral. Incentives, structural systems of power and oppression, systemic inequity, and policy environments all get baked into technology and reproduced by its use. It is important to recognize that this is not a passive process; there are active choices being made at every stage of policy development and in the creation of automated systems. For example, in the context of Facial Recognition Technologies (FRT), this means recognizing that while AI can be used as a surveillance tool, marginalized communities are already generally subject to greater state surveillance.

The approach chosen to investigate any system must reflect this and any investigation should aim to employ a variety of research methods that examine the political, social, economic and cultural factors underlying how a system is developed and deployed.

4. Research to seek change

Documenting the evidence of how an algorithmic system causes harm is not enough on its own to effect change. Raising awareness is just one step. After publication of an investigation, it is important that researchers and organizations continue to pursue accountability and justice for those impacted through advocacy, litigation and other means, and where they cannot, ensure there is an organization in place equipped to do so (this is also critical to ensuring long-term impact for affected individuals and communities). This is another way in which the IHRL framework is a critical toolkit for algorithmic accountability work: its basis in enforceable law provides, by extension, the foundation on which to delineate clear lines around acceptable and unacceptable uses of technology. This means any research should be accompanied by strategies for how it will help effect change – this may be through:

  • Providing evidence to inform advocacy positions and calls.
  • Exposing an issue, forcing governments and companies to address it.
  • Building a campaign and mobilizing people to take action.
  • Using the legal system to challenge an algorithm.

How to use this toolkit?

This toolkit is designed for both people already familiar with algorithmic accountability issues and those who may not have worked on it before.

We recognize that issues relating to algorithmic accountability intersect with the remit of many different organizations, not only those focused on digital rights. Many investigations have highlighted how algorithmic systems target and harm specific communities (for example, a system that unfairly targets women and people from migrant backgrounds for welfare fraud), meaning algorithmic accountability is highly relevant for organizations and advocates working across numerous social justice issues.

Not all work on algorithmic accountability needs to take the form of a long-form research investigation into a specific system. Advocacy, campaigning and other pieces of work are equally important, and this toolkit is designed to provide guidelines and reflections relating to algorithmic accountability as a whole.

This means we have split the toolkit into distinct chapters, either representing specific stages of a project or, in some cases, overarching considerations and reflections. Below, we provide broad recommendations for which chapters may be relevant for your project:

  1. Project Scoping: A chapter focused on activities that can be helpful to scope your work on algorithmic accountability
     • Recommended for: Researchers, communications experts, campaigners, advocates
  2. Project Goals & Lifecycle: A chapter focused on activities and considerations
     • Recommended for: Researchers, communications experts, campaigners, advocates
  3. Project Ethics and Principles: A chapter focused on ethical considerations and general principles for approaching your project on algorithmic accountability
     • Recommended for: Researchers, communications experts, campaigners, advocates
  4. Obtaining Access to Information: A chapter focused on how to try to access more information on a specific algorithmic system
     • Recommended for: Researchers
  5. Human Rights Research: A chapter focused on framing the harms of algorithmic systems within Human Rights Law, and how to approach conducting human rights research on these systems
     • Recommended for: Researchers
  6. Algorithmic Auditing through Empirical Investigations: A chapter focused on the technical side of conducting empirical research on algorithmic systems
     • Recommended for: Researchers, technologists
  7. Affecting change after the investigation through advocacy methods and strategic communications: A chapter focused on advocacy and strategic communications approaches
     • Recommended for: Researchers, communications experts, campaigners, advocates
  8. Routes to Accountability and Justice: A chapter focused on pursuing justice and accountability for algorithmic harms in a variety of ways, including strategic litigation, national supervisory and equalities mechanisms, and community mobilization
     • Recommended for: Researchers, communications experts, campaigners, advocates

Project scoping

Investigating the use of Artificial Intelligence and/or automated decision-making tools by governments around the world is challenging. In the select high-profile investigations that have been published, such as those into systems in Rotterdam (Netherlands), France and Denmark, researchers have relied upon years of persistent work to demystify a piece of technology that has been deployed, with the very challenging role of holding governments to account for their use of AI often falling to investigative journalists and civil society rather than being enshrined in regulatory and governance mechanisms. In the absence of strong regulation and governance, this toolkit is intended to provide investigative researchers and civil society with enhanced resources with which to continue conducting this critical accountability work.

Embarking on a multi-year investigative project can be daunting, particularly with the knowledge that such projects can require committing substantial time and resources without the assurance that a project is viable. An in-depth scoping phase is essential to understand the viability of an algorithmic investigation project. It is crucial to acknowledge that whilst researchers and organizations can and should retain flexibility to pivot the nature of an output to mitigate this, not all projects will be ultimately successful.

This chapter focuses on the scoping phase of a project, which serves to sharpen the goals of the research, identify potential methodologies that can be employed, and build out theories of change for follow-up advocacy and campaigning activities.

The scoping phase of a project will look different depending on the remit of your organization, the interests of researchers or those otherwise seeking information, and the geographic focus of the project. This necessarily requires separate considerations depending on the goals of the project. For example, an investigative outlet with a global remit will have a different scoping phase from an organization working at the city level that is seeking more information about how these systems are impacting the people who directly make up its membership base. Equally, if you are an organization operating with a global remit or greater resources, there are additional considerations to keep in mind during the scoping phase, including ensuring that your efforts are not extractive and that additional work is done to acknowledge power structures and mitigate these imbalances from the start of the project itself.

Scoping phase aims

A scoping phase is a fact-finding mission aiming to build a picture of:

  1. Where and by whom algorithmic systems might be being deployed
  2. Whether they are having detrimental impacts on people’s lives and in what ways
  3. What information is available on them in the public domain currently
  4. What further avenues for research are possible
  5. What options exist for challenging systems and seeking change

The following sections offer scoping considerations for different types and scales of organizations, as these will vary depending on factors such as your expertise, resources and relationships with affected communities. The sections should not be considered entirely separate, and users of this toolkit may adapt and combine the lists as necessary.

Scoping considerations for organizations and researchers working with a global remit

Geography

For organizations or researchers with no set geographical focus, or who work globally, a critical and often challenging question at the outset of any potential project is where to focus an investigation geographically. Time-intensive scoping phases are necessary to identify a specific geographic region of interest and subsequently invest time in understanding the local context in which algorithmic systems are being deployed. Some key questions to ask yourself at this stage:

  • Does your project centre on a specific technology? Some projects may be centered around a specific area (for example, welfare), whilst others may focus on a specific technology (such as Facial Recognition Technologies (FRT)). In the latter case, the location of your research may be guided by the highest-stakes contexts in which it is deployed and tested, or where you see a specific place that is under-researched and can set a precedent for research in other geographic areas. For example, Amnesty International’s research in this area focused on the deployment of FRT in:
    • The Occupied Palestinian Territories, where experimental new technologies are tested and deployed to uphold apartheid.
    • In New York City, where FRT was used to surveil racialised communities during the Black Lives Matter protests in 2020.
  • Are you duplicating existing work? Many countries around the world now have a number of organizations and communities working on issues relating to digital rights. It is important to ask whether you are best positioned to conduct the research at all, and only proceed if your work will be additive to local efforts. Build relationships with local activists, journalists and organizations to ensure you are not duplicating existing research that has been or is being undertaken. Where possible, aim to hand over the lead to local organizations if they have the capacity and resources to take on the work.
  • What are community organizations asking for? In some cases, community organizations may have very specific asks and requests about where the involvement of international organizations and researchers would be useful. Whilst this will be a guiding principle across all your work, some discrete projects may arise from specific requests. For example, in Serbia, Amnesty International was approached by the A11 Initiative and others to support work investigating the impact of the Social Card Law.
  • What advocacy opportunities are there at a local, national, or regional level? Research will have the most impact where it can be converted into clear calls for action, so identifying advocacy opportunities in advance is important. Proposed regulation or legislation in a jurisdiction, for example the EU’s Artificial Intelligence Act, presents an opportunity for civil society organizations to achieve lasting change by advocating on specific issues, and having strong evidence to back up these positions is crucial. However, research can also have value in and of itself, by documenting and shining a light on a specific issue, so advocacy opportunities are not always a necessity.
  • What accountability opportunities are there at a local, national, or regional level? Thinking about opportunities to effect change after a project can be a useful guide to where to focus research. Some useful starting questions are:
    • Does existing regulation or legislation offer avenues for remedy or new calls for enforcement?
    • What other mechanisms exist for transparency and accountability?
    • Are there oversight or supervisory bodies in place, and what powers do they have?
    • How willing are domestic courts to hear cases on the issue at hand?
    • What regional legal instruments could be used?

Building in Participatory methods during a scoping phase

Whilst Artificial Intelligence and algorithmic systems are abstract and technical in nature, their impact on people’s lives and human rights is not. Involving stakeholders, community members, or those directly affected by the system ensures that the investigation is grounded in local realities and reflects diverse perspectives. For an investigation to truly reflect the experiences and serve the needs of those impacted by the technology in question, incorporate participatory approaches as early as possible in the project’s lifecycle, whilst ensuring a trauma-informed approach.

In the context of algorithmic accountability, participatory research approaches are methods in which the people most affected by the algorithmic system and local organizations help design and carry out the research itself. This can include activities like community-led data collection, collaborative workshops, and mapping exercises – anything that shifts research from being extractive to being collaborative. Due to the technical aspect of algorithmic systems, it is important to take time to build a shared understanding of the algorithmic harm, which creates a space for mutual learning and collaboration, and allows you to be guided by what affected communities would like to see as an outcome.

It is essential to build relationships and bring in community partners at the very beginning of the project, or even when determining whether to go ahead with the project. At this stage, casting the net as wide as possible and speaking to as many communities as possible will help break through any established ideas of who the main bearers of algorithmic harms are and in what ways those harms occur. While a particular issue or approach may be preferable to you from a research perspective, or feel comfortable given the remit or experience of your organization, communities, through the organizations and advocates representing them, should ultimately have a decisive voice in shaping the project and its goals based on their priorities and needs. This can then be reflected in a co-designed research plan and strategy.

Scoping considerations for community organizations and researchers

The scoping phase considerations for community organizations, or researchers working directly with impacted communities, might look different. Some considerations for organizations and researchers working at this level, based on our collaborations, are:

  • What kind of project do you wish to pursue? Long-form investigations are time and resource-intensive, and are not always necessary to seek justice and accountability. It may be that you wish to campaign and advocate against a specific system, but not necessarily conduct a longer investigation. Consider partnering with researchers or journalists who may be able to support your organization’s work with research capacity, knowledge of technical investigations or by garnering media attention to the issues you wish to advocate and campaign on.
  • Are there partner organizations that may be able to support your work? If you decide to pursue an investigation, resource and knowledge sharing from other contexts can be invaluable to guide your approach. Speak to journalists and researchers who have conducted similar projects and draw on global networks who may be able to support your work. International organizations – such as Amnesty International – may be able to support or promote your work to reach a wider audience. Consider who you can bring into your scoping work to help you understand the parameters of the system you are looking at, such as technologists with experience in conducting research on automated systems.
  • Are there international examples that may raise awareness? Whilst the impact of specific technologies will be very much context dependent, states around the world often draw on international examples for new initiatives. Countries looking to implement Digital ID systems often cite India’s Aadhaar system as an example, whilst Denmark’s use of technology in its social protection system has been imitated by other governments. Research on the harms of these systems can be used both as inspiration for your research and to raise awareness about any similar proposed schemes within your locality.
  • Can you directly engage with the communities you work with? If you work in a grassroots or community organization advocating for the rights of the communities you serve, resources such as this (produced by TechTonic Justice) provide guidance to help you understand whether the people you work with are being subjected to an algorithmic system that is impacting their lives.

Knowledge of the system

The first step to investigating a specific automated system is simply gaining knowledge of the system. Whilst the promotion and implementation of algorithm registers has increased in the past decade, many systems are still obfuscated from public knowledge, with little to no information about their design and deployment available to citizens and residents. Government databases are often incomplete, and whilst there are some impressive community-led initiatives to document algorithmic systems, these are challenging to build and maintain.

Some early steps researchers can take:

  1. Desk research and publicly available information: this relies upon methods commonly used in investigative journalism, including desk research and reviews of official government communications, strategy documents and procurement documents that indicate plans for the integration of digital technologies into public service delivery (for further detail on relevant procurement documents, see the Obtaining Access to Information chapter). In parallel, where possible and where security considerations allow, building relationships with public sector agencies can be a fruitful source of information, whether through formal or informal interviews and, where available, leaks or whistle-blower information.
  2. Fact-finding through FOIs: the widespread submission of Freedom of Information requests to relevant public sector agencies can be a useful first step to orient an investigation. Lighthouse Reports’ Suspicion Machine series relied upon this approach, submitting in excess of 100 FOIs in the early stages of the project, and many other algorithmic accountability investigations have done the same. Be liberal with your submissions and see what comes back. Linked is an example template FOI, which can be easily tailored to request high-level information on a system.
  3. Speaking to impacted individuals or communities: the immediate impact of new digital technologies introduced by public sector agencies is felt by those who are subjected to them. Instances of exclusion from, or delays in receiving, social protection schemes can be uncovered by engaging with impacted individuals and communities directly, or through organizations that work with them on the ground. In some cases, individuals may receive official communications from government agencies detailing decisions made about their situation. Ask about these, and if possible try to obtain copies, as they may contain information on whether an automated system was used in the decision-making.

Taken together, these early-stage scoping steps reflect the complexities of investigating algorithmic systems in the public sector. They require interrogating the system on a technical level whilst simultaneously using human rights and/or participatory research methods to centre the lived experience of those impacted and document the risks and violations of their rights. Ideally, any investigation pursues both elements in tandem, allowing researchers to trace the human impact of a system back to its technical design and implementation.


Checklist for assessing the viability of a project

A successful scoping phase attempts to answer a series of questions to determine the need and viability of an investigation. Here is an example of a checklist you may use at the end of the scoping phase to determine whether to proceed with a full investigation or not.  

  • Whether the potential cases raise important human rights concerns that would shed light on the lived experiences of impacted individuals and groups.
  • Whether there is existing – or potential – access to information about the algorithmic system, such as data collected/exploited, documentation, or source code. Consider whether this information can be obtained through Freedom of Information requests, interviews, or other means.
  • Whether internal and external partners can support the research and have the capacity to do so.
  • If you are working for an organization with a global remit, is this work considered additive by impacted communities, and is there potential to work with affected groups, grassroots organizations, civil society, or human rights organizations, in order to:
    • Co-design the research and strategy for change.
    • Co-conduct interviews on the impact of these systems.
  • If you are working directly with impacted communities at the local level, is there value in partnering with globally focussed organizations to draw upon their reach and resources?
  • Whether doing research on each case study will contribute to the advocacy goals of your organization and any partners. 
  • Whether there are upcoming regulatory or advocacy opportunities at the local, national, or regional level that the project could inform or influence.
  • Whether you anticipate having the capacity to conduct follow-up advocacy or accountability work after the project is published and, if not, whether this can be handed over to partner organizations to carry forward where appropriate.
  • Whether for each case study, there is potential to collect data on the impact of the system using information on system design, deployment and data use (for statistical auditing purposes).

Project goals and lifecycle

This chapter discusses what you should consider when setting out your project’s goals, alongside the overarching principles that should be present throughout your project.

Overarching aims: taking a holistic view of a system

Investigating the potential human rights impacts of algorithmic systems includes undertaking both sociotechnical analysis of a system and human rights-focused research.

Human rights research undertaken by Amnesty International includes analysis of relevant international human rights law instruments and standards, relevant reports and studies by the UN, journalistic articles, academic papers and reports from civil society organizations, alongside interviews with impacted communities and other relevant experts (including civil society and government officials).

Technical research of algorithmic systems includes analysis of data and documentation on the technical infrastructure and algorithms deployed, including discussion of statistical approaches. Both of these types of research are discussed in greater depth in the Obtaining Access to Information and Algorithmic Auditing through Empirical Investigations chapters of this toolkit.

Researching algorithmic systems through both technical analysis and human rights research establishes a holistic view of the system. This combined method of research acknowledges that, in some cases, it may not be possible to gain access to the technical system, but this does not mean that it is not possible to identify potential harms. Analysis of civil society reports, UN reports and news coverage, alongside interviews with impacted communities, can surface key human rights harms even if technical research cannot be undertaken.

A holistic approach to this research also requires us to consider human rights harms along the lifecycle of the technical system from inception to use and identify harms that exist along the process. However, establishing a lifecycle of harms may not always be possible, and focussing on specific parts of the lifecycle or supply chain might be just as valuable.

How will this output work to effect change

Discussions around how to effect change should precede the conceptualization of any investigation. As described in the scoping chapter, the decision to undertake a specific project should be driven by the change you would like to see. Basic questions to ask at this stage are:

  • Who are the people and communities that are impacted?
  • What are the harms impacting them?
  • What is the best way to serve and support people and communities to counter these harms?

The aim of a particular output should be decided in collaboration with impacted communities. This builds a shared sense of ownership of the work, more active collaboration, and greater “uptake” during advocacy and strategic communications based on the output. It can also help your organization advance its positions on issues impacting communities, take bolder positions, and ensure that these reflect the lived experiences of people affected by algorithmic harms within a wider context of systemic marginalization. Building relationships and trust, and continuing to collaborate with communities and the organizations representing them, are key.

A theory of change lays out the shifts that we want to see as a result of the research and how these changes can be achieved. It incorporates goals and objectives, channels of influence, key messages, as well as tactics and activities for change, which may include public campaigns, media work, advocacy towards key governmental and corporate actors, and so on. A theory of change that incorporates clear mechanisms for monitoring and evaluating success helps demonstrate how the research can effect change. This is discussed in more detail in the Affecting change after the investigation through advocacy methods and strategic communications chapter of this toolkit.

Who to include?

Undertaking an effective investigation into the potential human rights harms of algorithmic systems – one which centres the experiences of impacted communities and generates meaningful impact – requires a diverse range of expertise. Below is a list of potential stakeholders you can consider speaking to, formally or informally, whilst shaping your project goals:

  • Impacted individuals and communities (where possible)
  • Local and regional journalists and researchers
  • Local CSOs focusing on digital rights
  • Organizations and advocates who work with communities affected by algorithmic systems (for example, racial justice, disability rights, migrants’ rights, LGBTQ rights, women’s and children’s rights)
  • Relevant politicians
  • Legal case workers
  • Academics and subject matter experts
  • Service delivery staff and practitioners (for example, social workers)
  • Technologists with an understanding of how automated decision-making systems (ADMs) work
  • Government officials at the national and municipal level
  • Relevant ombudsman and oversight officials (for example, data protection authorities)

Project Lifecycle

[Placeholder for graphic showing typical project lifecycle and timeline]

Project ethics and principles

This chapter will outline framing and ethical considerations that can be useful to return to throughout a project’s lifespan.

Ensuring meaningful community participation throughout a project

For international and larger organizations that may be better resourced, it is crucial to embed participatory approaches throughout a project’s duration by building and sustaining relationships with community organizations over a long period of time.

After a scoping phase, researchers must ensure there is regular consultation and, where needed, make space to reshape the project in line with the needs of impacted communities.

As well as ensuring that the work reflects the needs of impacted communities, using participatory methodologies can support the communication of the technical research. Being able to highlight personal stories and experiences brings a human face to issues that can be technical and complex.

When published, research outputs should be made available to the impacted communities. This requires ensuring that the output is translated into the language widely spoken by the impacted community and, where possible, making outputs accessible to a diverse audience. This could mean using “Easy to Read” versions of reports, producing explainer videos or presenting research findings back to impacted communities. The more the project is led by affected communities, the easier it will be to communicate and discuss findings with them.

Work with the impacted communities should not stop with the publication of the research, but should be embedded into future advocacy and campaigning goals. This enables a meaningful, long-term collaboration which delivers tangible change for impacted communities.

Taking an Anti-Discrimination and Intersectional lens in your research

Participatory methodologies should embed racial justice and anti-discrimination within the strategy. This includes ensuring that the voices of marginalized groups are central to the work and that organizations representing these groups are included alongside grassroots groups. Whilst conducting your research, aim to interview individuals and groups with diverse identities based on race, nationality, ethnicity, religion, gender, geography, disability, age and class.

Each research project is likely to highlight how algorithmic harms disproportionately impact certain social groups and it is important to ensure that the experiences of these groups are not only central to the report but also to the research methodology by adopting participatory and trauma-informed approaches. In order to ensure meaningful engagement in the research, interpreters should be made available in the relevant languages, where possible.

Intersectionality is a framework for examining how different forms of discrimination can overlap and interact to create a unique and compounding experience of oppression. Taking an intersectional approach to human rights helps to break down barriers between different categories of oppression or marginalization, and to show how different categories of identity (including gender, sexual orientation, race, class, caste, disability, immigration status, religion, ethnicity, indigenous identity, and more) are inherently interconnected. This, in turn, allows for an understanding of how to more effectively and holistically address the harms a person or community experiences as a result of their unique context.

Intersectionality is a critical framework for analysing human rights risks and violations posed by technology, as it sheds light on the ways in which different people are excluded from access to vital services or face intersecting harms from algorithmic systems. For example, an algorithmic system may be biased along gender, racial and disability lines, and any analysis must capture how harms are compounded against specific groups (for example, women of colour with disabilities). Ultimately, an intersectional approach to technology and human rights is essential to building strategies to work toward reparations and redress for communities impacted by the human rights violations made possible by technology.


Establishing shared commitments in your project/research team when collaborating to undertake change work

The success of your change-making work will depend greatly on the founding principles you set for it. Given that the ultimate aim of any change-making in the domain of digital technologies is to serve the people and communities most at risk of marginalization and harm by digital technologies, including through a lack of access to digital tools where these could support the realization of their rights, it is important that your work towards change is based on the following principles:

  • It is locally led: the needs and priorities of impacted communities guide your work.
  • It prioritizes communities that are most at risk of marginalization and harm by digital technologies.
  • It is intersectional: your work looks beyond the symptoms and towards structural and systemic issues leading to intersectional harm. It acknowledges and addresses the different ways technological harm manifests towards people and communities experiencing marginalization based on multiple and intersecting characteristics, backgrounds and experiences.
  • It prioritizes allyship, solidarity and collaborative work with community and grassroots movements. You can do this by supporting and uplifting work done by less-resourced organizations and making yourself visible when and in ways that can best serve people and communities impacted by technologies. You should also be aware of the potential limitations of your knowledge and trust the experiential expertise of impacted communities when developing positions on the development and use of technologies.
  • It is accessible and understandable to impacted people and communities. Ensuring language diversity through translation and interpretation services and providing accessibility services such as live captioning during meetings or having easy-to-read formatted outputs are a few ways you can ensure this.

Security Considerations

While this toolkit provides many tactics and techniques for uncovering and challenging systems, it is critical to prioritize the safety and well-being of those whose rights we seek to uphold, our collaborators, and those conducting the research over the revelation of information. This is both to minimise the risk of causing or exacerbating harm and to ensure that rigorous assessments of security considerations are carried out.

These security risks can be heightened at any step of the project lifecycle. Not every step will be appropriate in every security context, and any project plan should carefully consider and mitigate security-related concerns. For example, in some cases, alerting authorities to a possible investigation, even by asking for information about algorithmic systems, can be risky.

It is critical to invest time in planning and preparation to carefully assess the risks and potential benefits for both impacted communities and the research team. The safest way to develop risk mitigation strategies is in partnership with stakeholders, relying on their knowledge and expertise. It is crucial to identify and put in place adequate referral mechanisms to support victims and survivors as needed, and to review plans regularly. Do not proceed with the project if the risk of harm is too high and cannot be mitigated.

Obtaining Access to Information

Successful investigations into algorithmic systems require marrying information and evidence from a range of sources. Testimonies and interviews with impacted people must sit at the heart of any investigation; however, sufficient access to information on the algorithmic system in question is also paramount. This often proves to be one of the largest obstacles for researchers, with government agencies and private companies around the world often reluctant to grant access to external researchers and open themselves up to scrutiny.

This chapter focuses on the variety of steps researchers can take, after identifying a particular algorithmic system of interest, to try to collect information about it. In practice, many of the activities outlined below can be conducted both during the scoping phase and as part of the main investigation.

Direct engagement with government officials

Whilst public sector agencies are often opaque about their deployment of algorithmic systems, direct engagement with them should be the first port of call, where security considerations and the governance context allow. Although not all governments will be willing or positioned to engage and disclose information, it is important for researchers to give them the opportunity to present their narrative and rationale for the algorithmic system, as this in itself is critical for scrutinising the sociotechnical underpinnings of the system and understanding the true policy drivers behind its use. It also provides researchers with the opportunity to interrogate the system and put direct questions to the officials who have managed its development and deployment.

As part of Amnesty International’s research into Udbetaling Danmark’s deployment of algorithmic systems, researchers conducted a series of meetings and interviews to discuss the use of algorithmic systems within social protection schemes in Denmark, which were recorded and used as evidence within the final research output. The first focused on the history of social protection administration in Denmark and the legal and governance structures that underpin it; the second focused on the technicalities of the digital transformation of public administration and the algorithmic systems in place.

Amnesty International also proposed a collaborative audit of UDK’s algorithms, which was subsequently denied; however, other investigations, such as those by Lighthouse Reports, have shown that successful collaborations can be conducted by external experts and public officials to assess the impact of proposed algorithmic systems and increase transparency.

Freedom of Information Requests (and their equivalents)

Freedom of Information laws are a critical mechanism through which researchers can overcome the lack of publicly available information on an algorithmic system. In contrast to the example FOI presented in the scoping chapter, specific FOIs focusing on an identified algorithmic system can be used to ask for documentation, communications and other resources on the specifics of the system (for example, the design of an algorithm, the data exploited, and oversight and governance).

This process can often be arduous. Whilst many access to information laws require governments to respond within a specific time frame (often under a month), many successful investigations have spent months, and sometimes years, going back and forth with public sector agencies, which rely upon a variety of exemptions to deny requests for further information. It can be useful to check public registries of submitted Freedom of Information requests and their responses for examples of where others have asked questions about an algorithmic system of interest to you. In the UK, for example, these are housed on the website WhatDoTheyKnow.

A recent report from the UK analysed over 50 FOI requests sent to the Department for Work and Pensions (DWP) requesting information on its use of advanced data analytics. Whilst many requests were denied, over time this has allowed external researchers to build a picture of several algorithmic systems deployed by the DWP, including information on their purpose, development, and oversight and governance procedures.

FOIs which are focused on a specific algorithmic system need to be tailored to the particular use case in question. In general, they will attempt to gain insight into the three pillars for technically assessing a system: documentation, code and data. Linked is a template Freedom of Information request, which focuses on requests for information, documentation, and correspondence on:

  • Technical documentation, including system architecture or algorithmic design
  • Information on data collected/exploited
  • Example model inputs and outputs
  • Data Protection Impact Assessments (DPIAs)
  • Any equalities or human rights impact assessments conducted
  • Any tests or evaluations of the system to identify risks and determine appropriate mitigation measures for biases
  • Any documentation on appropriate data governance controls

Example model inputs and outputs, together with technical documentation (system architecture or algorithmic design), are essential and act as the first port of call for any technical research. These will contain information on the model’s aims and objectives, alongside the variables/characteristics used. They may be sufficient in themselves to offer a technical view of the system in question.

Do note that the template is long, and Freedom of Information requests can be refused on the basis that it would take too long to locate and compile the requested information. We recommend submitting a shorter request, or splitting the template across separate submissions.

Subject Access Requests (and their equivalents)

In some jurisdictions, data protection laws grant people the right to access information an organization holds about them and how it is processed. These requests are known as Subject Access Requests (SARs) and they help individuals understand how and why an organization is using their data, and check whether it is doing so lawfully. Under these laws, organizations are required to provide:

  • Confirmation that they process an individual’s personal data
  • Copies of the personal data they hold
  • Explanations of:
    • why they are processing it
    • the categories of data involved
    • who they share it with
    • how long they keep it
    • the individual’s rights to rectification and erasure of their personal data

This can be a useful research tool for looking at algorithmic systems. When working with people you suspect are subjected to and impacted by an algorithmic system, submitting a SAR can be another source of information on a system’s inputs and how it processes data. SARs can be submitted by an individual or by an organization on behalf of an individual. If you’re considering working with an individual to submit a SAR, it’s important to ensure the individual is aware of what a SAR is, what the process of submitting one involves, and that they are on board with everything this entails (including sharing a copy of the results with the wider research team).   

Expert interviews

Interviews with experts on the domain area and/or on digital and technology issues, especially those with knowledge of the geographic context, are essential for any investigation and building relationships with them is crucial.

They are able to provide valuable background information and leads that are not always available in public records or online sources. Identify and interview academics, analysts, practitioners and front-line workers who have direct experience or deep knowledge of the domain (for example, law enforcement or social protection), or the functioning of the local public service delivery and administration.

To do this, a good place to start is to look for relevant media and academic articles on either the algorithmic system or the domain area in question, and reach out to the authors and any relevant people quoted within them. Equally, reaching out to digital rights or other relevant civil society organizations can be useful, as they may have non-public knowledge of the system. Always be clear about why you’re reaching out, the purpose of your project, and how it might benefit from their input. After formally or informally interviewing relevant experts, always ask whether there are any other contacts they would recommend you speak to.

Alternative data sources

Assessing the impact of an algorithmic system does not necessarily require public sector information disclosures. It is also important to consider alternative data sources that can be used to do this, particularly within the realm of bias and discrimination analysis (further discussion provided in the Algorithmic Auditing through Empirical Investigations chapter).

After identifying an algorithmic system, consider the publicly available data that may be useful for an investigation. Many states keep public records available at the national and regional level, including survey data on public service provision. Equally, public sector agencies regularly publish quarterly or yearly reports, which can be valuable sources of data and information on changes to service delivery and provision.

A useful starting point for researchers is to map out the “dream scenario” data and evidence that the investigation requires. From here, researchers can then attempt to build this themselves, triangulating publicly available information with requests through Freedom of Information laws.

Where algorithmic systems have been built by private companies and procured by public sector agencies, FOI requests are often met with exemptions on the grounds that the information is proprietary. In this scenario, researchers can attempt to build datasets which focus on the outcomes of the algorithmic system, whilst refraining from asking for information on its inner workings or functionality.

In Lighthouse Reports’ investigation into Sweden’s social insurance agency, researchers were able to obtain a dataset to this effect, the details of which can be found in the Algorithmic Auditing through Empirical Investigations chapter.
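To make this concrete, below is a minimal sketch, in Python, of how an outcomes-focused dataset of this kind might be explored once assembled from FOI responses or public statistics. All column names and figures are hypothetical and purely illustrative; they are not drawn from the Lighthouse Reports investigation or any other real dataset.

    # Minimal sketch: exploring a hypothetical outcomes dataset assembled from
    # FOI responses or public statistics. Column names and figures are
    # illustrative only, not drawn from any real investigation.
    import pandas as pd

    # Each row: an administrative area, how many benefit recipients live there,
    # and how many were flagged for a fraud investigation by the system.
    records = pd.DataFrame({
        "municipality": ["A", "B", "C", "D"],
        "recipients": [12000, 8000, 15000, 5000],
        "flagged": [240, 400, 150, 300],
        "share_minority": [0.05, 0.35, 0.04, 0.40],  # e.g. from public census data
    })

    # Outcome-focused metric: what fraction of recipients in each area is flagged?
    records["flag_rate"] = records["flagged"] / records["recipients"]

    # A simple first look: is the flag rate associated with the demographic
    # composition of the area? Correlation is not proof of discrimination,
    # but it tells you where to dig deeper.
    print(records[["municipality", "flag_rate", "share_minority"]])
    print("Correlation:", round(records["flag_rate"].corr(records["share_minority"]), 2))

Even a small table like this, built entirely from outcome and population data, can support disparity analysis without any access to the system’s inner workings.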

Procurement research and analysis of the supply chain 

Algorithmic systems rely on complex supply chains, each comprising a network of actors responsible for the various aspects of the system’s training and development. These different actors are often tied together by data flows, and together they produce the technology’s functionality. For instance, a computing hardware manufacturer might supply servers to a cloud provider to build data centres. The cloud provider can then lease server space to an AI company that wants to develop and deploy an AI product. The cloud provider may also offer Application Programming Interfaces (APIs) that let the AI company build parts of its product using pre-built code.

The private company may provide its product – for example, an advanced fraud detection system – to law enforcement agencies, or even to insurance and/or health providers, under the auspices of bringing greater efficiency to their work. If the AI deployment were to lead to continued discrimination against certain groups, this would have implications for the entire supply chain. Under the UN Guiding Principles on Business and Human Rights, companies must conduct adequate human rights due diligence to identify human rights harms that might appear at any stage of the supply chain or product lifecycle, and provide mitigation measures and remediation where these are identified and applicable.

Mapping the full supply chain can be challenging for technologies deployed by the state that are developed, designed, or directly procured from private companies, and it is not necessary for all algorithmic accountability work. However, investigating the supply chain can be useful for pinpointing precisely where responsibility for human rights violations lies, and for finding new avenues to create change, such as applying external pressure to the private companies responsible.

Private procurement of tech systems limits the powers of Freedom of Information requests; however, it presents other opportunities to uncover information on a technological system.

  • Tender notices and Requests for Proposals: states are often required to publish competitive tenders, or bids for government procurement projects, on their websites. Access and analyse proposal and contracting documents to map which companies the government is contracting with to procure digital technologies.
  • Published strategies: government agencies will regularly publish public strategies which detail digital transformation plans. These can often be very long documents; however, they are invaluable for mapping the broader strategy and any key players involved.
  • Loan agreements or Memoranda of Understanding with international actors: it may be the case that the deployment of new technological systems is part of a longer-term development strategy devised with international organizations (such as the World Bank). Analysing any loan agreements or MoUs that may be in place with international actors can offer valuable insights and, at a minimum, provide important contextual information.   
  • Public information and requesting demonstrations: private companies market their digital technologies publicly. Analyse their advertising materials to gain a better understanding of their product offering, and delve into their regular reporting to better understand their business operations. Where you can, try to request a demo of their products.
  • Company searches and job advertisements: utilize mechanisms to look up information on private companies. Company registry searches allow you to look up official information on a company held in business registry databases. Consider collecting job advertisements published by the company to better understand its organizational structure.
  • Research letters: although private companies are exempt from Freedom of Information laws, you can write letters requesting responses from them about their business activities, provision of services and human rights due diligence. When naming private companies in any publication, always give them the right to respond to any statements and evidence about their involvement in human rights risks or violations.

Human rights research

Utilizing the international human rights legal framework enables harms from digital systems to be situated within clear human rights language and opens up the possibility of challenging states on the basis of binding legal provisions to which they are party. When states deploy AI systems in the provision of services, they put a wide range of human rights at risk.

This chapter sets out a non-exhaustive list of human rights research methods that can be helpful when investigating algorithmic systems. It then discusses the common human rights risks and violations that algorithmic systems cause, alongside case studies.

Human Rights Research Methods and Data Sources

Conducting human rights research on an algorithmic system requires an in-depth, multi-faceted and context-specific understanding of the complex issues surrounding the impact of public sector algorithms on people’s rights, including an understanding of the politics of the system. Some of the available methods and potential primary and secondary data sources include:

  • Testimonies and focus groups with impacted communities: testimonial evidence sits at the heart of any human rights research. Conducting interviews or focus groups with impacted individuals and communities forms the backbone of any evidence of algorithmic harms. This can be combined with participatory methods, or also be implemented in such a way that enables communities to do their own peer research.     
  • Legal analysis: analysing relevant international human rights law instruments and standards, relevant reports and studies by the UN, domestic interpretations of international standards, and analysing the local laws that govern the public sector agencies deploying the algorithmic system (such as law enforcement, social protection). Other data sources may include court transcripts and transcripts of decisions by equality bodies and ombudspersons.   
  • Discourse analysis: analysing the sociopolitical environment in which the algorithmic system is deployed. Consider looking at media reporting on the issue, media interviews conducted by government officials, government policy documents, and official statements. Interview those working on social justice issues locally to understand the context.
  • Survey data: consider running short surveys with people subject to the technology or system. In Amnesty International’s research into the UK government’s use of technology in social protection, surveys were used to understand welfare claimants’ experiences.

Sources of Human Rights Law

While human rights may have several bases in international law, most are reflected in international and regional treaties, which are binding on states that are party to them.

At the international level, these include the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR), both of which are discussed further below.

Information on whether a particular state is party to a specific treaty can be found online.

What these rights mean in practice evolves over time, and reference should be made to forms of “soft law” which may aid in this interpretation. Sources of soft law include resolutions and declarations of United Nations organs, and reports from experts, including the General Comments and other works of Treaty Bodies charged with interpretation of specific treaties, and the reports of UN thematic mandate holders (“Special Procedures”).

Many states are also bound by regional human rights treaties. These include the American Convention on Human Rights (in the Inter-American System), the African Charter on Human and Peoples’ Rights (in the African Human Rights System), as well as the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights (in the European Union and the Council of Europe systems, respectively). Regional courts and treaty bodies, such as the Inter-American Commission and Court of Human Rights, the African Court on Human and Peoples’ Rights, the European Court of Justice and the European Court of Human Rights, consider cases and issue judgments interpreting the implementation of these standards, and regional systems also often have their own thematic mandate holders. Sub-regional courts, such as the East African Court of Justice or the ECOWAS Court of Justice, may also issue judgments interpreting regional treaties.

Beyond human rights treaties, international or regional data protection treaties and regulations may also contain relevant safeguards. These include the Convention for the protection of individuals with regard to the processing of personal data (“Convention 108+”, which is open to signatories outside the Council of Europe), the African Convention on Cyber-Security and Personal Data Protection, and the General Data Protection Regulation (GDPR) of the EU.

In addition, human rights are – or should be – protected under domestic law, including in the decisions of domestic courts.

Right to Privacy

The Right to Privacy is guaranteed under the International Covenant on Civil and Political Rights (ICCPR), a core and binding human rights treaty that has been ratified by 174 of the 193 UN member states, as well as under regional treaties and the domestic law of many states. To comply with human rights law and standards, restrictions on the right to privacy must meet the principle of legality, serve a legitimate aim, and be necessary and proportionate to that aim.

Strategies used to detect fraud within digital welfare states can undermine the right to privacy. Digital welfare states often require the merging of multiple government databases in order to detect possible fraud within the welfare system. This often amounts to mass-scale extraction and processing of personal data, which undermines the right to privacy. Some welfare systems utilize both the processing of personal data alongside “analogue” forms of surveillance, including asking neighbours and friends to report on people they suspect of welfare benefit fraud. This further exacerbates the violation of the right to privacy. This combined analogue and digital surveillance demonstrates the importance of taking a holistic approach to human rights research.

Case Study: Denmark

Amnesty International’s research on Denmark’s social benefits system, administered by the public authority Udbetaling Danmark (UDK, or Pay Out Denmark) and the company Arbejdsmarkedets Tillægspension (ATP), demonstrates how pervasive surveillance in the welfare system undermines the right to privacy.

The research found that the Danish government has implemented legislation that allows the mass-scale extraction and processing of the personal data of social benefits recipients for fraud detection purposes. This includes allowing the merging of government databases and the use of fraud control algorithms on this data, the unregulated use of social media, and the reported use of geolocation data for fraud investigations. This data is collected from residents in receipt of benefits, and their household members, without their consent. The collection and merging of large amounts of personal data contained in government databases effectively forces social benefits recipients to give up their right to privacy and data protection. The collection and processing of large amounts of data, including sensitive data revealing characteristics such as race and ethnicity, health, disability or sexual orientation, and the use of social media, are highly invasive and disproportionate methods of detecting fraud. Moreover, Amnesty International’s research showed that this data was used in only 30% of fraud investigations, which raises concerns regarding the necessity of processing it.

Benefits applicants and recipients are also subjected to “traditional” or “analogue” forms of surveillance and monitoring for the purposes of fraud detection. Such methods include the persistent reassessment of eligibility by municipalities, fraud control cases or reports from other public authorities, including tax authorities and the police, and anonymous reports from members of the public. These analogue forms of monitoring and surveillance, when coupled with overly broad methods of digital scrutiny, create a system of pernicious surveillance which is at odds with the right to privacy.

Right to Equality and Non-Discrimination

The Right to Equality and the Right to Non-Discrimination are both guaranteed under the International Covenant on Civil and Political Rights (ICCPR), under most other international and regional treaties, and under domestic law in most states. The UN Human Rights Committee (HRC) defines discrimination as “any distinction, exclusion, restriction or preference, which is based on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status, and which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise by all persons, on an equal footing, of all rights and freedoms.” The ICCPR states that all persons are “equal before the law” and requires that the law “prohibit any discrimination and guarantee to all persons equal and effective protection against discrimination on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.”

Digitization and the introduction of automation and algorithmic decision-making can have a disproportionate negative impact on certain communities, resulting in a violation of the rights to equality and non-discrimination. As the UN Special Rapporteur on racism has noted, AI systems can lead to discrimination when they are used to classify, differentiate, rank and categorize because they “reproduce bias embedded in large-scale data sets capable of mimicking and reproducing implicit biases of humans, even in the absence of explicit algorithmic rules that stereotype”. The Special Rapporteur stated that “digital technologies can be combined intentionally and unintentionally to produce racially discriminatory structures that holistically or systematically undermine enjoyment of human rights for certain groups, on account of their race, ethnicity or national origin, in combination with other characteristics [and] digital technologies [are] capable of creating and sustaining racial and ethnic exclusion in systemic or structural terms”. The Special Rapporteur called on states to end “not only explicit racism and intolerance in the use and design of emerging digital technologies, but also, and just as seriously, indirect and structural forms of racial discrimination that result from the design and use of such technologies”.

Overall, the use of AI and automated decision-making systems within the distribution of social security can entrench discriminatory practices towards already marginalized groups.

In the context of AI and algorithmic decision-making, it’s particularly important to note the distinction between direct and indirect discrimination.

  • Direct discrimination is when an explicit distinction is made between groups of people that results in individuals from some groups being less able than others to exercise their rights. For example, a law that requires women, and not men, to provide proof of a certain level of education as a prerequisite for voting would constitute direct discrimination.
  • Indirect discrimination is when a law, policy, or treatment is presented in neutral terms (i.e. no explicit distinctions made) but disproportionately disadvantages a specific group or groups. For example, a law that requires everyone to provide proof of a certain level of education as a prerequisite for voting has an indirectly discriminatory effect on any group that is less likely to have proof of education to that level (such as disadvantaged ethnic or other social groups, women, or others, as applicable).

Some algorithmic systems have included protected characteristics as inputs, causing the system to directly discriminate between groups of people. Others have been found to discriminate indirectly, often through the inclusion of proxy inputs.

A proxy is an input or variable, such as a personal attribute, that is used by an AI system to make distinctions between individuals and/or social groups. A proxy may appear to be an innocuous piece of data to include in an algorithm. Yet, where it directly or indirectly correlates with a protected characteristic such as gender, age, race or ethnicity, a proxy leads to biased decisions being generated by the AI system. For example, when an input such as postcode is included in an algorithm, it is often correlated with, and becomes a proxy for, socioeconomic status and race. It may therefore indirectly discriminate against certain racial or ethnic groups due to historical residential segregation.
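This dynamic can be illustrated with a small simulation. The sketch below, in Python, assumes a hypothetical population in which postcode correlates with ethnicity because of residential segregation; the zone names, proportions and flagging rule are invented purely for illustration.

    # Minimal sketch: how a "neutral" proxy can discriminate indirectly.
    # All data is simulated; zones, proportions and the rule are hypothetical.
    import random

    random.seed(0)

    def simulate_person():
        # In this toy population, residential segregation means postcode
        # correlates with ethnicity, even though the scoring rule below
        # never looks at ethnicity directly.
        if random.random() < 0.3:
            return {"ethnicity": "minority",
                    "postcode": "ZONE_1" if random.random() < 0.8 else "ZONE_2"}
        return {"ethnicity": "majority",
                "postcode": "ZONE_1" if random.random() < 0.2 else "ZONE_2"}

    def risk_flag(person):
        # A seemingly neutral rule: flag applicants from ZONE_1 for extra checks.
        return person["postcode"] == "ZONE_1"

    population = [simulate_person() for _ in range(10_000)]

    for group in ("minority", "majority"):
        members = [p for p in population if p["ethnicity"] == group]
        rate = sum(risk_flag(p) for p in members) / len(members)
        print(f"{group}: flagged {rate:.0%} of the time")

Even though the rule never references ethnicity, the minority group ends up flagged roughly four times as often in this simulation, which is the essence of indirect discrimination via a proxy.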


Case Study: Serbia

The Social Card Law entered into force in March 2022 and introduced automation into the process of determining people’s eligibility for various social assistance programmes. The backbone of the Social Card Law is the Social Card registry, a comprehensive, centralized information system which uses automation to consolidate the personal data of applicants for, and recipients of, social assistance from a range of official government databases.

The introduction of the Social Card Law and the Social Card registry cannot be isolated from the social and historical contexts into which they were introduced. Whilst laws in Serbia, including the Social Card Law, do guarantee formal equality for all individuals, the practical implementation of the Social Card Law and the Social Card registry does not provide substantive or de facto equality.

Gaps and imbalances in data processed by automated or semi-automated systems can lead to discrimination. A social worker told Amnesty International that before the Social Card registry was introduced, and especially when working with marginalized communities such as Roma, social workers knew that some data was inaccurate or out of date. For example, multiple cars registered to someone living in extreme poverty would not be considered important assets for social assistance eligibility, but rather, would be understood as vehicles sold for scrap metal or that otherwise no longer existed.

Serbia’s Ministry of Labour insisted that laws governing social security, including the Social Card Law, did not treat Roma or any other marginalized groups differently. The Ministry also claims that it has the legitimate right to use “true and accurate data which are necessary for the enjoyment of social security rights”. The Ministry did not recognize the fact that the seemingly innocuous and objective datasets being used as indicators of socio-economic status often ignored the specific context of a community’s marginalization, such as their living conditions, barriers to employment, and their particular needs.

Due to Serbia’s historical and structural context, many individuals from marginalized backgrounds have persistently low literacy and digital literacy levels. They therefore face challenges when interacting with administrative departments to keep their paperwork up to date or to appeal their removal from the social assistance system. In this way, the Social Card registry represents yet another barrier to accessing social assistance, which can amount to indirect discrimination.

Amnesty International’s research found that the Social Card registry is not designed to factor in the challenges and barriers faced by those communities most critically dependent on social assistance, including Roma, people with disabilities and women. Women, who are represented across all groups, are more likely to receive social protection and may also face additional intersectional barriers to accessing their rights.


Beyond the rights to equality, privacy and non-discrimination

The use of automated tools within welfare states can have clear impacts on the right to privacy and the right to non-discrimination. However, moving our analysis beyond these rights can provide a deeper understanding of how these systems impact communities.

Right to Social Security and Adequate Standard of Living

The International Covenant on Economic, Social and Cultural Rights (ICESCR) requires states to respect, protect and fulfil a broader set of human rights which are focused on the need for states to provide for the welfare and well-being of their populations. These rights are also protected under numerous regional treaties and the domestic law of many states.

Key ICESCR provisions relevant to automated welfare systems are the Right to Social Security and the Right to an Adequate Standard of Living. The Right to an Adequate Standard of Living incorporates “adequate food, clothing and housing”; failure to provide social security payments therefore puts access to these basic needs at risk. Automated welfare systems can also reduce access to health or disability-related benefits, which has a direct impact on the right to an adequate standard of living and the right to health.

Right to freedom of peaceful assembly and of association

For years, civil society has warned that states are enjoying a “golden age of surveillance,” as more and more of our online and offline lives become accessible to a growing array of new tools designed to track us. Amnesty International has documented numerous types of technology whose use impacts human rights, notably the rights to freedom of peaceful assembly and of association, which are protected under Articles 21 and 22 of the ICCPR, under the CRC, CRPD and numerous regional treaties, and under the domestic law of many states.

The use of facial recognition technology (FRT), which is fundamentally incompatible with human rights, is becoming worryingly commonplace. Amnesty International has documented abuses linked to FRT in the Occupied Palestinian Territories, Hyderabad, and New York City. In France, authorities proposed a system of AI-powered video surveillance in the run-up to the Paris Olympics. In the Netherlands, the under-regulated use of cameras at peaceful protests and the accompanying lack of transparency have created chilling effects around the exercise of protest rights. In Hungary, legal changes allowing the use of FRT to target, among other things, Pride marches, are a grave concern.

The use and abuse of these tools have particularly harmful impacts on marginalized communities. Migrants, in particular, are too often excluded from regulatory protections and treated as “testing grounds” for controversial technologies, including biometric identification technologies. The precarious status of migrants can also lead to them being targeted for exercising their protected protest rights, including through the use of surveillance and social media monitoring software, as Amnesty International has highlighted in the United States.

Algorithmic Auditing through Empirical Investigations

This chapter will focus on “empirical investigations” of algorithmic systems, a term we use to refer to the testing of algorithmic systems through experimentation and statistical analysis. There are different methods and approaches to doing this, which are often folded under the umbrella term of “Algorithmic Auditing”.

Algorithmic auditing has roots in traditional social science audit studies, which are typically used to examine racial and gender discrimination in real-world scenarios (such as job applications or rental property applications). It has become a widely used technical research method for diagnosing problematic behaviour within algorithmic systems. It is an umbrella term that captures a range of approaches to assessing algorithmic systems, from checking governance documentation, to testing an algorithm’s outputs and impacts, to inspecting its inner workings. The basic premise of any audit is to monitor the outcomes of an algorithm and map these back to its inputs, in order to build a picture of how the algorithm may be functioning.

Algorithmic audits can allow for a better understanding of:

  • Systematic bias or discrimination against certain groups and how existing structural inequities can be automated, particularly for those with intersecting identities. 
  • The scale at which a system is operating.
  • The underlying drivers in the algorithmic design, deployment, or data use/collection that are causing harm and the (often political or social) assumptions that are operationalised in deployment.
  • How the algorithm is internally functioning.

Public sector algorithms will differ in complexity, with some utilizing machine learning approaches to make forward-looking predictions and classifications about individuals, whilst others will be automated systems that make decisions based on a set of pre-determined rules. As such, an approach to auditing has to mirror this range of complexity, with some systems requiring a highly technical exercise (that may attempt to reverse-engineer an algorithm based on known information and inputs), and others requiring a less complex approach (for example, assessing publicly available information for evidence of differential impacts across groups). 

Most approaches to algorithmic auditing require a level of technical expertise, including a strong foundation in machine learning and statistics, alongside some familiarity with programming languages such as Python or R. We understand that not all organizations working on algorithmic accountability issues will have this in-house; however, there are technologists and other organizations (such as Amnesty International) that you can reach out to if you wish to find partners to assist with this.

Below, we first provide a brief overview of statistical bias and fairness testing. We then present a variety of approaches that have been taken in different scenarios, dependent on the level of access and information researchers managed to obtain on an algorithmic system. These are roughly split into two types of audits:

  1. “White box”: where you have obtained sufficient access to the inner workings of an algorithm (its documentation, code and data) that you are able either to test it directly or to reverse-engineer it from the information you have.
  2. “Black box”: where you have not been granted access to the inner workings, inputs or design of an algorithm, but you are able to conduct tests by collecting data on its impact.

Discussion on statistical bias and fairness

In 2016, US investigative newsroom ProPublica published Machine Bias, an investigation into COMPAS, an algorithm deployed in the US court system. The system was built to make risk assessments about the likelihood of an individual committing a future crime, which would then be used to make bail decisions. The investigation found that COMPAS was biased because it was more likely to wrongly label Black defendants as future criminals than white defendants. However, the company that developed the COMPAS system rejected ProPublica’s findings and its metric for measuring bias.  

In the years since, a multitude of statistical definitions of fairness have been proposed to determine whether a machine learning algorithm is biased against certain groups. Most attempt to compare how accurate the decisions made by an algorithmic system are across different demographic groups. However, measuring “fairness” is challenging: there is no single objective statistical measure, and different definitions of fairness may have different implications depending on how, against whom and in what context the algorithmic system is deployed, as well as on how discrimination is defined under different legal systems. In the case of facial recognition technology, for example, in addition to the important work showing that FRT is less accurate for people of colour, any measure of fairness must also consider whether different demographic groups are equally targeted by its use.

When conducting statistical fairness or bias testing, depending on the data you have access to, it is worthwhile to run tests against several different definitions of fairness. This can be done in partnership with academic, civil society or journalistic organizations if you do not have the in-house expertise. The study of fairness is complex, but as an entry point we provide below a top-line overview of what statistical fairness testing is, to help you envision where it sits in a larger algorithmic investigation. Bear in mind that this is only one tool in the accountability toolkit: on their own, these definitions of fairness can be informative but limited, and even when a specific definition of fairness is met, discrimination may still be present. Some common definitions are listed below alongside examples; however, a full discussion can be found here and here.

1. Demographic Parity Test  

Demographic parity is satisfied when there is an equal proportion of positive predictions between two groups. If a demographic parity test is satisfied, this would suggest that the system is operating “fairly” based on this one definition. For example, in the context of a risk assessment algorithm as described in the COMPAS investigation, the system would meet demographic parity if the proportion of individuals denied bail was equal across demographic groups.
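
As a purely illustrative sketch, the short Python snippet below shows how a demographic parity check could be computed from a table of decisions; the data, the column names (`group`, `denied_bail`) and the two-group setup are hypothetical and not drawn from any real system.

```python
import pandas as pd

# Hypothetical decision data: one row per person, recording their
# demographic group and whether the system produced an adverse decision.
decisions = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    "denied_bail": [1,   0,   1,   0,   1,   1,   1,   0],
})

# Demographic parity compares the rate of adverse (positive) predictions
# across groups: here 50% for group A versus 75% for group B.
positive_rate = decisions.groupby("group")["denied_bail"].mean()
print(positive_rate)
print(f"Demographic parity gap: {positive_rate.max() - positive_rate.min():.2f}")
```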

2. Predictive Parity Test

Predictive parity is satisfied when two groups have the same precision. In other words, as a share of positive predictions, they have an equal proportion of correct predictions (true positives). In the context of COMPAS, among those that the system predicts to be “high risk”, predictive parity would be satisfied if the proportion who reoffend is the same across different demographic groups.
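
Again purely as an illustration, and assuming you also hold data on what later happened to each person, predictive parity can be checked by comparing precision across groups; the columns `predicted_high_risk` and `reoffended` below are hypothetical.

```python
import pandas as pd

# Hypothetical data combining predictions with observed outcomes.
df = pd.DataFrame({
    "group":               ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_high_risk": [1,   1,   0,   0,   1,   1,   1,   0],
    "reoffended":          [1,   0,   0,   1,   1,   1,   0,   0],
})

# Predictive parity compares precision: among people predicted "high risk",
# the share who actually reoffended, calculated separately for each group.
flagged = df[df["predicted_high_risk"] == 1]
precision_by_group = flagged.groupby("group")["reoffended"].mean()
print(precision_by_group)
```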

3. False positive error rate test

False positive error rate is satisfied when false positive rates are equal across two groups. In the context of COMPAS, the false positive error rate test would be satisfied if, among people who do not reoffend, the system incorrectly labels the same proportion of them as “high risk” across different demographic groups.
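
The same kind of hypothetical table can be used to sketch the false positive error rate test: restrict the data to people who did not reoffend and compare how often each group was wrongly flagged. As above, the column names are assumptions for illustration only.

```python
import pandas as pd

df = pd.DataFrame({
    "group":               ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_high_risk": [1,   1,   0,   0,   1,   1,   1,   0],
    "reoffended":          [1,   0,   0,   1,   1,   1,   0,   0],
})

# False positive rate: among people who did NOT reoffend, the share
# incorrectly labelled "high risk", computed per group.
did_not_reoffend = df[df["reoffended"] == 0]
fpr_by_group = did_not_reoffend.groupby("group")["predicted_high_risk"].mean()
print(fpr_by_group)
```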

Reviewing documentation: how to extract information from what you have 

As discussed in the Access to Information Chapter, a common piece of evidence requested and received through Freedom of Information requests is the internal documentation held on an algorithmic system. Such documentation often describes the system’s architecture, its data sources (inputs), the algorithmic design, and any testing of the system. Equally, as outlined in the template FOIs above, always request any information on bias or statistical assessments of the system that may have been conducted internally by the public sector agency. Documentation will often come redacted in one form or another. Some steps you can take when reviewing documentation:

  1. Compile a checklist of what you can learn: try to detail what data and inputs are included in the algorithmic system, how the data may have been processed prior to inclusion in the model, the algorithmic models chosen and their justification, what the proposed outputs of the system are and how they are used.
  2. Interrogate the design choices: the development and design of algorithmic systems require the developer to make a number of judgement decisions, for instance about what data to include and exclude and which particular model to use. These are not neutral decisions and can be interrogated. If you do not have a technical background, get in touch with experts in the field to help you do this.
  3. Examine the redactions: these are not always applied correctly, so check for this.
  4. Use this information for follow-up questions: if you have the time to submit further FOIs or interview government officials, build out more specific technical questions on the basis of the information you have acquired. Some examples are attached above.
  5. Corroborate this with your testimonial evidence: consider how the technical information tallies with the experiences of affected people or communities. Where algorithmic decision-making is used for risk assessments or to determine access to public services, is there a pattern in who may be being excluded, and how does this compare with the inputs listed in the technical documentation you have received?

SCENARIO 1: Access to code and data: Direct stress-testing of a system

In very select circumstances, you may receive access to the complete set of materials needed to evaluate an algorithmic system. Lighthouse Reports achieved this level of access in their assessment of Rotterdam Municipality’s welfare algorithm. They obtained access to:

  • The source code used to train the model
  • The list of variables and their relative importance
  • Evaluations of the model’s performance and Rotterdam’s handbook for data scientists
  • Documentation on the inputs used by the model
  • The trained machine learning model file itself and the raw data

This allowed them to directly test the model’s performance and conduct statistical bias and fairness testing. A guide to their approach can be found here.
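
To make the idea concrete, below is a minimal, hypothetical sketch of what direct stress-testing could look like once you hold a trained model file and the accompanying data. The file names, column names, threshold and the scikit-learn-style `predict_proba` interface are all assumptions for illustration and do not describe the Rotterdam system.

```python
import joblib          # commonly used to load scikit-learn model files
import pandas as pd

# Hypothetical artefacts obtained through disclosure (names are assumed).
model = joblib.load("welfare_risk_model.joblib")
data = pd.read_csv("welfare_recipients.csv")

# Score every person in the dataset with the disclosed model.
protected = ["gender", "nationality"]
features = data.drop(columns=protected + ["outcome"])
data["risk_score"] = model.predict_proba(features)[:, 1]

# Apply an assumed operational cut-off and compare both the average score
# and the share flagged for investigation across demographic groups.
data["flagged"] = data["risk_score"] >= 0.7
for column in protected:
    print(data.groupby(column)[["risk_score", "flagged"]].mean())
```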

SCENARIO 2: Access to code: Reverse engineering and simulations 

A more likely scenario is that you gain access to redacted documentation and, in select cases, the source code of the algorithmic system. This was the case in Lighthouse Reports’ investigation into the risk-scoring algorithm deployed by the French welfare agency known as “CNAF” (the National Family Allowance Fund).

This level of access allowed researchers to reconstruct the algorithmic system and run statistical tests on its performance. Researchers in this investigation used publicly available demographic data to run the reconstructed model and assess which inputs have the greatest impact on its outputs. They also created “profiles” of different welfare recipients to see which people were especially at risk of being discriminated against by the algorithmic system. A full methodology can be found here.
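
A toy example of the “profiles” approach is sketched below. The variables, weights and scoring rule are invented for illustration only; they are not the CNAF model, but they show how a reconstructed system can be probed with synthetic recipients to see who is scored as higher risk.

```python
# Invented weights standing in for a reconstructed scoring rule.
WEIGHTS = {
    "single_parent": 0.30,
    "disability_allowance": 0.25,
    "low_income": 0.20,
    "frequent_address_changes": 0.15,
}

def risk_score(profile: dict) -> float:
    """Apply the (hypothetical) reconstructed rules to one profile."""
    return sum(w for key, w in WEIGHTS.items() if profile.get(key))

# Synthetic "profiles" representing different types of welfare recipients.
profiles = {
    "single mother on a low income": {"single_parent": True, "low_income": True},
    "recipient of a disability allowance": {"disability_allowance": True},
    "dual-income household": {},
}

for name, profile in profiles.items():
    print(f"{name}: score = {risk_score(profile):.2f}")
```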
 

SCENARIO 3: Outcome data: testing without access to inputs of a model

In many cases, you may not be granted access to any of the source code or information on the inputs of an algorithmic system. Many states have been unwilling to provide this detail on the grounds that it is proprietary information or that it would reduce the system’s effectiveness, as people would have too much information on how it works.

You can, however, circumvent this issue by requesting data on the outputs and outcomes of the algorithmic system and conducting statistical bias and fairness testing on that data. This can still provide critical insight into the impact of the system, even if it does not unpack the system’s inner workings. For instance, this could mean asking for data and breakdowns on the decisions that a risk assessment algorithm has made. This requires two key pieces of information:

  1. How the algorithmic system made decisions for particular demographic groups, and whether those decisions were correct.
  2. A comparison group, for instance where a random selection approach was used instead of the algorithmic decision-making system.

This was the approach taken in the investigation into Sweden’s Social Insurance Agency’s algorithmic system, which can be found here.
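
As an illustrative sketch of this kind of outcome-only testing, the snippet below compares how often algorithmic selections and a random comparison group were confirmed as genuine cases, broken down by demographic group. The data and column names are hypothetical and do not reflect the Swedish investigation.

```python
import pandas as pd

# Hypothetical disclosed output data: each row is one person selected for
# investigation, with how they were selected and what the outcome was.
outcomes = pd.DataFrame({
    "group":           ["A", "A", "B", "B", "A", "A", "B", "B"],
    "selection":       ["algorithm", "algorithm", "algorithm", "algorithm",
                        "random", "random", "random", "random"],
    "fraud_confirmed": [1, 0, 0, 0, 1, 1, 1, 0],
})

# If the algorithm flags one group far more often than random selection,
# without a correspondingly higher confirmation rate, that points towards
# disproportionate and unjustified targeting of that group.
summary = outcomes.groupby(["selection", "group"])["fraud_confirmed"].agg(["count", "mean"])
print(summary)
```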

Affecting change after the investigation through advocacy methods and strategic communications

This chapter will focus on the key considerations for approaching advocacy and strategic communications work, both throughout the project’s lifecycle and after publication of any research. These are two key pathways to continued impact after an algorithmic investigation, but they are not the only methods available; ultimately, the approach must be guided by the strategy for change determined at the beginning of the project, after the scoping phase.

We suggest three main building blocks to the approach towards changemaking, which can be adapted based on contextual needs and priorities.

1. Building power within and outside your organization

The first step to building power is equipping yourself, your organization, and your partners with the knowledge and expertise around the issue you want to address. It also requires turning your attention to collaboratively brainstorming how the outcomes of the investigation can serve the wider objectives of coalition partners working for change on algorithmic justice issues.

Based on an assessment of existing needs, it is useful to develop, share and deliver relevant guidance documents, policy briefs and workshops. Creating spaces for knowledge-sharing and peer support is equally important. As much as possible, these resources and spaces should be available and open to partner organizations and advocates, and ideally developed collectively.

These shared resources and spaces will also help build power with partner organizations, advocates and movements of impacted people and communities. Building alliances with digital and human rights organizations, community organizations, the journalist community, and social, environmental, racial, migrant, gender, disability, queer and intersectional justice movements to challenge digital and wider systemic harms is increasingly vital. Such cross-coalition work is also an acknowledgement that digital rights and technologies do not exist in a bubble but are embedded in wider structures of inequality, exploitation and oppression.

2. Shaping narratives (communications work)

Another essential element for change is influencing public perceptions, discussions, and narratives around technology, its value, and potential risks. Thinking on this should start at the Project Scoping phase and be revisited regularly throughout the project lifecycle. Pro-innovation and anti-regulation narratives, often pushed by the tech industry, are currently dominating discussions around AI and digitalisation in the mainstream media and are shaping policy development and the proliferation of technological harms across a wide area of public life.

Any research, campaigns or advocacy projects are bolstered by careful thinking around your communications strategy and key messages. Any evidence on the harms and risks of algorithmic systems and automation serves to counter the dominant narratives around the benefits of technological development and deployment. When conducting your project, some useful considerations for communications are:

  • How can the project serve local community needs?
    • How does it center the lived experience of those impacted?
    • Do your key messages sufficiently take into account the local context?
    • How do your key messages align with calls from local civil society and community organizations?
  • How can the project help demystify the technology in question?
    • Can it be explained in simple non-technical language?
    • Can a graphic or illustration be used to represent the technology?
  • How can the project challenge existing narratives and debates around technology?
    • How can it break the false dichotomy commonly presented between innovation and regulation?
    • What positive and rights-affirming vision of the role of technology in society does it present?
    • How do the key messages serve the affected people and communities subject to the technology in question? How can you co-design these with partner organizations together?
  • What strategies can you use within your communications to reach the widest audience and affect change?
    • Are there specific relevant events you can publish around which would benefit the project?
    • What other relevant languages are you able to translate your key communications or press releases into?
    • What social media platforms can you promote the work on?
    • What days of the week are best for media coverage? Avoiding quieter days of the week, such as Mondays and Fridays, can be a useful rule of thumb.

3. Influencing policy (“traditional” advocacy)

Finally, consistent advocacy towards rights-respecting and enforceable digital regulation is the third essential pillar of positive changemaking. This entails pushing for stronger legal protections in new or developing regulations, seeking the repeal of unjust laws, pressing for the effective implementation of existing laws and, when needed, pushing back against attempts to weaken them.

Plugging into collaborative coalition discussions around seeking justice and accountability will, as a starting point, help establish what scope and format is best suited to achieving agreed policy goals. For example, if the aim is to influence and engage in negotiations around digital policy (such as AI regulation), or to pressure public or private actors to discontinue a particular technology, you might consider a shorter, hard-hitting piece to bring urgent public attention to the issue, including through media collaboration. By contrast, if you are aiming for long-term change, such as shifting public opinion around AI or petitioning for a new governance framework around AI, you might need to consider a longer-term project with thorough evidence gathering, a series of workshops, and sustained collaboration with partner organizations.

Similarly, as narratives around technology can shape the formation of public policy, successful examples of effective rights-respecting regulation can support positive and balanced discussions around technology and debunk disingenuous narratives such as the idea that regulation stifles innovation. Success in one city or country, or with a specific regulation, can also serve as best practice to motivate further regulation in other places or in relation to other aspects of technological impact. Locally targeted advocacy work can be supported by making use of existing human rights and digital rights frameworks at regional or international levels (such as regional digital regulations or international human rights frameworks).

When thinking about targeted advocacy related to the development, use, and/or regulation of digital technologies, it is helpful to think about:

  • What is the outcome you would like to see: what are your, and your partner organizations’, advocacy goals?
  • What are your key messages to push towards those goals? These can include policy recommendations, but also a broader narrative setting through communications work.
  • What is the best way to reach your goals? Is it through direct engagement with policymakers or through wider campaigning and media work to create public pressure on authorities? Often it can be both.
  • What is the context you are working in? Is there political will by those holding power to enact the changes you are calling for? What is the public debate around the issue?
  • What are the timelines you are working within? Are there any key dates for influencing? Is there an established procedure with set deadlines for decisions (such as negotiations around a particular regulation)?
  • Who is your target audience? Who are you trying to influence? Who are the people, organizations, and what are the channels that can help you influence them?
  • Who are your allies? Do you work within a coalition? Are there any allies among policymakers or media you can work with?
  • Who are your adversaries? Whose goals contradict yours?

Routes to Accountability and Justice

Amnesty International and its partners have pursued justice and accountability for algorithmic harms in a variety of ways, including strategic litigation, national supervisory and equalities mechanisms, and community mobilization. The mechanisms outlined below are not intended to be exhaustive; rather, they highlight a range of approaches that community organizations can build upon or creatively adapt to generate pressure and pursue justice and accountability in ways that best suit their contexts.

We also want to highlight that some of these mechanisms are quite complex and require resourcing. For smaller organizations working on algorithmic accountability, coalition building and working with expert partners can be an effective way to develop wider strategic responses like those outlined below.

Strategic Litigation

One way in which Amnesty International has sought justice and accountability for those negatively impacted by algorithmic systems is through strategic litigation. This involves deliberately conceptualizing or selecting legal proceedings, and becoming involved in them, with the aim of creating specific, long-term societal change. It goes beyond resolving individual grievances, using the legal system as a tool to pursue effective remedies and accountability, promote human rights, challenge unjust laws or practices, and set precedents that benefit wider communities.

In France, Amnesty International and fourteen other coalition partners, led by La Quadrature du Net (LQDN), submitted a complaint to the Council of State, the highest administrative court in France, demanding the withdrawal of a risk-scoring algorithmic system used by the French Social Security Agency’s National Family Allowance Fund. The system was used to detect overpayments and errors in benefit payments, and it treated individuals who experience marginalization, including people with disabilities, single parents (who are mostly women) and those living in poverty, with suspicion. This discrimination, built into the system, prompted a challenge to its legality.

The strength of this strategic litigation effort lies in the combined expertise of the various civil society organizations involved in France, which helped ensure the case reflected the needs and experiences of affected communities across the country.

National supervisory and equalities mechanisms

National ombudsman and supervisory bodies offer a different avenue to seek justice and accountability.

Many states, particularly those with strong national human rights institutions, will aim to protect various rights through some form of national mechanism. This may be a ministry or committee, a dedicated ombudsman, an equalities body, or an equivalent institution. This opens up opportunities for investigations that expose algorithmic bias and discrimination.

In Sweden, following the disclosure by Lighthouse Reports and Svenska Dagbladet (SvD) of an algorithmic system that disproportionately flagged certain groups for further investigation into social benefits fraud, including women, people with foreign backgrounds (those born overseas or whose parents were born in other countries) and low-income earners, advocacy work with the dedicated discrimination ombudsman led the ombudsman to issue an open call for a suitable case with which to challenge the system on grounds of discrimination.

States with some form of national data protection and privacy law will commonly have a regulatory body or authority that oversees the implementation and enforcement of the legislation. IMY, the Swedish Data Protection Authority, opened an investigation into the Swedish Social Insurance Agency’s use of algorithmic systems after Lighthouse Reports and SvD’s research exposed discrimination within one such system, which resulted in the system being shut down.

Building power, mobilizing communities and capacity building

There are many routes toward building power with communities and collectively driving towards human rights objectives. For instance, in the fight to prohibit facial recognition, Amnesty International has worked with dozens of local organizations and digital volunteers to pursue justice and accountability in local courts.

In September 2020, Amnesty International filed a public records request under New York City’s Freedom of Information Law (FOIL) to obtain New York Police Department (NYPD) records on its surveillance of the historic Black Lives Matter (BLM) protests in 2020.

The NYPD initially refused to disclose how it had used facial recognition surveillance against protesters during the 2020 BLM protests, denying both the initial request and the subsequent appeal. Those protests saw activists targeted by the technology and subjected to warrantless harassment at their residences.

In July 2021, Amnesty International and Surveillance Technology Oversight Project (S.T.O.P.), a privacy and civil rights group, filed a lawsuit against the NYPD for refusing to disclose its records.

In February 2022, Amnesty International worked with over 7,000 digital volunteers to map facial recognition-enabled cameras across New York City, using Street View imagery and the in-house micro-tasking platform, “decoders”. The research revealed that New Yorkers living in areas at greater risk of stop-and-frisk by police are also more exposed to invasive facial recognition technology. The analysis, part of the global Ban the Scan campaign, showed how the NYPD’s vast surveillance operation particularly affects people already targeted for stop-and-frisk across all five boroughs of New York City. In the Bronx, Brooklyn and Queens, the research also showed that the higher the proportion of residents belonging to racialized communities, the higher the concentration of facial recognition-compatible CCTV cameras. The data from this research was submitted to the New York Supreme Court as additional weight to our case against the NYPD, demonstrating that accountability around facial recognition is a public interest issue.

In August 2022, the New York Supreme Court decided in favour of Amnesty International and S.T.O.P. on their joint Article 78 lawsuit and ordered the NYPD to disclose thousands of records of how the force procured and used facial recognition technology against BLM protesters. We have received nearly 2700 documents to date, which are shedding new light on abuses of the technology by the NYPD.

Amnesty International extends its deepest gratitude to Anna Dent who has provided expert review and input into the Algorithmic Accountability Toolkit.
