Prohibitions

Local Interventions

Ban or Restrict Local Government Use of Harmful AI Technologies


Algorithmic decision systems are frequently sold to city agencies with promises of efficiency or cost reduction. Instead, algorithms are overwhelmingly used to reduce people’s access to critical and life-saving resources,[1] from healthcare to unemployment assistance. These outcomes persist even when agencies follow best-in-class mitigation techniques,[2] and some of the resulting decisions are impossible to remedy after the fact. Local governments should avoid using AI and algorithmic decision systems, especially where critical decisions are made about people’s lives and livelihoods.

Where AI technology is used by local agencies, governments must guarantee the right to opt out, the right to request a timely appeal, and the right to remedy decisions. This can be achieved through the following mechanisms (a minimal sketch of a notification record follows the list):

Disclosures

Offer pre-decision disclosures that give individuals the right to opt out from the use of an AI system making decisions about them.

Timely Notification

Provide timely notification after a decision is made, including what decision or recommendation was made using AI, a clear description of the parameters and logic of how the AI affected the decision or recommendation, and a clear description of what personal information was used to make the decision, including both the input and output data.

Appeal

Provide timely and clear instructions for appealing the decision to a human reviewer, including the ability to correct any inaccurate information used in the decision.

Accessibility

All information must be delivered in an accessible format and language.
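
To make the notification and appeal requirements concrete, the following is a minimal sketch of what a post-decision disclosure record could contain, written as a Python dataclass. This is an illustration only: the class name, every field, and the example values are hypothetical assumptions, not a schema prescribed by any ordinance or agency.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AIDecisionNotice:
    """Illustrative post-decision disclosure record.

    Hypothetical schema: each field maps to one of the notification
    requirements above, but the names and types are assumptions for
    illustration, not a mandated format.
    """
    decision: str                # what decision or recommendation was made
    made_with_ai: bool           # whether an AI system was involved
    logic_summary: str           # plain-language description of the parameters
                                 # and logic by which the AI shaped the outcome
    input_data: dict[str, str]   # personal information fed into the system
    output_data: dict[str, str]  # scores or outputs the system produced
    decision_date: date          # when the decision was made
    appeal_instructions: str     # how to request review by a human
    appeal_deadline: date        # window for filing a timely appeal
    correction_contact: str      # where to submit corrections to inaccurate data
    language: str = "en"         # accessible format/language of delivery


# Hypothetical example: a notice for a home-care benefits determination.
notice = AIDecisionNotice(
    decision="Home-care hours reduced from 40 to 32 per week",
    made_with_ai=True,
    logic_summary=(
        "An assessment algorithm weighed reported mobility and "
        "self-care scores to set the hours allocation."
    ),
    input_data={"mobility_score": "3", "self_care_score": "2"},
    output_data={"allocated_hours": "32"},
    decision_date=date(2025, 1, 15),
    appeal_instructions="Submit the appeal form to the county office within 30 days.",
    appeal_deadline=date(2025, 2, 14),
    correction_contact="records@example.gov",
)
print(notice.appeal_instructions)
```

A record like this would make the appeal and correction rights enforceable in practice: the logic summary and input data give the individual what they need to contest an inaccurate decision, and the deadline and contact fields make the appeal path explicit.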

[1] Kevin De Liban, Inescapable AI: The Ways AI Decides How Low-Income People Work, Live, Learn, and Survive, TechTonic Justice, November 2024, https://www.techtonicjustice.org/reports/inescapable-ai.
[2] Eileen Guo, Gabriel Geiger, and Justin-Casimir Braun, “Inside Amsterdam’s High-Stakes Experiment to Create Fair Welfare AI,” MIT Technology Review, June 11, 2025, https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure.

Prohibit or Limit the Use of Surveillance Technologies


People are subjected to numerous harmful surveillance technologies deployed by public agencies, workplaces, and private companies. Mounting evidence shows that police departments’ use of biometric technologies, including facial recognition systems, is flawed and error-prone, and can lead to irreparable harm, such as wrongful arrests. Private companies exploit the extraordinary amounts of data they collect to set individualized prices for the goods and services we need to survive, driving up the cost of living. Algorithmic pricing schemes on app platforms artificially deflate wages and exacerbate the affordability crisis. Local governments can protect against these uses of surveillance technology by banning biometric technologies, banning algorithmic rental price-fixing, and banning surveillance-based wage-setting for everyone.

Strong example

Jersey City, New Jersey; Philadelphia, Pennsylvania; Minneapolis, Minnesota; and San Francisco, California, banned algorithmic rental price-fixing.

State & Regional Interventions

Bright-Line Rules That Restrict the Most Harmful AI Uses Wholesale


Bright-line rules that prohibit the most harmful use cases of AI send a clear message that the public determines whether, in what contexts, and how AI systems will be used. A growing list of ripe targets for these clear prohibitions includes:

  • AI cannot be used for emotion-detection systems.
  • AI cannot be used for “social scoring,” i.e., scoring or ranking people based on their social behavior or predicted characteristics.
  • Surveillance data cannot be used to set prices or wages.
  • AI cannot be used to deny health insurance claims.
  • Surveillance and monitoring data about workers cannot be sold to third-party vendors.
  • AI cannot be used to replace public school teachers. 
  • AI cannot be used to generate sexually explicit deepfake imagery or election-related deepfake imagery.
  • AI cannot be used for the grooming and sexual exploitation of minors.
  • AI cannot be used for predictive policing.
  • AI cannot be used for military applications.
  • AI cannot be used to aid oil, gas, and coal extraction.

Federal Interventions

Bright-Line Rules That Restrict the Most Harmful AI Uses Wholesale


Bright-line rules that prohibit the most harmful use cases of AI send a clear message that the public determines whether, in what contexts, and how AI systems will be used. A growing list of ripe targets for these clear prohibitions includes:

  • AI cannot be used for emotion-detection systems.
  • AI cannot be used for “social scoring,” i.e., scoring or ranking people based on their social behavior or predicted characteristics.
  • Surveillance data cannot be used to set prices or wages.
  • AI cannot be used to deny health insurance claims.
  • Surveillance and monitoring data about workers cannot be sold to third-party vendors.
  • AI cannot be used to replace public school teachers. 
  • AI cannot be used to generate sexually explicit deepfake imagery or election-related deepfake imagery.
  • AI cannot be used for the grooming and sexual exploitation of minors.
  • AI cannot be used for predictive policing.
  • AI cannot be used for military applications.
  • AI cannot be used to aid oil, gas, and coal extraction.