AI Regulation

The significant resources (capital, energy, land, and water) going into data center expansion are being deployed in service of largely unproven artificial intelligence technologies, whose purported “productivity benefits” have yet to reach millions of consumers and workers across the country, and whose harmful effects are materially reshaping our institutions in ways that ratchet up inequality.

And yet, the federal government has repeatedly attempted to block states from passing laws regulating AI, encroaching upon local and state authority and endangering millions of people.

AI regulation goes hand in hand with data center regulations to ensure that if data centers are built, they cannot operate in service of technology that harms people.

Local Interventions

Local governments are empowered to protect their constituents, particularly communities of color, immigrants, and low-income and working-class people, from the harms of AI technologies.

Ban or Restrict Local Government Use of Harmful AI Technologies

Algorithmic decision systems are frequently sold to city agencies with promises of efficiency or cost reduction. Instead, algorithms are overwhelmingly used to reduce people’s access to critical and life-saving resources,1 from healthcare to unemployment assistance. These outcomes persist even when agencies abide by best-in-class mitigation techniques,2 and some resulting decisions are impossible to remedy after the fact. Local governments should avoid using AI and algorithmic decision systems, especially where critical decisions are made about people’s lives and livelihoods.

Where AI technology is used by local agencies, governments must guarantee the right to opt out, the right to request a timely appeal, and the right to remedy decisions. This can be achieved through the following mechanisms:

Disclosures

Offer pre-decision disclosures that give individuals the right to opt out from the use of an AI system making decisions about them.

Timely Notification

Provide timely notification after a decision is made, including what decision or recommendation was made using AI, a clear description of the parameters and logic of how the AI impacted the decision or recommendation, and a clear description of what personal information was used to make the decision, including both the input and output data.

Appeal

Provide timely and clear instructions for appealing the decision to a human reviewer, including the ability to correct any inaccurate information used in the decision.

Accessibility

All information must be delivered in an accessible format and language.

  1. Kevin De Liban, Inescapable AI: The Ways AI Decides How Low-Income People Work, Live, Learn, and Survive, TechTonic Justice, November 2024, 
    https://www.techtonicjustice.org/reports/inescapable-ai. ↩︎
  2. Eileen Guo, Gabriel Geiger, and Justin-Casimir Braun, “Inside Amsterdam’s High-Stakes Experiment to Create Fair Welfare AI,” MIT Technology Review, June 11, 2025, https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure. ↩︎

Prohibit or Limit the Use of Surveillance Technologies

People are subject to numerous harmful surveillance technologies by public agencies, workplaces, and private companies. Mounting evidence shows that the use of biometric technologies by police departments, including facial recognition systems, is flawed and error-prone, and can lead to irreparable harm, such as wrongful arrests. Private companies currently exploit the extraordinary amounts of data they collect to set individualized prices for the goods and services we need to survive, driving up the cost of living. Algorithmic pricing schemes on app platforms artificially deflate wages and exacerbate the affordability crisis. Local governments can protect against these uses of surveillance technology by banning the use of biometric technologies, banning algorithmic rental price-fixing, and banning surveillance-based wage-setting for all workers.

Strong example

Jersey City, New Jersey; Philadelphia, Pennsylvania; Minneapolis, Minnesota; and San Francisco, California, banned algorithmic rental price-fixing.

Attach Strong Conditions to Government Procurement of AI Technology

Increasingly, governments are turning to third-party tech vendors to outsource technical skills and automate key government functions. This in turn depletes in-house technical expertise and diminishes the quality of government services for all people. In the event that a local government agency must pilot, purchase, or otherwise use AI technology, local governments should attach conditions to ensure that vendors, products, and city agencies abide by these strong accountability measures.1 Conditions must be binding and legally enforceable, and must exist as grounds to reject or void contracts if and where tech firms cannot abide by accountability measures.

  1. For guidance, see Accountable Tech et al., Zero Trust AI Governance, August 10, 2023, https://ainowinstitute.org/wp-content/uploads/2023/08/Zero-Trust-AI-Governance.pdf; Roya Pakzad and Cynthia Conti-Cook, Key Considerations When Procuring AI in the Public Sector, Taraaz and The Collaborative Research Center for Resilience (CRCR), 2025, https://static1.squarespace.com/static/5d159d288addab0001036c45/t/6890f9066bf93951bedd9485/1754331401682/AI_Procurement_Taraaz_CRCR_2025.pdf; and Rashida Richardson, Best Practices for Government Procurement of Data-Driven Technologies, May 2021, https://riipl.rutgers.edu/files/2021/05/Best-Practices-for-Government-Technology-Procurement-May-2021.pdf. ↩︎

State & Regional Interventions

State policymakers are uniquely positioned to pass legislation protecting constituents from the worst AI abuses: Even as federal legislation has lagged, state legislatures have moved to enact measures to meet the moment. States are empowered to protect their constituents, particularly communities of color, immigrants, and low-income and working-class people, from the harms of AI technologies.

Bright-Line Rules That Restrict the Most Harmful AI Uses Wholesale

Bright-line rules that prohibit the most harmful use cases of AI send a clear message that the public determines whether, in what contexts, and how AI systems will be used. A growing list of ripe targets for these clear prohibitions includes:

  • AI cannot be used for emotion-detection systems.
  • AI cannot be used for “social scoring,” i.e., scoring or ranking people based on their social behavior or predicted characteristics.
  • Surveillance data cannot be used to set prices or wages.
  • AI cannot be used to deny health insurance claims.
  • Surveillance and monitoring data about workers cannot be sold to third-party vendors.
  • AI cannot be used to replace public school teachers. 
  • AI cannot be used to generate sexually explicit deepfake imagery or election-related deepfake imagery.
  • AI cannot be used for the grooming and sexual exploitation of minors.
  • AI cannot be used for predictive policing.
  • AI cannot be used for military applications.
  • AI cannot be used to aid oil, gas, and coal extraction.

Regulate AI Throughout the Entire Life Cycle of Development

States should regulate AI throughout the entire life cycle of development, from how data is collected through the training process, to fine-tuning and application development and deployment. Require AI companies to submit to independent third-party oversight and testing throughout the AI life cycle, and provide enforcement agencies with the resources and in-house staffing necessary to conduct oversight throughout the AI life cycle.

Fight Against Attempts to Block States from Regulating AI 

Throughout 2025, the federal government repeatedly attempted to block states from passing laws regulating AI,1 most recently by threatening to put a provision limiting states’ ability to pass AI laws into the National Defense Authorization Act (NDAA), a defense spending bill.2 Legislators must continue to speak out against these attempts to block state authority to protect constituents.

  1. Cecilia Kang, “Defeat of a 10-Year Ban on State A.I. Laws Is a Blow to Tech Industry,” New York Times, July 1, 2025, https://www.nytimes.com/2025/07/01/us/politics/state-ai-laws.html. ↩︎
  2. Cristiano Lima-Strong, “It’s Back. Congress Gears Up for Year-End Fight Over Moratorium on AI Laws,” Tech Policy Press, November 18, 2025, https://www.techpolicy.press/its-back-congress-gears-up-for-yearend-fight-over-moratorium-on-ai-laws. ↩︎

Federal Interventions

Federal policymakers must pass legislation protecting the public from the worst AI abuses, as well as block preemption attempts that prohibit state policymakers from protecting their state constituents.

Reassert Congressional Authority over White House AI Executive Order Overreach

The Trump administration has made its desire to use executive authority to boost the AI industry and fast-track data centers across the country extremely clear. This includes a drive to preempt state and local authority1 and sidestep congressional authority in service of AI data centers.2 Congress can protect against such unilateral actions.

Moreover, the July 2025 executive order “Accelerating Federal Permitting of Data Center Infrastructure” orders the secretary of commerce to launch an initiative providing financial support for qualifying data center projects.3 Congress, per its authority to control federal spending, can institute oversight over all federal investment in data center projects, including any loans, loan guarantees, grants, tax incentives, and offtake agreements suggested in the executive order.4

  1. Executive Order 14365 of December 11, 2025, Ensuring a National Policy Framework for Artificial Intelligence, 90 Fed. Reg. 58499 (2025), https://www.federalregister.gov/documents/2025/12/16/2025-23092/ensuring-a-national-policy-framework-for-artificial-intelligence. ↩︎
  2. Executive Order 14318, Accelerating Federal Permitting of Data Center Infrastructure. ↩︎
  3. Ibid. ↩︎
  4. The Appropriations Clause, Art. I, § 9, Cl. 7, reads that “No Money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law.” Congress has the authority to require that a financial program referenced in Sec. 3 of the Executive Order, “Accelerating Federal Permitting of Data Center Infrastructure,” proceed through Congressional appropriations. In that appropriations process, Congress may impose conditions, limitations, or prohibitions on the use of the funds. ↩︎

Repeal or Roll Back Federal Tax Incentives and Subsidies Given to AI Firms and Data Center Speculators

Repeal or roll back all federal tax subsidies and credits for data center infrastructure, such as the 100 percent bonus depreciation for IT infrastructure and data center equipment under Public Law 119–211 or the 45Q credit for carbon sequestration technologies.2

  1. Public Law 119–21, 119th Cong., 1st sess. (July 4, 2025), 139 Stat. 72. ↩︎
  2. 26 U.S.C. § 45Q (2023). ↩︎

Condition Future Federal Investment in AI on Guarantees for the Public

Congress can attach enforceable conditions to all federal investment in AI firms to ensure that any taxpayer support for the AI industry works to benefit the public. See “Establish Public Benefit Conditions on All Federal Investment in AI” for a comprehensive list of these conditions.

Pursue Enforcement Strategy to Thwart Toxic Market Behavior

Federal policymakers can pursue an enforcement strategy that uses competition, financial, fraud, and transparency regulations to surface toxic market behavior by AI companies and hold them accountable.

Make Clear There Will Be No Bailout for AI Firms That Fail

Congressional policymakers can state clearly that there will be no federal bailout for AI firms that fail.

Establish Federal Deference to State and Local Power

Congress can reject attempts to strip states of their ability to protect constituents from data centers through moratoriums or proposed national legislative frameworks, ensuring that permitting decisions for data centers remain firmly under state and local control. Congress can also make clear that the President cannot use emergency authorities to usurp state, county, or municipal laws and regulations—including zoning and permitting laws—with regard to data centers and associated energy infrastructure.1

  1. Thanks to Public Citizen for this recommendation. See Deanna Noel and Meghan Pazik, “Reining in Big Tech: Policy Solutions to Address the Data Center Buildout,” Public Citizen, December 3, 2025, https://www.citizen.org/article/reining-in-big-tech-policy-solutions-to-address-the-data-center-buildout. ↩︎

Reject Federal Sandbox and Civil Immunity Bills

Reject federal sandbox1 and civil immunity2 bills that function as a moratorium by blocking states from passing and enforcing their own laws to regulate AI use cases.

  1. S. 2750, SANDBOX Act, 119th Cong. (2025), introduced by Sen. Ted Cruz, https://www.congress.gov/bill/119th-congress/senate-bill/2750. ↩︎
  2. S. 2081, RISE Act, 119th Cong. (2025), introduced by Sen. Cynthia Lummis, https://www.congress.gov/bill/119th-congress/senate-bill/2081/text. ↩︎

Resist Industry-Written Federal Standards

Resist passing weak and industry-written federal standards that effectively function as a moratorium, blocking states from passing stringent standards to protect their constituents.1 In particular, the federal government can scrutinize the forthcoming legislative recommendation, prepared by the administration under the December 2025 Executive Order, to establish a federal policy framework for AI that preempts state AI laws.2

  1. Kate Brennan, Amba Kak, and Sarah Myers West, “The Storm Clouds Looming Past the State Moratorium: Weak Regulation is as Bad as None,” Tech Policy Press, June 10, 2025, https://www.techpolicy.press/the-storm-clouds-looming-past-the-state-moratorium-weak-regulation-is-as-bad-as-none. ↩︎
  2. Executive Order 14365, Ensuring a National Policy Framework for Artificial Intelligence. ↩︎
