The Trump administration could reverse progress on AI regulation

13 December 2024

While efforts to regulate the creation and use of artificial intelligence (AI) tools in the United States have been slow to make gains, the administration of President Joe Biden has attempted to outline how AI should be used by the federal government and how AI companies should ensure the safety and security of their tools.

The incoming Trump administration, however, has a very different view on how to approach AI, and it could end up reversing some of the progress that has been made over the past several years.

President Biden signed an executive order in October 2023 that was meant to promote the “safe, secure, and trustworthy development and use of artificial intelligence” within the federal government. President-elect Donald Trump has promised to repeal that executive order, saying it would hinder innovation.

Biden also got seven leading AI companies to agree to guidelines for how AI should be developed safely going forward. Aside from that, there are no federal regulations that specifically address AI. Experts say the Trump administration will likely take a more hands-off approach to the industry.

“I think the biggest thing we’re going to see is the massive repealing of the sort of initial steps the Biden administration has taken toward meaningful AI regulation,” says Cody Venzke, a senior policy counsel in the ACLU’s National Political Advocacy Department. “I think there’s a real threat that we’re going to see AI growth without significant guardrails, and it’s going to be a little bit of a free-for-all.”

Growth without guardrails is what the industry has seen so far, and that has led to a sort of Wild West in AI. Without lawmakers restricting how the technology can be used, problems such as the spread of deepfake pornography and political deepfakes have followed.

One of the top concerns of the Biden administration, and of those in the tech policy space, has been how generative AI can be used to wage disinformation campaigns, including through deepfakes, fraudulent videos that show people saying or doing things they never did. This kind of content can be used to try to sway election results. Venzke says he does not expect the Trump administration to focus on preventing the spread of disinformation.

AI regulation may not be a major focus for the Trump administration, Venzke says, but it is on its radar. Just this week, Trump chose Andrew Ferguson to lead the Federal Trade Commission (FTC), and Ferguson is likely to push back against regulating the industry.

Ferguson, currently an FTC commissioner, has said he will aim to “end the FTC’s attempt to become an AI regulator”, Punchbowl News reported, and that the FTC, an independent agency accountable to the US Congress, should instead be wholly accountable to the Oval Office. He has also suggested that the agency should investigate companies that refuse to advertise next to hateful and extremist content on social media platforms.

Venzke says Republicans believe Democrats want to regulate AI to make it “woke”, by which they mean having it acknowledge things such as the existence of transgender people or man-made climate change.

AI’s ability to ‘inform decisions’

Artificial intelligence doesn’t just answer questions and generate images and videos, though. Kit Walsh, director of AI and access-to-knowledge legal projects at the Electronic Frontier Foundation, tells Al Jazeera that AI is being used in many ways that threaten people’s individual liberties, including in court cases, and regulating it to prevent harm is necessary.

While people often think that having computers make decisions can eliminate bias, automated decision-making can actually entrench bias further if the AI is built on historical data that is itself biased. For instance, an AI system created to determine who receives parole might rely on data from cases in which Black Americans received harsher treatment than white Americans.

“The most important issues in AI right now are its use to inform decisions about people’s rights,” Walsh says. “That ranges from everything from predictive policing to deciding who gets governmental housing to health benefits. It’s also the private use of algorithmic decision-making for hiring and firing or housing and so on.”

Walsh says she thinks there’s a lot of “tech optimism and solutionism” among some of the people who Trump is interested in recruiting to his administration, and they may end up trying to use AI to promote “efficiency in government”.

This is the stated goal of people like Elon Musk and Vivek Ramaswamy, who will be leading what appears to be an advisory committee called the Department of Government Efficiency.

“It is true that you can save money and fire some employees if you are alright with less accurate decisions [that come with AI tools]. And that might be the path that someone might take in the interest of reducing government spending. But I would recommend against that, because it’s going to harm the people who rely on government agencies for essential services,” Walsh says.

If Trump’s first term as US president, from 2017 to 2021, offers any hint of what to expect, his incoming administration will likely spend far more time on deregulation than on creating new regulations, including those governing the creation and use of AI tools.

“I would like to see sensible regulation that paves the way for socially responsible development, deployment, and use of AI,” says Shyam Sundar, director of the Penn State Center for Socially Responsible Artificial Intelligence. “At the same time, the regulation should not be so heavy-handed that it curtails innovation.”

Sundar says the “new revolution” sparked by generative AI has created “a bit of Wild Wild West mentality among technologists”. Future regulations, he says, should focus on creating guardrails where necessary and promoting innovation in areas where AI can be useful.