Can we make AI safe without stifling innovation?

Jun 28, 2023

8 mins

By Rozena Crossman

Journalist and translator based in Paris, France.

Artificial intelligence is far from new, yet it continues to enthrall the world. It entered its current golden age in the 1970s, but is still making headlines. In April, for example, a Japanese city decided to delegate some administrative government work to ChatGPT, an AI-powered chatbot released a mere six months previously. But AI’s history also includes systems that have discriminated against certain job candidates and convinced people to take their own lives. All over the world, in every industry imaginable, people will continue to use this technology to make improvements at work. At the same time, its ability to do serious harm has been documented widely. So why hasn’t any country enacted laws aimed at regulating AI yet?

To help us understand the complex tango of increasing AI’s benefits to society while reducing its potential to cause harm, Welcome To The Jungle spoke with Ian Andrew Barber, an international human rights lawyer based in London. Barber, a senior legal officer at Global Partners Digital, a digital rights non-governmental organization, offers fresh insight into the challenges lawmakers face when it comes to AI, the legal initiatives in the works and the logic behind them.

*This interview took place on April 14, 2023, prior to Italy lifting its ChatGPT ban and Sam Altman’s appearance before Congress.

How does AI factor into your work as a human rights lawyer?

AI is a growing issue everywhere. It seems like everyone is talking about AI, trying to wrap their heads around how it works and what developments we will see in the future. But lawyers are increasingly paying attention to AI in the context of existing and proposed regulations, which has required us to take a more coordinated and proactive approach.

It’s clear that AI offers a number of benefits to society. It can optimize anything from agriculture to urban living, or facilitate greater access to information and education. But at the same time, we’re also seeing risks emerging from these technologies — to human rights and democratic values. These include the right to a fair trial, the right to privacy, guarantees of non-discrimination, etc.

In recent years, we’ve seen a number of non-binding AI guidelines or frameworks proposed by the technical community and international organizations such as UNESCO and the OECD. On top of that, we now have countries coming up with their own national AI strategies, which outline how they will deal with the benefits and the risks. In the US, for example, there’s a blueprint for an AI Bill of Rights. For the moment, these approaches are non-binding and haven’t translated into legislation. That said, in the past year or so, there have been efforts by regional blocs and intergovernmental organizations to create legally binding frameworks with respect to AI.

Why has it been so difficult for lawmakers to legislate for AI?

It’s been difficult for a number of reasons. First, [lawmakers have] just a basic understanding of the technology. It’s quite complicated and policymakers are not always the most tech-savvy. Most people outside of the technical community face a very steep learning curve when it comes to understanding technology. For example, [at] recent US congressional hearings, legislators lacked a basic understanding of what algorithms are and how they are used by social media platforms. And recent Supreme Court arguments about intermediary liability show how different branches of government simply don’t grasp the fundamentals when it comes to technology, particularly when it’s complex or emerging.

Then, there’s a disconnect between how a “normal” person understands AI and what policymakers are doing, or trying to understand. On one end, we have these developments happening at the Council of Europe or the European Union with really brilliant policymakers trying to find solutions to the risks posed by AI. On the other end, you have people on social media, even influential individuals, claiming AI has grown too powerful or is going to take over the world. Policymakers should respond to what the people need and want, but there’s a general lack of understanding. I think it’s the biggest issue across the board.

Also, the technology is still developing at a rapid pace. A decade ago AI systems still struggled to distinguish a photo of a cat from one of a human, and now we are seeing the ongoing proliferation of generative AI [the technology behind ChatGPT]. This adds another layer of complexity that all stakeholders need to understand in order to regulate the technology effectively. Lawmakers seem to be constantly playing catch-up.


What can legislators do to keep up?

Regulations need to build on existing legal frameworks. We already have recognized protections and rights that are guaranteed at the international and national levels, so there’s no reason to simply abandon them. For example, when the modern international human rights system was developed in the aftermath of World War II, its drafters probably didn’t envision issues around digital mass surveillance. But we’ve adapted to apply these existing protections, such as privacy, in a new context. Sometimes that requires a bit of work and finesse — but we’ve made it happen before.

More recently, we saw how existing data protection laws apply to AI when Italy banned ChatGPT. The Italian data protection authority said there was no legal basis to justify the mass collection and storage of personal data for the purposes of training ChatGPT’s algorithms. [It has since rescinded the ban.] But with AI, it’s a bit more complicated. For example, under international human rights law, you have the right to an effective remedy, meaning that individuals should be able to challenge a violation of their rights and seek redress. But the issue with AI is that there is often a “black box,” or opacity, around decision-making. How do you know how it made a particular decision? Is it because of a biased developer or the training data? You don’t know, and therefore you might not be able to challenge a violation of your rights or seek redress. So how do we solve that? We still want to make sure the right to remedy is viable and available. That’s why it’s important for legislatures to focus on transparency requirements, accountability mechanisms and oversight.

Can you tell us about current initiatives to legislate for AI and what they’re focusing on?

At the national level, it’s at a very nascent stage right now. We’re not seeing AI-specific legislation that’s comprehensive, but we are starting to see countries consider it. The National Telecommunications and Information Administration (NTIA) of the US Department of Commerce has requested comments on AI system accountability measures and policies, and the UK has an open consultation on its new approach to AI. These are key opportunities for activists, experts and organizations to make comments and attempt to influence government policy.

Besides these national efforts, there are also two really important initiatives emerging out of Europe. On one hand, we have the EU Artificial Intelligence Act. This is the 27 EU member states working together to create a harmonized legal framework for the development and use of AI within the internal market. It aims to accomplish this with a risk-based approach, where intervention and obligations scale with the level of risk posed by an AI system’s intended use. Right now, there are four distinct levels, as well as a more recent addition covering general-purpose AI systems.

First, there’s “unacceptable risk”: these applications would simply be banned because they contravene EU values and violate fundamental rights. Social scoring, as seen in China, would be one example. The proposal would also ban AI systems that exploit the vulnerabilities of individuals based on age, physical or mental disability, or social and economic situation. There’s a very short list of prohibited uses — only four at this point.

Then, there’s “high risk.” These AI systems may have an adverse impact on safety or fundamental rights. This category covers AI systems used in border control, law enforcement, medical devices and recruitment, among other areas. They would be subject to various requirements relating to data quality, transparency and oversight. Then there’s “limited risk”: these systems would just have basic transparency requirements, including making users aware they are interacting with an AI system.

More recently, EU policymakers have tried to respond to the issue of general-purpose AI, such as ChatGPT, and have inserted a new category into the proposal. The problem is that this kind of AI can be used for different purposes with varying levels of risk: I can use it to book my vacation or to churn out misinformation at scale. I think it shows how the EU is already having to adapt to technological changes, and the law hasn’t even been put into effect yet.

And the other European initiative you mentioned?

The Council of Europe’s proposed Convention on AI. The continent’s premier human rights organization, which is composed of 46 countries, is setting out a binding framework that provides obligations and principles on the design, development and application of AI systems. This is to be based on the Council of Europe’s standards on human rights, democracy, and the rule of law. It’s essentially a thematic human rights treaty.

While the EU’s Artificial Intelligence Act is a proposed law that will apply directly in the EU once enacted, the Convention on AI has the potential to be the world’s first legally binding treaty on AI. This is because countries outside of Europe such as Canada, the US, Israel, Japan and others would be able to sign on, setting international standards for approaching AI governance. By setting out new, globally recognized safeguards for AI, it could radically drive up protections for human rights.

Could these initiatives stifle innovation?

Well, I guess the entire point is that you want to ensure you’re not stifling innovation. That’s why both of these initiatives are taking a risk-based, proportionate approach. It’s not necessary for AI systems that pose a limited risk to be subject to the same obligations as more dangerous systems. It’s important that all regulation is tailored and targeted, and not overly burdensome. So these approaches are pro-innovation while also addressing risks. Both of these things can happen at the same time.

I think that providing effective guardrails inherently lends itself to innovation. People want to use products and services that they know are safe and secure. We see this with the regulation of IoT [the internet of things] devices. IoT devices aren’t limited to smartwatches or household gadgets; they are also used in healthcare and manufacturing. These technologies are increasingly subject to regulations, but that hasn’t slowed the pace of development.

There’s all this talk about an AI race. But isn’t a slightly less advanced but safer and more reliable technology better than one that’s slightly more advanced but potentially dangerous? I think we will ultimately decide that the answer is “yes.”


Do you think Italy’s ChatGPT ban will stifle the country’s ability to stay competitive?

No, I don’t. I think two things will likely happen. One, OpenAI, the company behind ChatGPT, will decide to comply with the data protection law in question. Compliance would enable its product to reach a broader number of people and ultimately benefit the company, so the financial incentive alone is pretty significant. Two, it’s not as if ChatGPT is the only option out there. There are dozens of alternatives available in Italy that people can use, and they will likely take ChatGPT’s place in the Italian market.

So, digital rights decisions in the EU affect the AI market in the US?

Yes. It’s called the “Brussels effect”: when EU regulations have a significant influence on laws around the world. This already happened with the General Data Protection Regulation (GDPR), and we’re now seeing it play out with the EU’s Digital Services Act, which requires online platforms such as Meta and Twitter to adhere to certain transparency, moderation and due diligence obligations.

So even though the US doesn’t have comprehensive data protection laws or online platform regulations, a lot of its biggest companies have to abide by European laws if they want a part of that market. In many ways, we’re seeing the US becoming a rule taker rather than a rule maker.

What’s the bare minimum an AI policy needs to include if it is to be able to mitigate threats?

At the most basic level, you need to first understand the potential harm of whatever system and context you’re facing. So you need to have some means of assessing an AI system, how it can be used and the risks that it might pose. You also need to have transparency, explainability, and accountability requirements, as well as independent and effective oversight. That means understanding where the data came from, but also being transparent with an individual when AI is being used and providing information about how the AI made a decision.

It goes back to the right to redress and the importance of providing explainability. Ultimately, you need to be able to challenge a decision made by AI and figure out where the onus lies.

Photo: Bess Adler for Welcome to the Jungle

