
Can we really regulate AI?


Governments around the world are racing to regulate artificial intelligence ("AI"). The Biden administration recently issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The European Union also reached a tentative agreement on the long-awaited Artificial Intelligence Act.

This growing effort to regulate and control the development of artificial intelligence is not surprising. From improving drug discovery and disease detection to automating simple tasks, the economic and social benefits of AI are real. But when used improperly, AI can cause real harm. Focusing regulation on AI technology itself, rather than on harmful uses of AI, may nonetheless be misguided. It risks missing the mark: failing to adequately prevent harmful uses of AI while unduly blocking socially beneficial ones.

Part of the challenge is defining the boundaries of AI. Artificial intelligence is an amorphous category that encompasses a variety of computational methods. Many recent advances in artificial intelligence are due to progress in "machine learning" methods that more effectively mimic human cognitive processes. But as new learning methods are developed, the boundary between artificial intelligence and "ordinary" computing continues to shift. Hence the adage, "Artificial intelligence is what hasn't been done yet."

These shifting boundaries make it much harder to identify AI as a target of regulation. A broad definition of artificial intelligence risks sweeping in whole classes of ordinary computational systems and over-regulating technological development. Narrower definitions may be better targeted, but they also risk quickly becoming obsolete, given the dynamic, rapidly changing boundaries of AI. The White House executive order flirts with both broad and narrow definitions of artificial intelligence. It sets very specific thresholds that trigger AI reporting requirements, such as AI models trained "using amounts of computing power greater than 10^26 integer or floating point operations." While potentially more administrable, these thresholds seem somewhat arbitrary. And even if they reflect the cutting edge of artificial intelligence today, they will quickly become obsolete.

Not only is it difficult to meaningfully define the limits of AI, it is also challenging to quantify the risks of AI, or of particular AI models, in the abstract and without context. We cannot meaningfully measure whether a major AI model, like the one powering ChatGPT, will ultimately cause more harm than good. We certainly cannot quantify the risk of an AI "superintelligence" taking over the world; that is a speculative doomsday scenario, albeit one that no doubt encourages a more cautious approach to AI regulation. Conversely, we can more easily measure and punish specific harmful uses of these models (such as deepfakes that spread disinformation and undermine the democratic process).

Of course, some laws, such as the EU Artificial Intelligence Act, also single out and regulate certain applications of AI that are perceived to be higher risk, such as facial recognition and credit scoring systems. Additionally, many laws and regulations that are not specific to AI govern its use, from financial regulation to copyright law. But these "downstream" rules are part of an AI governance framework that increasingly also includes "upstream" regulation of AI methods and models, that is, of the technology itself.

We need to think more carefully about the relative merits of upstream AI regulation, especially to avoid over-regulating AI development. We do not want to make it harder for smaller startups to compete with larger companies such as OpenAI and Microsoft, which dominate the market for AI development and can more easily absorb compliance costs. We should try to prevent and remedy the harms caused by artificial intelligence. But at the same time, we should not be so cautious that we forgo its many economic and social benefits.

It may simply be too early to conclude that the harms of AI clearly outweigh its benefits, and therefore too early to justify an extremely precautionary regulatory stance.

Nikita Aggarwal is a lecturer at the University of California, Los Angeles School of Law and a postdoctoral fellow at the UCLA Institute for Technology, Law, and Policy.
