
How governments and companies should advance trusted AI

AI is a profound opportunity, and the stakes are high. AI is projected to enhance human productivity and unlock an astounding $16 trillion in value by 2030. Not only will this fuel economic growth and boost GDP, but it will also give a competitive edge to the individuals and organizations that leverage AI effectively. AI could also help address some of our most pressing challenges, whether pioneering drug discovery, improving manufacturing and food production, or confronting climate change.

But, as with any powerful technology, AI comes with the potential for both misuse and risk. If AI is not deployed responsibly, it could have real-world consequences, especially in sensitive, safety-critical areas. This is a serious challenge we must overcome, and it is precisely why we urge Congress and the Administration to enact smart regulation now. At IBM, we believe smart regulation should rest on three core tenets:

#1 Regulate AI risk, not AI algorithms

We should regulate high-risk uses of AI. Not all uses of AI carry the same level of risk: some might seem harmless, while others can have far-reaching consequences, such as propagating misinformation, introducing bias into lending decisions, or compromising election integrity. Because each AI application is unique, we strongly believe that regulation must account for the context in which AI is deployed and ensure that high-risk uses are regulated more closely. This kind of smart, precision regulation works, and there is a successful precedent: in semiconductors, we have never licensed the invention of new chips. Instead, we regulate when, where, and how those products are used. This approach promotes both innovation and accountability, and the same can be done with AI.

#2 Make AI creators and deployers accountable, not immune to liability

We should hold those who create and deploy AI accountable. While governments play an important role, others must also bear responsibility. Legislation should consider the different roles of AI creators and deployers and hold each accountable in the context in which they develop or deploy AI. For example, companies using AI in employment decisions should not be able to claim immunity from employment discrimination charges. Similarly, a software developer who creates a financial algorithm that facilitates fraud should be held liable for the harm it may cause. Let’s learn from past mistakes with emerging technologies: Section 230 stands as a cautionary tale. We should not create another broad shield from legal liability that applies irrespective of reasonable care. It is essential to strike the right balance between innovation and accountability.

#3 Support open AI innovation, not an AI licensing regime

We should not create a licensing regime for AI. An AI licensing regime would deal a serious blow to open innovation and risks creating a form of regulatory capture: it would increase costs, hinder innovation, disadvantage smaller players and open-source developers, and cement the market power of a few incumbents. Instead, AI should be built by and for the many, not the few. A vibrant open AI ecosystem is good for competition, innovation, skilling, and security, and it helps ensure that AI models are shaped by many diverse, inclusive voices. Other governmental actions, such as funding the National AI Research Resource, could further help foster an open AI innovation ecosystem.

Responsible AI at IBM

For over a century, IBM has been at the forefront of responsibly introducing groundbreaking technologies. This means we don’t release technology to the public without fully understanding its consequences, providing essential guardrails, and ensuring proper accountability. We believe that addressing the repercussions of our innovations is just as important as the innovations themselves.

Our commitment to trusted and accountable AI is evident in how we build and deploy AI models. Because AI models are essentially a representation of their underlying data, IBM has embraced a holistic approach: our watsonx platform provides governance at every stage of the AI lifecycle, from data ingestion through model development, deployment, and monitoring. This enables companies to deploy trusted, responsible, and accountable AI.
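
To make the idea of lifecycle governance concrete, here is a minimal, purely illustrative sketch in Python of an audit trail that records a governance event at each stage, from data ingestion through deployment. This is not the watsonx API; the names (GovernanceEvent, ModelAuditTrail) and the stage and check labels are hypothetical assumptions chosen for illustration.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class GovernanceEvent:
        """One logged governance action at a single lifecycle stage (hypothetical schema)."""
        stage: str          # e.g. "data_ingestion", "training", "deployment"
        actor: str          # the person or team responsible for this stage
        checks: list[str]   # risk and quality checks run at this stage
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    class ModelAuditTrail:
        """Append-only record covering the full AI lifecycle of one model."""

        REQUIRED_STAGES = ("data_ingestion", "training", "evaluation", "deployment")

        def __init__(self, model_name: str):
            self.model_name = model_name
            self.events: list[GovernanceEvent] = []

        def record(self, stage: str, actor: str, checks: list[str]) -> None:
            self.events.append(GovernanceEvent(stage, actor, checks))

        def ready_for_monitoring(self) -> bool:
            # Production monitoring begins only after every required stage
            # has at least one recorded governance event.
            completed = {event.stage for event in self.events}
            return all(stage in completed for stage in self.REQUIRED_STAGES)

    trail = ModelAuditTrail("credit-risk-scorer")
    trail.record("data_ingestion", "data-eng", ["pii_scan", "provenance_check"])
    trail.record("training", "ml-team", ["bias_audit"])
    trail.record("evaluation", "risk-office", ["accuracy_threshold", "fairness_metrics"])
    trail.record("deployment", "ml-ops", ["human_signoff"])
    print(trail.ready_for_monitoring())  # True

The design mirrors the policy argument above: accountability attaches to the use, the stage, and the responsible actor, not to the underlying algorithm.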

Balancing innovation with responsibility and trust

An AI rising tide can and should lift all boats. At IBM, we urge governments to adopt and implement consistent, smart regulation that allows business and society to reap the benefits of AI while addressing its potential for misuse and risk. IBM is ready and committed to playing a pivotal role in bringing the power of responsible AI to the world. We look forward to doing our part in building an AI future we can all trust.
