Crafting safe Generative AI systems

Regulation is necessary but not sufficient; a broader approach should be considered

August 21, 2023 12:47 am | Updated 12:49 am IST

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference in Shanghai on July 6, 2023 | Photo Credit: Reuters

The Generative AI revolution is upon us and will potentially unleash a wave of technical and social change. Large Language Models (LLMs) alone are predicted to add $2.6 trillion-$4.4 trillion annually to the global economy. As one example of their potential impact, consider the ongoing pilot of the Jugalbandi chatbot (powered by ChatGPT) in rural India. Jugalbandi promises to serve as a universal translator, accepting queries in local languages, retrieving answers from English-language sources, and presenting them back to users in their native language. This service alone could democratise access to information and improve the economic well-being of millions of people. And it is only one of hundreds of new services being developed.

Concerns

However, alongside positive developments, this AI revolution also brings risks. Most pressingly, AI-powered tools are enabling bad actors to create artificial entities that are indistinguishable from humans online (via speech, text, and video). Bad actors can misrepresent themselves or others and potentially launch a barrage of variations on old harms such as misinformation and disinformation, security hacks, fraud, hate speech, and shaming.

In the U.S., an AI-generated image of the Pentagon burning spooked equity markets. Posts by fake Twitter and Instagram accounts promulgating strong political views have been reposted millions of times, contributing to polarised politics online. Cloned AI voices have been used to circumvent bank customer-authentication measures. An individual in Belgium was allegedly driven to suicide by his conversations with an LLM. And recent elections in Turkey were marred by AI-generated deepfakes. Over one billion voters will head to the polls across the U.S., India, the EU, the U.K., and Indonesia in the next two years, and the risk of bad actors harnessing Generative AI for misinformation and election influence is steadily growing.

Concerns about the safety of Generative AI deployment, then, are rightly at the top of policymakers’ agenda. Using AI tools to misrepresent people or create fake information is at the heart of the safety debate. Unfortunately, most of the proposals under discussion do not seem promising. A common regulatory proposal is to require all digital assistants (aka ‘bots’) to self-identify as such, and to criminalise fake media. While both measures could be useful in creating accountability, they are unlikely to satisfactorily address the challenge. Established companies may ensure that their AI bots self-identify and publish only valid information; bad actors, however, will simply disregard the rule, capitalising on the trust created by compliant companies. We need a more conservative assurance paradigm, whereby all digital entities are assumed to be AI bots or fraudulent businesses unless proven otherwise.

Identity assurance framework

Regulation is necessary but not sufficient; a broader approach should be considered to improve Internet safety and integrity. Based on our recent research at the Harvard Kennedy School, we propose an identity assurance framework. Identity assurance establishes trust between interacting parties by verifying the authenticity of the entities involved, enabling them to have confidence in each other’s claimed identities. The key principles of this framework are that it should be open to the numerous credential types emerging around the world, not be specific to any single technology or standard, and yet provide privacy protections. Digital wallets are particularly important as they enable selective disclosure and protect users against government or corporate surveillance. This identity assurance framework would extend to humans, bots, and businesses.
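
To make selective disclosure concrete, here is a minimal Python sketch of the pattern digital wallets can support: an issuer commits to each attribute with a salted hash, and the holder later reveals only the attribute a verifier needs. All names and values here are hypothetical illustrations, and a production credential would also carry the issuer’s digital signature over the hashes (omitted for brevity).

```python
import hashlib
import secrets

# Issuer side: commit to each attribute with a salted hash so the holder
# can later reveal attributes one at a time (selective disclosure).
def commit(attribute: str, value: str):
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{salt}:{attribute}:{value}".encode()).hexdigest()
    return digest, {"salt": salt, "attribute": attribute, "value": value}

digests, disclosures = {}, {}
for attr, val in {"name": "A. Citizen", "age_over_18": "true"}.items():
    digests[attr], disclosures[attr] = commit(attr, val)

# The public credential holds only the hashes (plus, in practice,
# the issuer's signature over them).
credential = {"issuer": "example-authority", "claims": digests}

# Holder reveals only 'age_over_18'; the verifier recomputes the hash
# and checks it against the credential, learning nothing else.
revealed = disclosures["age_over_18"]
recomputed = hashlib.sha256(
    f"{revealed['salt']}:{revealed['attribute']}:{revealed['value']}".encode()
).hexdigest()
assert recomputed == credential["claims"]["age_over_18"]
print("Verified: holder is over 18; other attributes stay private.")
```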

Today, more than 50 countries have initiatives underway to develop or issue digital identity credentials which will form the foundation of this identity assurance framework. India, with Aadhaar, is in a leadership position to establish online identity assurance safeguards. The EU is now establishing a new identity standard which will also support online identity assurance, but full user adoption will likely take the rest of this decade.

Identity assurance is also tied to the question of information integrity. Information integrity ensures that the content being accessed is authentic and was published by the person or organisation it claims to come from. This credibility rests on three pillars. The first is source validation: enabling verification that information comes from a known source, publisher, or individual. The second is content integrity: enabling verification that the information has not been tampered with. The third is information validity; this is contentious but can be realised with automated fact-checking and crowdsourced reviews.
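
The first two pillars can be grounded in standard public-key signatures. Below is an illustrative sketch, not a prescribed toolchain, using the widely available Python `cryptography` package: a publisher signs content with a private key, and any reader can verify both the source and the integrity of the content against the publisher’s public key.

```python
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a key pair and sign the article bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"Polling stations open at 7 a.m. on Friday."
signature = private_key.sign(article)

# Reader side: verification succeeds only if the content is unmodified
# and was really signed by the holder of the publisher's key.
try:
    public_key.verify(signature, article)
    print("Source and integrity verified.")
except InvalidSignature:
    print("Rejected: altered content or wrong publisher.")

# The same check rejects any tampered copy.
try:
    public_key.verify(signature, article + b" (edited)")
except InvalidSignature:
    print("Tampered copy rejected.")
```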

Neither identity assurance nor information integrity is easy to attain. Identity assurance touches on well-known tensions: privacy versus surveillance, civil liberty versus security, anonymity versus accountability. Information integrity raises the spectre of censorship and the timeless question of ‘who defines the truth?’ As we consider rebalancing these two pillars online, we must recognise that each nation’s values differ and that their appetites for risk will vary. But these differences are manageable within a larger framework.

It is the responsibility of global leaders to guarantee the secure and safe deployment of Generative AI. We need to reimagine our safety assurance paradigm and build a trust framework to ensure global identity assurance and information integrity. Beyond regulation, we need to engineer our online safety.

John Fiske is a Senior Fellow at the Mossavar-Rahmani Center for Business and Government at Harvard Kennedy School. Satwik Mishra is Vice President (Content) at the Centre for Trustworthy Technology, a WEF Fourth Industrial Revolution Centre, and a Master in Public Policy graduate from Harvard Kennedy School. Views are personal
