AI alignment can’t be “solved” as OpenAI claims

OpenAI is taking the alignment problem seriously, as is evident from a blog post published on July 5: the company will spend 20% of its compute over the next four years on alignment.

July 10, 2023 06:11 pm | Updated July 11, 2023 01:11 pm IST

OpenAI will be spending 20% of its compute over the next four years on alignment.  | Photo Credit: Reuters

After releasing a flurry of wave-making AI tools like ChatGPT -- the prevailing gold standard for chatbots -- and the text-to-image generator DALL·E, OpenAI has finally turned its attention to alignment. On July 5, the company announced it was setting up a new Superalignment team co-led by co-founder and Chief Scientist Ilya Sutskever and Head of Alignment Jan Leike. OpenAI is taking the problem seriously: per the blog post announcing the team, the company will spend 20% of its compute over the next four years on alignment.

“We are assembling a team of top machine learning researchers and engineers to work on this problem,” the blog stated. Sutskever, who has shifted his core research focus to alignment, said the company was also looking to hire engineers for the team. Calling alignment “critical to achieve their mission,” the team intends to “solve the core technical challenges of superintelligence alignment in four years.” OpenAI added a caveat, admitting that while “this is an incredibly ambitious goal and we’re not guaranteed to succeed,” the company is “optimistic that a focused, concerted effort can solve this problem.”

Why is AI alignment needed?

Alignment has become an urgent matter as AI advances in massive leaps. If we have reached a point where a machine can write, draw, and know enough to pass the bar exam, all at or beyond human capability, it may be time to tighten the bolts on AI safety. Just as a steering wheel controls a car, alignment ensures AI models behave according to human intent and don’t go “rogue.” But as it turns out, aligning AI exactly with how we would like it to function is a far trickier proposition than driving a car.
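
To make “behaving according to human intent” concrete: one standard technique for this is reinforcement learning from human feedback (RLHF), in which a reward model is first trained on pairs of responses that humans have ranked. The sketch below is a minimal illustration, not anyone’s production code, of the pairwise preference loss such a reward model is typically trained with; the function name and the dummy reward values are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise (Bradley-Terry style) preference loss for a reward model:
    the human-preferred response should score higher than the rejected one.
    Both arguments are per-example scalar rewards of shape (batch,)."""
    # -log sigmoid(r_chosen - r_rejected) is minimised when the reward
    # model consistently ranks the preferred response above the rejected one.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Dummy rewards for a batch of four human comparisons, for illustration only.
chosen = torch.tensor([1.2, 0.3, 2.1, 0.8])
rejected = torch.tensor([0.9, 0.5, 1.0, -0.2])
print(preference_loss(chosen, rejected))  # a scalar loss to backpropagate
```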

“I would say that the most significant challenges are related to identifying which values we wish to align AI models to, their possible conflicts, and particularly how to address their evolvement over time -- because society is not a static entity but rather a very changing one. We realize several factors and characteristics if we try to unpack these issues,” explains Giada Pistilli, Principal Ethicist at the open-source AI platform Hugging Face.

Also, humans have varied values, making alignment a moving target. “Once the values are identified, these vary in interpretation, application, and context. Namely, not all humans can easily and accurately answer the question, ‘what are your core values?’ These are often implicit and hidden in our actions, vary from one social and cultural context to another, and most importantly, change through personal experiences. For example, we know in philosophy that the value of equality has a very different interpretation according to different schools of thought, and has a way of being applied that also varies depending on the socio-political context in which it is associated. Given all this, it seems to me to be very difficult to universally agree on the ‘right’ values to ascribe to an AI model, ones that make consensus and apply to all users worldwide,” she continued.

But as complex as alignment is, ignoring the risks would mean a world gone awry. A self-driving car could hit a pedestrian, an AI model sifting through resumes might skew against one gender (in 2018, Amazon scrapped an AI recruiting tool found to be biased against women), or a chatbot writing ad briefs could spew more falsehood than truth.
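
Misalignment of this kind usually cannot be seen in any single prediction; it has to be measured across groups. Below is a minimal, hypothetical sketch of the sort of disparity check that can flag a biased screening model; the applicant data, the shortlisting threshold, and the selection-rate ratio test are all illustrative assumptions, not details of Amazon’s system.

```python
from statistics import mean

# Hypothetical screening scores (0-1) for illustration only; in practice
# these would come from the model being audited.
applicants = [
    {"gender": "female", "score": 0.41},
    {"gender": "female", "score": 0.38},
    {"gender": "male",   "score": 0.72},
    {"gender": "male",   "score": 0.65},
]

THRESHOLD = 0.5  # assumed shortlisting cutoff

def selection_rate(group: str) -> float:
    """Fraction of applicants in the group whose score clears the cutoff."""
    scores = [a["score"] for a in applicants if a["gender"] == group]
    return mean(1.0 if s >= THRESHOLD else 0.0 for s in scores)

# Disparate-impact ratio: compare one group's selection rate against
# another's; values far below 1.0 suggest the model is systematically
# filtering that group out.
ratio = selection_rate("female") / selection_rate("male")
print(f"selection-rate ratio (female/male): {ratio:.2f}")
```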

Pistilli says that understanding the goals of an AI model should come first, because alignment follows once that clarity exists. “I think a lot depends on the limits that we set for the model and especially of the use that is offered to its users. The wider the range, the greater the potential risks. If we can be in control of these factors, the complexity of the alignment problem could be reduced,” she stated.

Alignment can’t be “solved”

But while there is consensus on the importance of alignment, a host of scientists in the AI community have dismissed OpenAI’s approach to tackling it. Meta’s Chief AI Scientist Yann LeCun disputed that the alignment problem is solvable at all, let alone within four years as OpenAI claims. “One doesn’t just ‘solve’ the safety problem for turbojets, cars, rockets, or human societies, either. Engineering-for-reliability is always a process of continuous & iterative refinement,” the French scientist tweeted.

Pistilli echoed the sentiment. “I don’t think there’s anything to solve, as that would already imply a deep understanding of the problem. We can’t ‘solve’ the complexity of human values, their malleability, and sometimes even their inconsistencies. I fear we’ll fail if the goal is to mimic or summarize all this in one or more AI models. Trying to suggest engineering solutions to social problems has never worked because we human beings are richer than a function table and some patterns -- fortunately!” she explained.

While she applauds OpenAI’s efforts, Pistilli worries that this strategy “reduces to the extreme (and absurd) something that has never sought to be solved but only analysed. Perhaps we will realize that the solution to the problem is more mundane and less ambitious than what is being attempted. We’ll see.”
