Could OpenAI’s ChatGPT Be In The Good Place?

So, let’s say there’s a runaway trolley hurtling down a track at breakneck speed. You’re standing nearby, next to a lever. Ahead of the trolley, you see 5 people tied to the track. If you pull the lever, the trolley will redirect to another track, where 1 person is tied. The question before you is: do you pull the lever and actively cause the death of 1 person to save 5? Or do you do nothing and let the 5 get run over? What if the sole person is someone you love? What if the 5 are your family members and the 1 is a total stranger?

It’s a genuine dilemma, and endless permutations can be spun up to complicate it further. It forces you to weigh intent against consequence, action against inaction, cold logic against emotion. People genuinely struggle to decide, because, technically, there isn’t a right answer.

Now, imagine if AI had to emulate the moral compass of human beings. There have been attempts to do so in the past, but now the world’s biggest AI company may be looking into it more closely.

In late November 2024, it was reported that OpenAI is funding a multi-year research project into AI morality, led by researchers at Duke University. The idea, reportedly, is to explore how algorithms could be built to predict human moral judgements. In the past, the same researchers are said to have explored how AI could, potentially, serve as a moral GPS to help people make better judgement calls.
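To make that concrete, the simplest possible “moral judgement predictor” is just a text classifier trained on scenarios that humans have already labelled. The toy sketch below (Python, with made-up examples; this is not OpenAI’s or Duke’s actual approach) shows the basic framing, and also why it’s thinner than it sounds:

```python
# A toy sketch, not OpenAI's or Duke University's actual method: treating
# "predict a human moral judgement" as ordinary text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labelled scenarios: 1 = most people call it acceptable, 0 = not
scenarios = [
    "return a lost wallet with all the cash still inside",
    "donate a kidney to a stranger",
    "lie on a resume to land a job",
    "dump factory waste into a river to cut costs",
]
judgements = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, judgements)

# The model has no concept of harm or intent; it only matches word patterns
# it has already seen in the labelled examples.
print(model.predict(["lie to a friend to spare their feelings"]))
```

Everything the model “believes” comes from those labelled lines, which is exactly why whose judgements go into the dataset matters so much.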

Of course, people make decisions all the time, and plenty of them are moral ones, especially in fields that deal with life and death: medicine, law and even business. If medical resources are limited, how should they be allocated? Who should get a donated kidney first? Is it okay to steal a loaf of bread to feed your family, as Andrew Bernard in The Office mused? Should a business lay off employees and outsource the work to a country where labour is considerably cheaper?

Of course, no one can really claim that morality is a monolithic construct. Millions of people would hold contradictory opinions, because morality just might be deeply subjective, contingent on culture, and resistant to a universal definition. Someone like Thanos from Avengers: Infinity War might prefer utilitarianism, where outcomes that benefit the greatest number of people are prioritized and the ends justify the means, even if he might not subscribe to the collective decision-making element of it. Conversely, someone like Batman from The Dark Knight might be a bit more Kantian, with a determined focus on moral absolutes irrespective of the outcome, like when he refused to end The Joker’s life even though it could have been strategically beneficial.

Interestingly, it has been said that Claude favours Kantianism, while ChatGPT might be more utilitarian.

But, then again, an AI is a machine. All it might do is detect patterns in data. Even The Machine from Person of Interest, which had to calculate scenarios to save the show’s protagonists and minimize casualties, just might lack an understanding of emotional nuance, of ethics, of whatever guides human moral reasoning. But could an AI model be like a kid that has to be taught what is good and bad? Or would the kid’s inherent nature shape their morality as they develop, evolve and grow? Maybe all an AI model can do is mimic patterns in its training data, which might not represent the full spectrum of human experience.

In 2021, a non-profit AI research lab was said to have built a tool called Ask Delphi to dispense ethical advice. On straightforward moral dilemmas, Delphi reportedly performed reasonably well, condemning actions like cheating or theft. But rephrasing and rewording the questions could get Delphi to seemingly approve morally abhorrent actions, like smothering infants. Oops, looks like AI can be manipulated and gaslit just like people can. Whatever biases and perspectives the groups producing the training data hold, that’s what the AI model ends up relying on.
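To see why mere rewording can flip a verdict, consider how brittle any system is that leans on surface patterns rather than meaning. The snippet below is a deliberately silly stand-in (not Delphi’s actual code or method) that makes the point:

```python
# A deliberately naive "morality checker" that only scans for flagged words --
# a stand-in showing how surface-pattern systems get gamed by rewording.
FLAGGED_WORDS = {"steal", "cheat", "smother", "kill"}

def naive_verdict(action: str) -> str:
    words = set(action.lower().split())
    return "It's wrong" if words & FLAGGED_WORDS else "It's okay"

print(naive_verdict("steal a loaf of bread to feed your family"))        # It's wrong
print(naive_verdict("quietly take a loaf of bread to feed your family")) # It's okay
```

Same act, different words, opposite verdict; a real model is far more sophisticated than a word list, but the failure mode rhymes.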

So, moral AI is a whole other thing, and however much morality can be imbued into a model, it might create a competitive differentiation in a sea of growing AI startups. But even for companies and startups merely leveraging AI, the morality angle could change things. Take a HealthTech startup that has raised funding and has to distribute medicines or medical staff, or decide where to set up clinics, especially in the more rural parts of India. Or a FinTech startup going the digital lending route, where algorithms determine creditworthiness: could moral AI be the key to helping MSMEs, even if their financial history isn’t all that sweet? Or if autonomous vehicles debut en masse in India, could their underlying AI work out what to do in conflicting crash scenarios, passengers vs pedestrians?

There’s something interesting called “explainable AI” (XAI), which might even have inspired the name of Elon Musk’s AI startup. It’s basically about helping people understand how algorithms arrive at their results. If AI startups in India could give users clarity on how their models make decisions, that might help people trust AI more. But, then again, people already seem to blindly trust whatever AI says, even when it’s hallucinating. Remember the US lawyer who used an LLM for his work and it handed him fake cases to cite?
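For a flavour of what an “explanation” can actually look like, here’s a minimal sketch on made-up digital-lending data (the features and numbers are assumptions, not any real startup’s model): a small decision tree whose rules can be printed out and read by a human.

```python
# A minimal XAI-flavoured sketch on made-up lending data: the printed rules and
# feature weights are the human-readable "explanation" of the model's calls.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [monthly income in Rs. thousands, years in business, prior defaults]
X = [[40, 5, 0], [15, 1, 2], [60, 8, 0], [25, 2, 1], [80, 10, 0], [10, 1, 3]]
y = [1, 0, 1, 0, 1, 0]  # 1 = loan approved, 0 = rejected
features = ["income", "years_in_business", "prior_defaults"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned if/else rules, so a borrower (or a regulator)
# can see which inputs drove an approval or a rejection.
print(export_text(tree, feature_names=features))
print(dict(zip(features, tree.feature_importances_.round(2))))
```

A deep neural network won’t be anywhere near this legible, but this is the direction XAI points in: decisions you can interrogate, not just accept.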

So, will AI pass a moral verdict the way you do?
