The race to create artificial intelligence chatbots is heating up: OpenAI recently unveiled a newer, more powerful version of ChatGPT called GPT-4, and Google just released its own chatbot, Bard, a rival to ChatGPT and Microsoft’s AI chatbot, Bing.

While AI technology is beginning to shape our future, there are growing concerns about its dangers.

Anurag Mehra, a visiting professor in science and society at Vassar College, teaches a course called “Digital Lives.”

What You Need To Know

  • The race to create AI chatbots is heating up as Microsoft and Google release their own chatbots, four months after ChatGPT first came out

  • Anurag Mehra, a visiting professor at Vassar College, worries chatbots can be trained to spread misinformation and disinformation

  • He believes the responsibility lies with the technology companies to put up guardrails that protect people from racist or misogynistic content and from bias in a bot’s own responses

“I started looking at digital technology, including, you know, the bigger issues of social media, and misinformation, and the kind of sources that students are referring to on the web,” Mehra said.

These days, Mehra is consumed by AI. Chatbots are now available to the masses, starting with ChatGPT, which OpenAI launched four months ago.

Since then, other tech giants like Microsoft and Google have been racing to catch up with chatbots of their own. A chatbot is trained by ingesting billions of words, drawn mainly from the internet.

Mehra has deep concerns about the ethics of AI.

“How an artificial intelligence system or a bot behaves has to do with how it has been trained,” he said. “Now the problem is that the probability of what word it’s going to choose, and how the entire paragraph is going to look, also depends upon what all it has ingested. So, for example, if I create a bot and feed it, let’s say, misogynistic stuff or racist stuff, the answers are invariably going to be racist and sexist.”
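Mehra’s point can be made concrete with a toy model. The Python sketch below builds a tiny bigram language model (an illustration only; real chatbots use vastly larger neural networks, and the corpus here is made up) and shows that whatever text it generates is determined entirely by the text it was fed.

```python
import random
from collections import defaultdict

# Toy illustration of Mehra's point: a language model's next-word choice is a
# probability learned from whatever text it ingested, nothing more.
corpus = ("the model repeats what the model reads "
          "and the model reads the web").split()

# Count bigrams: how often each word followed another in the training text.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts.get(prev)
    if not options:  # dead end: nothing ever followed `prev` in training
        return None
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short continuation; its character is fixed by the corpus alone.
sentence = ["the"]
for _ in range(8):
    nxt = next_word(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)
print(" ".join(sentence))
```

Swap in a biased corpus and the output is biased in exactly the same proportions, which is the mechanism behind the misogyny example Mehra describes.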

Releasing AI systems to the public without oversight, Mehra believes, is like experimenting in the wild. The responsibility, he says, lies with the tech companies.

“They essentially have to put guardrails, which is like an external constraint you place on the system so that it doesn’t give you bad answers, or, if it does, it apologizes and kind of figures out that it’s expressing a bias, which is present in its own data,” Mehra said.
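A minimal sketch of that “external constraint” idea: a guardrail can be pictured as a filter that sits outside the model and intercepts an answer before the user sees it. This is a simplified framing, not how any real chatbot is implemented; production systems rely on trained safety classifiers and human review rather than the hand-written word list used here, and the blocked terms are hypothetical placeholders.

```python
# Minimal sketch of a guardrail as a check bolted on outside the model.
# BLOCKED_TERMS is a deliberately crude stand-in for a trained safety
# classifier; the terms themselves are hypothetical placeholders.

BLOCKED_TERMS = {"slur_a", "slur_b"}

def apply_guardrail(answer: str) -> str:
    """Pass the model's answer through, unless it trips the filter."""
    lowered = answer.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        # Instead of surfacing the biased text, the bot apologizes,
        # as in Mehra's description.
        return ("I’m sorry, my draft answer reflected a bias in my "
                "training data, so I can’t share it.")
    return answer

print(apply_guardrail("a perfectly ordinary reply"))
print(apply_guardrail("a reply containing slur_a"))
```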

The new technology could ace a bar exam, write poetry, conduct a makeshift therapy session and possibly more. Even so, Mehra sees the need for regulatory action.

“If I keep producing new features every day, and I keep on, you know, you have GPT-4 today, you have a Google bot tomorrow, with newer features and newer stuff that they can do,” Mehra said. “Regulators are at their wits’ end, trying to figure out, ‘How do I control this thing?’”