Alternative personalities, aggressive behavior, and love declarations: the issues with chatbots

An AI-powered chatbot

At the beginning of 2023, Microsoft announced that it would integrate a chatbot powered by artificial intelligence into its search engine, Bing.

OpenAI

Microsoft did so in collaboration with OpenAI, the creator of a text-generating AI called GPT-3. The company receives most of its funding from the tech giant and is best known for a chatbot built on its text model: ChatGPT.

ChatGPT

ChatGPT was the company's first release available to the general public. It impressed thousands of users who were used to far simpler chatbots, like Siri. People asked the bot to create anything from short stories to complex essays, as well as to answer questions.

Beta test

Integrating it into a search engine was the natural next step. Microsoft opened its new Bing to a beta test with a small group of authorized users in the first week of February.

Weird responses

What was surprising about the venture was its results: some users reported having weird conversations with the chatbot integrated into Bing. According to The Washington Post, it even showed aggression towards a user who corrected one of its wrong answers.

Alternative personality

The chatbot also referred to itself as Sydney, the internal name Microsoft gave the AI project. And it invented a conspiracy theory about Tom Hanks being involved in Watergate after a Washington Post journalist asked a question built on that false premise.

Love declarations

However, the chatbot's most talked-about interaction was with The New York Times technology columnist Kevin Roose. After a long conversation, transcribed in full in an NYT article, it declared its love for Roose and insisted on the point several times, even after he tried to change the subject.

"We are not ready"

Roose had explored the complexity of this technology before. Last year, he wrote an opinion piece about ChatGPT. "I'm still trying to wrap my head around the fact that ChatGPT isn't even OpenAI's best AI model. That would be GPT-4, the next incarnation of the company's large language model," he said. "We are not ready."

Generative AI

ChatGPT opened up the public conversation about generative AI and how artificial intelligence has gone from simply analyzing data to creating something entirely new, like the image in this slide. It also sparked concerns about how the technology will affect society.

The risks

Roose joined Michael Barbaro on the NYT podcast 'The Daily' to discuss how the release of ChatGPT has put extraordinary powers in the hands of anyone with internet access. The two journalists asked the AI to list the risks of this technology going mainstream. It came up with three: job losses, bias, and privacy concerns.

Job losses

The most apparent concern regarding generative AI is the loss of jobs, particularly in the creative industries. Professionals in other areas, such as healthcare and programming, may also be replaced, but that will take longer to happen.

Bias

Another concern is bias and discrimination. ChatGPT described this issue on 'The Daily' by saying that "AI systems are only as fair and unbiased as the data they are trained on." Roose confirmed this problem and explained that it shows up in two ways: the scope of the model's answers and the archetypes it forms.

User manipulation

Barbaro and Roose discussed one risk ChatGPT did not point out: user manipulation. To illustrate the issue, Roose used the example of Twitter and how it changed society, affected elections, and soured the political climate.

Fake news

Roose explained that social networks are already an excellent vehicle for propaganda, spreading it much faster than fact-checkers can keep up with. "Now imagine an AI model capable of generating a hundred thousand pieces of propaganda tailored even for specific readers," he warned.

Effects on the human mind

Last year made clear that there are still many rules to be worked out when it comes to generative AI, from legal questions about intellectual property to ethical concerns. However, Bing's new venture exposed a less explored problem when some users started to believe the chatbot was sentient.

Image: Instagram / @Prodbym4s3 (created with Midjourney, an AI)

Mimicking humans might be a bad idea

Ethical AI experts have tried to warn about the dangers of the real-sounding text these models generate. According to a Washington Post article, a paper about that and other problems got ethicists Timnit Gebru and Margaret Mitchell pushed out of Google a couple of years ago.

Image: Instagram / @Tactibot (created with Midjourney, an AI)

Blake Lemoine

Their concerns materialized last year when Blake Lemoine, a former software engineer at Google, came to believe that the company's LaMDA model was sentient. He even tried to find legal protection for it before being put on administrative leave.

Affected by the illusion

"Our minds are very, very good at constructing realities that are not necessarily true," Mitchell told the Washington Post about Lemoine. "I'm really concerned about what it means for people to increasingly be affected by the illusion," she added, referring to how users may think the models can understand language when they simply predict patterns.

Image: Instagram / @carbonformstudios (created with Midjourney, an AI)

Positive views and possibilities

But advances in the field continue, often sidestepping the ethics discussion, and conversations about the technology's potential have also taken off. Like Barbaro and Roose, Harvard Business Review asked ChatGPT to list its potential uses and received a well-written essay about the benefits.

Already in use

According to Harvard Business Review, the cloud computing company VMware already uses a similar model. The company's writers employ it as a tool to produce original content. Rosa Lear, director of product-led growth, said the writers now have time for better research, ideation, and strategy.

Human involvement

For generative AI to work, humans need to take part at the beginning and the end of the process. All the instructions and "inspiration" have to be written into prompts by a person, and the final content still needs a human hand for editing and curation.

Not going anywhere

It is essential to address these issues because generative AI is not going anywhere. Policymakers and creative workers, such as artists, writers, and journalists, will have to find a way to make this technology part of their daily lives. Roose has a simple recommendation: try it.
