After You Die, You Could Be Resurrected as a Chatbot. That’s a Problem.

No one knows where we go when we die. Or for that matter, what happens to our most intimate thoughts, dreams, and desires when the nerve cells in our brains fire for the very last time. But Microsoft may have some ideas.

In December, the U.S. Patent and Trademark Office (USPTO) granted a patent to Microsoft that outlines a process to create a conversational chatbot of a specific person using their social data. Specifically, Microsoft could use images, voice data, social media posts, text messages, and written letters to “create or modify a special index in the theme of the specific person’s personality.”

That sounds pretty benign, but in an eerie twist, the patent states that the chatbot could potentially be inspired by friends or family members who are already dead. And the system could even generate a 2D or 3D simulacrum of the person.

Naturally, this opens a whole can of worms, explains Irina Raicu, the director of the internet ethics program at Santa Clara University’s Markkula Center for Applied Ethics. “If you try to create a very good chatbot for someone who died…you could put words into people’s mouths that they never said,” she notes.

Taking a person’s tweets and Facebook posts, then creating an index—or a sort of catalogue for the data to help a computer search for the right answers to a query—does not always lead to organic or honest responses.
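To make that "index" idea concrete, here is a toy sketch of how a system might catalogue someone's posts and look up a reply by word overlap. Everything here (the sample posts, the scoring, the function names) is hypothetical and purely illustrative, not drawn from Microsoft's patent; it simply shows why this kind of retrieval can produce answers that sound plausible without being something the person would actually say.

```python
# Hypothetical sketch: a toy "index" over a person's posts, loosely in the
# spirit of cataloguing social data for lookup. Not the patent's method.
from collections import defaultdict

posts = [
    "I love hiking on weekends",
    "Weekends are for coffee and books",
    "Nothing beats a good book and coffee",
]

# Build an inverted index: map each word to the posts that contain it.
index = defaultdict(set)
for i, post in enumerate(posts):
    for word in post.lower().split():
        index[word].add(i)

def reply(query):
    """Return the stored post sharing the most words with the query."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for i in index.get(word, ()):
            scores[i] += 1
    if not scores:
        return None
    return posts[max(scores, key=scores.get)]

print(reply("what do you do on weekends"))
```

Because the lookup just matches words, the "chatbot" parrots whichever old post overlaps the question best, with no sense of context, sincerity, or sarcasm, which is exactly the kind of inorganic response Raicu warns about.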

“If this becomes accepted, I think this could have a chilling effect on human communications,” Raicu says. “If I’m worried that anything I’m going to say could be used in a weird avatar of myself, I’ll have to second-guess everything.” Using sarcasm on the internet, for instance? You might not want to anymore, for fear that your comments could be taken in earnest and built into a chatbot dialogue, potentially harming your reputation postmortem.

This isn’t the first time an intelligent chatbot has been created as a way to bring back the dead.

In 2015, technologist Eugenia Kuyda’s friend, Roman, died in a sudden and tragic car accident in Moscow. She gathered text message conversations between Roman and many of his friends and assembled a chatbot that could serve as a sort of analogue for him. In 2017, she used that experience to launch Replika, an AI chatbot service that allows anyone to make their own virtual friend.

Regardless of any positive effects, the approach raises an issue: while these chatbots may be beneficial to the person who is grieving, they may also exploit the dead, Raicu says.


🤖 🧠 STRANGER THAN (SCIENCE) FICTION

Black Mirror, a popular sci-fi anthology on Netflix, seemingly prophesied this technology back in 2013 with an episode titled “Be Right Back.” In it, a woman signs up for a chat service that lets her communicate with an AI version of her late partner, who’d died in a car crash. We won’t spoil it for you, but suffice to say, things get weird.

And then there’s the 2013 film Her, wherein Joaquin Phoenix stars as a lonely writer who dates Samantha, an intelligent operating system voiced by Scarlett Johansson, with troubling results. While Samantha is not a chatbot per se, the film still illustrates the psychological trauma that can befall those who lean too heavily on technology.


In the case of the Microsoft patent, Raicu says that an individual has a constitutional right to privacy, so this sort of chatbot is already a violation of a deceased person’s autonomy—they have no say over which bits of their social data go into the final chatbot, for instance. And creating a chatbot modeled on a person who never consented in the first place feels unfair, because they aren’t part of the decision-making process.

On the one hand, Raicu says, much of this brand of innovation is driven by people who feel genuine empathy and want to help others through the loss of a loved one. But at the same time, these technologists must be astute in their designs and consider the negative implications.

It may seem dystopian, and perhaps a bit paranoid, but the only surefire way to protect your humanity from these kinds of programs would be to set up a section in your living will regarding your personal data, says Alexander Hauptmann, a research professor at Carnegie Mellon University’s Language Technologies Institute.

“You could imagine that people might be able to put stuff in their will about how their archive of data should be used or disposed of,” he says. “But then the other question is, who is actually going to sue [the person who built the chatbot]? Maybe some other family member who knows what the will said and objects to it.”

For what it’s worth, we asked Microsoft about the patent. While they didn’t tell us much, they did direct us to a January 22 tweet from Tim O’Brien, general manager of AI Programs at Microsoft, in which he confirmed that there are no active plans at the company to use this chatbot patent.

“But if I ever get a job writing for Black Mirror, I’ll know to go to the USPTO website for story ideas,” he tweeted. Touché.


