
A.I. Can Bring Back the Dead

Muhammad Aurangzeb Ahmad created a simulation of his deceased father—and has been wrestling with the consequences ever since.

March 18, 2024 | Feature | By Agueda Pacheco Flores

Grief Bot

I met Muhammad Aurangzeb Ahmad on a rainy day in downtown Bellevue, a city just barely visible across Lake Washington, east of Seattle. He was going to introduce me to his father, though the meeting would be unusual: Ahmad’s father had died a decade ago. All Ahmad had with him was a laptop.

I first learned about grief bots, also known as ghost bots or grief tech (Ahmad’s father was now one), when I read Jason Fagone’s piece “The Jessica Simulation” in the San Francisco Chronicle. The story centers on Joshua Barbeau, who, unable to sleep one night, reanimates his dead girlfriend with the help of Project December, an A.I. chatbot that can simulate anyone given a bit of context and example text. Reading it, I felt all sorts of emotions: morbid curiosity, bewilderment, confusion, sadness, sympathy, fear. I couldn’t wrap my brain around it at first. “Why would you do that to yourself?” I thought. I could never do that.

Since then I’ve learned of multiple start-ups that promise to bring our loved ones back from the dead. In South Korea, a woman reunited with her dead daughter with the help of A.I. and virtual reality. In China, an engineer created a simulation of his grandfather using videos, photos, and writings. Vice documented one woman’s first encounter with the A.I. of her dead husband (a painful watch, not least because of her cathartic reaction to being called “stupid”). All of this points to one thing: this technology isn’t going away any time soon.

Ahmad is giving a talk, “When Your Grandpa Is a Bot: A.I., Death, and Digital Doppelgangers,” as part of Humanities Washington’s Speakers Bureau, and he dreamed up his bot long before A.I. entered the broader lexicon. A professor at the University of Washington’s Bothell campus and a longtime data scientist, Ahmad has been researching machine learning and artificial intelligence for a decade, including earlier work modeling human behavior in online games.

As his father’s death became imminent, it occurred to Ahmad that his future children would never meet their grandfather.

“In my mind that was just a big loss,” he said. “I’d seen my father interact with his other grandchildren. It was great. When I was growing up, all of my grandparents passed away before I was five. So what came to my mind was, ‘If my father cannot interact with [my children], then maybe they can interact with him.’”

Unlike other bots currently out in the world, Grandpa Bot is not built on a general-purpose model like ChatGPT. The bot lives on Ahmad’s own hard drive and is limited to the recorded conversations, letters, and personal memories Ahmad has of his father: about 2,000 conversations of varying length, by his estimate. This means the bot can’t get creative by, say, pulling data from the internet to improvise an answer. For Ahmad, this limit has a point.

“It should be limited, because I think for this context, fidelity is extremely important,” he said.

“What do you mean by fidelity?” I asked.

“The model should sound like the person that it’s modeling.”

For example, Ahmad said that if he were to ask his bot about controversies surrounding quantum chromodynamics, it wouldn’t compute, because that’s simply not something his father would know about. The bot also doesn’t know about events that happened after his death. But this is where Ahmad strays from strict fidelity to make the bot functional for his kids, now five and eight.

“To make the experience more ‘real’ for the children, I have to add extra information where the bot is aware—‘aware’ is a very strong word, but I’m just anthropomorphizing—that these two new people exist,” he says. “Otherwise, having the conversation would be extremely difficult.”
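Ahmad hasn’t published Grandpa Bot’s code, but the design he describes (a fixed, private corpus, canned fallback replies, and no access to the internet) maps onto a classic retrieval-style chatbot. The short Python sketch below illustrates only that general pattern; the corpus entries, the matching method, the threshold, and every name in it are my assumptions, not his implementation.

    # Illustrative sketch only; Grandpa Bot's actual code is unpublished.
    # A retrieval-style chatbot: match an incoming message against a fixed
    # corpus of (prompt, reply) pairs, and fall back to canned responses
    # when nothing is close enough. No web lookups, no generative guessing.
    import difflib
    import random

    # Hypothetical stand-ins for the roughly 2,000 recorded conversations.
    CORPUS = [
        ("how are you", "Sonu Shehzaday, how are you?"),
        ("i need advice", "Always be good to others. Always pray for everyone."),
    ]
    FALLBACKS = ["I do not understand.", "Tell me what is in your heart."]

    def respond(message: str, threshold: float = 0.6) -> str:
        """Return the stored reply whose prompt best matches the message,
        or a canned fallback when no prompt in the corpus is close enough."""
        best_reply, best_score = None, 0.0
        for prompt, reply in CORPUS:
            score = difflib.SequenceMatcher(None, message.lower(), prompt).ratio()
            if score > best_score:
                best_reply, best_score = reply, score
        # The deliberate limit Ahmad describes: outside the corpus,
        # the bot does not improvise; it falls back.
        return best_reply if best_score >= threshold else random.choice(FALLBACKS)

A design like this would also explain the behavior I saw later: ask it anything outside the corpus, and it can only return its defaults.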

I wondered what it must mean for his kids to talk to a digital version of their grandpa, and whether the sentiment was really worth the effort. At the end of the day, that’s the gist of the argument surrounding A.I.: for all its potential benefits, is it worth the equal or greater harm?

Mushtaq Ahmad Mirza

At our meeting in Bellevue, I was excited to meet the simulation he calls “Grandpa Bot,” but he’d packed the wrong laptop that day, he explained. I would have to wait. Still, I got to know Ahmad’s father the way humans have always learned about others: from Ahmad himself.

Originally from Pakistan, Ahmad’s father took over the family business: a company that imported technical books. He worked there for nearly half a century; it was there that the elder Ahmad came to love reading.

Ahmad remembers him as a loving man who never so much as raised his voice toward his youngest son. Ahmad wonders whether that was because he was obedient or because of the nine-year age gap between him and his next-closest sibling; his older siblings all remember their father as strict. More than anything, Ahmad remembers talking with his father. They’d talk about everything, any chance they could: after school, on walks, over dinner.

When Ahmad moved to the U.S. to study, his parents soon followed. But his studies often kept him away from them.

“I would mostly see them during holidays, and then he would eagerly await me,” Ahmad says.

As his father got older, Ahmad recalls helping him with technology, a familiar role for the children of immigrants.

“He really liked old Indian songs,” he recalls. “He would ask me to play this song, and then the next one, and the next one, and the next one.”

Mushtaq Ahmad Mirza died on October 28, 2013.


Replaceable

Ahmad is well aware of the moral gray zone he’s put himself in as one of many people creating A.I.s. He’s even pulled back with his children, bringing the bot out only on special occasions like birthdays and holidays. When I asked what it’s like for his kids, he explained that their understanding of the bot has “evolved over time.”

“Once I realized that they’re making associations that grandpa actually lives here, I had to intervene,” he says.

More than sentient and vengeful A.I., Ahmad worries that, like his children, who began to believe they were actually chatting with their grandfather, people will eventually forget that these simulations are just that—simulations. Or worse, that people will knowingly choose to interact with A.I. rather than pursue meaningful relationships in the real world. He points to Japan’s hikikomori crisis, a phenomenon that began sometime in the 1990s, long before the prevalence of A.I., in which hundreds of thousands of people have chosen to withdraw from society for no apparent reason, isolating themselves in their rooms, some for years.

“Now mix in generative A.I., which can also take care of certain other human needs for connection,” he says. “When I start to think about how that affects industrialized societies, especially societies which are very individualistic in nature… if our machines can take care of our need for human connection, then that’s going to be very disastrous for society as a whole.”

Patrick Stokes gets to the heart of the moral concern. An associate professor of philosophy at Deakin University in Victoria, Australia, and the author of Digital Souls: A Philosophy of Online Death, Stokes has written widely cited work on the ethics of A.I. and death, in particular his conclusion that A.I. is more than a tool for remembering the dead, like photos, videos, or an online presence: it presents the opportunity to replace people.

“That’s the concern, it may just reduce the dead and the living to what they can do for us,” he says.

“But aren’t we replaceable?” I found the question hard to ask, though there are people who will find it easy to posit. I offered a friend or a boyfriend as examples: people one can have a falling out with and eventually replace with someone else.

“Imagine somebody who doesn’t particularly care who’s in that role, so long as someone is. […] Imagine you’re on the other side of that. You’d be kind of like, ‘Well hang on. I don’t want to be the person who is currently filling the boyfriend role. I want to be loved for me, the person I am,’” Stokes explains.

For Stokes, letting the dead live on as bots moves us closer to a society in which people take other people for granted, “treating [everyone] like they were chatbots anyway.” That’s not to say the technology can’t be used for good, he’s quick to add: a simulated conversation could give a person with unresolved issues with a dead parent the closure they need.

Still, “the dead can’t speak for themselves,” he says. “It’s incumbent on the living to defend their interests, if they have them.”

That’s Ahmad’s one regret: he wasn’t able to get his father’s consent, before his death, to turn him into a bot. But for him the bot is just a memory, like a photo album stowed away in a closet for safekeeping.

When the day came to chat with Grandpa Bot, I felt jitters. I’d never chatted with a bot, having actively avoided A.I. chatbots since I first began hearing about them, a personal choice given the lack of transparency that surrounds them.

Abu Jani

A black terminal screen booted up, and at the top the words “Initializing Simulation” appeared in a white monospace font. Below, it wasn’t Grandpa Bot that said hello but Abu Jani, which means “dear father” in Urdu. The bot made first contact.

“Sonu Shehzaday, how are you?”

Almost immediately, the limits Ahmad placed became apparent. Grandpa Bot could not understand that I was a journalist, nor could it tell me Ahmad’s father’s name, details of his life, or where he grew up. After multiple attempts at conversation, the bot kept falling back on its default responses whenever it did not know how to answer. That’s when Ahmad took over. The bot responded better to him as he typed things like “I miss you” and asked for advice.

“Always be good to other [sic]. Always pray for everyone, including the people who have wrong [sic] you. If someone has wronged you then that is their own accord. That is between them and God. If you wish good for people [sic] then God will make things good for you in this world and the next,” the bot responded.

Ultimately, it wasn’t my place to chat with Grandpa Bot. How could I judge the bot’s fidelity, in the sense Ahmad had explained to me, if I had never met the elder Ahmad to begin with? What I expected was Grandpa Bot, but what I got was Abu Jani, an algorithm intimately put together by Ahmad Senior’s Sonu Shehzaday — his Golden Prince. It’s just a memory, one that may not work for me but works well enough for Ahmad. Perhaps this is as far as we should get with A.I.: something that works only in limited contexts, for those seeking it.

I later chatted with ChatGPT for the first time and was cautiously impressed by its much more advanced language simulation. Its ability to replicate almost any kind of conversation I wished for caught me off guard after chatting with the simpler, less sophisticated Grandpa Bot.


While some may worry about grief bots and their consequences, I now understand why Ahmad sees Grandpa Bot as a harmless way to cope, as benign as listening to someone tell a story about a loved one who has passed. Exaggerated warnings about artificial intelligence becoming self-aware are the least of his worries when much more real issues are at hand: he worries about coded discrimination seeping into police work, and about health systems whose use of A.I. can exacerbate an already racist society. He hopes future regulation and better transparency will keep the growing A.I. industry in check. If not, he says, A.I. could end up similar to Twitter (now X).

“People were celebrating the fact that now this will connect everybody, and anybody can now talk to anybody, and this will break down cultural barriers and people will understand each other more,” Ahmad says. “That is correct, but then at the same time, people were not thinking about how it facilitated the creation of echo chambers and greatly contributed to polarization.”

After talking to Grandpa Bot, harmless as he may have seemed, I still think the costs of A.I. in general outweigh the benefits.

While I understand Ahmad’s urge to preserve a loved one, I can see how, at the interpersonal level, bots of my loved ones would hurt me more than help me. Preservation is natural, even human, but why mess with methods that already work? Someday I will put photos of my mother and father on my Día de Muertos altar and remember them fondly, maybe with guilt. I imagine I will regret not talking to them more, not loving them more, not listening to them more. These are things I don’t think I could replace with a chatbot made in their image. On the contrary, perhaps those feelings are necessary to my growth as a human being. And for all the comfort a simulation could offer me, death still ends any second chance at time with my parents, however well an A.I. may feign otherwise.

Agueda Pacheco Flores is a freelance writer in Seattle who focuses on social justice issues, music, arts, and the Latine diaspora. She’s previously written for The Seattle Times, Crosscut, Journey Magazine, Real Change News, and The South Seattle Emerald.

Muhammad Aurangzeb Ahmad is delivering a talk, “When Your Grandpa Is a Bot: A.I., Death, and Digital Doppelgangers,” as part of Humanities Washington’s Speakers Bureau. Check it out online or in person around the state. Find an event here.

Find more essays and interviews on Humanities Washington’s blog.
