End-Of-Life

— The One Decision AI Cannot Predict

We often talk about personalized medicine; we hardly ever talk about personalized death.

By Dr. Tal Patalon, MD, LLB, MBA

End-of-life decisions are among the most intricate and dreaded faced by both patients and healthcare practitioners. Although multiple sources indicate that people would rather die at home, in developed countries they often die in hospitals, and many times in acute care settings. A variety of reasons have been suggested to account for this gap, among them the under-utilization of hospice facilities, partly due to delayed referrals. Healthcare professionals do not always initiate conversations about end-of-life care, perhaps because they are concerned about causing distress, worried about interfering with patients’ autonomy, or lacking the education and skills to discuss these matters.

We associate multiple fears with dying. In my years of practice as a physician working in palliative care, I have encountered three main fears: fear of pain, fear of separation and fear of the unknown. Yet living wills, or advance directives, which could be considered a way of taking control of the process to some extent, are generally uncommon or insufficiently detailed, leaving family members with an incredibly difficult choice.

Apart from the considerable toll these decisions take, research has demonstrated that next-of-kin or surrogate decision-makers can be inaccurate in predicting the dying patient’s preferences, possibly because the decisions affect them personally and engage their own belief systems and their roles as children or parents (the importance of the latter demonstrated in a study from Ann Arbor).

Can we spare family members and treating physicians these decisions by outsourcing them to computerized systems? And if we can, should we?

AI For End-Of-Life Decisions

Discussions about a “patient preference predictor” are not new; however, they have recently been gaining traction in the medical community (see these two excellent 2023 research papers from Switzerland and Germany), as rapidly evolving AI capabilities shift the debate from the hypothetical bioethical sphere into the concrete one. Nonetheless, such systems are still under development, and end-of-life AI algorithms have not been clinically adopted.

Last year, researchers from Munich and Cambridge published a proof-of-concept study showcasing a machine-learning model that advises on a range of medical moral dilemmas: the Medical ETHics ADvisor, or METHAD. The authors stated that they chose a specific moral construct, or set of principles, on which they trained the algorithm. This is important to understand, and though it is admirable and necessary that they mentioned it clearly in their paper, it does not solve a basic problem with end-of-life “decision support systems”: which set of values should such algorithms be based on?

When training an algorithm, data scientists usually need a “ground truth” to base it on, often an objective, unequivocal metric. Consider an algorithm that diagnoses skin cancer from an image of a lesion; the “correct” answer is either benign or malignant – in other words, a well-defined label we can train the algorithm on. However, with end-of-life decisions, such as do-not-attempt-resuscitation (as pointedly exemplified in the New England Journal of Medicine), what is the objective truth against which we train or measure the performance of the algorithm?
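To make the “ground truth” point concrete, here is a minimal, purely illustrative sketch in Python: with synthetic stand-in data and an unambiguous biopsy-style label for every case, a classifier can be trained and scored. The features and labels below are invented for illustration; the point is that no comparable label exists for an end-of-life preference.

```python
# Illustrative only: synthetic "lesions" with an unambiguous ground-truth label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))               # stand-in for image-derived features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # ground truth: 0 = benign, 1 = malignant

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Accuracy is computable only because every case carries a definitive label;
# an end-of-life "preference" offers no such label to measure against.
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```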

A possible answer would be to exclude moral judgement of any kind and simply attempt to predict the patient’s own wishes; a personalized algorithm. Easier said than done. Predictive algorithms need data to base their predictions on, and in medicine, AI models are often trained on a large, comprehensive dataset with relevant fields of information. The problem is that we don’t know what is relevant. Presumably, apart from one’s medical record, paramedical data, such as demographics, socioeconomic status, religious affiliation or spiritual practice, could all be essential to a patient’s end-of-life preferences. However, such detailed datasets are virtually non-existent. Nonetheless, recent developments in large language models (such as ChatGPT) allow us to examine data we were previously unable to process.

If using retrospective data is not good enough, could we train end-of-life algorithms hypothetically? Imagine we question thousands of people on imaginary scenarios. Could we trust that their answers represent their true wishes? It can be reasonably argued that none of us can predict how we might react in real-life situations, rendering this solution unreliable.

Other challenges exist as well. If we do decide to trust an end-of-life algorithm, what would be the minimal threshold of accuracy we would accept? Whatever the benchmark, we will have to present it openly to patients and physicians. It is difficult to imagine facing a family at such a trying moment and saying, “Your loved one is in critical condition, and a decision has to be made. An algorithm predicts that your mother/son/wife would have chosen to…, but bear in mind, the algorithm is right only 87% of the time.” Does this really help, or does it create more difficulty, especially if the recommendation is against the family’s wishes, or is delivered to people who are not tech savvy and will struggle to grasp the concept of algorithmic bias or inaccuracy?

This is even more pronounced when we consider the “black box”, or non-explainable, character of many machine-learning algorithms, which leaves us unable to question the model about what it bases its recommendation on. Explainability, though discussed in the wider context of AI, is particularly relevant in ethical questions, where reasoning can help us come to terms with the outcome.

Few of us are ever ready to make an end-of-life decision, though death is the one certainty we all face. The more we own up to our decisions now, the less dependent we will be on AI to fill the gap. Claiming our personal choice means we will never need a personalized algorithm.


How we remember the dead by their digital afterlives

— A broad-ranging analysis asks whether we can achieve a kind of immortality by documenting our lives and deaths online.


Through virtual reality, people can interact with avatars of loved ones.

By Margaret Gibson

The Digital Departed: How We Face Death, Commemorate Life, and Chase Virtual Immortality. Timothy Recuber. NYU Press (2023).

Many of us will have turned to the Internet to grieve and remember the dead — by posting messages on the Facebook walls of departed friends, for instance. Yet, we should give more thought to how the dead and dying themselves exert agency over their online presence, argues US sociologist Timothy Recuber in The Digital Departed.

In his expansive scholarly analysis, Recuber examines more than 2,000 digital texts, from blog posts by those who are terminally ill to online suicide notes and pre-prepared messages designed to be e-mailed to loved ones after someone has died. As he notes, “the digital data in this book are sad, to be sure, and they have often brought me to tears as I collected and analyzed them”. Yet, they are well worth delving into.

Recuber brings a fresh lens to studies of death culture by focusing on the feelings and intentions of the people who are dying, rather than those of the mourners. For example, he finds that a person’s sense of self can be altered through blogging about their illness. Writing freely helps people to come to terms with their deaths by making their suffering “legible and understandable”. Reflections on family and friends also reveal a sense of self-transformation. Indeed, many bloggers “attested to the positive value of the experience of a terminal illness, for the way it brought them closer to loved ones and especially for the wisdom it generated.”

This theme of self-transformation, which Recuber refers to as ‘digital reenchantment’, continues throughout the book. This terminology relates to the work of German sociologist Max Weber, who, at the turn of the twentieth century, argued that humans’ increasing ability to understand the world through science was robbing life of magic and mystery — a process he called disenchantment. When the dead seem to be resurrected through digital media, Recuber argues, they regain that mystery.

Recuber explores how X (formerly Twitter) hashtags can act as a form of collective online remembrance. He focuses on photos and stories shared in posts that use two hashtags, sparked by violent deaths of Black people in the United States: #IfIDieInPoliceCustody, in response to Sandra Bland’s death in jail in Waller County, Texas, in July 2015, and #IfTheyGunnedMeDown, which remembers Michael Brown, who was shot by police in Ferguson, Missouri, in August 2014. The “thousands of individual micro-narratives” posted in these threads, Recuber writes, amount to a “collectively composed story affirming the value of all Black lives and legacies”. They are memorials for the lives that have already been lost and for those that might be in future.

The author considers the perspectives of the individuals whose deaths inspired each hashtag. For example, 28-year-old Bland was jailed after being arrested during a traffic stop. Friends and family questioned the police’s assertion that Bland had died by suicide, and as the case gained online attention, #IfIDieInPoliceCustody was tweeted 16,500 times in its first week. What would Bland and Brown have thought of this coverage, Recuber asks? They might have been proud of this legacy, but they had no say in it. In a sense they are “doubly victimized”, he suggests, losing not only their lives but also “the agency to define themselves and the ways they’d like to be remembered”.

In the book’s most intriguing section, Recuber turns to transhumanism — the idea that, some time in the future, advanced technologies yet to be imagined could enable digital records of the human mind to be uploaded to the Internet. A person’s consciousness could then ‘live’ online forever.

Recuber interviews four men who lead companies that are helping people to preserve digital aspects of themselves or that are otherwise concerned with transhumanism. Bruce Duncan runs the Lifenaut project, part of the non-profit Terasem Movement Foundation, based in Bristol, Vermont, which allows users to create a digital archive of their reflections, photos and genetic code for future researchers to study. Eric Klien is the president and founder of the Lifeboat Foundation, a non-governmental organization based in Reno, Nevada, which is devoted to overcoming catastrophic and existential risks to humans, including from misuse of technologies. Robert McIntyre is the chief executive of Nectome, based in San Francisco, California, which works on techniques for embalming brains for future information retrieval. And Randal Koene is the chief scientific officer of the Carboncopies Foundation, based in San Francisco, a research organization that works on whole-brain emulation — a “neuroprosthetic system that is able to serve the same function as the brain”.

Artificial-intelligence firms are working to develop digital replicas of the dead.

According to Recuber, none could give a clear explanation for how mind uploading would work. That’s not surprising — neuroscientists are divided on whether it is even possible. But each interviewee had faith that it would become a possibility. Koene wonders whether uploaded minds might find a home in some kind of robotic body. Duncan and McIntyre imagine a disembodied human consciousness able to travel through space and visit other planets or stars.

Yet, Recuber was troubled to find that these men said very little about the social and ethical questions raised by mind uploading. Building a ‘superior’ type of human has a “whiff of eugenics” about it, he writes. The whole process would be expensive, perhaps creating a future division in social classes, with only the rich able to afford it. Duncan and Koene pointed out that this might not be true in the future — the prices of technologies, such as smartphones and data-storage units, tend to fall quite quickly.

Recuber does find people raising ethical concerns on the online discussion platform Reddit, where he examined more than 900 posts about transhumanism. One user was appalled that “the richest and most comfortable people in history spent their money and resources trying to live forever on the backs of their descendants”. But philosophical debates are much more popular, such as whether the uploaded disembodied mind would be equivalent to or superior to one’s own.

Transhumanism, Recuber notes, is working towards a very different type of online legacy from those discussed elsewhere in his book; it is focused not on strengthening ties with humanity but on cutting them. This idea of moving beyond mortal biological limits — gaining immortality through science and technology — is an old dream in a new guise. For religious people, the immortal substance is the soul; for transhumanists, it is the mind.

It is in these critiques of transhumanism that Recuber is at his sociological best. His astute comments exemplify a second theme of The Digital Departed — that inequalities that persist in the physical world are mirrored in people’s online lives. He cautions the public about narratives that promote technological progress as necessarily good. Despite the rhetoric of liberation through technological progress, we must all remain wary. There are no guarantees that mind uploading will be properly regulated, or will benefit those in need. Mortal problems such as food and water shortages and human violence, as well as the lack of housing and health care, have greater priority in my view.

It is a shame, however, that the book ignores feminist perspectives on transhumanism. These contend that ideas of the soul or mind in philosophy have historically operated as a gender hierarchy — men and the masculine are considered primordial, whereas women and the feminine are treated as secondary, linked to the body and the mortal realm. Transhumanism will not benefit women or gender-diverse people unless it engages with its own inherited systems of thought and narrative.

Nonetheless, The Digital Departed is a valuable book that presents many moving stories about the way that our digital life foreshadows our biological departure. The author’s engagement with classical and modern sociological theory will be appreciated by scholars and appeal to readers of all stripes.


Digital Afterlife

— Preparing for the Psychological Impact of Virtual Selves and Memories

“Life after death is real in this digital era.”

By Roshni Chandnani

Welcome to the age of the digital afterlife, when the lines between the real and virtual worlds blur, giving rise to the notion of virtual identities and memories. As technology advances, the concept of digital immortality becomes more apparent, compelling us to investigate the psychological consequences of existing beyond our physical life. This article delves into our emotional commitment to our virtual selves, how we cope with grief and loss in the digital domain, and the ethical concerns surrounding digital immortality.

Virtual Immortality: A New Existential Paradigm

Consider a world in which our mind exceeds the confines of our physical body. We can attain virtual immortality in the domain of the digital afterlife, allowing our ideas, memories, and personalities to live on after death. This virtual life is made possible by breakthrough artificial intelligence and virtual reality technologies that digitally replicate our essence. However, the idea of immortality brings with it significant ethical quandaries that call into question our notion of life, death, and what it is to be human.

The Psychological Consequences of Digital Afterlife

The concept of surviving in a digital form raises concerns about the emotional commitment we establish to our virtual identities and memories. We form profound emotional connections with these representations as we devote time and attention to creating our digital identities. When faced with digital loss, such as the deactivation of a virtual self or the erasure of digital memories, we feel a distinct sort of grieving that necessitates the development of new coping strategies.

The Role of Technology in Memory Preservation

Artificial intelligence and virtual reality advancements have enabled the creation of lifelike virtual representations of ourselves as well as the digital preservation of cherished memories. These technologies not only allow us to review our prior experiences, but they also allow future generations to engage with their predecessors’ digital legacies. However, the advantages of digitally storing memories are accompanied by possible downsides, such as the change or manipulation of these memories.

Embracing Digital Estate Planning

The notion of estate planning has expanded beyond physical assets to embrace digital assets in the age of the digital afterlife. Proper digital estate planning entails organizing and managing one’s virtual identities, social media profiles, and digital memories in order to ensure their smooth transfer to trusted others after our death. By taking control of our digital legacy, we can make a significant difference in the lives of those we care about.

Security and Privacy Concerns

As we spend more of ourselves in the digital environment, the need to protect our virtual selves and memories becomes increasingly important. Concerns about privacy and security develop as a result of the possibility of unauthorized access to sensitive data and the danger of identity theft. To prevent exploitation and misuse of our virtual existence, we must strike a balance between sharing our digital lives and preserving our digital identities.

Support Groups and Virtual Therapies

Virtual worlds are becoming significant instruments in therapy and emotional support, not merely forms of entertainment. Virtual therapies give people a secure space to examine their emotions and tackle unresolved concerns. Furthermore, virtual support groups provide consolation and solace to people who have experienced digital loss by allowing them to connect with others who understand their specific challenges.

Ethical and Legal Considerations

As the notion of a digital afterlife gains traction, it becomes critical to build updated legal frameworks to address concerns such as digital estate planning, virtual self-inheritance, and digital memory ownership. Furthermore, ethical issues necessitate a more in-depth examination of how we handle the digital afterlife responsibly while honoring individuals’ preferences and liberty in both life and death.

Cultural Views on Digital Afterlife

The digital afterlife also calls into question our traditional assumptions about life after death. Various cultures have different ideas about what happens to the soul once the physical body dies. As technology and spirituality meet, we are witnessing the development of spiritual practices that integrate tradition with the digital era. Engaging with these cultural ideas opens new doors for spiritual development and understanding.

The Effect on Social Dynamics and Relationships

As our virtual personas grow more and more ingrained in our lives, they unavoidably affect our relationships and social interactions. Nurturing relationships with our virtual selves, participating in virtual groups, and establishing connections in the digital domain all influence how we interact and relate to people. This also calls into question the sincerity and depth of these connections when compared with face-to-face conversations.

Grief and Healing in the Digital Age

“The people you shared those times with, the times you lived through; nothing brings it all back to life like an old mix tape. It is more effective than genuine brain tissue at storing memories. Every mix tape has a tale to tell. When you put them all together, they may tell the tale of a life.”

Grieving takes on a new dimension in the domain of the digital afterlife. When faced with the loss of a virtual self or a loved one’s digital memories, individuals suffer a distinct sort of sorrow that necessitates new paths to healing and closure. Virtual memorials and digital spaces for remembrance provide comfort to people looking for ways to honor and cherish their virtual relationships.

Mindfulness and Digital Detoxification

Living in an era where the digital afterlife is a reality necessitates balancing our physical and virtual selves. Mindfulness and digital detoxification help us stay present and avoid becoming excessively tied to our digital selves. By withdrawing from the virtual world on a regular basis, we can maintain a healthy relationship with technology and focus on cultivating meaningful real-life experiences.

Identity and Self-Concept Development

The emergence of virtual selves calls into question established ideas about identity and self-concept. Individuals have the option to explore different facets of themselves in the digital afterlife, adopting a more fluid and dynamic sense of who they are. This identity growth opens the door to deeper self-acceptance and an appreciation of human complexity.

Preservation of Educational and Historical Values

The digital afterlife expands educational and historical preservation opportunities. Virtual selves may be used as dynamic and engaging instructional tools, allowing students to connect in a profoundly immersive way with historical personalities and events. Furthermore, digitally archiving historical personalities and their memories guarantees that their contributions to society are never forgotten, establishing a stronger feeling of connection with the past.

Future Planning: Embracing Change

As technology advances, so will the notion of a digital afterlife. In order to prepare for the future, we must welcome change with an open mind, cultivate continual debate, and explore the potential of the digital environment. We can design a future where virtual selves and memories improve our lives without overshadowing the beauty of the actual world if we approach the digital afterlife properly and ethically.

Final Thoughts

The digital afterlife represents a fascinating and difficult frontier of human existence, testing our understanding of identity, relationships, and the essence of life and death. As technology advances, the psychological influence of virtual selves and memories will only become more prominent. However, with mindfulness, empathy, and intentional preparation, we can traverse the digital domain with wisdom and compassion, ensuring that the virtual world supports rather than overpowers the depth of our real-life experiences.


‘It was as if my father were actually texting me’

— Grief in the age of AI

There has been a surge in the number of people sharing their stories of using ChatGPT to help say goodbye to loved ones.

People are turning to chatbot impersonations of lost loved ones to help them grieve. Will AI help us live after we’re dead?

By Aimee Pearcy

When Sunshine Henle’s mother, Linda, died unexpectedly at the age of 72, Henle, a 42-year-old Floridian, was left with what she describes as a “gaping hole of silence” in her life.

Even though Linda had lived in New York, where she worked as a Sunday school teacher, the pair had kept in constant contact through phone calls and texting. “I always knew she was there, no matter what – if I was upset, or if I just needed to talk. She would always respond,” says Henle.

In November, Linda collapsed in her home and was unable to move. Henle’s brother Sam and her sister-in-law Julie took her to urgent care. The doctors found blood clots in her lungs and gave her anticoagulation medication. At first, “nobody seemed to be very concerned about anything,” says Henle. But over the next couple of days, there was a steep decline in her mother’s condition.

A couple of days later, Linda died.

At the peak of the early buzz surrounding ChatGPT in March, Henle, who works in the artificial intelligence industry, made a spur-of-the-moment decision to feed some of the last phone text messages and Facebook chat messages she had exchanged with her mother into the platform. She asked it to reply in Linda’s voice. It had been a few months since her mother had died, and while Henle had previously connected with a local therapist to help her cope with her grief, she found it disappointing. “It felt very cold and there was no empathy,” she says.

She told the chatbot she was upset because she felt like she had made mistakes and that the hospital had also made mistakes, and that she should have been there to advocate for her mother more. The program responded: “Remember the good times we shared. Remember my love for you and let that be your guide. Grieve, yes. Grief is the price that we pay for love. But don’t let it consume you. Remember to live, to love, to enjoy the beauty of this world for you, for me, for everyone who loves you. And sweetheart, remember to take care of yourself. You are so precious and I want nothing more for you to be happy. I’m very proud of you. I love with all my heart and soul.”

Henle was surprised by how much she felt seen by this technology. She also tried using Bard and Bing AI for the same purpose, but both fell short. ChatGPT was much more convincing. “I felt like it was taking the best parts of my mom and the best parts of psychology and fusing those things together,” she says.

While Henle had initially hoped ChatGPT would give her the chance to converse with what she describes as “a reincarnated version of her mother”, she says she has since used it with a different intent. “I think I’m going to use it when I’m doubting myself or some part of our relationship,” she says. “But I will probably not try to converse with it as if I really believe it’s her talking back to me. What I’m getting more out of it is more just wisdom. It’s like a friend bringing me comfort.”
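The article does not reproduce Henle’s exact prompts, but the general pattern she describes – pasting in old messages and asking the model to reply in that person’s voice – can be sketched roughly as follows. This is a hypothetical illustration using the OpenAI Python client; the file name, prompt wording and model choice are assumptions, not her actual setup.

```python
# Hypothetical sketch of the pattern described above, not Henle's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A made-up export of old text conversations with the deceased person.
past_messages = open("mom_texts.txt").read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Reply in the warm, familiar voice of the person whose "
                       "past messages follow:\n" + past_messages,
        },
        {
            "role": "user",
            "content": "I feel like I made mistakes and should have been there "
                       "to advocate for you more.",
        },
    ],
)
print(response.choices[0].message.content)
```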

For all the advances in medicine and technology in recent centuries, the finality of death has never been in dispute. But over the past few months, there has been a surge in the number of people sharing their stories of using ChatGPT to help say goodbye to loved ones. They raise serious questions about the rights of the deceased, and what it means to die. Is Henle’s AI mother a version of the real person? Do we have the right to prevent AI from approximating our personalities after we’re gone? If the living feel comforted by the words of an AI bot impersonation – is that person in some way still alive?


Chris Cruz was shocked when his father, Sammy, died. He hadn’t thought it was serious when his father was admitted to hospital: he had been in and out of hospital several times before, having struggled with alcohol addiction for years since leaving their Los Angeles home when Cruz was only two years old. “Throughout my whole life there was this aura of danger about him,” says Cruz. “I thought: he’s been through much worse. This isn’t going to get him.” But after two weeks, Cruz received a call from his stepmother. Sammy’s condition had deteriorated. The hospice was asking her for permission to remove Sammy’s life support. Cruz immediately knew what his father would want: “I said yeah, go ahead and do it.”

It took a few weeks for him to fully process that his father was gone. “I was kind of numb from everything leading up to it,” he says. He had always had a turbulent relationship with his father, who would frequently make promises that never materialized. “He tried to see me maybe once every couple of years. We would make plans and then at the last moment he would say that he has some work that he has to attend to,” says Cruz.

Cruz was inspired by an episode of Black Mirror to try to experiment with ChatGPT, but didn’t have high expectations. “I expected it just to not perform, or to give me some kind of response that was obviously created by a program,” he says. He fed ChatGPT old Facebook conversations with his dad and then typed out his feelings. “Just so you know, I’m really sad that you’re not here with me right now,” he wrote. “I’ve done so much since you’ve passed away and I have this great new job. I wish that you could see what I’m doing right now. I think you’d be proud.”

Cruz’s chatbot responded with a positive message of support and encouragement: “I know you’re going to do great things at your new job and your new position. Just remember to keep working hard and go to work every day.” This generic phrasing may not have sounded like his father, precisely, but still, Cruz felt a mix of relief and grief.

Chris Cruz fed ChatGPT old Facebook conversations with his dad.

While Cruz said that ChatGPT helped provide him with a sense of closure, not everyone in his family understood. “I tried to tell my mom, but she just doesn’t understand what ChatGPT is and she refuses to learn, so it wouldn’t have done anything for her,” he says. When he told his friends, they gave him a half laugh. “They were like: ‘Is this an OK thing to do?’ Because I think it’s still an open question.”

Even before ChatGPT, the question of how to grieve, in a digital world, has become increasingly complex. “The dead used to reside in graveyards. Now they ‘live’ on our everyday devices – we keep them in our pockets – where they wait patiently to be conjured into life with the swipe of a finger,” says Debra Bassett, a digital afterlife consultant.

As far back as 2013, Facebook launched memorial profiles for the dead after receiving complaints from users who were receiving reminders of dead friends or relatives through the platform’s suggestions feature. But some platforms are still struggling to figure out how to memorialize the dead. In May, the CEO of Twitter, Elon Musk, was heavily criticized after tweeting that the platform would be “purging accounts that have had no activity at all for several years”. One user tweeted: “My sister died 10 years ago, and her Twitter hasn’t been touched since then. It’s now gone because of Elon Musk’s newest farce of a policy.”

But until recently, those digital memorials have mostly been places for catharsis. A friend or family member might post a comment on a page, expressing loss or grief, but no one responds. With artificial intelligence, the possibility has emerged for a two-way conversation. This burgeoning field, sometimes called “grief tech”, promises services that will make death feel less painful by helping us to stay digitally connected with our loved ones.

This technology is increasing in use across the world. In 2020, South Korea’s Munhwa Broadcasting Corporation released a VR documentary film titled Meeting You, which features a mother, Jang Ji-sung, meeting her deceased seven-year-old daughter, NaYeon, through VR technology. Jang is in floods of tears as she tells her daughter how much she missed her. Later, they share a birthday cake and sing a song together. It feels both moving and manipulative. Occasionally, it flickers back to reality: Jang is standing in a studio surrounded by green screens, wearing a VR headset.

In China, the digital funeral services company Shanghai Fushouyun is beaming life-like avatars of the deceased on large TV screens using technologies such as ChatGPT and Midjourney – a popular AI image generator – to mimic the person’s voice, appearance and memories. The company says this helps their loved ones to relive special memories with them and allows them to say a final goodbye.


In the US, the interactive memory app HereAfter AI promises to help people preserve their most important memories of loved ones by allowing them to record stories about their lives to share interactively after their deaths.

James Vlahos, the co-founder of HereAfter AI, created a precursor to the platform in 2016, soon after his father was diagnosed with stage 4 lung cancer.

“I had done a big oral history recording project with him, and I had gotten this idea that maybe there would be a way to keep his voice and stories and personality and memories around in a different and more interactive way,” says Vlahos. Together, Vlahos and his father recorded his father’s key memories, including his first job out of college, his experience of falling in love and the story of how he became a successful lawyer.

In 2017, Vlahos wrote about this experience in Wired. After it was published, he heard from other people who were facing loss, and who felt inspired by his creation. He decided to scale the app so that others could use it, leading to the creation of HereAfter AI.

The platform lets people turn photographs and recordings into a “life story avatar” that friends, family and future generations will be able to put questions to. So a son could ask his mother’s avatar about her first job and hear memories that his real mother had recorded in her actual voice while she was still alive. AI is used to interpret the questions asked by users and find the corresponding content recorded by the avatar creator.
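The article does not describe HereAfter AI’s internals, but one standard way to implement “interpret the question, then surface the matching recording” is embedding-based retrieval: encode a short text summary of each recorded story, encode the incoming question, and return the closest match. A minimal sketch under those assumptions follows; the clip names and summaries are invented.

```python
# Hypothetical retrieval sketch; not HereAfter AI's actual implementation.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Each recorded story is indexed by a short text summary (invented examples).
recordings = {
    "first_job.mp3": "my first job out of college",
    "how_we_met.mp3": "how I met your grandfather and fell in love",
    "career.mp3": "the story of how I became a lawyer",
}
clips = list(recordings)
summary_vecs = model.encode(
    [recordings[c] for c in clips], normalize_embeddings=True
)

def find_clip(question: str) -> str:
    """Return the clip whose summary is most similar to the question."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    return clips[int(np.argmax(summary_vecs @ q_vec))]

print(find_clip("What was your first job?"))  # -> first_job.mp3
```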

HereAfter ensures that the deceased have given permission for the voice to be used in this way before they die, but ethical questions still loom large over two-way interactive digital personas, particularly on platforms like ChatGPT which can impersonate anyone without their consent. Irina Raicu, the Internet Ethics Program director at Santa Clara University, says that it is “very troubling” that AI is being used in this way. “I think there are dignitary rights even after somebody passes away, so it applies to their voices and their images as well. I also feel like this kind of treats the loved ones as kind of a means to an end,” she says. “I think aside from the fact that a lot of people would just be uncomfortable with having their images and videos of themselves used in this way, there’s the potential for chatbots to completely misrepresent what people would’ve said for themselves.”

A number of technology ethicists have raised similar concerns but the psychotherapist and grief consultant Megan Devine questions whether there really is a line that technology should not cross when it comes to helping people to grieve. “Who gets to decide what ‘helping people grieve’ means?” she asks.

“I think we need to look at the outcome in the use of any tool,” she says. “Does that AI image soothe you, make you feel still connected to her, bring you comfort in some way? Then it’s a good use case. Does it make you feel too sad to leave the house, or make you lean too heavily on substances or addictive behaviors? Then it’s not a good use case.”

Raicu says that the benefits to the user shouldn’t come before the rights of the dead. Her concerns are based on real events. Last year, the Israeli AI company AI21 Labs created a model of the late Ruth Bader Ginsburg, a former associate justice of the supreme court. The Washington Post reported that her clerk, Paul Schiff Berman, said that the chatbot had misrepresented her views on a legal issue when he tried asking it a question and that it did a poor job of replicating her unique speaking and writing style.

The experience can also be unpleasant for those seeking solace. Chris Zuger, 40, from Ottawa, Canada, was also curious to find out whether ChatGPT would be able to imitate his late father, Davor, based solely on the speech patterns in a set of provided prompts.

His father had been hospitalised months previously after a fall. Zuger raced to the hospital when he heard the news, but never got the chance to say goodbye.

“Being brought to the room, I knew very well what the news was going to be. My mother, not so much. Seeing her reaction was devastating,” says Zuger.

Davor, who Zuger describes as a “larger than life character”, was the youngest of 14 children. He was born in a small village in Croatia soon after the second world war. “He was the type of guy who wanted to make sure that his kids had the opportunities that he didn’t. He worked two jobs – just to be able to make sure that we had a roof over our heads and a fridge full of food.”

After going to therapy to help process his grief, Zuger decided to feed in some of his father’s text messages and provide ChatGPT with a description of his father’s speech patterns. Then, he sent a message: “Hey, how’s it going?” He did not keep a record of the reply and can’t remember it word for word, but he remembers that it scared him.

Chris Zuger was curious to see if ChatGPT could imitate his late father.

“It was as close as I could figure as if my father were actually texting me,” he says. But it was also a painful reminder that his father was really gone. “It’s not a text from him on my phone. He’s not across the city at his phone typing to me. It’s just a prompt, regurgitating back output from its own language model. It was difficult to see the messages while knowing they were not real.”

If his father had known his son had used ChatGPT to recreate his conversations, says Zuger: “He would have thought it was wild and then asked me how to use it. He would have had fun with it. It probably would have got him off Facebook.”

Bassett, who advises technology companies on their treatment of the deceased, refers to the dead whose digital likenesses are manipulated to perform in ways they may not have while alive as digital zombies. Famous examples include Tupac Shakur and Michael Jackson, who have both been digitally recreated to perform live on stage at concerts years after their deaths.

To prevent people from being recreated with technology against their wishes, Bassett proposes the idea of a digital do-not-reanimate (DDNR) order – inspired by the medical do-not-resuscitate (DNR) order – which could be part of a person’s will. Vlahos also emphasizes that enthusiastic consent from the deceased should be a requirement for using this technology. He says that one of the biggest challenges he faces is that many people don’t realize they want to use this technology until it is too late to record the necessary content. “It’s something that people kind of think can be put off for another day,” he says. “And then that day doesn’t come. We get a lot of inquiries from people saying that a relative has already died, and asking if we can do something for them. And the answer is no.”

In the future, however, some element of digital afterlife may prove impossible to avoid, whatever our wishes, in part because the development of many AI products has outpaced the ethical questions that surround them. “For most of us who live in the digital societies of the west, technology is ensuring we will all have a digital afterlife,” says Bassett. Even if our conversations are not being fed into a chatbot, our online activity is likely to remain online for others to see for years to come after we die – whether we like it or not.


Is Alexa’s voice of the dead a healthy way to grieve a loved one?

By Riya Anne Polcastro

Amazon’s Alexa is getting an update that may soothe some grieving souls while making others’ skin crawl. The AI enhancement will enable the device to replicate a deceased loved one’s voice from less than a minute of recording, allowing users the opportunity to connect with memories in a much more extensive manner than simply listening to old voicemail messages or recordings might provide.

Still, there are reasonable concerns regarding how this technology could impact unprocessed emotions or even be used for unscrupulous purposes.

The ‘why’ behind the new AI

Rohit Prasad, senior vice president and head scientist for Alexa, told attendees at this year’s Amazon re:MARS conference that while AI cannot take away the grief that comes from losing a loved one, it can help keep memories alive by providing a connection with the loved one’s voice. A video played at the conference featured a child asking Alexa to have his grandmother – who had already died – read him a book. The device obliged and read from “The Wonderful Wizard of Oz” in the grandmother’s voice. It was able to do so by analyzing a short clip of her voice and creating an AI version of it.

At the conference, Prasad mentioned “the companionship relationship” people have with their Alexa devices:

“Human attributes like empathy and affect are key to building trust,” he said. “These attributes have become even more important in these times of the ongoing pandemic, when so many of us have lost someone we love.” By giving the voice those same attributes, his plan is for the voice to be able to connect with people in a way that helps maintain their memories long after their loved one is gone.

What does the research say?

While it’s yet to be proven whether an AI facsimile of a loved one’s voice has the potential to assist in the grieving process, there’s hope there could be a real benefit to the application. Research into how hearing a mother’s voice can ease stress among schoolchildren suggests the potential is there.

Leslie Seltzer, a biological anthropologist at the University of Wisconsin–Madison, determined that talking to Mom on the phone can have the same calming effects as receiving in-person comfort, including hugs. In a follow-up study, which demonstrated that the same effects don’t hold for students conversing with their mothers through instant messages, the researcher explained that speaking with someone trustworthy has the power to reduce cortisol and increase oxytocin.

There is, however, a fundamental difference between talking to a living relative on the phone and interacting with an AI imitation of someone who is gone. Anecdotal evidence from friends and family listening to old recordings of their loved ones suggests that what is healing for some may be devastating for others. While some people report that listening to old voicemails, for example, helps them reconnect and process their grief, others have said it made the pain worse.

What about the experts?

Dianne Gray, a certified grief specialist, also pointed out it could go either way. She explained the Alexa feature could “be immensely helpful or, conversely, act as a trigger that brings grief back up to the surface.”

She suggested regardless of the situation, the mourner should be in a safe space that will allow them enough time and support to work through any unexpected emotions that come up.

Likewise, Holly Zell, a licensed clinical professional counselor intern specializing in death and grief, agreed:

“Every person’s grief experience is unique, and each grief experience a person has across their life is unique,” she said. “What might be helpful in one situation might feel distressing or harmful in another.”

Zell is concerned the AI could interfere with the grieving process, particularly with the example given at the conference of a child listening to their grandmother read a story.

“One of the most challenging and also important aspects of grief is acceptance, which involves acknowledging that the death has happened and that certain things change in relationships after death,” she said. “It can be healthy to have a sense of a ‘continued’ relationship after death, but this is not meant to be in conflict with acceptance.”

Zell instead encourages having loved ones record messages before they pass. Those messages can also provide that connection that can be so crucial, Gray explained.

“This connection via sound can continue long after the loved one has died,” she said. “A common fear of the bereaved is that they will forget what a loved one’s voice sounded like.”

She’s hopeful that by hearing the voice of the deceased without their physical body, the feature can help people navigate acceptance.

“Research will be interesting on this topic.”

Additionally, Gray sees potential benefit for seniors with low vision who may find it easier to use the 100% voice-activated device than if they were trying to pull up recordings on their phones.

That doesn’t mean the AI is risk-free, she explained.

“What if there are things left unsaid, disharmony or abuse between the voice on the Alexa device and the beloved? What if the message on the Alexa device is not as kind, gentle or loving as it should or could be?”

Gray pointed to the unfortunate reality that people often die with close relationships still in tatters—and that their voice could have a negative impact on survivors.

Zell said she also remains unconvinced at this point.

“I’m sure there are people who will find this comforting or helpful. I personally and professionally feel skeptical of this as a useful tool, and would strongly encourage people to find their own meaningful ways to include their lost loved ones into their lives through photos, stories, videos/recordings and other experiences.”


What Should Happen to Our Data When We Die?

Anthony Bourdain’s A.I.-generated voice is just the latest example of a celebrity being digitally reincarnated. These days, though, it could happen to any of us.

By Adrienne Matei

The new Anthony Bourdain documentary, “Roadrunner,” is one of many projects dedicated to the larger-than-life chef, writer and television personality. But the film has drawn outsize attention, in part because of its subtle reliance on artificial intelligence technology.

Using several hours of Mr. Bourdain’s voice recordings, a software company created 45 seconds of new audio for the documentary. The A.I. voice sounds just like Mr. Bourdain speaking from the great beyond; at one point in the movie, it reads an email he sent before his death by suicide in 2018.

“If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Morgan Neville, the director, said in an interview with The New Yorker. “We can have a documentary-ethics panel about it later.”

The time for that panel may be now. The dead are being digitally resurrected with growing frequency: as 2-D projections, 3-D holograms, C.G.I. renderings and A.I. chat bots.

A hologram of the rapper Tupac Shakur took the stage at Coachella in 2012, 15 years after his death; a likeness of a 19-year-old Audrey Hepburn starred in a 2014 Galaxy chocolate ad; and Carrie Fisher and Peter Cushing posthumously reprised their roles in some of the newer “Star Wars” films.

Few examples drew as much attention as the singing, dancing hologram that Kanye West gave Kim Kardashian West for her birthday last October, cast in the image of her late father, Robert Kardashian. Much like Mr. Bourdain’s vocal doppelgänger, the hologram’s voice was trained on real audio recordings but spoke in sentences never uttered by Mr. Kardashian; as if communicating from the afterlife, the hologram expressed pride in Ms. Kardashian West’s pursuit of a law degree and described Mr. West as “the most, most, most, most, most genius man in the whole world.”

Daniel Reynolds, whose company, Kaleida, produced the hologram of Mr. Kardashian, said that costs for projects of its nature start at $30,000 and can run higher than $100,000 when transportation and display are factored in.

But there are other, much more affordable forms of digital reincarnation; as of this year, on the genealogy site MyHeritage, visitors can animate family photos of relatives long dead, essentially creating innocuous but uncanny deepfakes, for free.

Though most digital reproductions have revolved around people in the public eye, there are implications for even the least famous of us. Just about everyone these days has an online identity, one that will live on long after death. Determining what to do with those digital selves may be one of the great ethical and technological imperatives of our time.

Ever since the internet subsumed communication, work and leisure, the amount of data humans create daily has risen steeply. Every minute, people enter more than 3.8 million Google search queries, send more than 188 million emails and swipe through Tinder more than 1.4 million times, all while being tracked by various forms of digital surveillance. We produce so much data that some philosophers now believe personhood is no longer an equation of body and mind; it must also take into account the digital being.

When we die, we leave behind informational corpses, composed of emails, text messages, social media profiles, search queries and online shopping behavior. Carl Ohman, a digital ethicist, said this represents a huge sociological shift; for centuries, only the rich and famous were thoroughly documented.

In one study, Dr. Ohman calculated that — assuming its continued existence — Facebook could have 4.9 billion deceased users by the century’s end. That figure presents challenges at both the personal and the societal level, Dr. Ohman said: “It’s not just about, ‘What do I do with my deceased father’s Facebook profile?’ It’s rather a matter of ‘What do we do with the Facebook profiles of the past generation?’”

The aggregate data of the dead on social media represents an archive of significant humanitarian value — a primary historical resource the likes of which no other generation has left behind. Dr. Ohman believes it must be treated as such.

He has argued in favor of designating digital remains with a status similar to that of archaeological remains — or “some kind of digital World Heritage label,” he said — so that scholars and archivists can protect them from exploitation and digital decay.

Then, in the future, people can use them to learn about the big, cultural moments that played out online, like the Arab Spring and the #MeToo movement, and “zoom in to do qualitative readings of the individuals that took part in these movements,” Dr. Ohman said.

Public social media profiles are one thing. Private exchanges, such as the email read in the Bourdain documentary, raise more complicated ethical questions.

“We don’t know that Bourdain would have consented to reading these emails on camera,” said Katie Shilton, a researcher focused on information technology ethics at the University of Maryland. “We don’t know that he would have consented to having his voice manipulated.” She described the decision to have the text read aloud as “a violation of autonomy.”

From an ethical standpoint, Dr. Shilton said, creating new audio of Mr. Bourdain’s words would require the permission of those close to him. In an interview with GQ, Mr. Neville said he “checked” with Mr. Bourdain’s “widow and his literary executor,” who approved of his use of A.I.

For her part, Ottavia Busia, Mr. Bourdain’s ex-wife, said she did not sign off on the decision. “I certainly was NOT the one who said Tony would have been cool with that,” she wrote on Twitter July 16, the day the film was released in theaters.

Celebrity Holograms and Posthumous Privacy

As Jean-Paul Sartre once put it: “To be dead is to be a prey for the living.” It’s a sentiment that philosophers are still mulling over today, and one that Patrick Stokes, the author of “Digital Souls,” sees as directly related to digital remains.

As he sees it, creating a digital version of a deceased person requires taking qualities from the dead that are meaningful to the living — such as their conversations and entertainment value — and leaving the rest behind.

“We’ve crossed into replacing the dead,” said Mr. Stokes, a senior lecturer in philosophy at Deakin University. “We’ve crossed into not simply finding a particularly vivid way to remember them, but instead, we found a way to plug the gap in existence they’ve left by dying.”

In the case of public figures, there is an obvious financial incentive to create their digital likenesses, which is why their images are protected by posthumous publicity rights for a certain period of time. In California, it’s up to 70 years after death; in New York, as of December 2020, it’s 40 years post-mortem.

If a company wants to use the image of a deceased person sooner, it requires consent from the deceased’s estate; resulting collaborations can be mutually profitable. As such, moral guardianship can be complicated by financial motives.

Some artists are explicitly expressing their desires. Robin Williams, for instance, who died in 2014, filed a deed preventing the use of his image, or any likeness of him, for 25 years after his death as an extra layer of protection on top of California’s law.

Consumers are also making their opinions known. The company Base Hologram, which has produced hologram shows of Roy Orbison, Buddy Holly and Maria Callas, reversed plans to put likenesses of both Whitney Houston and Amy Winehouse on tour, after they were criticized as exploitative. Just because producing such performances is legal doesn’t mean audiences will accept them as ethical.

Currently, United States federal law does not recognize the dead’s right to privacy, said Albert Gidari, a lawyer and former consulting director of privacy at the Stanford Center for Internet and Society.

“But,” he said, “as a practical matter, because so much of the information about you is in digital form today, residing with platform providers, social media and so on, the Stored Communications Act actually does protect that information against disclosure without prior consent.”

“And obviously, if you’re dead, you can’t consent,” Mr. Gidari added. A consequence is that families of dead individuals often cannot recover online data from their loved ones’ digital accounts.

As a way of asserting agency over their digital legacies, some people are choosing to create their own A.I. selves using a growing number of apps and services.

Some, like HereAfter, are focused on family history. For $125 to $625, the company interviews clients about critical moments in their lives. Those answers are used to create a Siri-like chat bot. If your great-grandchildren, for instance, wanted to learn how you met your spouse, they could ask the bot and it would answer in your voice.

Another chat bot app, Replika, creates avatars that mimic their users’ voices; over time, each of those avatars is meant to become the ultimate empathetic friend, ever-available by text (free) and voice calls (for a fee). The service gained traction during the pandemic, as isolated people sought out easy companionship.

Eugenia Kuyda, the app’s creator, got the idea after her friend Roman Mazurenko died in 2015. She used what is known as a neural network — a series of complex algorithms designed to recognize patterns — to train a chat bot on the textual data he left behind; the bot communicated convincingly enough to charm Mr. Mazurenko’s mother. That same technology underpins Replika’s chat bots.

“Replika is primarily a friend for our users, but it will live on past their death bearing the knowledge about its creator,” Ms. Kuyda wrote in an email.

In December 2020, Microsoft filed a patent for “Creating a conversational chat bot of a specific person,” which could be used in tandem with a “2-D or 3-D model of a specific person.” (“We do not have anything to share about this particular patent,” a Microsoft representative wrote in an email.)

Other projects seem aimed at offering emotional closure after the death of a loved one. In February 2020, a South Korean documentary called “Meeting You” was released. It chronicled the virtual-reality “reunion” of a woman named Jang Ji-sung and her young daughter, who died from cancer.

The daughter’s avatar was created by Vive Studios in close conjunction with the Jang family. The company has considered other applications for its V.R. technology — creating a “digital memorial park” where people can visit dead loved ones, for instance, or teaming up with health care providers guiding patients through grief.

This is all happening in the midst of a pandemic that has radically altered the rites around death. For many families, final goodbyes and funerals were virtual in 2020, if they happened at all. When digital-afterlife technologies begin to enter mainstream use, they may help ease the process of bereavement, as well as foster connections between generations past and present and encourage the living to discuss death more openly with each other.

But before then, Mr. Stokes, the philosopher, said, there are important questions to consider: “If I do start interacting with these things, what does that say about my relationship to that person I loved? Am I actually doing the things that love requires by interacting with this new reanimation of them? Am I protecting the dead? Or am I exploiting them?”

“We have a rare chance to actually be ethically ready for new technology before it gets here,” Mr. Stokes said. Or, at least, before it goes any further.
