
One potential application of AI text generators such as ChatGPT is creating chatbots based on people who have died, so that users can speak to those “people” after they are gone. This could be done with famous historical figures or with personal loved ones. Such “grief tech,” as it is called, is already being built: HereAfter, You Only Virtual, Character.ai, and MindBank are just a few examples. Some apps already have living users answer questions to help create an AI chatbot clone of themselves that others can talk to after they die.
In theory, if a person leaves behind enough textual data to train a model on (books, journals, social media posts, emails, and text messages), then an AI trained on that data can predict what that person is likely to say in response to any prompt (which is essentially how all LLMs work). The chatbot learns to write in the style of the deceased person from their personal data. Fed continually updated information from the internet, the “deadbot” can comment on current events, making it seem as though the person is still alive. Users can learn what the deceased person would think about things that have happened in the world since they passed away, or ask the chatbot all the questions they wish they had asked while the person was still alive. At least, that is what the chatbots’ creators will claim their AI can do. But this is a false hope, a facade. AI cannot predict what a deceased human being would think or say years or decades later. You cannot create an accurate chatbot from the data of the dead.
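To make the mechanism concrete, here is a minimal sketch of the kind of fine-tuning such a service might perform. It is an illustration only, not any vendor’s actual pipeline: the tiny corpus, the model choice (GPT-2 via the Hugging Face transformers library), and the training settings are all assumptions made for the example.

```python
# Minimal sketch (assumptions throughout): fine-tune a small causal language
# model on a person's own writing so it continues prompts in their style.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical personal corpus: emails, journal entries, social media posts.
personal_texts = [
    "Dear Sam, thanks for the birthday card. The garden is finally thawing...",
    "Journal, 3 March: spent the afternoon rereading old letters...",
]

model_name = "gpt2"  # any small causal LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = Dataset.from_dict({"text": personal_texts}).map(
    tokenize, batched=True, remove_columns=["text"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="deadbot-sketch", num_train_epochs=3),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Generation after fine-tuning: the model samples likely next tokens given the
# prompt and the patterns in the training text; it does not recover the
# person's actual views.
prompt = "What do you think about the news today?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=60, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Even in this toy form, the sketch shows what the technology actually does: it imitates the statistical patterns of past writing, which is a far narrower thing than knowing what the person would think or say today.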