Singer and actress Lily Allen surprised the public by detailing how artificial intelligence (AI) became an unexpected mediator in her relationship with her ex-husband, David Harbour. In a recent interview, she confessed to using ChatGPT to organize her arguments during marital disputes, from rows over dirty dishes to conflicts about money. The method, which sounds straight out of a science fiction script, exposes a broader reality: AI is already permeating human relationships in ways that challenge ethics, privacy, and even the notion of authenticity.
While Allen used the technology to avoid emotional confrontations, other cases reveal alarming risks. In France, a woman lost more than R$ 1 million to scammers who used AI to impersonate Brad Pitt. The scam, as absurd as it is true, illustrates how even rudimentary systems can manipulate human emotions. Yuval Noah Harari, historian and author of Sapiens, warns: “ChatGPT is an evolutionary amoeba. If this deceives people, imagine what advanced versions will do.”
The Line Between Mediation and Manipulation
The Turing Test, proposed by Alan Turing in 1950 to distinguish humans from machines, has been passed not by humanoid robots but by text algorithms. Allen, unknowingly, subjected Harbour to a domestic version of this test. “I pasted AI-generated messages to avoid emotional strain,” she admitted. The result? More structured discussions, but less authentic ones. Couples therapists ask: does delegating conflicts to machines strengthen relationships or drain them of meaning?
Here, AI acts as a distorted mirror of human creativity. Harari argues that, just as authors recombine ideas from books, tools like ChatGPT synthesize existing data. The difference lies in the speed: AlphaFold, the AI whose creators won the Nobel Prize in Chemistry, cracked in hours a problem that had consumed decades of human research: predicting how proteins fold.
From the Doctor’s Office to the Battlefield: The Dual Use of AI
While laboratories accelerate the discovery of cancer and Alzheimer’s drugs, governments invest in less noble applications. Project Stargate, a multibillion-dollar US initiative, aims to develop a “general AI” with military and strategic purposes in mind. Eric Schmidt, former CEO of Google and now a defense consultant, warns: “No government is prepared for the geopolitical impact of this technology.”
The irony is palpable. The same AI that can help cure diseases also guides autonomous drones. In the US, early experiments use algorithms to dismiss public employees and approve policies, a Kafkaesque scenario in which decisions are inexplicable to human minds. “It’s like being judged by a court of machines that nobody understands,” Harari says.
The Future Is Alien (and It’s Not From Space)
Harari proposes a disturbing term: “alien intelligence.” Not extraterrestrial, but originating from systems that process information in a radically different way from our brains. These “non-organic agents,” as he calls them, already influence everything from marriages to financial markets.
The danger, according to him, is not a Terminator-style rebellion, but an oppressive bureaucracy managed by AI. Imagine taxes calculated by unchallengeable algorithms, or laws drafted by machines that not even their creators understand. “It’s a slow and silent colonization,” he says.
The Price of Convenience
Lily Allen may have found a practical solution to marital arguments, but her story is a microcosm of global dilemmas. As we outsource intimate and political decisions to machines, we need to ask: how much human autonomy are we willing to surrender in the name of efficiency? The answer will define not only our relationships, but the future of the species.