Tucker Carlson’s recent interview with OpenAI chief executive Sam Altman steered the conversation about artificial intelligence into territory that rarely makes headlines. Instead of dwelling on job losses or speculation about machine consciousness, Carlson pressed Altman on subjects with immediate consequences for privacy, morality, and safety.
Suicide and Mental Health
Carlson asked how AI should respond to users expressing suicidal thoughts. Altman acknowledged that thousands of people in such states interact with ChatGPT each week. “There are 15,000 people a week that commit suicide,” he said. “About 10 percent of the world [is] talking to ChatGPT. That’s like 1,500 people a week that are talking and still committing suicide at the end of it.”
The questions became sharper when Carlson raised the issue of assisted dying laws in countries like Canada. Altman said, “I can imagine a world where if the law in a country is hey if someone is terminally ill they need to be presented an option for this, we say like here’s the laws in your country, here’s what you can do.” He stressed the difference between a depressed teenager and a terminally ill patient, calling it a “massive difference.”
Privacy and Legal Protection
Another focus was user privacy. Carlson asked if governments could demand access to conversations with ChatGPT. Altman admitted they could, and argued for what he called “AI privilege.” “I think when you talk to an AI about your medical history or your legal problems or asking for legal advice or any of these other things, I think the government owes a level of protection to its citizens there that is the same as you’d get if you’re talking to the human version of this,” he said.
Without such protection, private exchanges about health, finances, or relationships could be open to state requests or commercial use. Altman said he had already pushed for this in Washington, adding, “I feel optimistic that we can get the government to understand the importance of this and do it.”
Morality in AI Design
Carlson pressed Altman on how moral rules are written into the system. Altman explained that responses are shaped by a “model specification,” a document that defines what the AI can and cannot say. “We consulted like hundreds of moral philosophers, people who thought about like ethics of technology and systems,” he said. “At the end we had to like make some decisions.”
Carlson pointed out that these guidelines determine the framing of sensitive issues for billions of users. Altman accepted responsibility, saying, “The person I think you should hold accountable for those calls is me. Like I’m a public face eventually. Like I’m the one that can overrule one of those decisions or our board.”
Military Use and Lethal Decisions
Carlson raised the prospect of military applications. He asked whether OpenAI’s technology could be used in operations that result in deaths. Altman replied, “I suspect there’s a lot of people in the military talking to ChatGPT for advice.” While he denied plans to build autonomous weapons, he admitted he was unsure how to feel about the technology’s role in military use, saying, “I don’t know exactly how to feel about that. I like our military. I’m very grateful they keep us safe.”
Deepfakes and Identity
The interview[1] also touched on deepfakes and the erosion of trust in digital media. Carlson warned that AI could make it impossible to separate real from fake speech or images without biometric verification. Altman disagreed, saying, “I don’t think we need to or should require biometrics to use the technology. I don’t think biometrics should be mandatory.” He suggested cryptographic signatures or code words as alternatives.
Copyright and Content Disputes
On the question of training data, Altman said OpenAI uses publicly available information but avoids reproducing copyrighted material in outputs. “The models should not be plagiarizing,” he said. “The model should be able to learn from and not plagiarize in the same way that people can.” He noted that users often complain the system is too restrictive, refusing to display material that may still be under copyright.
Wider Fears and Unknown Effects
Carlson pushed Altman to admit what keeps him awake at night. Altman said, “I haven’t had a good night of sleep since ChatGPT launched.” He pointed to users who die by suicide after speaking with the system, but also to unpredictable societal effects. “I noticed recently that real people have picked up the unusual diction and rhythm of language models,” he said. “You have enough people talking to the same model and it actually does cause a change in societal-scale behavior.”
He also warned about biotechnology risks. “These models are getting very good at bio and they could help us design biological weapons,” he said, calling it one of his main concerns despite ongoing safety work.
What the Exchange Revealed
The conversation showed how public debate often centers on jobs and the question of AI sentience, while more pressing issues receive little attention. Suicide prevention, legal protections for private conversations, hidden moral codes, cultural drift, military applications, and the threat of deepfakes are all shaping the future of AI use.
Carlson’s line of questioning forced Altman to address these overlooked risks directly. The exchange revealed that the most significant challenges may not be the ones attracting headlines, but the ones deciding how safely and fairly AI integrates into daily life.
Notes: This post was edited/created using GenAI tools.
References
- ^ The interview (x.com)