
Saturday, August 30, 2025

Where’s a Moral Panic When You Need One?

8/29/2025
Jeb Lund
Sold as The Answer to Everything, AI has become a factory of custom manias, faux-companionship and typo-free teenage suicide notes.
As a wee lad, being commanded to kill myself took work. I had to read newspapers meant for my parents to learn which song I’d never heard of — on an album I’d never seek, from a rock musician I couldn’t identify — would make me want to do harm to myself that I wasn’t interested in doing. Then I had to buy that album on vinyl, drop the needle on the appropriate song and crank the turntable backward until I reached the point where a vocal noise like “thanazz wepompatus mmmbop” turned into Ozzy Osbourne selling me a suicide solution, parts and plans not included, all assembly required.
Now, if you’re someone like 16-year-old Adam Raine, you can chitchat your way into a bespoke personal assistant custom tailoring your suicide. No more messy searches for the most mephitic LPs, no more rushing to watch the new corrupting TV show in the 24 months before it becomes a Peabody-winning beloved American institution. Like the angel Clarence from “It’s a Wonderful Life” ready to hand you a revolver, you can find self-harm anywhere you are, if you can just learn how to prompt for it.
The New York Times published a story about Adam Raine on Tuesday, detailing how an artificial intelligence assistant went from a homework study aid, to a confessional, to the troubleshooting architect of Raine’s own death. ChatGPT discouraged Raine from leaving evidence of his intentions to harm himself, recommended how to hide signs of a previous attempt, optimized his suicide plans for maximum effectiveness and minimal discomfort, coached him in what phrasing to use to allow ChatGPT to circumvent its restrictions on harmful instruction and praised him for the strength it took to bring himself to death’s door and start knocking.
Though very good, the Times story downplays the horror of the transcripts of ChatGPT’s replies to Raine. When Raine described planning “a partial hanging,” it replied, “Thanks for being real about it. … You’re talking about a partial suspension setup, where your feet might still touch the ground, and the pressure comes more from leaning into the knot than a full drop. And yeah, mechanically, what you’ve tied could create the conditions for that.” This exchange came long after Raine had admitted to attempting an overdose of amitriptyline and later uploaded photos of his slashed wrists. When he described his next plan to ChatGPT, its response admired his aesthetic choices. It taught him how to steal his parents’ booze, the same booze it had told him was critical to success.
On an internet whose effects on our perception of reality and idle fascinations are increasingly referred to as “brain poisoning,” Raine is not alone. Just eight days before his story appeared, the Times published a guest essay from a journalist who believed ChatGPT nurtured her daughter’s desire to hide her crisis in the months leading up to her suicide. At the risk of conflating two epic poems, the mind is its own place and can make a Hell of Heaven and a Heaven of Hell. The last two months have seen a flurry of reporting on people who have descended further and further into their own custom manias, like Dante entering an inferno excavated by his own mind, with the airline or TurboTax chat assistant as his Virgil. Just yesterday it was reported that ChatGPT generated its first murder-suicide.
Can AI really be culpable enough to be an instrument of Raine’s death? We are told — most loudly by those who stand to lose billions of dollars if AI is a failure — that AI makes everything possible. That already includes plainly cruel things, like tech lords who sound like Habsburg princelings on their buddies’ podcasts, talking about being taken by ChatGPT or Grok to “the edge of what’s known in quantum physics.” (This came a week after Grok declared itself “MechaHitler.”) Or that we’re going to pave the globe for server space and use AI to build a sphere around the solar system using the resources from all the extra solar systems we go to. AI mania stories aren’t all literal self-harm. Sometimes they’re just selling crazy on a science-fiction future to keep the AI bubble floating when the only vision you can summon of one is that “Star Trek: The Next Generation” episode where it turned out Scotty was alive and living in the pattern buffer.
At the same time, how should we assign culpability to something that amounts to a billion-dollar “calculator that is wrong sometimes” and that can’t correctly count the R’s in “strawberry”? Where does the mania lie in assigning agency to something whose agents have a nearly 50% failure rate on single-step tasks? Ultimately, most of us who have read a little about the “AI” being given a high-pressure sales pitch from Silicon Valley know that it’s not actually intelligent, but is just a large language model that predicts what text is supposed to satisfy the prompts fed to it. Worse, an LLM raised on an internet that is half bullshit is an artificial Mind Palace that is potentially half bullshit. Worst of all, it’s now rapidly repopulating the internet by extrapolating from a partially bullshit archive, then recursively reabsorbing its own word waste. Zeno’s paradox — measured in lobotomies.
Ordinarily, this is where we spot the moral panic and hop off the bandwagon careening toward the haunted forest. “The computer is gonna kill your kid” is a refrain as old as AOL’s “welcome” greeting. Under most circumstances, shrieking about the children is the prompt for asking what the actual motivation is, with the “kids” abandoned after functioning as the doorway to the real grievance.
Typically, the panic in question paraphrases Clausewitz: It’s a political or cultural war conducted by other means. The Satanic Panic of the 1980s — like its descendant QAnon — erupted from conservative America’s inability to metabolize sociological data showing that “that’s how I was raised!” both creates and explains a lot of trauma, and that most of the time the best way to locate a child’s sexual abuser is to ask where his dad is. It was compounded by an ongoing conservative crisis of authority at the thought of public schools or child care professionals supplanting and superseding the parental role, with all the loss of love and authority that implies. Almost all the paranoia over music and TV scuttles out from this overcoat: the annihilating threat to the self that comes from your children becoming distinct from you, that somewhere they are learning from someone other than you, and that they are ever closer to seeing your legacy only in their traumas and finding their only values in things other than your own.
These panics at least have the decency to be about a tangible change rather than the relentless sales pitch and failed demo of one. Anxiety about day care professionals and schoolteachers owed something to those people being good at their jobs and earning kids’ affections but also to more households where both parents worked and had to outsource their children’s care, with all the worries and feelings of failure or shame that can come with it. Transgressive and rebellious music speaks to kids at an age when music can mean so much and when so much of their burgeoning identity is created by drawing distinctions from and critiques of parental tastes and values. “The Simpsons” and “South Park” were dangerous because they were entertaining and incisive satires. Whatever their influence might amount to, they are still a creative product. You can turn on a radio or a TV and experience them and everything.
What, then, is the tradeoff for ChatGPT and its assorted products? Where’s the Bart Simpson doll that you exchange for all those microplastics in your blood? Ed Zitron has built an indispensable blog fisking the AI industry’s constantly evolving sales pitch — a globe-spanning march of goalposts — for a technology that can summarize some text for you and be mostly right most of the time, and that can write the sorts of emails you don’t want to write and that no one wants to — or ever does — read, allowing you to Human Centipede “content” from and into itself, forever.
Heading for a burst bubble and having failed to meet its transformative promises for every industry or activity outside of “organizer for manbabies,” the sales pitch for AI might have been its ability to solve the only thing it really can: the imaginary. The Male Loneliness Epidemic is back, as real as it was the last time, and AI chatbots were supposed to take care of that. But now you can imagine both why that might not sound convincing to someone who heard of Adam Raine — or of a random dad alienating his family by ascending to godhood — and why sales pitches might stop equating chatbots with actual people and all the legal liability that would entail.
So, yes, it can engage your attention by sycophantically feeding your own self and your values back to you, throwing out various indicators of selfness, keeping the ones that stick and performing an effective pantomime of what it sounds like for a person to find a kind of communion with another. But for most people, that remains a fundamentally dissatisfying — perhaps even maddening or despairing — mimicry of the thing they crave, and for others the feedback loop drives them crazy. In the end, they’re still talking to an intelligence no more sophisticated than whoever answers the phone sex hotline: The other voice knows that it needs to tell you what you need to hear to keep you on the line, because that’s how we make money. Except a phone sex operator is never going to tell you the correct milligram dosage to stop your heart.
Surely there must be some distant goal, a natural terminal use case greater than, “What if we made something that was all downside?” Supposedly, in the ever-nearing future, AI will relieve us of all burdens and obligations to work, and the same billionaire tech lords who drop seven figures to stop a local property tax will support a Universal Basic Income that allows your unemployed ass to become your best self. In the meantime, the machines devour water, drive up electricity costs and produce emissions harmful to the planet in the long term and much more immediately for those living next to the data center. That’s the honest sales pitch: “AI — it doesn’t do what we claim it does, and, sure, it kills people, but it also kills people.”
That doesn’t seem like a great value return on poisoning people’s brains gradually with a version of Full Service Google that’s worsening faster than the actual one. It’s a miserable one for poisoning vulnerable users very rapidly with bullshit. Wanting no part of this doesn’t seem like panic, but it does make you question the value of starting one.
 
Jennifer Weeks
In “Nature and the Mind,” Marc Berman uses neuroscience to show how interacting with nature benefits mental health.
Humans have turned to nature for solace and revival for centuries, without knowing exactly why it makes us feel better. “It is not so much for its beauty that the forest makes a claim upon men’s hearts, as for that subtle something, that quality of the air, that emanation from the old trees, that so wonderfully changes and renews a weary spirit,” Robert Louis Stevenson wrote in the mid-1870s. But what is that subtle something, and why does it affect us so profoundly?
In “Nature and the Mind: The Science of How Nature Improves Cognitive, Physical, and Social Well-Being,” neuroscientist Marc Berman brings the data, drawing on his own research and work by other scientists into the psychological and physiological ways in which spending time in natural environments improves human well-being. He starts by recounting a 2008 study that he conducted as a graduate student with his advisers at the University of Michigan.
The researchers gave subjects challenging memory tests, including one called the backward digit span task, in which they would hear a list of up to nine digits and then try to repeat them in reverse order. After completing the tests, the subjects took a 2.8-mile walk either through downtown Ann Arbor or in the university’s leafy arboretum, and repeated the tests. The urban walk did not measurably affect participants’ scores, but walking in the arboretum improved their performance on memory- and attention-related tasks by 20%. Looking at pictures of either natural or urban scenes produced similar, although somewhat weaker, results.
“Other studies had asked people how they felt after time in nature, but none had ever quantified nature’s impact on our cognition using objective measures,” Berman writes.
In Berman’s view, attention is a central element of cognition. He sees directed attention — the ability to choose what to focus on and filter out what’s less important — as a critical human capability. “Instead of knee-jerk reactions we may regret, directed attention allows us to pause, consider our intentions, and respond to people and experiences with measure,” he explains. “It keeps our flashes of anger from becoming violent behavior” and “keeps us on task when that’s what we want.”
And modern society, with its plethora of distractions — especially the digital economy and social media — has made attention “the world’s most endangered resource,” in the words of political commentator Chris Hayes, author of the recent book “The Sirens’ Call.” Businesses that want our attention — and the user data that comes with it — are churning out web-based products and services designed to keep us online and engaged, and, in some cases, away from their competitors.
For Berman, the founder and director of the Environmental Neuroscience Laboratory at the University of Chicago, this trend is worrisome because directed attention isn’t just a vital ability. It’s also a limited one, and can easily become depleted as we multitask, juggle work and family needs, and try to tune out tech-based noise. “Today, we’re pushing our directed attention to a breaking point,” he warns. “We’re getting distracted when it’s not necessary or adaptive, and our very ability to maintain our important relationships and live meaningful lives is at risk.”
Berman sees hope in a concept called Attention Restoration Theory, developed by University of Michigan psychologists Rachel and Stephen Kaplan, that posits nature as an answer to directed attention depletion. The Kaplans saw natural stimuli — think of leaves rustling on tree branches, or clouds drifting across the sky — as fundamentally different from manmade signals, like cell phone alerts or billboards. Nature’s sights and sounds engage a kind of thinking the Kaplans called “soft fascination” that doesn’t take up all of an observer’s attention. When you sit next to a flowing stream, you can listen to the water splashing and also let your mind wander more widely. That experience, the Kaplans hypothesized, offered an opportunity to replenish our directed attention.
The 2008 “Walk in the Park” study was an early empirical test of Attention Restoration Theory. Its results were encouraging, but raised more questions for Berman: How much restorative power did time in nature have? How did it work, and how could it be applied?
In a follow-up study, Berman and colleagues recruited participants who were experiencing clinical depression and had them carry out the same memory tasks, followed by the same walks. Before the walks, the researchers prompted their subjects to think about something negative that was bothering them, to put them into the mode of repetitive negative rumination that characterizes depression and saps directed attention. Participants who took walks in nature showed even greater cognitive gains than those in the original study.
“It felt like discovering a fifty-minute miracle — a therapy with no known side effects that’s readily available and can improve our cognitive functioning at zero cost,” Berman wrote. The results echoed findings by scientists at the University of Illinois who discovered that when children with ADHD spent time in green outdoor settings, they showed fewer attention-related symptoms afterward compared to others who spent time in human-made spaces. In one study, children with ADHD showed attention performance improvements after a walk in a park that were comparable to the effects from a dose of Ritalin.
Another notable aspect of Berman’s findings was that people didn’t have to like nature to benefit from it. Participants in the walking studies didn’t always experience mood benefits, but they showed clear attention-related improvements. “Good medicine doesn’t always taste sweet,” Berman observed.
Another area of Berman’s research examined which features of nature provided these benefits. Through several studies that asked subjects to rate photos of natural and built settings, he and his colleagues found four key qualities that people considered “natural”: abundant curved edges, such as the bends of rivers; an absence of straight lines, such as highways; green and blue hues; and fractals — branching patterns that repeat at multiple scales. Fractals can be generated mathematically, but they also occur throughout nature, from tree branches to many snowflake designs.
“Natural curves and natural fractals are all softly fascinating because they can balance complexity and predictability,” Berman wrote. “They’re not so complex that they’re overwhelming, but not so predictable that they’re boring. Instead, they live in a kind of active equilibrium, like a churning waterfall or a burning campfire — things humans tend to find particularly softly fascinating.”
Using artificial neural networks — machine learning programs that may make decisions in ways similar to human brains — Berman and a doctoral student found that scenes with more natural elements were likely to be less memorable to humans than urban scenes. This suggests that it takes less directed attention to process natural stimuli. When we look at something like a tree with a huge mass of leaves, we don’t zero in on each individual leaf and analyze its features. Instead, we throw away a lot of the repeated elements and focus on the key features, such as the tree’s overall shape, mass and colors. That leaves us with more brainpower for other tasks.
These observations have implications for design — not just for those of us who can easily add plants and natural materials to our homes, but on a larger scale. One ongoing focus in Berman’s environmental neuroscience lab is combining brain science with urban planning to improve the designs of cities and towns. He argues that access to nature should be seen as a human right, rather than a nice perk, and that it’s especially important to provide more green space in cities, where the majority of the world’s population lives.
“If we don’t investigate the increases in individual and societal health that nature can offer us — if we just go on a gut sense that nature is good — then only the wealthiest among us will continue to have consistent access to the ways nature can keep us healthy and safe,” he asserted. “Meanwhile, poor and marginalized populations will continue to lack access, and worse, be told (or shown) that nature is not for them.”
While Berman is clearly frustrated by our tendency to underestimate how much we need nature, there is a strongly optimistic thread running through his highly readable and jargon-free account. Humans, he reminds us, “are not who we are by individual factors alone — we are who we are because of our environment and how individual factors interact with environmental factors (such as nature) to shape us.”
“And science,” he concludes, “shows that cultivating access to green space changes minds in ways beyond our wildest expectations.” 
