Mental Health’s Stalled (Biological) Revolution: Its Origins, Aftermath & Future Opportunities
The 1980s, by common consensus, saw a big and remarkably rapid pivot away from previously dominant psychoanalytic and social science perspectives in American psychiatry and toward a so-called medical model foregrounding biology and the brain. The standard understanding is that this happened because, after years of wandering lost in a Freudian desert, the field had finally gained some fundamental new biological understandings of mental illness. The standard understanding is wrong. Nothing of sudden significance had happened on the biological front. There had been no major scientific or therapeutic breakthroughs. Why, then, did the field really pivot? This essay aims to explain. The answer is important, not least because choices made back then have directly shaped the fraught world of psychiatry with which we live today.
In the 1980s, the field of American psychiatry pivoted suddenly and decisively away from previously dominant psychotherapeutic, social scientific, and psychoanalytic approaches to mental disorder, and instead embraced biological, brain-based, and pharmaceutical approaches. Why did all this happen?
For decades, the answer seemed clear: Before the 1980s, American psychiatry was lost in a Freudian wilderness. It had turned its back on all the fundamental principles of medical practice. It had lost interest in rigorous scientific research. It was hobbled by an incredibly sloppy approach to diagnostics. It was in the thrall of fantastical theories and interminable, ineffective treatment practices. Then, sometime in the early 1980s, just as things could hardly get worse, some heroes arrived: biochemistry and neuroscience researchers armed with new science and new treatments. They made clear that the Freudian dinosaurs had to go. And the Freudians, now outed as the charlatans they were, left. The world celebrated, and psychiatry has never looked back. As journalist Jon Franklin put the matter in his Pulitzer Prize–winning series, “The Mind Fixers”:
Since the days of Sigmund Freud, the practice of psychiatry has been more art than science. Surrounded by an aura of witchcraft, proceeding on impression and hunch, often ineffective, it was the bumbling and sometimes humorous stepchild of modern science. But for a decade and more, research psychiatrists have been working quietly in laboratories, dissecting the brains of mice and men and teasing out the chemical formulas that unlock the secrets of the mind. Now, in the 1980s, their work is paying off.1
In the years since Franklin’s series, that basic story continued to make the rounds in both textbooks and popular writings for the public. With time, it took on new elements, such as an insistence that German anatomist and diagnostician Emil Kraepelin was the father of modern psychiatry, not Sigmund Freud. By way of example, Richard Noll’s The Encyclopedia of Schizophrenia and Other Psychotic Disorders told the updated story this way:
It took major advances in medical technology, specifically the computer revolution and the rise of new techniques in neuroimaging, genetics research and psychopharmacology, to swing the pendulum back to Kraepelin’s search for the biological causes of psychotic disorders. Historians of science now regard psychoanalysis as a pseudoscience that inexplicably dominated a subfield of medicine—psychiatry.2
Let us start by conceding the obvious: we have here a great and bracing story, a story with a strong moral message, a story with clear heroes and villains. We also have a story with a purpose: to be inspiring to researchers and members of the general public alike. The only problem with the story is that it is wrong. And not just a little wrong, but wrong in almost all its particulars. And this matters beyond the obvious reason that we should do right by the facts of history. It also matters because it implies that psychiatry, having shaken off the errors of the past, must be today in a stable and upward-trending space, steadily harvesting the fruits of its investments in biological research.
Psychiatry, however, is not in such a space. It is instead in a place of stalemate and uncertainty. On April 1, 2021—in his final essay prior to retiring from The New York Times—long-serving science journalist Benedict Carey told a different story about the state of the field, as he had experienced it over the decades. “When I joined the Science staff in 2004,” he reflected, “reporters in the department had a saying, a reassuring mantra of sorts: ‘People will always come to the science section, if only to read about progress.’ I think about that a lot as I say goodbye to my job, covering psychiatry, psychology, brain biology and big-data social science, as if they were all somehow related.” The truth was, he said, “during my tenure, the science informing mental health care did not proceed smoothly along any trajectory.” It did chalk up the occasional significant discovery (for example, identifying levels of consciousness in brain-injured patients who appear unresponsive), but “almost every measure of our collective mental health—suicide rate, anxiety, depression, addiction—went in the wrong direction.”3 In his 2022 book Healing, Thomas Insel, former director of the National Institute of Mental Health, told a similar story from the vantage point of a long-serving scientific leader in the field:
The scientific progress in our field was stunning, but while we studied the risk factors for suicide, the death rate had climbed 33 percent. While we identified the neuroanatomy of addiction, overdose deaths had increased by threefold. While we mapped the genes for schizophrenia, people with this disease were still chronically unemployed and dying 20 years early.4
The conclusion is obvious: the field is being called to update its image of itself and to forge a path to a different future. To do that successfully, however, it also needs to begin by shedding its attachment to self-serving origin myths and start on a more honest path to understanding how it has arrived in its present state.
When the field declared its liberation from Freud and announced a biological revolution was at hand, nothing of sudden significance had happened on the biological front. There had been no new treatments. All the treatments that were extolled in those years, especially drugs, were thirty years old, products of the 1950s, when the field was supposedly stalled and in the thrall of the Freudians. There had also been no major scientific breakthroughs. The most significant scientific advances in the field, such as they were, had also happened more than a generation earlier, during the alleged Freudian dark ages. In the 1950s and early 1960s, scientists, largely working at the NIH, had shown that different drugs can act to raise or lower levels of various newly discovered neurochemicals, with names like dopamine, norepinephrine, and serotonin. At the time, no one had used that work as the basis for declaring a wholesale revolution in mental health care or treatment.
Why then did the field really pivot? The short answer is: not because of science, but because of complacency, arrogance, and professional overreach that led to an internal revolt. The long answer, however, is more illuminating and worth taking time to understand.
In the decades just before World War II, American psychiatry was an eclectic patchwork of practices and perspectives, some biological and some more environmental. The biologically oriented psychiatrists worked mostly in state hospitals and looked after the severely and chronically mentally ill. While there had been a tendency since the early twentieth century to see hospital psychiatry as a backwater branch of medicine, the 1930s had also seen a modest rise in its public reputation, as new somatic interventions like shock and surgical treatments were introduced.5 Even lobotomies, today remembered as one of the most barbaric and ill-considered technologies ever employed in the history of psychiatry, were back then often discussed by the press in relatively optimistic ways.6
The more environmentally oriented psychiatrists, working largely outside the hospital system, were meanwhile focused on a very different kind of mission: to identify and treat people who were not yet truly mentally ill, but who were also not quite right: troubled people, nervous people, neurotic people, maladjusted people. Virtually everyone admitted that some of these people might be incorrigibly defective, and therefore best handled through institutionalization in a colony of the “feeble-minded” or through more radical measures like sterilization.7
Nevertheless, there was a general view that, for many others, the roots of their troubles lay not in some biological defect but in bad habits, bad neighborhoods, and bad families. This suggested that many might still be salvageable. To rescue them, this branch of psychiatry invented a wide range of new institutions and programs: new kinds of public education efforts, new forms of outreach into schools and communities, new professions like psychiatric social work, and new institutions like child-guidance centers and psychiatric outpatient clinics. By the 1930s, many of the psychiatrists involved in these programs had also discovered psychoanalysis and were incorporating Freudian ideas about unconscious conflict, fantasy, and early childhood trauma into the ways they thought about their patients.8
Through the 1920s and 1930s, the biological and environmental approaches to managing mental distress, disorder, and deficiency coexisted, more or less equitably if a bit uneasily. World War II changed that dynamic. When the war came, it was primarily the psychiatrists who were focused on “nearly normal” populations of patients who stepped up. Their tools and approaches seemed far better suited to treating the epidemic of traumatized soldiers: patching them back together using techniques they had used on their neurotic and maladjusted patients back home, such as quick psychotherapy and suggestive therapy. They were sent into the field, and many documented the impressive results of their techniques. “The stuporous become alert, the mute can talk, the deaf can hear, the paralyzed can move, and the terror-stricken psychotics become well-organized individuals.”9
Widely seen as a team that had gotten the job done—even as it was quietly recognized internally that they had fallen short in many ways—the Freudian-leaning contingent of psychiatry next took the position that, because they had helped win the war in ways that their biological colleagues had not, it was they who were now best placed to maintain the peace.10 The battle mentality that had served them so well during World War II now had to be applied to the urgent mental health needs of civilians in a dangerous postwar world, they said. In May 1948, William Menninger—who had served during the war as the Chief Psychiatric Consultant to the Surgeon General of the Army—met with President Harry Truman, and asked if he would be willing to send “a message of greeting” to be read at the upcoming annual meeting of the American Psychiatric Association. Truman approved the following statement—probably written by Menninger himself:
Never have we had a more pressing need for experts in human engineering. The greatest prerequisite for peace, which is uppermost in the minds and hearts of all of us, must be sanity—sanity in its broadest sense, which permits clear thinking on the part of all citizens. We must continue to look to the experts in the field of psychiatry and other mental sciences for guidance in the evaluation of our mental health resources.11
“The greatest prerequisite for peace . . . must be sanity.” This hardly seems like a medical project in the ways that most people would understand the term—because it really wasn’t. It was a political project. Building on the environmentalist thinking of the interwar years that had produced social workers and child-guidance clinics, Menninger and many of his colleagues had come to believe that most social problems had their origins in individual psychological deficits. For this reason, psychiatry in the postwar era was crucial for any and all efforts to tackle the great social and political threats of the age: the allure of authoritarian governments, the persistence of anti-Semitism, and the scourge of chronic poverty, social deviance, crime, and social unrest. In 1946, a group of bold psychiatrists headed by Menninger fashioned themselves into an organization called the Group for the Advancement of Psychiatry (GAP) to map out a new and expansive agenda for their field.12
As they shored up their authority, GAP’s leadership also went to the trouble of explicitly attacking the treatments that had once won biological psychiatry some claim to respectability: shock and surgical treatments. Their very first white paper targeted electroshock treatment, warned against its “reported promiscuous and indiscriminate use,” and insisted that it should never be seen as a primary treatment in its own right but employed, if at all, only as an “adjuvant in a total psychiatric treatment program” centered on psychotherapy and other psychosocial interventions.13
That same year, Truman was persuaded to sign legislation that would establish the very first federal agency devoted to psychiatry. Tellingly, the decision was made to call the agency not the National Institute of Mental Illness or the National Institute for Insanity, but the National Institute of Mental Health (NIMH). The choice of name was intended to signal that the institute was charged to extend beyond a focus on disease, beyond a conventional medical agenda.14 The first director of the NIMH, Robert Felix, had a primary background in public health and a keen interest in the psychosocial causes of drug addiction. As he explained, “I was interested in the stories I was getting from these people about why they relapsed to drugs or why they got on drugs in the first place. I’d get stories like bad companions, disappointment with life, I couldn’t stand the pressure.”15
Felix’s disciplinary leanings helped ensure that, from the beginning, the new NIMH prioritized a community-minded, social science–inflected approach to mental health and illness above the somatic concerns of the old hospital-based psychiatry (though the older concerns were not wholly absent). In 1952, Felix asked a psychoanalyst named Robert Cohen to take charge of developing the NIMH intramural research portfolio. Cohen brought an expansive, interdisciplinary vision to the charge, with ample space for social science, developmental, and psychoanalytic perspectives, including a laboratory of socioenvironmental studies.16
It was obvious which way the winds were blowing. Already, by 1947, more than half of all American psychiatrists (the elite half) worked in private practice or at outpatient clinics. By 1958, only about 16 percent of psychiatrists—many of them foreign nationals—were working in state hospitals.17 Two years later, 95 percent of medical schools reported teaching psychoanalytic and psychodynamic methods, and virtually every departmental chairperson affirmed that psychodynamic approaches dominated the field.18
Contrary to what many of us today might suppose, the arrival of antipsychotics, anxiolytics, and antidepressants in the 1950s was not widely perceived as a threat to any of this. All products of clinical serendipity rather than biological research, the drugs were, to be sure, almost immediately embraced by clinicians (including general practitioners) for their practical benefits. Within psychiatry, hospital administrators welcomed especially the ability of the class of drugs then known as “major tranquilizers” to manage people with agitated psychoses, and speculated that their existence might even allow the hospitals to begin to discharge more patients.19
Nevertheless, the intellectual leadership within psychiatry was reluctant to pronounce the drugs to be some kind of game-changer for the field. Looking back in 1975, NIMH Director Robert Felix explained his own position at the time. Electroconvulsive treatment, insulin shock therapy, and lobotomy, he recalled, had also once been hyped as game-changers, only to fall short of expectations and cause more harm than good. What reason was there to think that the drugs would be any different?
We had all been praying for the pill or a draught of medicine or whatnot which would cure the madman. Well, we would sit, and over and over again, something would come up, and it was the answer. Shock was. Insulin was. Lobotomy was another one. One thing after another was going to cure all kinds of ills. . . . [For this reason] I wanted to approach [the new drugs] a little more conservatively and I think I was wrong.20
Nevertheless, some mental health activists at the time (led by journalist-turned-lobbyist Michael Gorman) began to put pressure on Congress to allocate funds to the NIMH so its researchers could study these drugs more systematically. And, under pressure, Felix finally agreed in 1956 to create a new research unit within the NIMH: the Psychopharmacology Service Center (PSC). The purpose of this center was to figure out strategies for evaluating the efficacy of the drugs. Did your study need drug-naive subjects? Did you need a placebo in your control group? How long would you look for possible improvement, and what measures would you use to assess it? All these questions needed to be answered, and a young psychiatrist named Jonathan Cole was hired to spearhead the effort.21 The upshot was that not only was the staff at the PSC able to demonstrate that new drugs like chlorpromazine worked better than placebos, but along the way, they also largely invented the toolkit for a new field called clinical psychopharmacology.
By the mid-1950s, some of the new antidepressant drugs had begun to inspire new kinds of laboratory research. More specifically, physiologists at the National Heart Institute of the NIH (not the NIMH itself) had begun to experiment with the behavior and physiology of laboratory animals by first dosing the animals with reserpine (one of the new major tranquilizers), and then injecting them with one of the new antidepressants. They found that a protocol like this first sedated and then energized the animals, while simultaneously altering levels of newly discovered chemicals in their nervous systems (serotonin and norepinephrine). The ongoing efforts to figure out the mechanism responsible for these changes led to Julius Axelrod being awarded a Nobel Prize in 1970 for his work on the ways antidepressants act to inhibit the reuptake of certain neurotransmitters in the synapse.22
Even with these developments, Freudian and psychosocial ideas still dominated both research and practice. Few if any drew the conclusion, at least publicly, that psychopharmaceutical researchers’ wins justified calling for a radical changing of the guard. Quite the contrary: in the years following President Johnson’s declaration of a “war on poverty” in 1964, the NIMH itself doubled down on its commitment to psychosocial research, investing in projects like ongoing outreach for troubled children; understanding the effects of poverty, social isolation, and racism on mental health; and addressing social ills such as juvenile delinquency and violence.
Among their many projects in these years, however, none was more consequential than the so-called community mental health initiative. It envisioned a dramatic recentering of the nation’s care of the severely mentally ill away from the century-old state hospital system and toward community-based care that would allow patients to live among ordinary people in the neighborhoods from which they came.
Discontent with the state mental hospital system went back to at least the immediate postwar years, when conscientious objectors undertook a campaign to expose the hospitals’ appalling conditions.23 The most famous of the exposés was a Life magazine spread called “Bedlam 1946.” The photographs in this spread were self-consciously designed to remind people of other images recently seared into their imaginations: those of the Nazi concentration camps.
Thousands spend their days—often for weeks at a stretch—locked in devices euphemistically called “restraints”: thick leather handcuffs, great canvas camisoles, “muffs,” “mitts,” wristlets, locks and straps, and restraining sheets. Hundreds are confined in “lodges”—bare, bedless rooms reeking with filth and feces—by day lit only through half-inch holes in steel-plated windows, by night merely black tombs in which the cries of the insane echo unheard from the peeling plaster of the walls.24
The idea that mental health care was most successful when carried out in the community was also not new. It had its origins in so-called “first-aid” psychiatry: early-intervention care for soldiers during World War II, carried out in settings that kept the men close to their platoons and friends. After the war, when psychiatry began to turn its attention to the mental health challenges found in the civilian population, many remembered these wartime experiences and wondered if there were lessons for the postwar era. Should psychiatry still privilege an approach to care that involved shipping mentally ill people away to remote hospitals, disconnecting them from familiar communities and neighborhoods? Was there possibly another way forward?
Even with all this restless desire for change, no one had been able to imagine a workable alternative to the mental hospital for the seriously or chronically mentally ill. For decades, it was simply assumed that such people either could not care for themselves outside of an institutional setting, that they would pose a risk to society if they lived in the community, or both.
What was different now? Drugs. Not because the leaders in the field believed that the drugs were key to a new biologically based approach to mental health care, but because they were persuaded that the drugs were critical managerial tools for realizing their bold policy goals. The argument was that even if the drugs did not cure any ailment, they might nevertheless stabilize many patients to the point at which they could be discharged to the community. In the optimistic words of John F. Kennedy when he announced his hopes for a new community mental health care program in February 1963: “This approach relies primarily upon the new knowledge and new drugs acquired and developed in recent years which make it possible for most of the mentally ill to be successfully and quickly treated in their own communities and returned to a useful place in society.”25
By October 1963, Kennedy had signed the relevant legislation, and the NIMH began to hand out grants for states to build community mental health centers. The centers started to get built, though not as many as had been expected, and with staffing levels that often fell far short of need. The states nevertheless began to release the patients from their hospitals in great numbers. To get a sense of the scale of the shift: in 1955, there were 350 state hospitals with a resident population of about 560,000. By 1977, there were 160,000 patients in public mental hospitals, a drop of 400,000 (71 percent) in just two decades. By 1994, there were only about 70,000 patients being treated in mental hospitals around the country—and this during a time when the U.S. population as a whole nearly doubled (from 150 million to about 260 million). The state governors embraced these changes as an opportunity to slash budgets. The hospitals had always cost too much anyway.26
The drugs were supposed to stabilize all these people sufficiently to make it possible for them to be looked after in the community, but it soon became clear that the drugs achieved this imperfectly. Medicated patients were still often unwell on many levels: they lacked motivation, they still acted in ways that discomfited their neighbors, and they failed to keep appointments. Moreover, because the drugs also produced significant unpleasant side effects, many patients, once they were released from the hospital, stopped taking them. By the late 1970s, countless mentally ill people who had previously lived in hospitals were now living instead in dreary for-profit boarding houses with little health care, on the streets, or in jails. Or, if they were lucky, they were living with their aging parents, who felt betrayed by the system, were desperate for better care and resources, and were becoming increasingly angry.27
Trouble started to brew for the psychiatrists driving all of these programs, and the increasingly recognized failures of deinstitutionalization were only part of the reason. The 1970s brought a perfect storm of crises that shook the palace of their authority. Protests against the Vietnam War began to target not just the government but also psychiatry, as clinicians working in the VA hospitals found themselves accused of covering up for the government’s failings by withholding the truth about what the war was doing to soldiers’ mental health.28 Feminism was on the rise, and in that context, psychoanalysts found themselves accused of covering up the scandalous truth of childhood sexual abuse.29 Gay, lesbian, and bisexual activists began to picket outside meetings of the American Psychiatric Association, insisting that they were sick and tired of having their love lives made into a sign of disease.30 Multiple critics associated with a movement sometimes called “antipsychiatry” observed that psychiatry did not seem very interested in conventional medical issues, and suggested that the field cared only about managing social deviance.31 As a recession hit the American economy in the mid-1970s, with all these critiques in the air, health insurance companies began to ask why they should reimburse clinicians who didn’t seem to practice medicine, and didn’t seem to know or care much about disease.
As the storms whipped around psychiatry, the out-of-power biological wing of the field sensed an opportunity and, perhaps, some responsibility to step up. Enough was enough. The field had gotten itself into its present troubles, they argued, by being both unscientific and hubristic. It was time to pull back and get down to brass tacks—become “medical” once more. Or, to put the matter more bluntly, it was time for the biologists to be in charge. As Samuel Guze, one of these biological psychiatrists, mused in 1994: “One of the things we began to realize is that there were people around the country who felt that they wanted something different and were looking for someplace to take the lead.”32
How did they make their case? Tellingly, while they gestured to the research from the 1950s and 1960s, their arguments were largely waged on a platform of common sense. Of course psychiatry is a branch of medicine! Of course mental illnesses are real diseases with real biology! Of course the field should respect scientific methods! Of course exact diagnosis is important! How could we have ever let the situation degenerate to the point where such things could be questioned?33
In 1978, Gerald Klerman, director of the Alcohol, Drug Abuse and Mental Health Administration (which at the time oversaw the NIMH and several related institutes), appointed Herbert Pardes as director of the NIMH and charged him with turning the institute around. The organization needed to shed its long-standing psychosocial activist mission and align itself with the medical mission of the rest of the NIH. In pursuing this project, Pardes found an unexpected but ultimately very powerful ally: families of schizophrenic patients. Families who had lived through the traumas of deinstitutionalization and the chronic stresses of trying to navigate a community-based mental health system that generally failed to deliver adequate services. Families who, at the same time, had been told by psychoanalytic psychiatrists that they—and especially the mothers—were responsible for making their children sick in the first place.
In 1982, a young psychiatrist named E. Fuller Torrey published a book titled Surviving Schizophrenia. The audience for the book was not patients or doctors but families. They too needed a manual to help them “survive” the disorder, he said, especially in light of the enormous burden now being placed on them. Surviving Schizophrenia opened by making perfectly clear that these families were as much victims as their offspring. Schizophrenia, Torrey told them, was “now definitively known” to be a “brain disease,” and they could best help both themselves and their children by working to persuade the government and the profession to acknowledge this fact and commit to biological solutions for a biological problem.34
They took this advice to heart. Taking the name of NAMI—the National Alliance for the Mentally Ill—these families embarked on a stunningly successful media, fundraising, and governmental pressure campaign to redirect psychiatry along biological lines. “Remedicalization is what we families want,” declared one of them in 1979.35 Pardes, who attended their first meeting that same year, marveled at their energy and effectiveness.36 One anonymous NIMH official later called NAMI, ferocious as they were, “the barracuda that laid the golden egg.”37 It was perhaps an unlikely partnership, but it worked because both families and a profession in crisis had decided, for different reasons, that biology was a road to redemption for the profession and a fresh start for patients.
And so it went that biology won the day—partly with the help of those activists and partly because Freudian psychiatry proved unable to recover from all of the self-inflicted wounds of the 1970s. In 1980, an initially humdrum project to revise the profession’s diagnostic and statistical manual turned into an opportunity to expunge virtually all psychoanalytic language and concepts from the universe of psychiatric diagnostic categories, and (in the eyes of many) to set the field up for a new era of rigorous, biological practice and research.38 In 1997, Edward Shorter summed up the 1980s consensus (as well as his own at the time):
The appearance of DSM-III was . . . an event of capital importance not just for American but for world psychiatry, a turning of the page on psychodynamics, a redirection of the discipline towards a scientific course, a reembrace of the positivistic principles of the 19th century, a denial of the antipsychiatric doctrine of the myth of mental illness. . . . Freud’s ideas, which dominated the history of psychiatry for the past half century, are now vanishing like the last snows of winter.39
The biological psychiatrists had declared victory, but had done so in the absence of any new radical breakthroughs in biological understanding or treatment. Their next task was to deliver on the promises that most people thought they had already kept. Reality needed to catch up with rhetoric. Initially, some felt that the 1990s would be the decade when it would all come together. Biological research would finally get the money it had been starved of for so many decades, and new insights and evidence-based treatments would follow in short order.40
Early on, the field was particularly bullish about the potential of new brain-imaging technologies (both PET and fMRI) to be game-changers. The hope was that, in due course, technologies like these would allow psychiatrists to look at the brains of their patients the way a cardiologist looks at a patient’s heart using an angiogram—in order to “see” what is wrong. Intensive investment in these technologies failed, however, to move knowledge of mental illness forward in the definitive ways that so many psychiatrists had hoped. There were plenty of findings, but they varied across studies and proved hard to replicate and interpret.41 Above all, the new neuroimaging work failed to have any appreciable impact on how the overwhelming majority of patients were diagnosed and treated. As Thomas Insel, director of the NIMH, soberly concluded in 2010: “During the so-called Decade of the Brain, there was neither a marked increase in the rate of recovery from mental illness, nor a detectable decrease in suicide or homelessness—each of which is associated with a failure to recover from mental illness.”42
What about genetic research? In the late 1980s, it briefly looked like there had been a decisive breakthrough, when researchers claimed that a certain segment of DNA on a particular chromosome was found in some 80 percent of people suffering from manic depression (at least in the particular Amish community where the work had been carried out).43 But that turned out to be a false lead. The original hope that there would be a single “bipolar gene” was deemed naive and gave way to a hunt for multiple genes.44 This was followed by a recognition that genetic risk factors might be shared across disorders. All of it led to a growing, reluctant understanding that research into the genetics of mental disorders was going to be very complicated, and that it could be not years but decades before any of the work yielded practical results for patients. In 2001, David Dunner, a leading researcher on mood disorders, reflected wistfully on this period of recalibration:
I am disappointed that we have never identified the “bipolar gene.” . . . I realize now how complicated it is and how naïve we were. Very good people are now looking for the genes, not a single gene. I am not going to be the one to find them, but it would be nice to know that there really are genes when patients ask, “Is this a genetic disorder?” and I can only say, “Well, we think so.”45
There were also no fundamental breakthroughs in drug development. New variants on older drugs, like the SSRI (selective serotonin reuptake inhibitor) antidepressants and newer antipsychotics such as clozapine, were an improvement in the sense that they caused fewer acute side effects than their predecessors (no small thing). Their milder side-effect profiles also meant they tended to be far more widely prescribed than their counterparts had been. But they generally did not work better than the older drugs, they did not work for everyone, and over time their own long-term health consequences began to become clearer.46
Nevertheless, and rather paradoxically, this was still the era when drugs began to dominate virtually all conversations about how to handle mental suffering, certainly among psychiatrists (as opposed to psychologists and social workers). This new consensus, however, did not happen simply because everyone now “believed” in the medical model, or because prescribing privileges were one of the few things that still allowed psychiatrists to assert their identity as physicians, or because in the 1990s psychoanalysis continued to suffer a steady onslaught of blows to its reputation. All these factors were real and relevant, but by the late 1980s they were dramatically amplified by a critical mass of clinicians and researchers who had aligned their professional interests with the commercial interests of the pharmaceutical industry. Feeling like the poor relations of the medical world, and financially pinched by the incursion of psychology and social work onto their turf, many psychiatrists found the siren call of consulting work difficult to resist. In 2008, disclosure reports filed by 273 speakers at the annual meeting of the American Psychiatric Association revealed that, among them, the speakers had signed 888 consulting contracts and 483 contracts to serve on so-called speakers’ bureaus for drug companies.47
None of these developments, though, changed the bottom line: there had been no significant scientific advances to guide drug development since the 1960s. In spite of what the public believed, the period from the 1990s through 2010, when drugs dominated conversations about mental health, was in fact, as one Nature Reviews article admitted, “a barren time for the discovery of novel drugs for psychiatric disorders.”48 As their patents ran out, as they struggled with a growing and puzzling placebo-effect problem, and as nothing genuinely new seemed to be coming through the pipeline, the drug companies began to abandon the field. They simply could not figure out any new ways to make big money anymore.49
And then came one final blow. Psychiatry’s diagnostic manual, the DSM, once hailed as a foundational text for a new, medically minded psychiatry, came under public attack, and not just by disgruntled outsiders (that had been happening since the 1990s) but by informed insiders. Most pointedly, in 2013, Insel, then director of the NIMH, declared that the DSM had not only failed to deliver on its promise to drive biological research but had actually impeded such research, adding: “Biology never read that book.” He announced that the NIMH would no longer be using it as a basis for any of its research initiatives. It was a remarkable slap-down. This, after all, was the book that was supposed to act as the foundation for psychiatry’s biological mission.50
The DSM upset happened in 2013. Two years later, in 2015, Insel made another move that suggested how deep the malaise within the field now ran. He declared that he was resigning from the directorship of the NIMH and stepping away from biological research because, despite billions of dollars in investment, it simply had not been able to deliver on its promises. A year or two later, he told a journalist what had driven his thinking at the time:
I spent 13 years at NIMH. . . . I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs—I think $20 billion—I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness. . . . I hold myself accountable for that.51
The conclusion seems clear. The “revolutionary” biological psychiatry that was born in the 1980s had, by 2017 or so, largely run into the sand. It simply had not been able to advance at the pace needed to maintain its relevance in the face of the urgent mental health needs of the times.
A year or two after that moment of confession, though, there were some signs that the story around drugs might be shifting for the first time in years. In 2019, the FDA approved Janssen Pharmaceuticals’ request to market what some hailed as the first truly new kind of antidepressant in decades: esketamine, a reworked version of an old veterinary anesthetic better known to most as the trance-inducing party drug Special K. Later that same year, in November, the FDA designated the psychedelic psilocybin (the active compound in “magic mushrooms”) a breakthrough therapy for severe depression. The “breakthrough therapy” designation is reserved for drugs deemed to have so much promise that the FDA wants to expedite the process of bringing them to market.52 In July 2022, the U.S. Department of Health and Human Services under the Biden administration indicated that the FDA was also now on track to approve, within two years, not just psilocybin but also MDMA (ecstasy), as treatments for depression and post-traumatic stress disorder, respectively.
Both psilocybin and MDMA are currently classified as Schedule I drugs under the Controlled Substances Act, meaning they had previously been deemed to have both no recognized medical use and a high potential for abuse. The new drive to reframe them as promising psychotherapeutic tools is of course partly a response to the flight of the pharmaceutical industry from the mental health sector, and to the sense that something has to be done.53 But we also need to understand these developments as part of a larger political story: the growing backlash against the legacies of the 1970s and 1980s War on Drugs, a phenomenon that became shamefully racialized, especially in the United States. In that context, some have already begun to call attention to the ongoing, if quieter, racial politics operating behind the partial rehabilitation of the psychedelics. Efforts to decriminalize psychedelics in the absence of a more wholesale review of the relationship between currently illegal drug use and our carceral system, they say, represent a kind of “psychedelic exceptionalism” that implicitly privileges the experiences of the wealthy and the white.54
Both hope and hype seem to have returned, at least in this one modest sector of the field. For the first time in decades, we see newspapers announcing a new “revolution” in mental health care.55 We see investors getting excited: the market for psychedelic substances has been projected to grow from $2 billion in 2020 to $10.75 billion by 2027.56 We learn from a new generation of company websites that we are no longer dealing with the psychopharmaceutical industry of our parents’ or grandparents’ generation. This new version of pharma is no longer big but intimate. It is no longer run by middle-aged white men but by a new generation of diverse visionaries. It “thinks differently” than the industry that failed patients for so long, and is “redefining” the field so that “unmet needs” can finally be addressed.57
The story here is unfinished, but there is good reason to think that future scholars will go far if they follow the money. It is notable, for example, that Compass Pathways recently (in 2021) came under scrutiny for its allegedly “scorched earth” approach to filing international patents on multiple aspects of its treatment protocols and target disorders.58 Meanwhile, although the therapeutic benefits of these developments for patients remain unclear, the turn to psychedelics does not represent an obvious professional win for biological psychiatry, at least not the kind of biological psychiatry that has dominated the field for the past forty or more years. On the contrary, the psychedelic therapies challenge a basic assumption of conventional biological psychiatry: namely, that the way to address symptoms of depression or anxiety is to take a pill and wait for one’s symptoms to improve. The model here is different: one ingests a substance in order to create a mind-altering experience, supported by one or more trained psychotherapists, that is supposed to result in new and enduring insights and emotional recalibrations. At a 2017 conference on the promise of psychedelics, Insel noted that he was struck by the way people involved in this new work emphasized that it was “psychedelic-assisted psychotherapy.” In all his years as a psychiatrist and as director of the NIMH, he commented wryly, he had never heard anyone talk about “antidepressant-assisted psychotherapy.”59
Back in the 1980s, biological psychiatry largely succeeded in stepping in and setting the agenda and funding priorities for the field of mental health care as a whole. It could do so because the field was at risk of losing its medical identity, as well as its credibility, and there was little perceived room for compromise. But it is no longer the 1980s. The field no longer needs to protect itself from imagined powerful rivals. There is an opportunity now for a reset, in which the field locates itself not at the top of a hierarchy but within a larger, more collaborative ecosystem of mental health research and care. Embedded in such an ecosystem, biological psychiatry could learn to discern when its approaches should lead and when they should play a smaller role.
Here is just one recent example of when its approaches should not dominate. In May 2021, responding to the nationwide reckonings with racial inequity triggered by the murder of George Floyd, the American Psychiatric Association declared that the theme of its annual meeting would be “Finding Equity through Advances in Mind and Brain in Unsettled Times.” It was a remarkably unstable title, one that seemed to be trying to hold onto a conventional medical research mission (“advances in mind and brain”) even as it acknowledged the “unsettled times” in which the field now had to pursue that mission.60 There is little reason to suppose that a conventional research strategy focused on “advances in mind and brain” will help the field “find equity.” Brain scientists and geneticists can be as committed to a social and political mission of reform as anyone else, but they do not possess the tools or expertise to lead the way. Something different is needed, and, if this point gets made more and more plainly, we are likely to see the emergence of new kinds of leaders who will insist on funding priorities, research questions, and forms of training for clinicians that have little to do with advancing conventional biological research. And that is okay. Knowing when to step up and when to step back is arguably one of the most powerful acts of leadership that any discipline or field can offer. This is the kind of future I wish for the field of American psychiatry.