Adulthood is a time for achieving productive vocations and for sustaining close relationships at home and in the community. These aspirations are readily attainable for adults who are mentally healthy. And they are within reach for adults who have mental disorders, thanks to major strides in diagnosis, treatment, and service delivery.
This chapter reviews the current state of knowledge about mental health in adults, along with selected mental disorders: anxiety disorders, mood disorders, and schizophrenia. These disorders are highlighted largely because of their prevalence in the population and the burden of illness associated with each. The chapter then turns to service delivery, describing the effective organization and range of services for adults with the most severe mental disorders. It also reviews an array of other services and supports designed to provide comprehensive care beyond the formal therapeutic setting.
Mental health in adulthood is characterized by the successful performance of mental function, enabling individuals to cope with adversity and to flourish in their education, vocation, and personal relationships. These are the areas of functioning most widely recognized by the mental health field. Yet, from the perspective of different cultures, these measures may define the concept of mental health too narrowly. As noted in Chapter 2, many groups, particularly ethnic and racial minority group members, also emphasize community, spiritual, and religious ties as necessary for mental health. The mental health profession is becoming more aware of the importance of reaching out to other cultures; an innovation termed “linguistically and culturally competent services” is pertinent both to the field’s conception of mental health and to the diagnosis and treatment of mental disorders.
An assortment of traits or personal characteristics has been viewed as contributing to mental health, including self-esteem, optimism, and resilience (Alloy & Abramson, 1988; Seligman, 1991; Institute of Medicine [IOM], 1994; Beardslee & Vaillant, 1997). These and related traits are seen as sources of the personal resilience needed to weather the storms of stressful life events.
Stressful life events in adulthood include the breakup of intimate romantic relationships, death of a family member or friend, economic hardship, role conflict, work overload, racism and discrimination, poor physical health, accidental injuries, and intentional assaults on physical safety (Holmes & Rahe, 1967; Lazarus & Folkman, 1984; Kreiger et al., 1993). Stressful life events in adulthood also may reflect past events. Severe trauma in childhood, including sexual and physical abuse, may persist as a stressor into adulthood, or may make the individual more vulnerable to ongoing stresses (Browne & Finkelhor, 1986). Although some kinds of stressful life events are encountered almost universally, certain demographic groups have greater exposure and/or vulnerability to their cumulative impact. These groups include women, younger adults, unmarried adults, African Americans, and individuals of lower socioeconomic status (Ulbrich et al., 1989; McLeod & Kessler, 1990; Turner et al., 1995; Miranda & Green, 1999).
Anxiety disorders are the most prevalent mental disorders in adults (Regier et al., 1990), and they affect twice as many women as men. A broad category, anxiety disorders include panic disorder, phobias, obsessive-compulsive disorder, post-traumatic stress disorder, and generalized anxiety disorder, among others. Underlying this heterogeneous group of disorders is a state of heightened arousal or fear in relation to stressful events or feelings. The biological manifestations of anxiety, which are grounded in the “fight-or-flight” response, are unmistakable: they include a surge in heart rate, sweating, and tensing of muscles. But this is certainly not the whole picture. Although the full array of biological causes and correlates of anxiety is not yet in our grasp, numerous effective treatments for anxiety disorders exist now. Treatment draws on an assortment of psychosocial and pharmacological approaches, administered alone or in combination.
Mood disorders take a monumental toll in human suffering, lost productivity, and suicide. Moreover, when unrecognized, they can result in unnecessary health care use. Mood disorders rank among the top 10 causes of worldwide disability (Murray & Lopez, 1996). Major depression and bipolar disorder are the most familiar mood disorders, but there are others, including cyclothymia (alternating manic and depressive states that, while protracted, do not meet criteria for bipolar disorder) and dysthymia (a chronic, albeit symptomatically milder, form of depression). The causes of mood disorders are not fully known. They may be triggered by stressful life events and enduring stressful social conditions (e.g., poverty and discrimination). With the exception of bipolar disorder, mood disorders, like the anxiety disorders, are twice as common in women as in men. One subtype, seasonal affective disorder, in which episodes of depression tend to occur in the late fall and winter, is seven times more common in women than in men (Blumenthal, 1988). Many psychosocial and genetic factors interact to dictate the appearance and persistence of mood disorders, according to the biopsychosocial model presented in Chapter 2.
Mood disorders, like anxiety disorders, can be treated with a host of effective pharmacological and psychosocial treatments. Either type of treatment is effective for about 50 to 70 percent of patients in outpatient settings (Depression Guideline Panel, 1993). Severe depression seems to resolve more quickly with pharmacotherapy (Depression Guideline Panel, 1993) and may be helped further by multimodal therapy, the combination of pharmacotherapy and psychotherapy (Thase et al., 1997b). Despite the efficacy of treatment, a surprising fraction of those with mood disorders go untreated (Katon et al., 1992; Narrow et al., 1993; Wells et al., 1994; Thase, 1996). The foremost barriers to treatment include cost, stigma, and problems in the organization of service systems that contribute to the underrecognition of these disorders.
Schizophrenia affects about 1 percent of the population, yet its severity and persistence reverberate throughout the mental health service system. Schizophrenia is marked by profound alterations in cognition and emotion. Symptoms frequently include hearing internal voices or experiencing other sensations not connected to an obvious source (hallucinations) and assigning unusual significance or meaning to normal events or holding false personal beliefs (delusions). The course of illness in schizophrenia is quite variable, with most people having periods of exacerbation and remission. Schizophrenia had once been thought to have a uniformly downhill course, but recent research dispels this view: long-term follow-up studies show that many individuals with schizophrenia significantly improve and some recover (Ciompi, 1980; Harding et al., 1992). Although the causes of schizophrenia are not fully known, research points to the prominent role of genetic factors and to the impact of adverse environmental influences during early brain development (Tsuang et al., 1991; Weinberger & Lipska, 1995; Andreasen, 1997b). Newer pharmacological treatments are at least as effective as older ones and have fewer troubling side effects.
Effective treatment of schizophrenia extends well beyond pharmacological therapy: it also includes psychosocial interventions, family interventions, and vocational and psychosocial rehabilitation. For those patients who are high service users, treatment should be coordinated by an interdisciplinary team that provides high-intensity, community-based services (Lehman & Steinwachs, 1998a). The prototype for this intensive case-management approach, which is useful for persons with other severe and persistent mental disorders as well, is assertive community treatment, described more thoroughly later in this chapter. Among the services included in this approach is substance abuse treatment; its inclusion stems from findings that about half of patients with serious mental disorders (including schizophrenia) develop alcohol or other drug abuse problems (Drake & Osher, 1997). Even though research has generated a range of recommendations for effective treatment of schizophrenia, it is alarming that fewer than 50 percent of patients actually receive many of the recommended treatments and that the gap is more pronounced among African Americans (Lehman & Steinwachs, 1998b).
The social consequences of serious mental disorders—family disruption, loss of employment and housing—can be calamitous. Comprehensive treatment, which includes services that exist outside the formal treatment system, is crucial to ameliorate symptoms, assist recovery, and, to the extent that these efforts are successful, redress stigma. Consumer self-help programs, family self-help, advocacy, and services for housing and vocational assistance complement and supplement the formal treatment system. Many of these services are operated by consumers, that is, people who use mental health services themselves. The logic behind their leadership in delivery of these services is that consumers are thought to be capable of engaging others with mental disorders, serving as role models, and increasing the sensitivity of service systems to the needs of people with mental disorders (Mowbray et al., 1996).
What constitutes mental health during the adult years? A widely used standard of mental health is the absence of a defined mental disorder. This standard has its limitations (discussed later), yet remains useful for epidemiological purposes. Epidemiology studies investigate the prevalence of mental disorders within several time frames: current, the past 12 months, and across a lifetime. Two well-designed national epidemiologic surveys estimate that about 80 percent of the adult population of the United States do not have a mental disorder during any given year and hence, by this standard, may be considered mentally healthy (Regier et al., 1993; Kessler et al., 1994). Thus, the popular notion that everyone is “dysfunctional” is far from the truth (Table 4-1). Yet, from time to time, many adults experience mental health problems.
Defining mental health by the absence of mental disorder does not convey the full picture of mental health. Among its limitations, this definition excludes adults with mental disorders who function well between episodes of illness. These people often are considered by themselves, and by coworkers, friends, and families, to be “mentally healthy” in spite of a history of mental illness and the risk of recurrence.
In addition to the mental health criteria cited earlier—that is, the successful performance of mental function, enabling individuals to cope with adversity and to flourish in their education, vocation, and personal relationships—a complementary approach defines the positive features of mental health in terms of attaining developmental milestones of adulthood, or in terms of displaying selected personality characteristics, traits, or attributes. Developmental theorist Erik Erikson viewed mental health in adulthood as achieving developmental tasks or milestones. According to Erikson’s formulation and subsequent empirical research on adult men, adulthood was the time for overcoming what he termed “psychosocial crises,” the resolution of which led to satisfactory interpersonal and sexual relationships and to the pursuit of broader concerns for society and future generations (Erikson, 1963; Vaillant, 1977). However, these milestones, and the developmental theories that underpin them, have been criticized as reflecting the norms of European males rather than of women and other cultures.
Mental health and mental illness can be seen as the product of various personality traits, behavior patterns, and other characteristics which have roots in the individual’s prior life experiences or biology.
Table 4-1. Best estimate 1-year prevalence based on ECA and NCS, ages 18-54

| Disorder | ECA Prevalence (%) | NCS Prevalence (%) | Best Estimate** (%) |
| Any Anxiety Disorder | 13.1 | 18.7 | 16.4 |
| Simple Phobia | 8.3 | 8.6 | 8.3 |
| Social Phobia | 2.0 | 7.4 | 2.0 |
| Agoraphobia | 4.9 | 3.7 | 4.9 |
| GAD | (1.5)* | 3.4 | 3.4 |
| Panic Disorder | 1.6 | 2.2 | 1.6 |
| OCD | 2.4 | (0.9)* | 2.4 |
| PTSD | (1.9)* | 3.6 | 3.6 |
| Any Mood Disorder | 7.1 | 11.1 | 7.1 |
| MD Episode | 6.5 | 10.1 | 6.5 |
| Unipolar MD | 5.3 | 8.9 | 5.3 |
| Dysthymia | 1.6 | 2.5 | 1.6 |
| Bipolar I | 1.1 | 1.3 | 1.1 |
| Bipolar II | 0.6 | 0.2 | 0.6 |
| Schizophrenia | 1.3 | — | 1.3 |
| Nonaffective Psychosis | — | 0.2 | 0.2 |
| Somatization | 0.2 | — | 0.2 |
| ASP | 2.1 | — | 2.1 |
| Anorexia Nervosa | 0.1 | — | 0.1 |
| Severe Cognitive Impairment | 1.2 | — | 1.2 |
| Any Disorder | 19.5 | 23.4 | 21.0 |
*Numbers in parentheses indicate the prevalence of the disorder without any comorbidity. These rates were calculated using the NCS data for GAD and PTSD, and the ECA data for OCD. The rates were not used in calculating the any anxiety disorder and any disorder totals for the ECA and NCS columns. The unduplicated GAD and PTSD rates were added to the best estimate total for any anxiety disorder (3.3%) and any disorder (1.5%).

**In developing best-estimate 1-year prevalence rates from the two studies, a conservative procedure was followed that had previously been used in an independent scientific analysis comparing these two data sets (Andrews, 1995). For any mood disorder and any anxiety disorder, the lower estimate of the two surveys was selected, which for these data was the ECA. The best estimate rates for the individual mood and anxiety disorders were then chosen from the ECA only, in order to maintain the relationships between the individual disorders. For other disorders that were not covered in both surveys, the available estimate was used.

Key to abbreviations: ECA, Epidemiologic Catchment Area; NCS, National Comorbidity Survey; GAD, generalized anxiety disorder; OCD, obsessive-compulsive disorder; PTSD, post-traumatic stress disorder; MD, major depression; ASP, antisocial personality disorder.

Source: D. Regier, W. Narrow, & D. Rae, personal communication, 1999
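The footnote’s combination rule is compact enough to verify with a short worked sketch. The following Python fragment is a minimal, hypothetical reconstruction of that rule (not the analysts’ actual procedure), using only values transcribed from the table and footnote above; it covers the aggregate and single-survey cases, since the individual mood and anxiety disorder rates were taken directly from the ECA and need no computation:

```python
# Illustrative sketch of the best-estimate rule described in the footnote
# to Table 4-1. Rates are 1-year prevalence percentages; None marks a
# disorder that a survey did not assess.

rates = {
    # disorder: (ECA rate, NCS rate)
    "Any Anxiety Disorder": (13.1, 18.7),
    "Any Mood Disorder": (7.1, 11.1),
    "Schizophrenia": (1.3, None),
    "Nonaffective Psychosis": (None, 0.2),
    "Any Disorder": (19.5, 23.4),
}

# Unduplicated (comorbidity-free) rates added back to the aggregate
# categories, per the footnote: GAD + PTSD (3.3%) for any anxiety
# disorder, and 1.5% for any disorder.
unduplicated = {"Any Anxiety Disorder": 3.3, "Any Disorder": 1.5}

def best_estimate(disorder):
    eca, ncs = rates[disorder]
    if eca is None:
        return ncs  # only the NCS covered this disorder
    if ncs is None:
        return eca  # only the ECA covered this disorder
    # Both surveys covered it: conservatively take the lower of the two
    # estimates, then add any unduplicated rates for aggregate categories.
    return min(eca, ncs) + unduplicated.get(disorder, 0.0)

for disorder in rates:
    print(f"{disorder}: {best_estimate(disorder):.1f}%")
# Reproduces the Best Estimate column: any anxiety disorder
# = 13.1 + 3.3 = 16.4, and any disorder = 19.5 + 1.5 = 21.0.
```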
Personality traits are thought to confer either beneficial or detrimental effects on mental health during adulthood. Here too, however, there may be insufficient attention to gender and culture. The culture-bound nature of much behavior has limited the widespread predictive validity of personality research (Mischel & Shoda, 1968). With this caveat in mind, a brief summary of healthy and maladaptive characteristics follows.
Self-esteem refers to an abiding set of beliefs about one’s own worth, competence, and abilities to relate to others (Vaughan & Oldham, 1997). Self-esteem also has been conceptualized as buffering the individual from adverse life events. Emotional well-being is often associated with a slightly positive, yet realistic, outlook (Alloy & Abramson, 1988). The opposite outlook is characterized by pessimism, demoralization, or minor symptoms of anxiety and depression. One seminal aspect of self-esteem has garnered much research attention: self-efficacy (Bandura, 1977). Self-efficacy is defined as confidence in one’s own abilities to cope with adversity, either independently or by obtaining appropriate assistance from others. Self-efficacy is a major component of the construct known as resilience (i.e., the ability to withstand and overcome adversity). Other components of resilience include intelligence and problem solving, although resilience is also facilitated by having adequate social support (Beardslee & Vaillant, 1997).
Neuroticism is a construct that refers to a broad pattern of psychological, emotional, and psychophysiologic reactivity (Eysenck & Eysenck, 1975). The opposite of neuroticism is stability or equanimity, which are major components of mental health. A high level of neuroticism is associated with a predisposition toward recognizing the dangerous, harmful, or defeating aspects of a situation and the tendency to respond with worry, anticipatory anxiety, emotionality, pessimism, and dissatisfaction. Neuroticism is associated with a greater risk of early-onset depressive and anxiety disorders (Clark et al., 1994). Neuroticism also may be linked to a particular cognitive attributional style in which life events are perceived to be large in impact and more difficult to change (Alloy et al., 1984). For example, this attributional style is embodied by pessimists who see every setback or failure as lasting forever, undermining everything, and being their fault (Seligman, 1991). Neuroticism also is associated with more rigid or distorted attitudes and beliefs about one’s competence (Beck, 1976).
Avoidance describes an exaggerated predisposition to withdraw from novel situations and to avoid personal challenges, perceiving them as threats. This is the behavioral state that often accompanies the distress of someone who has a high level of neuroticism and low self-efficacy (Vaughan & Oldham, 1997). Closely related to the characteristics of behavioral inhibition or introversion, the trait of avoidance appears to be partly inherited and is associated with shyness, anxiety, and depressive disorders in both childhood and adult life, as well as the subsequent development of substance abuse disorders (Vaughan & Oldham, 1997; Kagan et al., 1988). People with low levels of harm avoidance are described as “healthy extroverts” and are characterized by confident, carefree, or outgoing behaviors.
Impulsivity is a trait associated with poor modulation of emotions (especially anger), difficulty delaying gratification, and novelty seeking. There is some developmental continuity between high levels of impulsivity in childhood and several adult mental disorders, including attention deficit hyperactivity disorder, bipolar disorder, and substance abuse disorders (Svrakic et al., 1993; Rothbart & Ahadi, 1994). Impulsivity also is associated with physical abuse (both as victim and, subsequently, as perpetrator) and antisocial personality traits (Vaughan & Oldham, 1997).
Sociopathy refers to a set of traits and behaviors marked by the predisposition to engage in dishonest, hurtful, unfaithful, and at times dangerous conduct to benefit one’s own ends. The opposite of sociopathy may be referred to as character or scrupulosity. In its full form, sociopathy is referred to as antisocial personality disorder (DSM-IV). Sociopathy is characterized by a tendency and ability to disregard laws and rules, difficulties reciprocating within empathic and intimate relationships, less internalization of moral standards (i.e., a weaker conscience or superego), and an insensitivity to the needs and rights of others. People scoring high in sociopathy often have problems with aggression and are overrepresented among criminal populations. Although not invariably associated with criminality, sociopathy is associated with problematic, unethical, and morally questionable conduct in the workplace and within social systems. Marked sociopathy is much more common among men than women, although several other disorders (borderline and histrionic personality disorders and somatization disorder) are overrepresented among women within the same families (Widiger & Costa, 1994).
In summary, the various traits and behavioral patterns that epitomize strong mental health do not, of course, exist in a vacuum: they develop in a social context, and they underpin people’s ability to handle psychological and social adversity and the exposure to stressful life events. Furthermore, as reviewed in Chapter 3, severe or repeated trauma during youth may have enduring effects on both neurobiological and psychological development, altering stress responsivity and adult behavior patterns. Perhaps the best documented evidence of such enduring effects has been shown in young adults who experienced severe sexual or physical abuse in childhood. These individuals experience a greatly increased risk of mood, anxiety, and personality disorders throughout adult life.
The most common psychological and social stressors in adult life include the breakup of intimate romantic relationships, death of a family member or friend, economic hardships, racism and discrimination, poor physical health, and accidental and intentional assaults on physical safety (Holmes & Rahe, 1967; Lazarus & Folkman, 1984; Kreiger et al., 1993). Although some stressors are so powerful that they would evoke significant emotional distress in most otherwise mentally healthy people, the majority of stressful life events do not invariably trigger mental disorders. Rather, they are more likely to spawn mental disorders in people who are vulnerable biologically, socially, and/or psychologically (Lazarus & Folkman, 1984; Brown & Harris, 1989; Kendler et al., 1995). Understanding the variability in individuals’ responses to stressful life events is a major challenge for research. Groups at greater statistical risk include women, young and unmarried people, African Americans, and individuals of lower socioeconomic status (Ulbrich et al., 1989; McLeod & Kessler, 1990; Turner et al., 1995; Miranda & Green, 1999).
Divorce is a common example. Approximately one-half of all marriages now end in divorce, and about 30 to 40 percent of those undergoing divorce report a significant increase in symptoms of depression and anxiety (Brown & Harris, 1989). Vulnerability to depression and anxiety is greater among those with a personal history of mental disorders earlier in life and is lessened by strong social support. For many, divorce conveys additional economic adversities and the stress of single parenting. Single mothers face twice the risk of depression as do married mothers (Brown & Moran, 1997).
The death of a child or spouse during early or midadult life is much less common than divorce but generally is of greater potency in provoking emotional distress (Kim & Jacobs, 1995). Rates of diagnosable mental disorders during periods of grief are attenuated by the convention not to diagnose depression during the first 2 months of bereavement (Clayton & Darvish, 1979). In fact, people are generally unlikely to seek professional treatment during bereavement unless the severity of the emotional and behavioral disturbance is incapacitating.
A majority of Americans never will confront the stress of surviving a severe, life-threatening accident or physical assault (e.g., mugging, robbery, rape); however, some segments of the population, particularly urban youths and young adults, have exposure rates as high as 25 to 30 percent (Helzer et al., 1987; Breslau et al., 1991). Life-threatening trauma frequently provokes emotional and behavioral reactions that jeopardize mental health. In its most fully developed form, this syndrome is called post-traumatic stress disorder (DSM-IV), which is described later in this chapter. Women are twice as likely as men to develop post-traumatic stress disorder following exposure to life-threatening trauma (Breslau et al., 1998).
More familiar to many Americans is the chronic strain that poor physical health and relationship problems place on day-to-day well-being. Relationship problems include unsatisfactory intimate relationships; conflicted relationships with parents, siblings, and children; and “falling-out” with coworkers, friends, and neighbors. In mid-adult life, the stress of caring for elderly parents also becomes more common.
Relationship problems at least double the risk of developing a mental disorder, although they are less immediately threatening or potentially cataclysmic than divorce or the death of a spouse or child (Brown & Harris, 1989). Finally, cumulative adversity appears to be more potent than stressful events in isolation as a predictor of psychological distress and mental disorders (Turner & Lloyd, 1995).
Severe trauma in childhood may have enduring effects into adulthood (Browne & Finkelhor, 1986). Past trauma includes sexual and physical abuse, and parental death, divorce, psychopathology, and substance abuse (reviewed in Turner & Lloyd, 1995).
Child sexual abuse is one of the most common stressors, with effects that persist into adulthood. It disproportionately affects females. Although definitions are still evolving, child sexual abuse is often defined as forcible touching of breasts or genitals or forcible intercourse (including anal, oral, or vaginal sex) before the age of 16 or 18 (Goodman et al., 1997). Epidemiology studies of adults in varying segments of the community have found that 15 to 33 percent of females and 13 to 16 percent of males were sexually abused in childhood (Polusny & Follette, 1995). A recent, large epidemiological study of adults in the general community found a lower prevalence (12.8 percent for females and 4.3 percent for males); however, the definition of sexual abuse was more restricted than in past studies (MacMillan et al., 1997). Sexual abuse in childhood has a mean age of onset estimated at 7 to 9 years of age (Polusny & Follette, 1995). In over 25 percent of cases of child sexual abuse, the offense was committed by a parent or parent substitute (Sedlak & Broadhurst, 1996).
The long-term consequences of past childhood sexual abuse are profound, yet vary in expression. They range from depression and anxiety to problems with social functioning and adult interpersonal relationships (Polusny & Follette, 1995). Post-traumatic stress disorder is a common sequela, found in 33 to 86 percent of adult survivors of child sexual abuse (Polusny & Follette, 1995). In a recent review, Weiss et al. (1999) found that sexual abuse was a specific risk factor for adult-onset depression and twice as many women as men reported a history of abuse. Other long-term effects include self-destructive behavior, social isolation, poor sexual adjustment, substance abuse, and increased risk of revictimization (Browne & Finkelhor, 1986; Briere, 1992).
Very few treatments specifically for adult survivors of childhood abuse have been studied in randomized controlled trials (IOM, 1998). Group therapy and Interpersonal Transaction group therapy were found to be more effective for female survivors than an experimental control condition that offered a less appropriate intervention (Alexander et al., 1989, 1991). In the practice setting, most psychosocial and pharmacological treatments are tailored to the primary diagnosis, which, as noted above, varies widely and may not attend to the special needs of those also reporting abuse history.
Domestic violence is a serious and startlingly common public health problem with mental health consequences for victims, who are overwhelmingly female, and for children who witness the violence. Domestic violence (also known as intimate partner violence) features a pattern of physical and sexual abuse, psychological abuse with verbal intimidation, and/or social isolation or deprivation. Estimates are that 8 to 17 percent of women are victimized annually in the United States (Wilt & Olsen, 1996). Pinpointing the prevalence is hindered by variations in the way domestic violence is defined and by problems in detection and underreporting. Women are often fearful that their reporting of domestic violence will precipitate retaliation by the batterer, a fear that is not unwarranted (Sisley et al., 1999).
Victims of domestic violence are at increased risk for mental health problems and disorders as well as physical injury and death. Domestic violence is considered one of the foremost causes of serious injury to women ages 15 to 44, accounting for about 30 percent of all acute injuries to women seen in emergency departments (Wilt & Olsen, 1996). According to the U.S. Department of Justice, females were victims in about 75 percent of the almost 2,000 homicides between intimates in 1996 (cited in Sisley et al., 1999). The mental health consequences of domestic violence include depression, anxiety disorders (e.g., post-traumatic stress disorder), suicide, eating disorders, and substance abuse (IOM, 1998; Eisenstat & Bancroft, 1999). Children who witness domestic violence may suffer acute and long-term emotional disturbances, including nightmares, depression, learning difficulties, and aggressive behavior. These children also are at risk of subsequently using violence against their own dating partners and wives (el-Bayoumi et al., 1998; NRC, 1998; Sisley et al., 1999).
Mental health interventions for victims, children, and batterers are highly important. Individual counseling and peer support groups are the interventions most frequently used by battered women. However, there is a lack of carefully controlled, methodologically robust studies of interventions and their outcomes, according to a report by the Institute of Medicine and National Research Council (IOM, 1998). A research agenda for violence against women was developed (IOM, 1996) and has served as an impetus for an ongoing research program sponsored by the U.S. Departments of Justice and Health and Human Services. Clearly, there is an urgent need for development and rigorous evaluation of prevention programs to safeguard against intimate partner violence and its impact on children.
Stressful life events, even for those at the peak of mental health, erode quality of life and place people at risk for symptoms and signs of mental disorders. There is an ever-expanding list of formal and informal interventions to aid individuals coping with adversity. Sources of informal interventions include family and friends, education, community services, self-help groups, social support networks, religious and spiritual endeavors, complementary healers, and physical activities. As valuable as these activities may be for promoting mental health, they have received less research attention than have interventions for mental disorders. Nevertheless, there are selected interventions to help people cope with stressors, such as bereavement programs and programs for caregivers (see Chapter 5) as well as couples therapy and physical activity.
Couples therapy is the umbrella term applied to interventions that aid couples in distress. The best studied interventions are behavioral couples therapy, cognitive-behavioral couples therapy, and emotion-focused couples therapy. A recent review article evaluated the body of evidence on the effectiveness of couples therapy and programs to prevent marital discord (Christensen & Heavey, 1999). The review found that about 65 percent of couples receiving therapy improved, compared with 35 percent of control couples. Couples therapy ameliorates relationship distress and appears to alleviate depression. The gains from couples therapy generally last through 6 months, but there are few long-term assessments (Christensen & Heavey, 1999). Similarly, interventions to prevent marital discord yield short-term improvements in marital adjustment and stability, but there is insufficient study of long-term outcomes. The prevention programs receiving the most study are the Couple Communication Program, Relationship Enhancement, and the Prevention and Relationship Enhancement Program (Christensen & Heavey, 1999). Greater research is needed to overcome gaps in knowledge and to extend findings to a broader array of programs, to diverse populations of couples, and to a wider set of outcomes, including effects on children.
Physical activities are a means to enhance somatic health as well as to deal with stress. A recent Surgeon General’s Report on Physical Activity and Health evaluated the evidence for physical activities serving to enhance mental health (U.S. Department of Health and Human Services [DHHS], 1996). Aerobic physical activities, such as brisk walking and running, were found to improve mental health for people who report symptoms of anxiety and depression and for those who are diagnosed with some forms of depression. The mental health benefits of physical activity for individuals in relatively good physical and mental health were not as evident, but the studies lacked sufficient rigor to support unequivocal conclusions (DHHS, 1996).
A promising development in prevention of a specific mental disorder in adults occurred with the publication of results from the San Francisco Depression Research Project (Munoz et al., 1995). This study investigated 150 primary care patients who did not meet diagnostic criteria for depression and who were being seen in a public clinic for other problems. They were randomized to either psychoeducation—an 8-week cognitive behavioral course to help them control and manage moods—or to a control condition. One year later, those who received psychoeducation were found to have developed significantly fewer depression symptoms than members of the control group. This trial is noteworthy in two major respects: it was a randomized controlled trial and its participants were low-income individuals, with high representation of all major minority groups. Low-income individuals are considered a high-risk population because of studies documenting their higher prevalence of mental disorders. This study demonstrated in a methodologically rigorous fashion that depression may be preventable in some cases. It serves as a model for extending the concept of prevention to many mental disorders. Prevention research is vitally important and needs to be enhanced.
The anxiety disorders are the most common, or frequently occurring, mental disorders. They encompass a group of conditions that share extreme or pathological anxiety as the principal disturbance of mood or emotional tone. Anxiety, which may be understood as the pathological counterpart of normal fear, is manifest by disturbances of mood, as well as of thinking, behavior, and physiological activity.
The anxiety disorders include panic disorder (with and without a history of agoraphobia), agoraphobia (with and without a history of panic disorder), generalized anxiety disorder, specific phobia, social phobia, obsessive-compulsive disorder, acute stress disorder, and post-traumatic stress disorder (DSM-IV). In addition, there are adjustment disorders with anxious features, anxiety disorders due to general medical conditions, substance-induced anxiety disorders, and the residual category of anxiety disorder not otherwise specified (DSM-IV).
Anxiety disorders not only are common in the United States, but they are ubiquitous across human cultures (Regier et al., 1993; Kessler et al., 1994; Weissman et al., 1997). In the United States, 1-year prevalence for all anxiety disorders among adults ages 18 to 54 exceeds 16 percent (Table 4-1), and there is significant overlap or comorbidity with mood and substance abuse disorders (Regier et al., 1990; Goldberg & Lecrubier, 1995; Magee et al., 1996). The longitudinal course of these disorders is characterized by relatively early ages of onset, chronicity, relapsing or recurrent episodes of illness, and periods of disability (Keller & Hanks, 1994; Gorman & Coplan, 1996; Liebowitz, 1997; Marcus et al., 1997). Although few psychological autopsy studies of adult suicides have included a focus on comorbid conditions (Conwell & Brent, 1995), it is likely that the rate of comorbid anxiety in suicide is underestimated. Panic disorder and agoraphobia, particularly, are associated with increased risks of attempted suicide (Hornig & McNally, 1995; American Psychiatric Association, 1998).
A panic attack is a discrete period of intense fear or discomfort that is associated with numerous somatic and cognitive symptoms (DSM-IV). These symptoms include palpitations, sweating, trembling, shortness of breath, sensations of choking or smothering, chest pain, nausea or gastrointestinal distress, dizziness or lightheadedness, tingling sensations, and chills or blushing and “hot flashes.” The attack typically has an abrupt onset, building to maximum intensity within 10 to 15 minutes. Most people report a fear of dying, “going crazy,” or losing control of emotions or behavior. The experiences generally provoke a strong urge to escape or flee the place where the attack begins and, when associated with chest pain or shortness of breath, frequently result in seeking aid from a hospital emergency room or other type of urgent assistance. Yet an attack rarely lasts longer than 30 minutes. Current diagnostic practice specifies that a panic attack must be characterized by at least four of the associated somatic and cognitive symptoms described above. The panic attack is distinguished from other forms of anxiety by its intensity and its sudden, episodic nature. Panic attacks may be further characterized by the relationship between the onset of the attack and the presence or absence of situational factors. For example, a panic attack may be described as unexpected, situationally bound, or situationally predisposed (usually, but not invariably, occurring in a particular situation). There are also attenuated or “limited symptom” forms of panic attacks.
Panic attacks are not always indicative of a mental disorder; up to 10 percent of otherwise healthy people experience an isolated panic attack in a given year (Barlow, 1988; Klerman et al., 1991). Panic attacks also are not limited to panic disorder. They commonly occur in the course of social phobia, generalized anxiety disorder, and major depressive disorder (DSM-IV).
Panic disorder is diagnosed when a person has experienced at least two unexpected panic attacks and develops persistent concern or worry about having further attacks or changes his or her behavior to avoid or minimize such attacks. Whereas the number and severity of the attacks vary widely, the concern and avoidance behavior are essential features. The diagnosis is inapplicable when the attacks are presumed to be caused by a drug or medication or a general medical disorder, such as hyperthyroidism.
Lifetime rates of panic disorder of 2 to 4 percent and 1-year rates of about 2 percent are documented consistently in epidemiological studies (Kessler et al., 1994; Weissman et al., 1997) (Table 4-1). Panic disorder is frequently complicated by major depressive disorder (50 to 65 percent lifetime comorbidity rates) and alcoholism and substance abuse disorders (20 to 30 percent comorbidity) (Keller & Hanks, 1994; Magee et al., 1996; Liebowitz, 1997). Panic disorder is also concomitantly diagnosed, or co-occurs, with other specific anxiety disorders, including social phobia (up to 30 percent), generalized anxiety disorder (up to 25 percent), specific phobia (up to 20 percent), and obsessive-compulsive disorder (up to 10 percent) (DSM-IV). As discussed subsequently, approximately one-half of people with panic disorder at some point develop such severe avoidance as to warrant a separate description, panic disorder with agoraphobia.
Panic disorder is about twice as common among women as men (American Psychiatric Association, 1998). Age of onset is most common between late adolescence and midadult life, with onset relatively uncommon past age 50. There is developmental continuity between the anxiety syndromes of youth, such as separation anxiety disorder, and panic disorder in adult life. Typically, an early age of onset of panic disorder carries greater risks of comorbidity, chronicity, and impairment. Panic disorder is a familial condition and can be distinguished from depressive disorders by family studies (Rush et al., 1998).
The ancient term agoraphobia is translated from Greek as fear of an open marketplace. Agoraphobia today describes severe and pervasive anxiety about being in situations from which escape might be difficult or avoidance of situations such as being alone outside of the home, traveling in a car, bus, or airplane, or being in a crowded area (DSM-IV).
Most people who present to mental health specialists develop agoraphobia after the onset of panic disorder (American Psychiatric Association, 1998). Agoraphobia is best understood as an adverse behavioral outcome of repeated panic attacks and the subsequent worry, preoccupation, and avoidance (Barlow, 1988). Thus, the formal diagnosis of panic disorder with agoraphobia was established. However, for those people in communities or clinical settings who do not meet full criteria for panic disorder, the formal diagnosis of agoraphobia without history of panic disorder is used (DSM-IV).
The 1-year prevalence of agoraphobia is about 5 percent (Table 4-1). Agoraphobia occurs about two times more commonly among women than men (Magee et al., 1996). The gender difference may be attributable to social-cultural factors that encourage, or permit, the greater expression of avoidant coping strategies by women (DSM-IV), although other explanations are possible.
The specific phobias are common conditions characterized by marked fear of specific objects or situations (DSM-IV). Exposure to the object of the phobia, either in real life or via imagination or video, invariably elicits intense anxiety, which may include a (situationally bound) panic attack. Adults generally recognize that this intense fear is irrational. Nevertheless, they typically avoid the phobic stimulus or endure exposure with great difficulty. The most common specific phobias include the following feared stimuli or situations: animals (especially snakes, rodents, birds, and dogs); insects (especially spiders and bees or hornets); heights; elevators; flying; automobile driving; water; storms; and blood or injections.
Approximately 8 percent of the adult population suffers from one or more specific phobias in 1 year (Table 4-1). Much higher rates would be recorded if less rigorous diagnostic requirements for avoidance or functional impairment were employed. Typically, the specific phobias begin in childhood, although there is a second “peak” of onset in the middle 20s of adulthood (DSM-IV). Most phobias persist for years or even decades, and relatively few remit spontaneously or without treatment.
The specific phobias generally do not result from exposure to a single traumatic event (e.g., being bitten by a dog or nearly drowning) (Marks, 1969). Rather, there is evidence of phobia in other family members and social or vicarious learning of phobias (Cook & Mineka, 1989). Spontaneous, unexpected panic attacks also appear to play a role in the development of specific phobia, although the particular pattern of avoidance is much more focal and circumscribed.
Social phobia, also known as social anxiety disorder, describes people with marked and persistent anxiety in social situations, including performances and public speaking (Ballenger et al., 1998). The critical element of the fearfulness is the possibility of embarrassment or ridicule. Like specific phobias, the fear is recognized by adults as excessive or unreasonable, but the dreaded social situation is avoided or is tolerated with great discomfort. Many people with social phobia are preoccupied with concerns that others will see their anxiety symptoms (i.e., trembling, sweating, or blushing); or notice their halting or rapid speech; or judge them to be weak, stupid, or “crazy.” Fears of fainting, losing control of bowel or bladder function, or having one’s mind go blank are also not uncommon. Social phobias generally are associated with significant anticipatory anxiety for days or weeks before the dreaded event, which in turn may further handicap performance and heighten embarrassment.
The 1-year prevalence of social phobia ranges from 2 to 7 percent (Table 4-1), although the lower figure probably better captures the number of people who experience significant impairment and distress. Social phobia is more common in women (Wells et al., 1994). Social phobia typically begins in childhood or adolescence and, for many, it is associated with the traits of shyness and social inhibition (Kagan et al., 1988). A public humiliation, severe embarrassment, or other stressful experience may provoke an intensification of difficulties (Barlow, 1988). Once the disorder is established, complete remissions are uncommon without treatment. More commonly, the severity of symptoms and impairments tends to fluctuate in relation to vocational demands and the stability of social relationships. Preliminary data suggest social phobia to be familial (Rush et al., 1998).
Generalized anxiety disorder is defined by a protracted (> 6 months’ duration) period of anxiety and worry, accompanied by multiple associated symptoms (DSM-IV). These symptoms include muscle tension, easy fatiguability, poor concentration, insomnia, and irritability. In youth, the condition is known as overanxious disorder of childhood. In DSM-IV, an essential feature of generalized anxiety disorder is that the anxiety and worry cannot be attributable to the more focal distress of panic disorder, social phobia, obsessive-compulsive disorder, or other conditions. Rather, as implied by the name, the excessive worries often pertain to many areas, including work, relationships, finances, the well-being of one’s family, potential misfortunes, and impending deadlines. Somatic anxiety symptoms are common, as are sporadic panic attacks.
Generalized anxiety disorder occurs more often in women, with a sex ratio of about 2 women to 1 man (Brawman-Mintzer & Lydiard, 1996). The 1-year population prevalence is about 3 percent (Table 4-1). Approximately 50 percent of cases begin in childhood or adolescence. The disorder typically runs a fluctuating course, with periods of increased symptoms usually associated with life stress or impending difficulties. There does not appear to be a specific familial association for generalized anxiety disorder. Rather, rates of other mood and anxiety disorders typically are greater among first-degree relatives of people with generalized anxiety disorder (Kendler et al., 1987).
Obsessions are recurrent, intrusive thoughts, impulses, or images that are perceived as inappropriate, grotesque, or forbidden (DSM-IV). The obsessions, which elicit anxiety and marked distress, are termed “ego-alien” or “ego-dystonic” because their content is quite unlike the thoughts that the person usually has. Obsessions are perceived as uncontrollable, and the sufferer often fears that he or she will lose control and act upon such thoughts or impulses. Common themes include contamination with germs or body fluids, doubts (i.e., the worry that something important has been overlooked or that the sufferer has unknowingly inflicted harm on someone), order or symmetry, or loss of control of violent or sexual impulses.
Compulsions are repetitive behaviors or mental acts that reduce the anxiety that accompanies an obsession or “prevent” some dreaded event from happening (DSM-IV). Compulsions include both overt behaviors, such as hand washing or checking, and mental acts including counting or praying. Not uncommonly, compulsive rituals take up long periods of time, even hours, to complete. For example, repeated hand washing, intended to remedy anxiety about contamination, is a common cause of contact dermatitis.
Although once thought to be rare, obsessive-compulsive disorder has now been documented to have a 1-year prevalence of 2.4 percent (Table 4-1). Obsessive-compulsive disorder is equally common among men and women.
Obsessive-compulsive disorder typically begins in adolescence to young adult life (males) or in young adult life (females) (Burke et al., 1990; DSM-IV). For most, the course is fluctuating and, like generalized anxiety disorder, symptom exacerbations are usually associated with life stress. Common comorbidities include major depressive disorder and other anxiety disorders. Approximately 20 to 30 percent of people in clinical samples with obsessive-compulsive disorder report a past history of tics, and about one-quarter of these people meet the full criteria for Tourette’s disorder (DSM-IV). Conversely, up to 50 percent of people with Tourette’s disorder develop obsessive-compulsive disorder (Pitman et al., 1987).
Obsessive-compulsive disorder has a clear familial pattern and somewhat greater familial specificity than most other anxiety disorders. Furthermore, there is an increased risk of obsessive-compulsive disorder among first-degree relatives with Tourette’s disorder. Other mental disorders that may fall within the spectrum of obsessive-compulsive disorder include trichotillomania (compulsive hair pulling), compulsive shoplifting, gambling, and sexual behavior disorders (Hollander, 1996). The latter conditions are somewhat discrepant because the compulsive behaviors are less ritualistic and yield some outcomes that are pleasurable or gratifying. Body dysmorphic disorder is a more circumscribed condition in which the compulsive and obsessive behavior centers around a preoccupation with one’s appearance (i.e., the syndrome of imagined ugliness) (Phillips, 1991).
Acute stress disorder refers to the anxiety and behavioral disturbances that develop within the first month after exposure to an extreme trauma. Generally, the symptoms of an acute stress disorder begin during or shortly following the trauma. Such extreme traumatic events include rape or other severe physical assault, near-death experiences in accidents, witnessing a murder, and combat. The symptom of dissociation, which reflects a perceived detachment of the mind from the emotional state or even the body, is a critical feature. Dissociation also is characterized by a sense of the world as a dreamlike or unreal place and may be accompanied by poor memory of the specific events, which in severe form is known as dissociative amnesia. Other features of an acute stress disorder include symptoms of generalized anxiety and hyperarousal, avoidance of situations or stimuli that elicit memories of the trauma, and persistent, intrusive recollections of the event via flashbacks, dreams, or recurrent thoughts or visual images.
If the symptoms and behavioral disturbances of the acute stress disorder persist for more than 1 month, and if these features are associated with functional impairment or significant distress to the sufferer, the diagnosis is changed to post-traumatic stress disorder. Post-traumatic stress disorder is further defined in DSM-IV as having three subforms: acute (< 3 months’ duration), chronic (> 3 months’ duration), and delayed onset (symptoms began at least 6 months after exposure to the trauma).
By virtue of the more sustained nature of post-traumatic stress disorder (relative to acute stress disorder), a number of changes, including decreased self-esteem, loss of sustained beliefs about people or society, hopelessness, a sense of being permanently damaged, and difficulties in previously established relationships, are typically observed. Substance abuse often develops, especially involving alcohol, marijuana, and sedative-hypnotic drugs.
About 50 percent of cases of post-traumatic stress disorder remit within 6 months. For the remainder, the disorder typically persists for years and can dominate the sufferer’s life. A longitudinal study of Vietnam veterans, for example, found 15 percent of veterans to be suffering from post-traumatic stress disorder 19 years after combat exposure (cited in McFarlane & Yehuda, 1996). In the general population, the 1-year prevalence is about 3.6 percent, with women having almost twice the prevalence of men (Kessler et al., 1995) (Table 4-1). The highest rates of post-traumatic stress disorder are found among women who are victims of crime, especially rape, as well as among torture and concentration camp survivors (Yehuda, 1999). Overall, among those exposed to extreme trauma, about 9 percent develop post-traumatic stress disorder (Breslau et al., 1998).
The etiology of most anxiety disorders, although not fully understood, has come into sharper focus in the last decade. In broad terms, the likelihood of developing anxiety involves a combination of life experiences, psychological traits, and/or genetic factors. The anxiety disorders are so heterogeneous that the relative roles of these factors are likely to differ. Some anxiety disorders, like panic disorder, appear to have a stronger genetic basis than others (National Institute of Mental Health [NIMH], 1998), although actual genes have not been identified. Other anxiety disorders are more rooted in stressful life events.
It is not clear why females have higher rates than males of most anxiety disorders, although some theories have suggested a role for the gonadal steroids. Other research on women’s responses to stress also suggests that women experience a wider range of life events (e.g., those happening to friends) as stressful, compared with men, who react to a more limited range of stressful events, specifically those affecting themselves or close family members (Maciejewski et al., 1999).
What the myriad of anxiety disorders have in common is a state of increased arousal or fear (Barbee, 1998). Anxiety disorders often are conceptualized as an abnormal or exaggerated version of arousal. Much is known about arousal because of decades of study, in animals and humans, of the so-called “fight-or-flight response,” which also is referred to as the acute stress response. The acute stress response is critical to understanding the normal response to stressors and has galvanized research, but its limitations for understanding anxiety have come to the forefront in recent years, as this section later explains.
In common parlance, the term “stress” refers both to the external stressor, which can be physical or psychosocial in nature, and to the internal response to the stressor. Yet researchers distinguish the two, calling the stressor the stimulus and the body’s reaction the stress response. This is an important distinction because in many anxiety states there is no immediate external stressor. The following paragraphs describe the biology of the acute stress response, as well as its limitations, in understanding human anxiety. Emerging views about the neurobiology of anxiety attempt to integrate psychosocial views of anxiety and behavior in relation to the structure and function of the central and peripheral nervous system.
When a fearful or threatening event is perceived, humans react innately to survive: they either are ready for battle or run away (hence the term “fight-or-flight response”). The nature of the acute stress response is all too familiar. Its hallmarks are an almost instantaneous surge in heart rate, blood pressure, sweating, breathing, and metabolism, and a tensing of muscles. Enhanced cardiac output and accelerated metabolism are essential for mobilizing fast action. The host of physiological changes activated by a stressful event are unleashed in part by activation of a nucleus in the brain stem called the locus ceruleus. This nucleus is the origin of most norepinephrine pathways in the brain. Neurons using norepinephrine as their neurotransmitter project bilaterally from the locus ceruleus along distinct pathways to the cerebral cortex, limbic system, and the spinal cord, among other projections.
Normally, when someone is in a serene, unstimulated state, the “firing” of neurons in the locus ceruleus is minimal. A novel stimulus, once perceived, is relayed from the sensory cortex of the brain through the thalamus to the brain stem. That route of signaling increases the rate of noradrenergic activity in the locus ceruleus, and the person becomes alert and attentive to the environment. If the stimulus is perceived as a threat, a more intense and prolonged discharge of the locus ceruleus activates the sympathetic division of the autonomic nervous system (Thase & Howland, 1995). The activation of the sympathetic nervous system leads to the release of norepinephrine from nerve endings acting on the heart, blood vessels, respiratory centers, and other sites. The ensuing physiological changes constitute a major part of the acute stress response. The other major player in the acute stress response is the hypothalamic-pituitary-adrenal axis, which is discussed in the next section.
In the 1980s, the prevailing view was that excess discharge of the locus ceruleus with the acute stress response was a major contributor to the etiology of anxiety (Coplan & Lydiard, 1998). Yet over the past decade, the limitations of the acute stress response as a model for understanding anxiety have become more apparent. The first and most obvious limitation is that the acute stress response relates to arousal rather than anxiety. Anxiety differs from arousal in several ways (Barlow, 1988; Nutt et al., 1998). First, with anxiety, the concern about the stressor is out of proportion to the realistic threat. Second, anxiety is often associated with elaborate mental and behavioral activities designed to avoid the unpleasant symptoms of a full-blown anxiety or panic attack. Third, anxiety is usually longer lived than arousal. Fourth, anxiety can occur without exposure to an external stressor.
Other limitations of this model became evident as clinical and basic research failed to support it (Coplan & Lydiard, 1998). Furthermore, with its emphasis on the neurotransmitter norepinephrine, the model could not explain why medications that acted on the neurotransmitter serotonin (the selective serotonin reuptake inhibitors, or SSRIs) helped to alleviate anxiety symptoms. In fact, these medications are becoming the first-line treatment for anxiety disorders (Kent et al., 1998). To probe the etiology of anxiety, researchers began to devote their energies to the study of other brain circuits and the neurotransmitters on which they rely. The locus ceruleus still participates in anxiety but is now understood to play a lesser role.
An exciting new line of research proposes that anxiety engages a wide range of neurocircuits. This line of research catapults to prominence two key regulatory centers found in the cerebral hemispheres of the brain—the hippocampus and the amygdala. These centers, in turn, are thought to activate the hypothalamic-pituitary-adrenocortical (HPA) axis (Goddard & Charney, 1997; Coplan & Lydiard, 1998; Sullivan et al., 1998). Researchers have long established the contribution of the HPA axis to anxiety but have been perplexed by how it is regulated. They are buoyed by new findings about the roles of the hippocampus and the amygdala.
The hippocampus and the amygdala govern memory storage and emotions, respectively, among their other functions. The hippocampus is considered important in verbal memory, especially of time and place for events with strong emotional overtones (McEwen, 1998). The hippocampus and amygdala are major nuclei of the limbic system, a pathway known to underlie emotions. There are anatomical projections between the hippocampus, amygdala, and hypothalamus (Jacobson & Sapolsky, 1991; Charney & Deutch, 1996; Coplan & Lydiard, 1998).
Studies of emotional processing in rodents (LeDoux, 1996; Rogan & LeDoux, 1996; Davis, 1997) and in humans with brain lesions (Adolphs et al., 1998) have identified the amygdala as critical to fear responses. Sensory information enters the lateral amygdala, from which processed information is passed to the central nucleus, the major output nucleus of the amygdala. The central nucleus projects, in turn, to multiple brain systems involved in the physiologic and behavioral responses to fear. Projections to different regions of the hypothalamus activate the sympathetic nervous system and induce the release of stress hormones such as corticotropin-releasing hormone (CRH). The production of CRH in the paraventricular nucleus of the hypothalamus activates a cascade leading to release of glucocorticoids from the adrenal cortex. Projections from the central nucleus innervate different parts of the periaqueductal gray matter, which initiates descending analgesic responses (involving the body's endogenous opioids) that can suppress pain in an emergency, and which also activates species-typical defensive responses (e.g., many animals freeze when fearful).
Anxiety differs from fear in that the fear-producing stimulus is either not present or not immediately threatening, but in anticipation of danger, the same arousal, vigilance, physiologic preparedness, and negative affects and cognitions occur. Different types of internal or external factors or triggers act to produce the anxiety symptoms of panic disorder, agoraphobia, post-traumatic stress disorder, specific phobias, and generalized anxiety disorder, as well as the prominent anxiety that commonly occurs in major depression. Whether dysregulation of these fear pathways leads to the symptoms of anxiety disorders remains a matter of ongoing research. It has now been established, using noninvasive neuroimaging, that the human amygdala is also involved in fear responses. Fearful facial expressions have been shown to activate the amygdala in MRI studies of normal human subjects (Breiter et al., 1996). Functional imaging studies in anxiety disorders, such as PET studies of brain activation in phobias (Rauch et al., 1995), are also beginning to investigate the precise neural circuits involved in the anxiety disorders.
What is especially exciting is that neuroimaging has furnished direct evidence in humans of the damaging effects of glucocorticoids. In people with post-traumatic stress disorder, neuroimaging studies have found a reduction in the size of the hippocampus. The reduced volume appears to reflect the atrophy of dendrites—the receptive portion of nerve cells—in a select region of the hippocampus. Similarly, animals exposed to chronic psychosocial stress display atrophy in the same hippocampal region (McEwen & Magarinos, 1997). Stress-induced increases in glucocorticoids are thought to be responsible for the atrophy (McEwen, 1998). If the hippocampus is impaired, the individual is thought to be less able to draw on memory to evaluate the nature of the stressor (McEwen, 1998).
There are many neurotransmitter alterations in anxiety disorders. In keeping with the broader view of anxiety, at least five neurotransmitters are perturbed in anxiety: serotonin, norepinephrine, gamma-aminobutyric acid (GABA), corticotropin-releasing hormone (CRH), and cholecystokinin (Coplan & Lydiard, 1998; Rush et al., 1998). These systems are so carefully orchestrated, with extensive feedback mechanisms, that changes in one neurotransmitter system invariably elicit changes in another. Serotonin and GABA are inhibitory neurotransmitters that quiet the stress response (Rush et al., 1998). All of these neurotransmitters have become important targets for therapeutic agents either already marketed or in development (as discussed in the section on treatment of anxiety disorders).
There are several major psychological theories of anxiety: psychoanalytic and psychodynamic theory, behavioral theories, and cognitive theories (Thorn et al., 1999). Psychodynamic theories have focused on symptoms as an expression of underlying conflicts (Rush et al., 1998; Thorn et al., 1999). Although there are no empirical studies to support these psychodynamic theories, they are amenable to scientific study (Kandel, 1999) and some therapists find them useful. For example, ritualistic compulsive behavior can be viewed as a result of a specific defense mechanism that serves to channel psychic energy away from conflicted or forbidden impulses. Phobic behaviors similarly have been viewed as a result of the defense mechanism of displacement. From the psychodynamic perspective, anxiety usually reflects more basic, unresolved conflicts in intimate relationships or expression of anger.
More recent behavioral theories have emphasized the importance of two types of learning: classical conditioning and vicarious or observational learning. These theories have some empirical evidence to support them. In classical conditioning, a neutral stimulus acquires the ability to elicit a fear response after repeated pairings with a frightening (unconditioned) stimulus. In vicarious learning, fearful behavior is acquired by observing others’ reactions to fear-inducing stimuli (Thorn et al., 1999). With generalized anxiety disorder, unpredictable positive and negative reinforcement is seen as leading to anxiety, especially because the person is unsure about whether avoidance behaviors are effective.
Cognitive factors, especially the way people interpret or think about stressful events, play a critical role in the etiology of anxiety (Barlow et al., 1996; Thorn et al., 1999). A decisive factor is the individual’s perception, which can intensify or dampen the response. One of the most salient negative cognitions in anxiety is the sense of uncontrollability. It is typified by a state of helplessness due to a perceived inability to predict, control, or obtain desired results (Barlow et al., 1996). Negative cognitions are frequently found in individuals with anxiety (Ingram et al., 1998). Many modern psychological models of anxiety incorporate the role of individual vulnerability, which includes both genetic (Smoller & Tsuang, 1998) and acquired (Coplan et al., 1997) predispositions. There is evidence that women may ruminate more about distressing life events compared with men, suggesting that a cognitive risk factor may predispose them to higher rates of anxiety and depression (Nolen-Hoeksema et al., in press).
The anxiety disorders are treated with some form of counseling or psychotherapy or pharmacotherapy, either singly or in combination (Barlow & Lehman, 1996; March et al., 1997; American Psychiatric Association, 1998; Kent et al., 1998).
Anxiety disorders are responsive to counseling and to a wide variety of psychotherapies. More severe and persistent symptoms also may require pharmacotherapy (American Psychiatric Association, 1998).
During the past several decades, there has been increasing enthusiasm for more focused, time-limited therapies that address ways of coping with anxiety symptoms more directly rather than exploring unconscious conflicts or other personal vulnerabilities (Barlow & Lehman, 1996). These therapies typically emphasize cognitive and behavioral assessment and interventions.
The hallmarks of cognitive-behavioral therapies are evaluating apparent cause and effect relationships between thoughts, feelings, and behaviors, as well as implementing relatively straightforward strategies to lessen symptoms and reduce avoidant behavior (Barlow, 1988). A critical element of therapy is to increase exposure to the stimuli or situations that provoke anxiety. Without such therapeutic assistance, the sufferer typically withdraws from anxiety-inducing situations, inadvertently reinforcing avoidant or escape behavior.
The therapist provides reassurance that the feared situation is not deadly and introduces a plan to enhance mastery. This plan may include approaching the feared situation in a graduated or stepwise hierarchy or teaching the patient to use responses that dampen anxiety, such as deep muscle relaxation or other coping techniques. One fundamental principle is that prolonged exposure to a feared stimulus reliably decreases cognitive and physiologic symptoms of anxiety (Marks, 1969; Barlow, 1988). With such experience generally comes greater self-efficacy and a greater willingness to encounter other feared stimuli. For panic disorder, interoceptive training (a type of conditioning technique) and breathing exercises are often employed to help the sufferer become more capable of recognizing and coping with the social cues, antecedents, or early signs of a panic attack. Cognitive interventions are used to counteract the exaggerated or catastrophic thoughts that characterize anxiety. For treatment of obsessive-compulsive disorder, the strategy of response prevention must be added to exposure to ensure that compulsions are not performed (Barlow, 1988).
There is now extensive evidence that cognitive-behavioral therapies are useful treatments for a majority of patients with anxiety disorders (Chambless et al., 1998). Poorer outcomes are observed, however, in more complicated patient groups. With obsessive-compulsive disorder, approximately 20 to 25 percent of patients are unwilling to participate in therapy (March et al., 1997). Another major limitation of cognitive-behavioral therapies is not their effectiveness but, rather, the limited availability of skilled practitioners (Ballenger et al., 1998).
More traditional forms of therapy based on psychodynamic or interpersonal theories of anxiety also may prove to be effective treatments (Shear, 1995). However, these therapies have not yet received extensive empirical support, and as a result they are generally deemphasized in evidence-based treatment guidelines for anxiety disorders.
The medications typically used to treat patients with anxiety disorders are benzodiazepines, antidepressants, and the novel compound buspirone (Lydiard et al., 1996). In light of increasing awareness of numerous neurochemical alterations in anxiety disorders, many new classes of drugs are likely to be developed, expressly targeting CRH and other neuroactive agents (Nemeroff, 1998).
The benzodiazepines are a large class of relatively safe and widely prescribed medications that have rapid and profound antianxiety and sedative-hypnotic effects. The benzodiazepines are thought to exert their therapeutic effects by enhancing the inhibitory neurotransmitter systems utilizing GABA. Benzodiazepines bind to a site on the GABA receptor and act as receptor agonists (Perry et al., 1997). Benzodiazepines differ in terms of potency, pharmacokinetics (i.e., elimination half-life), and lipid solubility.
The four benzodiazepines currently most widely prescribed for treatment of anxiety disorders are diazepam, lorazepam, clonazepam, and alprazolam. Each is now available in generic formulations (Davidson, 1998). Among these agents, alprazolam and lorazepam have shorter elimination half-lives—that is, they are removed from the body more quickly—while diazepam and clonazepam have a long period of action (i.e., up to 24 hours). Diazepam also has multiple active metabolites, which increase the risk of “carryover” effects such as sedation and “hangover.” Benzodiazepines that undergo conjugation appear to have longer elimination times in women, and oral contraceptives can decrease clearance (Dawlans, 1995). Since Asians tend to metabolize diazepam more slowly, they may require lower doses to achieve the same blood concentrations as Caucasians (Lin et al., 1997).
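The clinical significance of these half-life differences follows from standard first-order elimination kinetics. As a minimal illustration (the round half-life values below are assumed for the example, not drawn from this report), the fraction of drug remaining after time $t$ is

$$C(t) = C_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}}.$$

For a hypothetical short-acting agent with $t_{1/2} = 12$ hours, only $C_0 (1/2)^{24/12} = 0.25\,C_0$ remains after a day; for a hypothetical long-acting agent with $t_{1/2} = 48$ hours, about $C_0 (1/2)^{24/48} \approx 0.71\,C_0$ remains, a residue consistent with the “carryover” effects described above.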
Benzodiazepines have the potential for producing drug dependence (i.e., physiological or behavioral symptoms after discontinuation of use). Shorter acting compounds have somewhat greater liability because of more rapid and abrupt onset of withdrawal symptoms.
Because the benzodiazepines do not have strong antiobsessional effects, their use in obsessive-compulsive disorder and post-traumatic stress disorder is generally viewed as palliative (i.e., relieving, but not eliminating symptoms). Rather, obsessive-compulsive disorder and post-traumatic stress disorder are more effectively treated by antidepressants, especially the SSRIs (as discussed below). When effective, benzodiazepines should be tapered after several months of use, although there is a substantial risk of relapse. Many clinicians favor a combined treatment approach for panic disorder and generalized anxiety disorder, in which benzodiazepines are used acutely in tandem with an antidepressant. The benzodiazepines are subsequently tapered as the antidepressant’s therapeutic effects begin to emerge (American Psychiatric Association, 1998).
Most antidepressant medications have substantial antianxiety and antipanic effects in addition to their antidepressant action (Kent et al., 1998). Moreover, a large number of antidepressants have antiobsessional effects (Perry et al., 1997). The observation that the tricyclic antidepressant imipramine had a different anxiolytic profile than diazepam helped to differentiate panic disorder from generalized anxiety disorder and, subsequently, social phobia.
Clomipramine, a tricyclic antidepressant (TCA) with relatively potent reuptake inhibitory effects on serotonin (5-HT) neurons, subsequently was found to be the only TCA to have specific antiobsessional effects (March et al., 1997). The importance of this effect on 5-HT was highlighted when the SSRIs became available. By the late 1990s, it became clear that all of the SSRIs have antiobsessional effects (Greist et al., 1995; Kent et al., 1998).
Current practice guidelines rank the TCAs below the SSRIs for treatment of anxiety disorders because of the SSRIs’ more favorable tolerability and safety profiles (March et al., 1997; American Psychiatric Association, 1998; Ballenger et al., 1998). Nevertheless, there are patients who respond to the TCAs after failing to respond to one or more of the newer agents. Similarly, although relatively rarely used, the monoamine oxidase inhibitors (MAOIs) have significant antiobsessional, antipanic, and anxiolytic effects (Sheehan et al., 1980; American Psychiatric Association, 1998). In the United States, the MAOIs phenelzine, tranylcypromine, and isocarboxazid (which has not been consistently marketed this decade) are seldom used unless simpler medication strategies have failed (American Psychiatric Association, 1998).
The five drugs within the SSRI class—fluoxetine, sertraline, paroxetine, fluvoxamine, and citalopram—have emerged as the preferred type of antidepressant for treatment of anxiety disorders (Westenberg, 1996; Kent et al., 1998). In addition to well-established efficacy in obsessive-compulsive disorder, there is convincing and growing evidence of antipanic and broader anxiolytic effects (American Psychiatric Association, 1998; Kent et al., 1998). Treatment of panic disorder often requires lower initial doses and slower upward titration. By contrast, treatment for obsessive-compulsive disorder ultimately may entail higher doses (for example, 60 or 80 mg/day of fluoxetine or 200 mg per day of sertraline) and longer durations to achieve desired outcomes (March et al., 1997). As all of the SSRIs are currently protected by patents, there are no generic forms yet available. This adds to the direct costs of treatment. Cost may be offset indirectly, however, by virtue of need for fewer treatment visits and fewer concomitant medications, and cost likely will abate when these agents begin to lose patent protection in a few years.
Other newer antidepressants, including venlafaxine, nefazodone, and mirtazapine, also may have significant antianxiety effects, for which clinical trials are under way (March et al., 1997; American Psychiatric Association, 1998). Paroxetine has been approved by the Food and Drug Administration (FDA) for social phobia, and sertraline is being developed for post-traumatic stress disorder. Nefazodone, which also is being studied in post-traumatic stress disorder, and mirtazapine may possess lower levels of sexual side effects, a problem that complicates longer term treatment with SSRIs, venlafaxine, TCAs, and MAOIs (Baldwin & Birtwistle, 1998).
When effective in treating anxiety, antidepressants should be maintained for at least 4 to 6 months, then tapered slowly to avoid discontinuation-emergent activation of anxiety symptoms (March et al., 1997; American Psychiatric Association, 1998; Ballenger et al., 1998). Although the question is less extensively researched than in depression, many patients with anxiety disorders likely warrant longer term, even indefinite, treatment to prevent relapse or chronicity.
Buspirone, an azapirone compound, is a relatively selective 5-HT1A partial agonist (Stahl, 1996). It was approved by the FDA in the mid-1980s as an anxiolytic. Unlike the benzodiazepines, buspirone is not habit forming and has no abuse potential. Buspirone also has a safety profile comparable to that of the SSRIs, and it is significantly better tolerated than the TCAs.
Buspirone does not block panic attacks, and it is not efficacious as a primary treatment for obsessive-compulsive disorder or post-traumatic stress disorder (Stahl, 1996). Buspirone is most useful for treatment of generalized anxiety disorder, and it is now frequently used as an adjunct to SSRIs (Lydiard et al., 1996). Like the antidepressants, buspirone takes 4 to 6 weeks to exert therapeutic effects, and it has little value when taken on an “as needed” basis.
Some patients with anxiety disorders may benefit from both psychotherapy and pharmacotherapy treatment modalities, either combined or used in sequence (March et al., 1997; American Psychiatric Association, 1998). Drawing from the experiences of depression researchers, it seems likely that such combinations are not uniformly necessary and are probably more cost-effective when reserved for patients with more complex, complicated, severe, or comorbid disorders. The benefits of multimodal therapies for anxiety need further study.
In 1 year, about 7 percent of Americans suffer from mood disorders, a cluster of mental disorders best recognized by depression or mania (Table 4-1). Mood disorders are outside the bounds of normal fluctuations from sadness to elation. They have potentially severe consequences for morbidity and mortality.
This section covers four mood disorders. As the predominant mood disorder, major depressive disorder (also known as unipolar major depression) garners the greatest attention. It is twice as common in women as in men, a gender difference that is discussed later in this section. The other mood disorders covered below are bipolar disorder, dysthymia, and cyclothymia.
Mood disorders rank among the top 10 causes of worldwide disability (Murray & Lopez, 1996). Unipolar major depression ranks first, and bipolar disorder ranks in the top 10. Moreover, disability and suffering are not limited to the patient. Spouses, children, parents, siblings, and friends experience frustration, guilt, anger, financial hardship, and, on occasion, physical abuse in their attempts to assuage or cope with the depressed person’s suffering. Women between the ages of 18 and 45 comprise the majority of those with major depression (Regier et al., 1993).
Depression also has a deleterious impact on the economy, both in diminished productivity and in use of health care resources (Greenberg et al., 1993). In the workplace, depression is a leading cause of absenteeism and diminished productivity. Although only a minority seek professional help to relieve a mood disorder, depressed people are significantly more likely than others to visit a physician for some other reason. Depression-related visits to physicians thus account for a large portion of health care expenditures. Seeking another or a less stigmatized explanation for their difficulties, some depressed patients undergo extensive and expensive diagnostic procedures and then get treated for various other complaints while the mood disorder goes undiagnosed and untreated (Wells et al., 1989).
Suicide is the most dreaded complication of major depressive disorders. About 10 to 15 percent of patients formerly hospitalized with depression commit suicide (Angst et al., 1999). Major depressive disorders account for about 20 to 35 percent of all deaths by suicide (Angst et al., 1999). Completed suicide is more common among those with more severe and/or psychotic symptoms, with late onset, with co-existing mental and addictive disorders (Angst et al., 1999), as well as among those who have experienced stressful life events, who have medical illnesses, and who have a family history of suicidal behavior (Blumenthal, 1988). In the United States, men complete suicide four times as often as women; women attempt suicide four times as frequently as do men (Blumenthal, 1988). Recognizing the magnitude of this public health problem, the Surgeon General issued a Call to Action on Suicide in 1999 (see Figure 4-1). Individuals with depression also face an increased risk of death from coronary artery disease (Glassman & Shapiro, 1998).
Mood disorders often coexist, or are comorbid, with other mental and somatic disorders. Anxiety is commonly comorbid with major depression. About one-half of those with a primary diagnosis of major depression also have an anxiety disorder (Barbee, 1998; Regier et al., 1998). The comorbidity of anxiety and depression is so pronounced that it has led to theories of similar etiologies, which are discussed below. Substance use disorders are found in 24 to 40 percent of individuals with mood disorders in the United States (Merikangas et al., 1998). Without treatment, substance abuse worsens the course of mood disorders. Other common comorbidities include personality disorders (DSM-IV) and medical illness, especially chronic conditions such as hypertension and arthritis. People with depression have a high prevalence (65 to 71 percent) of any of eight common chronic medical conditions (Wells et al., 1991). The mood disorders also may alter or “scar” personality development.
Figure 4-1. Surgeon General's Call to Action to Prevent Suicide–1999
People have been plagued by disorders of mood for at least as long as they have been able to record their experiences. One of the earliest terms for depression, “melancholy,” literally meaning “black bile,” dates back to Hippocrates. Since antiquity, dysphoric states outside the range of normal sadness or grief have been recognized, but only within the past 40 years or so have researchers had the means to study the changes in cognition and brain functioning that are associated with severe depressive states.
At some time or another, virtually all adults experience a tragic or unexpected loss, romantic heartbreak, or a serious setback, and with these events come times of profound sadness, grief, or distress. Indeed, something is awry if the usual expressions of sadness do not accompany such situations so common to the human condition—death of a loved one, severe illness, prolonged disability, loss of employment or social status, or a child’s difficulties, for example.
What is now called major depressive disorder, however, differs both quantitatively and qualitatively from normal sadness or grief. Normal states of dysphoria (a negative or aversive mood state) are typically less pervasive and generally run a more time-limited course. Moreover, some of the symptoms of severe depression, such as anhedonia (the inability to experience pleasure), hopelessness, and loss of mood reactivity (the ability to feel a mood uplift in response to something positive) only rarely accompany “normal” sadness. Suicidal thoughts and psychotic symptoms such as delusions or hallucinations virtually always signify a pathological state.
Nevertheless, many other symptoms commonly associated with depression are experienced during times of stress or bereavement. Among them are sleep disturbances, changes in appetite, poor concentration, and ruminations on sad thoughts and feelings. When a person suffering such distress seeks help, the diagnostician’s task is to differentiate the normal from the pathologic and, when appropriate, to recommend treatment.
The criteria for diagnosing major depressive episode, dysthymia, mania, and cyclothymia are presented in Tables 4-2 through 4-5. Mania is an essential feature of bipolar disorder, which is marked by episodes of mania or mixed episodes of mania and depression. The reliability of the diagnostic criteria for major depressive disorder and bipolar disorder is impressive, with greater than 90 percent agreement reached by independent evaluators (DSM-IV).
Major depressive disorder features one or more major depressive episodes (see Table 4-2), each of which lasts at least 2 weeks (DSM-IV). Since these episodes are also characteristic of bipolar disorder, the term “major depression” refers to both major depressive disorder and the depression of bipolar disorder.
The cardinal symptoms of major depressive disorder are depressed mood and loss of interest or pleasure. Other symptoms vary enormously. For example, insomnia and weight loss are considered to be classic signs, even though many depressed patients gain weight and sleep excessively. Such heterogeneity is partly dealt with by the use of diagnostic subtypes (or course modifiers) with differing presentations and prevalence. For example, a more severe depressive syndrome characterized by a constellation of classical signs and symptoms, called melancholia, is more common among older than among younger people, as are depressions characterized by psychotic features (i.e., delusions and hallucinations) (DSM-IV). In fact, the presentation of psychotic features without concomitant melancholia should always raise suspicion about the accuracy of the diagnosis (vis-à-vis schizophrenia or a related psychotic disorder). The so-called reversed vegetative symptoms (oversleeping, overeating, and weight gain) may be more prevalent in women than men (Nemeroff, 1992). Anxiety symptoms such as panic attacks, phobias, and obsessions also are not uncommon.
When untreated, a major depressive episode lasts, on average, about 9 months. Eighty to 90 percent of individuals will remit within 2 years of the first episode (Kapur & Mann, 1992). Thereafter, at least 50 percent of depressions will recur, and after three or more episodes the odds of recurrence within 3 years increase to 70 to 80 percent if the patient has not had preventive treatment (Thase & Sullivan, 1995). Thus, for many, an initial episode of major depression will evolve over time into the more recurrent illness sometimes referred to as unipolar major depression (Thase & Sullivan, 1995). Each new episode also confers new risks of chronicity, disability, and suicide.
Dysthymia is a chronic form of depression. Its early onset and unrelenting, “smoldering” course are among the features that distinguish it from major depressive disorder (DSM-IV). Dysthymia becomes so intertwined with a person’s self-concept or personality that the individual may be misidentified as “neurotic” (resulting from unresolved early conflicts expressed through unconscious personality defenses or characterologic disorders) (Akiskal, 1985). Indeed, the onset of dysthymia in childhood or adolescence undoubtedly affects personality development and coping styles, particularly prompting passive, avoidant, and dependent “traits.” To avoid the pejorative connotations associated with the terms “neurotic” and “characterologic,” the term “dysthymia” is used in DSM-IV as a descriptive, or atheoretical, diagnosis for a chronic form of depression (see Table 4-3) (DSM-IV). Affecting about 2 percent of the adult population in 1 year, dysthymia is defined by its subsyndromal nature (i.e., fewer than the five persistent symptoms required to diagnose a major depressive episode) and a protracted duration of at least 2 years for adults and 1 year for children. Like other early-onset disorders, dysthymic disorder is associated with higher rates of comorbid substance abuse. People with dysthymia also are susceptible to major depression. When this occurs, their illness is sometimes referred to as “double depression,” that is, the combination of dysthymia and major depression (Keller & Shapiro, 1982). Unlike the superimposed major depressive episode, however, the underlying dysthymia seldom remits spontaneously. Women are twice as likely to be diagnosed with dysthymia as men (Robins & Regier, 1991).
Table 4-2. DSM-IV criteria for major depressive episode
Table 4-3. DSM-IV diagnostic criteria for dysthymic disorder
Bipolar disorder is a recurrent mood disorder featuring one or more episodes of mania or mixed episodes of mania and depression (DSM-IV; Goodwin & Jamison, 1990). Bipolar disorder is distinguished from major depressive disorder by a history of manic or hypomanic (milder and not psychotic) episodes. Other differences concern the nature of depression in bipolar disorder. Its depressive episodes are typically associated with an earlier age at onset, a greater likelihood of reversed vegetative symptoms, more frequent episodes or recurrences, and a higher familial prevalence (DSM-IV; Goodwin & Jamison, 1990). Another noteworthy difference between bipolar and nonbipolar groups is the differential therapeutic effect of lithium salts, which are more helpful for bipolar disorder (Goodwin & Jamison, 1990).
The term mania derives ultimately from a Greek word meaning madness or frenzy. The mood disturbance can range from pure euphoria or elation to irritability to a labile admixture that also includes dysphoria (Table 4-4). Thought content is usually grandiose but also can be paranoid. Grandiosity usually takes the form both of overvalued ideas (e.g., “My book is the best one ever written”) and of frank delusions (e.g., “I have radio transmitters implanted in my head and the Martians are monitoring my thoughts”). Auditory and visual hallucinations complicate more severe episodes. Speed of thought increases, and ideas typically race through the manic person’s consciousness. Nevertheless, distractibility and poor concentration commonly impair implementation. Judgment also can be severely compromised; spending sprees, offensive or disinhibited behavior, and promiscuity or other objectively reckless behaviors are commonplace. Subjective energy, libido, and activity typically increase, but a perceived reduced need for sleep can sap physical reserves. Sleep deprivation also can exacerbate cognitive difficulties and contribute to the development of catatonia or a florid, confusional state known as delirious mania. If the manic patient is delirious, paranoid, or catatonic, the behavior is difficult to distinguish from that of a schizophrenic patient. Clinicians are prone to misdiagnose mania as schizophrenia in African Americans (Bell & Mehta, 1981). Most people with bipolar disorder have a history of remission and at least satisfactory functioning before onset of the index episode of illness.
In DSM-IV, bipolar depressions are divided into type I (prior mania) and type II (prior hypomanic episodes only). About 1.1 percent of the adult population suffers from the type I form, and 0.6 percent from the type II form (Goodwin & Jamison, 1990; Kessler et al., 1994) (Table 4-5). Episodes of mania occur, on average, every 2 to 4 years, although accelerated mood cycles can occur annually or even more frequently. The type I form of bipolar disorder is about equally common in men and women, unlike major depressive disorder, which is more common in women.
Hypomania, as suggested above, is the subsyndromal counterpart of mania (DSM-IV; Goodwin & Jamison, 1990). By definition, an episode of hypomania is never psychotic, nor is it associated with marked impairments in judgment or performance. In fact, some people with bipolar disorder long for the productive energy and heightened creativity of the hypomanic phase.
Hypomania can be a transitional state (i.e., early in an episode of mania), although at least 50 percent of those who have hypomanic episodes never become manic (Goodwin & Jamison, 1990). Whereas a majority have a history of major depressive episodes (bipolar type II disorder), others become hypomanic only during antidepressant treatment (Goodwin & Jamison, 1990). Despite the relatively mild nature of hypomania, the prognosis for patients with bipolar type II disorder is poorer than that for recurrent (unipolar) major depression, and there is some evidence that the risk of rapid cycling (four or more episodes each year) is greater than with bipolar type I (Coryell et al., 1992). Women are at higher risk for rapid cycling bipolar disorder than men (Coryell et al., 1992). Women with bipolar disorder are also at increased risk for an episode during pregnancy and the months following childbirth (Blehar et al., 1998).
Table 4-4. DSM-IV criteria for manic episode
Table 4-5. DSM-IV diagnostic criteria for cyclothymic disorder
Cyclothymia is marked by manic and depressive states, yet neither is of sufficient intensity or duration to merit a diagnosis of bipolar disorder or major depressive disorder. The diagnosis of cyclothymia is appropriate if there is a history of hypomania but no prior episodes of mania or major depression (Table 4-5). Longitudinal followup studies indicate that the risk of bipolar disorder developing in patients with cyclothymia is about 33 percent; although 33 times greater than that for the general population, this rate of risk still is too low to justify viewing cyclothymia as merely an early manifestation of bipolar type I disorder (Howland & Thase, 1993).
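As a quick consistency check on these figures (the arithmetic below is derived from the numbers just cited, not a calculation reported in the source), a 33 percent risk that is 33 times the general-population risk implies a baseline risk of roughly

$$\frac{0.33}{33} = 0.01 = 1\%,$$

which is of the same order as the approximately 1.1 percent prevalence of bipolar type I disorder cited earlier.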
Mood disorders are sometimes caused by general medical conditions or medications. Classic examples include the depressive syndromes associated with dominant hemispheric strokes, hypothyroidism, Cushing’s disease, and pancreatic cancer (DSM-IV). Among medications associated with depression, antihypertensives and oral contraceptives are the most frequent examples. Transient depressive syndromes are also common during withdrawal from alcohol and various other drugs of abuse. Mania is not uncommon during high-dose systemic therapy with glucocorticoids and has been associated with intoxication by stimulant and sympathomimetic drugs and with central nervous system (CNS) lupus, CNS human immunodeficiency viral (HIV) infections, and nondominant hemispheric strokes or tumors. Together, mood disorders due to known physiological or medical causes may account for as many as 5 to 15 percent of all treated cases (Quitkin et al., 1993b). They often go unrecognized until after standard therapies have failed.
A challenge to diagnosticians is to balance their search for relatively uncommon disorders with their sensitivity to aspects of the medical history or review of symptoms that might have etiologic significance. For example, the onset of a depressive episode a few weeks or months after the patient has begun taking a new blood-pressure medication should raise the physician’s index of suspicion. Ultimately, occult or covert medical illnesses must always be considered when an apparently clear-cut case of a mood disorder is refractory to standard treatments (Depression Guideline Panel, 1993). Cultural influences on the manifestation and diagnosis of depression are also important for the diagnostician to identify (DSM-IV). As discussed in Chapter 2, somatization, the expression of mental distress in terms of physical suffering, is especially prevalent in individuals from ethnic minority backgrounds (Lu et al., 1995).
The etiology of depression, the mood disorder most frequently studied, is far from fully understood. Many cases of depression are triggered by stressful life events, yet not everyone who encounters such events becomes depressed. The intensity and duration of these events, as well as each individual’s genetic endowment, coping skills and reactions, and social support network, contribute to the likelihood of depression. That is why depression and many other mental disorders are broadly described as the product of a complex interaction between biological and psychosocial factors (see Chapter 2). The relative importance of biological and psychosocial factors may vary across individuals and across different types of depression.
This section of the chapter describes the biological, genetic, and psychosocial factors—such as cognition, personality, and gender—that correlate with, or predispose to, depression. The discussion of genetic factors also incorporates the latest findings about bipolar disorder. Genes are implicated even more strongly in bipolar disorder than they are in major depression, galvanizing a worldwide search to identify chromosomal regions where genes may be located and ultimately to pinpoint the genes themselves (NIMH, 1998).
Much of the scientific effort expended over the past 40 years on the study of depression has been devoted to the search for biologic alterations in brain function. From the beginning, it has been recognized that the clinical heterogeneity of depressive disorders may preclude the possibility of finding a single defect. Researchers have detected abnormal concentrations of many neurotransmitters and their metabolites in urine, plasma, and cerebrospinal fluid in subgroups of patients (Thase & Howland, 1995); dysregulation of the HPA axis (Thase & Howland, 1995); elevated levels of corticotropin-releasing factor (Nemeroff, 1992, 1998; Mitchell, 1998); and, most recently, abnormalities in second messenger systems, as well as abnormalities revealed by neuroimaging (Drevets, 1998; Rush et al., 1998; Steffens & Krishnan, 1998). Much current research focuses on how the biological abnormalities interrelate, how they correlate with behavioral and emotional patterns that seem to distinguish one subcategory of major depression from another, and how they respond to diverse forms of therapy.
In the search for biological changes with depression, it must be understood that a biological abnormality reliably associated with depression may not actually be a causal factor. For example, a biologic alteration could be a consequence of sleep deprivation or weight loss. Any biological abnormality found in conjunction with any mental disorder may be a cause, a correlate, or a consequence, as discussed in Chapter 2. What drives research is the determination to find which of the biological abnormalities in depression are true causes, especially ones that might be detectable and treatable before the onset of clinical symptoms.
For many years the prevailing hypothesis was that depression was caused by an absolute or relative deficiency of monoamine transmitters in the brain. This line of research was bolstered by the discovery many years ago that reserpine, a medication for hypertension, inadvertently caused depression. It did so by depleting the brain of both serotonin and the three principal catecholamines (dopamine, norepinephrine, and epinephrine). Such findings led to the “catecholamine hypothesis” and the “indoleamine (i.e., serotonin) hypothesis,” which in due course led to an integrated “monoamine hypothesis” (Thase & Howland, 1995).
After more than 30 years of research, however, the monoamine hypothesis has been found insufficient to explain the complex etiology of depression. One problem is that many other neurotransmitter systems are altered in depression, including GABA and acetylcholine (Rush et al., 1998). Another problem is that improvement of monoamine neurotransmission with medications, and the lifting of the clinical signs of depression, do not prove that depression actually is caused by defective monoamine neurotransmission. For example, diuretic medications do not specifically correct the physiological defect underlying congestive heart failure, but they do treat its symptoms. Furthermore, neither impairment of monoamine synthesis nor excessive degradation of monoamines is consistently present in association with depression; monoamine precursors do not have consistent antidepressant effects; and there is a definite temporal lag between the quick elevation in monoamine levels after medication is begun and the relief of symptoms, which does not emerge until weeks later (Duman et al., 1997). To account for these discrepancies, one new model of depression proposes that depression results from reductions in neurotrophic factors that are necessary for the survival and function of particular neurons, especially those found in the hippocampus (Duman et al., 1997).
Despite the problems with the hypothesis that monoamine depletion is the primary cause of depression, monoamine impairment is certainly one of the manifestations, or correlates, of depression. Therefore, the monoamine hypothesis remains important for treatment purposes. Many currently available pharmacotherapies that relieve depression or cause mania, or both, enhance monoamine activity. One of the foremost classes of drugs for depression, SSRIs, for example, boost the level of serotonin in the brain.
An important shortcoming of the monoamine hypothesis was its inattention to the psychosocial risk factors that influence the onset and persistence of depressive episodes. The nature and interpretation of, and the response to, stress clearly have important causal roles in depression. The following discussion illustrates ongoing work aimed at understanding the pathophysiology of depression. While incomplete, it offers a coherent integration of the biological, psychological, and social factors that have long been associated clinically with this disorder.
Many decades ago, Hans Selye demonstrated the damaging effects of chronic stress on the HPA axis, the gastrointestinal tract, and the immune system of rats: adrenal hypertrophy, gastric ulceration, and involution of the thymus and lymph nodes (Selye, 1956). Since that time, researchers have provided ample evidence that brain function, and perhaps even anatomic structure, can be influenced by stress, interpretation of stress, and learning (Weiss, 1991; Sapolsky, 1996; McEwen, 1998). Much current research has been directed at stress, the HPA axis, and CRH in the genesis of depression.
Depression can be the outcome of severe and prolonged stress (Brown et al., 1994; Frank et al., 1994; Ingram et al., 1998). The acute stress response is characterized by heightened arousal—the fight-or-flight response—that entails mobilization of the sympathetic nervous system and the HPA axis (see Etiology of Anxiety). Many aspects of the acute stress response are exaggerated, persistent, or dysregulated in depression (Thase & Howland, 1995). Increased activity in the HPA axis in depression is viewed as the “most venerable finding in all of biological psychiatry” (Nemeroff, 1998).
Increased activity of the HPA axis, however, may be secondary to more primary causes, as was the problem with the monoamine hypothesis of depression. For this reason, much attention has been focused on CRH, which is hypersecreted in depression (Nemeroff, 1992, 1998). CRH is the neuropeptide that is released by the hypothalamus to activate the pituitary in the acute stress response. Yet there are many other sources of CRH in the brain.
CRH injections into the brains of laboratory animals produce the signs and symptoms found in depressed patients, including decreased appetite and weight loss, decreased sexual behavior and sleep, and other changes (Sullivan et al., 1998). Furthermore, CRH is found in higher concentrations in the cerebrospinal fluid of depressed patients (Nemeroff, 1998). In autopsy studies of depressed patients, CRH gene expression is elevated, and there are greater numbers of hypothalamic neurons that express CRH (Nemeroff, 1998). These findings have ignited research to uncover how CRH expression in the hypothalamus is regulated, especially by other brain centers such as the hippocampus (Mitchell, 1998). The hippocampus exerts control over the HPA axis through feedback inhibition (Jacobson & Sapolsky, 1991). Shedding light on the regulation of CRH is expected to pay dividends for understanding both anxiety and depression.
Anxiety and depression frequently coexist, so much so that patients with combinations of anxiety and depression are the rule rather than the exception (Barbee, 1998). And many of the medications used to treat either one are often used to treat the other. Why are anxiety and depression so interrelated?
Clues to answering this question are expected to come from similarities in antecedents, correlates, and consequences of each condition. Certainly, stressful events are frequent, although not universal, antecedents. Overlapping biochemical correlates are found, most notably, an elevation in CRH (Arborelius et al., 1999). Interestingly, one new line of research finds that long-term consequences of anxiety and depression are evident at the same anatomical site—the hippocampus. Human imaging studies of the hippocampus revealed it to have smaller volume in patients with post-traumatic stress disorder (McEwen, 1998) and in patients with recurrent depression (Sheline, 1996). In the latter study, the degree of volume reduction was correlated with the duration of major depression. In both conditions, excess glucocorticoid exposure was thought to be the culprit in inducing the atrophy of hippocampal neurons. But the complete chain of events leading up to and following the hippocampal damage is not yet known.
If stressful events are the proximate causes of most cases of depression, then why is it that not all people become depressed in the face of stressful events? The answer appears to be that social, psychological, and genetic factors act together to predispose to, or protect against, depression. This section first discusses stressful life events, followed by a discussion of the factors that shape our responses to them.
Adult life can be rife with stressful events, as noted earlier, and although not all people with depression can point to some precipitating event, many episodes of depression are associated with some sort of acute or chronic adversity (Brown et al., 1994; Frank et al., 1994; Ingram et al., 1998).
The death of a loved one is viewed as one of the most powerful life stressors. The grief that ensues is a universal experience. Common symptoms associated with bereavement include crying spells, appetite and weight loss, and insomnia. Grief, in fact, has such emotional impact that the diagnosis of depressive disorder should not be made unless there are definite complications such as incapacity, psychosis, or suicidal thoughts.
The compelling impact of past parental neglect, physical and sexual abuse, and other forms of maltreatment on both adult emotional well-being and brain function is now firmly established, particularly for depression. Early disruption of attachment bonds can lead to enduring problems in developing and maintaining interpersonal relationships, as well as to problems with depression and anxiety. Research in animals bears this out as well. In both rodents and primates, maternal deprivation stresses young animals, and a pattern of repeated, severe, early trauma from maternal deprivation may predispose an animal to a lifetime of overreactivity to stress (Plotsky et al., 1995). Conversely, early experience with mild, nontraumatic stressors (such as gentle handling) may help to protect or “immunize” animals against more pathologic responses to subsequent severe stress.
According to cognitive theories of depression, how individuals view and interpret stressful events contributes to whether or not they become depressed. One prominent theory of depression stems from studies of learned helplessness in animals. The theory posits that depression arises from a cognitive state of helplessness and entrapment (Seligman, 1991). The theory was predicated on experiments in which animals were trained in an enclosure in which shocks were unavoidable and inescapable, regardless of avoidance measures that animals attempted. When they later were placed in enclosures in which evasive action could have succeeded, the animals were inactive, immobile, and unable to learn avoidance maneuvers. The earlier experience engendered a behavioral state of helplessness, one in which actions were seen as ineffectual.
In humans there is now ample evidence that the impact of a stressor is moderated by the personal meaning of the event or situation. In other words, the critical factor is the person’s interpretation of the stressor’s potential impact. Thus, an event interpreted as a threat or danger elicits a nonspecific stress response, and an event interpreted as a loss (of either an attachment bond or a sense of competence) elicits more grief-like depressive responses.
Heightened vulnerability to depression is linked to a constellation of cognitive patterns that predispose to distorted interpretations of a stressful event (Ingram et al., 1998). For example, a romantic breakup will trigger a much stronger emotional response if the affected person believes, “I am incomplete and empty without her love,” or “I will never find another who makes me feel the way he does.” The cognitive patterns associated with distorted interpretation of stress include relatively harsh or rigid beliefs or attitudes about the importance of romantic love or achievement (again, the centrality of love and work) as well as the tendency to attribute three specific qualities to adverse events: (1) global impact–“This event will have a big effect on me”; (2) internality–“I should have done something to prevent this,” or “This is my fault”; and (3) irreversibility–“I’ll never be able to recover from this.”
According to a recent model of cognitive vulnerability to depression, negative cognitions by themselves are not sufficient to engender depression. This model postulates, on the basis of previously gathered empirical evidence, that interactions between negative cognitions and mildly depressed mood are important in the etiology and recurrences of depression. Patterns or styles of thinking stem from prior negative experiences. When they are activated by adverse life events and a mildly depressed mood, a downward spiral ensues, leading to depression (Ingram et al., 1998).
Responses to life events also can be linked to personality (Hirschfeld & Shea, 1992). Personality may be understood in terms of one’s attitudes and beliefs as well as more enduring neurobehavioral predispositions referred to as temperaments. The study of personality and temperament is gaining momentum. Neuroticism (a temperament discussed earlier in this chapter) predisposes to anxiety and depression (Clark et al., 1994). Having an easy-going temperament, on the other hand, protects against depression (IOM, 1994). Further, those with severe personality disorder are particularly likely to have a history of early adversity or maltreatment (Browne & Finkelhor, 1986).
Temperaments are not destiny, however. Parental influences and individual life experiences may determine whether a shy child remains vulnerable or becomes a healthy, albeit somewhat reserved, adult. In adults, several constellations of personality traits are associated with mood disorders: avoidance, dependence, and traits such as reactivity and impulsivity (Hirschfeld & Shea, 1992). People who have such personality traits not only cope less effectively with stressors but also tend to provoke or elicit adversity. A personality disorder or temperamental disturbance may mediate the relationship between stress and depression.
Major depressive disorder and dysthymia are more prevalent among women than men, as noted earlier. This difference appears in different cultures throughout the world (Weissman et al., 1993). The explanation for this gender-related difference is complex; it likely involves the interaction of biological and psychosocial factors (Blumenthal, 1994a), including differences in exposure to stressful life events as well as in personality (Nolen-Hoeksema et al., in press).
Keys to understanding the sex-related difference in rates in the United States may be found in two types of epidemiologic findings: (1) there are no sex-related differences in rates of bipolar disorder (type I) (NIMH, 1998), and (2) within the agrarian culture of the Old Order Amish of Lancaster, Pennsylvania, the rate of major depressive disorder is both low (i.e., comparable to that of bipolar disorder) and equivalent for men and women (Egeland et al., 1983). Something about the environment thus appears to interact with a woman’s biology to cause a disproportionate incidence of depressive episodes among women (Blumenthal, 1994a).
Research conducted in working-class neighborhoods suggests that the combination of life stress and inadequate social support contributes to women’s greater susceptibility to depressive symptoms (Brown et al., 1994). Because women tend to use more ruminative ways of coping (e.g., thinking and talking about a problem, rather than seeking out a distracting activity) and, on average, have less economic power, they may be more likely to perceive their problems as less solvable. That perception increases the likelihood of feeling helpless or entrapped by one’s problem. Subtle sex-related differences in hemispheric processing of emotional material may further predispose women to experience emotional stressors more intensely (Baxter et al., 1987). Women are also more likely than men to have experienced past sexual abuse; as noted earlier in this chapter, physical and sexual abuse is strongly associated with the subsequent development of major depressive disorder. Women’s greater vulnerability to depression may be amplified by endocrine and reproductive cycling, as well as by a greater susceptibility to hypothyroidism (Thase & Howland, 1995). Menopause, on the other hand, has little bearing on gender differences in depression. Contrary to popular beliefs, menopause does not appear to be associated with increased rates of depression in women (Pearlstein et al., 1997). Untreated mental health problems are likely to worsen at menopause, but menopause by itself is not a risk factor for depression (Pearce et al., 1995; Thacker, 1997). The increased risk for depression prenatally or after childbirth suggests a role for hormonal influences, although evidence also exists for the role of stressful life events. In short, psychosocial and environmental factors likely interact with biological factors to account for greater susceptibility to depression among women.
Poor young women (white, black, and Hispanic) appear to be at the greatest risk for depression compared with all other population groups (Miranda & Green, 1999). They have disproportionately higher rates of past exposure to trauma, including rape, sexual abuse, crime victimization, and physical abuse; poorer support systems; and greater barriers to treatment, including financial hardship and lack of insurance (Miranda & Green, 1999). Many of the same problems apply to single mothers, whose risk of depression is double that of married mothers (Brown & Moran, 1997).
The interaction among stressful life events, individual experiences, and genetic factors also plays a role in the etiology of depression in women. Some research suggests that genetic factors, which are discussed below, may alter women's sensitivity to the depression-inducing effect of stressful life events (Kendler et al., 1995). A recent report of depression in a sample of 2,662 twins found genetic factors in depression to be stronger for women than for men, in whom depression was only weakly familial. For both genders, individual environmental experiences played a large role in depression (Bierut et al., 1999).
Depression, and especially bipolar disorder, clearly tends to "run in families," and a definite association has been scientifically established (Tsuang & Faraone, 1990). Numerous investigators have documented that susceptibility to a depressive disorder is twofold to fourfold greater among the first-degree relatives of patients with mood disorders than among other people (Tsuang & Faraone, 1990). The risk among first-degree relatives of people with bipolar disorder is about six to eight times greater. Some evidence indicates that first-degree relatives of people with mood disorders are also more susceptible than other people to anxiety and substance abuse disorders (Tsuang & Faraone, 1990).
Remarkable as those statistics may be, they do not by themselves prove a genetic connection. Inasmuch as first-degree relatives typically live in the same environment, share similar values and beliefs, and are subject to similar stressors, the vulnerability to depression could be due to nurture rather than nature. One method to distinguish environmental from genetic factors is to compare concordance rates among same-sex twins. At least in terms of simple genetic theory, a solely hereditary trait that appears in one member of a set of identical (monozygotic) twins also should always appear in the other twin, whereas the trait should appear only 50 percent of the time in same-sex fraternal (dizygotic) twins.
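To make this logic concrete, consider a rough, purely illustrative calculation, not drawn from the studies cited in this chapter. Falconer's classic approximation estimates the heritability of a trait from the gap between monozygotic and dizygotic twin resemblance, treating concordance rates loosely as correlations:

h^{2} \approx 2\,(r_{MZ} - r_{DZ})

If a disorder showed a hypothetical concordance of 0.60 in monozygotic twins and 0.30 in dizygotic twins, then h^{2} \approx 2(0.60 - 0.30) = 0.60, suggesting that roughly 60 percent of the variation in liability is genetic. Published heritability estimates for mood disorders rest on more elaborate liability-threshold models, but the underlying comparison is the same.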
The results of studies comparing the prevalence of depression among twins vary, depending on the specific mood disorder, the age of the study population, and the way depression is defined. In all instances, however, the reported concordance for mood disorders is greater among monozygotic than among dizygotic twins, often by a ratio of 2 to 1 (Tsuang & Faraone, 1990). In Denmark, Bertelsen and colleagues (1977) found that among 69 monozygotic twins with bipolar illness, 46 co-twins also had bipolar disorder, and 14 other co-twins had psychoses or affective personality disorders or had died by suicide. Studies of monozygotic twins reared separately ("adopted away") also revealed an increased risk of depression and bipolar disorder compared with controls (Mendlewicz & Rainer, 1977; Wender et al., 1986). Within the major depressive disorder grouping, greater heritable risk has been associated with more severe, recurrent, or psychotic forms of mood disorders (Tsuang & Faraone, 1990). Those at greater heritable risk also appear more vulnerable to stressful life events (Kendler et al., 1995).
The availability of modern molecular genetic methods now allows the translation of clinical associations into the identification of specific genes (McInnis, 1993; Baron, 1997). Evidence collected to date strongly suggests that vulnerability to mood disorders is associated with several genes distributed among various chromosomes. For bipolar disorder, numerous distinct chromosomal regions (called loci) show promise, yet the complex nature of inheritance and methodological problems have hampered investigators (Baron, 1997). Heritability in some cases may be sex-linked or may vary depending on whether the affected parent is the father or mother of the individual being studied. The genetic process of anticipation (which has been associated with an expansion of trinucleotide repeats) may further alter the expression of illness across generations (McInnis, 1993). Thus, the genetic complexities of the common depressive disorders ultimately may rival their clinical heterogeneity (Tsuang & Faraone, 1990).
Based on a comprehensive review of the genetics literature, the National Institute of Mental Health Genetics Workgroup recently evaluated several mood disorders according to their readiness for large-scale genetics research initiatives. Bipolar disorder was rated in the highest category, meaning that the evidence was strong enough to justify large-scale molecular genetic studies. Depression, eating disorders, obsessive-compulsive disorder, and panic disorder were rated in the second highest category, which called for nonmolecular genetic and/or epidemiological studies to document further their estimated heritability (NIMH, 1998).
So much is known about the assortment of pharmacological and psychosocial treatments for mood disorders that the most salient problem is not with treatment, but rather with getting people into treatment.
Surveys consistently document that a majority of individuals with depression receive no specific form of treatment (Katon et al., 1992; Narrow et al., 1993; Wells et al., 1994; Thase, 1996). Nearly 40 percent of people with bipolar disorder go untreated in a given year, according to the Epidemiologic Catchment Area survey (Regier et al., 1993). Undertreatment of mood disorders stems from many factors, including societal stigma, financial barriers to treatment, underrecognition by health care providers, and underappreciation by consumers of the potential benefits of treatment (e.g., Regier et al., 1988; Wells et al., 1994; Hirschfeld et al., 1997). The symptoms of depression themselves, such as feelings of worthlessness, excessive guilt, and lack of motivation, also deter consumers from seeking treatment; and members of racial and ethnic minority groups often encounter special barriers, as discussed in Chapter 2.
Mood disorders have profoundly deleterious consequences for well-being: their toll on quality of life and economic productivity matches that of heart disease and exceeds that of peptic ulcer, arthritis, hypertension, or diabetes (Wells et al., 1989).
The treatment of mood disorders is complex because it proceeds in several stages: acute, continuation, and maintenance. These stages apply to pharmacotherapy and psychosocial therapy alike. Most patients pass through all of them on the way to restored full functioning.
Acute phase treatment with either psychotherapy or pharmacotherapy covers the time period leading up to an initial treatment response. A treatment response is defined by a significant reduction (i.e., > 50 percent) in symptom severity, such that the patient no longer meets syndromal criteria for the disorder (Frank et al., 1991b). The acute phase for medication typically requires 6 to 8 weeks (Depression Guideline Panel, 1993), during which patients are seen weekly or biweekly for monitoring of symptoms, side effects, dosage adjustments, and support (Fawcett et al., 1987). Psychotherapies during the acute phase for depression typically consist of 6 to 20 weekly visits.
Outpatient Treatment. In outpatient clinical trials, about 50 to 70 percent of depressed patients who complete treatment respond to either antidepressants or psychotherapies (Depression Guideline Panel, 1993). An acute treatment response includes the effects of placebo expectancy, spontaneous remission, and active treatment. The magnitude of the active treatment effect may be estimated from randomized clinical trials by subtracting the placebo response rate from that of active medication. Overall, the active treatment effect for major depression typically ranges from 20 to 40 percent, after accounting for a placebo response rate of about 30 percent (Depression Guideline Panel, 1993). Although psychotherapy trials do not employ placebos in the form of an inert pill, they do rely on comparisons of active treatment with psychological placebos (e.g., a form of therapy inappropriate for a given disorder), a comparison form of treatment, or a wait list (i.e., no therapy). The figures cited above must be understood as rough averages; the efficacy of specific pharmacotherapies and psychotherapies is covered later in this section.
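As a worked illustration of this subtraction, using hypothetical round numbers consistent with the ranges above: if 60 percent of patients respond to active medication and 30 percent respond to placebo, the estimated active treatment effect is

60\% - 30\% = 30 \text{ percentage points},

squarely within the 20 to 40 percent range cited.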
Acute phase therapy is often compromised by patients leaving treatment. Attrition rates from clinical trials often are as high as 30 to 40 percent, and rates of nonadherence10 are even higher (Depression Guideline Panel, 1993). Medication side effects are one factor; others include inadequate psychoeducation (resulting in unrealistic expectations about treatment), ambivalence about seeing a therapist or taking medication, and practical roadblocks (e.g., the cost or accessibility of services).
Another problem is clinician failure to monitor symptomatic response and to change treatments in a timely manner. Antidepressants should be changed if there is no clear effect within 4 to 6 weeks (Nierenberg et al., 1995; Quitkin et al., 1996). Similar data are not available for psychotherapies, but revisions to the treatment plan should be considered, including the addition of antidepressant medication, if there is no symptomatic improvement within 3 or 4 months (Depression Guideline Panel, 1993).
Acute Inpatient Treatment. Hospitalization for acute treatment of depression is necessary for about 5 to 10 percent of major depressive episodes and for up to 50 percent of manic episodes. The principal reasons for hospitalization are overwhelming symptom severity with functional incapacity, and suicidal or other life-threatening behavior. Median hospital lengths of stay are now about 5 to 7 days for depression and 9 to 14 days for mania. Such abbreviated stays have reduced costs but necessitate greater transitional or aftercare services; few severely depressed or manic people are in remission after only 1 to 2 weeks of treatment.
Electroconvulsive Therapy. As described above, first-line treatment for most people with depression today consists of antidepressant medication, psychotherapy, or the combination (Potter et al., 1991; Depression Guideline Panel, 1993). In situations where these options are ineffective or too slow (for example, in a person with delusional depression and intense, unremitting suicidality), electroconvulsive therapy (ECT) may be considered. ECT, sometimes referred to as electroshock or shock treatment, was developed in the 1930s based on the mistaken belief that epilepsy (seizure disorder) and schizophrenia could not exist at the same time in an individual. Accumulated clinical experience—later confirmed in controlled clinical trials, which included the use of simulated or "sham" ECT as a control (Janicak et al., 1985)—determined ECT to be highly effective against severe depression, some acute psychotic states, and mania (Small et al., 1988). No controlled study has shown any other treatment to have superior efficacy to ECT in the treatment of depression (Janicak et al., 1985; Rudorfer et al., 1997). ECT has not been demonstrated to be effective in dysthymia, substance abuse, or anxiety or personality disorders. The foregoing conclusions, and many of those discussed below, are products of extensive research reviews conducted over several decades (Depression Guideline Panel, 1993; Rudorfer et al., 1997), as well as of an independent panel of scientists, practitioners, and consumers (NIH & NIMH Consensus Conference, 1985).
ECT consists of a series of brief generalized seizures induced by passing an electric current through the brain by means of two electrodes placed on the scalp. A typical course of ECT entails 6 to 12 treatments, administered at a rate of three times per week, on either an inpatient or outpatient basis. The exact mechanisms by which ECT exerts its therapeutic effect are not yet known. The production of an adequate, generalized seizure, using the proper amount of electrical stimulation at each treatment session, is required for therapeutic efficacy (Sackeim et al., 1993).
With the development of effective medications for the treatment of major mental disorders a half-century ago, the need for ECT lessened but did not disappear. Prior to that time, ECT often had been administered for a variety of conditions for which it is not effective, and administered without anesthesia or neuromuscular blockade; the result was unmodified grand mal seizures that could produce injuries, including fractures. Despite the availability of a range of effective antidepressant medications and psychotherapies, as discussed above, ECT continues to be used (Rosenbach et al., 1997), occupying a narrower but important niche. It is generally reserved for special circumstances where the usual first-line treatments are ineffective or cannot be taken, or where ECT is known to be particularly beneficial, such as depression or mania accompanied by psychosis or catatonia (NIH & NIMH Consensus Conference, 1985; Depression Guideline Panel, 1993; Potter & Rudorfer, 1993). Examples of specific indications include depression unresponsive to multiple medication trials, and depression accompanied by a physical illness or pregnancy that renders a usually preferred antidepressant dangerous to the patient or to a developing fetus. Under such circumstances, after careful weighing of risks and benefits, ECT may be the safest treatment option for severe depression; it should be administered under controlled conditions, with appropriate personnel (Rudorfer et al., 1997).
Although the average 60 to 70 percent response rate seen with ECT is comparable to that obtained with pharmacotherapy, there is evidence that the antidepressant effect of ECT occurs faster than that of medication, encouraging the use of ECT where depression is accompanied by potentially uncontrollable suicidal ideas and actions (Rudorfer et al., 1997). However, ECT does not provide long-term protection against suicide. Indeed, it is now recognized that a single course of ECT should be regarded as a short-term treatment for an acute episode of illness. To sustain the response to ECT, continuation treatment, often in the form of antidepressant and/or mood stabilizer medication, must be instituted (Sackeim, 1994). Individuals who repeatedly relapse following ECT despite continuation medication may be candidates for maintenance ECT, delivered on an outpatient basis at a rate of one treatment weekly to as infrequently as monthly (Sackeim, 1994; Rudorfer et al., 1997).
The major risks of ECT are those of brief general anesthesia, which was introduced along with muscle relaxation and oxygenation to protect against injury and to reduce patient anxiety. There are virtually no absolute health contraindications precluding its use where warranted (Potter & Rudorfer, 1993; Rudorfer et al., 1997).
The most common adverse effects of this treatment are confusion and memory loss for events surrounding the period of ECT treatment. The confusion and disorientation seen upon awakening after ECT typically clear within an hour. More persistent memory problems are variable. Most typical with standard, bilateral electrode placement (one electrode on each side of the head) has been a pattern of loss of memories for the period of the ECT series, extending back an average of 6 months, combined with impaired learning of new information that continues for perhaps 2 months following ECT (NIH & NIMH Consensus Conference, 1985). Well-designed neuropsychological studies have consistently shown that by several months after completion of ECT, the ability to learn and remember is normal (Calev, 1994). Although most patients return to full functioning following successful ECT, the degree of post-treatment memory impairment and resulting impact on functioning are highly variable across individuals (NIH & NIMH Consensus Conference, 1985; CMHS, 1998). Severe post-ECT memory impairment is clearly the exception rather than the rule, but no reliable data on its incidence are available. Fears that ECT causes gross structural brain pathology have not been supported by decades of methodologically sound research in both humans and animals (NIH & NIMH Consensus Conference, 1985; Devanand et al., 1994; Weiner & Krystal, 1994; Greenberg, 1997; CMHS, 1998). The decision to use ECT must be evaluated for each individual, weighing the potential benefits and known risks of all available and appropriate treatments in the context of informed consent (NIH & NIMH Consensus Conference, 1985).
Advances in treatment technique over the past generation have reduced the adverse cognitive effects of ECT (NIH & NIMH Consensus Conference, 1985; Rudorfer et al., 1997). Nearly all ECT devices now deliver lower-current, brief-pulse electrical stimulation rather than the original sine wave output; with a brief-pulse electrical wave, a therapeutic seizure may be induced with as little as one-third the electrical power of the older method, thereby reducing the potential for confusion and memory disturbance (Andrade et al., 1998). Placement of both stimulus electrodes on one side of the head ("unilateral" ECT), over the nondominant (generally right) cerebral hemisphere, delivers the initial electrical stimulation away from the primary learning and memory centers. According to several controlled trials, unilateral ECT is associated with virtually no detectable, persistent memory loss (Horne et al., 1985; NIH Consensus Conference, 1985; Rudorfer et al., 1997). However, most clinicians find unilateral ECT a less potent and more slowly acting intervention than conventional bilateral ECT, particularly in the most severe cases of depression or mania. One approach sometimes used is to begin a trial of ECT with unilateral electrode placement and switch to bilateral treatment after about six treatments if there has been no response. Research has demonstrated that the relationship of electrical dose to clinical response differs depending on electrode placement: for bilateral ECT, as long as an adequate seizure is obtained, any additional dosage merely adds to the cognitive toxicity, whereas for unilateral placement, a therapeutic effect will not be achieved unless the electrical stimulus is more than minimally above the seizure threshold (Sackeim et al., 1993). Even a moderately high electrical dosage in unilateral ECT still has fewer cognitive adverse effects than bilateral ECT. On the other hand, high-dose bilateral ECT may be unnecessarily risky and a preventable cause of severe memory impairment. Some types of medication, such as lithium, add to confusion and cognitive impairment when given during a course of ECT and are best avoided. Medications that raise the seizure threshold and make it harder to obtain a therapeutic effect from ECT, including anticonvulsants and some minor tranquilizers, may also need to be tapered or discontinued.
Informed consent is an integral part of the ECT process (NIH & NIMH Consensus Conference, 1985). The potential benefits and risks of this treatment, and of available alternative interventions, should be carefully reviewed and discussed with patients and, where appropriate, family or friends. Prospective candidates for ECT should be informed, for example, that its benefits are short-lived without active continuation treatment, and that there may be some risk of permanent severe memory loss after ECT. In most cases of depression, the benefit-to-risk ratio will favor the use of medication and/or psychotherapy as the preferred course of action (Depression Guideline Panel, 1993). Where medication has not succeeded, or is fraught with unusual risk, or where the potential benefits of ECT are great, such as in delusional depression, the balance of potential benefits to risks may tilt in favor of ECT. Active discussion with the treatment team, supplemented by the growing number of printed and videotaped information packages for consumers, is necessary in the decisionmaking process, both prior to and throughout a course of ECT. Consent may be revoked at any time during a series of ECT sessions.
Although many people have fears related to stories of forced ECT in the past, the use of this modality on an involuntary basis today is uncommon. Involuntary ECT may not be initiated by a physician or family member; in every state, its administration requires a judicial proceeding at which patients may be represented by legal counsel. As a rule, such petitions are granted only where the prompt institution of ECT is regarded as potentially lifesaving, as in the case of a person who is in grave danger because of lack of food or fluid intake caused by catatonia. Recent epidemiological surveys show that the modern use of ECT is generally limited to evidence-based indications (Hermann et al., 1999). Indeed, concern has been raised that in some settings, particularly in the public sector and outside major metropolitan areas, ECT may be underutilized because of wide variability in its availability across the country (Hermann et al., 1995). Consequently, minority patients tend to be underrepresented among those receiving ECT (Rudorfer et al., 1997).
On balance, the evidence supports the conclusion that modern ECT, when used in accord with current standards of care, including appropriate informed consent, is an effective treatment for select severe mental disorders.
Successful acute phase antidepressant pharmacotherapy or ECT should almost always be followed by at least 6 months of continued treatment (Prien & Kupfer, 1986; Depression Guideline Panel, 1993; Rudorfer et al., 1997). During this phase, known as the continuation phase, most patients are seen biweekly or monthly. The primary goal of continuation pharmacotherapy is to prevent relapse (i.e., an exacerbation of symptoms sufficient to meet syndromal criteria). Continuation pharmacotherapy reduces the risk of relapse from 40-60 percent to 10-20 percent (Prien & Kupfer, 1986; Thase, 1993). Relapse despite continuation pharmacotherapy might suggest either nonadherence (Myers & Branthwaithe, 1992) or loss of a placebo response (Quitkin et al., 1993a).
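These figures can be recast, as a rough illustration only, in terms of the number needed to treat (NNT), the reciprocal of the absolute risk reduction. Taking the midpoints of the ranges above:

\mathrm{NNT} = \frac{1}{\mathrm{ARR}} \approx \frac{1}{0.50 - 0.15} \approx 3,

that is, on the order of one relapse prevented for every three patients who receive continuation pharmacotherapy. The exact value depends on which ends of the reported ranges apply to a given population.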
A second goal of continuation pharmacotherapy is consolidation of a response into a complete remission and subsequent recovery (i.e., 6 months of sustained remission). A remission is defined as a complete resolution of affective symptoms to a level similar to that of healthy people (Frank et al., 1991a). Because residual symptoms are associated with increased relapse risk (Keller et al., 1992; Thase et al., 1992), recovery should be achieved before antidepressant pharmacotherapy is withdrawn.
Many psychotherapists similarly taper a successful course of treatment by spacing the final several sessions at wider intervals (every other week or monthly) prior to termination. There is some evidence, albeit weak, that relapse is less common following successful treatment with one type of psychotherapy—cognitive-behavioral therapy—than with antidepressants (Kovacs et al., 1981; Blackburn et al., 1986; Simons et al., 1986; Evans et al., 1992). If confirmed, this advantage may offset the greater short-term costs of psychotherapy.
Maintenance pharmacotherapy is intended to prevent future recurrences of mood disorders (Kupfer, 1991; Thase, 1993; Prien & Kocsis, 1995). A recurrence is viewed as a new episode of illness, in contrast to relapse, which represents reactivation of the index episode (Frank et al., 1991a). Maintenance pharmacotherapy is typically recommended for individuals with a history of three or more depressive episodes, chronic depression, or bipolar disorder (Kupfer, 1991; Thase, 1993; Prien & Kocsis, 1995). Maintenance pharmacotherapy, which may extend for years, typically requires monthly or quarterly visits.
Longer term psychotherapy to prevent recurrences has not been studied extensively. However, in one study of patients with highly recurrent depression, monthly sessions of interpersonal psychotherapy were significantly more effective than placebo but less effective than pharmacotherapy (Frank et al., 1991a).
This section describes specific types of pharmacotherapies and psychosocial therapies for episodes of depression and mania. Treatment generally targets symptom patterns rather than specific disorders. Differences in the treatment strategy for unipolar and bipolar depression are described where relevant.
Antidepressant medications are effective across the full range of severity of major depressive episodes in major depressive disorder and bipolar disorder (American Psychiatric Association, 1993; Depression Guideline Panel, 1993; Frank et al., 1993). The degree of effectiveness, however, varies according to the intensity of the depressive episode. With mild depressive episodes, the overall response rate is about 70 percent, including a placebo rate of about 60 percent (Thase & Howland, 1995). With severe depressive episodes, the overall response rate is much lower, as is the placebo rate. For example, with psychotic depression, the overall response rate to any one drug is only about 20 to 40 percent (Spiker, 1985), including a placebo response rate of less than 10 percent (Spiker & Kupfer, 1988; Schatzberg & Rothschild, 1992). Psychotic depression is treated with either an antidepressant/antipsychotic combination or ECT (Spiker, 1985; Schatzberg & Rothschild, 1992).
There are four major classes of antidepressant medications. The tricyclic and heterocyclic antidepressants (TCAs and HCAs) are named for their chemical structure, whereas the monoamine oxidase inhibitors (MAOIs) and selective serotonin reuptake inhibitors (SSRIs) are classified by their initial neurochemical effects. In general, MAOIs and SSRIs increase the level of a target neurotransmitter by two distinct mechanisms. But, as discussed below, these classes of medications have many other effects. They also have some differential effects depending on the race or ethnicity of the patient.
The mode of action of antidepressants is complex and only partly understood. Put simply, most antidepressants are designed to heighten the level of a target neurotransmitter at the neuronal synapse. This can be accomplished by one or more of the following therapeutic actions: boosting the neurotransmitter’s synthesis, blocking its degradation, preventing its reuptake from the synapse into the presynaptic neuron, or mimicking its binding to postsynaptic receptors. To make matters more complicated, many antidepressant drugs affect more than one neurotransmitter. Explaining how any one drug alleviates depression probably entails multiple therapeutic actions, direct and indirect, on more than one neurotransmitter system (Feighner, 1999).
Selection of a particular antidepressant for a particular patient depends upon the patient’s past treatment history, the likelihood of side effects, safety in overdose, and expense (Depression Guideline Panel, 1993). A vast majority of U.S. psychiatrists favor the SSRIs as “first-line” medications (Olfson & Klerman, 1993). These agents are viewed more favorably than the TCAs because of their ease of use, more manageable side effects, and safety in overdose (Kapur et al., 1992; Preskorn & Burke, 1992). Perhaps the major drawback of the SSRIs is their expense: they are only available as name brands (until 2002 when they begin to come off patent). At minimum, SSRI therapy costs about $80 per month (Burke et al., 1994), and patients taking higher doses face proportionally greater costs.
Four SSRIs have been approved by the FDA for treatment of depression: fluoxetine, sertraline, paroxetine, and citalopram. A fifth SSRI, fluvoxamine, is approved for treatment of obsessive-compulsive disorder, yet is used off-label for depression.11 There are few compelling reasons to pick one SSRI over another for treatment of uncomplicated major depression, because they are more similar than different (Aguglia et al., 1993; Schone & Ludwig, 1993; Tignol, 1993; Preskorn, 1995). There are, however, several distinguishing pharmacokinetic differences among the SSRIs, including elimination half-life (the time it takes for the plasma level of the drug to decrease 50 percent from steady state), propensity for drug-drug interactions (e.g., via inhibition of hepatic enzymes), and antidepressant activity of metabolite(s) (DeVane, 1992). In general, SSRIs tend to be metabolized more slowly by African Americans and Asians, resulting in higher blood levels (Lin et al., 1997).
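Because elimination of these drugs approximately follows first-order kinetics, half-life governs both washout after discontinuation and accumulation toward steady state. As a generic illustration, not specific to any one SSRI: with elimination rate constant k, plasma concentration decays as

C(t) = C_{0} e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k},

so a drug with a 24-hour half-life falls to about 6 percent of its initial level after 4 days (0.5^{4} \approx 0.06) and, conversely, reaches about 94 percent of steady state after 4 days of regular dosing. Slower metabolism effectively lengthens the half-life and raises steady-state blood levels accordingly.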
The SSRIs as a class of drugs have their own class-specific side effects, including nausea, diarrhea, headache, tremor, daytime sedation, failure to achieve orgasm, nervousness, and insomnia. Attrition from acute phase therapy because of side effects is typically 10 to 20 percent (Preskorn & Burke, 1992). The incidence of treatment-related suicidal thoughts for the SSRIs is low and comparable to the rate observed for other antidepressants (Beasley et al., 1991; Fava & Rosenbaum, 1991), despite reports to the contrary (Breggin & Breggin, 1994).
Some concern persists that the SSRIs are less effective than the TCAs for treatment of severe depressions, including melancholic and psychotic subtypes (Potter et al., 1991; Nelson, 1994). Yet there is no definitive answer (Danish University Anti-depressant Group, 1986, 1990; Pande & Sayler, 1993; Roose et al., 1994; Stuppaeck et al., 1994).
Side effects and potential lethality in overdose are the major drawbacks of the TCAs. An overdose of as little as a 7-day supply of a TCA can result in potentially fatal cardiac arrhythmias (Kapur et al., 1992). TCA treatment is typically initiated at lower dosages and titrated upward with careful attention to response and side effects. Doses for African Americans and Asians should be monitored more closely, because their slower metabolism of TCAs can lead to higher blood concentrations (Lin et al., 1997). Similarly, studies suggest that there may be gender differences in drug metabolism and that plasma levels may change over the course of the menstrual cycle (Blumenthal, 1994b).
In addition to the four major classes of antidepressants are bupropion, which is discussed below, and three newer FDA-approved antidepressants that have mixed or compound synaptic effects. Venlafaxine, the first of these newer antidepressants, inhibits reuptake of both serotonin and, at higher doses, norepinephrine. In contrast to the TCAs, venlafaxine has somewhat milder side effects (Bolden-Watson & Richelson, 1993), which are like those of the SSRIs. Venlafaxine also has a low risk of cardiotoxicity and, although experience is limited, appears to be less toxic than the others in overdose. Venlafaxine has shown promise in treatment of severe (Guelfi et al., 1995) or refractory (Nierenberg et al., 1994) depressive states and was superior to fluoxetine in one inpatient study (Clerc et al., 1994). Venlafaxine occasionally causes increased blood pressure, which can be a particular concern at higher doses (Thase, 1998).
Nefazodone, the second newer antidepressant, is unique in terms of both structure and neurochemical effects (Taylor et al., 1995). In contrast to the SSRIs, nefazodone improves sleep efficiency (Armitage et al., 1994). Its side effect profile is comparable to that of the other newer antidepressants, but it has the advantage of a lower rate of sexual side effects (Preskorn, 1995). The more recently FDA-approved antidepressant, mirtazapine, blocks two types of serotonin receptors, the 5-HT2 and 5-HT3 receptors (Feighner, 1999). Mirtazapine is also a potent antihistamine and tends to be more sedating than most other newer antidepressants. Weight gain can be another troublesome side effect.
Figure 4-2 presents summary findings on newer pharmacotherapies from a recent review of the treatment of depression by the Agency for Healthcare Research and Quality (AHRQ, 1999). There have been few studies of gender differences in clinical response to treatments for depression. A recent report (Kornstein et al., in press) found that women with chronic depression responded better to an SSRI than to a tricyclic, whereas the opposite was true for men; this effect was seen primarily in premenopausal women. The AHRQ report (1999) also noted that there were almost no data addressing the efficacy of pharmacotherapies in pregnant or postpartum women.
Regardless of the initial choice of pharmacotherapy, about 30 to 50 percent of patients do not respond to the initial medication. It has not been established firmly whether patients who respond poorly to one class of antidepressants should be switched automatically to an alternate class (Thase & Rush, 1997). Several studies have examined the efficacy of the TCAs and SSRIs when used in sequence (Peselow et al., 1989; Beasley et al., 1990). Approximately 30 to 50 percent of those not responsive to one class will respond to the other (Thase & Rush, 1997).
Among other types of antidepressants, the MAOIs and bupropion are important alternatives for SSRI and TCA nonresponders (Thase & Rush, 1995). These agents also may be relatively more effective than TCAs or SSRIs for treatment of depressions characterized by atypical or reversed vegetative symptoms (Goodnick & Extein, 1989; Quitkin et al., 1993b; Thase et al., 1995). Bupropion and the MAOIs also are good choices to treat bipolar depression (Himmelhoch et al., 1991; Thase et al., 1992; Sachs et al., 1994). Bupropion also has the advantage of a low rate of sexual side effects (Gardner & Johnston, 1985; Walker et al., 1993).
Bupropion's efficacy and overall side effect profile might justify its first-line use for all types of depression (e.g., Kiev et al., 1994). Furthermore, bupropion has a novel neurochemical profile in terms of effects on dopamine and norepinephrine (Ascher et al., 1995). However, worries about an increased risk of seizures delayed bupropion's introduction to the U.S. market by more than 5 years (Davidson, 1989). Although clearly effective for a broad range of depressions, the MAOIs have been limited in use for decades by concerns that, when taken with certain foods containing the chemical tyramine (for example, some aged cheeses and red wines), these medications may cause a potentially lethal hypertensive reaction (Thase et al., 1995). There has been continued interest in the development of safer, selective, and reversible MAOIs.
Hypericum (St. John's Wort). Widespread publicity and use of the botanical product from the yellow-flowering Hypericum perforatum plant, with or without medical supervision, have run well ahead of the scientific evidence supporting the effectiveness of this putative antidepressant. Controlled trials, mainly in Germany, have been positive in mild-to-moderate depression, with only mild gastrointestinal side effects reported (Linde et al., 1996). However, most of those studies were methodologically flawed, in areas including diagnosis (samples more similar to adjustment disorder with depressed mood than to major depression), length of trial (often an inadequate 4 weeks), and either lack of placebo control or unusually low or high placebo response rates (Salzman, 1998).
Post-marketing surveillance in Germany, which found few adverse effects of Hypericum, depended upon spontaneous reporting of side effects by patients, an approach that would not be considered acceptable in this country (Deltito & Beyer, 1998). In clinical use, the most commonly noted adverse effect appears to be sensitivity to sunlight.
Figure 4-2. Treatment of depression, newer pharmacotherapies: Summary findings. (*SSRIs and all other antidepressants marketed subsequently.) Source: AHRQ, 1999.
Basic questions about mechanism of action, and even the optimal formulation of a pharmaceutical product from the plant, remain; dosage in the randomized German trials varied sixfold (Linde et al., 1996). Several pharmacologically active components of St. John's wort, including hypericin, have been identified (Nathan, 1999); although their long half-lives in theory should permit once-daily dosing, in practice a schedule of 300 mg three times a day is most commonly used. While initial speculation about significant MAO-inhibiting properties of hypericum has been largely discounted, possible serotonergic mechanisms suggest that combining this agent with an SSRI or other serotonergic antidepressant should be approached with caution. However, data regarding the safety of hypericum in preclinical models or clinical samples are few (Nathan, 1999). At least two placebo-controlled trials in the United States are under way to compare the efficacy of Hypericum with that of an SSRI.
The transition from one antidepressant to another is time consuming, and patients sometimes feel worse in the process (Thase & Rush, 1997). Many clinicians bypass these problems by using a second medication to augment an ineffective antidepressant. The best studied strategies of this type are lithium augmentation, thyroid augmentation, and TCA-SSRI combinations (Nierenberg & White, 1990; Thase & Rush, 1997; Crismon et al., 1999).
Increasingly, clinicians are adding a noradrenergic TCA to an ineffective SSRI or vice versa. In an earlier era, such polypharmacy (the prescription of multiple drugs at the same time) was frowned upon. Thus far, the evidence supporting TCA-SSRI combinations is not conclusive (Thase & Rush, 1995). Caution is needed when using these agents in combination because SSRIs inhibit metabolism of several TCAs, resulting in a substantial increase in blood levels and toxicity or other adverse side effects from TCAs (Preskorn & Burke, 1992).
Many people prefer psychotherapy or counseling over medication for treatment of depression (Roper, 1986; Seligman, 1995). Research conducted in the past two decades has helped to establish several newer forms of time-limited psychotherapy as being as effective as antidepressant pharmacotherapy in mild-to-moderate depressions (DiMascio et al., 1979; Elkin et al., 1989; Hollon et al., 1992; Depression Guideline Panel, 1993; Thase, 1995; Persons et al., 1996). The newer depression-specific therapies include cognitive-behavioral therapy (Beck et al., 1979) and interpersonal psychotherapy (Klerman et al., 1984). Both share a time-limited format, a present-tense ("here-and-now") focus, and an emphasis on patient education and active collaboration. Interpersonal psychotherapy centers on four common problem areas: role disputes, role transitions, unresolved grief, and social deficits. Cognitive-behavioral therapy takes a more structured approach, emphasizing the interactive nature of thoughts, emotions, and behavior; it also helps the depressed patient learn to improve coping and lessen symptom distress.
There is no evidence that cognitive-behavioral therapy and interpersonal psychotherapy are differentially effective (Elkin et al., 1989; Thase, 1995). As reported earlier, both therapies appear to have some relapse prevention effects, although they are much less studied than the pharmacotherapies. Other more traditional forms of counseling and psychotherapy have not been extensively studied using a randomized clinical trial design (Depression Guideline Panel, 1993). It is important to determine if these more traditional treatments, as commonly practiced, are as effective as interpersonal psychotherapy or cognitive-behavioral therapy.
The brevity of this section reflects the succinctness of the findings on the effectiveness of these interventions, as well as their lack of differential responses and "side effects." It does not reflect a preference for, or the superiority of, medication, except in conditions such as psychotic depression, for which psychotherapies alone are not effective.
Treatment of bipolar depression12 has received surprisingly little study (Zornberg & Pope, 1993). Most psychiatrists prescribe the same antidepressants for treatment of bipolar depression as for major depressive disorder, although evidence is lacking to support this practice. It also is not certain that the same strategies should be used for treatment of depression in bipolar II (i.e., major depression plus a history of hypomania) and bipolar I (i.e., major depression with a history of at least one prior manic episode) (DSM-IV).
Pharmacotherapy of bipolar depression typically begins with lithium or an alternate mood stabilizer (DSM-IV; Frances et al., 1996). Mood stabilizers reduce the risk of cycling and have modest antidepressant effects; response rates of 30 to 50 percent are typical (DSM-IV; Zornberg & Pope, 1993). For bipolar depressions refractory to mood stabilizers, an antidepressant is typically added. Bipolar depression may be more responsive to nonsedating antidepressants, including the MAOIs, SSRIs, and bupropion (Cohn et al., 1989; Haykal & Akiskal, 1990; Himmelhoch et al., 1991; Peet, 1994; Sachs et al., 1994). The optimal length of continuation phase pharmacotherapy of bipolar depression has not been established empirically (DSM-IV). During the continuation phase, the risk of depressive relapse must be counterbalanced against the risk of inducing mania or rapid cycling (Kukopulos et al., 1980; Wehr & Goodwin, 1987; Solomon et al., 1995). Although not all studies are in agreement, antidepressants may increase mood cycling in a vulnerable subgroup, such as women with bipolar II disorder (Coryell et al., 1992; Bauer et al., 1994). Lithium is associated with increased risk of congenital anomalies when taken during the first trimester of pregnancy, and the anticonvulsants are contraindicated (see Cohen et al., 1994, for a review). This is problematic in view of the high risk of recurrence in pregnant women with bipolar disorder (Viguera & Cohen, 1998).
The relative efficacy of pharmacotherapy and the newer forms of psychosocial treatment, such as interpersonal psychotherapy and the cognitive-behavioral therapies, is a controversial topic (Meterissian & Bradwejn, 1989; Klein & Ross, 1993; Munoz et al., 1994; Persons et al., 1996). For major depressive episodes of mild to moderate severity, meta-analyses of randomized clinical trials document the relative equivalence of these treatments (Dobson, 1989; Depression Guideline Panel, 1993). Yet for patients with bipolar and psychotic depression, who were excluded from these studies, pharmacotherapy is required: there is no evidence that these types of depressive episodes can be effectively treated with psychotherapy alone (Depression Guideline Panel, 1993; Thase, 1995). Current standards of practice suggest that therapists who withhold somatic treatments (i.e., pharmacotherapy or ECT) from such patients risk malpractice (DSM-IV; Klerman, 1990; American Psychiatric Association, 1993; Depression Guideline Panel, 1993).
For patients hospitalized with depression, somatic therapies also are considered the standard of care (American Psychiatric Association, 1993). Again, there is little evidence for the efficacy of psychosocial treatments alone when used instead of pharmacotherapy, although several studies suggest that carefully selected inpatients may respond to intensive cognitive-behavioral therapy (DeJong et al., 1986; Thase et al., 1991). However, in an era in which inpatient stays are measured in days, rather than in weeks, this option is seldom feasible. Combined therapies emphasizing both pharmacologic and intensive psychosocial treatments hold greater promise to improve the outcome of hospitalized patients, particularly if inpatient care is followed by ambulatory treatment (Miller et al., 1990; Scott, 1992).
Combined therapies—also called multimodal treatments—are especially valuable for outpatients with severe forms of depression. According to a recent meta-analysis of six studies, combined therapy (cognitive or interpersonal psychotherapy plus pharmacotherapy) was significantly more effective than psychotherapy alone for more severe recurrent depression. In milder depressions, psychotherapy alone was nearly as effective as combined therapy (Thase et al., 1997b). This meta-analysis was unable to compare combined therapy with pharmacotherapy alone or placebo due to an insufficient number of patients.
In summary, the DSM-IV definition of major depressive disorder spans a heterogeneous group of conditions that benefit from psychosocial and/or pharmacological therapies. People with mild to moderate depression respond to psychotherapy or pharmacotherapy alone. People with severe depression require pharmacotherapy or ECT, and they may also benefit from the addition of psychosocial therapy.
Recurrent Depression. Maintenance pharmacotherapy is the best-studied means to reduce the risk of recurrent depression (Prien & Kocsis, 1995; Thase & Sullivan, 1995). The magnitude of effectiveness in prevention of recurrent depressive episodes depends on the dose of the active agent used, the inherent risk of the population (i.e., chronicity, age, and number of prior episodes), the length of time being considered, and the patient’s adherence to the treatment regimen (Thase, 1993). Early studies, which tended to use lower dosages of medications, generally documented a twofold advantage relative to placebo (e.g., 60 vs. 30 percent) (Prien & Kocsis, 1995). In a more recent study of recurrent unipolar depression, the drug-placebo difference was nearly fivefold (Frank et al., 1990; Kupfer et al., 1992). This trial, in contrast to earlier randomized clinical trials, used a much higher dosage of imipramine, suggesting that full-dose maintenance pharmacotherapy may improve prophylaxis. Indeed, this was subsequently confirmed in a randomized clinical trial comparing full- and half-dose maintenance strategies (Frank et al., 1993).
There are few published studies on the prophylactic benefits of long-term pharmacotherapy with SSRIs, bupropion, nefazodone, or venlafaxine. However, available studies uniformly document 1-year efficacy rates of 80 to 90 percent in preventing recurrence of depression (Montgomery et al., 1988; Doogan & Caillard, 1992; Claghorn & Feighner, 1993; Duboff, 1993; Shrivastava et al., 1994; Franchini et al., 1997; Stewart et al., 1998). Thus, maintenance therapy with the newer agents is likely to yield outcomes comparable to the TCAs (Thase & Sullivan, 1995).
How does maintenance pharmacotherapy compare with psychotherapy? In one study of recurrent depression, monthly sessions of maintenance interpersonal psychotherapy had a 3-year success rate of about 35 percent (i.e., a rate falling between those for active and placebo pharmacotherapy) (Frank et al., 1990). Subsequent studies found maintenance interpersonal psychotherapy to be either a powerful or ineffective prophylactic therapy, depending on the patient/treatment match (Kupfer et al., 1990; Frank et al., 1991a; Spanier et al., 1996).
Bipolar Depression. No recent randomized clinical trials have examined prophylaxis against recurrent depression in bipolar disorder. In one older, well-controlled study, recurrence rates of more than 60 percent were observed despite maintenance treatment with lithium, either alone or in combination with imipramine (Shapiro et al., 1989).
Success rates of 80 to 90 percent were once expected with lithium for the acute phase treatment of mania (e.g., Schou, 1989); however, lithium response rates of only 40 to 50 percent are now commonplace (Frances et al., 1996). Most recent studies thus underscore the limitations of lithium in mania (e.g., Gelenberg et al., 1989; Small et al., 1991; Freeman et al., 1992; Bowden et al., 1994). The apparent decline in lithium responsiveness may be partly due to sampling bias (i.e., university hospitals treat more refractory patients), but could also be attributable to factors such as younger age of onset, increased drug abuse comorbidity, or shorter therapeutic trials necessitated by briefer hospital stay (Solomon et al., 1995). The effectiveness of acute phase lithium treatment also is partially dependent on the clinical characteristics of the manic episode: dysphoric/mixed, psychotic, and rapid cycling episodes are less responsive to lithium alone (DSM-IV; Solomon et al., 1995).
A number of other medications initially developed for other indications are increasingly used for lithium-refractory or lithium-intolerant mania. The efficacy of two medications, the anticonvulsants carbamazepine and divalproex sodium, has been documented in randomized clinical trials (e.g., Small et al., 1991; Freeman et al., 1992; Bowden et al., 1994; Keller et al., 1992). Divalproex sodium has received FDA approval for the treatment of mania. The specific mechanisms of action for these agents have not been established, although they may stabilize neuronal membrane systems, including the cyclic adenosine monophosphate second messenger system (Post, 1990). The anticonvulsant medications under investigation for their effectiveness in mania include lamotrigine and gabapentin.
Another newer treatment, verapamil, is a calcium channel blocker initially approved by the FDA for treatment of cardiac arrhythmias and hypertension. Since the mid-1980s, clinical reports and evidence from small randomized clinical trials have suggested that the calcium channel blockers may have antimanic effects (Dubovsky et al., 1986; Garza-Trevino et al., 1992; Janicak et al., 1992, 1998). As with lithium and the anticonvulsants, the mechanism of action of verapamil has not been established. Evidence of abnormalities of intracellular calcium levels in bipolar disorder (Dubovsky et al., 1992), together with calcium's role in modulating second messenger systems (Wachtel, 1990), has spurred continued interest in this class of medication. If effective, verapamil has the additional advantage of a lower potential for causing birth defects than lithium, divalproex, or carbamazepine.
Adjunctive neuroleptics and high-potency benzodiazepines are used often in combination with mood stabilizers to treat mania. The very real risk of tardive dyskinesia has led to a shift in favor of adjunctive use of benzodiazepines instead of neuroleptics for acute stabilization of mania (Chouinard, 1988; Lenox et al., 1992). The novel antipsychotic clozapine has shown promise in otherwise refractory manic states (Suppes et al., 1992), although such treatment requires careful monitoring to help protect against development of agranulocytosis, a potentially lethal bone marrow toxicity. Other newer antipsychotic medications, including risperidone and olanzapine, have safer side effect profiles than clozapine and are now being studied in mania. For manic patients who are not responsive to or tolerant of pharmacotherapy, ECT is a viable alternative (Black et al., 1987; Mukherjee et al., 1994). Further discussion of antipsychotic drugs and their side effects is found in the section on schizophrenia.
The efficacy of lithium for prevention of mania also appears to be significantly lower now than in previous decades; recurrence rates of 40 to 60 percent are now typical despite ongoing lithium therapy (Prien et al., 1984; Gelenberg et al., 1989; Winokur et al., 1993). Still, more than 20 studies document the effectiveness of lithium in preventing suicide (Goodwin & Jamison, 1990). Medication noncompliance almost certainly plays a role in the failure of longer term lithium maintenance therapy (Aagaard et al., 1988). Indeed, abrupt discontinuation of lithium has been shown to accelerate the risk of relapse (Suppes et al., 1993). Medication "holidays" may similarly induce a lithium-refractory state (Post, 1992), although data are conflicting (Coryell et al., 1998). As noted earlier, antidepressant cotherapy also may accelerate cycle frequency or induce lithium-resistant rapid cycling (Kukopulos et al., 1980; Wehr & Goodwin, 1987).
With increasing recognition of the limitations of lithium prophylaxis, the anticonvulsants are used increasingly for maintenance therapy of bipolar disorder. Several randomized clinical trials have demonstrated the prophylactic efficacy of carbamazepine (Placidi et al., 1986; Lerer et al., 1987; Coxhead et al., 1992), whereas the value of divalproex preventive therapy is supported only by uncontrolled studies (Calabrese & Delucchi, 1990; McElroy et al., 1992; Post, 1990). Because of the increased teratogenic risk associated with these agents, there is a need to obtain and evaluate information on alternative interventions for women of childbearing age with bipolar disorder.
The mood disorders are associated with significant suffering and high social costs, as explained above (Broadhead et al., 1990; Greenberg et al., 1993; Wells et al., 1989; Wells et al., 1996). Many treatments are efficacious, yet in the case of depression significant numbers of individuals receive either no care or inappropriate care (Katon et al., 1992; Narrow et al., 1993; Wells et al., 1994; Thase, 1996). Limitations in insurance benefits or in the management strategies employed in managed care arrangements may make it impossible to deliver recommended treatments. In addition, treatment outcome in real-world practice is not as good as that demonstrated in clinical trials, a problem known as the gap between efficacy and effectiveness (see Chapter 2). The gap is greatest in the primary care setting, although it also is observed in specialty mental health practice. Case identification approaches are needed for women in obstetrics/gynecology settings, given the high risk of recurrence among childbearing women with bipolar disorder; yet little attention has been paid to screening and mental health services in these settings, despite women's high risk of depression (Miranda et al., 1998).
Primary care practice has been studied extensively, revealing low rates of both recognition and appropriate treatment of depression. Approximately one-third to one-half of patients with major depression go unrecognized in primary care settings (Gerber et al., 1989; Simon & Von Korff, 1995). Poor recognition leads to unnecessary and expensive diagnostic procedures, particularly in response to patients' vague somatic complaints (Callahan et al., 1996). Fewer than one-half of patients receive antidepressant medication according to AHRQ recommendations for dosage and duration (Simon et al., 1993; Rost et al., 1994; Katon, 1995, 1996; Schulberg et al., 1995; Simon & Von Korff, 1995). About 40 percent discontinue their medication on their own during the first 4 to 6 weeks of treatment, and fewer still continue their medication for the recommended period of 6 months (Simon et al., 1993). Although drug treatment is the most common strategy for treating depression in primary care practice (Olfson & Klerman, 1992; Williams et al., 1999), about one-half of primary care physicians express a preference to include counseling or therapy as a component of treatment (Meredith et al., 1994, 1996). Few primary care practitioners, however, have formal training in psychotherapy, nor do they have the time to provide it (Meredith et al., 1994, 1996). A variety of strategies have been developed to improve the management of depression in primary care settings (cited in Katon et al., 1997). These are discussed in more detail in Chapter 5 because of the special problem of recognizing and treating depression among older adults.
Another major service delivery issue focuses on the substantial number of individuals with mood disorders who go on to develop a chronic and disabling course. Their needs for a wide array of services are similar to those of individuals with schizophrenia. Many of the service delivery issues relevant to individuals with severe and persistent mood disorders are presented in the final sections of this chapter.
Our understanding of schizophrenia has evolved since its symptoms were first catalogued by German psychiatrist Emil Kraepelin in the late 19th century (Andreasen, 1997a). Even though the cause of this disorder remains elusive, its frightening symptoms and biological correlates have come to be quite well defined. Yet misconceptions abound about symptoms: schizophrenia is neither “split personality” nor “multiple personality.” Furthermore, people with schizophrenia are not perpetually incoherent or psychotic (DSM-IV; Mason et al., 1997) (Table 4-6).
Schizophrenia is characterized by profound disruption in cognition and emotion, affecting the most fundamental human attributes: language, thought, perception, affect, and sense of self. The array of symptoms, while wide ranging, frequently includes psychotic manifestations, such as hearing internal voices or experiencing other sensations not connected to an obvious source (hallucinations) and assigning unusual significance or meaning to normal events or holding fixed false personal beliefs (delusions). No single symptom is definitive for diagnosis; rather, the diagnosis encompasses a pattern of signs and symptoms, in conjunction with impaired occupational or social functioning (DSM-IV).
Symptoms are typically divided into positive and negative symptoms (see Table 4-7) because of their impact on diagnosis and treatment (Crow, 1985; Andreasen, 1995; Eaton et al., 1995; Klosterkotter et al., 1995; Maziade et al., 1996). Positive symptoms are those that appear to reflect an excess or distortion of normal functions (Peralta & Cuesta, 1998). The diagnosis of schizophrenia, according to DSM-IV, requires the presence of two or more positive symptoms for at least 1 month, unless hallucinations or delusions are especially bizarre, in which case one alone suffices for diagnosis. Negative symptoms are those that appear to reflect a diminution or loss of normal functions (Roy & DeVriendt, 1994; Crow, 1995; Blanchard et al., 1998). These often persist in the lives of people with schizophrenia during periods of low (or absent) positive symptoms. Negative symptoms are difficult to evaluate because they are not as grossly abnormal as positive ones and may be caused by a variety of other factors as well (e.g., as an adaptation to a persecutory delusion). However, advances in diagnostic assessment tools are being made.
Diagnosis is complicated by early treatment of schizophrenia’s positive symptoms. Antipsychotic medications, particularly the traditional ones, often produce side effects that closely resemble the negative symptoms of affective flattening and avolition. In addition, other negative symptoms are sometimes present in schizophrenia but not often enough to satisfy diagnostic criteria (DSM-IV): loss of usual interests or pleasures (anhedonia); disturbances of sleep and eating; dysphoric mood (depressed, anxious, irritable, or angry mood); and difficulty concentrating or focusing attention.
Currently, discussion is ongoing within the field regarding the need for a third category of symptoms for diagnosis: disorganized symptoms (Brekke et al., 1995; Cuesta & Peralta, 1995). Disorganized symptoms include thought disorder, confusion, disorientation, and memory problems. While they are listed by DSM-IV as common in schizophrenia—especially during exacerbations of positive or negative symptoms (DSM-IV)—they do not yet constitute a formal new category of symptoms. Some researchers think that a new category is not warranted because disorganized symptoms may instead reflect an underlying dysfunction common to several psychotic disorders, rather than being unique to schizophrenia (Toomey et al., 1998).
Table 4-6. DSM-IV diagnostic criteria for schizophrenia
Table 4-7. Positive and negative symptoms of schizophrenia

Positive Symptoms of Schizophrenia

Delusions are firmly held erroneous beliefs due to distortions or exaggerations of reasoning and/or misinterpretations of perceptions or experiences. Delusions of being followed or watched are common, as are beliefs that comments, radio or TV programs, etc., are directing special messages to him/her.

Hallucinations are distortions or exaggerations of perception in any of the senses, although auditory hallucinations (“hearing voices” within, distinct from one’s own thoughts) are the most common, followed by visual hallucinations.

Disorganized speech/thinking, also described as “thought disorder” or “loosening of associations,” is a key aspect of schizophrenia. Disorganized thinking is usually assessed primarily on the basis of the person’s speech. Therefore, tangential, loosely associated, or incoherent speech severe enough to substantially impair effective communication is used as an indicator of thought disorder by the DSM-IV.

Grossly disorganized behavior includes difficulty in goal-directed behavior (leading to difficulties in activities of daily living), unpredictable agitation or silliness, social disinhibition, or behaviors that are bizarre to onlookers. Their purposelessness distinguishes them from unusual behavior prompted by delusional beliefs.

Catatonic behaviors are characterized by a marked decrease in reaction to the immediate surrounding environment, sometimes taking the form of motionlessness and apparent unawareness, rigid or bizarre postures, or aimless excess motor activity.

Other symptoms sometimes present in schizophrenia but not often enough to be definitional alone include affect inappropriate to the situation or stimuli, unusual motor behavior (pacing, rocking), depersonalization, derealization, and somatic preoccupations.

Negative Symptoms of Schizophrenia

Affective flattening is the reduction in the range and intensity of emotional expression, including facial expression, voice tone, eye contact, and body language.

Alogia, or poverty of speech, is the lessening of speech fluency and productivity, thought to reflect slowing or blocked thoughts, and often manifested as laconic, empty replies to questions.

Avolition is the reduction, difficulty, or inability to initiate and persist in goal-directed behavior; it is often mistaken for apparent disinterest.
Recently there has also been more clinical and research attention to the cognitive difficulties that many people with schizophrenia experience (Levin et al., 1989; Harvey et al., 1996). Cognitive problems include difficulties in information processing (Cadenhead et al., 1997), abstract categorization (Keri et al., 1998), planning and regulating goal-directed behavior (“executive functions”), cognitive flexibility, attention, memory, and visual processing (Cornblatt & Keilp, 1994; Mahurin et al., 1998). These cognitive problems are especially associated with negative and disorganized symptoms but seem to be distinct from them (Basso et al., 1998; Brekke et al., 1995; Cuesta & Peralta, 1995; Voruganti et al., 1997), although others disagree (Roy & DeVriendt, 1994).
These cognitive problems vary from person to person and can change over time. In some situations it is unclear whether such deficits are due to the illness or to the side effects of certain neuroleptic medications (Zalewski et al., 1998). As research on brain functioning grows more sophisticated, some models posit dysfunction of fundamental cognitive processes at the center of schizophrenia, rather than as one of numerous symptoms (Andreasen, 1997a, 1997b; Andreasen et al., 1996). On the basis of neuropsychological and neuroanatomical data, for example, some researchers posit that schizophrenia is a disorder of the prefrontal cortex and its ability to perform the essential cognitive function of working memory (Goldman-Rakic & Selemon, 1997). Problems in such fundamental areas as paying selective attention, problem-solving, and remembering can cause serious difficulties in learning new skills (social interaction, treatment and rehabilitation) and performing daily tasks (Medalia et al., 1998); treatment of such deficits is discussed in later sections of the chapter.
The criteria for a diagnosis of schizophrenia include functional impairment in addition to the constellation of symptoms outlined above. For formal diagnosis, a person must be experiencing significant dysfunction in one or more major areas of life activities such as interpersonal relations, work or education, family life, communication, or self-care (Docherty et al., 1996; Patterson et al., 1997, 1998). These problems result from the complex of symptoms and their sequelae, but have been linked more to negative than to positive symptoms (Ho et al., 1998). They have serious economic, social, and psychological effects: unemployment, disrupted education, limited social relationships, isolation, legal involvement, family stress, and substance abuse. Such sequelae form the most distressing aspects of the illness for many people and contribute to the increased risk of suicide among people diagnosed with schizophrenia.
On first consideration, symptoms like hallucinations, delusions, and bizarre behavior seem easily defined and clearly pathological. However, increased attention to cultural variation has made it very clear that what is considered delusional in one culture may be accepted as normal in another (Lu et al., 1995). For example, among members of some cultural groups, “visions” or “voices” of religious figures are part of normal religious experience. In many communities, “seeing” or being “visited” by a recently deceased person are not unusual among family members. Therefore, labeling an experience as pathological or a psychiatric symptom can be a subtle process for the clinician with a different cultural or ethnic background from the patient; indeed, cultural variations and nuances may occur within the diverse subpopulations of a single racial, ethnic, or cultural group. Often, however, clinicians’ training, skills, and views tend to reflect their own social and cultural influences.
Unless they become culturally competent (see Chapter 2 and Center for Mental Health Services [CMHS], 1997), clinicians can misinterpret and misdiagnose patients whose cognitive style, norms of emotional expression, and social behavior come from a different culture. For example, clinicians unaware of the norms of cultural groups other than their own may misinterpret a client’s deferential avoidance of direct eye contact as a sign of withdrawal or paranoia, or a normal emotional reserve as flattened affect. There is some empirical evidence that such misinterpretations are widespread. One finding is that African American patients are more likely than white patients to be diagnosed with severe psychotic disorders in clinical settings (Snowden & Cheung, 1990; Hu et al., 1991; Lawson et al., 1994; Strakowski et al., 1995). The overdiagnosis of psychotic disorders among African Americans is interpreted by some as evidence of clinician bias.
People with differing cultural backgrounds also may experience and exhibit true schizophrenia symptoms differently (Brekke & Barrio, 1997; Thakker & Ward, 1998). Culture shapes the content and form of positive and negative symptoms (Maslowski et al., 1998). For example, catatonic behavior is reported much more commonly among psychiatric patients in non-Western countries than in the United States. How culture, societal conditions, and the diagnosing tendencies of clinicians in various countries interact to create these differences is being studied but is not yet well understood.
No description of symptoms can adequately convey a person’s experience of schizophrenia or other serious mental illness. Two individuals with very different internal experiences and outward presentations may be diagnosed with schizophrenia, if both meet the diagnostic criteria (Brazo & Dollfus, 1997; Kirkpatrick et al., 1998). Additionally, their symptoms and presentation may vary considerably over time (Ribeyre & Dollfus, 1996). This considerable variation (Basso et al., 1997; Sperling et al., 1997) has led to the naming of several subtypes of schizophrenia, depending on what symptoms are most prominent. Currently these are seen as variations within a single disorder. Similarly, the diagnosis is often difficult because other mental disorders share some common features. Diagnosis depends on the details of how people behave and what they report during an evaluation, the diagnostician, and variations in the illness over time. Therefore, many people receive more than one diagnostic label over the course of their involvement with mental health services. Refining the definition of schizophrenia and other serious mental illnesses to account for these individual and cultural variations remains a challenge to researchers and clinicians.
Studies of schizophrenia’s prevalence in the general population vary depending on the way diagnostic criteria are applied and the population, setting, and method of study (Hafner & an der Heiden, 1997). In general, 1-year prevalence in adults ages 18 to 54 is estimated to be 1.3 percent (Table 4-1). Onset generally occurs during young adulthood (mid-20s for men, late-20s for women), although earlier and later onset do occur. It may be abrupt or gradual, but most people experience some early signs, such as increasing social withdrawal, loss of interests, unusual behavior, or decreases in functioning prior to the beginning of active positive symptoms. These are often the first behaviors to worry family members and friends.
The mortality rate in persons with schizophrenia is significantly higher than that of the general population. While an elevated suicide rate accounts for some of the excess mortality—and is a serious problem in its own right—comorbid somatic illnesses also contribute. Until recently, there was little information on the prevalence of comorbid medical illnesses in people with schizophrenia (Jeste et al., 1996). A recent study was among the first to document systematically that people with schizophrenia are beset by vision and dental problems, as well as by high blood pressure, diabetes, and sexually transmitted diseases. Their self-reported lifetime rates of high blood pressure (34.1 percent), diabetes (14.9 percent), and sexually transmitted diseases (10.0 percent) are higher than those for people of similar age in the general population (Dixon et al., 1999; Dixon et al., in press-a). The reasons for excess medical comorbidity are unclear, yet medical comorbidity is independently associated with lower perceived physical health status, more severe psychosis and depression, and a greater likelihood of a history of a suicide attempt (Dixon et al., 1999). These findings have important implications for improving patient management (Dixon et al., in press-b).
It is difficult to study the course of schizophrenia and other serious mental illnesses because of the changing nature of diagnosis, treatment, and social norms (Schultz et al., 1997). Overall, research indicates that schizophrenia’s course over time varies considerably from person to person (DSM-IV; Wiersma et al., 1998) and varies for any one person (Moller & von Zerssen, 1995). The variability may emanate from the underlying heterogeneity of the disease process itself, as well as from biological and genetic vulnerability, neurocognitive impairments, sociocultural stressors, and personal and social factors that confer protection against stress and vulnerability (Liberman et al., 1980; Nuechterlein et al., 1994). Most individuals experience periods of symptom exacerbation and remission, while others maintain a steady level of symptoms and disability which can range from moderate to severe (Wiersma et al., 1998).
Most people experience at least one, often more, relapse after their first actively psychotic episode (Herz & Melville, 1980; Falloon, 1984; Gaebel et al., 1993; Wiersma et al., 1998). Often these are periods of more intense positive symptoms, yet the person continues to struggle with negative symptoms in between episodes (Gupta et al., 1997; Schultz et al., 1997). However, whether such exacerbations have the same degree of disabling and distressing effects each time depends greatly on the person’s coping skills and support system. Over time, many people learn successful ways of managing even severe symptoms to moderate their disruptiveness to daily life (e.g., Hamera et al., 1992). Therefore, earlier years with the illness are often more difficult than later ones. Additionally, gradual onset and delays in obtaining treatment seem to raise the risk of longer episodes of acute illness over time (Wiersma et al., 1998). Early treatment with antipsychotic medications has been found to predict better long-term outcomes for people experiencing their first psychotic episode, as compared with a variety of control groups, including those in more advanced stages (Lieberman et al., 1996; Wyatt et al., 1997, 1998; Wyatt & Henter, 1998).
The course of schizophrenia is also influenced by personal orientation and motivation, and by supports in the form of skill-building assistance and rehabilitation (Lieberman et al., 1996; Awad et al., 1997; Hafner & an der Heiden, 1997). These, in turn, are heavily influenced by regional, cultural, and socioeconomic factors in addition to individual factors (Dassori et al., 1995).
Family factors also are related to the course of illness. Following hospitalization, patients who return home are more likely to relapse if their family is identified as critical, hostile, or emotionally overinvolved than if their family is not so identified (Jenkins & Karno, 1992; Bebbington & Kuipers, 1994). This is a controversial finding because it appears to blame family members (Hatfield et al., 1987). However, recent studies suggest an interaction between families and the patient (Goldstein, 1995b): the negative emotions of some family members may be a reaction to, more than a cause of, relapse in the family member. Blaming either the family or the patient overlooks important ways the two parties interact and how such interactions are associated with the course of schizophrenia. In addition, there is a need to examine what role families’ prosocial functioning (family warmth and family support) plays in the course of schizophrenia, to identify how family factors can serve as protective factors (Lopez et al., in press).
Despite the variability, some generalizations about the long-term course of schizophrenia are possible largely on the basis of longitudinal research. A small percentage (10 percent or so) of patients seem to remain severely ill over long periods of time (Jablensky et al., 1992; Gerbaldo et al., 1995). Most do not return to their prior state of mental function. Yet several long-term studies reveal that about one-half to two-thirds of people with schizophrenia significantly improve or recover, some completely (for a review see Harding et al., 1992). These studies were important because they began to dispel the traditional view, dating back to the 19th century, that schizophrenia had a uniformly downhill course (Harding et al., 1992). Several other longitudinal studies, however, found less favorable patient outcomes with other cohorts of patients (Harrow et al., 1997). The differences in outcomes between the studies are thought to be explained on the basis of differences in patient age, length of followup, expectations about prognosis, and types of services received (Harrow et al., 1997).
The importance of a rehabilitation focus in shaping patient outcome was supported by one of the few direct comparisons between patient cohorts. The Vermont cohort consisted of the most severely affected patients from the “back wards” of the state hospital (Harding et al., 1987). As part of a statewide program of deinstitutionalization, the cohort was released in the 1950s to a hospital-based rehabilitation program and then to what was at the time an innovative, broad-based community rehabilitation program, which incorporated social, residential, and vocational components.13 Patients’ degree of recovery at followup after three decades was measured by global functional improvement and other functional measures. One-half to two-thirds of the Vermont cohort significantly improved or recovered (Harding et al., 1987). The receipt of community-based rehabilitation was considered key to their recovery on the basis of a study comparing their progress with that of a matched cohort of deinstitutionalized patients from Maine. The Maine cohort, which received more traditional aftercare services without a rehabilitation emphasis, did not function as well (DeSisto et al., 1995a, 1995b). The findings from the Vermont cohort, as well as those from a cohort in Switzerland (Ciompi, 1980), are widely cited by consumers as evidence of recovery from mental illness, a topic discussed in detail in Chapter 2. It bears noting, however, that patients in the Vermont cohort were diagnosed under a less rigorously defined conceptualization of schizophrenia than is common today, which may account, in part, for the more favorable outcomes.
In summary, schizophrenia does not follow a single pathway. Rather, like other mental and somatic disorders, course and recovery are determined by a constellation of biological, psychological, and sociocultural factors. That different degrees of recovery are attainable has offered hope to consumers and families.
There appear to be gender differences in the course and prognosis of schizophrenia. Women are more likely than men to experience later onset, more pronounced mood symptoms, and better prognosis (DSM-IV), although the prognosis difference recently has come under question.
Current research (e.g., Hafner & an der Heiden, 1997; Hafner et al., 1998) suggests that some of the apparent gender differences in course and outcome occur because for some women schizophrenia does not develop until after menopause. This delay is thought to be related to the protective effects of estrogen, the levels of which diminish at menopause. According to this line of reasoning, men have no such delay because they lack the protective estrogen levels. Therefore, a higher proportion of men develop schizophrenia earlier.
Generally, early onset (younger than age 25 in most studies) is associated with more gradual development of symptoms, more prominent negative symptoms across the course (DSM-IV), and more neuropsychological problems (Basso et al., 1997; Symonds et al., 1997), regardless of gender. Early onset also usually involves more disruption of adult milestones, such as education, employment achievements, and long-term social relationships (Nowotny et al., 1996). People with later onset often have reached these milestones, cushioning them from disruptive sequelae and enabling better coping with symptoms (Hafner et al., 1998). Therefore, early onset (more men than women) often yields a more difficult first several years, although not necessarily a worse long-term outcome.
However, it must be emphasized that group probabilities do not necessarily speak to individual cases.
The cause of schizophrenia has not yet been determined, although research points to the interaction of genetic endowment and major environmental upheaval during development of the brain. This section first discusses genetic studies and then turns to the evidence for neurodevelopmental disruption. These lines of research are beginning to converge: neurodevelopmental disruption may be the result of genetic and/or environmental stressors early in development, leading to subtle alterations in the brain. Furthermore, environmental factors later in development can either exacerbate or ameliorate expression of genetic or neurodevelopmental defects. The overarching message is that the onset and course of schizophrenia are most likely the result of an interaction between genetic and environmental influences.
Family, twin, and adoption studies support the role of genetic influences in schizophrenia (Kendler & Diehl, 1993; McGuffin et al., 1995; Portin & Alanen, 1997). Immediate biological relatives of people with schizophrenia have about 10 times greater risk than that of the general population. Given prevalence estimates, this translates into a 5 to 10 percent lifetime risk for first-degree relatives (including children and siblings) and suggests a substantial genetic component to schizophrenia (e.g., Kety, 1987; Tsuang et al., 1991; Cannon et al., 1998). A genetic role is also bolstered by findings that the identical twin of a person with schizophrenia is at greater risk than a sibling or fraternal twin, and that adoptive relatives do not share the increased risk of biological relatives (see Figure 4-3). However, in about 40 percent of identical twin pairs in which one twin is diagnosed with schizophrenia, the other never meets the diagnostic criteria. This discordance among identical twins indicates that environmental factors also play a role (DSM-IV).
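The arithmetic behind the familial risk estimate can be made explicit. A minimal worked example, assuming a lifetime general-population risk of roughly 0.5 to 1 percent (a base rate implied by the figures above rather than stated directly in this chapter):

$$ \text{risk}_{\text{first-degree relative}} \approx 10 \times \text{risk}_{\text{general population}} \approx 10 \times (0.5\text{--}1\%) = 5\text{--}10\%. $$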
Current research proposes that schizophrenia is caused by a genetic vulnerability coupled with environmental and psychosocial stressors, the so-called diathesis-stress model (Zubin & Spring, 1977; Russo et al., 1995; Portin & Alanen, 1997). Family studies suggest that people inherit varying levels of genetic vulnerability to schizophrenia, from very low to very high. Whether or not a person develops schizophrenia is partly determined by this vulnerability and partly by the amount and types of stresses the person experiences over time. An analogy can be drawn to diabetes, in which genetic factors (e.g., family history) and behavioral factors (e.g., diet, exercise, stress) interact to determine whether a given person develops the illness. How the interaction works in schizophrenia is unknown but is the subject of ongoing research (Murray et al., 1992; Spaulding, 1997; Jones & Cannon, 1998; van Os & Marcelis, 1998).
Despite the evidence for genetic vulnerability to schizophrenia, scientists have not yet identified the genes responsible (Kendler & Diehl, 1993; Levinson et al., 1998). The current consensus is that multiple genes are responsible (Kendler et al., 1996; Kunugi et al., 1996, 1997; Portin & Alanen, 1997; Straub et al., 1998).
Numerous brain abnormalities have been found in schizophrenia. For example, patients often have enlarged cerebral ventricles (cavities in the brain that contain cerebrospinal fluid), especially the third ventricle (Weinberger, 1987; Schwarzkopf et al., 1991; Woods & Yurgelun-Todd, 1991; Dykes et al., 1992; Lieberman et al., 1993; DeQuardo et al., 1996), and decreased cerebral size (Schwarzkopf et al., 1991; Ward et al., 1996) compared with control groups. Several studies suggest these abnormalities may be more common among men (Nopoulos et al., 1997) and among patients whose families do not have a history of schizophrenia (Schwarzkopf et al., 1991; Vita et al., 1994). There is also some evidence that at least some people with schizophrenia have unusual cortical laterality, with dysfunction localizing to the left hemisphere (Braun et al., 1995). To explain this laterality, some have proposed a prenatal injury or insult at the time of left hemisphere development, which normally lags behind that of the right hemisphere (Bracha, 1991).
Figure 4-3. Risk of developing schizophrenia
The anatomical abnormalities found in different parts of the brain tend to correlate with schizophrenia’s positive symptoms (Barta et al., 1990; Shenton et al., 1992; Bogerts et al., 1993; Wible et al., 1995) and negative symptoms (Buchanan et al., 1993). Positive symptoms are often linked to temporal lobe dysfunction, as shown by imaging studies that measure blood flow and glucose metabolism. Such dysfunction possibly is related to abnormal phospholipid metabolism (Fukuzako et al., 1996). Disorganized speech (taken to reflect disorganized thinking) has been associated with abnormalities in brain regions associated with speech regulation (McGuire et al., 1998). Negative and cognitive symptoms, especially those related to volition and planning, are commonly associated with prefrontal lobe dysfunction (Capleton, 1996; Abbruzzese et al., 1997; Mattson et al., 1997). This is perhaps related to unusual neuronal density (Selemon et al., 1998) and may be more prevalent among patients whose families have a history of schizophrenia than among those whose families do not (Sautter et al., 1995). However, mapping patients’ symptoms onto brain regions is complex and variable. Researchers believe that the dysfunctions lie in brain circuitry rather than in one or two localized areas of the brain (Andreasen et al., 1997, 1998; Wiser et al., 1998).
Excessive levels of the neurotransmitter dopamine have long been implicated in schizophrenia, although it is unclear whether the excess is a primary cause of schizophrenia or a result of a more fundamental dysfunction. More recent evidence implicates much greater complexity in the dysregulation of dopamine and other neurotransmitter systems (Grace, 1991, 1992; Olie & Bayle, 1997). Some of this research ties schizophrenia to certain variations in dopamine receptors (Nakamura et al., 1995; Serretti et al., 1998), while other research focuses on the serotonin system (Inayama et al., 1996). However, it must be emphasized that in many cases it is possible that perturbations in neurotransmitter systems may result from complications of schizophrenia, or its treatment, rather than from its causes (Csernansky & Grace, 1998).
The “stressors” investigated in schizophrenia research include a wide range of biological, environmental, psychological, and social factors. There is consistent evidence that prenatal stressors are associated with increased risk of the child developing schizophrenia in adulthood, although the mechanisms for these associations are unexplained. Some interesting preliminary research suggests risk factors include maternal prenatal poverty (Cohen, 1993), poor nutrition (Susser & Lin, 1992; Susser et al., 1996, 1998), and depression (Jones et al., 1998). Other stressors are exposure to influenza outbreaks (Mednick et al., 1988; Adams et al., 1993; Rantakallio et al., 1997), war zone exposure (van Os & Selten, 1998), and Rh-factor incompatibility (Hollister, 1996). Their variety suggests other stressors might also be risk factors, under the general rubric of “maternal stress.”
Consistent with such stresses, low birth weight and short gestation have been linked to increased risk of later developing schizophrenia (Jones et al., 1998), as have delivery complications (Hultman et al., 1997; Jones & Cannon, 1998) and other early developmental problems (Brixey et al., 1993; Ellenbroek & Cools, 1998; Portin & Alanen, 1998; Preti et al., 1998). Among children, especially infants, viral central nervous system infections may be associated with greater risk (Rantakallio et al., 1997; Iwahashi et al., 1998), possibly explaining links between schizophrenia and being born or raised in crowded conditions (Torrey & Yolken, 1998) or during the flu-prone winter and spring months (Castrogiovanni et al., 1998). However, support for these hypotheses is inconsistent and incomplete (Yolken & Torrey, 1995). In fact, it is possible that the prenatal and obstetric complications associated with schizophrenia reflect already disrupted fetal development, rather than being causal themselves (Lipska & Weinberger, 1997). More generally, across the life span, the chronic stresses of poverty (Cohen, 1993; Saraceno & Barbui, 1997) and some facets of minority social status appear to alter the course of schizophrenia.
Presently, it is unclear whether and how these risks contribute to the diathesis-stress interaction for any one person because specific causes may differ (Onstad et al., 1991; Cardno & Farmer, 1995; Tsuang & Faraone, 1995; Miller, 1996). Although genetic vulnerability is difficult to control, certain other important factors can be addressed with current knowledge. An awareness of stressors that increase the likelihood of genetic vulnerability being actualized supports preventive strategies, such as good prenatal health care and nutrition. Furthermore, since life stresses can exacerbate the course of the illness, access to good quality services and social supports, as well as attention to relapse prevention interventions, can have beneficial effects on longer term outcome (Wiersma et al., 1998).
At the same time, researchers and clinicians are striving to integrate findings concerning both diathesis and stress into models of how schizophrenia develops (Andreasen, 1997b). Not only does brain biology influence behavior and experience, but behavior and experience mold brain biology as well. One promising integrative model is the neurodevelopmental theory of schizophrenia developed by Weinberger and others (Murray & Lewis, 1987; Weinberger, 1987, 1995; Bloom, 1993; Weinberger & Lipska, 1995; Lipska & Weinberger, 1997). It posits that schizophrenia develops from “a subtle defect in cerebral development that disrupts late-maturing, highly evolved neocortical functions, and fully manifests itself years later in adult life” (Lipska & Weinberger, 1997; see also Susser et al., 1998).
The nature of the defect, which has not been identified, may be a product of a pre- or neonatal insult to the brain. Further support for the neurodevelopmental theory comes from abnormalities in brain structure that have long been found in people with schizophrenia. Such findings have been interpreted to reflect abnormal neuronal migration in early development (Jakob & Beckmann, 1986; Arnold et al., 1991; Akbarian et al., 1993; Falkai et al., 1995). Researchers have developed animal models in which early neurodevelopmental dysfunctions manifest as later behavioral and functional deficits (Geyer et al., 1993; Lipska & Weinberger, 1993; Wilkinson et al., 1994; Lipska et al., 1995) and are influenced by genetics (de Kloet et al., 1996; Zaharia et al., 1996). As promising as these theories are, the causes and mechanisms of schizophrenia remain unknown. Nonetheless, research has uncovered several treatments for schizophrenia that are effective in reducing symptoms and functional impairments.
Table 4-8. Selected treatment recommendations, Schizophrenia Patient Outcomes Research Team
Recommendation 1. Antipsychotic medications, other than clozapine, should be used as the first-line treatment to reduce psychotic symptoms for persons experiencing an acute symptom episode of schizophrenia.

Recommendation 2. The dosage of antipsychotic medication for an acute symptom episode should be in the range of 300–1,000 chlorpromazine (CPZ) equivalents per day for a minimum of 6 weeks. Reasons for dosages outside this range should be justified. The minimum effective dose should be used.

Recommendation 8. Persons who experience acute symptom relief with an antipsychotic medication should continue to receive this medication for at least 1 year subsequent to symptom stabilization to reduce the risk of relapse or worsening of positive symptoms.

Recommendation 9. The maintenance dosage of antipsychotic medication should be in the range of 300–600 CPZ equivalents (oral or depot) per day.

Recommendation 12. Depot antipsychotic maintenance therapy should be strongly considered for persons who have difficulty complying with oral medication or who prefer the depot regimen.

Recommendation 23. Individual and group therapies employing well-specified combinations of support, education, and behavioral and cognitive skills training approaches designed to address the specific deficits of persons with schizophrenia should be offered over time to improve functioning and to address other target problems, such as medication noncompliance.

Recommendation 24. Patients who have ongoing contact with their families should be offered a family psychosocial intervention that spans at least 9 months and that provides a combination of education about the illness, family support, crisis intervention, and problem-solving skills training. Such interventions should also be offered to nonfamily members.

Recommendation 27. Selected persons with schizophrenia should be offered vocational services.*

Recommendation 29. Systems of care serving persons with schizophrenia who are high service users should include assertive community treatment (ACT) and assertive case management (ACM) programs.
* Edited Source: Lehman & Steinwachs, 1998a, 1998b.
The treatment of schizophrenia has advanced considerably in recent years. A battery of treatments has become available to ameliorate symptoms, to improve quality of life, and to restore productive lives. Treatment and other service interventions often are linked to the clinical phases of schizophrenia: acute phase, stabilizing phase, stable (or maintenance) phase, and recovery phase. Where possible, this report ties available data to these treatment phases.
Optimal treatment across all phases includes some form of pharmacotherapy with antipsychotic medication, usually combined with a variety of psychosocial interventions. Psychosocial interventions include supportive psychotherapy and family psychoeducational interventions, as well as psychosocial and vocational rehabilitation. The treatment of individuals with schizophrenia who are high service users should be orchestrated by an interdisciplinary treatment team to ensure continuity of services (i.e., assertive community treatment, which is discussed below). Others may benefit from less intensive forms of case management and various self-help and consumer-operated services, described later. It is also important to assist individuals with schizophrenia in meeting their many related needs, such as supported housing, transportation, and general medical care. These are among the 30 pivotal treatment recommendations of the Agency for Healthcare Research and Quality (AHRQ)- and NIMH-sponsored Schizophrenia Patient Outcomes Research Team (PORT), which developed its recommendations on the basis of a comprehensive review of the treatment outcomes literature (Lehman & Steinwachs, 1998a). Table 4-8 contains a distillation of key recommendations.
Although the Schizophrenia PORT study recommendations are grounded in research such as that reviewed in the following paragraphs, it is noteworthy that treatment practices fail to adhere to these recommendations, with conformance generally falling below 50 percent (Lehman & Steinwachs, 1998b). The disturbing gap between knowledge and practice is discussed later in this chapter. Many barriers exist in the transfer of information about treatment and evidence-based practice to clinicians, family members, and service users.
Pharmacotherapies are the most extensively evaluated intervention for schizophrenia. The conventional or older antipsychotic medications (e.g., chlorpromazine, haloperidol, fluphenazine, molindone) and the more recently developed medications (e.g., clozapine, risperidone, olanzapine, quetiapine, sertindole) are used to reduce the positive symptoms of schizophrenia. The newer medications, often called atypical because they have a different mechanism of action than their predecessors, also appear in preliminary studies to be more effective against negative symptoms, to display fewer side effects, and to show promise for treating people for whom older medications are ineffective (Ballus, 1997). Their introduction has created more treatment options for people with schizophrenia and other serious mental illnesses. Although the newer, more broadly effective medications have increased hopes for recovery, they also have resulted in greater treatment complexity for patients and providers alike (Fenton & Kane, 1997).
Conventional antipsychotics have been shown to be highly effective both in treating acute symptom episodes and in long-term maintenance and prevention of relapse (Cole & Davis, 1969; Davis et al., 1989; Kane, 1992). Across many studies, positive symptoms improved in about 70 percent of patients, compared with only 25 percent of those in placebo groups (Kane, 1989; Kane & Marder, 1993). Their common mechanism of action is blockade of dopamine D2 receptors, and their therapeutic effects are presumably due to D2 blockade in the mesolimbic system (Dixon et al., 1995).
For acute symptom episodes, treatment recommendations call for dosages of antipsychotic medication in the range of 300 to 1,000 “chlorpromazine equivalents”14 per day (Lehman & Steinwachs, 1998b). Among patients whose dosage at discharge from inpatient units fell outside this range, minority patients were much more likely than Caucasian patients to be on a higher dose (> 1,000 chlorpromazine equivalents) (Lehman & Steinwachs, 1998b). Such dosing patterns run counter to evidence that a higher proportion of minority patients, because of lower rates of drug metabolism, may require lower doses of antipsychotics.
Dosage studies have found that moderate levels (300 to 750 chlorpromazine equivalents daily for acute episodes, 300 to 600 for maintenance, although many people require less than 300) are more effective for positive symptom reduction over the long run than very high (“loading”), intermittent, or very low doses (Donlon et al., 1978, 1980; Neborsky et al., 1981; Baldessarini et al., 1990; Levinson et al., 1990; Van Putten et al., 1990, 1992; Rifkin et al., 1991). Very low and intermittent dosing substantially increases the risk of relapse, while rapid loading and very high doses greatly increase adverse effects (Davis et al., 1989); medication programs must therefore be tailored to individual needs. On conventional neuroleptics, patients experience symptom reduction over the first 5 to 10 weeks of treatment, with more gradual improvement sometimes continuing for more than double that time (Baldessarini et al., 1990). The older medications are occasionally found to reduce some negative symptoms as well, although existing research cannot establish whether this is a primary effect or secondary to the reduction in positive symptoms (Davis et al., 1989; Cassens et al., 1990).
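For readers unfamiliar with the chlorpromazine-equivalent scale used in these dosing recommendations, the conversion is simple proportional arithmetic. The sketch below is illustrative only: the function and the equivalence table are hypothetical constructs for this example, the factors shown are commonly cited approximations that vary across published sources, and nothing here substitutes for clinical judgment.

```python
# Illustrative sketch of the chlorpromazine (CPZ) equivalent scale.
# The equivalence factors below are rough, commonly cited approximations;
# published values differ by source, and actual dosing is a clinical decision.

# mg of each drug considered roughly equivalent to 100 mg of chlorpromazine
APPROX_MG_PER_100_CPZ = {
    "chlorpromazine": 100.0,
    "haloperidol": 2.0,   # approximate; sources vary
    "fluphenazine": 2.0,  # approximate; sources vary
}

def cpz_equivalents(drug: str, daily_dose_mg: float) -> float:
    """Express a daily antipsychotic dose in chlorpromazine equivalents."""
    return daily_dose_mg / APPROX_MG_PER_100_CPZ[drug] * 100.0

if __name__ == "__main__":
    # 10 mg/day of haloperidol maps to roughly 500 CPZ equivalents,
    # inside the 300-1,000 range recommended for an acute episode.
    print(cpz_equivalents("haloperidol", 10.0))  # 500.0
```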
Apart from their minimal effects on negative symptoms, the greatest problem with conventional neuroleptic medications is their pervasive, uncomfortable, and sometimes disabling and dangerous side effects. The spectrum of side effects is broad (Davis et al., 1989; Casey, 1997), yet the most common and troubling are extrapyramidal effects such as acute dystonia, parkinsonism, and tardive dyskinesia (Chakos et al., 1996; Yuen et al., 1996; Trugman, 1998) and akathisia (Kane, 1985).15 Side effects are evident in an estimated 40 percent of patients, but pinpointing their prevalence is complicated by the vagaries of diagnosis, length of prescription and observation, and variability across individuals and medications. Rare side effects (seizures, paradoxical exacerbation of psychotic symptoms, neuroleptic malignant syndrome) also can be devastating.
Acute dystonia, parkinsonism, dyskinesias, and akathisia are usually treated by lowering the doses of neuroleptics and/or using adjunctive anticholinergic, antiparkinsonian medications (e.g., benztropine). Because these side effects can be mistaken for core psychotic symptoms, the neuroleptic dose is often increased, rather than decreased, exacerbating the side effects. Many other side effects such as attention and vigilance problems, sleepiness, blurry vision, dry mouth, and constipation are worse in the initial weeks of treatment and usually taper off as a person adjusts to the medication. However, the discomfort and disability of the initial weeks are intolerably disruptive to some individuals. Dosages can be individualized to minimize side effects while maximizing benefit.
Efficacy data on the newer antipsychotics indicate that they are as efficacious as the older agents at reducing positive symptoms and carry fewer side effects. They also offer important additional advantages for some who have had treatment-resistant schizophrenia (Kane, 1996, 1997; Vanelle, 1997; van Os et al., 1997; Andersson et al., 1998).
The prototype of the newer medications, clozapine, has been found effective for about 30 to 50 percent of treatment-resistant patients (Kane & Marder, 1993; Lieberman et al., 1994; Buchanan, 1995; Kane & McGlashan, 1995; Kane, 1996), as well as for patients who have responded to previous medications. Clozapine also seems to help secondary depression and anxiety, and perhaps the negative symptoms of schizophrenia (Buchanan, 1995). Clozapine not only has a very low incidence of tardive dyskinesia (Barnes & McPhillips, 1998) but may also show some promise as a treatment for it (Walters et al., 1997). However, the use of clozapine was constrained for many years in the United States because of findings that in about 1 percent of patients it causes a potentially fatal blood condition: agranulocytosis, a loss of the white blood cells that fight infection. Because agranulocytosis is reversible if detected early, frequent (weekly) blood monitoring is critical (Lamarque, 1996; Meltzer, 1997). Although effective safeguards exist, use of clozapine tends to be limited to those who are unresponsive to, or cannot tolerate, other antipsychotics. The Veterans’ Administration sponsored the largest cost-effectiveness study to date of clozapine, comparing it to haloperidol. Studies by Rosenheck and his collaborators (1997, 1998b, 1999) replicated previous findings that clozapine was more effective than haloperidol in treating positive and negative symptoms and had fewer extrapyramidal side effects. In addition to its direct pharmacologic effect, the investigators found that clozapine enhances participation in psychosocial treatments, which augments its overall clinical effectiveness (Rosenheck et al., 1998b). Savings associated with use of clozapine were particularly significant among study participants who had averaged 215 inpatient hospital days in the year prior to the study (Rosenheck et al., 1998b).
Increasing numbers of patients with schizophrenia receive newer agents like risperidone (Smith et al., 1996a; Foster & Goa, 1998), olanzapine (Bymaster et al., 1997), and quetiapine (Wetzel et al., 1995; Gunasekara & Spencer, 1998). They have replaced the older antipsychotics in many cases because they cause fewer side effects at therapeutic levels (Umbricht & Kane, 1995) and do not require clozapine’s close monitoring. Their effects on negative schizophrenia symptoms are currently being evaluated and hold some promise, as do their effects on some cognitive dysfunctions (Gallhofer et al., 1996; Green et al., 1997; Kern et al., 1998). Furthermore, current cost analyses find these newer medications at least cost-neutral and sometimes more cost-effective in the long run than older agents, despite being more expensive per pill (Loebel et al., 1998).
Thus, as a whole, there is evidence that the newer antipsychotics are more clinically advantageous than the older ones due to the combination of their effective treatment of positive (and perhaps negative) symptoms, their treatment of ancillary symptoms such as anxiety and depression, and their more favorable side effect profile (Lieberman, 1993, 1996; Fleischhacker & Hummer, 1997; Shore, 1998). Having fewer side effects generally results in better compliance with the medication, although side effects of the atypicals can include sedation, weight gain, sexual dysfunction, and other dose-related discomforts (Casey, 1997; Hasan & Buckley, 1998). Although the newer agents have less adverse impact on fecundity, so that more women with schizophrenia can conceive, very few data address the impact of treatment on pregnancy and lactation. While it is not clear whether the newer medications directly lessen the functional disabilities that usually accompany schizophrenia, they may improve a person’s quality of life (Lehman, 1996) and responsiveness to psychosocial, rehabilitation, and therapeutic interventions (Buckley, 1997). Effectiveness in real-world settings may be substantially lower than efficacy in clinical trials, possibly due to patient heterogeneity, prescribing practices, and noncompliance (Dixon et al., 1995).
Growing awareness that ethnicity and culture influence patients’ response to medications has catapulted to prominence the field of ethnopharmacology. In the past decade, studies have demonstrated that psychiatric medications interact with patient ethnicity in multiple ways, with response to the same medication and dose varying by patient ethnicity (Frackiewicz et al., 1997). For example, due to racial and ethnic variation in pharmacokinetics, Asians and Hispanics with schizophrenia may require lower doses of antipsychotics than Caucasians to achieve the same blood levels (Collazo et al., 1996; Ramirez, 1996; Ruiz et al., 1996). Pharmacokinetics and pharmacodynamics also vary across other ethnic groups.16 Racial and ethnic variation likely stem from a combination of genetic and psychosocial factors, such as diet and health behaviors (Lin et al., 1995).
At the same time, it is possible that the documented medication differences are the result of underlying biological mechanisms of mental illness related to ethnicity, culture, and gender variations. Additionally, the effects of psychotropic medications may be interpreted differently by culture (Lewis et al., 1980). Although knowledge in these areas is incomplete, it is important to consider cultural patterns in dosing decisions and medication management, as well as risks of side effects and tardive dyskinesia. Furthermore, studies suggest that medication differences among African American people diagnosed with schizophrenia may reflect clinician biases in diagnosis and prescription practices more than differences in medication metabolism or health behaviors alone (Frackiewicz et al., 1997).
Psychosocial treatments are vital complements to medication for individuals with schizophrenia. They help patients maximize functioning and recovery. The PORT treatment recommendations, as noted earlier, stipulate that patients should receive pharmacotherapy in conjunction with supportive psychotherapy, family treatment, psychosocial rehabilitation and skill development, and vocational rehabilitation (Lehman & Steinwachs, 1998a). In the active phase of illness, medication enables patients to be more receptive to psychosocial treatments. During periods of remission, when maintenance medication is still recommended, psychosocial treatments continue to help patients to improve quality of life. Psychosocial treatments assume even greater importance for patients who do not respond to, cannot tolerate, or refuse to take medications. Several decades ago, psychosocial programs were developed that used little or no medication (Mosher, 1999). For a highly selected group of patients at the beginning of their first acute episode of schizophrenia, these programs were reported effective (Mosher & Menn, 1978). Most patients, however, do not meet the selection criteria employed in this study. Few such programs are currently operating (Mosher, 1999), and treatment with antipsychotic medication is recommended in conjunction with psychosocial treatments (Lehman & Steinwachs, 1998a).
Outcomes of individual and group therapies have been studied for people with schizophrenia, although not extensively and not in relation to current managed care practices. Overall, it is clear that individual and group therapies that focus on practical life problems associated with schizophrenia (e.g., life skills training) are superior to psychodynamically oriented therapies (Scott & Dixon, 1995a). Psychodynamically oriented therapies are considered to be potentially harmful; therefore, their use is not recommended (Lehman, 1997). Individual, group, or family therapies that combine support, education, and behavioral and cognitive skills, and that address specific challenges, can help clients cope with their illness and improve their functioning, quality of life, and degree of social integration. However, the optimum length of therapy seems to be longer than that afforded by “brief therapy” (Gunderson et al., 1984; Stanton et al., 1984; Hogarty et al., 1997). Additionally, certain targeted therapeutic interventions may be useful in addressing specific symptoms (Drury et al., 1996; Jensen & Kane, 1996). Certain subgroups of clients appear to find different types of therapy more or less useful than others (Scott & Dixon, 1995a).
Several professionally operated family intervention programs have been developed to help the family member with severe mental illness (e.g., Hogarty et al., 1987; Cazzullo et al., 1989; Mari & Streiner, 1994; McFarlane, 1997). Randomized trials have been conducted for interventions that educate families about schizophrenia, provide support and crisis intervention, and offer training in effective problem solving and communication. These interventions have strongly and consistently demonstrated their value in preventing or delaying symptom relapse and appear to improve the patient’s overall functioning and family well-being (Goldstein et al., 1978; Falloon et al., 1985; Strachan, 1986; Lam, 1991; Tarrier et al., 1994; Goldstein 1995a; Penn & Mueser, 1996). Research has suggested that groups of multiple families are more effective and less expensive than individual family interventions (McFarlane et al., 1995). Incorporating family religious and ethnic background may prove useful in family interventions (Guarnaccia et al., 1992). Family self-help groups are discussed subsequently in this chapter.
Psychosocial skills training strives to teach clients the verbal and nonverbal interpersonal skills and competencies needed to live successfully in community settings. Skills or tasks are divided into small, simple behavioral elements that the client then learns, practices, and puts together. Increasingly, cognitive skill remediation is being added to rehabilitation programs that previously focused on social skills training (Bellack et al., 1989; Bellack & Mueser, 1993; Scott & Dixon, 1995a). As one example of the scope of such programs, the program examined by Liberman and co-workers (1998) focused on four skill areas: medication management, symptom management, recreation for leisure, and basic conversation skills. Each area was addressed through concrete topics; the basic conversation skills module, for example, consisted of active listening skills, initiating conversations, maintaining conversations, terminating conversations, and putting it all together.
The evolution of psychosocial skills training is important yet incomplete. A review in the mid-1990s concluded that its overall impact on social, cognitive, or vocational functioning is modest, and it remains unclear whether these gains are maintained after the training is over and can be used in real-life situations (Scott & Dixon, 1995a). However, a more recent study found greater independent living skills among clients who had received skills training during a 2-year followup of everyday community functioning (Liberman et al., 1998). Several others agree that skills training is effective for specific behavioral outcomes (Marder et al., 1996; Penn & Mueser, 1996). Specific symptom profiles may also influence how effective skills training is for a given person (Kopelowicz et al., 1997). Furthermore, Medalia and coworkers (1998) report recent success adapting cognitive rehabilitation techniques, originally developed for survivors of serious head injuries, for people with schizophrenia, but long-term effects and generalizability have not been determined. This exemplifies both the progress and the need for further refinement of this intervention (Smith et al., 1996b).
In a recent review article, a team of researchers concluded that the most potent rehabilitation programs (1) establish direct, behavioral goals; (2) are oriented to specific effects on related outcomes; (3) focus on long-term interventions; (4) occur within or close to clients’ naturally preferred settings; and (5) combine skills training with an array of social and environmental supports. They also note that most programs do not contain all of these elements, but most are much improved over previous eras (Mueser et al., 1997b).
There are a host of multi-component psychosocial rehabilitation services that combine pharmacologic treatment, independent living and social skills training, psychological support to clients and their families, housing, vocational rehabilitation, social support and network enhancement, and access to leisure activities (World Health Organization [WHO], 1997). These are discussed in the later section on service delivery.
An important goal of recovery and the consumer movement is to enable patients themselves to participate more actively in their own treatment. While complete remission of all symptoms is unlikely for the majority, most patients can and do learn skills and techniques over time that they can use to manage distressing symptoms and the effects of the illness. Often, better skills in coping and in monitoring one’s own health status develop simply through experience. However, the growth of self-help and the development of recovery models for serious mental illnesses have spawned interventions that purposefully teach and encourage active coping on the part of clients and their families. Controlled research is sparse (Penn & Mueser, 1996), except in the area of relapse prevention.
For example, some people find it very useful to pay attention to their own warning signs of relapse or symptom exacerbation, so that additional coping practices, supports, or interventions can be put into place. Norman and Malla (1995) conclude that there is no standardized set of signs that predicts relapse, but that some individuals have reasonably consistent patterns of their own and come to recognize them. Herz and Lamberti (1995) agree that many people experience predictable signs, although whether a relapse occurs depends on many factors besides the signs themselves. Therefore, the risk and magnitude of relapse can be reduced by monitoring early symptoms and intervening when they emerge (Herz & Lamberti, 1995). Watching for such signs is recommended for consumers, family members, and clinicians (Jorgensen, 1998). Specific training programs that teach individuals with schizophrenia to identify the warning signs of relapse and to develop relapse prevention plans have been shown to be effective (Liberman et al., 1998).
Unemployment is pervasive among people with serious and persistent mental illness. Employment is valued highly by the general public and by people with schizophrenia alike because it generates financial independence, social status, contact with other people, structured time and goals, and opportunities for personal achievement and community contribution (Mowbray et al., 1997). These attributes of employment, combined with the self-esteem and personal purpose that it engenders, make vocational rehabilitation a prominent facet of treatment for serious mental illnesses. Vocational rehabilitation is especially important because early adult onset often disrupts education and employment history.
Controlled studies of vocational rehabilitation interventions have shown mixed results (Lehman, 1995, 1998; Cook & Jonikas, 1996). Although such programs do seem to increase work-related activities while people are engaged in them, the gains do not seem to translate into more independent employment once services cease. This has led to the conclusion that ongoing support is needed for many individuals with schizophrenia who wish to work in competitive employment (Wehman, 1988). Recent controlled studies have shown the effectiveness of these newer, so-called supported employment models, which emphasize rapid placement in a real job setting and strong support from a job coach to learn, adapt to, and maintain the position (Drake et al., 1994, 1996; Bond et al., 1997). These models, which are growing in use, strike a dynamic balance between being supportive and being challenging, in order to avoid clients’ dependency and maximize their growth (Mowbray et al., 1997).
As vocational rehabilitation has moved away from sheltered workshops and toward supported employment models, the Americans With Disabilities Act of 1990 has helped to open jobs and educate employers about reasonable accommodations for people with psychiatric disabilities (Mechanic, 1998; Scheid, 1998). Additionally, innovations like client-run and client-owned vocational programs and independent businesses have begun to be developed on a larger scale (Rowland et al., 1993; Miller & Miller, 1997). These innovations are part of a larger movement of consumer involvement in the provision of services for people with mental illness (see Chapter 2).
The organization of services for adults with severe mental disorders is the linchpin of effective treatment. Since many mental disorders are best treated by a constellation of medical and psychosocial services, it is not just the services in isolation, but the delivery system as a whole, that dictates the outcome of treatment (Goldman, 1998b). Access to a delivery system is critical for individuals with severe mental illness not only for treatment of symptoms but also to achieve a measure of community participation.
Among the fundamental elements of effective service delivery are integrated community-based services, continuity of providers and treatments, and culturally sensitive and high-quality, empowering services (Mowbray et al., 1997; Lehman & Steinwachs, 1998a). Effective service delivery also requires support from the social welfare system in the form of housing, job opportunities, welfare, and transportation (Goldman, 1998a), issues that are discussed in the final section of this chapter.
What models of service delivery are most effective? This section strives to answer this question by focusing on models of service delivery for individuals with severe and persistent mental disorders, including severe depression and bipolar disorder, as well as schizophrenia. Although adults with mental illness in midlife confront many service delivery issues—for example, the problem of proper identification and treatment of depression in primary care settings—those who are most disabled by mental disorders encounter special service delivery problems. The focus on the most disabled is warranted for three reasons: (1) Society has a special obligation to those who are most impaired and consequently are the “least well off” (Callahan, 1999; Goldman, 1999; Rosenheck, 1999); (2) the body of research on mental health services delivery for this population is extensive; and (3) existing service systems are seriously deficient.
The deficiency of existing service systems is best documented for individuals with schizophrenia. The majority of people with schizophrenia do not receive the treatment and support they need, according to a groundbreaking finding of PORT (Lehman & Steinwachs, 1998a). PORT, as noted earlier, developed a series of basic treatment recommendations after reviewing hundreds of outcome studies. It proceeded to determine whether these recommendations were being met by examining current patterns of care in two states in the United States.
Among those with severe mental disorders, any number of special populations might have been the focus of this section: people who have both severe mental disorders and HIV/AIDS (Cournos & McKinnon, 1997); who are involved in the criminal justice system (Abram & Teplin, 1991; CMHS, 1995; Lamb & Weinberger, 1998); or who have somatic health problems (Berren et al., 1994; Felker et al., 1996; Brown, 1997). Although some of what follows may be relevant to the unique needs of each of these groups, the evidence base for them is less well developed.
The remainder of this section focuses on case management, assertive community treatment, psychosocial rehabilitation services, inpatient hospitalization and community alternatives for crisis care, and combined treatment for people with the dual diagnosis of substance abuse and severe mental illness.
The purpose of case management is to coordinate service delivery and to ensure continuity and integration of services. Case managers engage in a variety of activities, ranging from simple roles in locating services to more intensive roles in rehabilitation and clinical care. The less intensive models of case management seem to increase clients’ links to, and use of, other mental health services at relatively modest cost. More intensive models also appear to help clients to increase daily-task functioning, residential stability, and independence, and to reduce their hospitalizations (Borland et al., 1989; Mueser et al., 1998a). Overall, models that focus on specific outcomes are more effective than those with global, vaguely defined goals (Attkisson et al., 1992).
More programs are beginning to employ mental health consumers as case managers on their multidisciplinary staffs. Results have been positive, but the programs are challenging to implement and, like all case management programs, require ongoing supervision (Mowbray et al., 1996). In a controlled study, clients served by case management teams that included consumers as peer-specialists displayed greater gains in several areas of quality of life, and greater reductions in major life problems, than did two comparison groups of clients served by case management teams without peer-specialists (Felton et al., 1995). One randomized clinical trial compared case management teams wholly staffed by consumers with case management teams staffed by nonconsumers. The study found, at 1-year and 2-year followups, that clients improved equally well with consumer and nonconsumer case managers (Solomon & Draine, 1995). In this series of studies, the case management teams were part of an intensive program of services known as assertive community treatment.
Assertive community treatment is an intensive approach to the treatment of people with serious mental illnesses that relies on provision of a comprehensive array of services in the community. The model originated in the late 1970s with the Program of Assertive Community Treatment in Madison, Wisconsin (Stein & Test, 1980). Fueled by deinstitutionalization and the vital need for community-based services, a multidisciplinary team serving psychiatric inpatients adapted its role to patients in the community. For this reason, assertive community treatment often is likened to a “hospital without walls.”
The hallmark of assertive community treatment is an interdisciplinary team of usually 10 to 12 professionals, including case managers, a psychiatrist, several nurses and social workers, vocational specialists, and, more recently, substance abuse treatment specialists and peer specialists. Assertive community treatment also possesses these features: 24-hour coverage, 7 days per week; comprehensive treatment planning; ongoing responsibility; staff continuity; and small caseloads, most commonly 1 staff member for every 10 clients (Scott & Dixon, 1995b). Because of the intensity of its services, assertive community treatment is most cost-effective when targeted to individuals with the greatest service need, particularly those with a history of multiple hospitalizations (Scott & Dixon, 1995b; Lehman & Steinwachs, 1998a).
Randomized controlled trials have demonstrated that assertive community treatment and similar models of intensive case management substantially reduce inpatient service use, promote continuity of outpatient care, and increase community tenure and residential stability for people with serious mental illnesses (Stein & Test, 1980; Bond et al., 1995; Lehman, 1998; Mueser et al., 1998a). Among the beneficiaries are homeless individuals and people with co-occurring substance abuse problems and mental disorders. Evidence of effectiveness is weaker for other outcomes (e.g., social integration, employment) and for amelioration of substance abuse problems associated with schizophrenia, particularly when combined treatment is not offered (Mueser et al., 1998b). Assertive community treatment models are generally popular with clients (Stein & Test, 1980) and family members (Flynn, 1998). There also are preliminary results suggesting that employing peer (i.e., consumer) or family outreach workers on multidisciplinary assertive community treatment teams increases positive outcomes (Dixon et al., 1997, 1998) and creates more positive attitudes among team members toward people with mental illnesses.
As noted above, there are a range of multicomponent programs called psychosocial rehabilitation services that are distinct from the single component skills training interventions described in the section on interventions for schizophrenia. These psychosocial rehabilitation programs combine pharmacologic treatment, independent living and social skills training, psychological support to clients and their families, housing, vocational rehabilitation, social support and network enhancement, and access to leisure activities (WHO, 1997). Randomized clinical trials have shown that psychosocial rehabilitation recipients experience fewer and shorter hospitalizations than comparison groups in traditional outpatient treatment (Dincin & Witheridge, 1982; Bell & Ryan, 1984). In addition, recipients are more likely to be employed (Bond & Dincin, 1986). Cook & Jonikas (1996) review the outcomes of a wide range of psychosocial rehabilitation programs, including Fairweather lodges (Fairweather et al., 1969) and psychosocial clubhouses (Dincin, 1975), some of which were demonstrated as effective 20 and 30 years ago but have not been widely implemented.
The role of psychiatric hospitalization has changed greatly over recent decades, stemming from the recognition of poor and occasionally abusive conditions, excessive patient dependency, and patients’ loss of connection to the community (Wing, 1962; Gruenberg, 1974). More recent evolution in hospitalization traces to changes in the financing of care and the introduction of new medications (Appleby et al., 1993; Bezold et al., 1996). Community-based alternatives for crisis care services began to flourish in lieu of hospitalization (Fenton et al., 1998; Mosher, 1999).
The new priorities of psychiatric hospitalization focus on ameliorating the risk of danger to self or others in those circumstances in which dangerous behavior is associated with mental disorder, and on the rapid return of patients to the community (Sederer & Dickey, 1995). Inpatient units are seen as short-term intensive settings to contain and resolve crises that cannot be resolved in the community. For this reason, inpatients are commonly suicidal, homicidal, or decompensating (experiencing the rapid return of severe symptoms) to the degree that they cannot care for themselves or respond to community-based services. Inpatient services therefore emphasize safety measures, crisis intervention, acute medication treatment and reevaluation of ongoing medications, and (re)establishing the client’s links to other supports and services (Sederer & Dickey, 1997).
Mobile crisis services have developed in many urban areas to prevent hospitalization (Zealberg, 1997), as have day hospital programs. With crisis services, a multidisciplinary team comes directly to the aid of the client in the community to provide immediate evaluation and services. This new conceptualization of inpatient care and crisis intervention services minimizes the use of hospital resources; however, well-coordinated teams, sufficient community programs, and ready linkages are not widely available, particularly in rural and frontier areas.
African Americans and Native Americans are overrepresented in psychiatric inpatient units relative to their representation in the population (Snowden & Cheung, 1990; Snowden, in press). Overrepresentation is found in hospitals of all types except private psychiatric hospitals. The reasons for this disparity, while not completely understood, may reflect a mix of limited access to outpatient services, differences in cultural patterns of help-seeking behavior, and overt discriminatory practices. Cost, disinclination to seek help, and lack of community support may contribute to patients’ delaying treatment until symptoms are severe enough to warrant inpatient care. Clinician bias also may be at work. Cultural differences in treatment seeking and treatment utilization are discussed in greater detail in Chapter 2.
As many as half of people with serious mental illnesses develop alcohol or other drug abuse problems at some point in their lives (Mueser et al., 1990; Regier et al., 1993; Drake & Osher, 1997). Theories to explain comorbidity (also known as dual diagnosis) range from genetic to psychosocial, but empirical support for any one theory is inconclusive (Kosten & Ziedonis, 1997; Mueser et al., 1998b). In short, the cause of such widespread comorbidity is unknown.
Comorbidity worsens the clinical course and outcomes for individuals with mental disorders. It is associated with symptom exacerbation, treatment noncompliance, more frequent hospitalization, greater depression and likelihood of suicide, incarceration, family friction, and high service use and cost (Bartels et al., 1995; Mueser et al., 1997a; Bellack & Gearon, 1998; Havassy & Arns, 1998). Furthermore, patients may be jeopardized by the consequences of substance abuse itself, namely, increased risk of violence, HIV infection, and alcohol-related disorders (IOM, 1995).
In light of the extent of mental disorder and substance abuse comorbidity, substance abuse treatment is a critical element of treatment for people with mental disorders. Likewise, treatment of symptoms and signs of mental disorders is a critical element of recovery from substance abuse. Yet decades of treating comorbidity through separate mental health and substance abuse service systems proved ineffective (Ridgely et al., 1990; Mueser et al., 1997a).
Research amassed over the past 10 years supports a shift to treatment that combines interventions directed simultaneously to both conditions—that is, severe mental illness and substance abuse—by the same group of providers (Kosten & Ziedonis, 1997; for an example, see Mowbray et al., 1995), but access to such treatment remains limited. Most successful models of combined treatment include case management, group interventions (such as persuasion groups and social skills training), and assertive outreach to bring people into treatment (Mueser et al., 1997a). They typically take into account the cognitive and motivational deficits that characterize serious mental illnesses (Bellack & Gearon, 1998), although many providers still need to be educated (Kirchner et al., 1998). Combined treatment is effective at engaging people with both diagnoses in outpatient services, maintaining continuity and consistency of care, reducing hospitalization, and decreasing substance abuse, while at the same time improving social functioning (Miner et al., 1997; Mueser et al., 1997a).
Although there is little evidence for any particular approach to combining treatments for comorbidity (Ley et al., 1999), recent research suggests that services incorporating behavioral (motivational) approaches to substance abuse treatment are superior to traditional 12-step approaches (e.g., Alcoholics Anonymous) with this population of clients (Drake et al., 1998). This may be because the more structured behavioral methods better accommodate the cognitive difficulties that accompany schizophrenia. Others, however, find self-help interventions tailored to dual-diagnosis clients quite useful (Vogel et al., 1998). Current research also is seeking to tailor combined treatment to the needs and preferences of specific patient subgroups, such as men, women (Alexander, 1996), people with addiction to multiple substances (as opposed to alcohol addiction alone), and people with histories of physical and psychological trauma (Mueser et al., 1997a).
Comprehensive care for adults with severe and persistent mental disorders also includes ancillary services to deal with such social consequences as family disruption and loss of employment and housing. Ancillary services are those above and beyond symptom management and rehabilitation. They include consumer self-help and advocacy, consumer-operated programs, family self-help and advocacy, and human services. The chapter concludes with a brief review of evidence about integrating the mental health service system and the human services system of which it is part.
A driving force for many of these services is to redress the stigma associated with severe and persistent mental illness. Stereotypes and ignorance are omnipresent (Robert Wood Johnson Foundation, 1990; Wahl et al., 1995). They lead many people to avoid living, socializing, or working with, renting to, or employing people with severe mental disorders (Levey et al., 1995). Stigma reduces consumers’ access to resources and opportunities (e.g., housing, jobs), fuels isolation and hopelessness, and leads to outright discrimination and abuse. Thus, overcoming stigma represents yet another challenge of coping with severe and persistent mental illness and of working toward recovery (Wahl & Harman, 1989; Reidy, 1993).
Self-help groups are geared for mutual support, information, and growth. Self-help is based on the premise that people with a shared condition who come together can help themselves and each other to cope, with the two-way interaction of giving and receiving help considered advantageous. Self-help groups are peer led rather than professionally led.
Organized self-help has a long history, with an estimated 2 to 3 percent of the general population involved in some self-help group at any one time (Borkman, 1991, 1997). Over the past several decades, people with serious mental illnesses have formed mutual assistance organizations to aid each other and to combat stigma. These range from small groups held in a member’s home to freestanding nonprofit organizations with paid staff and a range of programs. In general, however, the self-help empowerment trend does not appear to have reached the African-American, Native American, Hispanic/Latino, and Asian-American populations.
As the number and variety of self-help groups has grown, so too has social science research on their benefits (Borkman, 1991). In general, participation in self-help groups has been found to lessen feelings of isolation, increase practical knowledge, and sustain coping efforts (Powell, 1994; Kurtz, 1997). Similarly, for people with schizophrenia or other mental illnesses, participation in self-help groups increases knowledge and enhances coping (Borkman, 1997; Trainor et al., 1997). Various orientations include replacing self-defeating thoughts and actions with wellness-promoting activities (Murray, 1996), improved vocational involvement (Kaufmann, 1995), social support and shared problem solving (Mowbray & Tan, 1993), and crisis respite (Mead, 1997). Such orientations are thought to contribute greatly to increased coping, empowerment, and realistic hope for the future. Additionally, some groups are tailored to meet the needs of consumers who are members of sexual minority groups, men, or those who also have substance disorders (Noordsy et al., 1996; Vogel et al., 1998).
A number of controlled studies have demonstrated benefits for consumers participating in self-help. One study of the self-help group Recovery, Inc., found that leaders and members who were surveyed retrospectively reported fewer symptoms and fewer hospitalizations after joining the group than before; it also found the leaders’ self-reported psychological well-being to be comparable to that of community controls (Galanter, 1988). In another study, of 115 former mental patients, Luke (1989) found that those who continued to attend self-help meetings at least once per month over a period of 10 months were more likely to show improvement on psychological, interpersonal, or community adjustment measures than those who attended less frequently. Through a case study that included focus groups and interviews, Lieberman and colleagues (1991) found that a consumer-run support group improved members’ self-confidence and self-esteem and led to fewer hospitalizations.
In a survey of mental health self-help group leaders in New York State, respondents identified three positive outcomes that were directly related to their self-help group membership: greater self-esteem, more hopefulness about the future, and a greater sense of well-being. According to survey respondents, all of these positive changes led to fewer hospitalizations (Carpinello & Knight, 1993). A study of six self-help programs in several parts of the United States also reported on consumers’ perceptions of self-help programs (Chamberlin & Rogers, 1990). Although not nationally representative, consumers in this study expressed satisfaction with their self-help program, at which they spent an average of 15 hours per week. They reported that their participation helped them to solve problems and feel more in control of their lives.
Propelled by the growing consumer movement, consumer self-help extends beyond self-help groups. It also encompasses consumer-operated programs, such as drop-in centers, case management programs, outreach programs, businesses, employment and housing programs, and crisis services, among others (Long & Van Tosh, 1988; National Resource Center on Homelessness and Mental Illness, 1989; Van Tosh & del Vecchio, in press). Drop-in centers are places for obtaining social support and assistance with problems, without professionals in attendance. The rationale for consumer roles in service delivery is that consumer staff, clients, and the mental health system can benefit. Consumer staff are thought to gain meaningful work, to serve as role models for clients, and to enhance the sensitivity of the service system to the needs of people with mental disorders. Clients are thought to gain from being served by staff who are more empathic and more capable of engaging them in mental health services (Mowbray et al., 1996).
An appreciation for the potential value of peer support stimulated the Community Support Program of the National Institute of Mental Health to fund local consumer-operated Services Demonstration Projects from 1988 to 1991. These demonstration projects also resulted in the increasing involvement of mental health consumers in the development and provision of peer support, involvement in traditional service roles, evaluation of services, and advocacy. A variety of consumer-operated programs were developed, staffed, and evaluated as states began to fund locally based initiatives (Nikkel et al., 1992; Kaufmann et al., 1993; Mowbray & Tan, 1993). Most evaluations of drop-in centers were in the form of process evaluations that generally found consumers to be satisfied or that programs met their objectives (Kaufmann et al., 1993; Mowbray & Tan, 1993). In 1998, the Federal Center for Mental Health Services initiated a multisite evaluation study of consumer-operated services across the United States.
In addition to ongoing evaluations, there are several published studies of client outcomes with consumer-run programs, although the research base is modest. Several studies, noted earlier, found improved outcomes with consumer self-help programs. Another study evaluated a consumer-run case management program. It compared the effectiveness of a case management program staffed by consumers with a similar program staffed by nonconsumers. Case managers in both programs, which were part of assertive community treatment, performed brokering, assistance, and support functions, rather than clinical management and treatment. The randomized trial found that clients assigned to either case management program fared equally well in clinical, social, and quality of life outcomes (Solomon & Draine, 1995). Recently, peer specialists were added to the recommended staffing for assertive community treatment teams; peer specialists provide expertise and consultation to the entire treatment team (Allness & Knoedler, 1999).
Consumers also may be employed as staff in more traditional mental health services operated by nonconsumer professionals. Consumer positions most commonly include peer counselors, peer job coaches, case managers, staff for drop-in centers, outreach workers, and housing assistants. In a survey of 400 agencies offering supported housing to people with severe mental illness, 38 percent employed mental health consumers as paid staff (Besio & Mahler, 1993). As noted previously, integrating consumers as peer-specialists into case management teams led to improved client outcomes (Felton et al., 1995).
The mental health field has witnessed great changes in policy development, with consumers playing increasingly visible roles in advocacy. Consumer contribution to policy was initially encouraged by Federal laws mandating consumer participation in planning, oversight, and advocacy activities at the state level (Chamberlin & Rogers, 1990; Van Tosh & del Vecchio, in press). With the establishment of state mental health planning councils and local mental health advisory boards and committees, consumers increasingly have become equal partners in a process often reserved for seasoned policymakers. In addition, consumers have become active participants in the process to reform health and mental health care financing. For example, the Managed Care Consortium was formed in 1995 to create educational opportunities for a host of advocacy organizations across the United States. With funding support from the federal Center for Mental Health Services, this consortium encouraged teams to form in each state to influence the design of managed care programs. Consumers also have entered the halls of many public sector bureaucracies, serving in leadership roles in Offices of Consumer Affairs and interfacing with other government departments. In what was once believed to be the last bastion for consumer integration, consumers are now seen as critical stakeholders and valued resources in the policy process.
Consumers also have become advocates in the communities where they live and work. Advocacy enables consumer groups to shape policy at the local level, where a direct impact can be felt. At the local level, advocacy strives to improve access to, or quality of, needed services and to counter employment and housing discrimination. It can also be helpful in mobilizing resources to build and sustain programs. The National Mental Health Association (NMHA, available at http://www.nmha.org), comprising more than 340 affiliates nationwide, works with and supports the efforts of consumers to achieve advocacy goals.
Family members of people with severe mental illnesses also encounter ignorance and stigma. Stigma translates into avoiding or blaming family members (Phelan et al., 1998; Wahl & Harman, 1989). Families also are under a great deal of stress associated with care giving and obtaining resources for their mentally ill members.
Families—especially parents, siblings, adult children, and spouses—often provide housing, food, transportation, encouragement, and practical assistance. At the same time, schizophrenia and other mental disorders strain family ties. Symptoms of mental disorders may be disruptive and troubling, especially when they flare up. Even when there are no problems, living together can be stressful—interpersonally, socially, and economically. Parents and their adult children often perceive mental disorders and treatment differently, sometimes disagreeing about the best course of action.
Consequently, families too have created support organizations. Some of these are professionally based and facilitated, often as part of a clinic or other treatment program. Others are peer run in the self-help model. Similar to self-help among people with mental illnesses, family self-help can range from small supportive groups to large organizations. The National Alliance for the Mentally Ill (NAMI) is the largest such organization. Starting in 1979 in Wisconsin, NAMI now has 208,000 members nationally. It has more than 1,200 local self-help groups (affiliates) across all 50 states (see http://www.nami.org). While still growing, its members include only a small percentage of the family members of people with mental illnesses in the country (Monking, 1994; Heller et al., 1997a).
Family members primarily attend self-help and support groups to receive emotional support and accurate information about mental illness and mental health services (Heller et al., 1997a, 1997b). Participation often leads to better quality of life for the attending family members and also indirectly benefits the member diagnosed as mentally ill (Wahl & Harman, 1989; Monking, 1994). Family self-help groups can result in better communication and interaction among family members (Heller et al., 1997b).
In addition to providing each other with mutual support, families often devote time, energy, and resources to advocacy to improve services and opportunities for their family members with mental disorders. As with consumer advocacy, family advocacy at the local level might include organizing to improve local mental health services or to redress grievances with service providers. At the national level, family groups work to influence legislation and to support research and education initiatives (Wahl & Harman, 1989). Through their advocacy, families have been quite effective in bringing their concerns and perspectives to service providers, legislators, and the public.
The clinical symptoms of schizophrenia and other mental disorders are often disruptive and distressing. Their consequences are no less severe—truncated education, unemployment, social isolation, and exclusion from community participation. Facing multiple life stressors, all severe, with a minimum of resources, people with severe mental illnesses often need a variety of supportive services. Paramount among these are housing, employment and income assistance, and health benefits. Consumers have reported their major needs to include adequate income, meaningful employment, decent and affordable housing, quality health care, and education to increase skills (Ball & Havassy, 1984; Rosnow & Rucker, 1985; Lynch & Kruzich, 1986).
Housing ranks as a priority concern of individuals with serious mental illness. Affordable, decent, safe housing is often difficult to locate and frequently out of financial reach. Stigma and discrimination further restrict consumers’ access to housing. Despite legislation such as the Fair Housing Act, allegations of housing discrimination based on psychiatric disability are highly prevalent (U.S. Department of Education, 1998). Landlords and public housing programs are often unwilling to accept tenants with severe mental disorders. In a survey of parents of mentally ill adults, the dearth of decent and affordable housing was a direct barrier to the person’s moving out of the family home, even when all parties wanted it (Hatfield, 1992).
The actual proportion of people with severe mental illnesses who lack affordable and decent housing has not been assessed directly, yet indirect assessments point to a serious problem. In 1994, the U.S. Department of Housing and Urban Development (HUD) reported that almost half of all very low-income disabled residents—including persons with serious mental illness—have “worst case” needs for housing assistance, and that the majority of these persons live in the most severely inadequate housing (U.S. Department of Housing and Urban Development, 1994; U.S. Department of Education, 1998). It is estimated that up to one in three individuals who experience homelessness has a mental illness (Federal Interagency Task Force on Homelessness and Mental Illness, 1992).
The housing preferences of people with schizophrenia and other serious mental disorders are clear: these individuals strongly desire their own decent living quarters where they have control over who lives with them and how decisions are made (Owen et al., 1996; Schutt & Goldfinger, 1996; Sohng, 1996). In an analysis of 26 consumer preference surveys, Tanzman (1993) found that at least 59 percent of those surveyed wanted independent living in a house or apartment. They also preferred to live alone (or with a spouse or partner), yet not with other people with mental disorders. Most also preferred access to mental health and rehabilitation services to support them where they were living.
When deinstitutionalization led to the need for more community housing, the residential programs that were developed replicated institutional programs (Carling, 1989). Although residential programs varied in the degree of oversight and services, they generally proved to be ineffective in meeting consumers’ needs. Moreover, living in such programs added to stigma. Because of these shortfalls, greater emphasis has been placed on conventional housing supplemented by appropriate assistance tailored to individual need (Srebnik et al., 1995). This new concept, called supported housing, moves away from “placing” clients, grouping clients by disability, staff monopolizing decisionmaking, and use of transitional settings and standardized levels of service (Carling, 1989; Lehman & Newman, 1996). Instead, supported housing focuses on consumers having a permanent home that is integrated socially, is self-chosen, and encourages empowerment and skills development. The services and supports offered are individualized, flexible, and responsive to changing consumer needs. Thus, instead of fitting a person into a housing program “slot,” consumers choose their housing, where they receive support services. The level of support is expected to fluctuate over time. With residents living in conventional housing, some of the stigma attached to group homes and residential treatment programs is avoided.
Although there are no randomized clinical trials to support the effectiveness of the supported housing approach, consumer advocacy and changes in clinical practice affirm the shift to supported housing. In a quasi-experimental study, an evaluation of the Robert Wood Johnson Foundation Program on Chronic Mental Illness demonstrated the feasibility and modest benefits of the supported housing approach using rental subsidies from HUD (Newman et al., 1994). Consumers experienced better mental health and more self-determination when they lived in adequate housing (Nelson et al., 1998). For example, one study found that personal empowerment and functioning were enhanced, and hospitalization reduced, after 5 months in a supported housing program (McCarthy & Nelson, 1991). Also, resident control over decisions was directly related to satisfaction and empowerment (Seilheimer & Doyal, 1996). Similarly, another study found that having greater choice in housing was associated with greater happiness and life satisfaction (Srebnik et al., 1995).
Despite these findings, serious housing problems persist for people with schizophrenia and other mental disorders. Most such individuals are poor and thereby face very limited housing options.
People with severe mental illnesses tend to be poor (Polak & Warner, 1996). Although the reasons are not understood, poverty is a risk factor for some mental disorders, as well as a predictor of poor long-term outcome among people already diagnosed (Cohen, 1993; Rabins et al., 1996; Saraceno & Barbui, 1997). People with serious mental illnesses often become dependent on public assistance shortly after their initial hospitalization (Ho et al., 1997). They rely on government disability-income programs, rent subsidies (Loyd & Tsuang, 1985; Polak & Warner, 1996; Ho et al., 1997), and informal sources of economic support (e.g., living with parents). The unemployment rate among adults with serious and persistent mental disorders hovers at 90 percent (National Institute on Disability and Rehabilitation Research, 1992).
Conversely, adequate standards of living and employment are associated with better clinical outcomes and quality of life (Cohen, 1993; Bell & Lysaker, 1997). In a randomized trial of consumers assigned to paid versus unpaid work, paid employment was found to reduce symptoms of schizophrenia (Bell et al., 1996). Moreover, employer accommodations for those with psychiatric disabilities appear to be inexpensive. The most frequently requested accommodations focus on orientation and training of supervisors, provision of onsite support, and adaptive work schedules. Such accommodations rarely result in significant cost to the employer (Mancuso, 1990; Fabian et al., 1993).
While newer vocational rehabilitation and employment initiatives strive to remedy persistently high levels of unemployment, most consumers find themselves unable to work consistently or at all. This is due not only to active symptoms but also to profound interruptions of education and employment caused by symptom onset and exacerbations, stigma and discrimination, lack of higher education programs for this population, and low-paying menial jobs.
When mental health problems begin during the school-age years, educational systems are often ill prepared. Several studies have identified educational deficits among these consumers, who function in reading and math at levels far below the grades they completed in school (Cook et al., 1987; Cook & Solomon, 1993). Supported education models can provide consumers with assistance in pursuing their education (Cook & Solomon, 1993; Hoffman & Mastrianni, 1993; Ryglewicz & Glynn, 1993). One example is Consumers and Alliances United for Supported Education, a consumer-operated program in Quincy, Massachusetts, that provides a wide range of services to encourage individuals with psychiatric disabilities to enter or reenter college or technical school programs. Services include academic and career counseling, assistance with finding financial aid, training in study skills and stress control, tutoring/coaching, and assistance with crises, including while hospitalized (CMHS, 1996).
Consumers lack control over their financial affairs when benefit checks are given directly to care providers for the person’s housing and other expenses, or to a legally appointed representative payee (if the person has been deemed unable to manage his or her own finances) (Conrad et al., 1998). Consumers who manage their own finances usually face such modest monthly budgets that there is no room for error; funds frequently are depleted before the end of the month. Furthermore, disability payments are sometimes reduced or discontinued when a recipient is working. Because employment is rarely consistent, recipients often need to resume disability benefits, yet once canceled, government disability benefits can be cumbersome to restart. The Social Security Administration has developed new measures to facilitate reactivation of benefits for individuals who return to work, but these measures are not yet widely disseminated. In this way, the requirements of Social Security disability benefits and other such programs often act as disincentives to the pursuit of employment (Polak & Warner, 1996; Priebe et al., 1998).
Some people with serious mental illnesses have adequate income or financial assistance (Ware & Goldfinger, 1997). Some have affluent families who can subsidize their expenses. Others collect pensions because they were not disabled by their illness until after they had a substantial work history. Finally, some have found well-paying positions through a formal rehabilitation program, a community-based educational or vocational training program, or a supportive employer.
Health coverage goes hand in hand with housing and income in determining the standard of living of people with serious psychiatric disabilities. Because of their low incomes and the high cost of psychiatric and other health services, most people with schizophrenia and other severe and persistent mental disorders rely on Medicare, Medicaid, and other government programs to cover their therapeutic services, medications, and other health care. When reductions or loss of these benefits curtail access to needed medication or services, clients’ health suffers and their use of more expensive emergency services increases (Soumerai et al., 1994). Individuals with a mental disorder also encounter barriers in procuring health insurance and, even when insured, in receiving general medical care (Druss & Rosenheck, 1998).
Integrating the range of services needed by individuals with severe and persistent mental disorders has been a vexing problem for decades. The General Accounting Office (1977) criticized the Federal community mental health centers for their failure to meet the multiple needs of individuals with chronic mental illness. The Federal response was to establish a Community Support Program to provide resources and technical assistance to communities to help them in formulating community support systems to integrate the various services provided by fragmented human services agencies (Turner & TenHoor, 1978; Tessler & Goldman, 1982). The limitations of a community support program in dealing with severe and persistent mental illness in major cities, particularly those with high rates of homelessness, prompted the Robert Wood Johnson Foundation to partner with HUD to create the Program on Chronic Mental Illness (Aiken et al., 1986). This program promoted the concept of local mental health authorities as the agencies responsible for integrating all services for individuals with chronic mental illness, including housing opportunities (Shore & Cohen, 1990, 1994). The Robert Wood Johnson Foundation Program on Chronic Mental Illness was initiated in late 1986 and evaluated over a 6-year period (Goldman et al., 1990a, 1990b, 1994a, 1994b).
The evaluation determined that local mental health authorities were established or strengthened in almost all of the nine cities, resulting in measurable increases in organizational centralization and reduced fragmentation of services (Morrissey et al., 1994). Case management services also were expanded, producing greater continuity of care and reductions in family burden (Lehman et al., 1994; Shern et al., 1994; Tessler et al., 1994). Client outcomes, including social functioning and quality of life measures, improved during the demonstration (Lehman et al., 1994; Shern et al., 1994). Yet the time course of most clients’ improvement did not coincide with improvements in system integration. This suggested that their improvement could not be attributed to system integration. For a subset of clients, improved client outcomes were due to the benefits of special combined housing and support services. Yet, even for this subset, improvements were related, but not directly attributable, to systems integration (Newman et al., 1994).
Evaluators concluded that system integration and traditional case management alone probably were not sufficient to produce optimal social and clinical outcomes (Goldman et al., 1994b; Lehman et al., 1994). They speculated that the availability of rental subsidies and supports or more intensive and higher quality case management services—such as those offered in assertive community treatment—were essential (Ridgely et al., 1996). This set of findings, coincident with the release of the report of the Federal Interagency Task Force on Homelessness and Mental Illness (1992), Outcasts on Main Street, prompted the development of another demonstration program.
Access to Community Care and Effective Services and Supports was launched by the Federal Center for Mental Health Services in 1993 (Randolph et al., 1997). Although the program is still in the midst of its evaluation, preliminary findings sustain the benefits of providing assertive community treatment to obtain good clinical and social outcomes. They also support an association between better system integration and higher rates of moving individuals with severe mental illness from homelessness into independent housing (Rosenheck et al., 1998a). It remains to be seen, however, whether the improvements in system integration observed over time are associated with improvements in consumers’ clinical and social outcomes.
Integrating service systems remains a challenge to mental health and related human service agencies. Its benefits for accountability and centralization of authority have been established. Its impact on individuals with severe and persistent mental illness may be limited by the lack of available high-quality services and mainstream welfare resources, reflecting the gap between what can be done and what is available (Goldman, 1998a).
1 The acute subform of post-traumatic stress disorder is distinct from acute stress disorder: the latter resolves by the end of the first month after the trauma, whereas the former may persist for up to 3 months. If the condition continues beyond 3 months’ duration, the diagnosis is changed to the chronic subform of post-traumatic stress disorder (DSM-IV). For example, symptoms still present 6 weeks after a trauma would warrant a diagnosis of acute post-traumatic stress disorder; the same symptoms at 4 months would be rediagnosed as chronic.
2 Anxiety is one of the few mental disorders for which animal models have been developed. Researchers can reproduce some of the symptoms of human anxiety in animals by introducing different types of stressors, either physical or psychosocial.
3 The axis involves the hypothalamus and the pituitary gland, and then the cortex, or outer layer, of the adrenal gland. Upon stimulation by the pituitary hormone ACTH, the adrenal cortex releases glucocorticoids into the circulation.
4 Also known as corticotropin-releasing factor.
5 CRH may act as a neuromodulator, a neurotransmitter, or a neurohormone, depending on the pathway.
6 The adjective “major” before the word “depression” denotes the number of symptoms required for the diagnosis, as distinct from a proposed new category of “minor depression,” which requires fewer symptoms (see Chapter 5).
7 Bipolar disorder is also known as bipolar affective disorder and manic depression.
8 Monoamine neurotransmitters are a chemical class that includes catecholamines (norepinephrine, epinephrine, dopamine) and indoleamines (serotonin).
9 A small, albeit noteworthy, sex-related difference is seen in the higher incidence of rapid-cycling bipolar disorder in women (cited in Blumenthal, 1994).
10 Nonadherence is defined as lack of adherence to prescribed activities such as keeping appointments, taking medication, and completing assignments.
11 Technically, FDA approves drugs for a selected indication (a disorder in a certain population). However, once the drug is marketed, doctors are at liberty to prescribe it for unapproved (off-label) indications.
12 Bipolar depression refers to episodes with symptoms of depression in patients diagnosed with bipolar disorder.
13 These are the vital components of most contemporary rehabilitation programs (see section on service delivery).
14 A chlorpromazine equivalent is a measure in milligrams of antipsychotic medication doses indexed to the potency of a standard dosage of chlorpromazine, one of the earliest, most widely used antipsychotic medications.
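As a hypothetical illustration (using the commonly cited, approximate equivalence of 2 milligrams of haloperidol to 100 milligrams of chlorpromazine), a daily dose of 10 milligrams of haloperidol would correspond to roughly 500 chlorpromazine-equivalent milligrams: 10 × (100 ÷ 2) = 500.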
15 Acute dystonia is involuntary muscle spasms resulting in abnormal and usually painful body positions. Parkinsonism is defined by tremors, muscle rigidity, and stuporous appearance. Dyskinesias are involuntary repetitive movements, often of the mouth, face, or hands, and akathisia is painful muscular restlessness requiring the person to move constantly.
16 For variations among Caucasian, Hispanic, Asian, and African American populations, see Frackiewicz et al., 1997; for Chinese populations, see Jann et al., 1992; for black, white, Chinese, and Mexican American populations, see Lam et al., 1995, and Lin et al., 1995.