RELATIVELY DEPRIVED

by JOHN CASSIDY

How poor is poor?

Issue of 2006-04-03
Posted 2006-03-27

In the summer of 1963, Mollie Orshansky, a forty-eight-year-old statistician at the Social Security Administration, in Washington, D.C., published an article in the Social Security Bulletin entitled “Children of the Poor.” “The wonders of science and technology applied to a generous endowment of natural resources have wrought a way of life our grandfathers never knew,” she wrote. “Creature comforts once the hallmark of luxury have descended to the realm of the commonplace, and the marvels of modern industry find their way into the home of the American worker as well as that of his boss. Yet there is an underlying disquietude reflected in our current social literature, an uncomfortable realization that an expanding economy has not brought gains to all in equal measure. It is reflected in the preoccupation with counting the poor—do they number 30 million, 40 million, or 50 million?”

Orshansky’s timing was propitious. In December of 1962, President John F. Kennedy had asked Walter Heller, the chairman of the Council of Economic Advisers, to gather statistics on poverty. In early 1963, Heller gave the President a copy of a review by Dwight Macdonald, in The New Yorker, of Michael Harrington’s “The Other America: Poverty in the United States,” in which Harrington claimed that as many as fifty million Americans were living in penury.

The federal government had never attempted to count the poor, and Orshansky’s paper proposed an ingenious and straightforward way of doing so. Orshansky had experienced poverty firsthand. Born in the South Bronx in 1915, she was one of six daughters of Ukrainian Jewish immigrants who barely spoke English. Her father, a plumber and ironworker, was often unemployed, and Orshansky and her sisters wore hand-me-downs and slept two to a bed. Sometimes the family stood in relief lines to collect food. Nevertheless, Orshansky attended Hunter College High School, which was then a school for gifted girls, and went on to Hunter College, where she majored in mathematics and statistics. In 1939, she joined the U.S. Children’s Bureau, now part of the Department of Health and Human Services, and studied children’s health and nutrition.

Orshansky never married or had children, but she was passionate about children’s welfare. From 1945 to 1958, she worked in the Department of Agriculture’s Bureau of Human Nutrition and Home Economics, where she helped devise a series of diets designed to provide poor American families with adequate nutrition at minimal cost. In painstaking detail, the food plans laid out the amount of meat, bread, potatoes, and other staples that families needed in order to eat healthily. These were “by no means subsistence diets,” Orshansky later wrote. “But they do assume that the housewife will be a careful shopper, a skillful cook, and a good manager who will prepare all the family’s meals at home.”

In 1958, Orshansky joined the research department of the Social Security Administration, and decided to try to estimate the incidence of child poverty. “Poor people are everywhere; yet they are invisible,” she told a reporter for the Dallas Morning News in 1999. “I wanted them to be seen clearly by those who make decisions about their lives.” Building on pioneering research on diet and poverty conducted in York at the turn of the twentieth century by Seebohm Rowntree, a British social reformer, Orshansky used her food plans to calculate a subsistence budget for families of various sizes. For a mother and father with two children, she estimated the expense of a “low-cost” plan at $3.60 a day, and of an even more frugal “economy” plan at $2.80 a day. Rather than trying to calculate the price of other items in the family budget, such as rent, heat, and clothing, Orshansky relied on a survey by the Agriculture Department, which showed that the typical American family spent about a third of its income on food. Thus, to determine the minimum income a family needed in order to survive, she simply multiplied the annual cost of the food plans by three. Families on the low-cost plan needed to earn at least $3,955 a year; families on the economy plan needed to earn $3,165.
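To make the arithmetic concrete, the sketch below works through the calculation using the daily food costs cited above and the one-third rule from the Agriculture Department survey. The small gaps between these back-of-the-envelope results and the published thresholds presumably reflect adjustments for family size and composition that are not detailed here.

```python
# A rough sketch of Orshansky's threshold arithmetic: the annual cost of a
# food plan is multiplied by three, because food was assumed to account for
# about a third of a family's budget. The daily costs are the ones cited in
# the text; the published thresholds ($3,955 and $3,165) include further
# adjustments not reproduced here.

FOOD_SHARE = 1 / 3  # food's assumed share of the family budget

def poverty_threshold(daily_food_cost, days_per_year=365):
    annual_food_cost = daily_food_cost * days_per_year
    return annual_food_cost / FOOD_SHARE  # i.e., multiply by three

print(round(poverty_threshold(3.60)))  # low-cost plan: ~$3,942 a year
print(round(poverty_threshold(2.80)))  # economy plan:  ~$3,066 a year
```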

Orshansky compared these figures with the Census Bureau’s records on pre-tax family incomes and concluded that twenty-six per cent of families with children earned less than the upper poverty threshold and eighteen per cent earned less than the lower poverty threshold. In total, she estimated that between fifteen million and twenty-two million children were living in poverty, a disproportionate number of them in single-parent households and minority neighborhoods. “It would be one thing if poverty hit at random, and no one group were singled out,” she wrote. “It is another thing to realize that some seem destined to poverty almost from birth—by their color or by the economic status or occupation of their parents.”

Heller and his colleagues on the Council of Economic Advisers cited Orshansky’s paper in an “Economic Report to the President” that appeared in January, 1964, shortly after Kennedy’s successor, Lyndon B. Johnson, declared a “war on poverty” in his State of the Union address. In August of that year, Congress created the Office of Economic Opportunity, which used Orshansky’s method to determine eligibility for new anti-poverty programs, such as Head Start. Other federal agencies followed suit, and in 1969 the White House adopted a slightly modified version of Orshansky’s lower threshold—the one based on the economy food plan—as the official poverty line.

In the nineteen-sixties, many economists believed that economic growth and government intervention would eliminate poverty. Between 1964 and 1973, as Johnson’s Great Society programs went into effect, the poverty rate fell from nineteen per cent of the population to 11.1 per cent. But, while the nation’s inflation-adjusted gross domestic product has virtually tripled since 1973, the poverty rate has hardly budged. In 2004, the most recent year for which figures are available, it stood at 12.7 per cent, a slight increase over the previous year, and in some regions the figure is much higher. The horror of Hurricane Katrina was not just the physical destruction it wrought but the economic hardship it exposed. In New Orleans, the poverty rate in 2004 was twenty-three per cent, a fact that George W. Bush noted in his address from New Orleans’ French Quarter on September 15th, when he said, “We have a duty to confront this poverty with bold action.” (Six months later, the Bush Administration has yet to present an anti-poverty plan.) According to the Census Bureau, many cities are even poorer than New Orleans. In Detroit in 2004, the poverty rate was 33.6 per cent; in Miami, it was 28.3 per cent; and in Philadelphia it was 24.9 per cent. (In New York, it was 20.3 per cent.)

The persistence of endemic poverty raises questions about how poverty is measured. In the past ten years or so, significant changes have been made in the way that inflation, gross domestic product, and other economic statistics are derived, but the poverty rate is still calculated using the technique that Orshansky invented. (Every twelve months, the Census Bureau raises the income cutoffs slightly to take inflation into account.)

This approach has some obvious shortcomings. To begin with, the poverty thresholds are based on pre-tax income, which means that they don’t take into account tax payments or benefits from anti-poverty programs, such as food stamps, housing subsidies, the Earned Income Tax Credit, and Medicaid, which cost taxpayers hundreds of billions of dollars a year. In addition, families’ financial burdens have changed considerably since Orshansky conducted her research. In the late fifties, most mothers didn’t have jobs outside the home, and they cooked their families’ meals. Now that most mothers work full time and pay people to help them take care of their kids, child care and commuting consume a larger share of the typical family’s budget.

Another problem is that the poverty thresholds are set at the same level all across the country. Last year, the pre-tax-income cutoff for a couple with two children was $19,806. This might be enough to support a family of four in rural Arkansas or Tennessee, but not in San Francisco, Boston, or New York, where the real-estate boom has created a shortage of affordable housing. According to Jared Bernstein and Lawrence Mishel, economists at the liberal Economic Policy Institute, in Washington, D.C., the average rent in working-class neighborhoods of Boston is about a thousand dollars a month, which for a family of four with a poverty-level income leaves just six hundred and fifty dollars a month for food, clothing, heat, and everything else. Bernstein and Mishel argue that in some cities the poverty thresholds should be twice their current level.

Such considerations suggest that the official measures understate the extent of poverty, but the opposite argument can also be made. The poverty figures fail to distinguish between temporary spells of hardship, like those caused by a job loss or a divorce, and long-term deprivation. Surveys show that as many as forty per cent of people who qualify as poor in any given year no longer do so the following year. Middle-class families that suffer a temporary loss of income can spend their savings, or take out a loan, to maintain their living standard, and they don’t belong in the same category as the chronically impoverished. One way to remedy this problem is to consider how much households spend, rather than how much they earn. If in the course of a year a household spends less than some designated amount, it is classified as poor. Daniel T. Slesnick, an economist at the University of Texas, has tested this approach using figures that he obtained from the Department of Labor’s Consumer Expenditure Survey, which tracks the buying habits of thousands of American families. Slesnick calculated that the “consumption poverty rate” for 1995—that is, the percentage of families whose spending was less than the poverty income threshold—was 9.5 per cent, which is 4.3 percentage points lower than the official poverty rate. Subsequent studies have confirmed Slesnick’s findings.
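In outline, a consumption-based count works like the sketch below: a household is classified as poor if its annual spending, rather than its income, falls below the cutoff. The spending figures and the cutoff here are invented for illustration; they are not Slesnick’s data.

```python
# Minimal sketch of a consumption-based poverty count. A household is poor
# if its annual spending falls below the cutoff. The household figures and
# the cutoff are illustrative only; the actual study used detailed Consumer
# Expenditure Survey records.

def consumption_poverty_rate(annual_spending, cutoff):
    poor = sum(1 for spending in annual_spending if spending < cutoff)
    return poor / len(annual_spending)

households = [14_000, 22_500, 31_000, 18_900, 55_000, 12_300, 40_750, 27_600]
print(f"{consumption_poverty_rate(households, cutoff=19_806):.1%}")  # 37.5% of this toy sample
```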

In 1995, a panel of experts assembled by the National Academy of Sciences concluded that the Census Bureau measure “no longer provides an accurate picture of the differences in the extent of economic poverty among population groups or geographic areas of the country, nor an accurate picture of trends over time.” The panel recommended that the poverty line be revised to reflect taxes, benefits, child care, medical costs, and regional differences in prices. Statisticians at the Census Bureau have experimented with measures that incorporate some of these variables, but none of the changes have been officially adopted.

The obstacles are mainly political. “Poverty rates calculated using the experimental measures are all slightly higher than the official measure,” Kathleen Short, John Iceland, and Joseph Dalaker, statisticians at the Census Bureau, reported in a 2002 paper reviewing the academy’s recommendations. In addition to increasing the number of people officially classified as impoverished, revising the Census Bureau measure in the ways that the poverty experts suggested would mean that more elderly people and working families would be counted as poor.

Conservatives would prefer a measure that reduces the number of poor people. “The poverty rate misleads the public and our representatives, and it thereby degrades the quality of our social policies,” Nicholas Eberstadt, of the American Enterprise Institute, wrote in a 2002 article. “It should be discarded for the broken tool that it is.” In February, the conservatives appeared to make some headway when the Census Bureau released a report on some new ways of measuring poverty that could cut the official rate by up to a third.

Rather than trying to come up with a subsistence-based poverty measure about which everybody can agree, we should accept that there is no definitive way to decide who is impoverished and who isn’t. Every three years, researchers from the federal government conduct surveys about the number of appliances in the homes of American families. In 2001, ninety-one per cent of poor families owned color televisions; seventy-four per cent owned microwave ovens; fifty-five per cent owned VCRs; and forty-seven per cent owned dishwashers. Are these families poverty-stricken?

Not according to W. Michael Cox, an economist at the Federal Reserve Bank of Dallas, and Richard Alm, a reporter at the Dallas Morning News. In their book “Myths of Rich and Poor: Why We’re Better Off Than We Think” (1999), Cox and Alm argued that the poverty statistics overlook the extent to which falling prices have enabled poor families to buy consumer goods that a generation ago were considered luxury items. “By the standards of 1971, many of today’s poor families might be considered members of the middle class,” they wrote.

Consider a hypothetical single mother with two teen-age sons living in New Orleans’ Ninth Ward, a neighborhood with poor schools, high rates of crime and unemployment, and few opportunities for social advancement. The mother works four days a week in a local supermarket, where she makes eight dollars an hour. Her sons do odd jobs, earning a few hundred dollars a month, which they have used to buy stereo equipment, a DVD player, and a Nintendo. The family lives in public housing, and it qualifies for food stamps and Medicaid. Under the Earned Income Tax Credit program, the mother would receive roughly four thousand dollars from the federal government each year. Compared with the destitute in Africa and Asia, this family is unimaginably rich. Compared with a poor American family of thirty years ago, it may be slightly better off. Compared with a typical two-income family in the suburbs, it is poor.

The concept of relative deprivation was first described by Adam Smith in “The Wealth of Nations,” in a passage on the “necessaries” of daily life:

By necessaries I understand not only the commodities which are indispensably necessary for the support of life, but whatever the custom of the country renders it indecent for creditable people, even of the lowest order, to be without. A linen shirt, for example, is, strictly speaking, not a necessary of life. The Greeks and Romans lived, I suppose, very comfortably, though they had no linen. But in the present times, through the greater part of Europe, a creditable day-laborer would be ashamed to appear in public without a linen shirt, the want of which would be supposed to denote that disgraceful degree of poverty which, it is presumed, nobody can well fall into, without extreme bad conduct. Custom, in the same manner, has rendered leather shoes a necessary of life in England.

For decades, economists overlooked Smith’s analysis, and it was left to sociologists and anthropologists to study the impact of relative deprivation. During the Second World War, Samuel A. Stouffer, a sociologist at the University of Chicago, and a team of researchers compared the levels of job satisfaction reported by members of the military police, a profession in which few people were promoted, and members of the Army Air Force, where there were frequent opportunities for advancement. To the researchers’ surprise, the policemen reported greater happiness in their jobs than the airmen. One possible explanation, the researchers speculated, is that the policemen tended to compare themselves with colleagues who hadn’t been promoted, whereas the “reference group” for the airmen was colleagues who had been promoted. “The more people a man sees promoted when he is not promoted himself,” the Cambridge University sociologist W. G. Runciman wrote in 1966, in his book “Relative Deprivation and Social Justice,” “the more people he may compare himself to in a situation where the comparison will make him feel relatively deprived.”

More recently, three economists at the University of Warwick published the results of a survey of sixteen thousand workers in a range of industries, in which they found that the workers’ reported levels of job satisfaction had less to do with their salaries than with how their salaries compared with those of co-workers. Human beings are also competitive with their neighbors. Erzo Luttmer, an economist at the John F. Kennedy School of Government, recently found that people with rich neighbors tend to be less happy than people whose neighbors earn about as much money as they do. It appears that, while money matters to people, their relative ranking matters more.

Relative deprivation is also bad for your health. In a famous study conducted between 1967 and 1977, a team of epidemiologists led by Sir Michael Marmot, of University College London, monitored the health of more than seventeen thousand members of Britain’s Civil Service, a highly stratified bureaucracy. Marmot and his colleagues found that people who had been promoted to the top ranks—those who worked directly for cabinet ministers—lived longer than their colleagues in lower-ranking jobs. Mid-level civil servants were more likely than their bosses to develop a range of potentially deadly conditions, including heart disease, high blood pressure, lung cancer, and gastrointestinal ailments.

Initially, some critics suggested that these results could be attributed to differences in behavior: members of the lower ranks were more likely to smoke and drink and less likely to exercise and eat healthily than their better-paid superiors. To test this theory, Marmot and his team have been conducting a follow-up study of civil servants, which began in 1985 and continues to this day. This survey has confirmed the results of the first study, and has also suggested that less than a third of the difference in patterns of disease and mortality can be ascribed to behavior associated with coronary risk, such as smoking or lack of exercise. “The higher the social position, the longer people can expect to live, and the less disease they can expect to suffer,” Marmot explained in a recent paper. “This is the social gradient in health.”

The British findings have been replicated in other parts of the world, including the United States. Amartya Sen, who won the 1998 Nobel in economics, has pointed out that African-Americans as a group have a smaller chance of reaching old age than Indians born in the impoverished state of Kerala, who are much poorer. Part of the reason may be the high rate of homicide deaths among young African-American men, but African-American women also have higher mortality rates than women in Kerala. “So it is not only the case that American blacks suffer from relative deprivation in terms of income per head vis-à-vis American whites,” Sen wrote in his 1999 book, “Development as Freedom.” “They also are absolutely more deprived than the low-income Indians in Kerala.”

The epidemiological studies don’t explain how relative deprivation damages people’s health; they simply suggest that there is a connection. One possibility is that subordination leads to stress, which damages the body’s immune system. In the animal kingdom, where there are bitter fights over relative status, there is evidence supporting this hypothesis. The neurobiologist Robert Sapolsky has described how dominant baboons in troops on the African plains harass and physically abuse their subordinates. When Sapolsky analyzed blood samples from low-ranking baboons, he found high levels of a hormone associated with stress. Other scientists have shown that dominant rhesus monkeys have lower rates of atherosclerosis (hardening of the arteries) than monkeys further down the social hierarchy, and when dominant female monkeys are relegated to a subordinate status their rate of heart disease goes up.

“Given the animal results,” Angus Deaton, a Princeton economist who is an expert on poverty, wrote in a recent paper about relative deprivation and mortality, “the degree to which low rank is harmful to an individual is likely to depend on the number of people of higher rank, because each such person is in a position to deliver the threats, insults, enforced obeisance, or ultimate violence that generate stress. Individuals who are insulted by those immediately above them insult those immediately below them, generating a cascade of threats and violence through which low-ranked individuals feel the burden, not just of their immediate superiors, but of the whole hierarchy above them.”

Poor health may be the most dramatic consequence of relative deprivation, but there are more subtle effects as well. Although many poor families own appliances once associated with rich households, such as color televisions and dishwashers, they live in a society in which many families also possess DVD players, cell phones, desktop computers, broadband Internet connections, powerful game consoles, S.U.V.s, health-club memberships, and vacation homes. Without access to these goods, children from poor families may lack skills—such as how to surf the Web for help-wanted ads—that could enhance their prospects in the job market. In other words, relative deprivation may limit a person’s capacity for social achievement. As Sen put it, “Being relatively poor in a rich country can be a great capability handicap, even when one’s absolute income is high in terms of world standards.” Research by Tom Hertz, an economist at American University, shows that a child whose parents are in the bottom fifth of the income distribution has only a six-per-cent chance of attaining an average yearly income in the top fifth. Most people who start out relatively poor stay relatively poor.

Since relative deprivation confers many of the disadvantages of absolute deprivation, it should be reflected in the poverty statistics. A simple way to do this would be to classify a household as impoverished if its pre-tax income was, say, less than half the median income—the income of the household at the center of the income-distribution curve. In 2004, the median pre-tax household income was $44,684; a poverty line based on relative deprivation would have been $22,342. (As under the current system, adjustments could be made for different family sizes.)
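The sketch below shows how such a relative threshold might be computed, using the 2004 median cited above and a configurable fraction; the sample incomes are invented for illustration.

```python
# Sketch of a relative-poverty line set at a fraction of median household income.
# The 2004 median of $44,684 comes from the text; half of it is $22,342.
# The sample incomes below are invented.
from statistics import median

def relative_poverty_line(incomes, fraction=0.5):
    return fraction * median(incomes)

def relative_poverty_rate(incomes, fraction=0.5):
    line = relative_poverty_line(incomes, fraction)
    return sum(1 for income in incomes if income < line) / len(incomes)

incomes = [9_000, 17_500, 28_000, 36_500, 44_684, 52_000, 68_000, 90_000, 140_000]
print(relative_poverty_line(incomes))            # 22342.0 (this sample's median is 44,684)
print(f"{relative_poverty_rate(incomes):.1%}")   # 22.2%: two of the nine households fall below
```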

Adopting a relative-poverty threshold would put to rest the debate over how to define a subsistence threshold. As long as the new measure captured those at the bottom of the social hierarchy, it wouldn’t matter much whether the income cutoff was set at forty per cent or fifty per cent of median income. If poverty is a relative phenomenon, what needs monitoring is how poor families make out compared with everybody else, not their absolute living standards.

Academics have proposed a relative-poverty line before; notably, the British sociologist Peter Townsend, in 1962, and the American economist Victor Fuchs, who is now an emeritus professor at Stanford, in 1965. But the idea has never been taken very seriously. “I still think that it is the right way to think about poverty, especially from a policy point of view,” Fuchs told me. Unfortunately, few politicians and poverty experts agree. Liberals fear that shifting the focus of policy away from hunger and physical need would make it even harder to win support for government anti-poverty programs; conservatives fear that adopting a relative-poverty rate would be tantamount to launching another costly war on poverty that the government couldn’t hope to win.

Neither of these fears is justified. Many Americans are skeptical about government anti-poverty programs, because they believe that the impoverished bear some responsibility for their plight by dropping out of high school, taking drugs, or committing crimes. Raising public awareness about relative deprivation could help to change attitudes toward the poor, by showing how those at the bottom of the social hierarchy continue to face obstacles even as they, along with the rest of society, become more prosperous. The Times recently reported that more than half of black men in inner cities fail to finish high school, and that, nationwide, almost three-quarters of black male high-school dropouts in their twenties are unemployed. “It doesn’t do a poor person any good to say ‘You are better off than you would have been thirty years ago,’ ” Fuchs said. “The pathologies we associate with poverty—crime, drug use, family disintegration—we haven’t eliminated them at all.”

The conservative case against a relative-poverty line asserts that since some people will always earn less than others the relative-poverty rate will never go down. Fortunately, this isn’t necessarily true. If incomes were distributed more equally, fewer families would earn less than half the median income. Therefore, the way to reduce relative poverty is to reduce income inequality—perhaps by increasing the minimum wage and raising taxes on the rich. Between 1979 and 2000, the inflation-adjusted earnings of the poorest fifth of Americans increased just nine per cent; the earnings of the middle fifth rose fifteen per cent; and the earnings of the top fifth climbed sixty-eight per cent.

In the Ninth Ward and in neighborhoods like it, the gap between aspiration and reality has never been greater. As Americans were shocked to learn, many residents lacked the means to pay for transportation out of the city during Hurricane Katrina. But the poor of New Orleans were also relatively deprived, as became clear when they were transported to Houston and other cities and, in some cases, ended up staying with affluent white families. (Not surprisingly, conflict ensued. In Houston’s public high schools, Katrina evacuees have been involved in brawls.) The entire episode demonstrated that those at the bottom of the social pecking order are not only economically detached from other Americans; they are also socially and geographically isolated.

Introducing a relative-poverty line would help shift attention to this larger problem of social exclusion. Few attempts have been made to address the issue, but the results have been promising. A recent long-term study of Head Start, which began in 1964 as one of the original “war on poverty” initiatives, found that poor children who participated in the program were more likely to finish high school and less likely to be arrested for committing crimes than those who did not. And in another initiative, undertaken between 1976 and 1998, the city of Chicago relocated thousands of impoverished African-Americans from inner-city projects to subsidized housing in middle-class, predominantly white suburbs; researchers found that the adults who participated were more likely to be employed, and their children were more likely to graduate from high school, than their inner-city counterparts. (A more recent experiment, in which the federal government gave vouchers to poor residents in a number of cities, enabling them to move to wealthier neighborhoods, has failed to produce similar gains. Many of the participants chose to live near one another, which researchers think may account for the disappointing results.)

Mollie Orshansky, who is now ninety-one and living on Manhattan’s East Side, never warmed to the idea of a relative-poverty line—she was too concerned about people actually starving—but she wasn’t wedded to her method, either. “If someone has a better approach, fine,” she said in 1999. “I was working with what I had and with what I knew.”