Posted by Jay Livingston
Christmas in the Northeast was a warm one. Brad Wright describes the sartorial adjustments his six-year-old made at Christmas eve services (baring midriff, rolling up pantlegs and shirtsleeves). Dress codes were apparently not enforced.
Sunday evening here in New York, at a local Catholic church’s Christmas eve family mass, the father of one of the little girls in the children’s choir sat in the front pew wearing jeans and a mustard-colored sweatshirt. A few men wore neckties; most didn't. Some women were in their holiday outfits, but some others wore sneakers. I was reminded of a couple I know who exemplify the American success story, raised in a Catholic working class home but now quite successful. Somewhere along the way, she changed the family’s affiliation to the Episcopal church because the people at the Catholic churches just didn’t seem to care what they wore.
It’s anecdotal evidence of course, but it may be representative. Thirty years ago in Americans Together, a study of a Midwestern town (“Appleton”), French anthropologist Hervé Varenne noted the differences in how people dressed for church. The Protestants dressed up. The Catholics offered a much greater variety, from Sunday best to very casual. As I recall, Varenne traced the differences back to the theology of the Reformation, especially (Weber noted this too) insecurity about one's state of salvation. The more individualist Protestant doctrine puts pressure on members to show outwardly the signs of grace (not that any of the congregants in Appleton nearly a half-millennium after the fact would have put it that way). In Catholicism, your place in the community and in heaven is more secure; you need only to come to church, confess, take communion, etc.
(I highly recommend Varenne’s book to anyone interested in American culture. Several chapters, though not the one on Protestants and Catholics, are available online at his website.)
Posted by Jay Livingston
Why is it news when sex objects behave sexually? And why do people feign shock and horror?
I have not been following the Miss USA flap closely. It hardly seems important enough, though anything that makes Donald Trump a matter of mockery can’t be all bad even if it does serve his never-ending quest for publicity.
Trump owns the Miss USA beauty contest and a couple of others. Recently, the alert media reported that this year’s winner, Miss Kentucky, having won her title in the usual way — i.e., parading around skimpily clad in front of a lot of people — had behaved immorally. She had been drinking in bars, testing positive for cocaine, and even kissing Miss Teen USA, who presumably won her title in a similar way. What else could Trump do but threaten to take the title away? He could let the story play out for a couple of days, that’s what, and then keep the story in the news by announcing that she could keep her title after all. The stock plotline Trump selected was that Miss KY was a basically good small-town girl corrupted by the wicked ways of New York and that she deserved a second chance.
Today, the news is that Miss Nevada is being cashiered for, of all things, being sexual. (Nevada, if I remember correctly, is the only state in the country that has legalized brothels.) Some photos of her kissing and flashing at some party have surfaced (you can find the uncensored version on the Internet, but far be it from a wholesome blog like this one to provide you the URL).
Is all this peculiarly American? I suspect that the beauty pageant is an American invention, and there may be something especially American about it — the display of sexuality amid the continual declaration of high-mindedness, the denial of both the obvious lechery and the only slightly less obvious profit motive.
Posted by Jay Livingston
The news today is that Pittsburgh, my old hometown, is going to get a gambling casino. All slot machines.
Up until about 25 years ago, the action in casinos was at the tables. People crowded around a crap table generate excitement, almost a team spirit since most are betting with the shooter rather than with the house. And everyone gets a chance to be the shooter, as the dice pass from player to player around the table. Roulette and blackjack are calmer, the players seated, and the house, rather than one of the players, spinning the wheel or dealing the cards, but the players are still there together, aware of each other’s bets.
The tables were where the casinos made their money. They courted the high rollers, comping them rooms, food, and even air fare. The slot machines were small-time stuff, a way to keep wives from getting bored.
Then the balance began to shift until now slot machines account for most casino revenue, typically 75%, even higher in some places. So why not just get rid of the tables altogether and have nothing but machines? From the casino’s point of view, there are lots of reasons to get rid of the tables, mostly things like labor costs, health benefits, and other potential difficulties that arise when your employees are human beings.
But what is the attraction for players? Is it that they, too, feel more comfortable alone with a machine than among other humans?
There may be other reasons as well. You don’t have to worry about how much to tip if you win; you don’t have to tip at all. Also, the machines are far more complicated than the old three-wheel one-armed bandits. They resemble video games, with different levels you can move through and different choices you can make. The generation raised on video games may feel more comfortable with these machines and may find a simple pair of dice or deck of cards incredibly one-dimensional.
Even the traditional games are becoming mechanized. You can play poker, craps, or roulette at an electronic console rather than at a table. I guess I’m hopelessly old fashioned. I’d be less likely to trust a programmable computer to give an honest roll of the dice or turn of a card than I would a real person holding the actual dice or deck.
The sociological question is the one Putnam raised about bowling. Is this transformation of gambling yet one more way that social life is becoming more fragmented and individualized? What makes public social life interesting is the possibility of new experience, something we never expected. The more individual control we have over our environment, the more we remove the possibility of these unplanned encounters.
In the fully mechanized casino, people minimize the chance of a random social encounter while at the same time they cede complete control over their money to a flashy random-number generator.
Posted by Jay Livingston
Surveillance cameras. London has a half million of them. In New York, in Greenwich Village and Soho, there are about 4,200 — a drop in the London bucket, but five times more than in 1998. That’s according to a survey out last week by the New York Civil Liberties Union. The majority of the cameras were installed and operated by private businesses and buildings.
The cameras are supposedly for our protection, but the NYCLU and others claim that the cameras do not reduce crime, though they may help catch perpetrators after the crime has been committed. But perhaps that’s only because the criminals don’t know about them. Or if criminals do know, the cameras are so unobtrusive that the criminals forget they are there. As anyone who has done participant observation knows, after a while, people will tune out even human observers who are standing right there and go on about their business, even when that business is of questionable legality. “I don’t see how my men could have done that with those observers right there in the car,” said one police officer when shown an observational report about police brutality. That was forty years ago. Now the cops are on videotape. Has the possibility of a video turning up all over the TV news had any effect on the way police do their work?
The NYCLU worries about the erosion of privacy, especially by police cameras. The camera proponents argue that the cameras are trained on the streets or the interiors of stores. They see nothing more than what a person in the same situation might see, though usually from a higher angle. The NYCLU points to cases where people thought they were alone, in fact were alone in the sense that no other people were around, but were secretly taped. The NYCLU even found a classic example: police using the night-vision capacity of a helicopter camera switched the focus from a bicycle protest to the terrace of an apartment building, where a couple who thought they were alone in the dark were making love.
Still, there’s a difference between being out in public, casually noticed by strangers, and being watched. One afternoon in ancient times, back when I was in grad school, I was walking around town after lunch. (It may even have been one of those days when I lunched with a group that included the current director of the NYCLU, not that that’s relevant.) It was a warm day, and because my hands sweat, and because I didn’t want the paperback book I was carrying to get damaged, I folded it into the protection of my newspaper. At some point as I was walking down the street, a man in a suit tapped me on the shoulder. “Do you mind if I see that book you’ve got inside that paper?” he said.
I was stunned. He was a store detective from the bookstore, where I had been browsing earlier, and he thought I might have shoplifted the book. I showed him the book, which he could see immediately was not stolen. “O.K.,” he said. I had no problem with the bookstore wanting to protect itself from shoplifting. But then it hit me. “You mean you’ve been following me all this time?” I had left the bookstore ten or fifteen minutes earlier. “Yeah. I lost you over on [he named some street or store, which I don’t remember] for a while.”
My reaction was visceral; I could feel it in my gut— uneasiness, almost fear. I immediately thought back over my path since leaving the store. I had been in public the whole time, all my activities visible to strangers. Still, I wondered if I had done anything that I wouldn’t have wanted him to have seen— nothing criminal, just embarrassing or in some way private. I didn’t like the idea that I’d been followed and spied on.
When we’re in public, we take it for granted that others will notice us as one of the crowd. It’s a very different feeling to think that we are having every movement, every twitch and scratch, closely observed and recorded.
I guess I’ll watch Coppola’s film “The Conversation” next time it comes around on television. And maybe I’ll include the Civil Liberties Union among my year-end donations.
Posted by Jay Livingston
A standard church sermon warns us against placing too much emphasis on material objects, wealth, and success. Pursuit of these worldly goals imperils not only our souls but our human relationships with family and friends. That’s Sunday. Monday morning, we go back to a life dominated by the very same values -- success and the money and material goods that come with it.
For those who don’t go to church to hear this message, there’s always the movies.
Last weekend, I saw “The Devil Wears Prada,” recently released on DVD. How many times have we seen this story? I was tempted to stop the DVD after the first two minutes and ask my 16-year-old to predict the plot, and I’m sure he could have. I suppose it’s a sign of progress that this story can now be told with women in the main roles and men as pretty faces. But the moral about yielding to the devil is the same, and so are his temptations— career success and the things money can buy.
In “Prada,” a sensible young woman (Anne Hathaway) with a journalism degree, good values, and a working class boyfriend (the good-looking guy from “Entourage” as a chef) gets caught up in the world of high fashion, where appearance counts for everything. That world and its values are personified in the character of her arrogant, demanding boss (Meryl Streep), a fashion editor who apparently dominates the entire fashion industry.
Our good girl, seemingly against her will, winds up getting new hair, new makeup, and clothes, clothes, clothes. She works long hours trying to please her boss and becomes super-competent at her job. Only late in the film does she realize what she has sacrificed: “I turned my back on my friends and family.” And when she tries to blame everything on the external pressures of her work, Streep tells her bluntly, “You chose to get ahead.”
Of course, in the end, she walks out on the fashion world and into the good kind of journalism she was looking for at the beginning of the film.
The conflict between relationships and success is standard stuff in American TV and movies and perhaps in real life as well, though only in the movies do people regularly turn their backs on a successful career. If “Prada” offers anything new, it’s to call into question not just our materialism but even our values of achievement and good old-fashioned hard work.
This is not to say that movies show us the underside of all our values. Just a select few like success. Freedom, independence, equality, optimism, rationality, informality — it’s hard to think of a film that portrays these as anything but good.
But at least “Prada” confronts its heroine with a choice. More typically, American movies and TV pretend that you actually can have it both ways. You can be successful without abandoning your roots, you can move up without moving out. “Entourage” is a good example, an urban version of “The Beverly Hillbillies.” The guys remain true to their Queens working-class ways and to one another even when surrounded by Hollywood with all its tension and pretension, and yet they always come out on top.
Posted by Jay Livingston
A longtime observer of American society once said, “The other night I went to a fight, and a hockey game broke out.”
Last night it was basketball. Knicks vs. Nuggets at the Garden. It was late in the game, and the Knicks had no chance of winning. Mardy Collins of the Knicks committed a flagrant foul, horse-collaring J.R. Smith of the Nuggets, who was about to jump for an otherwise uncontested breakaway jam.
Smith reacted. In-your-face chest bumping led to pushing. Other players joined in, some pushing and grabbing, some trying to separate the combatants. Others threw punches. Some of the punches may even have landed. The refs ejected all ten players.
The tongue clucking in the media afterwards was so loud it could have been heard above a NASCAR race. “The worst day in NBA history,” said someone on ESPN. “The only ones to benefit from this will be the charities,” said someone else, referring to the ultimate recipients of the heavy fines that the NBA will levy on the players.
Really? I’m sure that the NBA commissioner will, in his media appearance, look as stern as possible. He will deplore the actions of these players and say how terrible it is for the league. Then he’ll go back to his office and watch the TV ratings for the NBA, especially the Knicks and Nuggets, for the next couple of weeks, especially if they have a rematch. We should watch the ratings too, and we shouldn’t be surprised if they rise.
I suspect something similar is true about NASCAR fans. For spectator interest, the best race is not the one that is crash-free. It’s the one with the spectacular, multi-car crash where all the drivers walk away unhurt.
Regardless of ratings, the NBA may actually want to end these brawls. I am more skeptical about the NHL. I suspect they could greatly reduce the fighting if they wanted to, and if they were willing to impose real penalties. Deterrence works, at least in some circumstances. Sure, fights are crimes of passion, and in the heat of the moment, players are not thinking about all the contingencies. But players are aware of the penalties. I don’t have the data, but I’d bet a lot that if you looked at when flagrant fouls and fights occur in the NBA, there would be a very strong correlation with the point differential. Nobody wants to give up a technical or get thrown out of a game they might win.
InsideHigherEd.com has a report today showing that Sociology is on the leading edge when it comes to retirement. In just ten years, we've nearly doubled the rate at which we're hanging up our spikes. I suspect this has a lot to do with the trajectory of the field itself, not just with those in it. As Everett Hughes pointed out, professions have careers just as people do, and one aspect of that career is the waxing and waning of popularity. If there was a sociology boom in the sixties, all those people who entered the field then should be hitting retirement age about now.
Percentage of Social Science Ph.D.’s Who Are Retired
| Other social sciences | 6.7% | 8.3% |
Percentage of Social Science Ph.D.’s Who Are Unemployed
| Other social sciences | 1.6% | 1.5% |
Posted by Jay Livingston
You find a wallet in the street, one of those inexpensive nylon jobs. It has two singles, a dime, a $50 gift certificate, a shopping list, and an ID card with name, phone and address. What do you do?
Wallettest.com shows the results of a field experiment in honesty. I don’t know how long the Wallet Test website has been up—the press release is dated this month— but it’s new to me.
Paul Kinsella videotaped people finding the wallet and kept data on who tried to return it and who didn’t. Kinsella is not a social scientist, and you can find flaws in his methodology.
The results — the demographics of honesty— are about what you’d expect, and you can see brief videos of people picking up the wallet. You can even listen to phone conversations with three people who called trying to get information so they could cash the $50 gift certificate. Kinsella's end of the conversation departs from the standard social science debriefing protocol. Or as Kinsella says of one caller, “My goal was to try to keep this schmuck on the phone for as long as possible.”
Posted by Jay Livingston
After a protest against the president, a student Website carried a statement that included the following: “The students showed that despite vast propaganda, the president has not been able to deceive academia.”
The students had shouted down the president, set fire to photos of him, thrown firecrackers, chanted, “Death to the dictator,” and kicked at the car in which he made his premature departure.
Nevertheless, according to the story in the Times, “The guards did not remove the students or use force to stop the protests,” although students at the protest were certain that some of the counter-demonstrators supporting the president were shills bused in by the Administration.
You’ve probably caught on by now that this was not in the US. (That “death to the dictator” is a giveaway. American protesters don’t call for death to anyone. Well, sometimes there are demonstrations in favor of capital punishment and the execution of particular criminals, but aside from those . . . ) And of course there's no way that US protesters could have gotten anywhere near kicking distance of the president's limo.
The protest was in Iran, and the president was the somewhat loony Ahmadinejad.
The story seems like some bizarro mirror of reactions here to our own president and issues of free speech. But what if it had been the US? What if students at a university speech by President Bush had protested like this? Any chance that the guards wouldn’t use force to clear the protesters out? And is it possible that the administration, given advance warning of a protest, might bring in outside counterdemonstrators?
I’m not sure what the sociological moral of the story is. And I don’t mean to imply that students in Tehran are freer than their US counterparts. In fact, Ahmadinejad, as the Times reports, has “cracked down on dissent.” But the incident, and our reactions to it, may have some relevance for our own debates about free speech on campus.
Posted by Jay Livingston
Can someone please explain the rules about formal dress?
The news story out of Washington yesterday was that at a White House reception on Sunday, three women wore the same dress. Four actually. The fourth was the first lady. Television reports spliced together a montage of the three women arriving, each escorted by a man in a tuxedo, and then Mrs. Bush in the same dress, a red Oscar de la Renta number that goes for $8500.
Under the circumstances, Mrs. Bush felt compelled to slip upstairs and into something else. If the other three women had been near their own walk-in closets, they would probably have done the same thing. Or at least two of them would have. But why?
Why is it so terrible for two or three or four women to be wearing the same dress? The news reporters assumed that we knew and did not explain. Nor did any of the reports even mention that all the men — not just two or three, but all of them — were wearing nearly identical outfits. Black tuxedos with white shirts and black ties. Clumped together at the White House reception, these plumpish, successful men looked like a colony of penguins. The women at the party, though many wore black, could choose all kinds of colors — Dolly Parton was in white, Shania Twain in a print. But if a man had arrived in some color other than black— a seasonal red or green for example— he might well have been denied admission.
The rules are clear:
Men: same style, no colors
Women: unique style, all colors
Obviously these rules say something about gender, but what? That women have nothing better to do than to spend their time shopping for one-of-a-kind clothes while men are so busy they don’t have time to think about the matter? But that doesn’t explain the analogous pattern in names. Women don’t want themselves or their daughters to have names that are too common, and fashions in names for women, just like fashions in clothing, change much more rapidly than do men’s (to check name popularity, go here).
Why don’t we feel the same way about the names and clothes that men wear? The men-in-black requirement is especially interesting, at least to me. Once, to a friend’s wedding, I wore a deep blue dinner jacket instead of a tux, and I’m not sure if the family has ever forgiven me. Hey, it was summer in the 70s.
It wasn’t always like this. Go back two centuries or so, to the court of King George III rather than Bush George the Second, and you might think you’d stumbled into an Elton John theme party. Of course, even in the 18th century, women’s dress had greater variety than did men’s, but at least a guy could wear color. In this picture, which makes fun of the difficulties women encountered just to get into their gowns, the man is in bright red, and the maid (?) is in blue. (I'm not sure if the neutral-colored garment being laced up is the final layer or merely an undergarment.)
The rules of formal dress, just like preferences in names, probably also vary by social class and (at least in the US) race. Levitt and Dubner, the Freakonomics guys, maintain that changes in names (at least among whites in the US) filter down through the social class structure, starting from the top. What about fashions?
Posted by Jay Livingston
If you blog about the news, things keep cycling back. This week, thanks to the report of the Iraq Study Group, the news reminds us that the Bush administration still refuses to talk with Iran and Syria. (You can download a .pdf file of the report here.) I blogged that such a refusal seemed silly (“Can We Talk?”, Nov. 1). The ISG puts it more soberly: it’s detrimental to us. It quotes an Iraqi official saying that already “Iran is negotiating with the US on the streets of Baghdad,” (p. 25 of the .pdf file, probably p. 33 in the actual report).
And then there’s the controversy over just how much violence there is. Two months ago, the British journal The Lancet published an article estimating that 600,000 people had been killed in Iraq, twenty times the figure President Bush had mentioned.
The numbers obviously had political implications, and war supporters (yes, there still were some back in October) insisted that the numbers were greatly inflated. After all, the estimate works out to about 470 deaths a day, when even the big massacres reported on the news — car bombings and the like — rarely killed more than fifty. Some social scientists and anti-war bloggers defended the research — its sampling technique and its conclusions.
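That per-day figure is a simple back-of-the-envelope calculation. A minimal sketch (the start and end dates below are my own rough assumptions, not from the post or the Lancet article; slightly different endpoints put the rate anywhere in the 460–500 range, hence "about 470 a day"):

```python
from datetime import date

# Assumed span: invasion of Iraq (March 2003) to roughly when the
# Lancet article appeared (October 2006). Both dates are approximations.
start = date(2003, 3, 20)
end = date(2006, 10, 11)
days = (end - start).days  # about 1,300 days

estimated_deaths = 600_000  # the Lancet figure cited in the post
per_day = estimated_deaths / days

print(days, round(per_day))  # prints: 1301 461, in the neighborhood of 470
```

The point of the critics' arithmetic was simply that a rate of several hundred deaths a day dwarfed anything the daily news reports seemed to show.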
Shaping the data to fit political goals seems to have been a tool more used by the administration than by the social scientists. The ISG has this to say (p. 62 in the .pdf file).
In addition, there is significant underreporting of the violence in Iraq. The standard for recording attacks acts as a filter to keep events out of reports and databases. A murder of an Iraqi is not necessarily counted as an attack. If we cannot determine the source of a sectarian attack, that assault does not make it into the database. A roadside bomb or a rocket or mortar attack that doesn’t hurt U.S. personnel doesn’t count. For example, on one day in July 2006 there were 93 attacks or significant acts of violence reported. Yet a careful review of the reports for that single day brought to light 1,100 acts of violence. Good policy is difficult to make when information is systematically collected in a way that minimizes its discrepancy with policy goals.
Posted by Jay Livingston
Sociologists are often accused of being preoccupied with the obvious and the useless. Business school faculty, by contrast, work on problems that have a practical payoff, right?
Somehow I got on the e-mailing list for a publication from the Wharton School of Business, which is to MBAs what MIT is to engineers. The latest issue has this article: “The ‘Traveling Salesman’ Goes Shopping: The Efficiency of Purchasing Patterns in the Grocery Store.” It asks if grocery shoppers plan out their route through the supermarket the way that sales reps plan a multi-city trip. “Do shoppers tend to be somewhat ‘optimal’ in their shopping patterns?” And it reaches the jaw-dropping conclusion: “travel inefficiency accounts for a large portion of the travel distance in the majority of grocery trips.”
I’ve shopped in supermarkets, and I’ve tagged along with others who shop in supermarkets. So this research seems right up there with “Ursine Defecation Patterns and Their Correlation with Sylvan Density Environmental Variables.” In a word, du-uhh.
The grocery researchers put Lojack-like transmitters on shopping carts so as to generate something like that map in Harry Potter with moving dots tracking people as they scamper around Hogwarts. Then the researchers matched the shopper’s path with the items scanned at the checkout. It’s an interesting high-tech “unobtrusive measure.” Without the shopper’s knowledge (I assume), they could know what items she bought and the route she took through the store. They also knew where those items were on the shelves, so they could work out the “ideal” route and compare it to the shopper’s actual route.
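The core comparison, an actual cart path measured against the shortest possible one, can be sketched in a few lines. This is a toy illustration, not the study's method: the store coordinates, item names, and straight-line distances here are all invented (a real store imposes aisle constraints, and the real study used cart-tracking and scanner data):

```python
from itertools import permutations
from math import dist

# Invented layout: entrance, checkout, and shelf locations of the
# items on one hypothetical shopper's receipt.
entrance = (0, 0)
checkout = (10, 0)
items = {"milk": (2, 8), "bread": (6, 3), "peanut butter": (9, 7)}

def path_length(order):
    """Total distance: entrance -> items in the given order -> checkout."""
    points = [entrance] + [items[name] for name in order] + [checkout]
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

# "Ideal" route: brute force over all orderings (fine for a small basket).
best_order = min(permutations(items), key=path_length)
optimal = path_length(best_order)

# A hypothetical "actual" route, as cart-tracking might record it.
actual = path_length(("peanut butter", "milk", "bread"))
inefficiency = actual / optimal  # >= 1.0 by construction

print(best_order, round(optimal, 2), round(inefficiency, 2))
```

With only a handful of items, brute force over every ordering is feasible; for large baskets the exact traveling-salesman problem blows up combinatorially, which is presumably why studies like this rely on heuristics to estimate the optimal path.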
The high-tech research confirms what most of us could have guessed from our own experience, though it gives more precise estimates: Shoppers “spend only 20% to 30% of their time actually acquiring merchandise.”
O.K. People are not going from peanut butter to milk to ground chuck with tunnel-vision efficiency. (There’s a mid-Atlantic chain called ShopRite, and when I first saw that name I thought: exactly — shopping as ritual. And as Durkheim reminded us long ago, rituals are not about rationality and efficiency.)
But if people spend only 30% of their time actually “shopping,” what are they doing the other 70% of the time?
Most likely, they’re looking. As they’d probably tell you, they’re looking at all the stuff — that’s why companies spend so much on packaging and why they compete so desperately for eye-level locations on the shelves. But my guess is that shoppers also spend a fair amount of time looking at the other shoppers. And that is something they would probably not tell you.
I don’t mean that people would deliberately lie about what they are doing. It’s just that they are not aware of it, and more important, nobody thinks of people-watching as part of shopping. If you asked me what I did at the ShopRite, it just wouldn’t occur to me to say that I saw a lot of different people.
If only there were an unobtrusive Lojack that could monitor not just where shoppers are pushing their carts but what they are looking at. Failing that, we might see if shoppers traveled more efficiently when the store was relatively empty and there was nobody to look at. Or maybe some clever students who still need an idea for a research project could figure out some other way.
Posted by Jay Livingston
Sesame Street has a segment intended to teach kids to think in categories. The screen shows four objects, and the song goes:
One of these things just doesn’t belong
Can you tell me which one is not like the others
By the time I finish my song?
I thought of this song when I saw this item, chock full of boldface names, from Page Six (the gossip page) in the New York Post:
Lindsay Lohan . . .at the GQ Men of the Year dinner, . . . joining the likes of Leonardo DiCaprio, Al Gore, Jay-Z . . . and Magic Johnson - she “flipped out” upon seeing Jessica Biel . . . there with her assistant.
Why is Al Gore here among the entertainers and superstars?
Over a century ago, Vilfredo Pareto wrote about the “circulation of elites,” and a half-century ago C. Wright Mills, in The Power Elite, wrote about the connections between people at the top in the worlds of Business, Military, and Government. Generals retire to work for military contractors; politicians become lobbyists for corporations; business biggies become politicians (Bloomberg, Corzine).
Now celebrities are in the loop, and can circulate from one realm to the other. Magic Johnson is a “motivational speaker” for businesses. And Al Gore, a man we might kindly call charismatically challenged, sits at the GQ table with Jay-Z.
Of course, Al Gore did make a movie, produced by Larry David’s wife. But mostly Gore is remembered for losing an election despite getting the most votes.
But today, he’s in boldface with the stars on Page Six.
Posted by Jay Livingston
“Whatever happened to the war on drugs?” a friend asked, “Did we win?”
We were having lunch at a Greek restaurant a few weeks ago, and she was being facetious. This is someone who knows a lot about crime, law-enforcement, and sociology. She also knows that drugs haven’t exactly disappeared from American society. Her point was that without any big decrease in actual drug use, the “war on drugs,” so important for so long, is now something we rarely hear about.
From the perspective of 2006, that war now looks more and more like part of a “moral panic,” a change in public consciousness when real events, like the crack boom of the late 1980s, evoke an apparently hysterical response. The moral panic and the officially declared war that went with it saddled the US with policies that seemed more designed to make us feel that we were taking a strong stand against evil than to reduce drug use. These policies were also very expensive and wasteful. After all, when you are conducting a morality-based war against Evil, you cannot compromise. You cannot drive out the devil with treatment, only with harsh punishment, and damn the cost. At least that seemed to be the logic behind much of the legislation and enforcement. The war on drugs also fell most heavily on minorities, and it shrank the usual protections that the Bill of Rights afforded to all citizens.
When 9/11 gave us a new enemy, a new source of Evil, war on drugs just couldn’t compete. The moral troops of our collective consciousness had to be moved to a new front.
It’s not that actual drug enforcement has faded. Thanks to laws passed in those decades, we’re still locking up inordinate numbers of people. But the urgency, the moral panic, seems to have subsided.
I remembered this question — whatever happened to the war on drugs?— when I was watching “House” on TV this week. Besides the usual medical problems that come up each week, “House” now has a continuing plot thread that involves a drug-fighting cop who does everything in his power to convict drug-law violators. The interesting thing is that he’s the bad guy. His zeal is portrayed as harmful, and he himself has no redeeming qualities (at least not yet). Dr. House, the drug violator, and his fellow doctors who try to shield him are portrayed as virtuous victims of the cop’s doggedness. Would a major network have aired such a story in the 1980s or 90s?
Over a century ago, Durkheim maintained that a society needs a certain level of deviance. By reacting against deviance, we strengthen social solidarity. So when the level of deviance falls, we will either expand our definition of what’s deviant, or we will find a new threat that requires us to reinforce our moral boundaries.
It seems unlikely that the moral panic about drugs, only recently subsided, can be quickly revived. The war on terror — at least as it has been carried out in Iraq— now looms as a very costly mistake. If there are no new terrorist attacks, the US may need to find a new moral threat on the home front. My friend predicts that it will be gangs. (Keep tuned to your local media and politicians to see if she’s right.)
November 29, 2006
Posted by Jay Livingston
Anthony Giddens is a prolific British sociologist (you might have come across him in your sociological theory course). On Sunday, the Guardian, a leftish British newspaper, published a “call to arms” by Giddens. (It’s interesting in itself that a major newspaper would publish a 1000-word piece about sociology. I wonder if any of the major US papers would do so.) Sociology is the challenger in this bout. The champion is “market fundamentalism,” which has worn the crown for the last quarter-century.
Giddens begins by calling out the troops.
All you sociologists out there! All you ex-students of sociology! All of you (if there are such people) who are simply interested in sociology and its future!
He sets up the challenge.
Why isn't sociology again right at the forefront of intellectual life and public debate? In universities, sociology used to be much more popular than psychology; today it is the other way around. [Giddens has some answers to his own question.]
And he predicts a victory.
The world is moving in a propitious way for a recovery of the sociological imagination. Market fundamentalism is disappearing from the scene.
. . . the secular, American descendant of the European Catholic Easter procession in which all the icons and saints’ bones are removed from the churches and carried ceremonially around the town. The baseball hero, the gaseous, rubbery Mickey Mouse, the Mayflower pilgrims were the totems and treasure relics of a culture, as the New Orleans jazz and Sousa marches were its solemn music.
from Jonathan Raban, Hunting Mister Heartbreak: A Discovery of America, 1991.
Had a serious-minded Martian been standing at the window, he would have learned a good deal by studying the parade’s idyllic version of American history. [guns, refugees, rebels]. . . The imaginative life of children was honored to a degree unknown on Mars— which was, perhaps, why matters of fact and matters of fiction were so confusingly jumbled up here, with Santa Claus and George Washington and Superman and Abraham Lincoln all stirred into the same pot.
He would be struck by the extraordinarily mythopoeic character of life in this strange country. People made myths and lived by them with an ease and fertility that would have been the envy of any tribe of Pacific islanders. Sometimes there were big myths that took possession of the whole society, sometimes little ones, casually manufactured, then trusted absolutely.
No student has ever chosen the Macy’s parade. I wonder why not. Raban, who is from England, not Mars, senses the religious aura of the parade with its many gods. Had there been a Macy’s in ancient Greece, the parade would no doubt have had balloon representations of Demeter (goddess of the harvest), Poseidon (god of the sea— or would he have a float?), Aphrodite (goddess of beauty), Hermes (god of silk scarves), and of course in the US, Hebe (goddess of youth). And all the rest. We’re not Athenians. Instead, we throng the streets for icons like Snoopy and Spiderman, Pikachu, Bullwinkle, and Spongebob, but the idea is the same. They are our totems, our gods.
I imagine Durkheim on Central Park West, watching the children and grown-ups who have come together here to look up at these huge embodiments of our cultural ideals. Durkheim feels a frisson, a shiver of recognition. He sees the newest addition coming along. The Energizer Bunny. What better way to symbolize collective effervescence, the binding social energy that ritual generates?
Posted by Jay Livingston
Google Trends has information about the number of Google searches by time and place. If you go to http://www.google.com/trends and enter "turkey," you'll see a graph that looks like this (I've limited it to the US).
Not too surprising. The second line, below the search line, is the trend line for news stories mentioning the word. Of course, you can't be sure whether the newswriters and googlers were curious about recipes or about vacations in Istanbul.
I plugged in "Durkheim" and got this.
Not much interest in Durkheim during the summer. But comes the new semester, I guess I'm not the only one starting with social facts and suicide. Interesting that the sharp differences of 2004 and 2005 aren't repeated in 2006. Could it mean that sociology enrollments are down? Or that more students took sociology in the summer?
(Or it could be an artifact of sampling. Google does not use the total of all searches but selects a sample, though they won't tell you how they arrive at that sample.)
The results also show the top cities in the search— those with the highest percentage of searches for your keyword relative to the total of all searches from that city. Cambridge, MA came in first for Durkheim. But the city with the highest percentage of searches on "sociology" is Piscataway. Somebody help me out here. What's up with Piscataway and sociology?
Both Borat (i.e., Sacha Baron Cohen in character) and Milgram lie about who the people involved really are and about what’s really happening. Borat is not really a Eurasian journalist making a documentary; in Milgram’s experiment, the “learner” supposedly receiving the shocks is not really a volunteer, and the experiment isn’t about learning. Both Borat and Milgram lie to their subjects about the true purpose of the project. It is not about the things taking place around the subject (a dinner party, a comedy coaching session, or a learning experiment); what it’s really about, and what the camera is zooming in on, is the reactions of the subjects themselves.
The two projects are similar not just in their ethically questionable methods but in their results. What both movies show is the power of social norms, the unwritten rules of everyday politeness. Borat and Milgram can get away with their outrageous questions, requests, or behavior because people are just too polite to tell them that they are way out of line.
The rules of everyday politeness also require that both people in an interaction must agree as to when it ends. (Try breaking off a conversation with someone who wants to continue. It’s not so easy.) So once Borat’s victims have committed themselves to the interaction, which always starts out being normal enough, they can’t figure out how to end it even when Borat’s behavior goes far beyond the bounds of good taste. The humor, like that of the old TV show “Candid Camera,” depends on people continuing to try to be polite even when circumstances would seem to call for confrontation and even when that politeness makes them increasingly uncomfortable.
The same goes for Milgram’s subjects. The experiment starts off quite normally— no howls of pain for the low-voltage shocks. The subjects become committed to their place in the situation, and then the norms about breaking up the interaction kick in. One subject shown at length in the film says to the experimenter, “I don’t mean to be rude, Sir, but . . . .” To us watching the film, it seems ridiculous that he’s apparently less affected by the extreme pain, injury or death of someone in the next room than he is by the possibility of being rude to the experimenter. But that’s because we don’t realize the power of the norms in the immediate situation.
The other unwritten rule that enables Milgram and Borat (and Ali G and “Candid Camera”) to take things so far is this: don’t question what someone says he is, at least not without very, very strong information to the contrary. (This insight is the basis for one of the classic books in sociology, The Presentation of Self in Everyday Life, by Erving Goffman.)
Borat presents himself as a very naive Eurasian journalist trying to learn about America. To act towards him as though he were an uncouth fool — even though he’s behaving like one — would be an insult. Milgram says in effect that this is a learning experiment. To discontinue the experiment would be saying in effect, “You’re not really the psychology researcher you say you are. You don’t know how to run an experiment.” Yes, some people discontinue the experiment, and no doubt some people don’t go along with Borat (though of course they get edited out of the film). But even those brave people must still overcome the pull of very strong norms.
As in other scams, the set-up is crucial. For the game to work, Borat (like Ali G and Milgram) must first get the other person to commit himself to the interaction and to accept Borat for what he claims to be. For the scammer, going in cold may be risky, as Sacha Baron Cohen found out two weeks ago. After doing Saturday Night Live as Borat, he went out in New York still in character with fellow Brit Hugh Laurie. They were on the street in the Village when Cohen, with no set-up, approached a stranger and reportedly said, “I like your clothings. Are nice! Please, may I buying? I want to have sex with it...your clothings...very much.”
Posted by Jay Livingston
Thinking back on the Democratic sweep of a week ago, I now realize that I should have seen it coming last year during football season. It was the year of the Steelers.
But the link between the Steelers and the election may be real. It wasn’t that the Steelers won the Superbowl; it was that somehow along the way, they had become “America’s Team.”
That title used to belong to the Dallas Cowboys. I imagine that some PR person for the Cowboys dreamed up the phrase, but it was true in a way. The Cowboys weren’t really America’s team so much as they were what we might now call the Red States’ Team. Through a wide swath of the South and West, people rooted for the Cowboys, mostly because football fans had no other good pro team to root for, maybe no team at all.
Today, fans in places like Arizona, North Carolina, and Tennessee have local teams. Not so in the 1960s and 70s. And the teams that did make their home in the South and West were in the AFC. On Sunday, NBC would broadcast the local AFC team (Broncos, Dolphins). But the CBS affiliate would be broadcasting the NFC, and usually it was the Cowboys.
So the people who listened to Country & Western on the radio watched the Cowboys on TV. Rooting for Dallas was easy in those days. The Cowboys were good. They went to the Superbowl four times in the 1970s, winning twice. Beyond the won-lost record, they had an image, a brand. The Cowboys represented the individualist strain in American culture. The Cowboys were Texas, the land of big thinking, big opportunity, and every man for himself. They were rugged, independent, a football version of the Marlboro man. And just as Americans bought Marlboro cigarettes, America also bought a lot of Cowboys jerseys and other paraphernalia. For a while, the Cowboys alone accounted for 30% of all NFL merchandise sales.
As the red states got more NFL teams, the Cowboys’ position as “America’s Team” started to fade. There were teams closer to home to root for, and the Cowboys’ performance in the past few years hasn’t exactly been the kind that makes distant fans remain loyal.
The Steeler brand is something else entirely. If the Cowboys were the team of the Sun Belt, the Steelers are the team of the Rust Belt. Pittsburgh produces very little steel these days. The economy of the region is dominated by medical complexes. That and unemployment. But the team is still called the Steelers, not the Medics, and it still represents the values of an industrial past. Steelworkers are working-class wage earners bringing home a paycheck. Their families depend on the New Deal kind of government they pay taxes to or the union they are part of to help protect them from the uncertainties of life — sudden turns of fortune like layoffs at work and serious illness at home. These people stress the public and collective over the private and individual. Remember, the Steelers’ powerful running back Jerome Bettis was not called the SUV or the Pick-up Truck; he was public transportation, The Bus.
Is there a parallel in the election? We all know that people were voting against Republican policies in Iraq and against Republican sleaze. But Democrats weren’t just non-Republicans. Many of the Democrats who won ran as economic populists. They support policies that benefit ordinary people and perhaps cut into the profits of corporations. One of the first things the new Democratic congress will do is pass an increase in the minimum wage. They will also try to change the new Medicare law to allow the Government to negotiate with drug companies to get lower prices, something forbidden under the Republicans’ Medicare bill.
In 2005, the Steelers became America’s team. They won the Superbowl. But more tellingly, Americans, voting with their wallets, bought more Steeler merchandise than that of any other team. Nine months later, Americans voted for a congressional majority that could easily be wearing black and gold under their red, white, and blue.
(An ironic footnote: The election did feature one actual Steeler. Lynn Swann, the great receiver for the great Steeler teams of the seventies, ran for governor of Pennsylvania as a Republican. He lost badly.)
I voted. I live in New York, where none of the races was going to be close. I knew my vote didn’t mean a thing. But I voted. I wonder why. Not out of civic duty or a belief that my vote will influence policy or any of those other reasons you learn in school.
Why do I vote, I asked myself. Then I remembered that “why” is the wrong question. Start with the other “reporter’s” questions – who, what, where, when, how. Get good answers to those, and you’ll be much closer to answering why.
What do I do when I vote; where and how do I do it?
I live in New York City. In my precinct, you vote in an old building, in a drab room with dull lighting and a coffee-stained linoleum floor. Usually, people are waiting in line, most of them people you’ve never seen, but you chat and joke with them. The voting booths and machines are the old kind with a curtain — an old piece of canvas that, if you thought about it or looked at it closely, you wouldn’t want to spend too much time touching. Inside the booth is the machine. You push the big lever to the right, then you flip down the little levers beside the candidates’ names, then you pull the lever back to the left, and that’s it.
Every time I do it, I think – and sometimes I make this comment to the person next to me in line– that these are probably the same machines people voted on to elect LaGuardia mayor in 1933.
As I was thinking about this now, I realized that I felt good about this whole scene. I liked it. I liked the dirty floors, I liked standing there with these strangers. I liked it because even though we were strangers, even though we might be voting for different people (not really all that likely in my precinct), we were all there together as New Yorkers. I liked thinking that I was connected with New Yorkers and New York elections going back to Fiorello (who, by the way, was dead long before I ever set foot in the city). It’s the sense of being part of something that I want to be part of.
I was talking about this with a friend, and he had the same reaction. He said that when he votes, it always takes him back to the first time he voted. It was the Oregon Democratic primary in 1968. He voted for Bobby Kennedy against Hubert Humphrey and Eugene McCarthy. Kennedy didn’t win in Oregon, McCarthy did. But a week later, Kennedy went to California, and on the night that he won that primary, he was assassinated. My friend’s point is that his vote then connected him with an event of historical importance. And now when he votes, he still feels he’s connecting to history.
I think that’s why I vote and why my students don’t. Older people feel more of a connection to history. I know I feel that connection much more now than when I was in my twenties.
But the larger point is that voting is not a rational act, or at least not completely and not always. It’s not a logical means towards some specific goal (like putting the people you like in office). It’s more about how you feel. If you don’t feel connected to the dominant institutions and the history of the country, come election day there will be something else you feel emotionally closer to, and you’ll probably be “too busy” to vote.
Lazarsfeld thought you could make a pretty good prediction about how someone would vote if you knew about certain demographic markers — income, occupation, religion, urban or small town, etc. Often, these characteristics tended to cluster, especially in the 1940s with the dominance of the Roosevelt coalition. But what about the person who belonged to groups that pulled in different ways — the small-town Protestant (Republican pressure) who had a blue-collar union job (Democratic pressure)? Lazarsfeld’s answer was that these voters tend to make up their minds later in the campaign, and sometimes they resolve their conflict by just not voting at all.
Obviously, the Republican leadership is worried about these pressures and about the response that Lazarsfeld would predict — staying home on election day. From the top of the party on down, GOP professionals are trying to make sure that their traditional voters come out. It’s not about converting Democrats or persuading the Independents and undecided. It’s about making sure that the hard core keep the faith, that they do not give in to cross pressures and just avoid the voting booth.
November 5, 2006
Posted by Jay Livingston
Saddam Hussein was sentenced to death today. No doubt, he will be under close watch to make sure that he does not kill himself.
It’s called “cheating the executioner.” It's a phrase you hear when a murderer shoots himself just as the cops are closing in on him. Or when a prisoner on death row dies of some disease while his case is still pending. It cropped up in the news two weeks ago when a death-row prisoner in Texas, Michael Johnson, committed suicide the day before he was to be executed. He cut his own throat and used the blood to write “I didn’t do it” on his cell wall.
The headline in the Washington Times (the online version at least) was “Death Row Inmate Cheats Executioner,” and some other papers had similar wording. That headline, along with the reported detail that death-row inmates are checked on every fifteen minutes, tells us a lot about the real reasons for the death penalty, and they are not the ones usually given.
One rationale for the death penalty is that it saves innocent lives. It deters other potential murderers, and it “incapacitates” the executed murderer so that he can’t kill again. In reality, there’s not a lot of convincing data to support the idea that executions have any impact on murder rates. But most death-penalty supporters base their opinions not on the practical effects of executions but on principles of justice and morality: a person who commits a horrible crime does not “deserve” to live. It’s a matter of right and wrong, and regardless of the impact on future murders, it would be wrong to let him live.
If the criminal’s death were the central issue, as it is in these three rationales, it wouldn’t matter how he died; he would still have been removed from society. So we are not looking at a simple rational process. The irrationality is clear in the standard death-row procedure of the 15-minute suicide watch. If the guards had caught Mr. Johnson in time, the best medical help would have been called and no effort spared to save his life. Then, weeks or months later, when he had recovered, the state would kill him.
Why does the state go to such extraordinary lengths — checking every fifteen minutes — to make sure that some condemned man doesn’t pull a fast one and kill himself? Why, when death comes by suicide or cancer rather than execution, do some people feel “cheated”? What were they cheated of?
The answer is clear. The death-row suicide deprives them of only one thing: the chance to inflict the punishment themselves via their representative the executioner. The importance of the execution is not the effect it has on the criminal — that effect is the same regardless of the cause of death — but its effect on society, on those who carry out the execution. It allows them to dramatize that they and their morality are in control. It draws a clear line, with “us” on the good side and the criminal on the other.
This is the logic behind President Bush’s characterizing the 9/11 attackers as “cowards.” It was not only that they killed unsuspecting civilians. They also cheated us of the privilege of trying and executing them, of showing them who was boss and who was right. The trial, sentence, and execution would have drawn that line between us and them, between good and evil, a line which the president and many other Americans desperately wanted to draw. No doubt, many Iraqis— and Americans— will feel the same way about Saddam.
By executing the criminal, the “good” people confirm their own virtue. Any other form of death cheats them of this occasion to feel good about themselves and secure in their morality.