Michael Anderson and Maximilian Auffhammer from UC-Berkeley point out in NBER Working Paper #17170, released last June: "The average weight of light vehicles sold in the United States has fluctuated substantially over the past 35 years. From 1975 to 1980, average weight dropped almost 1,000 pounds (from 4,060 pounds to 3,228 pounds), likely in response to rising gasoline prices and the passage of the Corporate Average Fuel Economy (CAFE) standard. As gasoline prices fell in the late-1980s, however, average vehicle weight began to rise, and by 2005 it had attained 1975 levels ..."
The trend is clearly visible in one of the annual reports from the EPA: Light-Duty Automotive Technology, Carbon Dioxide Emissions, and Fuel Economy Trends: 1975 Through 2010.
Notice that as the weight of cars increased over most of the last 30 years, horsepower and acceleration also rose--although all three of these trends have been disrupted by higher gasoline prices in the last couple of years.
The trend toward heavier cars has two tradeoffs I'll mention here: 1) improvements in car technology have all been going to horsepower and acceleration, instead of improved miles-per-gallon; and 2) heavier cars are more likely to lead to deaths when accidents occur.
Christopher R. Knittel of MIT has examined what could have happened if technological progress in the auto industry had been focused on improved miles-per-gallon. He writes: "This paper estimates the technological progress that has occurred since 1980 in the automobile industry and the trade-offs faced when choosing between fuel economy, weight, and engine power characteristics. The results suggest that if weight, horsepower, and torque were held at their 1980 levels, fuel economy could have increased by nearly 60 percent from 1980 to 2006." (Knittel's paper is called "Automobiles on Steroids: Product Attribute Trade-Offs and Technological Progress in the Automobile Sector." The paper is forthcoming in the American Economic Review; in the meantime, a PDF version is available at Knittel's website here.) Here's a figure from the EPA report showing the lack of progress in miles-per-gallon over most of the last three decades.
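As a quick back-of-the-envelope check on what that headline number implies, the short calculation below--my own arithmetic, not Knittel's estimation method--converts a roughly 60 percent cumulative gain over 1980 to 2006 into an implied average annual rate of technological progress:

```python
# Back-of-the-envelope illustration (my arithmetic, not Knittel's method):
# if holding weight, horsepower, and torque at 1980 levels would have let
# fuel economy rise by roughly 60 percent between 1980 and 2006, what
# average annual rate of improvement does that imply?

total_gain = 0.60      # ~60% cumulative improvement, from the passage quoted above
years = 2006 - 1980    # 26 years

annual_rate = (1 + total_gain) ** (1 / years) - 1
print(f"Implied annual rate of fuel-economy progress: {annual_rate:.2%}")
# Roughly 1.8 percent per year, compounded over 26 years.
```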
Another problem with heavier cars is that they are deadlier in accidents. The danger posed by heavier cars is the focus of the Anderson and Auffhammer working paper, which is titled "Pounds that Kill: The External Costs of Vehicle Weight." The working paper is only available on-line by subscription, but a short summary written by Lester Picker is available here.
The authors write: "We present robust evidence that increasing striking vehicle weight by 1,000 pounds increases the probability of a fatality in the struck vehicle by 40% to 50%. This finding is unchanged across different specifications, estimation methods, and different subsets of the sample. We show that there are also significant impacts on serious injuries." They find that when heavier vehicles collide with each other, the fatality rate is not higher. But when heavier vehicles collide with lighter vehicles, motorcycles, or pedestrians, the death rate is higher. They estimate the cost of increased fatalities alone--that is, not counting the costs of more serious injuries--at $93 billion per year.
Many U.S. consumers have clearly demonstrated their preference over the last three decades for heavier cars with more horsepower and acceleration. For at least some consumers, a larger car is a choice made in self-defense, given the dangers posed by a collision with all the other large cars on the roads. But while owners of large cars are engaging in their version of an arms race, the rest of us face greater risks of injury and death in an accident. In Knittel's paper, he points out that the Obama administration has been seeking to raise the average miles-per-gallon standard to 35.5 mpg by 2016. Knittel argues that the only way to accomplish this goal will be for the average car to get lighter--which should also save some lives.
Martin Shubik's Dollar Auction Game
Martin Shubik's endowed chair at Yale University is the Seymour Knox Professor Emeritus of Mathematical Institutional Economics. Most of the time, "mathematical" and "institutional" economists are separate people. But Shubik's career has combined both deep mathematical insights about strategic behavior and also applications to financial, corporate, defense, and other institutions. The opening pages of the just-arrived October 2011 issue of the American Economic Review offer a short tribute to Shubik, who was named a Distinguished Fellow of the American Economic Association in 2010. A list of past Distinguished Fellows is here; the short description of Shubik's work from the AER is here.
One of my favorites of Shubik's papers, in part because it is so accessible that it can readily be used with introductory students and in part because it gives a sense of how his mind works, is the Dollar Auction Game. The Dollar Auction Game is in some ways similar to the better-known prisoner's dilemma, because it illustrates how two parties each pursuing their own self-interest can end up with an outcome that makes both of them worse off. The first published discussion of the game is in the Journal of Conflict Resolution, March 1971, pp. 109-111, which is not freely available on-line but can be found on JSTOR.
The rules of the Dollar Auction are deceptively simple: "The auctioneer auctions off a dollar bill to the highest bidder, with the understanding that both the highest bidder and the second highest bidder will pay. For example, if A has bid 10 cents and B has bid 25 cents, [the auctioneer will] pay a dollar to B, and A will be out 10 cents."
Now consider how the game unfolds. Imagine that two players are willing to bid small amounts, enticed by the prospect of the reward. Then the logic of the game takes hold. Say that player A has bid 20 cents and player B has bid 25 cents. Player A reasons: "If I quit now, I lose 20 cents. But if I bid 30 cents, I have a chance to win the $1 and thus gain 70 cents." So Player A bids more. But the same logic applies for Player B: lose what was already bid, or bid more.
But if both players continue to follow this logic, they find their bids steadily climbing past 50 cents apiece: in other words, the sum of their bids exceeds the dollar for which they are bidding. They approach bidding $1 each--but even reaching this level doesn't halt the logic of the game. Say that A has bid 95 cents, and B has bid $1. Player A reasons: "If I quit now, I lose 95 cents. But if I bid $1.05, and win the dollar, I lose only 5 cents." So Player A bids more than a dollar, and Player B, driven by the same logic, bids higher as well.
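The escalation logic can be captured in a few lines of code. This is a minimal sketch of the myopic reasoning just described, not Shubik's formal analysis; the 5-cent bidding increment and the "pain limit" at which a player finally refuses to go higher are assumptions of my own:

```python
# Minimal sketch of the Dollar Auction's escalation logic (my illustration,
# not Shubik's analysis). Both the top bid and the second bid are forfeited,
# so each player keeps raising whenever the loss from winning at a higher bid
# looks smaller than the loss from quitting and forfeiting the current bid.

PRIZE = 100        # the dollar bill, in cents
STEP = 5           # minimum raise, in cents (assumed)
PAIN_LIMIT = 200   # assumed bid at which a player finally refuses to go higher

bids = {"A": 0, "B": 0}
current, other = "A", "B"
high_bid = 0

while True:
    next_bid = high_bid + STEP
    loss_if_quit = bids[current]        # forfeit what I have already bid
    loss_if_win = next_bid - PRIZE      # negative means a net gain
    if loss_if_win < loss_if_quit and next_bid <= PAIN_LIMIT:
        bids[current] = next_bid
        high_bid = next_bid
        print(f"{current} raises to {next_bid} cents")
        current, other = other, current
    else:
        print(f"{current} quits; {other} 'wins' the dollar at {high_bid} cents")
        print(f"The auctioneer collects {bids['A'] + bids['B']} cents for a $1 prize")
        break
```

With these assumed parameters, the two final bids sum to just under four dollars for a one-dollar prize, which is in the range Shubik reports for the parlor-game version in the passage quoted below.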
Apparently, Shubik and his colleagues liked to play these games at parties. As he writes in the 1971 article: "In playing this game, a large crowd is desirable. Furthermore, experience has indicated that the best time is during a party when spirits are high and the propensity to calculate does not settle in until after at least two bids have been made. ... Once two bids have been obtained from the crowd, the paradox of escalation is real. Experience with the game has shown that it is possible to "sell" a dollar bill for considerably more than a dollar. A total of payments between three and five dollars is not uncommon. ... This simple game is a paradigm for escalation. Once the contest has been joined, the odds are that the end will be a disaster to both. When this is played as a parlor game, this usually happens."
With any game of this sort, two sorts of questions arise: Under what conditions can the players sidestep the escalation? And does this simple game address real-world phenomena?
Of course, the two players can avoid the escalation if they communicate with each other, agree not to increase their bids, and perhaps also agree to split the gains. It may be necessary in this situation to enforce this agreement with a threat: for example, I will stop bidding and you will also stop bidding, but if you bid again, I will immediately jump my bid to $1, and force us both to take losses. Or unless you stop bidding, I vow to bid forever, no matter the losses. Of course, whether these threats are credible and believable would be an issue.
Another exit strategy from the game is for one player to see where the game is headed, and to stop bidding. Notice that by bailing out of the game, the player who stops is a "loser" in a relative sense: that is, the other player gets the dollar. But by bailing out sooner, the player who stops actually prevents both players from further escalation and ending up as even bigger losers. A more extreme version of this strategy is that a player may refuse to follow the rules, perhaps declaring the game to be "unfair," and refuse to be bound by paying that player's previous bid. To avoid this possibility, perhaps the bids would need to be handed to the auctioneer as the bidding proceeds.
Again, the bottom line of the Dollar Auction game is to illustrate a simple setting in which self-interested behavior leads to losses for both players--in this case, to escalating losses until one of the players decides that enough is enough.
The Dollar Auction Game is simple enough that it doesn't fit perfectly with any real-world situation. However, John Kay argued in a Financial Times op-ed in July that many of our decisions in the last few years about bailing out financial institutions, and now countries like Greece, have a "dollar auction" aspect to them, in the sense that governments keep thinking that if they just make one more bid, they will have gains--or at least they will reduce the size of their losses. What the governments don't seem to realize is that the other parties in the economy will keep making another bid as well, forcing the government to make yet another bid.
Perhaps the deeper wisdom here is that when entering into a competitive situation, it's useful to look ahead and have a clear vision of what the end-game would look like. If you find yourself in a situation of escalation, by all means try to negotiate for a way in which both sides can combine a strategy of lower bids and bid-curdling threats--and then end up sharing the prize. But if negotiation seems impossible, and appealing to the rationality of the other player doesn't work, it is better to bail out of the bidding rather than continue escalating with an irrational player. Better to take the immediate loss, and to let the less-rational player win the dollar, than to build up to larger losses. As John Kay writes: "In the dollar bill auction, one party eventually scores a pyrrhic victory and takes possession of the dollar bill. Both parties lose, but the smaller loser is the person who sticks out longest. That is not usually the rational player."
Grade Inflation and Choice of Major
Like so many other bad habits, grade inflation is lots of fun until someone gets hurt. Students are happy with higher grades. Faculty are happy not quarreling with students about grades.
When I refer to someone getting hurt by grade inflation, I'm not talking about the sanctity of the academic grading process, which is a mildly farcical concept to begin with and at any rate too abstract for me. I'm also not referring to how it gets harder for law and business schools to sort out applicants when so many students have high grades. In the great list of social problems, the difficulties of law and B-school admissions offices don't rank very high.
To me, the real and practical problem of grade inflation is that it causes students to alter their choices, away from fields with tougher grading, like the sciences and economics, and toward fields with easier grading.
A couple of recent high-profile newspaper stories have highlighted that college and university courses in the "STEM" areas of science, technology, engineering and mathematics tend to have lower average grades than courses in humanities, which is one factor that discourages students from pursuing those fields. Here's an overview of those stories, and then some connections to more academic treatments of the topic from my own Journal of Economic Perspectives.
A New York Times story on November 4, by Christopher Drew, was titled, "Why Science Majors Change Their Minds (It’s Just So Darn Hard)." Drew writes: "Studies have found that roughly 40 percent of students planning engineering and science majors end up switching to other subjects or failing to get any degree. That increases to as much as 60 percent when pre-medical students, who typically have the strongest SAT scores and high school science preparation, are included, according to new data from the University of California at Los Angeles. That is twice the combined attrition rate of all other majors."
Part of the reason is that most of the STEM fields start off with a couple of years of tough, dry, abstract courses, for which many students are not academically or emotionally prepared. Another reason is that the grading in these courses is tougher than in non-STEM fields. Drew describes some of the evidence: "After studying nearly a decade of transcripts at one college, Kevin Rask, then a professor at Wake Forest University, concluded last year that the grades in the introductory math and science classes were among the lowest on campus. The chemistry department gave the lowest grades over all, averaging 2.78 out of 4, followed by mathematics at 2.90. Education, language and English courses had the highest averages, ranging from 3.33 to 3.36. Ben Ost, a doctoral student at Cornell, found in a similar study that STEM students are both “pulled away” by high grades in their courses in other fields and “pushed out” by lower grades in their majors."
(For those who want the underlying research, the Rask paper is available here, and the Ost paper is available here.)
On November 9, the Wall Street Journal had a story called "Generation Jobless: Students Pick Easier Majors Despite Less Pay," written by Joe Light and Rachel Emma Silverman.
"Although the number of college graduates increased about 29% between 2001 and 2009, the number graduating with engineering degrees only increased 19%, according to the most recent statistics from the U.S. Dept. of Education. The number with computer and information-sciences degrees decreased 14%." Again, part of the problem is insufficient preparation before college for the STEM classes, and part is the discouragement of getting lower grades than those in non-STEM fields. Also, even with lower grades, the STEM majors are more work: "In a recent study, sociologists Richard Arum of New York University and Josipa Roksa of the University of Virginia found that the average U.S. student in their sample spent only about 12 to 13 hours a week studying, about half the time spent by students in 1960. They found that math and science—though not engineering—students study on average about three hours more per week than their non-science-major counterparts."
(For those who want to go for original sources, Arum and Roksa discuss the several thousand students that they surveyed over several years in their 2011 book Academically Adrift: Limited Learning on College Campuses. When they make comparisons back to how much students studied in the 1960s, they are drawing on work by Philip Babcock and Mindy Marks. For a readable overview of that work, see their August 2010 essay on "Leisure College, USA: The Decline in Student Study Time," written as an Education Policy brief for the American Enterprise Institute. For the technical academic version of their work, see their essay in the May 2011 Review of Economics and Statistics (Vol. 93, No. 2, Pages 468-478), "The Falling Time Cost of College: Evidence from Half a Century of Time Use Data.")
As noted, there are lots of reasons why students don't persevere in STEM courses: inadequate preparation at the high school level, students who have unrealistic expectations or don't want to commit the time to studying, or that the courses are just hard. It's of course possible to address these issues, but difficult. However, if one of the issues discouraging students from taking STEM courses is that grade inflation is happening faster in the humanities, then surely, this cause at least is fixable? In my own Journal of Economic Perspectives, which is freely available from the current issue going back to the late 1990s courtesy of the American Economic Association, several authors have taken a stab at quantifying the differences in grades across majors and what difference it makes to course choice.
The first such paper we published was back in the Winter 1991 issue. It was by Richard Sabot and John Wakeman-Linn, and called "Grade Inflation and Course Choice." It's too far back to be freely available on-line, but it's available through JSTOR. The complaints in that article sound quite familiar. They write:
"The number of students graduating from American colleges and universities who had majored in the sciences declined from 1970-71 to1984-85, both as a proportion of the steadily growing total and in
absolute terms. ... Students make their course choices in response to a powerful set of incentives: grades. These incentives have been systematically distorted by the grade inflation of the past 25 years. As a consequence of inflation, many universities have split into high- and low-grading departments. Economics, along with Chemistry and Math, tends to be low-grading. Art, English, Philosophy, Psychology, and Political Science tend to be high-grading." They present more evidence on grade inflation and course choice from Amherst College, Duke University, Hamilton College, Haverford College, Pomona College, the University
of Michigan, the University of North Carolina and the University of Wisconsin, and more detailed analysis from their own Williams College. As they write: "This sample is admittedly small, but was selected so as to include private and state schools, large universities and small colleges, and Eastern, Southern, Midwestern and Western schools."
Based on statistical analysis from Williams College, where they had access to more detailed data, they write: "Our simulation indicated that if Economics 101 grades were distributed as they are in English 101, the number of students taking one or more courses beyond the introductory course in Economics would increase by 11.9 percent. Conversely, if grades in English 101 were distributed as they are in Economics 101, the simulation indicated that the number of students taking one or more courses beyond the introductory course in English would decline by 14.4 percent. The results of applying this method to the Math department, which had the lowest mean grade and the highest dispersion of grades, are more striking. If the Math department adopted in its introductory course the English 101 grading distribution, our simulation indicated an 80.2 percent increase in the number of students taking at least one additional Math course! Alternatively, if the English department adopted the Math grade distribution, there would be a decline of 47 percent in the number of students taking one or more courses beyond the introductory course in English."
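To see the mechanism behind numbers like these, here is a toy simulation--not Sabot and Wakeman-Linn's model. It assumes that a student takes a further course in a field only if the introductory grade clears a personal cutoff, and the two grade distributions are invented to loosely echo the math-versus-English averages quoted earlier:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical introductory-grade distributions on a 4.0 scale, loosely echoing
# the math-versus-English averages quoted earlier; these are NOT the Williams
# College data used in the paper.
grades_math = np.clip(rng.normal(2.8, 0.8, N), 0, 4)
grades_english = np.clip(rng.normal(3.3, 0.5, N), 0, 4)

def continuation_rate(grades, cutoff_mean=3.0, cutoff_sd=0.5):
    """Toy rule: a student takes another course in the field only if the
    introductory grade clears a personal cutoff drawn at random."""
    cutoffs = rng.normal(cutoff_mean, cutoff_sd, grades.size)
    return (grades > cutoffs).mean()

with_math_grading = continuation_rate(grades_math)
with_english_grading = continuation_rate(grades_english)
print(f"Continuation under math-style grading:    {with_math_grading:.1%}")
print(f"Continuation under English-style grading: {with_english_grading:.1%}")
print(f"Relative increase from easier grading:    {with_english_grading / with_math_grading - 1:.0%}")
```

Even with made-up parameters, shifting the intro grade distribution upward produces a large jump in the share of students who continue, which is the qualitative pattern the Williams simulations document.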
We took another swing at the issue of grades and course choice with a couple of articles in our Summer 2009 issue. Alexandra C. Achen and Paul N. Courant asked "What Are Grades Made Of?" They argue: "Grades are an element of an intra-university economy that determines, among other things, enrollments and the sizes of departments. ... Departments generally would prefer small classes populated by excellent and highly motivated students. The dean, meanwhile, would like to see departments supply some target quantity of credit hours—the more the better, other things equal—and will penalize departments that don’t do enough teaching. In this framework, grades are one mechanism that departments can use to influence the number of students who will take a given class."
Focusing on 25 years of grade data from the University of Michigan, they find: "First, the distribution of grades is likely to be lower where courses are required, and where there are agreed-upon and readily assessed criteria—right or wrong answers—for grading. By contrast, departments that evaluate student performance using interpretative methods will tend to have higher grades, because using these methods increases the personal cost to instructors of assigning and defending low grades. Second, upper-division classes are likely to have higher grades than lower-division classes, both because students have selected into the upper-division courses where their performance is likely to be stronger and because faculty want to support (and may even like) their student majors. Third, grades can be used in conjunction with other tools to attract students to departments that have low enrollments and to deter students from courses of study that are congested. We find some evidence in support of each of these patterns. As it happens, the consequence of the preceding tendencies is that, indeed, the sciences (mostly) grade harder than the humanities. ..."
"We argue that differential grading standards have potentially serious negative consequences for the ideal of liberal education. At the same time, we conclude that any discussion of a policy response to grade inflation must begin by recognizing that American colleges and universities are now in at least the fifth decade of well-documented grade inflation and differences in grading norms by field. Current grading behavior must and will be interpreted in the context of current norms and expectations about grades, not according to some dimly imagined (anyone who actually remembers it is retired) age of uniform standards across departments. Proposals that attempt to alter grading behavior will face the costs of acting against prevailing customs and expectations, whether in altering pre-existing patterns of grades across departments within a college or university or in attempting to alter grades in one institution while recognizing that other universities may
not change."
In that same issue, Talia Bar, Vrinda Kadiyali, and Asaf Zussman discuss one proposal to alter the incentives for grade inflation in "Grade Information and Grade Inflation: The Cornell Experiment." They report that in "April 1996, the [Cornell] Faculty Senate voted to adopt a new grade reporting policy which had two parts: 1) the publication of course median grades on the Internet; and 2) the reporting of course median grades in students’ transcripts. ... Curbing grade inflation was not explicitly stated as a goal of this policy. Instead, the stated rationale was that “students will get a more accurate idea of their performance, and they will be assured that users of the transcript will also have this knowledge."
To give a sense of the institutional obstacles here, they report that while median grades were publicly available on-line in 1998, at the time the article was written this information did not yet appear on actual student transcripts. As they point out, making this information available may have the undesired effect of encouraging students even more to take courses with easier grades! They also argue that the propensity to take easier-grading courses will be greater for lower-ability students. Thus, students will tend to sort themselves, with higher-ability students in tougher-grading classes and lower-ability students in easier-grading classes. Indeed, they estimate that nearly half of the grade inflation for Cornell as a whole, in the years after median grades were posted on the web, was attributable to students sorting themselves out in this way.
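A bit of arithmetic shows how sorting alone can raise the campus-wide average grade even if no instructor changes a grading standard. The course averages and enrollment shares below are hypothetical, chosen only to illustrate the composition effect, not to reproduce the Cornell estimates:

```python
# Toy composition-effect arithmetic (my illustration, not the paper's estimate).
# Two courses: a leniently graded one and a strictly graded one. No instructor
# changes a grading standard; only the enrollment shares change.

lenient_avg, strict_avg = 3.4, 2.9   # hypothetical course grade averages

def campus_average(share_in_lenient_course):
    return share_in_lenient_course * lenient_avg + (1 - share_in_lenient_course) * strict_avg

before = campus_average(0.50)   # before median grades were published
after = campus_average(0.65)    # after: more students, especially weaker ones,
                                # sort into the leniently graded course
print(f"Campus average before sorting: {before:.2f}")
print(f"Campus average after sorting:  {after:.2f}")
print(f"Grade 'inflation' from composition alone: {after - before:+.2f} grade points")
```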
In short, grade inflation in the humanities has been contributing to college students moving away from science, technology, engineering, and math fields, as well as economics, for the last half century. It's time for the pendulum to start swinging back. A gentle starting point would be to make the distribution of grades by institution and by academic department (or for small departments, perhaps grouping a few departments together) publicly available, and perhaps even to add this information to student transcripts. If that answer isn't institutionally acceptable, I'm open to alternatives.
Job Openings, Labor Turnover, and the Beveridge Curve
The Bureau of Labor Statistics has just put out its "Job Openings and Labor Turnover Survey Highlights" with data up through September 2011--and lots of nice graphs and explanations. The 2010 Nobel Prize in Economics went to Peter A. Diamond, Dale T. Mortensen, and Christopher A. Pissarides "for their analysis of markets with search frictions." Their work is a reminder that unemployment is not just about a shortfall in demand, but is also a matter of search and matching by potential workers and employers. The JOLTS data offers the factual background on job openings, separations, hires, and more. The overall picture is of an unpleasantly stagnant labor market.
As a starting point, look at the relationship between hires, separations, and employment. Most of the time, the red line showing job separations and the blue line showing hires are pretty close together, with one just a bit over the other. When separations exceed hires for some months running in the early 2000s, total employment declines. Then hires exceed separations by a bit, month by month, and total employment grows. During the Great Recession, hiring drops off a cliff and separations fall sharply as well (more on that in a second). Total employment has rebounded a bit since late 2009, but it's interesting to note that the levels of hires and separations remain so low. Those with jobs are tending to stay in them; those without jobs aren't getting hired at a rapid rate.
What explains why job separations would fall during the Great Recession? After all, don't more people lose their jobs in a recession? Yes, but the category of "job separations" has two parts: voluntary quits and layoffs/discharges. During the recession, layoffs and discharges do rise sharply as shown by the red line, but quits fall even faster as shown by the blue line, as those with jobs hung on to them. Overall, job separations decreased. Notice that in the last year or so, layoffs and discharges have actually been relatively low compared to the pre-recession years in the mid-2000s. Quits have stayed lower, too.
The number of job openings in an economy tends to be a leading indicator for changes in employment. Notice the sharp drop in job openings in the recession of 2001, and again the sharp drop in job openings during the 2007-2009 recession. Employment levels decline soon after. However, it's interesting to note the upturn in job openings since the low point in July 2009, and how employment has correspondingly grown.
Looking at the ratio of job openings to the unemployed gives a sense of how difficult it is to find a job at any given time. Before the recession of 2001, there were about 1.2 unemployed people per job opening. In the aftermath of the "jobless recovery" from that recession, there were about 2.8 unemployed people per job opening in late 2003. There were about 1.5 unemployed people per job opening in mid-2007, but just after the end of the recession in late 2009, there were almost 7 unemployed people for every job opening. This statistic helps to emphasize that it isn't just that the unemployment rate remains high, but that unemployed people in a stagnant labor market, with low hiring and few separations, objectively will have a hard time finding jobs.
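The ratio itself is just the number of unemployed divided by the number of job openings. The sketch below shows the calculation with rough, rounded magnitudes of my own choosing; the ratios quoted above come from the actual BLS series:

```python
# Illustrative arithmetic for the "unemployed per job opening" ratio.
# The levels below are rough, rounded magnitudes of my own choosing, used only
# to show the calculation; the ratios quoted in the text come from BLS data.

scenarios = {
    "mid-2007": {"unemployed_millions": 7.0, "openings_millions": 4.6},
    "late 2009": {"unemployed_millions": 15.2, "openings_millions": 2.2},
}

for label, s in scenarios.items():
    ratio = s["unemployed_millions"] / s["openings_millions"]
    print(f"{label}: about {ratio:.1f} unemployed people per job opening")
```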
A final figure from this data is called the Beveridge curve. BLS explains: "This graphical representation of the relationship between the unemployment rate and the vacancy rate is known as the Beveridge Curve, named after the British economist William Henry Beveridge (1879-1963). The economy’s position on the downward sloping Beveridge Curve reflects the state of the business cycle. During an expansion, the unemployment rate is low and the vacancy rate is high. During a contraction, the unemployment rate is high and the vacancy rate is low." The figure is usefully colored in time segments, so the period before the 2001 recession is in light blue in the upper left corner; the 2001 recession is the darker blue line; the period of growth in the mid-2000s is the red line; the Great Recession is the green line; and the period since the recession is the purple line at the bottom right. The severity of the Great Recession is apparent as the green line stretches down to the right, with much higher unemployment and lower rates of job openings than the 2001 recession.
But the Beveridge curve also raises an interesting question: Is the economy getting worse at matching people with jobs? The low levels of hiring and separations suggest a stagnant labor market. The Beveridge curve might be another signal. As BLS explains: "The position of the curve is determined by the efficiency of the labor market. For example, a greater mismatch between available jobs and the unemployed in terms of skills or location would cause the curve to shift outward, up and toward the right." Notice that as the number of job vacancies has increased, the unemployment rate hasn't fallen as quickly as one might expect. To put it another way, the purple line is not retracing its way back up the green line of the Great Recession, but instead is above it and to the right.
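For readers who want to look at this themselves, here is a minimal sketch of a Beveridge-curve plot. The handful of data points are placeholders invented only so the script runs; a real version would pull the monthly unemployment rate from the Current Population Survey and the job-openings rate from JOLTS:

```python
import matplotlib.pyplot as plt

# Minimal Beveridge-curve sketch. The points below are placeholders invented
# only so the script runs; a real version would use the monthly unemployment
# rate (CPS) and the job-openings rate (JOLTS) from the BLS.
placeholder_periods = {
    "expansion (placeholder)": ([4.5, 4.6, 4.7], [3.0, 3.1, 3.0]),
    "recession (placeholder)": ([6.0, 8.0, 9.5], [2.5, 2.0, 1.8]),
    "recovery (placeholder)":  ([9.5, 9.4, 9.0], [2.0, 2.2, 2.4]),
}

for label, (unemployment_rate, openings_rate) in placeholder_periods.items():
    plt.plot(unemployment_rate, openings_rate, marker="o", label=label)

plt.xlabel("Unemployment rate (%)")
plt.ylabel("Job openings rate (%)")
plt.title("Beveridge curve (illustrative placeholder data)")
plt.legend()
plt.show()
# If the post-recession points sit above and to the right of the recession
# path, that is the outward shift discussed above.
```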
Of course, relationships in the economy aren't going to be as precise as, say, the relationship between altitude and air pressure. There isn't yet enough data to prove whether the Beveridge curve has in fact shifted out. But if it has indeed become harder in the U.S. economy to match unemployed workers with job openings--perhaps because the skills that employers are searching for are not the same as the skills that the unemployed have to offer?--then it will be even harder to bring down the unemployment rate.
An Alternative Poverty Measure from the Census Bureau
When the Census Bureau released its annual estimates of the poverty statistics in September, I mentioned some of the main themes in U.S. Poverty by the Numbers. I also mentioned that the Census Bureau was going to follow up with a report offering an alternative measure of poverty, which has now been published. Kathleen Short describes "The Research Supplemental Poverty Measure: 2010" in Current Population Reports P60-241.
When the Supplemental Poverty Measure is calculated, what picture of poverty in the United States emerges? How does that picture differ from the one given by the official poverty rate? Here are some main themes:
The absolute number of people below the poverty line is much the same, but slightly higher. In 2010, there were 46.6 million people below the official poverty line, for a poverty rate of 15.2%; with the new Supplemental Poverty Measure, it would have been 49.1 million people below the poverty line, for a poverty rate of 16.0%. In this sense, my quick reaction is that the existing poverty line has held up fairly well. However, the Supplemental Poverty Measure identifies a somewhat different group of people as poor.
One striking difference is poverty rates by age. Under the official poverty rate, it has long been true that the poverty rate for those age 18 and younger is much higher than the poverty rate for those 65 and older: in 2010, the official "under 18 years" poverty rate was 22.5%, while the "over 65" poverty rate was 9.0%. However, under the new Supplemental Poverty Measure, the "under 18" poverty rate would be lower at 18.2%, while the "over 65" poverty rate would be 15.9%. Part of the reason here is that the official poverty rates have a different standard for the over-65 group, while the SPM does not. Counting food stamps and the earned income tax credit, and looking at shared "consumer units," tends to reduce poverty rates among children, while taking out-of-pocket medical care expenses into account tends to increase poverty rates among the elderly.
Other differences emerge as well. Although the overall poverty rate would be higher under the Supplemental Poverty Measure, for certain groups the Supplemental Poverty Measure rate would be lower. For example, the poverty rate for blacks in 2010 was 27.5% under the official measure, but 25.4% under the SPM. The poverty rate for renters was 30.5% under the official measure, but 29.4% under the SPM. The poverty rate for those living outside metropolitan statistical areas was 16.6% with the official measure, but would be 12.8% under the SPM. The poverty rate in Midwestern states was 14.0% with the official measure in 2010, but would be 13.1% under the SPM.
At present, the Census Bureau treats the Supplemental Poverty Measure as "a research operation," and says that it will "improve the measures presented here as resources allow." The official poverty line will remain the line that is used in legislation and as a basis for eligibility for various government programs. This seems wise to me. One great virtue of the existing poverty line is that it isn't changing each year in response to research or political calculations, so it serves as a steady, if imperfect, standard of comparison over time.
But the Supplemental Poverty Measure as it develops seems sure to become part of our national conversation about poverty, because the way it is calculated raises questions about what it means to be for a "consumer unit" to be poor, and what it means to define poverty across a large country with many local and regional differences.
As a starting point, here's my one-paragraph overview of the genesis of the current poverty line, taken from Chapter 16 of my Principles of Economics textbook available from Textbook Media:
"In the United States, the official definition of the poverty line traces back to a single person: Mollie Orshansky. In 1963, Orshansky was working for the Social Security Administration, where she published an article called "Children of the Poor" in a highly useful and dry-as-dust publication called the Social Security Bulletin. Orshansky's idea was to define a poverty line based on the cost of a healthy diet. Her previous job had been at the U.S. Department of Agriculture, where she had worked in an agency called the Bureau of Home Economics and Human Nutrition, and one task of this of this bureau had been to calculate how much it would cost to feed a nutritionally adequate diet to a family. Orshansky found evidence that the average family spent one-third of its income on food. Thus, she proposed that the poverty line be the amount needed to buy a nutritionally adequate diet, given the size of the family, multiplied by three. The current U.S. poverty line is essentially the same as the Orshansky poverty line, although the dollar amounts are adjusted each year to represent the same buying power over time."It has been argued for at least a couple of decades that while a poverty line defined in this way is workable, it can be improved. Back in 1995, a National Academy of Sciences Panel made some recommendations for a new approach to measuring poverty. Kathleen Short summarizes the main concerns of the NAS panel near the start of her report:
- "The current income measure does not reflect the effects of key government policies that alter the disposable income available to families and, hence, their poverty status. Examples include payroll taxes, which reduce disposable income, and in-kind public benefit programs such as the Food Stamp Program/Supplemental Nutrition Assistance Program (SNAP) that free up resources to spend on nonfood items.
- The current poverty thresholds do not adjust for rising levels and standards of living that have occurred since 1965. The official thresholds were approximately equal to half of median income in 1963–64. By 1992, one-half of median income had increased to more than 120 percent of the official threshold.
- The current measure does not take into account variation in expenses that are necessary to hold a job and to earn income—expenses that reduce disposable income. These expenses include transportation costs for getting to work and the increasing costs of child care for working families resulting from increased labor force participation of mothers.
- The current measure does not take into account variation in medical costs across population groups depending on differences in health status and insurance coverage and does not account for rising health care costs as a share of family budgets.
- The current poverty thresholds use family size adjustments that are anomalous and do not take into account important changes in family situations, including payments made for child support and increasing cohabitation among unmarried couples.
- The current poverty thresholds do not adjust for geographic differences in prices across the nation, although there are significant variations in prices across geographic areas."
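To make the contrast concrete, here is a minimal sketch in Python of the two approaches: an Orshansky-style threshold (a food budget multiplied by three, compared against pre-tax cash income) versus an SPM-style resource measure that adds in-kind benefits and subtracts taxes and necessary expenses. The function names and every number are my own inventions for illustration; this is not the Census Bureau's actual procedure, and the real SPM also uses its own expenditure-based thresholds.
# Illustrative only: invented numbers, not the Census Bureau's methodology.
def orshansky_threshold(annual_food_cost, food_share=1/3):
    # Orshansky-style line: cost of an adequate diet divided by the food share
    # of the family budget, i.e., multiplied by three.
    return annual_food_cost / food_share

def official_resources(cash_income):
    # The official measure compares pre-tax cash income to the threshold.
    return cash_income

def spm_style_resources(cash_income, snap=0, tax_credits=0,
                        payroll_taxes=0, work_and_childcare=0, medical_oop=0):
    # An SPM-style measure adds in-kind benefits and refundable credits, and
    # subtracts taxes, work and child-care expenses, and out-of-pocket medical costs.
    return (cash_income + snap + tax_credits
            - payroll_taxes - work_and_childcare - medical_oop)

threshold = orshansky_threshold(7_500)   # hypothetical food budget -> a $22,500 line
family = dict(cash_income=23_000, snap=3_000, tax_credits=1_500,
              payroll_taxes=1_800, work_and_childcare=2_500, medical_oop=4_000)

print(official_resources(family["cash_income"]) < threshold)  # False: not poor under the official measure
print(spm_style_resources(**family) < threshold)              # True: poor under the SPM-style measure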
When all this is done, what picture of poverty in the United States emerges? How does that picture of poverty differ from the official existing poverty rates? Here are some main themes:
The absolute number of people below the poverty line is much the same, but slightly higher. In 2010, there were 46.6 million people below the official poverty line, for a poverty rate of 15.2%; with the new Supplemental Poverty Measure, it would have been 49.1 million people below the poverty line, for a poverty rate of 16.0%. In this sense, my quick reaction is that the existing poverty line has held up fairly well. However, the Supplemental Poverty Measure identifies a somewhat different group of people as poor.
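As a quick back-of-the-envelope check (my arithmetic, not a Census figure), the two sets of counts and rates imply essentially the same population base, with roughly 2.5 million more people counted as poor under the SPM:
official_poor, official_rate = 46.6e6, 0.152
spm_poor, spm_rate = 49.1e6, 0.160
print(official_poor / official_rate / 1e6)  # ~306.6 million implied population base
print(spm_poor / spm_rate / 1e6)            # ~306.9 million, essentially the same base
print((spm_poor - official_poor) / 1e6)     # ~2.5 million more people counted as poor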
One striking difference is poverty rates by age. Under the official poverty rate, it has long been true that the poverty rate for those age 18 and younger is much higher than the poverty rate for those 65 and older: in 2010, the official "under 18 years" poverty rate was 22.5%, while the "over 65" poverty rate was 9.0%. However, under the new Supplemental Poverty Measure, the "under 18" poverty rate would be lower at 18.2%, while the "over 65" poverty rate would be 15.9%. Part of the reason is that the official poverty measure uses a different standard for the over-65 group, while the SPM does not. Food stamps, the earned income tax credit, and looking at shared "consumer units" tend to reduce poverty rates among children, while taking out-of-pocket medical care expenses into account tends to increase poverty rates among the elderly.
Other differences emerge as well. Although the overall poverty rate would be higher under the Supplemental Poverty Measure, for certain groups the Supplemental Poverty Measure rate would be lower. For example, the poverty rate for blacks in 2010 was 27.5% under the official measure, but 25.4% under the SPM. The poverty rate for renters was 30.5% under the official measure, but 29.4% under the SPM. The poverty rate for those living outside metropolitan statistical areas was 16.6% with the official measure, but would be 12.8% under the SPM. The poverty rate in Midwestern states was 14.0% with the official measure in 2010, but would be 13.1% under the SPM.
At present, the Census Bureau treats the Supplemental Poverty Measure as "a research operation," and says that it will "improve the measures presented here as resources allow." The official poverty line will remain the line that is used in legislation and as a basis for eligibility for various government programs. This seems wise to me. One great virtue of the existing poverty line is that it isn't changing each year in response to research or political calculations, so it serves as a steady, if imperfect, standard of comparison over time.
But the Supplemental Poverty Measure, as it develops, seems sure to become part of our national conversation about poverty, because the way it is calculated raises questions about what it means for a "consumer unit" to be poor, and what it means to define poverty across a large country with many local and regional differences.
A State-Level Gold Standard?
Barry Eichengreen provides "A Critique of Pure Gold" in the September/October issue of the National Interest. He speaks for most economists in referring to the idea of a return to the gold standard as "an oddball proposal," and explains why in some detail. What caught my eye is that apparently some states have been considering requiring payments in the form of gold--a sort of mini-gold standard. Eichengreen writes:
"A Montana measure voted down by a narrow margin of fifty-two to forty-eight in March would have required wholesalers to pay state tobacco taxes in gold. A proposal introduced in the Georgia legislature would have called for the state to accept only gold and silver for all payments, including taxes, and to use the metals when making payments on the state’s debt.It is odd, to say the least, than many of those who favor a gold standard have also been investing in gold hoping to see its price rise. But as Eichengreen notes, in a gold standard, the price of gold would typically be set at a fixed level--historically, often a level below what would otherwise have been the market price. When President Richard Nixon officially ended what remained of the gold standard in 1971, gold was only used to pay debts to foreign governments holding U.S. dollars, and at a fixed price of $35/ounce.
In May, Utah became the first state to actually adopt such a policy. Gold and silver coins minted by the U.S. government were made legal tender under a measure signed into law by Governor Gary Herbert. Given the difficulty of paying for a tank of gas with a $50 American eagle coin worth some $1,500 at current market prices, entrepreneurs then floated the idea of establishing private depositories that would hold the coin and issue debit cards loaded up with its current dollar value. It is unlikely this will appeal to the average motorist contemplating a trip to the gas station since the dollar value of the balance would fluctuate along with the current market price of gold. It would be the equivalent of holding one’s savings in the form of volatile gold-mining stocks.
Historically, societies attracted to using gold as legal tender have dealt with this problem by empowering their governments to fix its price in domestic-currency terms (in the U.S. case, in dollars)."
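Going back to those debit cards loaded with the dollar value of a gold coin, a small hypothetical calculation shows why the balance would be a moving target (the prices here are made up for illustration):
# Hypothetical: a one-ounce coin on deposit, with the card balance tracking the gold price.
ounces_on_deposit = 1.0
for gold_price in (1500, 1350, 1650):   # made-up daily prices, dollars per ounce
    balance = ounces_on_deposit * gold_price
    print(f"gold at ${gold_price}/oz -> card balance ${balance:,.0f}")
# A 10% swing in the price of gold is a 10% swing in what the card will buy at the pump.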
Eichengreen traces how the idea of a gold standard has re-entered public discourse, championed by Ron Paul, who in turn refers to the work of Friedrich Hayek. But as Eichengreen reminds us, while Hayek was a fierce critic of central banking, and argued that central bankers needed to be controlled lest they conduct monetary policy in a way that feeds cycles of boom and bust in the economy, Hayek did not support a gold standard. In Eichengreen's words, summing up Hayek's standard arguments against a gold standard:
"At the end of The Denationalization of Money, Hayek concludes that the gold standard is no solution to the world’s monetary problems. There could be violent fluctuations in the price of gold were it to again become the principal means of payment and store of value, since the demand for it might change dramatically, whether owing to shifts in the state of confidence or general economic conditions. Alternatively, if the price of gold were fixed by law, as under gold standards past, its purchasing power (that is, the general price level) would fluctuate violently. And even if the quantity of money were fixed, the supply of credit by the banking system might still be strongly procyclical, subjecting the economy to destabilizing oscillations, as was not infrequently the case under the gold standard of the late nineteenth and early twentieth centuries."
Hayek's answer to the problems of unrestricted central bankers was to allow the rise of private sources of money. Eichengreen continues:
"For a solution to this instability, Hayek himself ultimately looked not to the gold standard but to the rise of private monies that might compete with the government’s own. Private issuers, he argued, would have an interest in keeping the purchasing power of their monies stable, for otherwise there would be no market for them. The central bank would then have no option but to do likewise, since private parties now had alternatives guaranteed to hold their value."
Independence and Depression: Economics of the American Revolution
For at least half a century, economic historians looking at colonial America have started with 1840--when the U.S. census collected useful data about economic issues like occupations and industry--and then worked backward. A common approach was to divide the 1840 economy into sectors and then work backward, making reasonable estimates of the number of workers in each sector and their productivity.
Peter Lindert and Jeffrey Williamson have been taking an alternative approach. They have been collecting available archival data, like local censuses, tax lists, and occupational directories. They look for data on occupation or in some cases on social class, and then combine it with data on wages. They then extrapolate from documented localities within a region to similar undocumented localities within a region, and so on up to the national level. More broadly, instead of trying to estimate GDP from the production side of the economy, they try to estimate it from the income-earning side of the economy.
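In spirit, the income-side bookkeeping works something like the following sketch. The occupation counts, wages, and documented share are entirely invented, and the real work involves far more categories and corrections; this only shows the structure of the calculation, not Lindert and Williamson's actual procedure.
# Invented numbers: workers and average annual earnings for one documented locality.
occupations = {
    "farmers":   (1200, 45),
    "laborers":  (800, 30),
    "merchants": (150, 120),
    "clerks":    (100, 80),
}
locality_income = sum(count * wage for count, wage in occupations.values())

documented_share = 0.4   # assumed share of the region's workforce in documented localities
regional_income = locality_income / documented_share

print(locality_income)   # 104,000 in the (arbitrary) wage units used above
print(regional_income)   # 260,000 for the region, scaling up to undocumented localities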
A nice readable overview of their work is available in an essay published in July on VOX called "America's Revolution: Economic disaster, development, and equality." Those who want to know more about how the sausage was made can look at their NBER working paper (#17211) from last July: "American Incomes Before and After the Revolution." And those who want to see the actual uncooked meat inside the sausage can look at their open-source data website here. The effort is clearly a work in progress: at one point they refer to it as "controlled conjectures" and at another point as "provocative initial results." Here are three of their findings:
During the Revolutionary War and in its aftermath, the U.S. economy contracted by Depression-level amounts. From 1774 up to about 1790, on their analysis, the U.S. economy may have declined by "28% or even higher in per capita terms." They offer several plausible reasons for this decline: the destruction caused by the War itself; the sharp decline in exports caused by the Revolutionary War, including the loss of more than half of all pre-war trade with England by 1791; and the departure of skilled and well-connected loyalists. Urbanization is typically a sign of economic development, but during this time period, the U.S. economy was de-urbanizing. They write: "To identify the extent of the urban damage, one could start by noting that the combined share of Boston, New York City, Philadelphia, and Charleston in a growing national population shrank from 5.1% in 1774 to 2.7% in 1790, recovering only partially to 3.4% in 1800. There is even stronger evidence confirming an urban crisis. The share of white-collar employment was 12.7% in 1774, but it fell to 8% in 1800; the ratio of earnings per free worker in urban jobs relative to that of total free workers dropped from 3.4 to 1.5 ..."
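For a sense of scale, a cumulative 28% per capita decline over the sixteen years from 1774 to 1790 works out to roughly a 2% fall every year, as a quick calculation shows:
# Back-of-the-envelope: annual rate of decline implied by a 28% cumulative drop, 1774-1790.
years = 1790 - 1774
cumulative_drop = 0.28
annual_decline = 1 - (1 - cumulative_drop) ** (1 / years)
print(f"{annual_decline:.3%}")   # about 2% per year, sixteen years running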
These economic losses seem to me an often-neglected part of the usual historical narrative of America's War for Independence. Those fighting for independence were sticking to their cause, even as the typical standard of living plummeted.
The American South was the region that suffered by far the most from the Revolutionary War.
On their estimates, the New England region suffered only a modest decline in per capita GDP of 0.08% per year from 1774 to 1800, and then grew at a robust annual rate of 2.1% from 1800 to 1840. The Middle Atlantic region suffered a larger annual decline in per capita GDP of 0.45% from 1774 to 1800, but bounced back with an annual growth rate in per capita GDP of 1.45% from 1800 to 1840. However, the Southern region experienced a near-catastrophic drop of 1.57% per year in per capita GDP over the quarter-century from 1774 to 1800, and rebounded to a growth rate of just 0.43% from 1800 to 1840. On their numbers, the South had by far the highest incomes of the three regions in 1774, and by far the lowest per capita GDP of the three regions by 1840. Indeed, on their estimates, real per capita GDP in the South in 1840 was about 20% below its level in 1774!
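Those regional growth rates are consistent with that last claim; compounding the two sub-periods for the South gives a quick check:
# Rough check: does -1.57%/year for 1774-1800 plus +0.43%/year for 1800-1840
# leave the South about 20% below its 1774 level in 1840?
level_1800 = (1 - 0.0157) ** (1800 - 1774)
level_1840 = level_1800 * (1 + 0.0043) ** (1840 - 1800)
print(f"1840 level relative to 1774: {level_1840:.2f}")   # about 0.79, i.e. roughly 20% below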
This absolute and relative decline of the South has been used as an example of how institutions can shape long-run economic development. The basic argument is that when the New World was settled, certain areas seemed well suited to mining and plantation agriculture. Those areas ended up with what Daron Acemoglu, Simon Johnson, and James A. Robinson in a 2002 article referred to as "extractive institutions, which concentrate power in the hands of a small elite and create a high risk of expropriation for the majority of the population, are likely to discourage investment and economic development. Extractive institutions, despite their adverse effects on aggregate performance, may emerge as equilibrium institutions because they increase the rents captured by the groups that hold political power." The alternative is areas where extractive economics won't work, and these areas instead receive a "cluster of institutions ensuring secure property rights for a broad cross section of society, which we refer to as institutions of private property, are essential for investment incentives and successful economic performance." In their 2002 article in the Quarterly Journal of Economics, the authors apply this dynamic broadly across the settlement of the New World, and they title the article: "Reversal of Fortune: Geography and Institutions in the Making of the Modern World Income Distribution." For a nice readable article laying out a similar theory, see "Institutions, Factor Endowments, and Paths of Development in the New World," by Kenneth L. Sokoloff and Stanley L. Engerman, in the Summer 2000 issue of my own Journal of Economic Perspectives. (The JEP is publicly available, including the most recent issue and archives going back more than a decade, courtesy of the American Economic Association.)
I can't claim any expertise on the interaction of economic conditions and public mood in the years leading up to the U.S. Civil War. But it does seem to me that seeing the U.S. South as a region where per capita GDP had for decades been struggling to recover from an enormous decline, while in relative terms falling ever farther behind other regions of the country, helps to deepen my understanding of the South's sense of separateness, which fed into a willingness to secede.
Without the economic damage from the Revolutionary War, the U.S. economy might have started its period of more rapid economic growth several decades sooner--and perhaps been the first nation in the world to do so. Economic historians do love considering counterfactual possibilities, and this one strikes me as a provocative one. Lindert and Williamson write: "It seems clear that America joined Kuznets’s modern economic growth club sometime after 1790, with the North leading the way, while the South underwent a stunning reversal of fortune. And without the 1774-1790 economic disaster, it appears that America might well have recorded a modern economic growth performance even earlier, perhaps the first on the planet to do so."
