Roger Federer has a good chance later this year to pass Pete Sampras for No. 3 on the all-time men’s tennis aces list. Ivo Karlovic, in turn, is hot on Federer’s heels for No. 4 and could pass him at the U.S. Open this week.

You may start hearing more about this leaderboard as Federer and Karlovic keep overtaking former greats and each other. If you do, it’s worth keeping in mind that among trivial sports records, the all-time aces title is a particularly meaningless accomplishment:

  • The ace counts only go back to 1991, when umpires at ATP World Tour and Grand Slam events started recording point-by-point data, including aces (serves that land in and which the returner can’t touch with his racket). All-time leader Goran Ivanisevic and Sampras (No. 3 in aces) debuted three years before the match stats did. Sampras probably would have a much wider lead over Federer and No. 5 Karlovic if his 125 matches from his first three years on tour counted toward his ace total. And Ivanisevic would have a more secure grip on No. 1 if he got credit for his 136 matches between 1988 and 1990; as it stands now, his countryman Karlovic could pass him as soon as late next year. No. 11 Marc Rosset — who also debuted in 1988, a banner year for big servers — might have held the lead among Swiss players over Federer for longer if his 62 matches through 1990 counted. Even more of the careers of big servers Boris Becker, John McEnroe and Roscoe Tanner are shrouded by the sport’s statistical blind spot.
  • The Davis Cup, the sport’s international team competition, counts toward players’ official match records. But merely for administrative reasons — “with all the ties all over the world we don’t have a system to get the data,” ATP stats overseer Greg Sharko said in an email — Davis Cup stats don’t count toward ATP totals. That means Karlovic doesn’t get credit for his 78 aces in a Davis Cup loss in 2009. On the other hand, Karlovic has played only 17 Davis Cup singles matches, while Federer has played 42 — more than Ivanisevic and Sampras, though three fewer than the 45 played by Andy Roddick, who is second on the all-time aces list. No aces from those matches count toward these players’ totals.
  • All counting stats are crude ways to estimate athletes’ ability. Some quarterbacks amass more yards because they play for pass-happy offensive coordinators. Baseball players can get more runs batted in by batting often with teammates on base. Raw ace counts in a single match face the same problem: Karlovic got those 78 aces in the 2009 Davis Cup match because it went to 16-14 in the fifth set. That’s less impressive than his 44 aces in a three-set match earlier this year. It’s all the less meaningful, then, to compare ace counts over a season or career. Federer has a narrow lead over Karlovic mainly because he has played more than twice as many matches. That’s a credit to Federer’s overall superiority as a tennis player compared to Karlovic, and to just about everyone else who ever has lifted a racket. The better you are at the sport, the more you win, the more matches you play, and the more chances you have to rack up aces. It’s hard for Karlovic to keep up with Federer while losing in the first or second round of many tournaments and watching his rival get to keep serving for four to six matches into the final. But if the point of ace counts is to say who’s the best player, there are plenty of better ways to measure it. And if it’s to measure who has the best serve — ignoring returns and everything else in tennis — then a rate stat would do much better. Karlovic has aced opponents on nearly 23 percent of his service points since 2010, or more than double Federer’s 10 percent. Appropriately, Karlovic hit 24 aces in his U.S. Open debut Tuesday, while Federer hit 10. Both men won.
  • Karlovic has started catching up — by playing more matches. The hard way to do that is to win more matches at big tournaments. The easier way to do it is to sign up for lots of the lower-level tournaments called 250s, which are the weakest ATP World Tour events that count toward ace counts. Karlovic has played in 23 250-level tournaments over the past two years. Federer has played in four. Those events have helped Karlovic make up ground on Federer, both because he’s getting more matches and because he’s playing weaker opponents. By median and mean ranking of opponents, Karlovic’s schedule this year and last has been roughly twice as easy as Federer’s. Weaker opponents are, on average, easier to ace. For instance, the last time Federer and Karlovic played each other, Federer’s ace percentage was higher than usual, at 11.5 percent, while Karlovic’s was lower, at 19.1 percent.


Last week, Grantland’s Bill Barnwell put together a method to determine the fastest (and slowest) teams in the NFL, aggregating the combine and pro day 40-yard dash times for each team’s starting offensive skill players and secondary. It’s a clever way to address the nebulous question of “team speed,” but it made me wonder about alternative ways to quantify which teams are the fastest.

My idea? Use the player “speed” ratings from EA Sports’ “Madden NFL 15” video game and compute the team average for every 2013 NFL roster, weighted by each individual player’s Approximate Value (Pro-Football-Reference’s attempt to generate a single numerical value representing the quality of any player’s season, regardless of position).
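
For concreteness, here is a minimal sketch of that weighted average in Python. The roster numbers below are invented placeholders; the real calculation used each player's "Madden NFL 15" speed rating and his 2013 Approximate Value.

```python
def weighted_team_speed(players):
    """AV-weighted average of Madden speed ratings for one roster.

    `players` is a list of (madden_speed, approximate_value) pairs, so
    players who contributed more (higher AV) count more toward the average.
    """
    total_av = sum(av for _, av in players)
    if total_av == 0:
        return None  # avoid dividing by zero for an empty roster
    return sum(speed * av for speed, av in players) / total_av

# Invented example roster: (Madden speed rating, 2013 Approximate Value)
example_roster = [(98, 16), (91, 12), (88, 9), (84, 7), (79, 5)]
print(round(weighted_team_speed(example_roster), 1))
```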

By that method of computation, here were the fastest and slowest teams from a year ago:

[Chart: fastest and slowest 2013 NFL teams by AV-weighted Madden speed rating]

Madden ratings are, of course, subjective at their core, which makes them a matter of fierce debate each year — as you would expect to be the case when fans’ favorite players are put on trial using an arbitrary scoring system. By contrast, 40 times are objective and allow for absolute comparisons (mostly), but the only publicly available versions come from a player’s rookie year, which doesn’t necessarily offer a snapshot of how fast a player is later in his career.

In other words, there’s no perfect way to measure team speed. But there is a mild correlation, -0.31, between the Madden method and Barnwell’s process (the correlation is negative because Barnwell summed all the players’ 40 times, meaning higher is slower). And the two approaches share conclusions. As Barnwell hinted in his piece, speed isn’t necessarily associated with good offenses. I found a negative correlation between a team’s “Madden speed” and its offensive Simple Rating System (SRS) score from last season. (If you’re curious, defense had a very slight positive correlation with speed.)
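
As a rough illustration of how those correlations were computed, here's a minimal Pearson-correlation sketch. The team-level numbers are invented stand-ins, not the actual Madden-speed averages or summed 40 times.

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Invented team values: AV-weighted Madden speed vs. summed 40-yard-dash times.
madden_speed = [88.1, 87.4, 86.9, 85.2, 84.8]
summed_40s = [47.1, 47.6, 47.3, 48.0, 48.4]
print(round(pearson_r(madden_speed, summed_40s), 2))  # negative, since a higher 40 total means slower
```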

And here’s the twist: In terms of overall team quality (as measured by winning percentage, where the correlation to speed was a paltry -0.056, or SRS, where it was -0.082), how good a team was bore essentially no relationship to how fast its players were. For instance, while the league’s three fastest rosters — Kansas City, Philadelphia and Seattle — had good seasons in 2013, Nos. 4 and 5 on the speed list — Washington and Houston — each won fewer than 20 percent of their games.

Looking at the whole league, it appears Al Davis’s all-consuming obsession with team speed was misguided. In 2013 at least, the speed of a roster was completely incidental to how well a team performed.


The earthquake that struck California’s Napa Valley early Sunday morning could cost $1 billion or more, warn many news headlines.

“Could” is the key word in that statement. Calculating economic loss from earthquakes is a lengthy and unpredictable process. Acknowledging that uncertainty, the United States Geological Survey (USGS) reports a wide range of estimated economic losses, as it does after every earthquake. In the latest version of its alert, issued Tuesday morning, the USGS says there is a 3-in-5 chance that losses will top $1 billion and a 1-in-4 chance that losses will top $10 billion. Then again, there’s a 1-in-8 chance losses will come in under $100 million.

The USGS is more confident in its forecast that no one was killed by the Napa earthquake, thanks in part to Northern California’s earthquake-resistant construction standards. The agency says there’s a 3-in-4 chance that no one died, and there almost certainly were fewer than 10 fatalities. (The USGS doesn’t forecast injuries.)

These estimates aren’t based on what the USGS calls “ground truth”: assessing damage to buildings and infrastructure and calculating costs. Instead they’re outputs from a model based on earthquake magnitude, seismic activity, shaking intensity, the population and building stock in the area, the damage from past earthquakes and other factors.

The USGS’s forecasts trade precision for statistical realism and speed. The first Napa assessment from the agency’s Prompt Assessment of Global Earthquakes for Response (PAGER) system came out 13 minutes after the quake, according to Kristin Marano, a geophysicist who works on the PAGER Project in Golden, Colorado.

The current estimate for the Napa quake is the 25th one issued in the past two-and-a-half days. Loss estimates have changed substantially four times, data provided by Marano shows. According to the first estimate, there was a 55 percent chance the loss would total less than $100 million and just a 14 percent chance of a loss over $1 billion. About a half-hour later, the probability of a loss under $100 million dropped by half, and the probability of a loss greater than $1 billion more than doubled, to 37 percent. It rose further, to 56 percent, by six hours after the quake. Then it fell to below 50 percent the day after the quake, before settling at its current level of 61 percent — with just a 2 percent chance of a loss under $10 million, one-tenth the level in the first alert.

[Table: USGS economic-loss estimates for the Napa quake across successive PAGER alerts]

The shift upward in the economic-loss estimates is even more dramatic than the probability percentages suggest. That’s because each loss category is 10 times as large as the one below it. Even if we conservatively assume that a loss in a given category comes in at the low end of that category’s range, an increase of 1 percentage point in the chance of losses of $100 billion or more — and a corresponding decrease in the chance of losses under $1 million — raises the expected loss by $1 billion. Using that same conservative assumption, the expected loss has increased 23-fold from the first estimate to the current estimate of about $8.3 billion.
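
Here's a minimal sketch of that expected-loss arithmetic. The order-of-magnitude loss bins mirror the ranges discussed above, but the probabilities are illustrative guesses chosen to be roughly consistent with the current alert's figures (about 2 percent under $10 million, 1-in-8 under $100 million, 61 percent over $1 billion, 1-in-4 over $10 billion), not actual PAGER output.

```python
# Low end of each loss bin, in millions of dollars, and illustrative
# (assumed, not official) probabilities for the current alert.
bin_low_ends = [0, 1, 10, 100, 1_000, 10_000, 100_000]
probabilities = [0.005, 0.015, 0.105, 0.265, 0.36, 0.19, 0.06]

expected_loss = sum(p * low for p, low in zip(probabilities, bin_low_ends))
print(f"Expected loss: ${expected_loss:,.0f} million")  # roughly $8.3 billion

# Shifting 1 percentage point from the lowest bin to the top bin adds
# 0.01 * $100,000 million = $1 billion to the expectation.
```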

These updates reflect that not every earthquake-sensor station transmits its readings of seismic data via cell networks or wireless Internet — either because they’re not equipped to do so, or because the quake disrupted transmission. At these stations, “someone has to go out and actually pull out a data card,” Marano said. When it comes in, the new data gets fed into the model, which produces updated estimates.

The fatality forecast has stayed stable through these updates. It also so far has proven correct, with no deaths but three or six people critically injured, depending on the report.

Roughly 500 subscribers, many of them first responders from state and federal U.S. agencies, and from other countries with earthquake risks, get emails with the alerts. And the speed of the system means it often beats news reports by a few hours, giving responders a head start, according to the USGS.

Its accuracy is harder to measure. In a paper for the Second European Conference on Earthquake Engineering and Seismology in Istanbul this week, Marano and six co-authors assessed PAGER forecasts for more than 2,500 events between 2010 and 2013. Most of the events were expected to kill no one and killed no one. For the rest, researchers compared USGS fatality forecasts with numbers from a USGS database and news reports. They found the forecasts usually were well calibrated: For example, most yellow earthquakes, expected to kill between one and 99 people, did in fact do so. But the system tended to underestimate the toll of earthquakes in parts of China and to overestimate death counts in southern Iran.
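
In essence, that calibration check asks how often the recorded death toll fell inside the forecast range for each alert level. Here's a minimal sketch of the idea; the yellow range (one to 99 deaths) comes from the text, the other bounds follow PAGER's published color scheme, and the events are invented examples rather than the paper's data.

```python
# Forecast fatality ranges by PAGER alert level.
RANGES = {
    "green": (0, 0),
    "yellow": (1, 99),
    "orange": (100, 999),
    "red": (1000, float("inf")),
}

# Invented events: (forecast alert level, recorded deaths).
events = [("green", 0), ("green", 0), ("yellow", 3), ("yellow", 12),
          ("yellow", 150), ("orange", 240), ("red", 1450)]

hits = {}
for level, deaths in events:
    low, high = RANGES[level]
    hits.setdefault(level, []).append(low <= deaths <= high)

for level, results in hits.items():
    print(f"{level}: {sum(results)} of {len(results)} forecasts contained the recorded toll")
```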

Here are two charts from the paper, which Marano provided and which were not online as of Wednesday afternoon:

[Charts: PAGER fatality forecasts vs. recorded deaths, from the initial and final alerts]

The authors had no good way to assess their economic-loss estimates. “Economic-loss data is very scarce and thus my comparison of actual vs. recorded would only have seven data points, and many of the ‘actual’ losses in my sources are estimates as well,” Marano said in an email. That’s much tougher than evaluating, say, election pollsters by comparing their survey results with vote counts.

Two economic-loss estimates for the same event also aren’t always comparable. For instance, USGS forecasts total losses. But private earthquake-monitoring companies often report insured losses. Eqecat put those at $500 million to $1 billion on Sunday for Napa.

Another gap in the USGS forecast is injury counts. “There are many levels of injuries, so does one consider injuries that require home care, those that require emergency care, those that require hospital stays, or a combination of all these?” Marano asked.


Wednesday marks the 50th anniversary of “Mary Poppins,” in which, as the Internet Movie Database puts it, “a magic nanny comes to work for a cold banker’s unhappy family.” The Disney film — based on the character created by Australian author P.L. Travers — takes place in 1910 in London. In one scene, Poppins encourages the Banks children to pay a woman a “tuppence” for bread crumbs to feed the birds (in the movie, Michael Banks is encouraged by his father and other bankers to instead invest his tuppence).

A reader asked us, in lieu of investing that 0.02 pounds in bird seed, what if Michael had invested it in a savings account? What would the exponential wizardry of compound interest do for him if he went back to his account today?

Answer? Not much! I plugged the details of the question into Wolfram Alpha‘s compound interest calculator, and unsurprisingly Michael’s payoff heavily depends on the interest rate. Had he put 0.02 pounds in an account with 6 percent interest compounded quarterly for 104 years, his account balance would read only 9.79 pounds now, which is about $16.23. Not exactly making a mint here.
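
The arithmetic behind that figure is the standard compound-interest formula, A = P(1 + r/n)^(nt). Here's a minimal sketch that reproduces the 9.79-pound result and lets you plug in the other scenarios below; it assumes quarterly compounding throughout, so the loansharking figure comes out in the same ballpark as, but not identical to, the Wolfram Alpha number.

```python
def future_value(principal, annual_rate, years, periods_per_year=4):
    """Compound interest: principal * (1 + r/n) ** (n * t)."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

tuppence = 0.02  # pounds
print(round(future_value(tuppence, 0.06, 104), 2))  # about 9.79 pounds
print(round(future_value(tuppence, 0.15, 104)))     # the 15 percent loansharking scenario
print(f"{future_value(tuppence, 0.06, 500):,.0f}")  # roughly 171 billion pounds
```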

The limited return on Michael’s tuppence speaks to the hype that sometimes surrounds compound interest. The reality is that to exploit exponential growth, you can’t have a low initial investment, a low interest rate and a short period of time. At least one of those variables needs to be big to get that big payout.

So let’s look at this from another direction. Let’s say Michael decides to go into loansharking with his tuppence and manages to finagle a 15 percent interest rate instead of 6 percent. Well, 104 years later, he’s turned that 0.02 pounds into 87,990 pounds — a fantastic return.

Or tweaking another factor, let’s say Michael accidentally cryogenically freezes himself “Futurama” style and wakes up 500 years later, in 2410. The Bank of England would inform him that his account was sitting at roughly 171 billion pounds.

Compound interest is fun to think about, but it doesn’t have the magical properties many people think it does. For it to work, you need either a lot of money, a lot of time or a great rate.

Unless Michael has access to cryogenics or doesn’t have scruples about unethical loan practices, he should probably just go ahead and feed the birds.

A note: Disney is the corporate parent of FiveThirtyEight and ESPN.


USA Basketball head coach Mike Krzyzewski on Friday made his final cuts before the 2014 FIBA World Cup, slicing the team’s roster to 12 players. So we now know who will represent the Stars and Stripes in Spain next week. But how does this year’s edition stack up to previous versions of Team USA?

To measure this, I used Statistical Plus/Minus (SPM), a box-score-based metric that tries to estimate a player’s on-court influence per 100 possessions. (For current players, I’d normally use ESPN’s Real Plus/Minus, but for this exercise we also need numbers for players going back to the early 1990s.) By averaging together the NBA performance of each player on a given team in the seasons before and after a particular international tournament — both FIBA events and the Olympics — we can approximate how much talent each American roster had to work with. (The full rosters are in a table at the end of this post.)

[Chart: average SPM ratings of U.S. national team rosters, 1992-2014]

A few notes: For this year’s team, I used minutes from the 2014 Team USA exhibitions, excluding players who were cut. I also averaged the players’ 2013-14 performance with what we’d predict for 2014-15 using the rough SPM projection system we’ve employed over the summer. The 1998 team wasn’t included because few of its players were in the NBA. Finally, Magic Johnson didn’t play in 1991-92 or 1992-93, so I used his SPM from 1990-91 and deducted 0.4 rating points per an aging curve I computed.
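
As a rough sketch of the roster-rating arithmetic: average each player's SPM from the seasons before and after the tournament, then take a minutes-weighted average across the roster (the notes above imply minutes weighting, so treat that detail as an assumption). The names and numbers below are placeholders, not the actual inputs.

```python
def roster_rating(players):
    """Minutes-weighted average of each player's mean SPM across two seasons.

    `players` maps a name to (spm_before, spm_after, tournament_minutes).
    """
    total_minutes = sum(minutes for _, _, minutes in players.values())
    weighted = sum((before + after) / 2 * minutes
                   for before, after, minutes in players.values())
    return weighted / total_minutes

# Placeholder data: (SPM season before, SPM season after, minutes played)
roster = {
    "Player A": (6.5, 7.0, 220),
    "Player B": (3.1, 2.4, 180),
    "Player C": (0.8, 1.5, 140),
}
print(round(roster_rating(roster), 2))
```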

It comes as no surprise that the U.S. saves its best rosters for the Olympics. Including the fabled 1992 Dream Team at No. 1, each of the four most gifted American teams on our list was sent to the Summer Games, and the drop-off between No. 4 (the Redeem Team of the 2008 Olympics) and No. 5 (the 2006 FIBA World Cup squad) is substantial.

You can also trace USA Basketball’s twisting path over the last two-plus decades by looking at these team ratings. The 1992 and 1996 Olympic teams were every bit the powerhouses their reputations would suggest, but the 2000 and 2004 versions were considerably weaker, culminating in an embarrassing performance in Athens. (That the 2000 team still won gold, however narrowly, while the 2004 squad fell to bronze probably speaks to how much teams in the rest of the world improved in the intervening four years.)

The humiliation of 2004 would lead to a renewed commitment to American basketball dominance, headed up by the brain trust of former Phoenix Suns executive Jerry Colangelo and Krzyzewski, the Duke University coach. Unlike the U.S. teams sent to the 1994, 1998 and 2002 FIBA Worlds, the 2006 team (and the subsequent 2007 FIBA Americas team) was nearly Olympic-level in quality, and the 2008 Olympic squad was the best the U.S. had fielded since 1996 — the lessons of the weak 2004 selection were duly heeded.

This year’s team is of roughly the same quality as the 2010 FIBA Worlds team (the U.S. tore through that tournament without losing). There is no LeBron James or Kevin Durant on the roster, but Stephen Curry, James Harden, Anthony Davis, Kyrie Irving, DeMarcus Cousins and company still make for a formidable group. (Our projection, based on last year’s numbers, also considers Derrick Rose to be a below-average player, which may be badly underrating him if Krzyzewski’s impressions from the exhibition season are correct.)

This isn’t the best team the U.S. has ever had to offer, but it’s above the usual standards of FIBA World Cup fare, 2006’s vengeance-minded selection notwithstanding.

[Table: full Team USA rosters with player SPM ratings]

CORRECTION (Aug. 27, 10:22 a.m.): A previous version of this article incorrectly said Mike Krzyzewski was Duke’s former men’s basketball coach. He is the current coach.


The number of homeless veterans in the United States has fallen 33 percent since 2010, to just under 50,000 as of January. The number of homeless veterans sleeping in the street, as opposed to in shelters, fell even faster, down nearly 40 percent over the past four years.

At least those were the figures put out by a trio of federal agencies in a news release Tuesday. When I first saw the numbers, I was more than a little skeptical. A change that dramatic often reflects a shift in the way data is collected or some other statistical quirk, not a trend. Moreover, the Obama administration has every incentive to make the numbers look good, especially at a time when the Department of Veterans Affairs is mired in scandal.

I’ve looked into the numbers, though, and it seems my skepticism was misplaced. The number of homeless veterans really does seem to be falling. What’s more, it’s falling at least in large part due to government intervention.

Here at FiveThirtyEight, we had a few theories about how the numbers could be misleading. My first thought was demographics: Vietnam veterans are starting to qualify for Social Security and Medicare, which might be helping them get off the streets. In that scenario, the decline in homelessness is real, but it isn’t anything for which the Obama administration can claim credit.

In a similar vein, DataLab editor Micah Cohen wondered whether the post-2010 decline was more about the improving economy than any policy shift. My colleague Carl Bialik, meanwhile, offered a more pessimistic theory: Maybe this year’s unusually cold winter artificially reduced homelessness counts by driving people indoors.

None of those explanations turned out to hold much water. Carl’s is the easiest to debunk (sorry, Carl): The number of homeless vets has been falling pretty steadily for the past four years, so this wasn’t a one-year blip driven by cold weather. The same goes for the number of “unsheltered” homeless veterans, who were presumably the ones most affected by the icy winter.

Micah’s theory about the economic recovery is harder to reject because good data on the number of homeless veterans only goes back to 2009. But we have data on overall homelessness going back to 2007, before the recession began. Those figures show that homelessness didn’t increase much during the recession, ticking up only slightly in 2010. In general, the number of homeless Americans has been trending down, but not nearly as quickly as the number of homeless vets. There’s no particular reason to think that the recession would have disproportionately hurt veterans, or that the recovery would have disproportionately helped them. So it’s unlikely that the recovery is the only thing helping vets.

As for my theory, it’s possible that demographics are playing some role, but they don’t appear to be the primary explanation. The government’s main annual report on homelessness doesn’t usually include any demographic information, but the 2011 report did provide a basic age breakdown of veterans in homeless shelters. (It doesn’t have data on veterans living on the street.) About 10 percent of them were 62 or older, and 42 percent were between 51 and 61. A report from the nonpartisan Congressional Research Service in November also looked at data from the VA’s programs for homeless vets. Those sources found that between a quarter and a third of homeless vets served during the Vietnam era, but that even more – about half – served in the post-Vietnam, pre-Gulf War era. More recent veterans, meanwhile, make up an additional 20 percent or so. So, while there are indeed lots of homeless veterans entering their 60s, there isn’t the kind of big post-Vietnam drop-off that could explain the recent decline in homelessness.


What’s really going on? According to Will Fischer, an analyst who has written about the issue for the left-leaning Center on Budget and Policy Priorities, this is a case of a government intervention that worked. In recent years, the federal government has vastly increased spending on housing assistance for veterans, mostly in the form of direct housing vouchers.

“It’s been a policy priority,” Fischer said. “It would be surprising if there was this much policy focus and it didn’t have an impact.”

Just to make sure I wasn’t missing anything, I also called the National Coalition for Homeless Veterans, an advocacy group that presumably has no incentive to minimize the problem. It said it considers the government’s official figures reliable.


Last Thursday, FXX began its “Every Simpsons Ever” promotion. The network’s airing “The Simpsons” — all 552 episodes — over 12 consecutive days, and Season 11’s “Missionary: Impossible,” at 10 a.m. Tuesday, marked the end of hour No. 120.

As with any ludicrous television marathon — especially with a show as popular as “The Simpsons” — there are going to be a few ambitious individuals who try to watch as long as they can. I’d bet that some writer somewhere is working on a stunt piece titled “I Tried To Watch Every Simpsons Ever And Here’s What Happened.”

So, what’s happening to these poor souls? I reached out to my friend Olivia Walch, a mathematics doctoral student at the University of Michigan. Walch made the sleep-repair app Entrain, and she sent us a pile of sleep deprivation studies so we could find out what people trying to mainline “The Simpsons” are going through.

The impairment brought on by sleep deprivation in the first couple of days is often compared to impairment from alcohol consumption. Essentially, for the first few hours of sleep deprivation, the watcher would feel as if they’d thrown back a couple of Duff beers. A 1997 study with 40 participants published in the journal Nature (find links to all mentioned studies at the end of this piece) calculated that “each hour of wakefulness between 10 and 26 hours was equivalent to the performance decrement observed with a 0.004% rise in blood alcohol concentration” (BAC).

So, after 17 hours of wakefulness — right as Bart gives blood to Mr. Burns in the Season 2 finale, “Blood Feud” — you’d be functioning at the level of someone with a 0.05 percent BAC. After 24 hours — as Season 3’s classic “Lisa the Greek” airs — you’d have a performance deficit of someone with a BAC of 0.10 percent.
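
Those hour-to-episode conversions follow from FXX's pace of roughly two episodes per hour-long broadcast slot, an assumption that closely matches the episode references in this piece. A minimal sketch:

```python
EPISODES_PER_HOUR = 2  # two roughly 22-minute episodes fill an hour once ads are included

def episode_number(hours_into_marathon):
    """Approximate series episode airing after a given number of marathon hours."""
    return round(hours_into_marathon * EPISODES_PER_HOUR)

for hours in (17, 24, 49.5, 100, 120):
    print(f"hour {hours}: around episode {episode_number(hours)}")
```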

At hour 49.5 — just as the Season 5 classic “Sweet Seymour Skinner’s Baadasssss Song” would hit the airwaves — we have data about how people’s sense of humor changes with sleep deprivation. The authors of a 2006 study, published in the journal Sleep, kept people awake and then gave them a humor appreciation test. The researchers found that people who have gone without sleep tend not to find things as funny as they normally would. “The means for both verbal and visual humor” of people kept awake for 49.5 hours without stimulants were a full standard deviation below those of rested participants.

And after a certain amount of time, the sleep-deprived start to see stuff.

In “Responses to Sleep Deprivation,” a 1962 study published in the Annals of the New York Academy of Sciences, most of the 12 subjects grew increasingly paranoid at about the 100-hour mark; they developed “systematized delusions.” In “Simpsons” terms, that means systematized delusions set in shortly after Season 9’s Emmy-winning “Trash of the Titans,” in which Homer becomes Springfield’s sanitation commissioner.

Having just made it past the 120-hour benchmark early Tuesday, our hypothetical marathoners are, perhaps surprisingly, feeling pretty good.

“Around hour 120 is when you see the ‘fifth day turning point,’ ” Walch said. “A lot of older papers observe this — basically, people get a temporary second wind and feel better about the whole no-sleep thing. But at the same time, this is when the psychotic symptoms really start to manifest. So, it’s possible that the ‘second wind’ is actually just a sign that you’re really losing it.”

Essentially, it’s about to get weird.

Walch here referred to several papers discussing that turning point. A 1962 study ominously titled “The Psychosis of Sleep Deprivation” (also in the Annals of the New York Academy of Sciences) observed that “gross disturbances of reality testing are seen to persist for increasing periods of time” around the fifth night of sleep deprivation, with hallucinatory experiences becoming more vivid and paranoia increasing.

So, what’s coming next?

The further back in time, the weirder — and less rigorous — the papers get. A 1933 study in the Archives of Neurology & Psychiatry monitored a volunteer who thought that sleep wasn’t really necessary. The individual (“Z”) stayed up for nine-and-a-half days with only eight brief incidents of intermittent accidental sleep. (This study would probably not be ethically or scientifically sound in contemporary sleep research.)

“On the ninth day he was able to think only in fragments and had reminiscences,” the authors reported. Each day, Z was asked to stare into a crystal orb and report what he saw. “He saw several soap dishes, but he looked away most of the time.” In “Simpsons” time, this would put fragmented thinking just after Season 20’s “Lisa the Drama Queen,” the final episode broadcast in standard definition. The last episode that Z would have stayed up to watch was Season 21’s “American History X-cellent,” the 458th episode. On this day, the authors said, Z “was unable to report a single thought or image.”

And after that? That’s when we really start running out of even anecdotal academic data. If the pattern holds, the more recent you get in the run of “The Simpsons,” the more incoherent and prone to passing out you get.

Which, I suppose, we already knew.

Here are the links to the studies. They make for outstanding reading.


When a reader asked me last week about trust funds, I ended up taking a bigger look at how many Americans receive inheritances and how much they receive. We’re not talking about pocket money here. According to the 2010 Survey of Consumer Finances, the median inheritance is $69,000. So, even though only 22.5 percent of Americans said they had inherited, I wanted to find out who those people were.

And, as my original article suggested, this isn’t mere nosiness: The demographics of inheritance could affect economic equality.

Maury Gittleman at the U.S. Bureau of Labor Statistics teamed up with Edward N. Wolff of New York University to produce a working paper that looked at inheritances from 1989 to 2007. When they studied the prosperity of households over that period, the Americans most likely to inherit were those who, on the face of it, needed it the least. On average, 38 percent of households earning $250,000 a year or more received an inheritance, compared with only 17 percent of households with an income of $15,000 or less.

The gap was even wider when Gittleman and Wolff looked at those households’ wealth rather than annual incomes. Wealthier households aren’t just more likely to inherit, they also receive larger sums.

There were racial and ethnic differences, too: 25 percent of non-Hispanic whites said they had received an inheritance in 2007, but just 10 percent of African-Americans and 6 percent of Hispanics said the same. And those gaps haven’t changed by more than a few percentage points since 1989.

In light of all this, you might be surprised that the authors concluded that inheritances and other transfers “had a sizable effect on reducing the inequality of wealth.”

[Table: share of households receiving inheritances, by income and wealth group, from Gittleman and Wolff]

The authors, however, acknowledge important caveats: Their research doesn’t consider a possible underreporting trend (if rich people are more likely to be ashamed of inherited wealth, they might conveniently forget to mention it in a survey), and the paper’s method of calculating the data might have overemphasized transfers to the poor and thereby underestimated inequality.

Of course, this is just one study. Others have looked at inheritance in the U.K., Paris and the U.S., and reached different conclusions. But that research also used different data. The advantage of referring to the Gittleman and Wolff study is that it used the same data set I used in my original piece.

And one final thought: Those inheritors might not be receiving wealth in the form of cash — businesses (perhaps with accompanying Rolodexes) and homes are also among the inheritances. Those transfers could have more lasting effects, which might not be captured in the absolute dollar value of the inheritance at the time it was inherited.


Every Monday, the National Bureau of Economic Research, a nonprofit organization made up of some of North America’s most respected economists, releases its latest batch of working papers. The papers aren’t peer-reviewed, so their conclusions are preliminary (and occasionally flat-out wrong). But they offer an early peek into some of the research that will shape economic thinking in the years ahead. Here are a few of this week’s most interesting papers.

 

Title: Import Competition and the Great U.S. Employment Sag of the 2000s

Authors: Daron Acemoglu, David Autor, David Dorn, Gordon H. Hanson, Brendan Price

What they found: Increased competition from Chinese imports between 1999 and 2011 led to a loss of 2 million to 2.4 million jobs in the United States.

Why it matters: The U.S. labor market was growing slowly even before the Great Recession hit. Jobs in manufacturing, which held steady in the 1990s, fell nearly 20 percent between 2000 and 2007. Partly causing this sag in the job market was an astonishing increase in import competition from China. The share of U.S. manufacturing imports from China rose from 4.5 percent in 1991 to 23.1 percent in 2011. The direct employment losses associated with this flood of Chinese goods totaled 560,000 manufacturing jobs (about 10 percent of all manufacturing jobs lost between 1999 and 2011). But the researchers employ other statistical strategies to capture the indirect effects, such as jobs lost due to lower aggregate demand (following the loss of those manufacturing jobs), or the ripple effects on other industries not directly affected by the Chinese imports (but linked to those U.S. industries affected). All told, the direct and indirect losses add up to 2 million to 2.4 million jobs.

Key quote: “Our estimates show sizable job losses in exposed industries, and few if any offsetting job gains in non-exposed industries, a pattern that is consistent with substantial job loss due to aggregate demand spillovers.”

Data they used: U.N. Comtrade Database for international trade data; County Business Patterns for employment data

 

Title: Affirmative Action and Human Capital Investment: Evidence From a Randomized Field Experiment

Authors: Christopher Cotton, Brent R. Hickman, Joseph P. Price

What they found: Affirmative action policies for disadvantaged fifth- to eighth-grade students significantly improve performance on a national mathematics exam and raise the amount of effort the students spend preparing. Such policies, however, have no negative impact on the incentive to study among advantaged students.

Why it matters: Most affirmative action studies about education seek to measure the effect after admission is gained, rather than study the incentives beforehand. The study concerns an experiment on fifth- to eighth-grade students preparing to take the American Mathematics Competition 8 (AMC8) exam. The students were divided into two groups: one “colorblind” control group and one affirmative action group. Prizes were reserved for disadvantaged students (defined as an underrepresented minority) in the affirmative action group. Conversely, prizes were awarded based on scores alone in the control group. Researchers then tracked the students’ time spent on a practice website in the 10 days before the exam. The experiment’s results showed that affirmative action students saw a significant boost in pre-exam studying, and their scores were significantly improved. There was hardly any evidence that, on average, advantaged students put in less effort before the exam.

Key quote: “From a policy perspective, these finds are important, as they indicate how AA [affirmative action] not only promotes more racial diversity on college campuses, but at the same time it may also narrow achievement gaps between Whites/Asians and Blacks/Hispanics by motivating higher levels of pre-college human capital investment on the part of under-represented minority students.”

Data they used: Experimental study data

 

Title: Making Progress on Foreign Aid

Author: Nancy Qian

What she found: No more than 5.25 percent of $3.5 trillion in foreign aid has gone to the poorest 20 percent of countries.

Why it matters: Between 1960 and 2013, at least $3.5 trillion (in 2009 prices) was deployed as foreign aid from rich countries to poor ones. And the flow of aid was relatively steady, originating from roughly the same group of top donor countries (the U.S., Japan, France, Germany and the U.K.). But aid does not flow to the countries that need it most: The poorest fifth of countries received only between 1.69 and 5.25 percent of all foreign aid between 1960 and 2013. The composition of foreign aid also varies widely, from debt relief to humanitarian assistance to cash transfers to food. The author also concludes that the state of foreign aid research needs an upgrade. In reviewing the literature on the effectiveness of existing and past aid, she finds results that are sensitive to timing, measurement, sampling and statistical judgment.

Key quote: “The polarized arguments of the necessity of aid versus the detrimental effects of aid are premature, and the discussion of total foreign aid and the lack of economic improvement for the poorest countries in the world is somewhat misleading.”

Data she used: Foreign aid, termed “Official Development Assistance,” as reported via the Organization for Economic Cooperation and Development’s Development Assistance Committee


At six of the last 10 Grand Slam tournaments, a woman has reached her first major singles final. All six first-time finalists lost the match, four of them in straight sets while winning no more than six games. Five then lost their first match at the next tournament. None has reached another major final since. Four of them failed to reach the quarterfinals at the next major they played. Three have fallen out of the Top 10 in the rankings.

Breakthrough performances have been followed by letdowns.

[Table: how recent first-time women's Grand Slam finalists fared afterward]

The most promising of the six players is Simona Halep. She came the closest to winning her major final debut, taking 15 games off Maria Sharapova at the French Open in June. Halep followed that by reaching the semifinals at Wimbledon the next month. And she enters the U.S. Open — which began this week — ranked No. 2 in the world. Yet she doesn’t look likely to reach the final in Flushing, New York. She won just two matches at warm-up tournaments, and she dropped the first set to unranked Danielle Rose Collins (the U.S. college singles champ) before coming back to win her opening match Monday.

“Every day we have to work to reach the top and to stay there, because it’s more difficult to stay there than to reach it,” Halep said at a news conference after her win.

It’s a bit early to declare the most recent first-time finalist a letdown; Eugenie Bouchard hasn’t gotten a chance to play another major since reaching the Wimbledon final this summer. On Tuesday, she begins her U.S. Open against Olga Govortsova. Early returns for Bouchard aren’t good, though: She’s won just one match in three tournaments since getting routed by Petra Kvitova in the Wimbledon final.

Like the current group of young contenders, Kvitova didn’t immediately back up her breakthrough performance. She won Wimbledon in 2011, at age 21, in her first major final. Then she lost three of her next five matches, including her first-round match at the U.S. Open. But she won two tournaments and the Fed Cup later that summer, and Wimbledon this summer. She has been a regular in the Top 10 since reaching her first major final.

Victoria Azarenka followed shortly after Kvitova and was more consistently successful. She reached her first major final at the Australian Open in 2012, at age 22, and won it — routing Maria Sharapova, as Kvitova had done the previous summer at Wimbledon. Then Azarenka won the next two tournaments she played and held the No. 1 ranking for much of the next year, including during her successful defense of her Australian Open title the next year.

It’s natural that an athlete who is playing her first major final against a player who has been there before would be an underdog. And it’d be unfair to expect the player to repeat her performance at the next major, rather than regressing a bit to the mean. Plus, the women who have broken through recently are young and have time to return to the sport’s most prominent matches.

Among the six most recent first-time major finalists, Sara Errani was the oldest at the time of her breakthrough. She had just turned 25 when she reached the 2012 French Open final, relatively young in the aging sport of tennis. Four of the others were younger than 24 when they reached their first Grand Slam final. But only Bouchard was younger at her first breakthrough than Kvitova and Azarenka were.

