Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. Two puzzles are presented each week: the Riddler Express for those of you who want something bite-size and the Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either,^{1} and you may get a shoutout in next week’s column. Please wait until Monday to publicly share your answers! If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter.

First, an unlucky puzzle that comes a week late:

Depending on the year, there can be one, two or three Friday the 13ths. Last week happened to be the second Friday the 13th of 2020.

What is the *greatest* number of Friday the 13ths that can occur over the course of four consecutive calendar years?

*Extra credit:* What’s the greatest number of Friday the 13ths that can occur over a four-year period (i.e., a period that doesn’t necessarily begin on January 1)?

From Patrick Lopatto comes a riddle we can all be thankful for:

To celebrate Thanksgiving, you and 19 of your family members are seated at a circular table (socially distanced, of course). Everyone at the table would like a helping of cranberry sauce, which happens to be in front of you at the moment.

Instead of passing the sauce around in a circle, you pass it randomly to the person seated directly to your left or to your right. They then do the same, passing it randomly either to the person to *their* left or right. This continues until everyone has, at some point, received the cranberry sauce.

Of the 20 people in the circle, who has the greatest chance of being the *last* to receive the cranberry sauce?

Congratulations to 👏 Nathan Ainslie 👏 of Bloomington, Indiana, winner of last week’s Riddler Express.

Last week, you were a contestant on the TV show Jeopardy! You were competing in the (single) Jeopardy! round and your opponents were simply no match for you. You chose first and never relinquished control, working your way horizontally across the board by first selecting all six $200 clues, then all six $400 clues, and so on, until you finally selected all the $1,000 clues. You responded to each clue correctly before either of your opponents could.

One randomly selected clue was a Daily Double. Rather than award you the prize money associated with that clue, it instead allowed you to double your winnings (up to that point) or wager up to $1,000 should you have less than that. Being the aggressive player you are, you always bet the most you could have. (In reality, the Daily Double was more likely to appear in certain locations on the board than others, but for this problem you assumed it had an equal chance of appearing anywhere on the board.)

How much money did you expect to win during the Jeopardy! round?

There were a total of 30 clues on the board, and any one of those 30 clues could have been the Daily Double. That meant there were 30 cases to consider: when the Daily Double was the first clue you selected, when it was the second clue, the third clue, and so on. And when it was one of the first six clues selected, the Daily Double was worth $1,000; otherwise, it doubled your money. (Technically, if it was the sixth clue, it was worth $1,000 *and* doubled your money, since you had won exactly $1,000 by that point.)

Many solvers listed out all 30 cases and averaged the winnings. Alternatively, you could have added up the total amount of prize money across all the cases and then divided by 30. Across these cases, each clue was the Daily Double once, which meant it *wasn’t* the Daily Double 29 times. Adding up the values of all the clues — without worrying about the Daily Double — gave you 6·(200 + 400 + 600 + 800 + 1000), or $18,000, which meant that 29 boards of clues were worth $522,000.

Now to add up the Daily Doubles. As we already said, for the first six cases the Daily Double was worth $1,000. When the Daily Double was the seventh clue, it was worth $1,200. From there, its value increased by $400 until it was $3,600 as the 13th clue. Then, it increased by $600 until it was $7,200 as the 19th clue. Next, it increased by $800 until it was $12,000 as the 25th clue. Finally, it increased by $1,000 until it was a whopping $17,000 as the 30th and final clue. That was some James Holzhauer wagering right there.

Even without a spreadsheet, you needed some hefty addition to tally up these Daily Doubles. Their total value across the 30 cases turned out to be $192,000 — roughly 36 percent of what the non-Daily Double clues had been worth.

Combining the regular clues with the Daily Doubles gave a sum of $714,000 for all 30 cases. That meant the average — the amount you’d “expect” to win — was **$23,800**.
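That whole tally can be checked with a short script that replays all 30 cases (a sketch of the approach described above; the function name is mine):

```python
# All 30 clue values in the order selected: the $200 row first, then $400, etc.
clue_values = [v for v in (200, 400, 600, 800, 1000) for _ in range(6)]

def winnings(dd_index):
    """Total prize money if the Daily Double is the clue at dd_index (0-based)."""
    total = 0
    for i, value in enumerate(clue_values):
        if i == dd_index:
            total += max(total, 1000)  # double up, or wager up to $1,000
        else:
            total += value
    return total

expected = sum(winnings(i) for i in range(30)) / 30
print(expected)  # 23800.0
```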

For extra credit, instead of working your way horizontally across the board, you selected random clues from anywhere on the board, one at a time. Now how much money did you expect to win during the Jeopardy! round?

This was a much thornier version of the problem. First, there were five rows in which the Daily Double might have appeared. Then, for each row, you had to consider the order in which you worked your way across the board. So instead of 30 cases, there were in fact 5·30!/((6!)^{4}5!), or about 4.1×10^{19} cases to consider.

From here, most solvers used Monte Carlo methods, simulating the game of Jeopardy! thousands or even millions of times to approximate the answer. A few brave coders (like Josh Silverman, Lowell Vaughn and Alex Vornsand) found that the answer was **approximately $26,150**. (This was quite close to the result you’d get — $26,146.67 — if you assumed every clue had been worth the average amount of $600. The answer was *slightly* larger than this, since the Daily Double more than doubled your winnings whenever you had less than $1,000.)
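A minimal Monte Carlo sketch of that simulation approach (the seed and trial count are arbitrary choices of mine):

```python
import random

clue_values = [v for v in (200, 400, 600, 800, 1000) for _ in range(6)]

def simulate(rng):
    """One game: clues picked in random order, Daily Double uniform on the board."""
    order = clue_values[:]
    rng.shuffle(order)
    dd = rng.randrange(30)  # which pick turns out to be the Daily Double
    total = 0
    for i, value in enumerate(order):
        if i == dd:
            total += max(total, 1000)  # bet the most you can
        else:
            total += value
    return total

rng = random.Random(0)
trials = 200_000
estimate = sum(simulate(rng) for _ in range(trials)) / trials
print(round(estimate))  # lands near $26,150
```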

It made sense that picking clues randomly would net you more winnings on average, because you now had a greater chance of selecting higher-value clues (like the $800 and $1,000 rows) before hitting the Daily Double.

For this puzzle, we’ll let Alex Trebek have the last word. We miss you.

Congratulations to 👏 Alex Zorn 👏 of Brooklyn, New York, winner of last week’s Riddler Classic.

Last week, you modeled blown football leads, something the Atlanta Falcons know a thing or two (or three) about. The Georgia Birds and the Michigan Felines were playing a game in which a fair coin was flipped 101 times. In the end, if heads came up at least 51 times, the Birds won; but if tails came up at least 51 times, the Felines won.

What was the probability that the Birds had at least a 99 percent chance of winning at *some point* during the game — meaning their probability of victory was 99 percent or greater given the flips that remained — and then proceeded to lose?

This was a challenging riddle, to be sure. For starters, you had to make sense of what it meant to have “at least a 99 percent chance of winning,” while avoiding the tempting answer of 1 percent. Before any flips were made, the Birds had a 50 percent chance of winning. But suppose, through incredible luck, that the first 50 tosses all came up heads. From there, the Birds could still technically lose if the final 51 tosses all came up tails — an event whose probability was 1/2^{51}.

That was just one (albeit very unlikely) way for the Birds to have a 99 percent chance of winning and then blow the game. But there were many other, more likely scenarios, each of which involved an excess of heads flipped toward the beginning. One way to add up the probabilities of these scenarios was to analyze (preferably via code) what was happening at each combination of wins (*W*) and losses (*L*) for the Birds.

Here’s a graph showing, in red, the pairs (*W*, *L*) from which the Birds had at least a 99 percent chance of winning. You could determine these directly using combinatorics, or by working backwards recursively, noting that the probability of winning from (*W*, *L*) was the average of the probabilities from (*W*+1, *L*) and (*W*, *L*+1).
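That recursion can be coded directly; here’s a sketch (constant and function names are mine):

```python
from functools import lru_cache

NEED = 51  # 51 heads win it for the Birds (out of 101 flips)

@lru_cache(maxsize=None)
def win_prob(w, l):
    """Birds' chance of winning from w heads and l tails flipped so far."""
    if w >= NEED:
        return 1.0
    if l >= NEED:
        return 0.0
    return 0.5 * (win_prob(w + 1, l) + win_prob(w, l + 1))

print(round(win_prob(0, 0), 10))  # 0.5 before any flips
print(win_prob(50, 0) > 0.99)     # True: (50, 0) sits in the red region
```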

Next, you wanted to find the probability that the Birds passed through at least one of the red locations in the graph above. That is, for each (*W*, *L*), what was the probability that at *some* point the Birds had a 99 percent chance of winning?

This graph was very similar to the first. But there were some places — in green and aqua — where the Birds had ventured into the 99 percent region and then back out. These paths through the graph were what made it possible for the Birds to blow their lead.

From here, you had to focus on which of these paths resulted in an overall loss for the Birds, and work backwards. The final graph below shows the Birds’ chances of reaching a 99 percent chance of victory at some point and then blowing the lead, for each (*W*, *L*).

These chances were greatest when the Birds had won 50 flips and lost 51, when there was a roughly 2 percent chance that they had blown a 99 percent lead somewhere along the way. Working backwards, their chances of blowing a lead when they had zero wins and zero losses were about 10 times smaller, or **0.21 percent**. This was the solution to the riddle: the Birds’ chances of at some point having a 99 percent chance at victory and then proceeding to lose.
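One way to sketch that backwards-working calculation: reuse the win-probability recursion described above, with an extra flag for whether the path has already entered the 99 percent region (names and the threshold handling are my own choices):

```python
from functools import lru_cache

NEED = 51  # 51 heads win it for the Birds (out of 101 flips)

@lru_cache(maxsize=None)
def win_prob(w, l):
    """Birds' chance of winning from w heads and l tails flipped so far."""
    if w >= NEED:
        return 1.0
    if l >= NEED:
        return 0.0
    return 0.5 * (win_prob(w + 1, l) + win_prob(w, l + 1))

@lru_cache(maxsize=None)
def blow_prob(w, l, hit):
    """Chance the Birds at some point reach a 99 percent win probability
    (hit = that has already happened) and then go on to lose."""
    hit = hit or win_prob(w, l) >= 0.99
    if w >= NEED:
        return 0.0                  # the Birds won; no blown lead
    if l >= NEED:
        return 1.0 if hit else 0.0  # the Birds lost; was the lead blown?
    return 0.5 * (blow_prob(w + 1, l, hit) + blow_prob(w, l + 1, hit))

print(round(100 * blow_prob(0, 0, False), 2))  # about 0.21 (percent)
```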

For extra credit, instead of 101 total flips, there were many, many more (i.e., you had to consider the limit as the number of flips went to infinity). Again, the Birds won if heads came up at least half the time. *Now* what was the probability that the Birds had a win probability of at least 99 percent at some point and then proceeded to lose?

To solve this, some coders continued increasing the number of flips beyond 101 and looked for asymptotic behavior. This week’s winner, Alex Zorn, approached the extra credit analytically by first defining three key probabilities:

*h*, the probability that the Birds ever hit 99 percent win probability

*c*, the probability that the Birds choke after attaining their 99 percent win probability

*w*, the probability that the Birds hit 99 percent and go on to win

With these three variables defined, Alex was able to set up a few equations. For example, *w* = 0.99·*h* and *c* = 0.01·*h*, since that’s what was meant by a 99 percent win probability. Moreover, the probability that the Birds were the ultimate winners was 0.5, which *had to* equal *w*. That meant *h* = 50/99, which in turn meant that *c*, the probability of blowing that 99 percent lead, was **1/198**.
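Written out, with *w*, *c* and *h* as defined above, the chain of deductions is:

```latex
w = 0.99\,h, \qquad c = 0.01\,h, \qquad w = \tfrac{1}{2}
\quad\Longrightarrow\quad
h = \frac{1/2}{0.99} = \frac{50}{99}, \qquad
c = \frac{1}{100}\cdot\frac{50}{99} = \frac{1}{198}.
```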

Finally, solver Allen Gu noticed some rather peculiar oscillating behaviors as the number of coin flips increased and the threshold for “choking” was closer to 50 percent:

Personally, I’d be very curious to see this plotted on a logarithmic graph. My hunch is that the oscillations are due to the discrete nature of the problem, when the border between the yellow and blue regimes in that first graph shifts.

In any case, I have been informed that the Birds are doing a little better as of late. Those Los Angeles Lightning Bolts, on the other hand … not so much.

Well, aren’t you lucky? There’s a whole book full of the best puzzles from this column and some never-before-seen head-scratchers. It’s called “The Riddler,” and it’s in stores now!

Email Zach Wissner-Gross at riddlercolumn@gmail.com


Last Sunday we lost Alex Trebek, a giant in the world of game shows and trivia. The show he hosted over the course of four decades — Jeopardy! — has previously appeared in this column. Today, it makes a return.

You’re playing the (single) Jeopardy! round, and your opponents are simply no match for you. You choose first and never relinquish control, working your way horizontally across the board by first selecting all six $200 clues, then all six $400 clues, and so on, until you finally select all the $1,000 clues. You respond to each clue correctly before either of your opponents can.

One randomly selected clue is a Daily Double. Rather than award you the prize money associated with that clue, it instead allows you to double your current winnings or wager up to $1,000 should you have less than that. Being the aggressive player you are, you always bet the most you can. (In reality, the Daily Double is more likely to appear in certain locations on the board than others, but for this problem assume it has an equal chance of appearing anywhere on the board.)

How much money do you expect to win during the Jeopardy! round?

*Extra credit:* Suppose you change your strategy. Instead of working your way horizontally across the board, you select random clues from anywhere on the board, one at a time. Now how much money do you expect to win during the Jeopardy! round?

The solution to this Riddler Express can be found in the following week’s column.

From Angela Zhou comes a bad football puzzle. The puzzle’s great, but the football is bad:

Football season is in full swing, and with it have been some incredible blown leads. The Atlanta Falcons know a few things about this, not to mention a certain Super Bowl from a few years back. Inspired by these improbabilities, Angela wondered just how likely one blown lead truly is.

The Georgia Birds and the Michigan Felines play a game where they flip a fair coin 101 times. In the end, if heads comes up at least 51 times, the Birds win; but if tails comes up at least 51 times, the Felines win.

What’s the probability that the Birds have at least a 99 percent chance of winning at *some point* during the game — meaning their probability of victory is 99 percent or greater given the flips that remain — and then proceed to lose?

*Extra credit: *Instead of 101 total flips, suppose there are many, many more (i.e., consider the limit as the number of flips goes to infinity). Again, the Birds win if heads comes up at least half the time. *Now* what’s the probability that the Birds have a win probability of at least 99 percent at some point and then proceed to lose?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 Christoph 👏 of Southampton, United Kingdom, winner of last week’s Riddler Express.

Last week, FiveThirtyEight’s editor Santul Nerkar completed two 20-mile runs on a treadmill while training for the New York City Marathon. For the first run, he set the treadmill to a constant speed so that he ran every mile in 9 minutes.

The second run was a little different. He started at a 10-minute-per-mile pace and accelerated continuously until he was running at an 8-minute-per-mile pace at the end. Moreover, Santul’s minutes-per-mile pace (i.e., *not* his speed) changed linearly over time. So a quarter of the way through the duration (in time, not distance) of the run, his pace was 9 minutes, 30 seconds per mile, halfway through it was 9 minutes per mile, etc.

Which training run was faster (i.e., took less time) overall? And what were Santul’s times for the two runs?

The first run wasn’t particularly tricky. If Santul ran 20 miles and each mile took 9 minutes, then he ran for a total of **180 minutes**, or 3 hours.

It was the second run that tripped a few readers up. His initial pace was 10 minutes per mile, meaning he was running 6 miles per hour. His final pace was 8 minutes per mile, or 7.5 miles per hour. At this point, you might have assumed that his speed steadily (i.e., linearly) increased from 6 to 7.5 miles per hour. But that was not the case!

As the problem stated, it was Santul’s minutes-per-mile *pace* that steadily decreased (i.e., he sped up) from 10 minutes per mile down to 8 minutes per mile. If it took him *T* minutes to run the 20 miles, then his pace after *t* minutes was 10−2*t*/*T* minutes per mile. (You can check that this equals 10 when *t* = 0, and 8 when *t* = *T*.) If his pace was 10−2*t*/*T* minutes per mile, then what was his speed? It was 60/(10−2*t*/*T*), which gave you the correct units of miles per hour.

At this point, you knew Santul’s speed as a function of time. To figure out how long he was running for, you had to integrate this speed over time — yes, you needed calculus — and you had to set that equal to 20, his distance in miles. This is precisely what solver Eli Wolfhagen did, and solving this integral for *T* gave the correct answer: 2/(3·ln(5/4)) hours, or about **179.26 minutes**. Accelerating this way meant that Santul ran about 45 seconds faster, which can feel like an eternity when running a marathon.
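A quick numerical check of that calculus, under the setup above:

```python
import math

# Closed form from the integral: (T/2)·ln(5/4) = 20 miles, so
# T = 40/ln(5/4) minutes — the same quantity as 2/(3·ln(5/4)) hours.
T = 40 / math.log(5 / 4)

# Midpoint-rule check: running dt minutes at a pace of (10 - 2t/T)
# minutes per mile covers dt / (10 - 2t/T) miles.
n = 1_000_000
dt = T / n
miles = sum(dt / (10 - 2 * ((i + 0.5) * dt) / T) for i in range(n))
print(round(T, 2), round(miles, 2))  # 179.26 20.0
```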

No matter how he trained, I think it’s fair to say that Santul ran one hell of a race!

Congratulations to 👏 Nick Meyer 👏 of Oakland, California, winner of last week’s Riddler Classic.

Last week, inspired by John von Neumann, you tried to simulate a fair coin using a biased coin that had a probability *p* of landing heads.

Suppose you wanted to simulate a fair coin in at most three flips. For which values of *p* was this possible?

Von Neumann’s approach worked for any value of *p*, but it didn’t guarantee the simulation would finish within any fixed number of flips. However, for certain values of *p*, it was indeed possible to simulate a fair coin in just a few flips.

Of course, with one flip there was only one way to simulate a fair coin — when *p* was 1/2.

With two flips, things got a little more complicated. When *p* was 1/2, you could still simulate a fair coin (e.g., the same outcome on both flips could simulate heads, while different outcomes simulated tails). But there were two other values of *p* that were also possible. When *p* was 1/√2, the probability of getting two heads (HH) was 1/2, while the probability of getting anything else (HT, TH or TT) was also 1/2. You could similarly simulate a fair coin when *p* was 1−1/√2, in which case the probability of getting TT was 1/2.
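A quick numerical check of those two-flip biases:

```python
import math

# With p = 1/sqrt(2), two heads (HH) occurs with probability p^2 = 1/2;
# with q = 1 - 1/sqrt(2), two tails (TT) occurs with probability (1-q)^2 = 1/2.
p = 1 / math.sqrt(2)
q = 1 - 1 / math.sqrt(2)
print(abs(p ** 2 - 0.5) < 1e-12, abs((1 - q) ** 2 - 0.5) < 1e-12)  # True True
```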

So with two coin flips, there were three values of *p* that could be used to simulate a fair coin: 1/2, 1/√2 and 1−1/√2. Just one problem — the riddle asked about *three* flips, not two. But at this point, your strategy was set. You were looking for different ways to cobble together different outcomes so that their combined probabilities equaled exactly 1/2.

With three coin flips, there were eight total outcomes:

- HHH, which had probability *p*^{3}
- HHT, HTH and THH, which each had probability *p*^{2}(1−*p*)
- HTT, THT and TTH, which each had probability *p*(1−*p*)^{2}
- TTT, which had probability (1−*p*)^{3}

From here, it was a matter of adding these probabilities together and seeing when these sums equaled 1/2. In all, there were 64 ways to combine them: *p*^{3}, *p*^{3} + *p*^{2}(1−*p*), *p*^{3} + 2*p*^{2}(1−*p*), etc. Here are all 64 polynomials plotted when *p* was between 0 and 1:

It’s a little hard to see from the graph (solver David Lewis zooms in on a similar graph), but there were 19 locations where these curves cross the 0.5 mark, meaning there were **19** distinct values of *p* that could simulate a fair coin in at most three tosses. Solver Josh Silverman used a combinatorial approach to count up the solutions without needing to find them all, while Emma Knight was able to list them all out.

For extra credit, you were asked to simulate a fair coin in at most *N* flips, rather than just three flips. For how many values of *p* was this possible?

We already saw that for one flip, there was one value of *p*. For two flips, there were three values. And for three flips, there were 19 values. Brave souls who looked at four flips found there were 271 possible values of *p*, while there were 8,635 values for five flips and 623,533 values for six flips. As it often turns out in this column, there was an OEIS sequence for that… but wait a minute — solver Tracy Hall just posted this sequence last weekend, meaning it was a brand new discovery. Amazing!

Solver Angela Zhou decided to plot a histogram showing all 623,533 values of *p* that could yield a fair coin in at most six flips:

There’s a lot of mathematical richness to unpack there, like the chasm surrounding 0.5 (although 0.5 is itself a solution), not to mention all those other peaks and valleys.

It’s hard to compete with von Neumann, but Riddler Nation has definitely given him a run for his money.



For decades, I’ve tuned into the trivia game show “Jeopardy!” for the facts. On Sunday, the show lost its judicious leader, Alex Trebek, who died at age 80 after a battle with cancer.

Trebek had hosted “Jeopardy!” for my entire life. He began in 1984 and hosted every episode since — save for April Fool’s Day in 1997 when he and Pat Sajak of “Wheel of Fortune” swapped places — more than 8,000 half-hour shows in all. For many, watching Trebek was ritualistic — the way a day ends and a night begins. It was also a way to get real facts, a brief respite from the “alternative facts” so prevalent over the past few years.

Trebek and the show — the two difficult now to divorce — were a beacon of democratic ideals, flattening the world for all to consume. High and low, popular and obscure, new and old, holy and profane, Trebek put all of them on equal terms. Look no further than the episode of “Jeopardy!” that aired this past Friday, for example. It featured clues about Rihanna, Madonna and Katy Perry, and clues about the city of Vaduz, the Russian navy and the German chancellor. All were on the same game board — though Gene Wilder and the Cook Strait were worth more money than any of them.

“Jeopardy!” and Trebek have been a haven for facts as monuments of truth have crumbled in the public sphere. The holder of the country’s highest office now baselessly disputes legitimate election results, espousers of baseless and dangerous conspiracy theories are elected to Congress, and fact-checking is a booming industry.

Trebek’s performance hasn’t changed to combat the untruths of the last few years, in part because it already was a kind of totem to the value of fact. Since 1984, he has stood each weekday in a suit behind a podium with a sheet of notes and delivered more than 400,000 clues, each a minor daily inoculation against the creep of lies — or whatever you want to call them.

“Alex was so much more than a host,” tweeted James Holzhauer, who set a series of unreal records on “Jeopardy!” last spring. “He was an impartial arbiter of truth and facts in a world that needs exactly that.”

The show’s eclectic and unpredictable subject matter is the result of an unassuming, wide-angle lens cast upon a large and complicated world. We call the material Trebek delivered “trivia,” but few things are less trivial than turning a generous eye toward the world — in all its strange and diverse splendor — and calling things by their right name.

The headline on a great 2014 profile of Trebek in the New Republic declared him the “Last King of the American Middlebrow.” This is true not in a pejorative sense but in a statistical one, as in the average between high and low. Trebek’s essential demeanor — and therefore “Jeopardy!” itself — is stripped of pretension and pretext.

The show’s archive showcases its diverse interests, each of which was presented on the show’s giant board simply and equally in the iconic all-caps white text on a blue square. Knowledge of the former British Prime Minister Benjamin Disraeli is worth the same as that of “The Office” actor Steve Carell, or of the toy Mr. Potato Head. The playwright Shakespeare is equal to the running back Walter Payton. The element helium is as valuable as the airport Heathrow.

Trebek’s performance as host emphasized this egalitarianism. His affect was steady and his pronunciation — of French, most famously — was impeccable, and he took pre-show notes with diacritical marks to ensure that he’d get it right. And at the same time he was, among other things, an underrated rapper, as Holzhauer joked, enthusiastically engaging with the repertoires of Lil Wayne, Drake and Kendrick Lamar.

“I’m not too good at it, but I was getting into it,” Trebek said after delivering the verses, and I believed him.

Moreover, Trebek’s performance didn’t just lack pretension, it was *anti*-pretentious. He once famously lightly chastised three contestants, who had correctly answered clues about Molière and Thor Heyerdahl, for not knowing enough about football. “I have to talk to them,” Trebek said disapprovingly before going to commercial. The democratic nature of trivia had been thrown off-kilter, and Trebek had to correct it.

By all accounts, Trebek embodied these trivia ideals while also being a genuine and gracious human.

“Alex wasn’t just the best ever at what he did,” tweeted Ken Jennings, the show’s most famous contestant, who won a record 74 consecutive times. “He was also a lovely and deeply decent man, and I’m grateful for every minute I got to spend with him.”

It was with this same ethos of decency and calm that Trebek announced, in March 2019, that he had been diagnosed with stage 4 pancreatic cancer. He said he “wanted to prevent you from reading or hearing some overblown or inaccurate reports regarding my health.” The rest of the short statement was filled with facts, and he vowed to fight the cancer and continue working. “Truth told, I *have* to,” he said, with a characteristic pinch of humor. “Because under the terms of my contract, I have to host ‘Jeopardy!’ for three more years.”

I wish he were able to do so. The show may go on, but it’s hard to imagine anyone personifying its ideals as well as Alex Trebek did. Of course, the contestants on “Jeopardy!” just ask the questions. It was Trebek who had all the answers.


Last weekend’s New York City Marathon was canceled. But runners from Des Linden — one of the top American marathoners — to FiveThirtyEight’s own Santul Nerkar — my number one editor — still went out there and braved the course. Santul finished in a time of 3:41:43 (3 hours, 41 minutes, 43 seconds), which averaged to 8 minutes, 27 seconds per mile.

Suppose, while training, Santul completed two 20-mile runs on a treadmill. For the first run, he set the treadmill to a constant speed so that he ran every mile in 9 minutes.

The second run was a little different. He started at a 10-minute-per-mile pace and accelerated continuously until he was running at an 8-minute-per-mile pace at the end. Moreover, Santul’s minutes-per-mile pace (i.e., *not* his speed) changed linearly over time. So a quarter of the way through the duration (in time, not distance) of the run, his pace was 9 minutes, 30 seconds per mile, halfway through it was 9 minutes per mile, etc.

Which training run was faster (i.e., took less time) overall? And what were Santul’s times for the two runs?

Mathematician John von Neumann is credited with figuring out how to take a biased coin (whose probability of coming up heads is *p*, not necessarily equal to 0.5) and “simulate” a fair coin. Simply flip the coin twice. If it comes up heads both times or tails both times, then flip it twice again. Eventually, you’ll get two different flips — either a heads and then a tails, or a tails and then a heads, with each of these two cases equally likely. Once you get two different flips, you can call the second of those flips the outcome of your “simulation.”

For any value of *p* between zero and one, this procedure will always return heads half the time and tails half the time. This is pretty remarkable! But there’s a downside to von Neumann’s approach — you don’t know how long (i.e., how many flips) the simulation will last.

Suppose I want to simulate a fair coin in at most *three* flips. For which values of *p* is this possible?

*Extra credit:* Suppose I want to simulate a fair coin in at most *N* flips. For how many values of *p* is this possible?

Congratulations to 👏 Nilesh Shah 👏 of Seattle, Washington, winner of last week’s Riddler Express.

Last week, while waiting in line to vote early, I overheard a discussion between election officials. Apparently, there was a political sign within 100 feet of the polling place, against New York State law.

This got me thinking. Suppose a polling site was a square building whose sides were 100 feet in length. An election official took a string that was also 100 feet long and tied one end to a door located in the middle of one of the building’s sides. She then held the other end of the string in her hand.

What was the area of the region outside of the building she could reach while holding the string?

Had there been no building, only the constraint of staying within 100 feet of a fixed point, the region she could have reached would have been anywhere within a 100-foot circle, whose area was 𝜋(100)^{2}, or about 31,416 square feet. Due to the building, that area served as an upper bound, meaning the answer had to be less than that.

At this point, there were two approaches. A few readers subtracted the overlapping area between the building and the 100-foot circle. The geometry here was a little involved, but it yielded an answer of (5𝜋/6−√3/4)·100^{2}, or about 21,849 square feet. However, this wasn’t quite right either.

Why was that? Because this assumed that the string was able to pass through the building. While it *was* Halloween last week, that sort of “ghostly” string behavior would simply have been too paranormal. So whenever the election official lost line of sight with the door where the string was tied, 50 feet of string were taut against the wall of the building, leaving her with a 50-foot radius with which to move around.

Solver Tamera Lanham sketched out the accessible area:

The red region on the right was a semicircle with a radius of 100 feet, meaning its area was 𝜋(100)^{2}/2, or about 15,708 square feet. Meanwhile, the two blue regions were both quarter-circles with a radius of 50 feet. Together, these quarter-circles formed a semicircle whose area was 𝜋(50)^{2}/2, or about 3,927 square feet.

Altogether, the area of the region the election official could reach was **6,250𝜋**, or about 19,635 square feet.
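Putting the pieces together numerically:

```python
import math

# The reachable region: a 100-foot-radius semicircle in front of the door,
# plus two 50-foot-radius quarter-circles wrapping around the front corners.
semicircle = math.pi * 100 ** 2 / 2      # about 15,708 square feet
quarters = 2 * (math.pi * 50 ** 2 / 4)   # about 3,927 square feet
total = semicircle + quarters
print(math.isclose(total, 6250 * math.pi), round(total))  # True 19635
```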

This riddle was a version of what are known as goat problems, in which a goat is tethered to part of a fence and you are tasked with calculating its grazing area. If you thought the square shape of the building wasn’t challenging enough, try re-running the problem for a circular building. It’s tricky!

Congratulations to 👏 Brian Corrigan 👏 of Los Angeles, California, winner of last week’s Riddler Classic.

Last week, you and 60 of your closest friends decided to play a socially distanced game of hot pumpkin.

Before the game started, you all sat in a circle and agreed on a positive integer *N*. Once the number had been chosen, you (the leader of the group) started the game by counting “1” and passing the pumpkin to the person sitting directly to your left. She then declared “2” and passed the pumpkin one space to *her* left. This continued with each player saying the next number in the sequence, wrapping around the circle as many times as necessary, until the group had collectively counted up to *N*. At that point, the player who counted “*N*” was eliminated, and the player directly to his or her left started the next round, again proceeding to the same value of *N*. The game continued until just one player remained, who was declared the victor.

In the game’s first round, the player 18 spaces to your left was the first to be eliminated. Ricky, the next player in the sequence, began the next round. The second round saw the elimination of the player 31 spaces to Ricky’s left. Zach began the third round, only to find himself eliminated in a cruel twist of fate. (Woe is Zach.)

What was the smallest value of *N* the group could have used for this game?

A good first step was to nail down the important information from the problem. In the first round, when there were 61 players, the pumpkin went around the circle some number of times (it could have been zero times, once, twice, etc.) and then 18 more spaces. In other words, *N* had to be some multiple of 61 (one multiple for each time the pumpkin went around the circle) *plus* 19. In modular arithmetic notation, this meant that *N* ≡ 19 (mod 61).

Why did you have to add 19, and not 18? Because after the pumpkin had finished going around the circle, *you* and the 18 people to the left of you all counted off. And you plus the 18 others made 19 people in total.

In the second round, when there were 60 players, the pumpkin again went around some number of times and then 31 more spaces. That told you that *N* ≡ 32 (mod 60). And in the third round, when poor Zach took himself out of the competition, you learned that *N* was 1 more than a multiple of 59, i.e., *N* ≡ 1 (mod 59).

Because the numbers 59, 60 and 61 were pairwise relatively prime — no two of them shared a common factor — the Chinese remainder theorem told you that there was a single value of *N* between 1 and 59·60·61, or 215,940, that satisfied all three of the aforementioned modular relations. But while it was one thing to prove there was a solution, finding it was another matter entirely.

One way to do this was through brute force coding — searching for the one number less than 215,940 that had a remainder of 19 when divided by 61, 32 when divided by 60 and 1 when divided by 59. That smallest number was **136,232**. Thanks to the wonders of modular arithmetic, any multiple of 215,940 plus 136,232 would also have resulted in the same sequence of eliminations.
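For anyone curious what that brute-force search might look like, here's a minimal sketch in Python:

```python
# Find the smallest N with the right remainder in each round:
# N ≡ 19 (mod 61), N ≡ 32 (mod 60) and N ≡ 1 (mod 59).
N = next(n for n in range(1, 59 * 60 * 61 + 1)
         if n % 61 == 19 and n % 60 == 32 and n % 59 == 1)
print(N)  # 136232
```

The Chinese remainder theorem guarantees exactly one hit in that range.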

For extra credit, you had to play this game to its conclusion, where you were identified as Player No. 1, the player to your left as Player No. 2 and so on. Once again, your computer was your friend. If you simulated the rest of the game, it was **Player No. 58** who emerged as the winner.

And for extra extra credit, you had to find the smallest value of *N* for which you, Player No. 1, won the game. This question was ambiguous as to whether you simply wanted the smallest value of *N* or the smallest value of *N* that also resulted in the eliminations listed in the original problem. The answer to the former was **140**, while the answer to the latter was a whopping **42,892,352**. Only five solvers found this latter result: Angela Zhou, Peter from Hanover, Germany, Emma Knight, Jim Waters and Asha O’Shaughnessy. They will each be winning an imaginary hot pumpkin!
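If you'd like to replay the game yourself, here's one way to simulate it in Python. (This is my own sketch of the rules as stated; the function name `hot_pumpkin_winner` is mine.)

```python
def hot_pumpkin_winner(players, N):
    """Simulate hot pumpkin with the given number of players and value of N,
    returning the number of the winning player (you are Player No. 1)."""
    circle = list(range(1, players + 1))
    start = 0  # index of whoever says "1" this round
    while len(circle) > 1:
        out = (start + N - 1) % len(circle)  # index of whoever says "N"
        circle.pop(out)
        start = out % len(circle)  # the player to their left starts the next round
    return circle[0]

print(hot_pumpkin_winner(61, 136232))  # 58

# Smallest N that crowns Player No. 1, ignoring the stated eliminations:
n = 1
while hot_pumpkin_winner(61, n) != 1:
    n += 1
print(n)  # 140

# Smallest N that both matches the eliminations (N ≡ 136,232 mod 215,940)
# and crowns Player No. 1:
n = 136232
while hot_pumpkin_winner(61, n) != 1:
    n += 215940
print(n)  # 42892352
```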

Finally, hats off to those who attempted this not with code, but by hand or with spreadsheets. One such solver, Shane Tilton, said he was content with simply having been invited to play a game of hot pumpkin in the first place.

Who needs to play a game that requires 42 million turns to win when, more importantly, you have 60 good friends to play it with?

Well, aren’t you lucky? There’s a whole book full of the best puzzles from this column and some never-before-seen head-scratchers. It’s called “The Riddler,” and it’s in stores now!

Email Zach Wissner-Gross at riddlercolumn@gmail.com

Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. Two puzzles are presented each week: the Riddler Express for those of you who want something bite-size and the Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either,^{4} and you may get a shoutout in next week’s column. Please wait until Monday to publicly share your answers! If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter.

While waiting in line to vote early last week, I overheard a discussion between election officials. Apparently, there may have been a political sign that was within 100 feet of the polling place, against New York State law.

This got me thinking. Suppose a polling site is a square building whose sides are 100 feet in length. An election official takes a string that is also 100 feet long and ties one end to a door located in the middle of one of the building’s sides. She then holds the other end of the string in her hand.

What’s the area of the region outside of the building she can reach while holding the string?

The solution to this Riddler Express can be found in the following week’s column.

From Ricky Jacobson comes a party game that’s just in time for Halloween:

Instead of playing hot potato, you and 60 of your closest friends decide to play a socially distanced game of hot pumpkin.

Before the game starts, you all sit in a circle and agree on a positive integer *N*. Once the number has been chosen, you (the leader of the group) start the game by counting “1” and passing the pumpkin to the person sitting directly to your left. She then declares “2” and passes the pumpkin one space to *her* left. This continues with each player saying the next number in the sequence, wrapping around the circle as many times as necessary, until the group has collectively counted up to *N*. At that point, the player who counted “*N*” is eliminated, and the player directly to his or her left starts the next round, again proceeding to the same value of *N*. The game continues until just one player remains, who is declared the victor.

In the game’s first round, the player 18 spaces to your left is the first to be eliminated. Ricky, the next player in the sequence, begins the next round. The second round sees the elimination of the player 31 spaces to Ricky’s left. Zach begins the third round, only to find himself eliminated in a cruel twist of fate. (Woe is Zach.)

What was the smallest value of *N* the group could have used for this game?

*Extra credit:* Suppose the players were numbered from 1 to 61, with you as Player No. 1, the player to your left as Player No. 2 and so on. Which player won the game?

*Extra extra credit:* What’s the smallest *N* that would have made *you* the winner?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 Eric Mentzell 👏 of Takoma Park, Maryland, winner of last week’s Riddler Express.

Last week, four-time WNBA champion Sue Bird and Seattle Storm teammate Breanna Stewart were interested in testing whether Bird had a “hot hand” — that is, if her chances of making a basket depended on whether or not her previous shot went in. Bird happened to know that her chances of making any given shot were *always* 50 percent, independent of her shooting history, but she agreed to perform an experiment.

In each trial of the experiment, Bird took three shots, while Stewart recorded which shots Bird made or missed. Stewart then looked at all the trials with at least one shot that was immediately preceded by a made shot. She randomly picked one of these trials, and then randomly picked a shot that was immediately preceded by a made shot. (If there was only one such shot to pick from, she chose that shot.)

What was the probability that Bird made the shot that Stewart picked?

At first glance, you might have thought that the answer was 50 percent. After all, Bird acknowledged that she had a 50 percent chance of making any given shot. Right?

Wrong. And Riddler Nation was not fooled. To see why the answer wasn’t 50 percent, many solvers, like Deborah Abel, listed out all eight possible shot sequences. If we let “X” represent a made shot and “O” a missed shot, then the eight possible shot sequences were: OOO, OOX, OXO, XOO, OXX, XOX, XXO, and XXX. Because Bird had an equal chance of making or missing any shot, all eight of these shot sequences were equally likely.

During her analysis, Stewart first looked at shots that were “immediately preceded by a made shot.” This didn’t occur for sequences OOO and OOX, so we know that Stewart was limiting her analysis exclusively to the remaining six sequences, and that each had a one-in-six chance of being selected.

Next, Stewart “randomly picked a shot that was immediately preceded by a made shot.” Here’s what happened if you looked at the six possible sequences one at a time:

- OXO: Stewart looked at the last shot. For this sequence, Bird made zero percent of the shots Stewart could have picked.
- XOO: Stewart looked at the second shot. For this sequence, Bird made zero percent of the shots Stewart could have picked.
- OXX: Stewart looked at the last shot. For this sequence, Bird made 100 percent of the shots Stewart could have picked, giving a 1/6 chance of selecting a made shot in this sequence.
- XOX: Stewart looked at the second shot. For this sequence, Bird made zero percent of the shots Stewart could have picked.
- XXO: Stewart could have looked at the second or third shots. For this sequence, Bird made 50 percent of the shots Stewart could have picked, giving a 1/12 chance of selecting a made shot in this sequence.
- XXX: Stewart could have looked at the second or third shots. For this sequence, Bird made 100 percent of the shots Stewart could have picked, giving a 1/6 chance of selecting a made shot in this sequence.

Putting this all together, the probability Bird had made the shot that Stewart picked was 1/6 + 1/12 + 1/6, or **5/12** — the solution!
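If listing cases by hand isn't your thing, the same enumeration takes only a few lines of Python, using exact fractions:

```python
from fractions import Fraction
from itertools import product

total, trials = Fraction(0), 0
for seq in product("XO", repeat=3):  # X = made shot, O = missed shot
    # Shots (second or third) immediately preceded by a made shot:
    eligible = [i for i in (1, 2) if seq[i - 1] == "X"]
    if not eligible:
        continue  # OOO and OOX: Stewart skips these trials
    trials += 1
    made = sum(1 for i in eligible if seq[i] == "X")
    total += Fraction(made, len(eligible))  # chance the picked shot was made

print(total / trials)  # 5/12
```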

So while Bird had a 50 percent chance of making any given shot, Stewart’s methodology for selecting a shot was somehow biased toward shots that Bird missed. What was going on here?

Solver Madeline Argent of Launceston, Australia, offered an explanation. Among the six shot sequences listed above, there were eight cases in which Bird made one of her first two shots. Of these eight, Bird made four of the next shots and missed four of them. So if Stewart had selected among all *shots* that were immediately preceded by a made shot, Bird would have made half the selected shots. But because Stewart first randomly selected a *trial*, the last two shot sequences — XXO and XXX, in which Bird made most of her shots — were unfairly weighted equally alongside the other four sequences, even though they had twice as many made shots for Stewart to choose from.

That was what happened when Bird took three shots per trial. Meanwhile, solver Dean Ballard found similar results when Bird took more shots. The probability she had made a selected shot approached 50 percent as the number of shots increased, but it never quite reached 50 percent.

Clearly, this methodology for determining whether a basketball player had a “hot hand” was flawed. It may surprise some readers that this was precisely the methodology used in an attempt to debunk the “hot hand” back in 1985 — a debunking that itself was later debunked. If you’d like to read more on this, check out this 2019 article that connects all of this to the Monty Hall problem. (Kudos to submitter Drew Mathieson for the link!)

So if Stewart performed the experiment as outlined, and found that Bird had made *half* the selected shots (rather than 5/12 of them), she would have rightfully concluded that Bird *did* have a “hot hand.”

Congratulations to 👏 Lucas Robinson 👏 of Oakwood, Ohio, winner of last week’s Riddler Classic.

Last week, four-time NBA champion LeBron James was playing a game of sudden-death, one-on-one basketball with Los Angeles Lakers teammate Anthony Davis. They flipped a coin to see which of them had first possession, and whoever made the first basket won the game.

Both players had a 50 percent chance of making any shot they took. However, Davis was the superior rebounder and would always rebound any shot that either of them missed. Every time Davis rebounded the ball, he dribbled back to the three-point line before attempting another shot.

Before each of Davis’s shot attempts, James had a probability *p* of stealing the ball and regaining possession before Davis could get the shot off. What value of *p* made this an evenly matched game of one-on-one, in which both players had an equal chance of winning *before* the coin was flipped?

Suppose James’s probability of winning when he had possession was *J*, while James’s probability of winning when *Davis* had possession was *D*. We can write an equation for *J*: For James to win when he had possession, he either had to score (with probability 1/2) or, upon missing (also with probability 1/2), he’d have to somehow win after Davis got the rebound and gained possession. In other words, *J* = 1/2 + 1/2·*D*. We can similarly write an equation for *D*: For James to win when Davis had possession, he either had to steal the ball (with probability *p*) and regain possession, or, upon not getting the steal (with probability 1−*p*), he needed Davis to miss (with probability 1/2) so James could have another chance at victory. In other words, *D* = *pJ* + (1−*p*)*D*/2.

That gave you two equations with three unknowns. What was missing? Based on the coin flip, James started with possession half the time, and Davis started with possession the other half the time. James’s probability of winning was therefore *J*/2 + *D*/2, which the problem required to equal 1/2, so the third equation was *J*/2 + *D*/2 = 1/2, or *J* + *D* = 1.

Solving this system of three equations gave you the result that *J* = 2/3 (James had a two-thirds chance of winning when he had possession), *D* = 1/3 (James had a one-third chance of winning when Davis had possession) and *p* = 1/3. In other words, for the game to be fair, James had to have a **one-third** chance of stealing the ball.
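You can confirm the algebra with exact arithmetic — a quick check, not a derivation:

```python
from fractions import Fraction

J, D = Fraction(2, 3), Fraction(1, 3)

# James with possession: make the shot, or miss and let Davis rebound.
assert J == Fraction(1, 2) + Fraction(1, 2) * D
# The coin flip makes the game fair: J/2 + D/2 = 1/2.
assert J + D == 1
# Rearranging D = pJ + (1 - p)D/2 isolates the steal probability:
p = (D / 2) / (J - D / 2)
print(p)  # 1/3
```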

Solver Quoc Tran extended the puzzle, simulating how James’s fortunes changed in the more realistic scenario where Davis didn’t nab *every* rebound. For the game to be fair, we already saw that James needed to steal one-third of the time when Davis rebounded every shot. Meanwhile, if James and Davis had had equal chances of getting a rebound, then to maintain parity, James couldn’t steal at all. But when Davis had a rebounding edge, interesting mathematics was happening, as shown below:

How fitting that this game was fair where purple met gold.

Email Zach Wissner-Gross at riddlercolumn@gmail.com

Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. Two puzzles are presented each week: the Riddler Express for those of you who want something bite-size and the Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either,^{5} and you may get a shoutout in next week’s column. Please wait until Monday to publicly share your answers! If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter.

From Drew Mathieson comes an exploration of basketball’s storied hot hand:

This season, on the way to winning her fourth WNBA championship in her 17-year career, Sue Bird made approximately 50 percent of her field goal attempts. Suppose she and Seattle Storm teammate Breanna Stewart are interested in testing whether Bird has a “hot hand” — that is, if her chances of making a basket depend on whether or not her previous shot went in. Bird happens to know that her chances of making any given shot are *always* 50 percent, independent of her shooting history, but she agrees to perform an experiment.

In each trial of the experiment, Bird will take three shots, while Stewart will record which shots Bird made or missed. Stewart will then look at all the trials that had at least one shot that was immediately preceded by a made shot. She will randomly pick one of these trials and then randomly pick a shot that was preceded by a made shot. (If there was only one such shot to pick from, she will choose that shot.)

What is the probability that Bird made the shot that Stewart picked?

The solution to this Riddler Express can be found in the following week’s column.

Now that LeBron James and Anthony Davis have restored the Los Angeles Lakers to glory with their recent victory in the NBA Finals, suppose they decide to play a game of sudden-death, one-on-one basketball. They’ll flip a coin to see which of them has first possession, and whoever makes the first basket wins the game.

Both players have a 50 percent chance of making any shot they take. However, Davis is the superior rebounder and will always rebound any shot that either of them misses. Every time Davis rebounds the ball, he dribbles back to the three-point line before attempting another shot.

Before each of Davis’s shot attempts, James has a probability *p* of stealing the ball and regaining possession before Davis can get the shot off. What value of *p* makes this an evenly matched game of one-on-one, so that both players have an equal chance of winning *before* the coin is flipped?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 Frank Probst 👏 of Houston, Texas, winner of last week’s Riddler Express.

Last week, the residents of Riddler City were electing a mayor from among three candidates. The winner was the candidate who received an outright majority (i.e., more than 50 percent of the vote). But if no one achieved this outright majority, there would be a runoff election among the top two candidates.

If the voting shares of each candidate were uniformly distributed between 0 percent and 100 percent (subject to the constraint that they add up to 100 percent, of course), then what was the probability of a runoff?

The “uniformly distributed” wording in the problem was ambiguous and was interpreted several ways by readers. How can you randomly choose three numbers between 0 and 100 that add up to 100? Here, I will write about three popular interpretations.

First, imagine randomly picking three values between 0 and 100, and call them *x*, *y* and *z*. Each choice of (*x*, *y*, *z*) corresponded to a point in a cube that measured 100 by 100 by 100. But only *some* points in this cube had coordinates that summed to 100 — those that also lay on the plane *x*+*y*+*z* = 100. This intersection between a cube and a plane might have been hard to visualize — it was an equilateral triangle (shown below). If you divided this triangle into quarters, then three of those quarters had one value (*x*, *y* or *z*) that exceeded 50. That meant the probability of a runoff, with *no* voting shares that exceeded 50, was **1/4**.

That was one interpretation. Another way to “uniformly” pick three values was to draw a number line between 0 and 100 and break it into three segments by randomly picking two points, *a* and *b*. Assuming *b* was greater than *a*, the three lengths that summed to 100 were *a*, *b*−*a* and 100−*b*. The challenge was to find where any one of these three values exceeded 50 inside the triangle defined by 0 ≤ *a* ≤ *b* ≤ 100. Each of the three inequalities (*a* ≥ 50, *b*−*a* ≥ 50 and 100−*b* ≥ 50) carved out a quarter of the larger triangle. And so, once again, that meant the probability of a runoff was **1/4**.

Yet another way to “uniformly” pick three values was to go ahead and pick three numbers between 0 and 100 (again, let’s call them *x*, *y* and *z*) and then “normalize” them — that is, divide each number by *x*+*y*+*z* and multiply by 100, so that they were guaranteed to add up to 100. Like before, each choice of (*x*, *y*, *z*) corresponded to a point in a cube. But this time, to avoid a runoff, you needed one of the values to exceed the sum of the other two, meaning it would exceed 50 percent of the sum of all three numbers. There were three regions in the cube where a runoff would *not* occur: *x* > *y*+*z*, *y* > *x*+*z* and *z* > *x*+*y*. Each region made up one-sixth of the cube, so all together they represented half the cube, as shown below. In the other half, a runoff was necessary. So according to this interpretation of the problem, the answer was **1/2**. (Some solvers, like Benjamin Dickman, noted that this approach was identical to finding the probability that three lengths from a random, uniform distribution satisfy the triangle inequality.)

For extra credit, you wanted the probability of a runoff when there were *N* candidates instead of three. Once again, the answer depended on your interpretation of the problem. Based on the first interpretation, this general solution was 1−*N*/2^{N−1} (nicely explained by Josh Silverman), since each of the *N* candidates had a 1/2^{N−1} chance of winning an outright majority. Similarly, based on the second interpretation, the answer was again 1−*N*/2^{N−1}. But based on the third interpretation, the answer was 1−1/(*N*−1)!.
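A quick Monte Carlo check of the second and third interpretations for three candidates — a rough simulation, not a proof (and the first interpretation is equivalent to the second, since breaking a stick at two uniform points samples the triangle uniformly):

```python
import random

random.seed(0)
TRIALS = 200_000

def runoff(shares):
    """True when no candidate wins an outright majority."""
    return max(shares) <= 50

# Second interpretation: break the segment [0, 100] at two random points.
count2 = 0
for _ in range(TRIALS):
    a, b = sorted(random.uniform(0, 100) for _ in range(2))
    count2 += runoff([a, b - a, 100 - b])

# Third interpretation: normalize three uniform draws so they sum to 100.
count3 = 0
for _ in range(TRIALS):
    x, y, z = (random.uniform(0, 100) for _ in range(3))
    s = x + y + z
    count3 += runoff([100 * x / s, 100 * y / s, 100 * z / s])

print(count2 / TRIALS, count3 / TRIALS)  # close to 0.25 and 0.5
```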

That’ll do it for Riddler City’s mayoral election. Don’t forget to vote in *any other elections* that may be happening!

Congratulations to 👏 Asher S. 👏 of Chicago, Illinois, winner of last week’s Riddler Classic.

Last week, you were playing a modified version of “The Price is Right.” In this version’s bidding round, you and two (not three) other contestants had to guess the price of an item, one at a time.

The true price of this item was a randomly selected real number between 0 and 100. Among the three contestants, the winner was whoever guessed the closest price *without going over*. For example, if the true price was 29 and you guessed 30, while another contestant guessed 20, then they would be the winner even though your guess was technically closer.

In the event all three guesses exceeded the actual price, the contestant who had made the lowest (and therefore closest) guess was declared the winner.

If you were the first to guess, and all contestants played optimally (taking full advantage of the guesses of those who had gone before them), what were your chances of winning?

At first, this three-player game might have seemed unsolvable. As the first to guess, you would want to know what the second and third players’ strategies would be. But their strategies depended on yours and on each other’s. Was there any way out of this mess?

Indeed there was. One approach was to work backwards. Suppose you (the first player) guessed a price *A* and the second player guessed a price *B*. What should the third player do? For now, let’s assume *A* was less than *B*. The third player would then choose from among three options:

- Guess a value of zero, in which case they’d win if the true price was between 0 and *A* — a range of *A*.
- Guess a value infinitesimally greater than *A*, in which case they’d win if the true price was between *A* and *B* — a range of *B*−*A*.
- Guess a value infinitesimally greater than *B*, in which case they’d win if the true price was between *B* and 100 — a range of 100−*B*.

But which of the three values should the third player guess? Whichever corresponded to the greatest range. (If *A* had been greater than *B*, there would again be three options, but with *A* and *B* reversed.)

So for any combination of *A* and *B*, the third player’s strategy was known. Next, it was time to look more closely at the second player.

For each value of *A*, the second player could figure out their chances of winning for any *B *they picked, since they now knew what the third player would do given *A* and *B*. For each *A*, the second player would then pick a *B* that maximized their own chances of winning.

At last, we’re back to you, the first player. By now, you knew exactly what the second and third players would do in response to any guess *A*. As with the second player, that meant you had to pick the value that maximized your own chances of winning.

Amidst all this strategizing, I neglected to mention just what these optimal guesses were. In the end, your best guess was two-thirds of 100 (~66.7). Then the second player’s best move was to guess one-third of 100 (~33.3), and the third player’s best move was to guess anything less than that (e.g., zero). All players had a **one-third** chance of winning. If you deviated from these optimal values, the second and third players would both have the advantage over you, each with a greater than one-third chance of winning.
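Here's a small Python check of that equilibrium, using a helper of my own construction (`win_probs`) that scores any trio of distinct guesses:

```python
from fractions import Fraction

def win_probs(guesses):
    """Win probability for each of three distinct guesses: the highest
    guess not exceeding the price wins; if all guesses exceed the price,
    the lowest guess wins. The price is uniform on [0, 100]."""
    order = sorted(range(3), key=lambda i: guesses[i])  # players, low to high
    g = sorted(guesses)
    probs = [Fraction(0)] * 3
    probs[order[0]] = Fraction(g[1]) / 100        # wins for price in [0, g[1])
    probs[order[1]] = Fraction(g[2] - g[1]) / 100  # price in [g[1], g[2])
    probs[order[2]] = Fraction(100 - g[2]) / 100   # price in [g[2], 100]
    return probs

# Optimal play: you guess 200/3, the next player 100/3, the last player 0.
print(win_probs([Fraction(200, 3), Fraction(100, 3), Fraction(0)]))
```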

Some solvers said they would have guessed a price that was *one*-third of 100. But as Emma Knight observed, Player 2 could then have guessed anything less than that and Player 3 slightly more. That would have meant Player 2 *still* had a one-third chance of winning, while Player 3’s chances went up to two-thirds, leaving you with nothing. Player 2 might not have chosen to sabotage your hopes of winning, but why leave it to chance?

Finally, Keith Wynroe of Edinburgh, Scotland, offered a neat extension to this problem, asking how the three players’ strategies might change if the goal was not simply maximizing one’s chances of winning, but rather maximizing the *expected value* of the prize won. According to Keith, while this shift would incentivize all three players to bet higher values, no one’s chances of winning actually changed.

After all was said and done, it turned out to be a fair game. How sweet.

Email Zach Wissner-Gross at riddlercolumn@gmail.com

Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. Two puzzles are presented each week: the Riddler Express for those of you who want something bite-size and the Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either,^{6} and you may get a shoutout in next week’s column. Please wait until Monday to publicly share your answers! If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter.

As you may have seen in FiveThirtyEight’s reporting, there’s an election coming up. Inspired, Vikrant Kulkarni has an electoral enigma for you:

On Nov. 3, the residents of Riddler City will elect a mayor from among three candidates. The winner will be the candidate who receives an outright majority (i.e., more than 50 percent of the vote). But if no one achieves this outright majority, there will be a runoff election among the top two candidates.

If the voting shares of each candidate are uniformly distributed between 0 percent and 100 percent (subject to the constraint that they add up to 100 percent, of course), then what is the probability of a runoff?

*Extra credit:* Suppose there are *N* candidates instead of three. What is the probability of a runoff?

The solution to this Riddler Express can be found in the following week’s column.

This week, we return to the brilliant and ageless game show, “The Price is Right.” In a modified version of the bidding round, you and two (not three) other contestants must guess the price of an item, one at a time.

Assume the true price of this item is a randomly selected value between 0 and 100. (Note: The value is a real number and does not have to be an integer.) Among the three contestants, the winner is whoever guesses the closest price *without going over*. For example, if the true price is 29 and I guess 30, while another contestant guesses 20, then they would be the winner even though my guess was technically closer.

In the event all three guesses exceed the actual price, the contestant who made the lowest (and therefore closest) guess is declared the winner. I mean, *someone* has to win, right?

If you are the first to guess, and all contestants play optimally (taking full advantage of the guesses of those who went before them), what are your chances of winning?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 Nick Russell 👏 of Vancouver, Canada, winner of last week’s Riddler Express.

Last week, you were helping me park across the street from a restaurant for some contactless curbside pickup. There were six parking spots, all lined up in a row.

While I *could* parallel park, it definitely wasn’t my preference. No parallel parking was required when the rearmost of the six spots was available or when there were two consecutive open spots. If there was a random arrangement of cars occupying four of the six spots, what was the probability that I had to parallel park?

With six spots and four cars, there were 6 choose 4, or 15 cases to consider. Some solvers, like Lisa Fondren of Montrose, Michigan, listed them all out and counted how many required parallel parking.

But there were other ways to find the solution that didn’t require working through every case. Rather than considering where the four cars were, solver Libby Aiello equivalently looked at where the two empty spots were. Among the total 15 combinations, there were five in which the two empty spots were adjacent: the first and second spots, the second and third, the third and fourth, the fourth and fifth, and the fifth and sixth.

There were also five combinations in which the last spot was open, since there were five spots from which to choose the *other* open spot. Combining these two cases (having two consecutive open spots and having the sixth spot open), there appeared to be 10 combinations that didn’t require parallel parking.

But that wasn’t quite right. As Libby noted, one combination — when the fifth and sixth spots were open — was counted twice, since two consecutive spots were open *and* the sixth spot was open. Subtracting one to account for this double counting meant there were nine combinations that didn’t require parallel parking, and six combinations that did. Therefore, the probability I had to parallel park was 6/15, or **40 percent**.
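Here's the same count done by machine, sketched in Python:

```python
from itertools import combinations

cases = list(combinations(range(6), 2))  # positions of the two open spots
needs_parallel = sum(
    1 for a, b in cases
    if b != 5 and b - a != 1  # rearmost spot taken, and no adjacent opening
)
print(needs_parallel, len(cases))  # 6 15
```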

I was pleased to see how many readers solved this combinatorics challenge. Now if only that many drivers could successfully parallel park…

Congratulations to 👏 Douglas Thackrey 👏 of Loulé, Portugal, winner of last week’s Riddler Classic.

Parking cars was one thing — parking trucks was another thing entirely. Last week, you looked at a *very long* truck (with length *L*) with two front wheels and two rear wheels. (The truck was so long compared to its width that you could consider the two front wheels as being a single wheel, and the two rear wheels as being a single wheel.)

You were asked to determine the truck’s turning radius, given the angles by which you could turn the front or rear wheels. First, you considered what would happen if the front wheels could be turned up to 30 degrees in either direction (right or left), but the rear wheels did not turn.

There was no doubt among readers that this was a geometry puzzle, but the challenge lay in translating the constraints on the wheels (i.e., only turning a certain amount or not at all) into the resulting motion of the truck.

As suggested by the term “turning radius,” the key was to think about the circular motion of the truck. When the driver rotated the front wheels a full 30 degrees in one direction and drove forward, both the front and the rear of the truck would move in circles. The front of the truck always made a 30 degree angle with the tangent line to the circle it was moving around.

Meanwhile, the rear wheels of the truck couldn’t turn. That meant the truck’s rear was always tangent to the circle it was moving around.

If that wasn’t clear, here’s an animation to illustrate how the truck was moving:

The green line segment represents the truck, and the circles represent the paths of the truck’s front and rear. Sure enough, the angle between the truck and the tangent line (represented by the white segment) is always 30 degrees. This means that the front of the truck is moving around a *wider* circle than the rear — and, consequently, that the front of the truck moves *faster* than the rear!

At this point, calculating the turning radius was a matter of geometry and trigonometry. If the green segment was doubled in length so that it formed a chord within the larger circle, the 30-degree angles meant that this chord was one side of an inscribed regular hexagon, whose sides all equaled the circle’s radius. And so if the truck had length *L*, the turning radius — that is, the radius of the circle around which the front of the truck moved — was **2L**. (Solvers who gave the turning radius for the truck’s midpoint or rear and explained their reasoning were also given full credit.)

That was the case when you could only turn the front wheels. You were also asked for the turning radius when both the front and rear wheels could be independently turned up to 30 degrees in either direction. To make the tightest possible circle, the front and rear wheels were both rotated the full 30 degrees, but in opposite directions, allowing the front and rear to move along the same circle. Again, here’s an animation:

This time, the truck made up a complete chord that was a side of an inscribed regular hexagon. That meant the turning radius was equal to the truck’s length, **L**. (Again, solutions for different locations on the truck were also accepted.)

A few solvers, including Laurent Lessard and Josh Silverman, tackled the general version of this problem, in which the front wheels could be turned an angle *θ*_{1} and the rear wheels could be turned an angle *θ*_{2}. The turning radius at the front of the truck was *L*·cos(*θ*_{2})/sin(*θ*_{1}+*θ*_{2}), while the turning radius at the rear was *L*·cos(*θ*_{1})/sin(*θ*_{1}+*θ*_{2}).

These formulas checked out for both questions in the riddle. And when neither wheel could turn (i.e., *θ*_{1} and *θ*_{2} were both zero), the turning radius went to infinity, which also made sense. In that case, you’d just have to keep on truckin’.
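As a quick numerical sanity check (a sketch in Python, not from the column itself), the general formulas reproduce both answers from the riddle, and they blow up to infinity when neither wheel can turn:

```python
import math

def turning_radii(L, theta1_deg, theta2_deg):
    """Front and rear turning radii for a truck of length L whose front
    wheels are turned theta1 degrees and rear wheels theta2 degrees,
    using R_front = L*cos(t2)/sin(t1+t2) and R_rear = L*cos(t1)/sin(t1+t2)."""
    t1 = math.radians(theta1_deg)
    t2 = math.radians(theta2_deg)
    s = math.sin(t1 + t2)
    if s == 0:
        return math.inf, math.inf  # wheels straight: the truck can't turn
    return L * math.cos(t2) / s, L * math.cos(t1) / s

# Question 1: front wheels at 30 degrees, rear fixed -> front radius 2L
print(turning_radii(1, 30, 0))   # ≈ (2.0, 1.732)
# Question 2: both sets of wheels at 30 degrees -> radius L at both ends
print(turning_radii(1, 30, 30))  # ≈ (1.0, 1.0)
```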

Email Zach Wissner-Gross at riddlercolumn@gmail.com


Every weekend, I drive into town for contactless curbside pickup at a local restaurant. Across the street from the restaurant are six parking spots, lined up in a row.

While I *can* parallel park, it’s definitely not my preference. No parallel parking is required when the rearmost of the six spots is available or when there are two consecutive open spots. If there is a random arrangement of cars currently occupying four of the six spots, what’s the probability that I will have to parallel park?

The solution to this Riddler Express can be found in the following week’s column.

Parking cars is one thing — parking trucks is another thing entirely. Suppose I’m driving a *very long* truck (with length *L*) with two front wheels and two rear wheels. (The truck is so long compared to its width that I can consider the two front wheels as being a single wheel, and the two rear wheels as being a single wheel.)

Question 1: Suppose I can rotate the front wheels up to 30 degrees in either direction (right or left), but the rear wheels do not turn. What is the truck’s turning radius?

Question 2: Suppose I can also rotate the rear wheels — independently from the front wheels — 30 degrees in either direction. *Now* what is the truck’s turning radius?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 Derek Carnegie 👏 of Geneva, Switzerland, winner of last week’s Riddler Express.

Last week, you considered “hip” numbers that could be written as the difference between two perfect squares. For example, the number 40 was hip, since it equals 7^{2}−3^{2}, or 49−9. But hold the phone, 40 was *doubly* hip, because it also equals 11^{2}−9^{2}, or 121−81. Meanwhile, 42 was not particularly hip.

It was then left to you to determine the hipness of the number 1,400 — how many ways could 1,400 be written as the difference of two perfect squares?

You could have tested out lots of square numbers or written code to do it for you, like solver Eero Kuusi from Helsinki, Finland. But most readers recognized that any difference of squares, written as *A*^{2}−*B*^{2}, could be factored as (*A*+*B*)(*A*−*B*). So to find the different possible values of *A* and *B*, you had to identify the factor pairs of 1,400 — the greater factor was equal to *A*+*B* and the lesser factor was equal to *A*−*B*. In other words, *A* was the average of the two factors, while *B* was half their difference.

The number 1,400 had a prime factorization of 2^{3}×5^{2}×7^{1}. So for a number to be a divisor of 1,400, it had to have between zero and three powers of 2, between zero and two powers of 5 and zero or one power of 7. That meant 1,400 had a total of 24 factors (four times three times two), or 12 factor pairs. Here they are:

- 1 and 1,400
- 2 and 700
- 4 and 350
- 5 and 280
- 7 and 200
- 8 and 175
- 10 and 140
- 14 and 100
- 20 and 70
- 25 and 56
- 28 and 50
- 35 and 40

At this point, you might have thought the answer was 12. But not so fast! Remember, when it came to the difference of squares, the root of the greater square was the *average* of the two factors. For this to be a whole number, either both factors had to be even or both factors had to be odd. Among the 12 factor pairs of 1,400, six of them had one even number and one odd number (1×1,400, 5×280, 7×200, 8×175, 25×56 and 35×40). The remaining factor pairs had two even numbers. (Since 1,400 was an even number, it couldn’t have been the product of two odd factors.)

In all, there were six factor pairs with two even numbers, which meant **six** was the correct answer — 1,400 could be written as the difference of squares in exactly six ways:

- 351^{2} − 349^{2} = 123,201 − 121,801 = 1,400
- 177^{2} − 173^{2} = 31,329 − 29,929 = 1,400
- 75^{2} − 65^{2} = 5,625 − 4,225 = 1,400
- 57^{2} − 43^{2} = 3,249 − 1,849 = 1,400
- 45^{2} − 25^{2} = 2,025 − 625 = 1,400
- 39^{2} − 11^{2} = 1,521 − 121 = 1,400
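The factor-pair reasoning is easy to verify with a short brute-force script (a sketch in Python, not any solver’s actual code): each factor pair (*f*, 1,400/*f*) with both factors of the same parity yields *A* = (*f* + 1,400/*f*)/2 and *B* = (1,400/*f* − *f*)/2.

```python
# Count the ways 1,400 can be written as A^2 - B^2 = (A+B)(A-B).
n = 1400
ways = []
for f in range(1, int(n**0.5) + 1):
    if n % f == 0:
        g = n // f
        if (f + g) % 2 == 0 and g > f:  # same parity -> whole-number A and B
            ways.append(((g + f) // 2, (g - f) // 2))

print(len(ways))  # 6
print(ways)  # [(351, 349), (177, 173), (75, 65), (57, 43), (45, 25), (39, 11)]
```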

For extra credit (which admittedly went well beyond “Riddler Express territory”), you had to find a general approach for determining how many ways a whole number *N* could be written as the difference of squares. Like the original puzzle, a good first step was to start with the number’s prime factorization, which we could write as *N* = 2^{a}×3^{b}×5^{c}×7^{d}×….

Again, you wanted to determine the number of unique factor pairs in which both factors were even or both factors were odd. But first, it was helpful to count the total number of *odd* factors, (*b*+1)(*c*+1)(*d*+1)…, which we’ll call *F*.

When *a* was zero, all *F* of *N*’s factors were odd. That meant *N* had *F*/2 factor *pairs*. (Okay, technically it was the ceiling of *F*/2, accounting for cases in which *N* was a square number.) And when *a* was one, every factor pair had exactly one even number and one odd number, so there were no ways to write *N* as a difference of squares.

When *a* was greater than one, at least one number in each factor pair was even, so you wanted to count the factor pairs in which *both* numbers were even. There were (*a*+1)×*F*/2 total factor pairs, *F* of which included an odd factor. That meant there were (*a*−1)×*F*/2 pairs of even factors, and so this was the number of ways you could write *N* as a difference of squares. (Again, it was technically the ceiling of (*a*−1)×*F*/2, accounting for when *N* was a square number.) In other words, when *N* was even and divisible by 4, the answer was the number of factor pairs for *N*/4.

This math checked out in the case when *N* was 1,400, or 2^{3}×5^{2}×7^{1}. For this value of *N*, *N*/4 was 350, or 2^{1}×5^{2}×7^{1}, which had 12 factors, or six factor pairs — the answer to the Express.
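The general rule can also be sketched as code and checked against a direct search (the function names are my own; the cases follow the argument above):

```python
from math import isqrt

def ways_difference_of_squares(n):
    """Closed-form count: strip out the power of 2 (exponent a), count the
    odd divisors F, then apply the a = 0 / a = 1 / a >= 2 cases."""
    a, m = 0, n
    while m % 2 == 0:
        a += 1
        m //= 2
    F = sum(1 for d in range(1, m + 1) if m % d == 0)  # odd divisors of n
    if a == 0:
        return (F + 1) // 2           # ceiling of F/2 (handles odd squares)
    if a == 1:
        return 0                      # every pair has one even, one odd factor
    return ((a - 1) * F + 1) // 2     # ceiling of (a-1)*F/2

def ways_brute_force(n):
    """Count pairs A > B >= 0 with A^2 - B^2 == n, by direct search."""
    count = 0
    for b in range(n // 2 + 1):       # A - B >= 1 forces B < n/2
        a = isqrt(n + b * b)
        if a * a == n + b * b:
            count += 1
    return count

assert all(ways_difference_of_squares(n) == ways_brute_force(n)
           for n in range(1, 500))
print(ways_difference_of_squares(1400))  # 6
```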

Anyway, no matter your approach, 1,400 was indeed a sextuply hip number.

Congratulations to 👏 Gary M. Gerken 👏 of Littleton, Colorado, winner of last week’s Riddler Classic.

Last week, I had 10 chocolates in a bag: Two were milk chocolate, while the other eight were dark chocolate. One at a time, I randomly pulled chocolates from the bag and ate them — that is, until I picked a chocolate of the other kind. When I got to the other type of chocolate, I put it back in the bag and started drawing again with the remaining chocolates. I kept going until I had eaten all 10 chocolates.

For example, if I first pulled out a dark chocolate, I ate it. (I always ate the first chocolate I pulled out.) If I pulled out a second dark chocolate, I ate that as well. If the third one was milk chocolate, I did not eat it (yet), and instead placed it back in the bag. Then I started again, eating the first chocolate I pulled out.

What were the chances that the *last* chocolate I ate was milk chocolate?

But before we get into any analytical solutions, let’s check in with our Monte Carlo-minded friends. Ian Greengross and Kyle Giddon both ran 1 million simulations of the problem, while Josh Silverman ran 10 million simulations. All three of them found the same result: In about half of the simulations, the last chocolate was milk chocolate. Could it be that the answer was simply… a half?
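For readers who want to replicate the Monte Carlo approach, here’s a minimal simulation sketch in Python (not the solvers’ actual code):

```python
import random

def last_is_milk(milk=2, dark=8):
    """Simulate one full run of the drawing process; return True if the
    last chocolate eaten is milk."""
    bag = ["M"] * milk + ["D"] * dark
    streak = None                 # the type currently being eaten
    while len(bag) > 1:
        pick = bag[random.randrange(len(bag))]
        if streak is None or pick == streak:
            streak = pick
            bag.remove(pick)      # eat it
        else:
            streak = None         # put it back; next pick starts a new streak
    return bag[0] == "M"

trials = 100_000
p = sum(last_is_milk() for _ in range(trials)) / trials
print(p)  # hovers around 0.5
```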

Yes! The answer was precisely **a half**. What follows is a proof of why this is true, based on the explanation of solver Guy D. Moore:

Suppose there are two chocolates in the bag: one dark and one milk. In this “base case,” they each have a 50 percent chance of being the last (i.e., the second) chocolate you’ll eat.

Next, we’ll take a mathematical leap of faith. Suppose we’ve proven — somehow — that there’s a 50 percent chance the last chocolate is either milk or dark chocolate whenever the total starting number of chocolates is less than some value *N*. What would happen if I now started with *N* chocolates?

Well, one of three things could happen. (Again, let’s suppose that *d* of these *N* chocolates are dark and *m* are milk. Note that this implies *d*+*m* = *N*, since every chocolate must be either dark or milk.)

- I go on a “dark” streak and pick out all *d* dark chocolates first, which means the last chocolate I eat will be milk. This is one of the *N* choose *d* total ways to order the chocolates, so the probability this occurs is one divided by *N* choose *d*, which is *d*!(*N*−*d*)!/*N*!, or *d*!*m*!/*N*!, since *N*−*d* equals *m*.
- I go on a “milk” streak and pick out all *m* milk chocolates first, which means the last chocolate I eat will be dark. Again, this is one of the *N* choose *d* ways to order the chocolates, so the probability this occurs is *also* *d*!*m*!/*N*!.
- I don’t go on either streak. I’ll eat some milk or dark chocolates until I pick out a chocolate of the other kind. At this point, I’m effectively starting over with a bag that has fewer than *N* chocolates. By my earlier leap of faith, I assumed the probability of now finishing with a milk chocolate was 50 percent.

So then what are the chances of finishing with a milk chocolate when there are *N* chocolates in the bag? It’s 50 percent! That’s because finishing with dark or milk is equally likely in the third case, and equally likely when you consider the first and second cases together.

Let’s take a final step back to get a bigger picture of what just happened. When there were two chocolates in the bag (one of each type), you had a 50 percent chance of finishing with a milk chocolate. We also showed that if there’s a 50 percent chance for fewer than *N* chocolates, there’s also a 50 percent chance for *N* chocolates. Since it was 50 percent when there were two chocolates, it must also be true when there were three. That means it must also be true when there were four. And five. And then six. And so on!

All of this amounts to a proof by induction: showing something is true for a “base case” (here, two), and then using an induction step to show it is true for all higher cases.

Indeed, this was a beautiful problem with an even more beautiful result. No matter how many chocolates of either type you started with — as long as you had at least one of each — the chances of finishing with either were precisely 50 percent.
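The result can also be confirmed exactly (no sampling) with a short memoized recursion over the bag’s state. This is a sketch of my own, not from the column; the “streak” argument tracks which type is currently being eaten:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def p_last_milk(m, d, streak=None):
    """Exact probability the last chocolate eaten is milk, given m milk and
    d dark chocolates in the bag and the type currently on a streak."""
    if m == 0:
        return Fraction(0)
    if d == 0:
        return Fraction(1)
    n = m + d
    if streak is None:  # the next pick is eaten no matter what
        return (Fraction(m, n) * p_last_milk(m - 1, d, "M")
                + Fraction(d, n) * p_last_milk(m, d - 1, "D"))
    if streak == "M":   # milk continues the streak; dark goes back, resets
        return (Fraction(m, n) * p_last_milk(m - 1, d, "M")
                + Fraction(d, n) * p_last_milk(m, d, None))
    return (Fraction(d, n) * p_last_milk(m, d - 1, "D")  # streak == "D"
            + Fraction(m, n) * p_last_milk(m, d, None))

print(p_last_milk(2, 8))  # 1/2
```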

In a neat little extension, solver Laurent Lessard proved that this was *not* the case when there were three types of chocolates. So it’s a good thing we left out the white chocolate. (That, and because it’s not even chocolate.)

Email Zach Wissner-Gross at riddlercolumn@gmail.com


From Benjamin Dickman comes proof that it’s hip to be (a difference of) squares:

Benjamin likes numbers that can be written as the difference between two perfect squares. He thinks they’re hip. For example, the number 40 is hip, since it equals 7^{2}−3^{2}, or 49−9. But hold the phone, 40 is *doubly* hip, because it also equals 11^{2}−9^{2}, or 121−81.

With apologies to Douglas Adams, 42 is not particularly hip. Go ahead and try finding two perfect squares whose difference is 42. I’ll wait.

Now, Benjamin is upping the stakes. He wants to know just how hip 1,400 might be. Can you do him a favor, and figure out how many ways 1,400 can be written as the difference of two perfect squares? Benjamin will really appreciate it.

*Extra credit:* Can you find a general formula or approach for counting the number of ways *any* whole number can be written as the difference between two perfect squares? (Your approach might be a function of whether the number is even or odd, its prime factorization, etc.)

The solution to this Riddler Express can be found in the following week’s column.

From mathematician (and author of Basic Probability: What Every Math Student Should Know) Henk Tijms comes a choice matter of chance and chocolate:

I have 10 chocolates in a bag: Two are milk chocolate, while the other eight are dark chocolate. One at a time, I randomly pull chocolates from the bag and eat them — that is, until I pick a chocolate of the other kind. When I get to the other type of chocolate, I put it back in the bag and start drawing again with the remaining chocolates. I keep going until I have eaten all 10 chocolates.

For example, if I first pull out a dark chocolate, I will eat it. (I’ll always eat the first chocolate I pull out.) If I pull out a second dark chocolate, I will eat that as well. If the third one is milk chocolate, I will not eat it (yet), and instead place it back in the bag. Then I will start again, eating the first chocolate I pull out.

What are the chances that the *last* chocolate I eat is milk chocolate?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 Karen Campe 👏 of New Canaan, Connecticut, winner of last week’s Riddler Express.

Last week, you took a swing at a riddle about golf. (See what I did there?)

A typical hole is about 400 yards long, while the cup measures a mere 4.25 inches in diameter. Suppose that, with every swing, you hit the ball *X* percent closer to the center of the hole. For example, if *X* were 75, then with every swing the ball would be four times closer to the hole than it was previously.

For a 400-yard hole with no hazards (water, sand or otherwise) in the way, what was the minimum value of *X* so that you shot par, meaning you hit the ball into the cup in exactly four strokes?

First, it was a good idea to work with a single set of units. One yard equals 3 feet, or 36 inches. That meant the total length of the hole — 400 yards — was 14,400 inches. Meanwhile, the cup’s diameter was 4.25 inches, which meant its radius was 2.125 inches. So once the ball came within 2.125 inches of the hole’s center, it fell in. (Some readers used the diameter instead of the radius here, which led to an answer that was off by a few percentage points.)

Most solvers left the value of *X* as a variable. As we said, the initial distance to the hole was 14,400 inches. After one swing, it was 14,400·(1−*X*). After two swings, it was 14,400·(1−*X*)^{2}. After three swings, it was 14,400·(1−*X*)^{3}. Finally, after four swings, it was 14,400·(1−*X*)^{4}.

You shot par when this distance, 14,400·(1−*X*)^{4}, was equal to 2.125 inches (or, rather, just a shade less than 2.125 inches). Mathematically, that meant 14,400·(1−*X*)^{4} = 2.125, or (1−*X*)^{4} = 17/115,200. Taking the fourth root of both sides and solving gave a solution of approximately 0.88978. In other words, *X* had to be at least about **89** for you to make par.

Some solvers took this problem even further, calculating the value of *X* for holes that were longer or shorter and also for different numbers of strokes. For example, what would it take to shoot a birdie (a score of one less than par — in this case, three strokes) on this hole? To answer this question, you had to solve the equation (1−*X*)^{3} = 17/115,200, with an exponent of three rather than four. That gave a value of *X* of approximately 95. In other words, to go from a par to a birdie, you would need to hit the ball about twice as close to the hole with each shot.
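The arithmetic for both the par and birdie thresholds is easy to check (a quick Python sketch):

```python
# Minimum per-shot fraction X so that 14,400 * (1 - X)^k <= 2.125 inches
hole = 400 * 36           # hole length: 400 yards in inches
cup_radius = 4.25 / 2     # cup radius in inches

x_par = 1 - (cup_radius / hole) ** (1 / 4)     # four strokes (par)
x_birdie = 1 - (cup_radius / hole) ** (1 / 3)  # three strokes (birdie)
print(round(100 * x_par, 3))     # ≈ 88.978
print(round(100 * x_birdie, 3))  # ≈ 94.716
```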

Anyway, the next time you’re out on a golf course and you’re hitting the ball 89 to 95 percent of the way to the hole on each shot, be respectful to those playing ahead of you and shout, “Four!”

Golf humor. Apologies.

Congratulations to 👏 Jez Schaa 👏 of Singapore, winner of last week’s Riddler Classic.

Last week, you had half of a 10-inch pizza that you were cramming into your fridge. You had circular plates of all different sizes, so to save space in your fridge, you wanted to place the pizza on the smallest plate that held the entire semicircle (i.e., with no pizza hanging off the plate).

Unfortunately, the smallest plate that could hold half of a 10-inch pizza was just a circle with a 10-inch diameter. So much for saving space.

But there was a catch — you were allowed to make a single straight cut and then rearrange the two resulting pieces on a circular plate, provided the pieces didn’t overlap and no pizza was hanging off the plate.

What was the diameter of the *smallest* circular plate onto which you could fit the two resulting pieces?

Reader Pierre Bierre put it best: “This [was] a humdinger of a geometry Riddler.” What made this problem so difficult? Consider all the dimensions at play here:

- First, should both endpoints of the cut have been on the circumference, or should one have been along the circumference and the other on the diameter? The latter was a better bet.
- Where along the circumference should the first endpoint have been?
- Where along the diameter should the second endpoint have been?
- Finally, how should you have arranged the two pieces so that they were enclosed by the smallest possible circle?

Ideally, you would have turned those last three questions into three parameters. From there, you had to find the radius of the enclosing circle as a function of those three parameters and then minimize it (either with calculus or computationally). So yes, this was certainly a “humdinger.”

Some readers cheated and had the two pieces overlapping. In this case, the best you could do was to cut the semicircle in half, forming two quarter-circles that could be stacked directly on top of each other. The smallest plate each quarter-circle could fit on had a diameter of 5√2, or about 7.07 inches. While this was not the correct answer, it served as a good lower bound — the answer could be no less than 7.07 inches.

Tom Singer of Melbourne, Florida, was able to properly fit his semicircular pizza onto a plate with a diameter of approximately 8.79 inches by assuming the cut was perpendicular to the slice’s diameter. Here’s a similar arrangement, courtesy of the puzzle’s submitter, Dean Ballard:

But if you made a cut that was *not* necessarily perpendicular to the diameter, it was possible to do even better. The best anyone was able to do was about **8.16 inches**. Again, here’s an illustration from Dean:

And if that wasn’t hard enough, last week’s extra credit had you making *two* straight cuts, resulting in three pieces to fit on your plate. Whereas the one-cut problem involved optimizing across three parameters, the two-cut version had those same parameters, plus many more. In particular, you had to choose which piece received the second cut, the particular edges or curves that were the endpoints of this second cut and how you could arrange all three pieces as tightly as possible.

Dean made his first cut perpendicular to the diameter, achieving a minimum plate that was about 7.737 inches in diameter (shown below). Meanwhile, Steven Trautmann of Aurora, Colorado wrote some C++ code to solve this for him, which gave an answer of about 7.584 inches.

As far as I’m concerned, this remains an open problem. If you find a plate size that works and that’s smaller than these answers, let me know (preferably with an image included!).

Email Zach Wissner-Gross at riddlercolumn@gmail.com


From Dan Levin comes a hazardous riddle for the ironclad and eagle-eyed:

The U.S. Open concluded last weekend, with physics major Bryson DeChambeau emerging victorious. Seeing his favorite golfer win his first major got Dan thinking about the precision needed to be a professional at the sport.

A typical hole is about 400 yards long, while the cup measures a mere 4.25 inches in diameter. Suppose that, with every swing, you hit the ball *X* percent closer to the center of the hole. For example, if *X* were 75 percent, then with every swing the ball would be four times closer to the hole than it was previously.

For a 400-yard hole, assuming there are no hazards (water, sand or otherwise) in the way, what is the minimum value of *X* so that you’ll shoot par, meaning you’ll hit the ball into the cup in exactly four strokes?

The solution to this Riddler Express can be found in the following week’s column.

From Dean Ballard comes a pernicious pizza puzzle:

Dean ordered a personal pizza that was precisely 10 inches in diameter. He ate half of it, and he wants to save the remaining semicircle of pizza in his refrigerator. He has circular plates of all different sizes, so to save space in his fridge, he’ll place the pizza on the smallest plate that holds the entire semicircle (i.e., with no pizza hanging off the plate).

Unfortunately, the smallest plate that can hold half of a 10-inch pizza is just a circle with a 10-inch diameter. So much for saving space.

But Dean has a thought: If he *cuts* the pizza, he can squeeze both of the resulting pieces onto a smaller circular plate — again, with no pizza hanging off the plate and without the pieces lying on top of each other.

If Dean makes a single straight slice, what is the diameter of the *smallest* circular plate onto which he can fit the two resulting pieces?

*Extra credit:* Dean wants to save even more space in his fridge. So instead of one straight slice, he will now make *two* straight slices. First, he will cut the semicircular pizza into two pieces. Then, he will take one of those pieces and make his second slice, giving him a total of three pieces. What is the diameter of the smallest circular plate onto which he can fit all three pieces?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 Graham E. McGrath 👏 of Boston, Massachusetts, winner of last week’s Riddler Express. (This problem might have hit a little too close to home for Graham, who is a lab technician and uses centrifuges for a living!)

Last week, you were doing your best not to break a microcentrifuge, a piece of equipment that separates components of a liquid by spinning around very rapidly. Liquid samples were pipetted into small tubes, which were then placed in one of the microcentrifuge’s 12 slots evenly spaced in a circle.

For the microcentrifuge to work properly, each tube had to hold the same amount of liquid. Also, importantly, the center of mass of the samples had to be at the very center of the circle — otherwise, the microcentrifuge would not be balanced and likely break.

You needed to spin exactly seven samples. In which slots (numbered 1 through 12, as in the diagram above) could you have placed them so that the centrifuge was balanced?

Balancing centrifuges is certainly not a new problem, and was previously addressed on the Numberphile YouTube channel, with Holly Krieger explaining the solution.

In the case of 12 slots, it was always possible to balance the centrifuge for any even number of tubes. All you had to do was arrange them into pairs and place each pair on diametrically opposite ends of the circle. The center of mass of each pair was at the center of the circle, which meant the total center of mass was also at the center.

Alas, the problem asked how you could place seven tubes, and seven is not an even number. Several readers suggested adding an eighth tube, which was against the spirit of the puzzle.

As for odd numbers of tubes, balancing a single tube was not possible. However, it was indeed possible to balance three tubes by arranging them evenly around the circle so they were the vertices of an equilateral triangle (e.g., in positions 1, 5 and 9 in the diagram above). It was further possible to balance five tubes by again arranging three so they formed an equilateral triangle (e.g., in positions 1, 5 and 9) and another two on opposite sides (e.g., in positions 2 and 8). Again, because the centers of mass for both the trio and the pair were at the center of the circle, their combined center of mass was also at the center of the circle.

Returning to the original puzzle, it was possible to balance seven tubes by arranging three in an equilateral triangle, another two on opposite sides and the last two on a different pair of opposite sides. (Alternatively, some solvers noted that placing seven tubes was equivalent to choosing the five slots that *didn’t* have tubes, which was essentially the same problem as placing five tubes.) There were several correct answers here — more on that in a second — one of which was slots **1, 2, 5, 6, 8, 9 and 12**, found by solver Alicia Zamudio.

For extra credit, you had to find the total number of balanced arrangements for seven tubes. It turned out that there were exactly **12 arrangements**, all rotations of each other. Solver Reece Goiffon verified this by finding all the ways seven vectors on a unit circle could add up to zero:

Meanwhile, in what I believe is a first for The Riddler, Chris Sears wrote code in Commodore 64 BASIC, checking all 792 (i.e., 12 choose 7) ways to place seven tubes in 12 slots.
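That brute-force check is compact in any language; here’s a sketch in Python (my own, not Chris’s BASIC). A placement is balanced exactly when the unit vectors pointing at the occupied slots sum to zero:

```python
from cmath import exp, pi
from itertools import combinations

# Unit vectors to the 12 evenly spaced slots, as complex numbers.
slots = [exp(2j * pi * k / 12) for k in range(12)]

# Keep the 7-slot subsets whose vectors sum to (numerically) zero.
balanced = [combo for combo in combinations(range(12), 7)
            if abs(sum(slots[k] for k in combo)) < 1e-9]

print(len(balanced))                 # 12
print([k + 1 for k in balanced[0]])  # one balanced arrangement, 1-indexed
```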

The general version of this problem, with *N* slots and *k* tubes, was addressed in the aforementioned Numberphile video and was further proven by T. Y. Lam and K. H. Leung back in 2000. As for how many arrangements there are for given values of *N* and *k*, there’s an OEIS sequence for that! (Thanks to solver Eric Thompson-Martin for spotting it.)

When *N* = 12 and *k* increased from 0 to 12, the number of balanced arrangements was 1, 0, 6, 4, 15, 12, 24, 12, 15, 4, 6, 0 and 1. If you look closely, you’ll see that the sequence is symmetric, since placing *k* tubes was exactly the same problem as choosing the *N*−*k* slots that did *not* hold tubes.

I encourage readers to verify these results for themselves. But not the grad students or technicians out there. I don’t need angry lab directors emailing me about broken centrifuges.

Congratulations to 👏 Lowell Vaughn 👏 of Bellevue, Washington, winner of last week’s Riddler Classic. (Lowell is finally joining his son in the winner’s circle.)

Last week, you were introduced to an online game, Guess My Word, in which you tried to guess a secret word by typing in other words. After each guess, you were told whether the secret word was alphabetically before or after your guess. The game stopped and congratulated you once you had guessed the secret word.

The secret word was randomly chosen from a dictionary with exactly 267,751 entries. If you had this dictionary memorized and played the game as efficiently as possible, how many guesses should you have expected to make to uncover the secret word?

Most solvers recognized that the most efficient strategy was a binary search, where you first guessed the middle word in the dictionary. If the secret word came later, you then guessed the middle word in the second half of the dictionary. But if the secret word came earlier, you guessed the middle word in the first half of the dictionary. With each guess, you narrowed the possibilities by approximately half, and there was also a chance that the guess itself was the secret word.

The sequence of guesses could therefore be structured as a complete binary tree whose top node was your first guess (the exact middle word in the dictionary). If the secret word came earlier in the dictionary, you moved to the left; if the secret word came later in the dictionary, you moved to the right. Eventually, you’d find the secret word.

Now that you had the optimal guessing strategy, all that was left was calculating the average number of guesses. In your binary tree, one word — the middle word in the dictionary — would be guessed on the very first attempt. Two words required two guesses, four words required three guesses, eight words required four guesses, 16 words required five guesses, and so on. For each additional guess, there were twice as many words.

It turned out that the number of words in the dictionary, 267,751, was 1 + 2^{1} + 2^{2} + 2^{3} + … + 2^{16} + 2^{17} + 5,608. And each term in the sum required one more guess than the term before it. To find the average number of guesses, you had to divide each of these terms by 267,751 and then multiply by the corresponding number of guesses. In the end, the average was approximately **17.042 guesses**.
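The averaging step is easy to reproduce (a Python sketch): one word is found on guess 1, two on guess 2, four on guess 3, and so on, with the final partial level of 5,608 words found on guess 19.

```python
# Expected guesses for an optimal binary search over 267,751 words.
N = 267_751
total_guesses = 0
placed = 0
level_size, guesses = 1, 1   # level sizes 1, 2, 4, ... down the binary tree
while placed < N:
    count = min(level_size, N - placed)  # the last level is only partly full
    total_guesses += count * guesses
    placed += count
    level_size *= 2
    guesses += 1

print(total_guesses / N)  # ≈ 17.042
```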

Several solvers, like Michael Branicky, Josh Silverman and Rajeev Pakalapati, generalized this method, finding a formula for the expected number of guesses for a dictionary with *any* number of words. Meanwhile, Emma Knight went in a completely different direction — recursion — still arriving at the same solution.

Thanks again to Oliver Roeder for introducing Riddler Nation to this fun word game. (If only it weren’t *so* darn addictive.)

Email Zach Wissner-Gross at riddlercolumn@gmail.com


From Quoc Tran comes a curious case of centrifugation:

Quoc’s lab has a microcentrifuge, a piece of equipment that can separate components of a liquid by spinning around very rapidly. Liquid samples are pipetted into small tubes, which are then placed in one of the microcentrifuge’s 12 slots evenly spaced in a circle.

For the microcentrifuge to work properly, each tube must hold the same amount of liquid. Also, importantly, the center of mass of the samples must be at the very center of the circle — otherwise, the microcentrifuge will not be balanced and may break.

Quoc notices that there is no way to place exactly one tube in the microcentrifuge so that it will be balanced, but he can place two tubes (e.g., in slots 1 and 7).

Now Quoc needs to spin exactly seven samples. In which slots (numbered 1 through 12, as in the diagram above) should he place them so that the centrifuge will be balanced?

*Extra credit:* Assuming the 12 slots are distinct, how many different balanced arrangements of seven samples are there?

From Oliver Roeder, who knows a thing or two about riddles, comes a labyrinthine matter of lexicons:

One of Ollie’s favorite online games is Guess My Word. Each day, there is a secret word, and you try to guess it as efficiently as possible by typing in other words. After each guess, you are told whether the secret word is alphabetically before or after your guess. The game stops and congratulates you when you have guessed the secret word. For example, the secret word was recently “nuance,” which Ollie arrived at with the following series of nine guesses: naan, vacuum, rabbi, papa, oasis, nuclear, nix, noxious, nuance.

Each secret word is randomly chosen from a dictionary with exactly 267,751 entries. If you have this dictionary memorized, and play the game as efficiently as possible, how many guesses should you expect to make to guess the secret word?

Congratulations to 👏 Seth Cohen 👏 of Concord, winner of last week’s Riddler Express.

Last week, you were helping the folks from Blacksburg, Greensboro and Silver Spring, who were getting together for a game of pickup basketball. Each week, anywhere from one to five individuals showed up from each town, with each outcome equally likely.

Using all the players that showed up, they wanted to create exactly two teams of equal size. Everyone was wearing a jersey that matched the color mentioned in the name of their town. To avoid confusion, they agreed that the residents of two towns should combine forces to play against the third town’s residents.

What was the probability that, on any given week, it was possible to form two equal teams with everyone playing, where two towns were pitted against the third?

Since there were five possibilities for the number of players from each of the three towns, that meant the total number of outcomes to consider here was 5×5×5, or 125. Some solvers, like the Highlands Latin School statistics class in Louisville, Kentucky, analyzed all 125 cases, finding the number of cases in which two towns combined to have the same number of players as the third. The probability was then *that* number divided by 125.

Meanwhile, many solvers worked backwards, starting with all the different ways to have two numbers add up to the third, and then counting up the corresponding number of permutations. For example, the towns could have had one, three and four players respectively, since 1+3=4. There were then six total ways to assign these numbers to the three towns: 1/3/4, 1/4/3, 3/1/4, 3/4/1, 4/1/3 and 4/3/1.

In total, there were six ways for two whole numbers between 1 and 5 to add up to another number between 1 and 5. Here they are, along with how many ways each set of numbers could be assigned to the three towns:

- 1+1=2, three ways
- 1+2=3, six ways
- 1+3=4, six ways
- 1+4=5, six ways
- 2+2=4, three ways
- 2+3=5, six ways

Adding these up, there were 30 total ways for two towns to be fairly matched up against the third. Since there were 125 total outcomes to consider, the probability of a fair match was 30/125, or **24 percent**.
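A few lines of Python (a sketch of the brute-force approach, not the statistics class's actual work) reproduce that count of 30:

```python
from itertools import product

# Count the outcomes (a, b, c), each equally likely from 1 to 5, in which
# two towns' headcounts sum to the third's.
fair = sum(1 for a, b, c in product(range(1, 6), repeat=3)
           if a + b == c or a + c == b or b + c == a)
print(fair, "/ 125 =", fair / 125)  # 30 / 125 = 0.24
```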

For extra credit, you looked at a broader version of the puzzle, in which each town had anywhere from one to *N* players, rather than just one to five. Just as the denominator in the original riddle was 5^{3}, here it was *N*^{3}. But finding the numerator was trickier work.

Solvers Nicholas Robbins (from Blacksburg, Virginia!) and Alberto Rorai both supposed that Blacksburg happened to have the most players among the three towns. If Blacksburg had two players, there was only one way for the other two towns to make a fair match (1+1). If Blacksburg had three players, there were two ways (1+2 and 2+1). If Blacksburg had four players, there were three ways (1+3, 2+2 and 3+1). This pattern continued all the way up to when Blacksburg had *N* players, when there were *N*−1 ways. Counting these all up gave you 1+2+3+…+(*N*−1), or *N*(*N*−1)/2.

But wait! That was only when Blacksburg had the most players. What about Greensboro and Silver Spring? To account for them, Alberto multiplied by three (since there were *three* towns), which meant there were 3*N*(*N*−1)/2 ways to have two numbers add up to the third number.

That was the numerator for our probability, while the denominator was *N*^{3}. Dividing them gave a final answer of **3(N−1)/(2N^{2})**. Sure enough, this checked out for the case when *N* was 5, giving 3(4)/(2·25), or 24 percent.
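If you'd like to double-check the general formula against brute force, a short script (my own sketch) does the trick:

```python
from itertools import product

def fair_probability(n):
    """Brute-force probability that two towns' counts sum to the third's,
    when each town sends 1 to n players."""
    hits = sum(1 for a, b, c in product(range(1, n + 1), repeat=3)
               if a + b == c or a + c == b or b + c == a)
    return hits / n ** 3

# Matches 3(N-1)/(2N^2) for a range of N:
for n in range(2, 15):
    assert abs(fair_probability(n) - 3 * (n - 1) / (2 * n ** 2)) < 1e-12
print(fair_probability(5))  # 0.24
```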

When the towns move on to softball, they’ll definitely need a new system for assigning teams.

Congratulations to 👏 Mikolaj Franaszczuk 👏 of New York, New York, winner of last week’s Riddler Classic.

Last week marked the return of the Tour de FiveThirtyEight. For every mountain in the bicycle race, the first few riders to reach the summit were awarded “King of the Mountain” points.

You were racing against three other riders up one of the mountains. The first rider over the top would get 5 points, the second rider would get 3, the third rider would get 2, and the fourth rider would get 1.

All four of you were of equal ability — that is, under normal circumstances, you all had an equal chance of reaching the summit first. You were riding for Team A, one of your opponents was riding for Team B, but *two* of your competitors were both on Team C, meaning they could work together, drafting and setting a tempo up the mountain. Whichever teammate happened to be slower on the climb would get a boost from their faster teammate, and the two of them would both reach the summit at the faster teammate’s time (minus a very small fraction of a second).

As a lone rider, the odds were stacked against you. How many points were you expected to win on this mountain?

First off, if there hadn’t been any teams (i.e., all four riders were on their own), then the 5+3+2+1 points, or 11 points, would be evenly split among the four riders, on average. That meant you would have expected to get 2.75 points.

But the fact that there was a *team* threw a wrench into this analysis. Fortunately, like last week’s Riddler Express, this could be solved by working through a few cases. If you were Rider A, the other solo rider was B, and the two team riders were C, then you could concisely write the outcome of a race like ABCC (i.e., you came in first, then Rider B, then the two teammates). However, an outcome like CABC would turn into CCAB, since the slower teammate C would catch up to their partner.

Without further ado, here are the 12 total outcomes had the teammates *not* worked together, along with how they then finished by working together. In parentheses is the number of points you earned in each case.

- ABCC → ABCC (5 points)
- ACBC → ACCB (5 points)
- ACCB → ACCB (5 points)
- BACC → BACC (3 points)
- BCAC → BCCA (1 point)
- BCCA → BCCA (1 point)
- CABC → CCAB (2 points)
- CACB → CCAB (2 points)
- CBAC → CCBA (1 point)
- CBCA → CCBA (1 point)
- CCAB → CCAB (2 points)
- CCBA → CCBA (1 point)

Averaging these together meant you could expect 29/12, or about **2.417 points** on average. Several solvers, like Emma Beer and Sandeep Narayanaswami, grouped some of these 12 cases together (e.g., one-fourth of the time you came in first, regardless of how the teammates did) for an even more efficient calculation.
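Here's a Python sketch of that case analysis (mine, not any solver's code), which enumerates the raw finishing orders and then applies the teammate rule before averaging your points:

```python
from itertools import permutations

POINTS = [5, 3, 2, 1]

# Enumerate the equally likely "raw" finishing orders of A, B and the two
# C teammates, then move the slower C up to just behind the faster one
# (riders in between shift back), and average A's points.
total = 0
orders = list(permutations(["A", "B", "C", "C"]))  # 24 tuples, each distinct order twice
for order in orders:
    first_c = order.index("C")
    adjusted = [r for r in order if r != "C"]   # A and B keep their relative order
    adjusted[first_c:first_c] = ["C", "C"]      # both teammates finish together
    total += POINTS[adjusted.index("A")]

print(total / len(orders))  # 29/12, about 2.417 points
```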

Meanwhile, Phil Rauscher noticed that this result was exactly one-third less than 2.75, the expected number of points when there was no team. Phil took this a step further, showing that whenever there are *N* racers, you can expect to finish one-third of a place further back on average when two of your opponents are teammates. This is because your placement will only be affected when you would otherwise have finished *between* them (not when you’re ahead of both of them or behind both of them), which happens one-third of the time.

The Tour de FiveThirtyEight continues to be a grueling test of endurance and cleverness. I’ll see you at the next stage!



From Zack Beamer comes a baffling brain teaser of basketball, just in time for the NBA playoffs:

Once a week, folks from Blacksburg, Greensboro, and Silver Spring get together for a game of pickup basketball. Every week, anywhere from one to five individuals will show up from each town, with each outcome equally likely.

Using all the players that show up, they want to create exactly two teams of equal size. Being a prideful bunch, everyone wears a jersey that matches the color mentioned in the name of their city. However, since it might create confusion to have one jersey playing for both sides, they agree that the residents of two towns will combine forces to play against the third town’s residents.

What is the probability that, on any given week, it’s possible to form two equal teams with everyone playing, where two towns are pitted against the third?

*Extra credit:* Suppose that, instead of anywhere from one to five individuals per town, anywhere from one to *N* individuals show up per town. Now what’s the probability that there will be two equal teams?

The solution to this Riddler Express can be found in the following week’s column.

This month, the Tour de France is back, and so is the Tour de FiveThirtyEight!

For every mountain in the Tour de FiveThirtyEight, the first few riders to reach the summit are awarded points. The rider with the most such points at the end of the Tour is named “King of the Mountains” and gets to wear a special polka dot jersey.

At the moment, you are racing against three other riders up one of the mountains. The first rider over the top gets 5 points, the second rider gets 3, the third rider gets 2, and the fourth rider gets 1.

All four of you are of equal ability — that is, under normal circumstances, you all have an equal chance of reaching the summit first. But there’s a catch — two of your competitors are on the same *team*. Teammates are able to work together, drafting and setting a tempo up the mountain. Whichever teammate happens to be slower on the climb will get a boost from their faster teammate, and the two of them will both reach the summit at the faster teammate’s time.

As a lone rider, the odds may be stacked against you. In your quest for the polka dot jersey, how many points can you expect to win on this mountain, on average?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👑 Brendan Hill 👑 of Edmond, Oklahoma, winner of last week’s Riddler and the new ruler of Riddler Nation!

Last week was the fifth Battle for Riddler Nation, and things were a little different this time around.

In a distant, war-torn land, there were 13 castles — three more than the usual 10 from prior battles. There were two warlords: you and your archenemy. Each castle had its own strategic value for a would-be conqueror. Specifically, the castles were worth 1, 2, 3, …, 12, and 13 victory points. You and your enemy each had 100 soldiers to distribute, any way you liked, to fight at any of the 13 castles. Whoever sent more soldiers to a given castle conquered that castle and won its victory points. If you sent the same number of troops as your opponent, you split the points. You didn’t know what distribution of forces your enemy had chosen until the battles began. Whoever won the most points won the war.

I received a total of 970 battle plans. Of those, I excluded ones that were not valid, including any that had in excess of 100 troops, or, like the strategy submitted by Lowell Vaughn, tried to sneak in 101 troops to Castles 2 through 13 by having -1,112 (yes, a negative number) troops at Castle 1. Also, to keep things fair, whenever anyone submitted multiple strategies, I only counted the *last *strategy they submitted. In the end, there were 821 valid strategies.

Next, I ran all 336,610 one-on-one matchups, awarding one victory to each victor. In the event of a tie, both warlords were granted half a victory. Brendan Hill was the overall winner, tallying 630 wins against just 186 losses and 4 ties. Here’s a rundown of the 10 strongest warlords, along with how many soldiers they deployed to each castle:

The top 10 finishers in FiveThirtyEight’s Battle for Riddler Nation, with their distribution of soldiers for each castle and overall record

| Rank | Name | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | W | T | L |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Brendan Hill | 0 | 0 | 5 | 7 | 9 | 11 | 13 | 2 | 20 | 1 | 3 | 27 | 2 | 630 | 4 | 186 |
| 2 | Carl Schwab | 2 | 2 | 3 | 5 | 7 | 9 | 13 | 1 | 1 | 24 | 27 | 3 | 3 | 613 | 8 | 198 |
| 3 | Alex Conant | 0 | 1 | 1 | 2 | 2 | 11 | 3 | 16 | 16 | 16 | 3 | 3 | 26 | 601 | 21 | 198 |
| 4 | Fivey The Swing Voter | 0 | 0 | 1 | 1 | 3 | 12 | 13 | 3 | 18 | 2 | 27 | 1 | 19 | 592 | 22 | 206 |
| 5 | Kyle P. | 0 | 0 | 0 | 1 | 1 | 10 | 13 | 16 | 2 | 3 | 3 | 28 | 23 | 599 | 4 | 217 |
| 6 | Jonathan Siegel | 0 | 0 | 0 | 1 | 2 | 2 | 12 | 15 | 15 | 22 | 2 | 27 | 2 | 592 | 14 | 214 |
| 7 | Eric V. | 0 | 0 | 0 | 1 | 1 | 1 | 12 | 14 | 16 | 21 | 3 | 28 | 3 | 593 | 5 | 222 |
| 8 | David Zhu | 0 | 2 | 2 | 2 | 9 | 13 | 14 | 15 | 6 | 3 | 28 | 3 | 3 | 593 | 4 | 223 |
| 9 | Jonathan Hawkes | 0 | 2 | 3 | 5 | 9 | 7 | 11 | 16 | 3 | 3 | 24 | 8 | 9 | 590 | 8 | 222 |
| 10 | Matthew Altman | 2 | 1 | 3 | 3 | 5 | 3 | 11 | 16 | 21 | 25 | 3 | 1 | 6 | 589 | 7 | 224 |

In previous battles, when there were just 10 castles, there were 55 points in play. As long as you won more than half of them — that is, at least 28 points — you were guaranteed a victory. The top strategies clustered soldiers into a small number of castles worth exactly 28 points. It took at least four castles to achieve 28 points, and there were several ways to do it: 4+5+9+10, 3+6+9+10, etc.

This time around, with 13 castles, there were 91 points in play, which meant you needed at least 46 points to secure a victory. Two-time Battle for Riddler Nation victor Vince Vatter was the one who suggested increasing the number of castles to 13, since there was only a single way to reach 46 points by winning exactly four castles: 10+11+12+13. Vince was curious whether that strategy would prevail, or whether a strategy that targeted more castles would instead win the day.
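A quick brute-force count (the helper name is my own) confirms both claims — several four-castle combinations hit 28 points with 10 castles, but only one hits 46 with 13:

```python
from itertools import combinations

def quartets(num_castles, target):
    """All 4-castle subsets whose values sum to exactly `target` points."""
    return [q for q in combinations(range(1, num_castles + 1), 4)
            if sum(q) == target]

print(len(quartets(10, 28)))  # 9 ways with 10 castles
print(quartets(13, 46))       # [(10, 11, 12, 13)], the only way with 13
```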

Our winner, Brendan, placed at least five soldiers at a whopping seven castles. Adding the values of these castles gave 3+4+5+6+7+9+12, which was indeed exactly 46 points.

Vince did fine, by the way, coming in 69th place with 532 wins against 266 losses and 22 ties. Our previous champion, David Love, was a little lower down, coming in 335th with 438 wins, 352 losses and 30 ties.

The complete data set of strategies will be posted in the coming weeks. In the meantime, the following graph summarizes all the strategies. Each column represents a different castle, while each row is a strategy, with the strongest performers on top and the weakest on the bottom. The shading of a cell indicates the number of soldiers placed. It’s a lot to take in, but at the very least you might see a few “bands” — for example, the 10+11+12+13 strategies are clustered together in a few places, since they were similarly (somewhat) successful.

Finally, I was delighted to see that there was a fierce competition in Iowa’s Sheldon Community School District. Three classrooms — Sheldon Middle School Advanced Math, Sheldon Middle School TAG, and Sheldon High School STEM — submitted strategies. Among these, Sheldon Middle School TAG was the strongest, coming in 282nd place, with 458 wins, 347 losses and 15 ties. Definitely a *talented* group, there.


**CORRECTION (Sept. 11, 2020, 3:36 p.m.):** In an earlier version of this article, the table about Riddler Nation’s strongest warlords transposed the columns for ties and losses.

Welcome to The Riddler. Most weeks, I offer up two problems related to the things we hold dear around here: math, logic and probability.

But this week is special. The time has flown, and somehow it’s been a year since I (peacefully) took over this column from my predecessor, Oliver Roeder.

It is only fitting that we continue one of Ollie’s (and my) favorite traditions here at The Riddler: the Battle for Riddler Nation. In order to have a chance at 👑 winning 👑 and becoming the next ruler, I need to receive your battle plans before 11:59 p.m. Eastern time on Monday. Have a great weekend!

Some readers may be familiar with the first, second, third and fourth Battles for Riddler Nation. If you missed out, you may want to consult the thousands of attack distributions from these previous contests.

I am pleased to say that this week marks the *fifth* such competition — but this time, the number of castles has changed!

In a distant, war-torn land, there are 13 castles. There are two warlords: you and your archenemy. Each castle has its own strategic value for a would-be conqueror. Specifically, the castles are worth 1, 2, 3, …, 12, and 13 victory points. You and your enemy each have 100 soldiers to distribute, any way you like, to fight at any of the 13 castles. Whoever sends more soldiers to a given castle conquers that castle and wins its victory points. If you each send the same number of troops, you split the points. You don’t know what distribution of forces your enemy has chosen until the battles begin. Whoever wins the most points wins the war.

Submit a plan distributing your 100 soldiers among the 13 castles. Once we receive all your battle plans, we’ll adjudicate all the possible one-on-one matchups. Whoever wins the most wars wins the battle royale and is crowned ruler of Riddler Nation!

Who can steal the crown from 👑 David Love 👑 of Ambler, Pennsylvania, who currently sits atop the throne? Also, don’t count out Vince Vatter of Gainesville, Florida, a two-time champion eager to take back what was rightfully his.

Will *you* defeat them?

Do you have the cunning and logic to be the next ruler of Riddler Nation?

The results to this Riddler can be found in the following week’s column.

Congratulations to 👏 Dan Speirs 👏 of Newtown Square, Pennsylvania, winner of last week’s Riddler Express.

Last week, you had a giant sheet that lay flat on the ground, covering the Earth (assumed to be a perfect sphere with a radius of 6,378 kilometers).

You wanted to raise the sheet so it was instead always 1 meter off the ground. To make it so, how much did you have to increase the area of the sheet?

You were essentially being asked to find the difference in surface area between two spheres — one with the radius of the Earth, or 6,378,000 meters, and another whose radius was one meter longer, or 6,378,001 meters.

Most readers knew the formula: A sphere with radius *r* has a surface area of 4𝜋*r*^{2}. So one approach was to precisely calculate the areas of the two spheres and subtract them.

You could also have worked through a little algebra before plugging anything in. If *R* is the radius of the Earth, then the difference in surface area was 4𝜋(*R*+1)^{2}−4𝜋*R*^{2}. After canceling out the squared terms, this difference became 8𝜋*R*+4𝜋 square meters, or approximately 160.3 million square meters. Since there are a million (not a thousand!) square meters in one square kilometer, this was equivalent to **160.3 square kilometers**.
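Here's that computation in a few lines of Python:

```python
from math import pi

R = 6_378_000  # Earth's radius in meters
# Difference between the two spheres' surface areas, i.e., 8*pi*R + 4*pi
extra_area = 4 * pi * (R + 1) ** 2 - 4 * pi * R ** 2
print(extra_area / 1e6)  # about 160.3 (square kilometers)
```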

In the grand scheme of things, the answer was a pretty tiny fraction of the Earth’s surface — a shade over 0.00003 percent, to be precise.

For extra credit, you had to identify a city, country, land mass or body of water whose area was very close to the answer. As far as countries went, Liechtenstein, with an area of 160 square kilometers, was the closest.

So when it comes time to grow our Earth-covering quilt, I think the most prudent thing to do would be to head to Liechtenstein and host a quilting bee. I’ll see you there!

Congratulations to 👏 Richard Guidry Jr. 👏 of Baton Rouge, Louisiana, winner of last week’s Riddler Classic.

Last week, you wanted to play a very special game of War.

War is a two-player game in which a standard deck of cards is first shuffled and then divided into two piles with 26 cards each — one pile for each player. In every turn of the game, both players flip over and reveal the top card of their deck. The player whose card has a higher rank wins the turn and places both cards on the bottom of their pile. In the event that both cards have the same rank, the rules get a little more complicated, with each player flipping over additional cards to compare in a mini “War” showdown.

Assuming a deck was randomly shuffled before every game, how many games of War would you expect to play until you had a game that lasted exactly 26 turns, with no mini “Wars”?

As you might have guessed, such a “perfect” game of war is very rare. So rare, in fact, that trying to simulate it was a fruitless exercise. Instead, your best bet was to work out an analytical solution.

A good first step was to determine the probability *p* of having a perfect game of War (either for you or for your opponent — there was no such distinction in the problem). If we knew the value of *p*, then the probability of achieving one perfect game in exactly one attempt would be *p*, in exactly two attempts it would be (1−*p*)*p*, in exactly three attempts it would be (1−*p*)^{2}*p*, and so on. By combining these probabilities, the expected number of games until having a perfect one was then *p* + 2(1−*p*)*p* + 3(1−*p*)^{2}*p* + 4(1−*p*)^{3}*p* + …, an infinite arithmetico-geometric series whose sum was simply 1/*p*.
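The actual probability here is far too small for a direct numerical check, but a spot check with a modest stand-in value of *p* (my own sketch) confirms that the series really does sum to 1/*p*:

```python
# The expected number of games until a first success, when each game
# succeeds with probability p, is p + 2(1-p)p + 3(1-p)^2*p + ... = 1/p.
p = 0.01
partial = sum(n * (1 - p) ** (n - 1) * p for n in range(1, 10_000))
print(partial)  # very nearly 100.0, i.e., 1/p
```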

So all you had to do was find the probability of having a perfect game of War, and then compute the reciprocal of that number. Alas, determining this probability was easier said than done.

You could have tried a back-of-the-envelope calculation. If we put aside mini “Wars” for a moment, pretending that each player has a 50 percent chance of winning each turn outright, what’s the probability of your winning a perfect game? You would have to win all 26 turns, meaning your chances stood at 1/2^{26}, or approximately 1.49×10^{−8}. Pretty unlikely.

But that was still an estimate. For each turn, you actually had *less than* a 50 percent chance of winning outright, since there was a nonzero chance of a mini “War” in which your opponent’s card matched yours in rank, leaving you and your opponent to split the leftover probability. That meant 1.49×10^{−8} was an overestimate.

Finding the exact probability, it turned out, was an advanced exercise in combinatorics. Solver Laurent Lessard defined the probability as a function of the remaining number of cards of each rank. He then set up a recurrence relation and had his computer crunch the numbers via memoization.

Meanwhile, Peter Norvig similarly “abstracted” what a deck was, considering only how many cards there were of relative ranks (rather than their specific ranks), and then tracked the probability trees in which one player won all the rounds outright.

Both Laurent and Peter found that the probability of sweeping your opponent in 26 turns was approximately 3.1324×10^{−9}, not too far off from our back-of-the-envelope calculation. That meant you would expect to play about **319 million** games before winning in such a fashion. Solver Angela Zhou calculated this value more precisely, finding it was 29,908,397,871,631,390,876,014,250,000 divided by 93,686,147,409,122,073,121. Wow!

This was the expected number of games until *you* won in a sweep. The expected number of games until either you or your opponent won in a sweep was *half* as many (since the probability of either of you achieving this was twice as high), or about **159,620,171** games. The question was ambiguous as stated, so I gave credit for both answers.

Finally, if you enjoyed this riddle, you might try your hand at a few similar combinatorial challenges, courtesy of solver Eric Farmer. You’ll find boring snaps, Frustration Solitaire and prohibited subwords. Sounds like just another day at The Riddler.



Suppose you have a rope that goes all the way around the Earth’s equator, flat on the ground. (For the entirety of this puzzle, you should assume that the Earth is a perfect sphere with a radius of 6,378 kilometers.)

You want to lengthen the rope just the right amount so that it’s 1 meter off the ground all the way around the Earth. How much longer did you have to make the rope?

If you’ve never heard this puzzle before, the answer is surprisingly small — about 6.28 (i.e., 2𝜋) meters. Also, spoiler alert! (Darn, I was one sentence too late.)

Now, instead of tying the Earth up with *rope*, you’ve moved on to covering the globe with a giant *sheet* that lies flat on the ground. If you want the sheet to be 1 meter off the ground (just like the rope), by how much would you have to increase the area of your sheet?

*Extra credit:* What city, country, land mass or body of water has an area that is very close to your answer?

The solution to this Riddler Express can be found in the following week’s column.

From Duane Miller comes a riddle that is good for absolutely nothing:

Duane’s friend’s granddaughter claimed that she once won a game of War that lasted exactly 26 turns.

War is a two-player game in which a standard deck of cards is first shuffled and then divided into two piles with 26 cards each — one pile for each player. In every turn of the game, both players flip over and reveal the top card of their deck. The player whose card has a higher rank wins the turn and places both cards on the bottom of their pile. In the event that both cards have the same rank, the rules get a little more complicated, with each player flipping over additional cards to compare in a mini “War” showdown. Duane’s friend’s granddaughter said that for *every* turn of the game, she always drew the card of higher rank, with no mini “Wars.”

Assuming a deck is randomly shuffled before every game, how many games of War would you expect to play until you had a game that lasted just 26 turns with no “Wars,” like Duane’s friend’s granddaughter?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 Jacob Herlin 👏 of Denver, Colorado, winner of last week’s Riddler Express.

Last week, you were analyzing some unusual signals from deep space, measured at many regular intervals. You computed that you heard zero signals in 45 percent of the intervals, one signal in 38 percent of the intervals and two signals in the remaining 17 percent of the intervals.

Your research adviser suggested that it may just have been random fluctuations from *two* sources. Each source had some fixed probability of emitting a signal that you picked up, and together those sources generated the pattern in your data.

Was your adviser right? Was it possible for your data to have come from two random fluctuations?

At first, this may have sounded like an astronomy question, but as many solvers noted, it was a probability question in disguise. Since there were two random sources, you could assign a probability to each and see what happened.

Suppose that, for any given interval, the first source emitted a signal with probability *p*, and the second source emitted a signal with probability *q*. If you made the reasonable assumption that the sources were independent, then the probability you heard *two* signals was the product of these probabilities, *pq*. Since you heard two signals 17 percent of the time, that meant *pq* = 0.17. But that still wasn’t enough information to solve the problem.

Next, most solvers opted to analyze the 45 percent of intervals that had zero signals (although looking at the one-signal intervals was equally valid). The probability an interval had zero signals was the probability that the first source didn’t emit a signal, 1−*p*, times the probability the second source didn’t emit a signal, 1−*q*. In other words, (1−*p*)(1−*q*) = 0.45. Expanding the expression on the left gave you 1−*p*−*q*+*pq* = 0.45. Substituting the value 0.17 for *pq* and rearranging a little resulted in the equation *p*+*q *= 0.72.

At this point, there was good news and bad news. The good news was that we were looking for two values, *p* and *q*, knowing both their sum (0.72) and their product (0.17). Anytime that’s the case, we can turn to the quadratic formula. The values *p* and *q* were the solutions to the quadratic equation *x*^{2}−0.72*x*+0.17 = 0.

The bad news, as noted by solver Madeline Barnicle, was that this equation had no real solutions. In other words, **your adviser was mistaken**, and you had an opportunity to show them up. So maybe that was some good news after all.

Looking back, what were the possible probabilities of seeing zero, one or two signals? If we replaced the respective values of 0.45, 0.38 and 0.17 with *A*, *B* and *C* and performed the same analysis, the resulting quadratic equation was *x*^{2}−(1+*C*−*A*)*x*+*C* = 0. For this equation to have any real roots, it had to have a nonnegative discriminant, which meant (1+*C*−*A*)^{2} ≥ 4*C*. So only when that equation was true could you have attributed your data to two random sources.
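Here's that feasibility test in a few lines of Python (the function name is my own):

```python
# Two independent sources with signal probabilities p and q must satisfy
# p + q = 1 + C - A and p*q = C, so p and q are roots of
# x^2 - (1 + C - A)x + C = 0. Real roots require (1 + C - A)^2 >= 4C.
def two_source_feasible(A, C):
    s = 1 + C - A  # p + q
    return s * s >= 4 * C

print(two_source_feasible(0.45, 0.17))  # False: the adviser was mistaken
```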

It seemed like the case was closed.

But not so fast, said solver Daniel Taub, who read the puzzle with a different interpretation. Rather than assume each source emitted exactly zero or one signal, Daniel assumed that each source emitted signals randomly at some average rate — essentially a Poisson process — and then was baffled when no interval produced *three* signals. Solvers Reece Goiffon and Tyler James Burch ran with this interpretation, nevertheless proving that two Poisson processes cannot possibly explain the frequencies listed in the puzzle.

Well, if your data didn’t come from random noise, the truth must still be out there.

Congratulations to 👏 Patrick Boylan 👏 of Alexandria, Virginia, winner of last week’s Riddler Classic.

Last week, you were building a large pen for your pet hamster. To create the pen, you had several vertical posts, around which you were wrapping a sheet of fabric. The sheet was 1 meter long — meaning the perimeter of your pen could be at most 1 meter — and weighed 1 kilogram, while each post weighed *k* kilograms.

You also wanted your pen to be lightweight and easy to move between rooms. The total weight of the posts and the fabric you used could not exceed 1 kilogram.

For example, if *k* were 0.2, then you could have made an equilateral triangle with a perimeter of 0.4 meters (since 0.4 meters of the sheet would weigh 0.4 kilograms), or you could have made a square with perimeter of 0.2 meters. However, you couldn’t have made a pentagon, since the weight of five posts would already hit the maximum and leave no room for the sheet.

You wanted to figure out the shape that enclosed the largest area possible. What was the greatest value of *k* for which you should have used four posts rather than three?

Three posts weighed 3*k*, which meant that you had 1−3*k* meters to build your triangular pen. Importantly, given a fixed perimeter, the triangle with the greatest area was an equilateral triangle (a fact demonstrated courtesy of Keith from the United Kingdom). That meant each of the three sides was one-third of the total perimeter, or (1−3*k*)/3. With a little trigonometry, you found that the area of this triangle was then (1−3*k*)^{2}√3/36.

Meanwhile, four posts weighed 4*k*, leaving you with 1−4*k* meters for a quadrilateral pen. Now, the shape that maximized the area was a square with sides of length (1−4*k*)/4 and an area of (1−4*k*)^{2}/16.

At this point, it was just a matter of finding the values of *k* for which the square’s area was greater than the triangle’s area. The areas were equal when *k* was (3−2·3^{1/4})/(12−6·3^{1/4}), or approximately **0.08964**. This was the answer — the greatest value of *k* for which four posts was a better choice than three posts.
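You can confirm that crossover numerically by bisecting on the difference between the two areas (a Python sketch; the helper functions are my own framing of the formulas above):

```python
import math

def tri_area(k):
    """Area of an equilateral triangle with perimeter 1 - 3k."""
    return (1 - 3 * k) ** 2 * math.sqrt(3) / 36

def sq_area(k):
    """Area of a square with perimeter 1 - 4k."""
    return (1 - 4 * k) ** 2 / 16

# The square wins for small k and the triangle wins for large k,
# so bisect to find the value of k where the two areas are equal.
lo, hi = 0.0, 0.2
for _ in range(60):
    mid = (lo + hi) / 2
    if sq_area(mid) > tri_area(mid):
        lo = mid
    else:
        hi = mid

k_star = (lo + hi) / 2
exact = (3 - 2 * 3 ** 0.25) / (12 - 6 * 3 ** 0.25)
print(round(k_star, 5))   # 0.08964
```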

For extra credit, you looked at pens with five posts, six posts, seven posts and so on. For which values of *k* did each number of posts provide the greatest area for your pen?

To solve this generalized version of the problem, a good first step was to derive a general formula for the area of a polygon with *N* sides and perimeter 1−*Nk* (since having *N* sides meant there would be *N* posts with a total weight of *Nk*). Like the triangle and the quadrilateral, the *N*-gon with the maximum area was a *regular N*-gon, with equal side lengths and angles.

First off, the central angle of a regular *N*-gon is 360/*N* (for those of you who prefer to work in degrees rather than radians). Then, using the fact that each of the *N* sides has a length of (1−*Nk*)/*N*, the area of the triangle formed by the central angle and its corresponding side was half the side length — (1−*Nk*)/(2*N*) — times the altitude of the triangle — (1−*Nk*)/(2*N*)·cot(180/*N*). Yes, that is the cotangent function.

But wait! That was the area for just one of the triangles. Since there were *N* of them, that meant the total area of the polygon was *N* times greater — *N*·(1−*Nk*)/(2*N*)·(1−*Nk*)/(2*N*)·cot(180/*N*), or (1−*Nk*)^{2}/(4*N*)·cot(180/*N*).

Armed with this general formula for the area of an *N*-gon, all that remained was plugging in different values of *N* and *k* and seeing, for each *k*, which *N* resulted in the greatest area. Solver Mike Seifert plotted the graphs for the first few values of *N*, finding that as *k* decreased, the optimal number of posts went up incrementally.
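That plugging-in can be written as a short search over *N* for any given *k* (a sketch in Python; the upper cutoff just keeps the perimeter 1−*Nk* positive, and cot(180/*N*) becomes 1/tan(π/*N*) in radians):

```python
import math

def area(N, k):
    """Area of a regular N-gon built from N posts and 1 - N*k meters of fabric."""
    return (1 - N * k) ** 2 / (4 * N) / math.tan(math.pi / N)

def best_N(k):
    """The number of posts that maximizes the pen's area for a given k."""
    candidates = range(3, int(1 / k))   # need 1 - N*k > 0
    return max(candidates, key=lambda N: area(N, k))

print(best_N(0.1))    # 3: the triangle wins for large k
print(best_N(0.05))   # 4: the square takes over
```

Scanning smaller and smaller values of *k* reproduces Mike Seifert's observation: the optimal number of posts climbs one at a time as *k* shrinks.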

So the extra credit was really asking for the points of intersection between the topmost curves from Mike’s graph. The formula for these points was quite messy, but if you’d like to see it nevertheless, check out the write-ups of Laurent Lessard, who found a condition on these values, and David Zimmerman, who attempted to find a closed-form solution.

While the rodent itself may have been of a usual size, I hope you took comfort in the fact that at least the pen could be of an unusual size — depending on your value of *k*.

Email Zach Wissner-Gross at riddlercolumn@gmail.com


From Josh Silverman comes a puzzle that’s out of this world:

When you started your doctorate several years ago, your astrophysics lab noticed some unusual signals coming in from deep space on a particular frequency — hydrogen times tau. After analyzing a trove of data measured at many regular intervals, you compute that you heard zero signals in 45 percent of the intervals, one signal in 38 percent of the intervals and two signals in the remaining 17 percent of the intervals.

Your research adviser suggests that it may just be random fluctuations from *two* sources. Each source had some fixed probability of emitting a signal that you picked up, and together those sources generated the pattern in your data.

What do you think? Was it possible for your data to have come from two random fluctuations, as your adviser suggests?

The solution to this Riddler Express can be found in the following week’s column.

From Scott Ogawa comes a riddle about rodents of usual size:

Quarantined in your apartment, you decide to entertain yourself by building a large pen for your pet hamster. To create the pen, you have several vertical posts, around which you will wrap a sheet of fabric. The sheet is 1 meter long — meaning the perimeter of your pen can be at most 1 meter — and weighs 1 kilogram, while each post weighs *k* kilograms.

Over the course of a typical day, your hamster gets bored and likes to change rooms in your apartment. That means you want your pen to be lightweight and easy to move between rooms. The total weight of the posts and the fabric you use should not exceed 1 kilogram.

For example, if *k* = 0.2, then you could make an equilateral triangle with a perimeter of 0.4 meters (since 0.4 meters of the sheet would weigh 0.4 kilograms), or you could make a square with a perimeter of 0.2 meters. However, you couldn’t make a pentagon, since the weight of five posts would already hit the maximum and leave no room for the sheet.

You want to figure out the best shape in order to enclose the largest area possible. What’s the greatest value of *k* for which you should use four posts rather than three?

*Extra credit:* For which values of *k* should you use five posts, six posts, seven posts, and so on?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 Alban 👏 of Jacksonville Beach, Florida, winner of last week’s Riddler Express.

Last week, you had a large pile of squares that each had a side length of 1 inch. One square was blue, while all the other squares were white. You wanted to arrange several white squares so they covered part of the blue square but didn’t overlap with each other. (The entire blue square did not have to be covered, while the blue area that each white square covered had to be nonzero.)

What was the greatest number of white squares you could have placed?

First, a quick acknowledgment that this was very similar to a problem posed by Martin Gardner some years ago. Special thanks to reader Brian Kell for pointing this out!

Nevertheless, this still proved to be a very challenging Express, with a lot of disagreement: Among the hundreds of submitted responses, 2 percent said the answer was four, 3 percent said the answer was five, 14 percent said the answer was six, 56 percent said the answer was seven, 7 percent said the answer was eight, and 4 percent said the answer was nine. (There was a smattering of other answers as well, including readers who creatively wanted to stack paper-thin squares in the third dimension.)

Nine was *not* the answer. Remember, all the squares were the same size. So if you placed a 3-by-3 grid of white squares over the blue one and then rotated the grid about its center, at most *five* of the white squares would have an overlapping area with the blue square.

The majority of readers thought the answer was seven — there must be something to that. Solver Michael Branicky’s five-year-old (!) daughter Lydia may have been the youngest to attempt this puzzle. She got seven white squares to overlap with the blue square by arranging them in a rotated honeycomb pattern:

At this point, we’ve seen that seven squares were possible, while nine squares were not possible. So what about eight squares?

Solver Daniel Thompson illustrated different examples, from one white square all the way up to eight:

It wasn’t as symmetric as the seven-square solution. But by tilting the various squares just right, it was possible to make room for that eighth white square in the middle.

While eight may not be a perfectly square number, this week it was a perfectly hip number.

Congratulations to 👏 Richard Guidry Jr. 👏 of Baton Rouge, Louisiana, winner of last week’s Riddler Classic.

Last week, the Riddler Manufacturing Company had an issue with their production of foot-long rulers. Each ruler was accidentally sliced at three random points along the ruler, resulting in four pieces. Looking on the bright side, that meant there were then four times as many rulers — they just happened to have different lengths.

On average, how long were the pieces that contained the 6-inch mark?

The problem as stated was slightly ambiguous. I have it on good authority that, for each ruler, the Riddler Manufacturing Company chose the three random points before doing any slicing. (If you assumed a slice was made after each point was selected, it was possible to get different and equally interesting results.)

Solver Quoc Tran declared that “math is for nerds,” before running 50,000 simulations of broken rulers like a total geek. He found that the length of the piece containing the 6-inch mark followed a rather curious distribution (shown below), with an average of approximately 5.621 inches.

Julian Gerez found a similar distribution, further noting that the unusual shape was due to the mashing together of two distinct cases, depending on whether the 6-inch mark was on one of the two middle pieces or on one of the two end pieces — but more on that in a moment!

So that was the computer geeks; now back to the math nerds. The most rigorous way to solve this is with calculus, integrating the product of each possible length multiplied by its relative probability — essentially calculating the mean value of the theoretical curve that Quoc Tran’s histogram approximates. Emma Knight fearlessly worked her way through those messy integrals, arriving at an answer of **5.625 inches**.

An alternative approach was to break the problem down into two cases, depending on which of the four pieces contained the 6-inch mark. If the 6-inch mark was on one of the two end pieces of the broken ruler, that meant the three random breaks were all between the 0- and 6-inch marks (which occurred with a one-in-eight probability) or all between the 6- and 12-inch marks (which also occurred with a one-in-eight probability). In other words, 25 percent of the time the three random breaks were all on one side of the 6-inch mark, while the other 75 percent of the time there was a single break on one side of the 6-inch mark and two breaks on the other side.

When all three breaks were on one side, the length of the piece with the 6-inch mark was 6 inches plus the average distance between the 6-inch mark and nearest of the three breaks. Regular solvers of Riddler Classics may know a thing or two about order statistics, particularly the result that when you choose *N* random values from a uniform distribution between 0 and 1, the expected value of the smallest number is 1/(*N*+1), the expected value of the second smallest is 2/(*N*+1), and so on up to the largest number, which has an expected value of *N*/(*N*+1). So when the three random breaks all occurred (uniformly) over a 6-inch range, the average distance between the 6-inch mark and the nearest break was one-fourth (since four is one more than three) of the total range, or 1.5 inches. Putting the two pieces together, that meant 25 percent of the time the average length was 6+1.5, or 7.5 inches.
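That order-statistics fact is easy to check empirically (a quick Python simulation of my own, not from any solver's write-up): the *k*-th smallest of *N* uniform draws on [0, 1] averages *k*/(*N*+1).

```python
import random

random.seed(2020)

# Among N = 3 draws from a uniform distribution on [0, 1], the k-th
# smallest should have expected value k / (N + 1) = k / 4.
N, trials = 3, 100_000
totals = [0.0] * N
for _ in range(trials):
    draws = sorted(random.random() for _ in range(N))
    for k, value in enumerate(draws):
        totals[k] += value

means = [t / trials for t in totals]
print([round(m, 2) for m in means])   # approximately [0.25, 0.5, 0.75]
```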

But what about the other 75 percent of the time? As we said, there were two random breaks on one side of the 6-inch mark and one break on the other side. The average distance between the 6-inch mark and the nearest of the two breaks was one-third (since three is one more than two) of the length, or 2 inches. And the average distance between the 6-inch mark and the single break on the other side was one-half (since two is one more than one) of the length, or 3 inches. Putting *these* two pieces together, that meant 75 percent of the time the average length was 2+3, or 5 inches.

No integrals required, and we almost have our answer! Using the linearity of expectation, as solver Steve Gabriel did, the average length was 0.25(7.5) + 0.75(5), or 5.625 inches — the same answer we got from calculus!
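A Monte Carlo version of Quoc Tran's approach lands on the same number (a Python sketch; the trial count and seed are arbitrary choices of mine):

```python
import random

random.seed(538)

# Simulate broken rulers: three uniform cuts on a 12-inch ruler, then
# record the length of the piece that contains the 6-inch mark.
trials = 200_000
total = 0.0
for _ in range(trials):
    cuts = sorted(random.uniform(0, 12) for _ in range(3))
    edges = [0.0] + cuts + [12.0]
    for left, right in zip(edges, edges[1:]):
        if left <= 6 <= right:
            total += right - left
            break

print(total / trials)   # close to 5.625
```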

But the fun didn’t stop there. Several readers took this puzzle to new heights, looking at how the average length of the piece with the 6-inch mark changed with the number of break points, as well as the average length of pieces containing *other* points on the ruler like the 1- or 2-inch mark. Laurent Lessard combined these into a single generalization, finding the expected length of the piece a fraction *a* along a ruler of length *L* broken at *N* random points. By computing the distribution for the number of breaks on either side of the selected point and applying order statistics, he found this length to be (2−*a*^{N+1}−(1−*a*)^{N+1})*L*/(*N*+1).

One more thing: As *N* got very large — meaning there were many random breaks in the ruler — the average length Laurent found approached 2*L*/(*N*+1) for *any* value of *a*. In other words, as soon as you pick a point on the ruler and ask for the average length of the piece containing that point, the answer will be *twice* the length of the average piece. Again, that’s for any point you pick!
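Laurent's expression is easy to sanity-check in code — a small sketch evaluating it at the original puzzle's values, and then watching the ratio to 2*L*/(*N*+1) approach 1 for several choices of *a*:

```python
def expected_length(a, N, L):
    """Laurent Lessard's formula: expected length of the piece containing
    the point a fraction a along a ruler of length L broken at N points."""
    return (2 - a ** (N + 1) - (1 - a) ** (N + 1)) * L / (N + 1)

# The original puzzle: the 6-inch mark (a = 1/2) on a 12-inch ruler, 3 breaks.
print(expected_length(0.5, 3, 12))   # 5.625

# For large N, the answer approaches 2L/(N+1) — twice the average piece
# length — no matter where the chosen point sits.
N = 1000
for a in (0.1, 0.25, 0.5):
    print(round(expected_length(a, N, 12) / (2 * 12 / (N + 1)), 4))
```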

It’s a bizarre paradox, to be sure. My sense is this happens because you are looking for the next break in each of the *two* directions along the ruler. If anyone happens to make further headway on this, be sure to let Laurent (and me!) know.

Email Zach Wissner-Gross at riddlercolumn@gmail.com

It’s time for a very appropriate episode of FiveThirtyEight Debate Club!

Since we’ll be live blogging the Democratic and Republican National Conventions this week and next, we thought it would be a good idea to ask … why? In this episode, we argue about whether parties should even have political conventions. Before you watch the video, take a guess as to which side our politics editor, Sarah Frostenson, is on.

Whose side are you on? Be sure to weigh in on YouTube or Twitter. And while you’re there, let us know what you’d like us to debate in the next episode.


From Dean Ballard comes a sneaky sorting of squares:

You have a large pile of squares that each have a side length of 1 inch. One square is blue, while all the other squares are white. You want to arrange several white squares so they cover part of the blue square but don’t overlap with each other.

For example, here’s how you could arrange four white squares so they each cover part of the blue square.

What is the greatest number of white squares you can place so that each covers part of the blue square without overlapping one another? (The entire blue square does not have to be covered, while the blue area that each white square covers must be nonzero.)

The solution to this Riddler Express can be found in the following week’s column.

From Angela Zhou comes one riddle to rule them all:

The Riddler Manufacturing Company makes all sorts of mathematical tools: compasses, protractors, slide rules — you name it!

Recently, there was an issue with the production of foot-long rulers. It seems that each ruler was accidentally sliced at three random points along the ruler, resulting in four pieces. Looking on the bright side, that means there are now four times as many rulers — they just happen to have different lengths.

On average, how long are the pieces that contain the 6-inch mark?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 Richard Dickerman 👏 of Dallas, Texas, winner of last week’s Riddler Express.

Last week, I was hiking on Euclid Island, which was perfectly rectangular and measured 3 miles long by 2 miles wide. I was especially interested in locating the point on the shore that was nearest to my position.

From where I had been standing, there were in fact *two* distinct points on the shore that were *both* the closest such points. It turned out the trail I was hiking along connected all such locations on the island — those with multiple nearest points on the shore.

What was the total length of this trail on Euclid Island?

The dead center of the island was a logical place for many solvers to start. It was just 1 mile along the width — in either direction. But before we looked for other such points, what did it mean, mathematically, for there to be multiple nearest points on the shore?

One way to think about it was to grow a circle (kind of like blowing up a balloon) centered at your current position. As the circle got larger and larger, it eventually touched (i.e., was tangent to) the shore. The moment the circle touched the shore, you had to look at *how many places* it touched the shore. If there was only one such point of tangency, then you weren’t on the trail. But if there were two or more such points, then you were indeed on the trail.

So what did Euclid Island’s trail actually look like? Rebekah Murphy noticed the trail included a segment of length 1 that ran east-west down the middle of the island. Every point on this segment was equidistant from the north and south shores. (Interestingly, the endpoints of this segment were equidistant from *three* distinct points along the shore.)

But the fun didn’t end there. Ishaan Bhatia noted that the trail also had four additional segments, which ran between the endpoints of the middle segment and the four corners of the island. (While the corners themselves were not part of the trail — after all, they were *on* the beach — the trail came infinitesimally close, which meant the exclusion of these points didn’t affect your length calculations.)

The animation below puts all five segments together, revealing the entire green hiking trail on Euclid Island. The circle that grows and shrinks demonstrates the points of tangency (i.e., the nearest spots on the beach) for each location on the trail.

At this point, we’re ready to answer the original question, which asked you to determine the total length of the trail. As we already said, the middle segment had a length of 1. Using the Pythagorean theorem (or, rather, the Babylonian Formula), the remaining four segments each had a length of √2. That meant the total length of the trail was **1+4√2 miles**, or about 6.657 miles.
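The trail is the rectangle's medial axis, so its length follows directly from the island's dimensions. Here is a small check (the general length-by-width formula is my own framing, assuming length ≥ width):

```python
import math

def trail_length(length, width):
    """Length of a rectangle's medial axis: a central segment of length
    (length - width), plus four diagonal segments running from the central
    segment's endpoints to the corners, each of length width / sqrt(2)."""
    return (length - width) + 4 * (width / math.sqrt(2))

print(round(trail_length(3, 2), 3))   # 6.657
```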

For extra credit, you looked at Al-Battani Island, which was elliptical rather than rectangular. Al-Battani Island’s major axis was 3 miles long, while its minor axis was 2 miles long. Like Euclid Island, Al-Battani Island had a hiking trail that connected all locations with multiple nearest points on the shore. What was the total length of this trail on Al-Battani Island?

As with Euclid Island, a good place to start was the dead center, which was 1 mile from the two endpoints of the minor axis. And once again, there was a central horizontal segment that ran along the major axis. The challenge here was figuring out just where the trail ended.

It didn’t cover the *entire* major axis, because when it got too close to the endpoints, there was only a single point of tangency. In other words, you had to find when the circle — centered on an ellipse’s major axis and internally tangent to the ellipse — transitioned between one and two points of tangency, as illustrated below:

And that’s where the math got hairy. If you assumed the ellipse was centered at (0, 0), then it was described by the equation *x*^{2}/1.5^{2} + *y*^{2} = 1. Meanwhile, the equation for a circle with radius *R* centered at *h* on the *x*-axis was (*x*−*h*)^{2} + *y*^{2} = *R*^{2}. From there, you had to find coordinates (*x*, *y*) that satisfied both equations, but you also needed to use calculus to ensure the circle and the ellipse had matching slopes at these points.

In the end, as long as the center of the circle was between (−5/6, 0) and (5/6, 0), the circle was tangent to the ellipse in two locations. That meant the distance between these two points — about **1.667 miles** — was the length of the trail and the answer to the extra credit.
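One way to recover those endpoints is to minimize the squared distance from a point (*h*, 0) to the ellipse directly. The derivation in the comments below is mine, not the column's, but it reproduces the ±5/6 cutoff:

```python
# For a point (h, 0) inside the ellipse x^2/a^2 + y^2 = 1 (here a = 1.5),
# minimize the squared distance f(x) = (x - h)^2 + 1 - x^2/a^2 over x.
# Setting f'(x) = 0 gives x = h * a^2 / (a^2 - 1). Two symmetric nearest
# points (at ±y) exist only while that x stays inside the ellipse,
# i.e. while |h| < (a^2 - 1) / a.
a = 1.5
h_max = (a ** 2 - 1) / a

print(h_max)       # 5/6, or about 0.833
print(2 * h_max)   # about 1.667 miles, the trail's length
```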

Quite the challenging hike, if you ask me. And if you’d like to explore this problem further, check out an interactive version of this graph on Desmos, courtesy of solver Hypergeometricx.

Congratulations to 👏 Bill Neagle 👏 of Springfield, Missouri, winner of last week’s Riddler Classic.

Last week, inspired by Kareem Carr, you looked at an alternative definition for addition and followed where it led. To compute the sum of *x* and *y*, you combined groups of *x* and *y* nematodes and left them for 24 hours. When you came back, you counted up how many you had — and that was the sum!

It turned out that, over the course of 24 hours, the nematodes paired up, and each pair had one offspring 50 percent of the time. (If you had an odd number of nematodes, they still paired up, but one was left out.) So if you wanted to compute 1+1, half the time you’d get 2, and half the time you’d get 3. If you computed 2+2, 25 percent of the time you’d get 4, 50 percent of the time you’d get 5, and 25 percent of the time you’d get 6.

We also redefined exponentiation: Raising a sum to a power meant leaving that sum of nematodes for the number of days specified by the exponent.

With this number system, what was the expected value of (1+1)^{4}?

A good strategy here was to work your way up, one power at a time. First, what was the expected value of (1+1)^{1}? You initially had two nematodes that paired up, and after 24 hours there was a 50 percent chance they’d have one offspring. So the expected value of (1+1)^{1} was 2.5.

Next, what was the expected value of (1+1)^{2}? In other words, what was the expected number of worms after 48 hours? As we said, after 24 hours, there was a 50 percent chance there were two worms and a 50 percent chance there were three worms. After *another* 24 hours, the two-worm case had an expected value of 2.5. Meanwhile, the three-worm case resulted in two worms that paired up along with a third wheel. The pair of worms resulted in an expected value of 2.5, and adding the third worm gave an expected value of 3.5. Half the time there would be 2.5 worms, and the other half the time there would be 3.5 worms, which meant the expected value was an even 3.

Let’s do one more: What was the expected value of (1+1)^{3}? Digging through the previous paragraph, we just saw that after 48 hours there was a 25 percent chance of having two worms, a 50 percent chance of having three worms and a 25 percent chance of having four worms. In the two-worm case, the expected number of worms after 72 hours was again 2.5. In the three-worm case, the expected number of worms after 72 hours was 3.5. The four-worm case was more complicated — 25 percent of the time there would still be four worms after 72 hours, 50 percent of the time there would be five worms, and 25 percent of the time there would be six worms. Putting all these cases (and sub-cases) together, the expected value of (1+1)^{3} was 0.25·2.5 + 0.5·3.5 + 0.25·(0.25·4 + 0.5·5 + 0.25·6), or 3.625.

I’ll spare you all the casework for (1+1)^{4}. After 96 hours, there could be anywhere from two to nine worms. The expected value was **4.40625** worms. Andrew Heairet neatly visualized the breakdown of probabilities over the course of 96 hours:
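All of that casework can be delegated to a short exact computation that tracks the full probability distribution day by day (a Python sketch of my own; exact rational arithmetic keeps the answer precise):

```python
from fractions import Fraction
from math import comb

def step(dist):
    """One 24-hour cycle: each pair of worms has one offspring with
    probability 1/2; with an odd count, one worm sits the day out."""
    new = {}
    for n, p in dist.items():
        pairs = n // 2
        for k in range(pairs + 1):   # k = number of new offspring
            q = p * Fraction(comb(pairs, k), 2 ** pairs)
            new[n + k] = new.get(n + k, Fraction(0)) + q
    return new

# Start with 1 + 1 = two worms, then apply four days of "exponentiation."
dist = {2: Fraction(1)}
for _ in range(4):
    dist = step(dist)

expected = sum(n * p for n, p in dist.items())
print(float(expected))   # 4.40625
```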

Mark Girard went beyond 96 hours and found the expected number of nematodes more than two weeks later. This value appeared to increase exponentially over time, which is the perfect segue into the extra credit, which asked you how the expected value of (1+1)^{N} behaved as *N* got larger and larger. As many solvers correctly observed, the expression goes off to infinity. But the real question was *how*.

The challenge here related to the “odd worms out,” which sat out reproductive cycles. Were it not for them, the math would have been more straightforward. For each pair of worms, their number either stayed the same or increased by 50 percent over each 24-hour period. Averaging those two possibilities meant the expected number increased by 25 percent every 24 hours. But again, this reasoning didn’t apply, thanks to the occasional one worm who didn’t pair up.

To figure this out, some solvers fit exponential models directly to the data. For example, Hypergeometricx found the growth was intriguingly close to 1.56·(1.23456789)^{N}. Meanwhile, Josh Silverman showed that 2·(1.25)^{N−1} was in fact a pretty decent approximation.

But Rajeev Pakalapati took the cake, showing — analytically, mind you — that 2·(1.25)^{N−1} + 0.5 was the limiting behavior of nematode exponentiation. His work is shown below, and you can also follow along via Steve Gabriel’s helpful transcription.
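Extending the same distribution bookkeeping a few more days shows how closely Rajeev's expression tracks the exact expectation (a self-contained sketch reusing the day-by-day update; for these small *N* the two appear to agree to floating-point precision):

```python
from fractions import Fraction
from math import comb

def step(dist):
    """One day of nematode reproduction: each pair has one offspring
    with probability 1/2; with an odd count, one worm sits out."""
    new = {}
    for n, p in dist.items():
        pairs = n // 2
        for k in range(pairs + 1):
            q = p * Fraction(comb(pairs, k), 2 ** pairs)
            new[n + k] = new.get(n + k, Fraction(0)) + q
    return new

# Compare the exact expected count after N days against 2*(1.25)**(N-1) + 0.5.
dist = {2: Fraction(1)}
results = []
for N in range(1, 11):
    dist = step(dist)
    exact = float(sum(n * p for n, p in dist.items()))
    results.append((N, exact, 2 * 1.25 ** (N - 1) + 0.5))

for N, exact, closed in results:
    print(N, exact, closed)
```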

Indeed, this was a fun and challenging foray into an alternate set of operations. Only one submitter felt threatened enough to compare this riddle to an Orwellian dystopia. Needless to say, they didn’t get very far with the math.

Email Zach Wissner-Gross at riddlercolumn@gmail.com


The Riddler Isles are a chain of small islands on the Constant Sea. One of them, Euclid Island, is perfectly rectangular and measures 3 miles long by 2 miles wide. While walking across the island on a recent vacation, I was often interested in locating the point on the shore that was nearest to my current position.

One morning, I realized that from where I was standing there were *two* distinct points on the shore that were *both* the closest such points. I was excited by my discovery, only to realize it had been made years earlier. It turned out I was on a hiking trail that connected all such locations on the island — those with multiple nearest points on the shore.

What is the total length of this trail on Euclid Island?

*Extra credit:* Al-Battani Island is another of the Riddler Isles, but it’s elliptical rather than rectangular. Al-Battani Island’s major axis is 3 miles long, while its minor axis is 2 miles long. Like Euclid Island, Al-Battani Island has a hiking trail that connects all locations with multiple nearest points on the shore. What is the total length of this trail on Al-Battani Island?

The solution to this Riddler Express can be found in the following week’s column.

This week’s Riddler Classic is inspired by Kareem Carr:

We usually think of addition as an operation applied to a field like the rational numbers or the real numbers. And there is good reason for that — as Kareem says, “Mathematicians have done all the hard work of figuring out how to make calculations track with reality. They kept modifying and refining the number system until everything worked out. It took centuries of brilliant minds to do this!”

Now suppose we defined addition another (admittedly less useful) way, using a classic model organism: the nematode. To compute the sum of *x* and *y*, you combine groups of *x* and *y* nematodes and leave them for 24 hours. When you come back, you count up how many you have — and that’s the sum!

It turns out that, over the course of 24 hours, the nematodes pair up, and each pair has one offspring 50 percent of the time. (If you have an odd number of nematodes, they will still pair up, but one will be left out.) So if you want to compute 1+1, half the time you’ll get 2 and half the time you’ll get 3. If you compute 2+2, 25 percent of the time you’ll get 4, 50 percent of the time you’ll get 5, and 25 percent of the time you’ll get 6.

While we’re at it, let’s define exponentiation for sums of nematodes. Raising a sum to a power means leaving that sum of nematodes for the number of days specified by the exponent.

With this number system, what is the expected value of (1+1)^{4}?

*Extra credit:* As *N* gets larger and larger, what does the expected value of (1+1)^{N} approach?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 David Daly 👏 of Glendale, Arizona, winner of last week’s Riddler Express.

Last week, you made it to the final round of the Riddler Rock, Paper, Scissors tournament.

The rules were simple: Rock beat scissors, scissors beat paper, and paper beat rock. Moreover, the game was “sudden death,” so the first person to win a single round was immediately declared the grand champion. (If both players chose the same object, then you simply played another round.)

Fortunately, your opponent was someone you had studied well. Based on the motion of their arm, you could tell whether they would (1) play rock or paper with equal probability, (2) play paper or scissors with equal probability or (3) play rock or scissors with equal probability. (Every round fell into one of these three categories.)

If you strategized correctly, what were your chances of winning the tournament?

Well, if you strategized like solver Carenne Ludeña, you won the tournament a whopping **100 percent** of the time. When your opponent played rock or paper with equal probability, you knew with certainty they wouldn’t play scissors, and that meant if you played paper you couldn’t lose. You also had a 50 percent chance of winning the round, i.e., whenever your opponent played rock.

Each of the three scenarios listed in the problem had a corresponding response. As we just said, when your opponent played rock or paper, you should play paper. When your opponent played paper or scissors, you should play scissors. And when your opponent played rock or scissors, you should play rock. With this strategy, you would win half the time, draw half the time, and lose … never.

At this point, many solvers recognized this meant you were guaranteed to win the tournament eventually, as the probability of drawing infinitely many games was vanishingly small. Solver Stephen Paisley formally demonstrated this with the geometric series 1/2 + 1/4 + 1/8 + 1/16 + … (your chances of having won in each successive round), which indeed sums to 1, or 100 percent.

Finally, as some readers observed, the original puzzle was slightly ambiguous as written. When you were told that your opponent would “play rock or paper with equal probability,” most solvers assumed that meant both probabilities were 50 percent, rather than being equal but less than 50 percent. While that was the intent of the puzzle, it wasn’t explicitly stated.

If these equal probabilities were close to 50 percent, then your strategy wouldn’t change (although your chances of winning the tournament would go down). For example, if you knew there was a 40 percent chance your opponent would play rock, a 40 percent chance they’d play paper and a 20 percent chance they’d play scissors, then your best bet would still be to play paper. But instead of always winning the tournament, your chances of victory would be 0.4 + 0.4^{2} + 0.4^{3} + …, or 2/3.

But once the equal probabilities dipped below one-third, you needed to adjust your strategy. Suppose you knew there was a 10 percent chance your opponent would play rock, a 10 percent chance they’d play paper and an 80 percent chance they’d play scissors. Now your best bet was to play rock, and your chances were 0.8 + 0.8(0.1) + 0.8(0.1)^{2} + 0.8(0.1)^{3} + …, or 8/9.
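
All three scenarios follow the same pattern: if you win a round with probability w and draw with probability d, your chance of eventually winning the sudden-death tournament is w + wd + wd² + …, or w/(1−d). A quick sketch of that calculation, using the probabilities from the examples above:

```python
def tournament_win_chance(p_win, p_draw):
    """Chance of eventually winning sudden-death rock, paper, scissors,
    given per-round probabilities of winning and drawing a single round.
    Sums the geometric series p_win * (1 + p_draw + p_draw**2 + ...)."""
    return p_win / (1 - p_draw)

# Opponent plays rock/paper 50-50 and you play paper: win 1/2, draw 1/2.
print(tournament_win_chance(0.5, 0.5))  # 1.0 -- you always win eventually

# 40% rock, 40% paper, 20% scissors; you play paper: win 0.4, draw 0.4.
print(tournament_win_chance(0.4, 0.4))  # 0.666..., or 2/3

# 10% rock, 10% paper, 80% scissors; you play rock: win 0.8, draw 0.1.
print(tournament_win_chance(0.8, 0.1))  # 0.888..., or 8/9
```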

Any which way, it seems like you’re pretty good at rock, paper, scissors. My winning move against you would be not to play.

Congratulations to 👏 Rick 👏 of Cloverdale, California, winner of last week’s Riddler Classic.

Last week, the inaugural graduating class of Riddler High School, more than 100 students strong, received their diplomas. They were lined up along the circumference of the giant circular mosaic on the plaza in front of the school, while their principal, Dr. Olivia Rhodes, stood atop a tall stepladder in the middle.

Dr. Rhodes observed that every one of the *N* graduates — all except Val, the valedictorian — was standing in the wrong place. Moreover, Dr. Rhodes said that no two graduates were in the correct position *relative to each other*. (Two graduates would be in the correct position relative to each other if, for example, graduate A was supposed to be 17 positions counterclockwise of graduate B and was indeed 17 positions counterclockwise of graduate B — even if both graduates were in the wrong positions.)

But Val corrected Dr. Rhodes. Given the fact that there were *N* students, there must have been at least two who were in the correct position relative to each other.

Given that there were more than 100 graduates, what was the *minimum* number of graduates who were posing for the class photo?

That was a lot of information to take in. To summarize, there were *N* students arranged in a circle. Exactly one of them was in the correct position, while *N*−1 were not. Furthermore, this value of *N* *guaranteed* that at least two students were in the correct position relative to each other.

The puzzle’s submitter, Dave Moran, solved this by looking at how far apart each student was from their correct position (say, in the clockwise direction) as a function of their correct position, *x*. Let’s call this function *E*(*x*). So because Val was correctly in the first position, *E*(1) = 0, meaning she was zero spots away from her correct position. But *E*(2), *E*(3), and so on, all the way to *E*(*N*), were nonzero, since all the other students were in the wrong position.

But what if *no students* had been in the correct position *relative to each other*? Well, imagine if there had been two students with correct positions *a* and *b*, such that *E*(*a*) = *E*(*b*). That would have meant they were shifted the same number of spots clockwise from their correct positions, and so they would have been in the correct position relative to each other! To *avoid* this situation, no two students could have had the same value of *E*(*x*).

Now recall that there were *N* students. That meant *E*(*x*) could take on one of *N* possible values, from 0 to *N*−1. And because no two students could have the same value of *E*(*x*), every value from 0 to *N*−1 had to be taken by one and only one student.

We’re getting there — but first, a slight detour. Solver Eric Dallal looked at the sum of all the errors, *E*(1) + *E*(2) + *E*(3) + … + *E*(*N*). If you started with a correct arrangement of the entire class, this sum would be zero. You could also swap different pairs of students over and over again to get any desired overall arrangement of the class. But every time you swapped two students, the sum either didn’t change, or it increased or decreased by *N*. And that meant this sum had to be a multiple of *N*.

Okay, we’re in the home stretch. For there to have been *no students* who were in the correct position relative to each other, we needed the sum of the errors — that is, the sum of the numbers from 0 to *N*−1, or *N*(*N*−1)/2 — to be a multiple of *N*. This was true whenever *N* was odd.

To summarize, it was only possible to arrange students so that no two were in the correct position relative to each other when *N* was odd.

Want an example of such an arrangement? Well, you’re in luck, thanks to solver Inga. Consider a class with 101 students — an odd number. Val was in the correct position, but suppose everyone else was reflected across the line connecting Val and Dr. Rhodes. If the salutatorian was supposed to be directly counterclockwise from Val, they now found themselves in position 101, meaning *E*(2) was 101−2, or 99. Working your way around the circle, you’d see that each student indeed had a unique value of *E*(*x*).
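
If you’d like to verify Inga’s construction, here’s a sketch that checks the reflected arrangement for N = 101. (Numbering the positions 1 through N counterclockwise, with Val fixed at position 1, is an assumption about the setup.)

```python
N = 101  # an odd class size

# Reflect everyone but Val across the Val-Rhodes line: the student whose
# correct position is x ends up standing at position N + 2 - x.
actual = {1: 1}  # Val stays put
for x in range(2, N + 1):
    actual[x] = N + 2 - x

# Error function: how many spots clockwise each student is from home.
E = {x: (actual[x] - x) % N for x in range(1, N + 1)}

assert E[1] == 0                               # only Val is in the right place
assert all(E[x] != 0 for x in range(2, N + 1))
assert len(set(E.values())) == N               # all N errors are distinct, so no
                                               # two students are in the correct
                                               # position relative to each other
```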

At last, we return to the original question that was posed. What was the smallest class size such that at least two students *had* to have been in the correct relative position? That would be the smallest even number greater than 100, and so the answer was **102**.

In addition to this week’s winner, Val from the inaugural graduating class really knew her number theory. The other seniors, including Zach, need to step up their game.

Email Zach Wissner-Gross at riddlercolumn@gmail.com.


Congratulations, you’ve made it to the final round of the Riddler Rock, Paper, Scissors.

The rules are simple: Rock beats scissors, scissors beat paper, and paper beats rock. Moreover, the game is “sudden death,” so the first person to win a single round is immediately declared the grand champion. If there’s a tie, meaning *both* players choose the same object, then you simply play another round.

Fortunately, your opponent is someone you’ve studied well. Based on the motion of their arm, you can tell whether they will (1) play rock or paper with equal probability, (2) play paper or scissors with equal probability or (3) play rock or scissors with equal probability. (Every round falls into one of these three categories.)

If you strategize correctly, what are your chances of winning the tournament?

The solution to this Riddler Express can be found in the following week’s column.

From Dave Moran comes a perplexing puzzle of pomp and circumstance:

The inaugural graduating class of Riddler High School, more than 100 students strong, is lined up along the circumference of the giant circular mosaic on the plaza in front of the school. The principal, Dr. Olivia Rhodes, stands atop a tall stepladder in the center of the circle, preparing to take the panoramic class photo.

Suddenly, she shouts, “Hey, class, every single one of you *N* graduates — except Val, our valedictorian — is standing in the wrong place! Remember, you’re supposed to stand in order of your class rank, starting with Val directly in front of me, and going counterclockwise all the way to Zach, who should then be next to Val.” (Poor Zach was ranked last in the graduating class at Riddler High.)

Dr. Rhodes continues, “Not only are almost all of you in the wrong positions, but no two of you are even in the correct position *relative to each other*.” Two graduates would be in the correct position relative to each other if, for example, graduate A was supposed to be 17 positions counterclockwise of graduate B and was indeed 17 positions counterclockwise of graduate B — even though both graduates were in the wrong positions.

But Val speaks up: “Dr. Rhodes, that can’t be right. There must be at least two of us who are in the correct position relative to each other.”

Dr. Rhodes looks carefully around the circle of graduates below her and admits, “Of course, our brilliant Val is quite right. I do now see that there are two of you who are correctly positioned relative to each other.”

Given that there are more than 100 graduates, what is the *minimum* number of graduates who are posing for the class photo?

The solution to this Riddler Classic can be found in the following week’s column.

Congratulations to 👏 Joe Maloney 👏 of Atlanta, Georgia, winner of last week’s Riddler Express.

Last week, Riddler Township was having its quadrennial presidential election. Each of the town’s 10 “shires” was allotted a certain number of electoral votes: two, plus one additional vote for every 10 citizens (rounded to the nearest 10).

The names and populations of the 10 shires are summarized in the table below.

Shire | Population | Electoral votes |
---|---|---|
Oneshire | 11 | 3 |
Twoshire | 21 | 4 |
Threeshire | 31 | 5 |
Fourshire | 41 | 6 |
Fiveshire | 51 | 7 |
Sixshire | 61 | 8 |
Sevenshire | 71 | 9 |
Eightshire | 81 | 10 |
Nineshire | 91 | 11 |
Tenshire | 101 | 12 |

Under this sort of electoral system, it was quite possible for a presidential candidate to lose the popular vote and still win the election.

With two candidates running for president of Riddler Township, and every citizen voting for one or the other, what is the *lowest* percentage of the popular vote that a candidate could get while still winning the election?

To win with as few votes as possible, a candidate needed a majority of the 75 electoral votes, meaning they needed at least 38 votes. They also needed just over half the popular vote in any shire they won, while losing the entire popular vote in the shires they lost.

Meanwhile, the less populous shires also offered greater leverage. For example, Oneshire, with its three electoral votes and a population of 11 people (i.e., a majority consisting of six people), offered 0.5 — three divided by six — electoral votes per supporter. Twoshire, with four electoral votes and a voting majority of 11 people, offered about 0.36 electoral votes per supporter. Meanwhile, at the other end of the spectrum, Tenshire offered less than 0.24 electoral votes per supporter.

That meant our unpopular winner wanted to win the less populous shires, accruing electoral votes without winning too many popular votes.

The fewest votes a candidate needed to win turned out to be 136 out of Riddler Township’s 560 total citizens, and so the smallest possible winning percentage of the popular vote was approximately **24.3 percent**.

There were several different ways to generate this electoral nightmare. For example, the winner might get six votes in Oneshire, 16 votes in Threeshire, 21 votes in Fourshire, 26 votes in Fiveshire, 31 votes in Sixshire and 36 votes in Sevenshire. In all, that is indeed 38 electoral votes against just 136 popular votes. This particular scenario was illustrated by Andrew Heairet:

That map of Riddler Township looks vaguely familiar, but I can’t quite place it.

But that wasn’t the only solution. Many solvers, like Nora Corrigan from Columbus, Mississippi, and Caspian from Stockholm, Sweden, found multiple ways to generate exactly 38 electoral votes and 136 popular votes. Here is the complete list of such scenarios:

- Slim victories in Oneshire, Threeshire, Fourshire, Fiveshire, Sixshire and Sevenshire
- Slim victories in Oneshire, Twoshire, Fourshire, Fiveshire, Sixshire and Eightshire
- Slim victories in Oneshire, Twoshire, Threeshire, Fiveshire, Sixshire and Nineshire
- Slim victories in Oneshire, Twoshire, Threeshire, Fiveshire, Sevenshire and Eightshire
- Slim victories in Oneshire, Twoshire, Threeshire, Fourshire, Sixshire and Tenshire
- Slim victories in Oneshire, Twoshire, Threeshire, Fourshire, Sevenshire and Nineshire
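
With only 2^10 = 1,024 possible combinations of shires to win, a brute-force search confirms both the minimum of 136 popular votes and the six ways of achieving it. A sketch:

```python
from itertools import combinations

populations = [11, 21, 31, 41, 51, 61, 71, 81, 91, 101]
electoral = [p // 10 + 2 for p in populations]  # 3, 4, ..., 12 votes
majorities = [p // 2 + 1 for p in populations]  # slimmest winning margin: 6, 11, ..., 51

TO_WIN = sum(electoral) // 2 + 1  # 38 of the 75 electoral votes

best, winners = None, []
for r in range(len(populations) + 1):
    for combo in combinations(range(len(populations)), r):
        ev = sum(electoral[i] for i in combo)
        pop = sum(majorities[i] for i in combo)  # slim wins, shutout losses
        if ev >= TO_WIN:
            if best is None or pop < best:
                best, winners = pop, [combo]
            elif pop == best:
                winners.append(combo)

print(best, len(winners))       # 136 popular votes, achievable in 6 distinct ways
print(best / sum(populations))  # roughly 0.243 of the 560 citizens
```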

As it turned out, there were many ways for the electoral and popular votes to wildly disagree. I guess that’s the electoral college for you. Oh, and if you’d like your vote to *really* count in Riddler Township, it’s a good idea to live in Oneshire.

Congratulations to 👏 Steven Trautmann 👏 of Aurora, Colorado, winner of last week’s Riddler Classic.

Last week, you played a game of Riddler Pinball, which had an infinitely long wall and a circle whose radius was 1 inch and whose center was 2 inches from the wall. The wall and the circle were both fixed and never moved. A single pinball started 2 inches from the wall and 2 inches from the center of the circle.

To play, you flicked the pinball toward a spot of your choosing along the wall, specified by its distance *x* from the point on the wall that’s closest to the circle, as shown in the diagram below.

The goal of the game was simple: Get the ball to bounce as many times as possible.

If you aimed too far to the right (i.e., your value of *x* was too small), the pinball quickly bounced its way through the gap between the circle and the wall. But if you aimed too far to the left (i.e., your value of *x* was too big), the pinball quickly came back out the same side it went in.

Riddler Pinball was an unforgiving game — the slightest error tanked your chances of victory. But if you strategized *just* right, it was possible to do quite well.

What was the greatest number of bounces you could achieve? And, more importantly, what value of *x* got you the most bounces?

There was certainly a sweet spot that resulted in many, many bounces. For example, when *x* was 0.82248632494339, there were 43 bounces:

But before we go any further, let’s return to the first question that was posed: What was the greatest possible number of bounces?

As solver Phillip Bradbury observed, all the angles and points of impact changed monotonically with *x*. As noted above, smaller values of *x* made the pinball pass through, but larger values of *x* made the pinball bounce back out.

So then what happened *between* these two cases? Zooming in revealed a single point on the number line where the pinball neither passed through nor bounced back. But if it did *neither* of these things, what else could the ball possibly do? It would be stuck bouncing back and forth **infinitely many times**. And as *x* approached this point — whatever it was — the number of bounces went up, up, up. This was nicely illustrated by solvers Mark Girard and Dinesh Vatvani.

There were a few other ways to see how this was true. Jason Bellenger reasoned backwards, thinking about a ball that was bouncing straight up and down between the tip of the circle and the wall. If you perturbed it infinitesimally, but just right, it would ultimately bounce its way to the correct starting point. Playing these bounces in reverse would then give you an infinitely high-scoring pinball shot.

Laurent Lessard, meanwhile, was able to better visualize the multitude of bounces by plotting the logarithm of the *x*-coordinate, as shown below. Graphically, this had the effect of spreading out the bounces, leading him to correctly suspect the answer was infinity.

As it turned out, this reasoning was the easier part of the problem. *Finding* this precise value of *x* that got you infinitely many bounces was another matter entirely.

At this point, most solvers took a computational approach. Simulating the pinball as it bounced off the wall wasn’t too tricky — the ball simply reflected off the wall so that the angle it made with respect to the wall didn’t change. But simulating bounces off the circle was a little tougher. Again, the angles of incidence and reflection were equal, but now they were measured from the radius of the circle. A slight mess of trigonometry ensued.

Once you had a functioning pinball simulator up and running, then it was just a matter of trying different values of *x*, adding one digit at a time, and seeing which values gave you more bounces. Indeed, the answer was very close to **0.82248632494339** — the true value actually goes on for infinitely many decimal places.
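
A bare-bones simulator along these lines might look like the sketch below. The coordinate setup is an assumption: the wall runs along x = 0, the circle of radius 1 is centered at (2, 0), and the ball is launched from (2, 2) toward the wall point (0, x).

```python
import math

def simulate(x, max_bounces=10_000):
    """Count bounces in Riddler Pinball: wall along x = 0, unit circle
    centered at (2, 0), ball launched from (2, 2) toward (0, x)."""
    cx, cy = 2.0, 0.0          # circle center (radius 1)
    px, py = 2.0, 2.0          # ball position
    dx, dy = -px, x - py       # initial direction (need not be unit length)
    bounces, eps = 0, 1e-12
    while bounces < max_bounces:
        hits = []
        if dx < -eps and px > eps:           # wall: solve px + t*dx = 0
            hits.append((-px / dx, "wall"))
        ox, oy = px - cx, py - cy            # circle: |p + t*d - c| = 1
        d2 = dx * dx + dy * dy
        b = ox * dx + oy * dy
        disc = b * b - d2 * (ox * ox + oy * oy - 1.0)
        if disc > 0:
            t = (-b - math.sqrt(disc)) / d2  # first crossing of the circle
            if t > eps:
                hits.append((t, "circle"))
        if not hits:
            return bounces                   # ball escapes for good
        t, kind = min(hits)
        px, py = px + t * dx, py + t * dy
        if kind == "wall":
            dx = -dx                         # mirror reflection off the wall
        else:
            nx, ny = px - cx, py - cy        # unit normal at the impact point
            dot = dx * nx + dy * ny
            dx, dy = dx - 2 * dot * nx, dy - 2 * dot * ny
        bounces += 1
    return bounces

print(simulate(0.75))              # 4 bounces -- aimed too far right
print(simulate(0.9))               # 4 bounces -- aimed too far left
print(simulate(0.82248632494339))  # dozens of bounces near the sweet spot
```

The two example games from the original puzzle (x = 0.75 and x = 0.9) make a handy sanity check, since both should yield exactly four bounces.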

Some solvers, like Dean Ballard and Joseph Wetherell, went to great lengths to crank out more of the solution’s decimal places. In fact, the answer is very, very close to:

0.82248632494339006569162637706984990078134582582348831326968696909788932771367

According to Joseph, this precise value of *x* resulted in more than 200 bounces!

Meanwhile, solver Emma Knight boldly attempted a more analytic approach. However, ballooning floating point errors ultimately got in the way.

Finally, a few very clever readers noticed something glaring about this problem: There was a second solution! While the original statement of the riddle asked where *along the wall* the pinball should be aimed, it was also possible to hit the circle first and *still* achieve infinitely many bounces. Josh Silverman went as far as animating this alternate strategy:

When I previously teased Riddler Pinball, I asked what interesting mathematical questions could be posed about it. There were multiple calls to alter the geometry, such as with a noncircular bumper, or to scale it up to three dimensions. With all these great suggestions, I don’t think it will be long before we all play another round of Riddler Pinball.

Email Zach Wissner-Gross at riddlercolumn@gmail.com.


From Peter Mowrey comes an elegant electoral enigma:

Riddler Township is having its quadrennial presidential election. Each of the town’s 10 “shires” is allotted a certain number of electoral votes: two, plus one additional vote for every 10 citizens (rounded to the nearest 10).

The names and populations of the 10 shires are summarized in the table below.

Shire | Population | Electoral votes |
---|---|---|
Oneshire | 11 | 3 |
Twoshire | 21 | 4 |
Threeshire | 31 | 5 |
Fourshire | 41 | 6 |
Fiveshire | 51 | 7 |
Sixshire | 61 | 8 |
Sevenshire | 71 | 9 |
Eightshire | 81 | 10 |
Nineshire | 91 | 11 |
Tenshire | 101 | 12 |

As you may know, under this sort of electoral system, it is quite possible for a presidential candidate to lose the popular vote and still win the election.

If there are two candidates running for president of Riddler Township, and every single citizen votes for one or the other, then what is the *lowest* percentage of the popular vote that a candidate can get while still winning the election?

Riddler Pinball is a game with an infinitely long wall and a circle whose radius is 1 inch and whose center is 2 inches from the wall. The wall and the circle are both fixed and never move. A single pinball starts 2 inches from the wall and 2 inches from the center of the circle.

To play, you flick the pinball toward a spot of your choosing along the wall, specified by its distance *x* from the point on the wall that’s closest to the circle, as shown in the diagram below.

The goal of the game is simple: Get the ball to bounce as many times as possible.

(Note: This is a geometry problem, not a physics problem. In other words, assume the system is frictionless and that all collisions are perfectly elastic.)

Let’s take a look at some games to see how they play out.

If you aim too far to the right (i.e., your value of *x* is too small), the pinball will quickly bounce its way through the gap between the circle and the wall. That’s what happened in the game below, when *x* was 0.75 inches, resulting in a rather pedestrian four bounces.

But if you aim too far to the left (i.e., your value of *x* is too big), the pinball will quickly come back out the same side it went in. That’s what happened in the next game, when *x* was 0.9 inches, again yielding just four bounces.

As you can see, Riddler Pinball is an unforgiving game — the slightest error can tank your chances of victory. But if you strategize *just* right, it’s possible to do quite well.

What’s the greatest number of bounces you can achieve? And, more importantly, what value of *x* gets you the most bounces?

Congratulations to 👏 Rick Schubert 👏 of San Diego, California, winner of last week’s Riddler Express.

Last week, you set your sights on breaking baseball records, albeit in a shortened season. Your true batting average was .350, meaning you had a 35 percent chance of getting a hit with every at-bat. If you had four at-bats per game, what were your chances of batting at least .400 over the course of the 60-game season?

In a 60-game season, with four at-bats per game, there were 240 total at-bats. Forty percent of 240 was 96, so to bat at least .400 you needed at least 96 hits. Since each at-bat was independent, you could use the binomial distribution to determine the probability of each number of hits.

For example, the probability of getting exactly 96 hits in 240 at-bats was equal to 0.35^{96} (i.e., getting a hit in 96 at-bats) times 0.65^{144} (i.e., *not* getting a hit in the remaining 144 at-bats), times the number of ways you can order 96 hits among 240 at-bats (240 choose 96).

Again, that was just the probability of getting *exactly* 96 hits. To find the probability of *at least* 96 hits, you had to add up the probabilities of getting 96 hits, 97 hits, 98 hits, and so on, up to (the very unlikely) case of 240 hits. Alternatively, you could have added up the probabilities of getting between zero and 95 hits, and then subtracted this from 1. Whether you did this by hand, by spreadsheet or by computer code, the correct answer was **6.1 percent**.

It turned out that it was 16 times easier to bat .400 over 60 games than it was over a full slate of 162 games, where the probability was just **0.38 percent**.
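
Both of these percentages fall out of the same exact binomial sum, which Python’s standard library can handle directly. A sketch:

```python
from math import comb

def prob_at_least(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), summed exactly term by term."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

# 60-game season: 240 at-bats, and 0.400 x 240 = 96 hits needed.
p60 = prob_at_least(240, 96, 0.35)
print(p60)   # ~0.061, i.e., 6.1 percent

# 162-game season: 648 at-bats, and 260 hits needed (0.400 x 648 = 259.2).
p162 = prob_at_least(648, 260, 0.35)
print(p162)  # ~0.0038, i.e., 0.38 percent
```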

The puzzle’s submitter, Taylor Firman, extended this further, looking at the probability of *anyone in the league* batting .400, based on each player’s own career batting average. For a 60-game season, Taylor found that this probability was about 3.4 percent, while it was just 0.002 percent for a 162-game season. So if no one catches Ted Williams this year, forget about it.

For extra credit, you were asked to find your chances of getting a hit in at least 56 consecutive games within the 60-game season, tying or breaking Joe DiMaggio’s record. It was helpful to first calculate the probability of keeping the streak alive — that is, getting at least one hit in a game. Again using the binomial distribution, this probability was 82.15 percent. But while it was likely you’d get a hit in any given game, pulling that off 56 times *in a row* was another story.

As Mark Girard explained, there were several distinct ways to achieve a streak of at least 56 games:

- Getting a hit in games 1 to 56
- Not getting a hit in game 1 and getting a hit in games 2 to 57
- Not getting a hit in game 2 and getting a hit in games 3 to 58
- Not getting a hit in game 3 and getting a hit in games 4 to 59
- Not getting a hit in game 4 and getting a hit in games 5 to 60

Each of these five cases was very unlikely. And together, they were still very unlikely. Overall, your chances of having a 56-game hitting streak in a 60-game season stood at just **0.0028 percent**. And remember, that was assuming you were a lifetime .350 hitter! Over a 162-game season, these chances improved by about tenfold, to **0.033 percent**. It would appear that Joe DiMaggio’s record is quite safe.
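
Because the five cases are mutually exclusive, their probabilities simply add, and the whole calculation fits in a few lines. A sketch:

```python
# Chance of keeping the streak alive: at least one hit in a 4-at-bat game.
p = 1 - 0.65**4
print(p)          # 0.82149375, about 82.15 percent

# Streak of 56+ in 60 games: either games 1-56 are all hit games, or a
# hitless game k (k = 1..4) is followed immediately by 56 straight hit games.
streak_60 = p**56 * (1 + 4 * (1 - p))
print(streak_60)  # ~2.8e-5, i.e., about 0.0028 percent
```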

Finally, for *extra* extra credit, you had to find your chances of both batting at least .400 *and* getting a hit in at least 56 games. This was especially tricky, since these events were *not* independent. In other words, if you hit .400, you were more likely to have a very long hitting streak, and vice versa.

Paul Wright worked it out analytically, starting with streaks of 56, 57, 58, 59 or 60 games, each of which had its own probability of occurring. Then, for each streak, you knew that the games *within* the streak had one, two, three or four hits, while the remaining handful of games could also have zero hits. Putting the streak and non-streak games together, the probability of batting at least .400 *and* having a hitting streak of at least 56 games was about **0.0027 percent**. This was very close to the answer for the extra credit, meaning that if you tied or broke Joe DiMaggio’s record in 60 games, then it was *very* likely that you’d also bat at least .400.

Finally, Angela Zhou calculated your chances of reaching both milestones over a 162-game season. It wasn’t likely.

Congratulations to 👏 Jim Boyce 👏 of Kensington, Connecticut, winner of last week’s Riddler Classic.

Last week, the tortoise and the hare were about to begin a 10-mile race along a “stretch” of road. The tortoise was driving a car that traveled 60 miles per hour, while the hare was driving a car that traveled 75 miles per hour. (For the purposes of this problem, you were asked to assume that both cars accelerated from 0 miles per hour to their cruising speed instantaneously.)

The hare did a quick mental calculation and realized if it waited until two minutes had passed, they would cross the finish line at the exact same moment. And so, when the race began, the tortoise drove off while the hare patiently waited.

But one minute into the race, after the tortoise had driven 1 mile, something extraordinary happened. The road turned out to be magical and instantaneously stretched by 10 miles! As a result of this stretching, the tortoise was now *2* miles ahead of the hare, who remained at the starting line.

At the end of every subsequent minute, the road stretched by 10 miles. With this in mind, the hare did some more mental math.

How long after the race began should the hare have waited so that both the tortoise and the hare crossed the finish line at the same exact moment?

At first, it might have seemed like neither the tortoise nor the hare would ever finish the race. How could they, when they drove a mile or so per minute, while the road stretched *10* miles per minute? The key was to realize that the road stretched *uniformly*, which meant it could carry each car along as it stretched.

To better understand this, let’s take a closer look at the tortoise over time. After one minute, it traveled 1 mile, or 10 percent of the total distance. Then, the road stretched to a length of 20 miles. But because the stretching was uniform, the tortoise was still 10 percent of the way across, meaning it was 2 miles down the road. After another minute, it was 3 miles down the road. Then, after the road stretched to a length of 30 miles, the tortoise was 4.5 miles down the road.

But rather than focusing on the distance the tortoise traveled, you should have zeroed in on the *fraction* of the total distance it covered each minute. In the first minute, the tortoise drove 1/10 of the total distance. In the second minute, it drove 1/20. In the third minute, 1/30, and in the fourth minute, 1/40. After *N* minutes, it had driven 1/10 · (1/1 + 1/2 + 1/3 + 1/4 + … + 1/*N*). As many solvers noticed, that sum inside the parentheses is none other than the *N*^{th} harmonic number! Once this sum exceeded 10, the tortoise was guaranteed to have finished the race. This happened 12,367 minutes — more than eight days — into the competition.
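
If you’d like to verify where the harmonic sum first crosses 10, a few lines of code will do it. A sketch:

```python
# Find the smallest N for which 1 + 1/2 + 1/3 + ... + 1/N >= 10, i.e., the
# minute by which the tortoise is guaranteed to have finished the race.
total, n = 0.0, 0
while total < 10:
    n += 1
    total += 1 / n
print(n)  # 12367 minutes -- more than eight days
```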

Calculating this was no small feat. But alas, much like the idling hare, you still had your work cut out for you.

The tortoise didn’t finish *exactly* 12,367 minutes into the race — the sum of the first 12,367 terms of the harmonic series is in fact *greater than* 10. The precise time turned out to be approximately 12,366.47 minutes.

But as solvers Hector Pefo and Josh Silverman cleverly observed, you didn’t *need* to calculate this exact time. As long as the hare started running the moment the tortoise completed 20 percent of the race, they’d finish together. That’s because the hare would then travel a distance that was 25 percent longer over the same amount of time, perfectly balancing the fact that the hare traveled 25 percent faster.

At this point, you just needed to determine when the tortoise had finished 20 percent of the race. After just four minutes, when the race was 40 miles long, the tortoise had traveled about 8.33 miles — just a shade over 20 percent of the way. Since the tortoise drove 1 mile per minute, it must have passed the 20 percent mark a third of a minute (i.e., 20 seconds) prior. And so the hare took off exactly **3 minutes and 40 seconds** into the race.
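
Exact rational arithmetic confirms that the tortoise is at precisely the 20 percent mark at 3 minutes and 40 seconds. A sketch using Python’s `fractions` module:

```python
from fractions import Fraction

# During minute k the road is 10k miles long, so the tortoise (driving
# 1 mile per minute) covers a fraction 1/(10k) of the course that minute.
covered = sum(Fraction(1, 10 * k) for k in range(1, 4))  # first 3 full minutes

# 40 seconds into minute 4 (road now 40 miles long) adds (2/3) * (1/40).
covered += Fraction(2, 3) * Fraction(1, 40)

assert covered == Fraction(1, 5)  # exactly 20 percent of the race
```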

You can see this epic tie in all its glory, courtesy of Allen Gu:

Solver Hypergeometricx extended the puzzle further, looking at what would happen if the road stretched *continuously* over time, rather than discretely every minute, finding that the hare would then wait *e*^{2}−1 minutes (about 6 minutes and 23 seconds) into the race.

And the moral of all this? Slow and stretchy wins the race.

Email Zach Wissner-Gross at riddlercolumn@gmail.com.
