Depending on your philosophical views on time and calendars and so on, today is something like the 4.5 billionth Pi Day that Earth has witnessed. But that long history is nothing compared to the infinity of pi itself.

A refresher for those of you who have forgotten your seventh-grade math lessons: Pi, or the Greek letter π, is a mathematical constant equal to the ratio of a circle’s circumference to its diameter — C/d. It lurks in every circle, and equals approximately 3.14. (Hence Pi Day, which takes place on March 14, aka 3/14.)

But the simplicity of its definition belies pi’s status as the most fascinating, and most studied, number in the history of the world. While treating pi as equal to 3.14 is often good enough, the number really continues on forever, a seemingly random series of digits ambling infinitely outward and obeying no discernible pattern — 3.14159265358979…. That’s because it’s an irrational number, meaning that it cannot be represented by a fraction of two whole numbers (although approximations such as 22/7 can come close).

But that hasn’t stopped humanity from furiously chipping away at pi’s unending mountain of digits. We’ve been at it for millennia.

People have been interested in the number for basically as long as we’ve understood math. The ancient Egyptians, according to a document that also happens to be the world’s oldest collection of math puzzles, knew that pi was something like 3.1. A millennium or so later, an estimate of pi showed up in the Bible: The Old Testament, in 1 Kings, seems to imply that pi equals 3: “And he made a molten sea, ten cubits from the one brim to the other: it was round all about … and a line of thirty cubits did compass it round about.”

Archimedes, the greatest mathematician of antiquity, got as far as 3.141 by around 250 B.C. He approached the calculation geometrically, sandwiching a circle between two straight-edged regular polygons. Measuring polygons was easier than measuring circles, and as the number of the polygons’ sides increased and they more closely resembled circles, their pi-like ratios squeezed the true value between ever-tighter bounds.
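Archimedes’ squeeze is easy to replay on a modern machine. Here’s a minimal Python sketch (my illustration, not his arithmetic): the classical perimeter-doubling recurrence, where a harmonic mean updates the circumscribed bound and a geometric mean the inscribed one, starting from hexagons around and inside a unit circle:

```python
from math import sqrt

# Semi-perimeters of the circumscribed and inscribed hexagons of a unit circle.
a, b = 2 * sqrt(3.0), 3.0    # a > pi > b
sides = 6
for _ in range(10):          # double the number of sides ten times: 6 -> 6144
    a = 2 * a * b / (a + b)  # harmonic mean: new circumscribed semi-perimeter
    b = sqrt(a * b)          # geometric mean: new inscribed semi-perimeter
    sides *= 2

print(f"{sides}-gon: {b:.7f} < pi < {a:.7f}")
```

Archimedes himself stopped at the 96-gon, which was enough to pin pi between 3 10/71 and 3 1/7 (about 3.1408 and 3.1429).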

Meaningful improvement on Archimedes’s method wouldn’t come for nearly two millennia. Using the new technique of integration, mathematicians like Gottfried Leibniz, one of the fathers of calculus, could prove such elegant equations for pi as:

\begin{equation*}\frac{\pi}{4}=1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}-\ldots\end{equation*}

The right-hand side, just like pi, continues forever. If you add and subtract and add and subtract all those simple fractions, you’ll inch ever closer to pi’s true value. The problem is that you’ll inch *very, very slowly*. To get just 10 correct digits of pi, you’d have to add about 5 billion fractions together.
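That crawl is easy to see for yourself. A quick Python sketch (for illustration) summing the first million terms of Leibniz’s series:

```python
# Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
total = 0.0
for k in range(1_000_000):
    total += (-1) ** k / (2 * k + 1)
estimate = 4 * total

print(estimate)  # matches 3.14159265... to only about six digits
```

The error after N terms shrinks only like 1/N, so each extra correct digit costs roughly ten times more work than the last.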

But more efficient formulas were discovered. Take this one, from Leonhard Euler, probably the greatest mathematician ever, in the 18th century:

\begin{equation*}\frac{\pi^2}{6}=\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\ldots\end{equation*}
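This is Euler’s famous solution to the Basel problem, and as a pi generator it can be sampled in a couple of lines (again my sketch, not Euler’s): sum the reciprocals of the squares, multiply by 6 and take a square root:

```python
# Euler: pi^2/6 = 1/1^2 + 1/2^2 + 1/3^2 + ...
total = sum(1 / k ** 2 for k in range(1, 1_000_001))
estimate = (6 * total) ** 0.5

print(estimate)  # about six correct digits from a million terms
```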

And Srinivasa Ramanujan, a self-taught mathematical genius from India, discovered the totally surprising and bizarre equation below in the early 1900s. Each additional term in this sum adds eight correct digits to an estimate of pi:

\begin{equation*}\frac{1}{\pi}=\frac{2\sqrt{2}}{9801}\sum_{k=0}^{\infty}\frac{(4k)!(1103+26390k)}{(k!)^4 396^{4k}}\end{equation*}
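You can check that eight-digits-per-term claim with arbitrary-precision arithmetic. A sketch using Python’s decimal module, as a direct and unoptimized translation of the formula:

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 50  # work with 50 significant digits

def ramanujan_pi(terms):
    """Sum the first `terms` terms of Ramanujan's 1/pi series, then invert."""
    total = Decimal(0)
    for k in range(terms):
        num = Decimal(factorial(4 * k) * (1103 + 26390 * k))
        den = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        total += num / den
    return 1 / (2 * Decimal(2).sqrt() / 9801 * total)

print(ramanujan_pi(1))  # the single k=0 term already gives ~7 correct digits
print(ramanujan_pi(3))  # three terms: correct to roughly 24 digits
```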

Much like with the search for large prime numbers, computers blasted this pi-digit search out of Earth orbit and into deep space starting in the mid-1900s. ENIAC, an early electronic computer and the only computer in the U.S. in 1949, calculated pi to over 2,000 places, nearly doubling the record.

As computers got faster and memory became more available, digits of pi began falling like dominoes, racing down the number’s infinite line, impossibly far but also never closer to the end. Building off of Ramanujan’s formula, the mathematical brothers Gregory and David Chudnovsky calculated over 2 billion digits of pi in the early 1990s using a homemade supercomputer housed in a cramped and sweltering Manhattan apartment. They’d double their tally to 4 billion digits after a few years.
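The Chudnovsky formula itself, a descendant of Ramanujan’s, yields about 14 new digits per term, which is why it still powers the record programs. A hedged sketch in the same style as the formula above, using direct summation; real record attempts instead use a divide-and-conquer technique called binary splitting:

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 60

def chudnovsky_pi(terms):
    # pi = 426880*sqrt(10005) / sum_k [ (-1)^k (6k)! (13591409 + 545140134k)
    #                                   / ((3k)! (k!)^3 640320^(3k)) ]
    total = Decimal(0)
    for k in range(terms):
        num = Decimal((-1) ** k * factorial(6 * k) * (13591409 + 545140134 * k))
        den = (Decimal(factorial(3 * k)) * Decimal(factorial(k)) ** 3
               * Decimal(640320) ** (3 * k))
        total += num / den
    return Decimal(426880) * Decimal(10005).sqrt() / total

print(chudnovsky_pi(3))  # three terms: correct to about 42 digits
```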

The current record now stands at over 22 trillion digits — thousands of times more than the Chudnovskys managed with their home-brewed supercomputer — worked out after 105 days of computation on a Dell server using a freely available program called y-cruncher. That program, which uses both the Ramanujan and Chudnovsky formulas, has been used to find record numbers of digits not only of pi, but of other endless, irrational numbers, including e, the square root of 2, and the golden ratio.

But maybe 22 trillion digits is just a bit of overkill. NASA’s Jet Propulsion Laboratory uses only *15* digits of pi for its highest-accuracy calculations for interplanetary navigation. Heck, Isaac Newton knew that many digits 350 years ago. “A value of π to 40 digits would be more than enough to compute the circumference of the Milky Way galaxy to an error less than the size of a proton,” a group of researchers wrote in a useful history of the number. So why would we ever need 22 trillion digits?

Sure, we’ve learned a bit of math theory while digging deep into pi: about fast Fourier transforms, and that pi is probably a so-called normal number. But the more satisfying answer seems to me to have nothing to do with math. Maybe it has to do with what President John F. Kennedy said about building a space program. We do things like this “not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills.”

But there’s one major difference: The moon is not infinitely far away; we can actually get there. Maybe this famous quote about chess is more apt: “Life is not long enough for chess — but that is the fault of life, not of chess.”

Pi is too long for humankind. But that is the fault of humankind, not of pi. Happy Pi Day.