Math Thematic

Sharing of mathematical elements, stories, and archives of all kinds.

submitted 1 day ago* (last edited 1 day ago) by xiao@sh.itjust.works to c/Math_history@sh.itjust.works

Pythagoras was many wonderful things. A delirious mystic. A benevolent cult leader. A bean-hating vegetarian. A real person (maybe).

One thing he was not: the guy who gave us the Pythagorean Theorem.

So why does he get his name on it? I cry foul. I cry “no more.” I cry, “Let us band together and vote on a better name for this ancient theorem! Not because it will actually result in a name change, but because it’s a fun debate!”

Who’s with me?!

I submit for your consideration the following names, from Hambrecht and the other clever folks in his thread:

  1. The Three Squares Theorem. Although we perceive it as a claim about numbers, for most mathematical cultures, this was a claim about shapes. To wit: if you affix squares to the sides of a right triangle, the two smaller areas add up to equal the largest.

  1. The Babylonian Formula. Give credit where credit is due! As Hambrecht says, this name “has the hint of far-away times and places… Through millennia and continents, this piece of math connects us to strange, alien people, yet so much our equals.” He calls it “fuel for children’s imaginations.” Even more important, as an astute observer points out: it can be abbreviated as “Baby Formula.”

  1. The Distance Theorem. The theorem’s most ubiquitous use is in finding distances, especially in higher dimensions.

  1. The Huey Lewis Theorem. Proposed (or, pun-posed) by Susan Burns, because, and I quote: “It’s hyp to be square…or is that b-squared?”

  1. The Adrakhonic Theorem, because that’s what it’s called in Neal Stephenson’s novel Anathem (which I just added to my reading list).

  1. Squaring the Triangle. Olaf Doschke’s suggestion, with a ring of the famously impossible “squaring the circle.”

  1. The Sum of Squares Theorem. Descriptive, clear.

  1. Garfield’s Theorem. Because if we’re just naming it after a random dude who supplied a proof, why not pick an assassinated U.S. president?

  1. Theorem 3-4-5. After the most famous Pythagorean triple.

  1. Euclid, Book I, Proposition 47. “Like chapter and verse in the mathematical bible,” explains George Jelliss.

  1. The Hypotenuse Theorem. Because it’s all about that longest side.

  1. The Right Theorem. Because it’s all about that right angle. (Also, because it’s right.)

  1. The Distance/Area Theorem. Because it’s all about multiple things at once.

  1. The Benjamin Watson Theorem. Because of this heroic, historic tackle, brought to my attention by Fawn Nguyen in her appearance on My Favorite Theorem.
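Whatever we end up calling it, the claim itself is easy to sanity-check numerically. A minimal Python sketch using triples mentioned above (the function name is my own invention):

```python
# "Three Squares" check: for a right triangle with legs a, b and hypotenuse c,
# the areas of the squares on a and b together equal the square on c.

def squares_add_up(a: float, b: float, c: float) -> bool:
    return abs(a**2 + b**2 - c**2) < 1e-9

print(squares_add_up(3, 4, 5))    # True: the most famous Pythagorean triple
print(squares_add_up(5, 12, 13))  # True
print(squares_add_up(3, 4, 6))    # False: not a right triangle
```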

Now, we could leave it there. We could say, “This has been a fruitful discussion. Let’s call it a day!” We could say, “Obviously a random blog post isn’t going to succeed in renaming the most famous theorem in mathematics, so let’s go home and eat raisins and watch sitcom reruns like the human mediocrities we are.”

But I say no! I say it’s time for a referendum!

What say you, good people of the internet? What is the best name for this fundamental theorem of geometry?

Other ideas are, of course, welcome in the comments below.

2

Amalia Pica makes sculptures, installations, performances, and drawings that address a correspondingly broad array of themes. She favors found objects and commonplace materials to create her pieces, whose concerns range from language and communication, to history and politics, or to the ways in which our childhood experiences shape our adult imaginations.

As a primary school student in Argentina, Pica was taught set theory as expressed in Venn diagrams, though, as she notes, only a few years before she would not have been. She recalls that the ban of set theory by Argentina’s dictatorship in the 1970s occurred just as group assembly was also deemed subversive. Pica speculates that set theory was prohibited because it was seen as the mathematical expression of a gathering. With this work, she literally shines a light on the absurdity of the injunction. A caption on the wall provides historical context for the work.

https://www.coleccioncisneros.org/collections/gallery/contemporary/artwork/venn-diagrams-under-the-spotlight

Pica's interest in the relationship between text and image is evident in Venn Diagrams (under the Spotlight), which consists of two colored circles of light cast from theater spotlights to form a Venn diagram. The Argentine government banned this diagram from being taught in classrooms in the 1970s, as it was thought to be an incendiary model of social collaboration. "The two circles of light are nothing but forms until the caption situates them historically, cluing you to their perception as subversive in the context of Argentine dictatorship in the 1970s. I’m interested in the ideas that we project onto images and objects: how they resist as much as accommodate them."

https://en.wikipedia.org/wiki/Amalia_Pica

A Venn diagram is a widely used diagram style that shows the logical relation between sets, popularized by John Venn (1834–1923) in the 1880s. The diagrams are used to teach elementary set theory, and to illustrate simple set relationships in probability, logic, statistics, linguistics and computer science. A Venn diagram uses simple closed curves on a plane to represent sets. The curves are often circles or ellipses.
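The set relationships such a diagram illustrates correspond directly to set operations; a minimal Python illustration (the two example sets are invented):

```python
# Two example sets and the regions of their two-circle Venn diagram.
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

print(A & B)  # {3, 4}: the overlap (intersection)
print(A - B)  # {1, 2}: only in A
print(B - A)  # {5, 6}: only in B
print(A | B)  # {1, 2, 3, 4, 5, 6}: everything covered (union)
```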

History

Venn diagrams were introduced in 1880 by John Venn in a paper entitled "On the Diagrammatic and Mechanical Representation of Propositions and Reasonings" in the Philosophical Magazine and Journal of Science, about the different ways to represent propositions by diagrams. The use of these types of diagrams in formal logic, according to Frank Ruskey and Mark Weston, predates Venn but are "rightly associated" with him as he "comprehensively surveyed and formalized their usage, and was the first to generalize them".

Diagrams of overlapping circles representing unions and intersections, such as Borromean rings, were already in frequent use in the Middle Ages. However, the extent to which these types of diagrams can be considered precursors to Venn diagrams is disputed. Euler diagrams, which are similar to Venn diagrams but do not necessarily contain all possible unions and intersections, were named after the mathematician Leonhard Euler in the 18th century; they can, however, be clearly traced back to the 16th century. Pioneers in this tradition included Erhard Weigel (1625–1699) and his students Johann Christoph Sturm (1635–1703) and Gottfried Wilhelm Leibniz (1646–1716). Also worth mentioning is Christian Weise (1642–1708), whose student Johann Christian Lange worked intensively on these diagrams. Euler further developed these diagrams, and Immanuel Kant (1724–1804) and his students popularized them in the 19th century.

Venn did not use the term "Venn diagram" and referred to the concept as "Eulerian Circles". He became acquainted with Euler diagrams in 1862 and wrote that Venn diagrams did not occur to him "till much later", while attempting to adapt Euler diagrams to Boolean logic. In the opening sentence of his 1880 article Venn wrote that Euler diagrams were the only diagrammatic representation of logic to gain "any general acceptance".

Venn viewed his diagrams as a pedagogical tool, analogous to verification of physical concepts through experiment. As an example of their applications, he noted that a three-set diagram could show the syllogism: 'All A is some B. No B is any C. Hence, no A is any C.'

Charles L. Dodgson (Lewis Carroll) includes "Venn's Method of Diagrams" as well as "Euler's Method of Diagrams" in an "Appendix, Addressed to Teachers" of his book Symbolic Logic (4th edition published in 1896). The term "Venn diagram" was later used by Clarence Irving Lewis in 1918, in his book A Survey of Symbolic Logic.

In the 20th century, Venn diagrams were further developed. David Wilson Henderson showed, in 1963, that the existence of an n-Venn diagram with n-fold rotational symmetry implied that n was a prime number. He also showed that such symmetric Venn diagrams exist when n is five or seven. In 2002, Peter Hamburger found symmetric Venn diagrams for n = 11 and in 2003, Griggs, Killian, and Savage showed that symmetric Venn diagrams exist for all other primes. These combined results show that rotationally symmetric Venn diagrams exist, if and only if n is a prime number.

Venn diagrams and Euler diagrams were incorporated as part of instruction in set theory, as part of the new math movement in the 1960s. Since then, they have also been adopted in the curriculum of other fields such as reading. With the work of Sun-Joo Shin, Venn diagrams have been recognized as a logical system equivalent to symbolic logic. Similar methods were then adopted in mathematics and subsequently in computer science.

https://en.wikipedia.org/wiki/Venn_diagram

3

When doing research in computer vision or image processing, it's useful to have a test image or two. Writing programs that reduce noise, alter brightness, or enhance edges is all very well and good, but without test images, we can't know if they work. Early on in vision science, the acquisition of images was hard, and there were a handful of images everyone used. This was partly due to expediency (not everyone had access to a scanner) and partly due to comparability (we want to be able to see the results of each algorithm on the same image or set of images). Today, nearly everyone has a digital camera as part of the device in their pocket; in the 70s and 80s, such devices simply didn't exist.

At the very beginning of the discipline that's now become computer vision, sometime in the early 70s - probably early 1973 - a researcher was looking for a test image. Alexander Sawchuk (now a professor at the University of Southern California) scanned one for this researcher: it was the centrefold of a Playboy magazine, and that scan has now entered into our scientific culture in a way the originators could never have imagined. The woman is wearing a floppy hat, and is gazing over her bare shoulder towards the camera. Scanned as a 512 × 512 pixel image, the picture is cropped at the shoulders; the suggestion of nudity is there, but the image itself is decidedly safe for work. Her name is Lena Söderberg. The researcher’s name is lost to internet history, as is the name of the person who actually brought the porn to work.

Lena (sometimes spelled Lenna, which is the anglicisation of her name used by Playboy) has been called “The First Lady of the Internet”. Her image has been printed on a chip, shrunk to the size of a human hair, blurred, sharpened, and undoubtedly enhanced. She appeared on the cover of the journal Optical Engineering in 1991, and the model herself was invited to a conference of the Image Science and Technology society in 1997. Her picture is on the wall, tastefully framed, in at least one reputable computer vision lab. For computer vision researchers, her image is everywhere.

In defence of the use of Lena, Hutchinson said “... the image mixes areas of light and dark, fuzzy and sharp, detailed and flat—providing a stiff test for an image processing algorithm”. (I am unsure whether the word stiff in that sentence is a deliberate double-entendre. In a sense, I hope it is: if we're going to have scholarly articles about the use of pornography in our science, let them at least try to be funny.) Interestingly, Hutchinson's article talks about issues with Lena, from the copyright (Playboy were, for a time, not best pleased by the widespread distribution of one of their images) to the equalities issues. Many journal editors, whilst uncomfortable with the idea of such images appearing in their journals, were even less comfortable with the idea of banning such images.

But, as Hutchinson's article was published in 2001, we can surely assume that things have changed – that was over ten years ago. Our discipline is becoming more aware of the issues of minorities: the fact that women in technology can feel isolated is something that everyone working in the field probably knows by now. The usage of Lena must have gone down. The controversial nature of the image (there are discussion threads on the topic stretching back years, on some social media platforms) must make people think twice. Surely people realise that women, as a minority in the field, might be put off by statements like “... the Lena image is a picture of an attractive woman. It is not surprising that the (mostly male) image processing research community gravitated toward an image that they found attractive.”

Right?

Wrong. At ISPA 2013, the IEEE International Symposium on Image and Signal Processing and Analysis, during a single day of the talks I saw her six times. She's included in the standard OpenCV download (at least twice – as a test image in both the C++ and Python directories). On the machine I'm using to type this blog post, despite never having used Lena in a piece of research, and never having deliberately downloaded her, I've just done a search, and I'm amazed. I've got 18 copies (excluding cached thumbnails and the directory of files I've gathered whilst researching this article). It seems that whenever you download a piece of vision code, Lena still comes along for the ride.

As a vision researcher, I'm used to being the odd one out: at the first workshop I ever spoke at I was the only woman in the room. Generally, that doesn't bother me. I'm not entirely sure what I think about Lena though – having decided she would be a good topic for an article, I read up on the history, and to be honest, I still find it a bit bizarre. But despite my avowedly feminist stance, I’m somehow unable to get that annoyed about it.

The fact that there's a historic Playboy image at pretty much every conference I go to, and on the walls of my colleagues' labs, and downloaded with every single image processing library I use, well... on the one hand, it's part of that drip-drip-drip of strangeness that comes from working in a male-dominated field, where the topics of conversation and the general attitude can be a little disconcerting. But on the other hand, with changing cultural attitudes, and the effect the internet has had on pornography, the entire centrefold (yes, you can easily find it online if you look) seems very tame indeed by today's standards. And the crop that is used in image processing research is, well... I've developed quite an affection for the picture. It's one of the quirks of computer science. So when I was asked what picture we should use to illustrate this blog post, there was only one choice.

Of course, we really had to use a Lena image on this page (although subject to Gaussian Blur with a 60px by 60px kernel). You can probably still see that it’s just a shoulder and a head in a floppy hat: what’s all the fuss about? Well, we all know what it represents and we all know where it comes from. And sometimes, there’s more to an image than meets the eye.

https://www.software.ac.uk/blog/how-photo-playboy-became-part-scientific-culture

In image processing, a Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function (named after mathematician and scientist Carl Friedrich Gauss).

It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination.

Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. This is also known as a two-dimensional Weierstrass transform. By contrast, convolving by a circle (i.e., a circular box blur) would more accurately reproduce the bokeh effect.

Since the Fourier transform of a Gaussian is another Gaussian, applying a Gaussian blur has the effect of reducing the image's high-frequency components; a Gaussian blur is thus a low-pass filter.
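As a sketch of that idea, here is a minimal pure-Python Gaussian blur (all names are my own; a real implementation would use an optimized library). It exploits the separability of the 2-D Gaussian: convolve every row, then every column, with the same 1-D kernel:

```python
import math
import random

def gaussian_kernel(sigma, radius):
    """Discrete 1-D Gaussian, normalized to sum to 1."""
    k = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve_row(row, kernel):
    """1-D convolution, same length as the input, edges clamped."""
    r = len(kernel) // 2
    n = len(row)
    return [sum(kernel[j + r] * row[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1)) for i in range(n)]

def gaussian_blur(img, sigma=2.0):
    """Blur a 2-D grayscale image (list of rows): the 2-D Gaussian is
    separable, so rows are convolved first, then columns."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    rows = [convolve_row(r, k) for r in img]
    cols = [convolve_row(list(c), k) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def spread(m):
    flat = [v for row in m for v in row]
    return max(flat) - min(flat)

# Blurring uniform noise attenuates its high frequencies (a low-pass filter),
# so the pixel values cluster much more tightly around the mean.
rng = random.Random(0)
img = [[rng.random() for _ in range(32)] for _ in range(32)]
out = gaussian_blur(img)
print(spread(out) < spread(img))  # True
```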

A halftone print rendered smooth through Gaussian blur

https://en.wikipedia.org/wiki/Gaussian_blur

4

We present the Rhythm Circle, an interactive open-source web environment that allows users to generate and manipulate rhythms using the circular representation. In this environment, three rhythms can be looped simultaneously, starting and ending together, via three concentric circles, each corresponding to a drum element: a snare drum, a kick drum and a hi-hat. The rotation speed, corresponding to the tempo, is adjustable, and there is an option to export the generated rhythms in MIDI format. The platform includes preset rhythms, offering binary and ternary patterns that correspond, in this case, to circle subdivisions into 16 and 12 equal parts. One of the advantages of this web environment is the possibility of transforming the different rhythms, not only by activating or deactivating the onsets on the three respective circles, but also by applying musical transformations. The first musical transformation is a change of the subdivision of a circle, for example from 16 to 17. The second is the rotation of a circle, which corresponds to a time shift of the musical rhythm. A usage counter was implemented, and in just a few months the platform recorded over 30,000 visits from users worldwide.

https://hal.science/hal-05054439/file/2025_ICMC.pdf
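The two transformations the abstract describes, resubdivision and rotation, can be sketched in a few lines of Python. This is not the Rhythm Circle's actual code; the names and patterns below are invented for illustration:

```python
# A rhythm circle as a list of booleans: True = onset at that subdivision.

def rotate(pattern, steps):
    """Rotate a circular rhythm: a time shift of the musical pattern."""
    steps %= len(pattern)
    return pattern[-steps:] + pattern[:-steps] if steps else list(pattern)

def resubdivide(pattern, new_size):
    """Change the circle's subdivision (e.g. 16 -> 17), mapping each onset
    to the nearest position on the new circle."""
    new = [False] * new_size
    for i, on in enumerate(pattern):
        if on:
            new[round(i * new_size / len(pattern)) % new_size] = True
    return new

# A binary kick pattern on 16 subdivisions: onsets on steps 0, 4, 8, 12.
kick = [i % 4 == 0 for i in range(16)]
print(rotate(kick, 1).index(True))  # 1: every onset shifted one step
print(len(resubdivide(kick, 17)))   # 17: same rhythm on a 17-part circle
```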

I recently developed the Rhythm Circle web application, an open-source environment designed to generate and manipulate rhythms using the circular representation.

This provides an original visualization of rhythm and might enable musicians, both professional and amateur, to explore new musical concepts based on the circular representation.

https://www.paullascabettes.com/home

5

“Circles in a Circle” is a compact and closed composition. With this painting, Kandinsky began a thoughtful study of the circle as an artistic unit. In a letter to Galka Scheyer he wrote, “it is the first picture of mine to bring the theme of circles to the foreground.” The outer black circle, like a second frame for the picture, encourages us to focus on the interaction among the inner circles, and two intersecting diagonal stripes enhance the effect, adding perspective to the composition.

1923

Geometric abstraction

Oil on canvas

38.9 × 37.6" (98.7 × 95.6 cm)

https://www.wassilykandinsky.net/work-247.php

A circle is a shape consisting of all points in a plane that are at a given distance from a given point, the centre. The distance between any point of the circle and the centre is called the radius. The length of a line segment connecting two points on the circle and passing through the centre is called the diameter. A circle bounds a region of the plane called a disc.

The circle has been known since before the beginning of recorded history. Natural circles are common, such as the full moon or a slice of round fruit. The circle is the basis for the wheel, which, with related inventions such as gears, makes much of modern machinery possible. In mathematics, the study of the circle has helped inspire the development of geometry, astronomy and calculus.

Prehistoric people made stone circles and timber circles, and circular elements are common in petroglyphs and cave paintings. Disc-shaped prehistoric artifacts include the Nebra sky disc and jade discs called Bi.

The Egyptian Rhind papyrus, dated to 1700 BCE, gives a method to find the area of a circle. The result corresponds to 256/81 (3.16049...) as an approximate value of π.
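The Rhind rule takes the area of a circle to be the square of 8/9 of its diameter; a quick Python check of the implied value of π:

```python
from fractions import Fraction

# Rhind papyrus rule: the area of a circle of diameter d is taken to be
# (8d/9)^2. Equating with the true area pi*(d/2)^2 gives pi = 4*(8/9)^2.
implied_pi = 4 * Fraction(8, 9) ** 2

print(implied_pi)         # 256/81
print(float(implied_pi))  # 3.1604938271604937
```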

Book 3 of Euclid's Elements deals with the properties of circles. Euclid's definition of a circle is:

A circle is a plane figure bounded by one curved line, and such that all straight lines drawn from a certain point within it to the bounding line, are equal. The bounding line is called its circumference and the point, its centre.

— Euclid, Book I, Elements

In Plato's Seventh Letter there is a detailed definition and explanation of the circle. Plato explains the perfect circle, and how it is different from any drawing, words, definition or explanation. Early science, particularly geometry, astrology and astronomy, was connected to the divine for most medieval scholars, and many believed that there was something intrinsically "divine" or "perfect" that could be found in circles.

In 1882, Ferdinand von Lindemann proved that π is transcendental, showing that the millennia-old problem of squaring the circle cannot be performed with straightedge and compass.

With the advent of abstract art in the early 20th century, geometric objects became an artistic subject in their own right. Wassily Kandinsky in particular often used circles as an element of his compositions.

https://en.wikipedia.org/wiki/Circle

6
submitted 3 weeks ago* (last edited 3 weeks ago) by xiao@sh.itjust.works to c/Math_history@sh.itjust.works

Marguerite's Theorem (French: Le Théorème de Marguerite) is a 2023 French-Swiss drama film co-written and directed by Anna Novion. It is about a female mathematics student at the ENS whose career is upended when an error is discovered in her work. It premiered on 22 May 2023 at the 76th Cannes Film Festival and was released in France on 1 November 2023.

Marguerite is a young and brilliant mathematician, the only girl in her class at the ENS, entirely devoted to her passion. The day an error is discovered in her thesis, she is devastated. On an impulse, she leaves the school, wiping out her past. She then dives into the real world, discovers autonomy, befriends the young Noa, and has sex for the first time. Matured by her experiences, it is with this new momentum that she manages to find a correct proof of her theorem.

https://en.wikipedia.org/wiki/Marguerite%27s_Theorem

Goldbach's conjecture is one of the oldest and best-known unsolved problems in number theory and all of mathematics. It states that every even natural number greater than 2 is the sum of two prime numbers.

Letter from Goldbach to Euler dated 7 June 1742 (Latin–German)

The conjecture has been shown to hold for all natural numbers less than 4×10^18^, but remains unproven despite considerable effort.

https://en.wikipedia.org/wiki/Goldbach%27s_conjecture
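The conjecture is easy to test by brute force for small numbers; a minimal Python sketch (trial-division primality; the helper names are my own):

```python
# Check Goldbach's conjecture for small even numbers: every even n > 2
# should be expressible as a sum of two primes.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n: int):
    """Return one pair (p, q) with p + q = n and both prime, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

print(goldbach_pair(28))  # (5, 23)
print(all(goldbach_pair(n) is not None for n in range(4, 2000, 2)))  # True
```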

According to Hardy (1999, p. 19), "It is comparatively easy to make clever guesses; indeed there are theorems, like 'Goldbach's Theorem,' which have never been proved and which any fool could have guessed." Faber and Faber offered a $1,000,000 prize to anyone who proved Goldbach's conjecture between March 20, 2000 and March 20, 2002, but the prize went unclaimed and the conjecture remains open.

https://mathworld.wolfram.com/GoldbachConjecture.html

Le Théorème de Marguerite (Marguerite’s Theorem), as explained by Anna Novion

What is the starting point of your feature film?

I remember a period when I was seriously ill. I had to stay home for a long time to get better. I was around 20 and remember being isolated from the world. When I had recovered, I felt that a distance had been created with the carefree attitude of young people of my age. When I start writing a feature film, I always rely on an emotion I’ve felt and the idea is to transpose it. The atmosphere of elite schools and the way students live in a vacuum, devoted to their work, seemed a good way of talking about this isolation.

Why did you choose to focus on the world of mathematics?

I first met with literary scholars from elite institutions. But they didn’t strike me as all that isolated. On the contrary, they were rather open to the world. I found mathematics students far more inspiring. And then there was my encounter with Ariane Mézard, which was decisive. She’s one of the rare French women mathematicians and the two of us really hit it off. The way she presented mathematics to me really provoked something within me. She discussed it in a really artistic way. She spoke to me of everything that animates me in my profession.

Which is?

Passion, necessity, difficulty, tenacity, relentlessness… I realized that there was a real parallel to be drawn between mathematics and artistic creation. What connects mathematics and directing is the risk and the passion that sometimes leads us to work for years without knowing if our work is going to amount to anything. It’s a very personal film that evokes my relationship to creation. I also wanted to talk about what it’s like to be a woman in a masculine milieu. I felt this pressure connected to the fact of being a sort of exception that pushes us to have to prove that we belong. Marguerite, my character, considers herself as a kind of anomaly. She feels this competition all the more so as she is the only woman.

How did you prepare for the film?

I spent four months at the École Normale Supérieure meeting with mathematicians. I didn't want them to be able to say that I had just skimmed over the subject. With one ambition: to make mathematicians at work captivating on screen when you're not from this world. The idea was to show to what extent doing mathematics means working all the time.

How did you collaborate with Ella Rumpf?

She worked four hours a week over four months with Ariane Mézard. We quickly realized that it was useless for her to have the mathematics that she was going to write explained to her. She thus learned the formulas by heart. We needed it to appear extremely natural on the screen. It’s an American-style role, where we worked a lot on the posture of the character: her way of walking, her way of expressing herself, the speed of her speech and the singularity of her gaze on others.

Visually, light floods in as the film builds to its crescendo. Can you explain this aesthetic choice?

The character goes through many stages on the way to her flourishing. There are also several stages in the light and in the framing of shots. Her life is very structured at the beginning, so the shots are rather geometric and monochromatic. And little by little, with the irrational in her life that she’s going to discover, including the world of feelings, light and colour appear.

https://www.festival-cannes.com/en/2023/le-theoreme-de-marguerite-marguerite-s-theorem-as-explained-by-anna-novion/

7

This picture displays the process for the first 64 prime numbers:

{2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103,107,109,113,127,131,137,139,149,151,157,163,167,173,179,181,191,193,197,199,211,223,227,229,233,239,241,251,257,263,269,271,277,281,283,293,307,311}

(the top row of the picture), with the following colors for the values:

0 = Dark Yellow, 1 = Cyan, 2 = Light Yellow,

while all other numbers ({3,5,7,11,...}) are Dark Red.

According to the Gilbreath Conjecture, the left-hand column must be Cyan ('1'), except the top square, which is Light Yellow ('2', the first prime number).

http://www.lactamme.polytechnique.fr/Mosaic/images/GILB.22.D/display.html

Gilbreath's conjecture is a conjecture in number theory regarding the sequences generated by applying the forward difference operator to consecutive prime numbers and leaving the results unsigned, and then repeating this process on consecutive terms in the resulting sequence, and so forth. The statement is named after Norman L. Gilbreath who, in 1958, presented it to the mathematical community after observing the pattern by chance while doing arithmetic on a napkin. In 1878, eighty years before Gilbreath's discovery, François Proth had published the same observations.

https://en.wikipedia.org/wiki/Gilbreath's_conjecture
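The process is easy to reproduce; a small Python sketch building the unsigned difference rows for the first 64 primes (the same primes as in the picture above) and checking that every row starts with 1:

```python
# Build the rows of unsigned differences starting from the first 64 primes
# and record each row's leading entry.

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, ok in enumerate(sieve) if ok]

row = primes_up_to(311)              # the first 64 primes: 2, 3, 5, ..., 311
leading = []
while len(row) > 1:
    row = [abs(a - b) for a, b in zip(row, row[1:])]
    leading.append(row[0])

print(len(leading))                  # 63 rows of differences
print(all(x == 1 for x in leading))  # True, as the conjecture predicts
```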

Proth-Gilbreath Conjecture

Beat the Andrew Odlyzko record: G(Pi(1.1x10^14^)) = 635 (1993)

G(Pi(10^14^)) = 693 on October 5, 2025 at 20:59:18

G(Pi(1.145x10^14^)) = 744 on October 27, 2025 at 12:10:26

1-Definition:

This conjecture was stated in 1958 by Norman L. Gilbreath, though the same observation had been published earlier, in 1878, by François Proth. It concerns the prime numbers and the sequences generated by taking the absolute value of the difference between each prime number and its successor, then repeating this process ad infinitum.

The conjecture states that the first value of each line is 1 (except on the first line, where it is 2, the only even prime number). It was studied by Andrew Odlyzko in 1993, who checked it for all prime numbers less than 10^13^.

On Sunday 10/05/2025 at 20:45 (Paris time, France) I succeeded in checking it up to 10^14^, and on Tuesday 10/07/2025 at 02:25 pm (Eastern Time) Simon Plouffe (Canada) did the same. Moreover, he confirmed the maximal value (693) of the G(Pi(x)) function for x ∈ [2,10^14^], which had been anticipated on 09/25/2025.

2-The Theory:

Obviously one cannot check the Proth-Gilbreath Conjecture exhaustively, for there are infinitely many prime numbers. Only a proof can settle it, unless a counter-example is discovered, that is, a line not starting with a '1' (other than the first one).

Let p~n~ be the prime numbers:

p~1~ = 2

p~2~ = 3

p~3~ = 5

etc...

Let's define the sequence d~k~(n):

d~0~(n) = p~n~ for all n such that n > 0

d~k~(n) = |d~k-1~(n) - d~k-1~(n+1)| for all k such that k > 0 and for all n such that n > 0

Then one must check that:

d~k~(1) = 1 for all k such that k > 0

Due to the finite limits of computers, it is impossible to check this property exhaustively. Fortunately, Andrew Odlyzko noticed that if for a certain N there exists K such that:

d~K~(1) = 1

d~K~(n) ∈ {0,2} for all n such that 1 < n < N+2

then:

d~k~(1) = 1 for all k such that K-1 < k < N+K

Let's call G(N) the smallest k (if it exists) such that:

d~j~(1) = 1 for all j such that 0 < j < k+1

d~k~(n) ∈ {0,2} for all n such that 1 < n < N+2

A simple argument shows that G(N) exists for all N, and that the process can be stopped as soon as there are only '0's, '1's and '2's on the current line of rank k.

More at https://www.lactamme.polytechnique.fr/Mosaic/descripteurs/GilbreathConjecture.01.Ang.html
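Odlyzko's stopping rule translates directly into code. A minimal, unoptimized Python sketch of G(N): the leading entry of each new row must be 1, and we stop at the first row whose next N entries are all 0 or 2 (the prime bound is an arbitrary choice of mine, large enough for small N):

```python
# G(N): the first row index k at which the leading entry is 1 and the
# N entries that follow it are all 0 or 2 (Odlyzko's stopping criterion).

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, ok in enumerate(sieve) if ok]

def G(N, prime_bound=100_000):
    row = primes_up_to(prime_bound)  # enough primes for small N
    k = 0
    while len(row) > N + 1:
        row = [abs(a - b) for a, b in zip(row, row[1:])]
        k += 1
        assert row[0] == 1, "a counter-example to the conjecture!"
        if all(x in (0, 2) for x in row[1:N + 1]):
            return k
    raise ValueError("prime_bound too small for this N")

print(G(10), G(100))  # G is nondecreasing in N
```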

Verification and attempted proofs

Several sources write that, as well as observing the pattern of Gilbreath's conjecture, François Proth released what he believed to be a proof of the statement that was later shown to be flawed. However, Zachary Chase disputes this, writing that although Proth called the observation a "theorem", there is no evidence that he published a proof, or false proof, of it.

[...] but the conjecture remains an open problem. Instead of evaluating n rows, Odlyzko evaluated 635 rows and established that the 635th row started with a 1 and continued with only 0s and 2s for the next n numbers. This implies that the next n rows begin with a 1.

Simon Plouffe has announced a computational verification for the primes up to 10^14^.

https://en.wikipedia.org/wiki/Gilbreath's_conjecture

8

In probability theory, Buffon's needle problem is a question first posed in the 18th century by Georges-Louis Leclerc, Comte de Buffon:

Suppose we have a floor made of parallel strips of wood, each the same width, and we drop a needle onto the floor. What is the probability that the needle will lie across a line between two strips?

(Figure: the a needle lies across a line, while the b needle does not.)

Buffon's needle was the earliest problem in geometric probability to be solved; it can be solved using integral geometry. The solution for the sought probability p, in the case where the needle length l is not greater than the width t of the strips, is p = (2/π)(l/t).

This can be used to design a Monte Carlo method for approximating the number π, although that was not the original motivation for de Buffon's question. The seemingly unusual appearance of π in this expression occurs because the underlying probability distribution function for the needle orientation is rotationally symmetric.

https://en.wikipedia.org/wiki/Buffon%27s_needle_problem

Needle: A really weird way to find Pi

Discovery of Calculus in the 17th century opened a new window for estimating π more precisely. Since then, we've seen plenty of methods to do it. Some of them are pretty weird but simple enough to understand. Buffon's Needle Problem is one of them. This problem was first posed by the French naturalist Georges-Louis Leclerc, Comte de Buffon in 1733. It goes like this:

Suppose you are given a floor with equally spaced parallel lines on it. If we drop a needle on the floor, what is the probability that the needle will lie across a line?

Let's say you've thrown n needles (or a single needle n times), and that m of them cross a line (or the needle crosses a line m times). The empirical probability P that a needle crosses a line is

P = m/n

Buffon's Needle Problem. The green needles are the ones crossing the parallel lines

It may seem a bit odd how π is related to this problem. Hang on tight. I'll explain.

To solve this problem, we'll require some basic knowledge of Probability and Integral Calculus. Let's assume that the spacing between two consecutive parallel lines is D, the length of the needle is L. Now, let the distance from the middle point of a needle on the floor to its closest line be x and the acute angle between the line and needle be θ (See figure above). All needles are of equal length.

Since we are considering x to be the smallest distance from the center of the needle to any one of the lines, it can vary only between 0 and D/2. And since θ is an acute angle, it can take any value between 0 and π/2. Using trigonometry, we can find that the extent of the half-needle perpendicular to the lines is L/2 * sin(θ). The needle will cross a line if x is less than L/2 * sin(θ).

Okay. Now let's visualize possible outcomes in a graph. Let the horizontal axis be θ and the vertical axis be x. So a rectangle with sides π/2 and D/2 represents all possible outcomes in this experiment.

Horizontal axis refers to θ and vertical axis refers to x. The rectangle represents all possible outcomes

Now I am going to shade all those points in the rectangle that represent the events where a needle crosses a line according to the two conditions above. Think about it. It's just the area under the curve L/2 * sin(θ) where 0 ≤ θ ≤ π/2.

The shaded area indicates the probability that a needle will cross a line

The probability we are looking for would be the ratio of the area under the sine curve and the rectangle.

Area of the rectangle:

(π/2) * (D/2) = πD/4

Area under the sine curve:

∫ from 0 to π/2 of (L/2) sin(θ) dθ = L/2

Ratio of the areas:

P = (L/2) / (πD/4) = 2L / (πD)

Rearranging, we get

π = 2L / (PD) = 2Ln / (Dm)

In 1901, Italian mathematician Mario Lazzarini performed Buffon's needle experiment and, tossing a needle 3408 times, obtained π correct to six decimal places!
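
The experiment is easy to reproduce in simulation. A minimal Monte Carlo sketch (my own, not from the quoted article), using the crossing condition x < (L/2) sin(θ) and the rearranged formula π ≈ 2Ln/(Dm):

```python
import math
import random

def estimate_pi(n_throws, needle_len=1.0, spacing=1.0, seed=0):
    """Monte Carlo estimate of pi via Buffon's needle (needle_len <= spacing).

    x is the distance from the needle's center to the nearest line,
    theta the acute angle between needle and lines; the needle crosses
    a line exactly when x < (needle_len / 2) * sin(theta).
    """
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_throws):
        x = rng.uniform(0, spacing / 2)
        theta = rng.uniform(0, math.pi / 2)
        if x < (needle_len / 2) * math.sin(theta):
            crossings += 1
    # P = 2L / (pi * D)  =>  pi ~ 2 * L * n / (D * m)
    return 2 * needle_len * n_throws / (spacing * crossings)

print(estimate_pi(1_000_000))  # close to 3.14 (varies with the seed)
```

With a million throws the estimate typically lands within a few thousandths of π, which illustrates the method's slow (1/√n) convergence.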

https://tahsin314.github.io/writings/maths/buffons_needle.html

Finally, since Buffon’s needle problem is interested in estimating π – and since I have mostly focused on the error in this method instead – I thought it might be fun to take a look at the empirical distribution of $\hat{\pi}$ for one of my experimental setups. The following figure shows how this distribution varies as a function of the number of needles thrown for the l = h case. The data are plotted using a dot-plot, while box-plots have been overlaid on top.

https://www.paulkepley.com/2021-05-13-AsymptoticBuffon/

9
 
 

Here's an interesting quote from the correspondence of Isaac Newton:

This is from the 2nd letter that Newton wrote to Leibniz (via Oldenburg) in 1677. He was responding to some questions from Leibniz about his method of infinite series and came close to revealing his "fluxional method" (i.e., calculus), but then decided to conceal it in the form of an anagram. After describing his methods of tangents and handling maxima and minima, he wrote

The foundations of these operations is evident enough, in fact; but because I cannot proceed with the explanation of it now, I have preferred to conceal it thus: 6accdae13eff7i3l9n4o4qrr4s8t12ux. On this foundation I have also tried to simplify the theories which concern the squaring of curves, and I have arrived at certain general Theorems.

The anagram expresses, in Newton's terminology, the fundamental theorem of the calculus: "Data aequatione quotcunque fluentes quantitates involvente, fluxiones invenire; et vice versa", which means "Given an equation involving any number of fluent quantities to find the fluxions, and vice versa." Arranging the characters in his Latin sentence in alphabetical order (and assuming he counted the diphthong "ae" as a separate character, and u's and v's are counted as the same character), the number of occurrences of each character is as follows

This agrees with Newton's anagram

except that I count nine t's instead of eight. Possibly Newton's original Latin spelling used one fewer t, although I can't see which one of them could plausibly be omitted. It could also be that the anagram has been incorrectly copied, but it agrees with the version in both Westfall's and Christianson's biographies, as well as the transcription of Newton's letter contained in Calinger's Classics of Mathematics. Another possibility is that Newton simply mis-counted. This isn't as implausible as it might seem at first, since there is a well-known psychological phenomenon of overlooking the second letter in short connective words (like the f in "of") when quickly counting the number of occurrences of a certain letter in a string of text. It's very easy, when counting the number of t's in Newton's Latin phrase, to neglect the "t" in the word "et".
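
The tally is easy to check mechanically. A short Python sketch (my own, following the counting conventions described above: the diphthong "ae" in "aequatione" as one character, u and v not distinguished):

```python
from collections import Counter

phrase = ("Data aequatione quotcunque fluentes quantitates "
          "involvente fluxiones invenire et vice versa")

# Keep letters only, in lowercase.
s = "".join(ch for ch in phrase.lower() if ch.isalpha())
# Merge the diphthong "ae" (in "aequatione") into a single marker...
s = s.replace("ae", "@", 1)
# ...and count u and v as the same character.
s = s.replace("v", "u")

counts = Counter(s)

# Counts claimed by the anagram 6accdae13eff7i3l9n4o4qrr4s8t12ux:
anagram = {"a": 6, "c": 2, "d": 1, "@": 1, "e": 13, "f": 2, "i": 7,
           "l": 3, "n": 9, "o": 4, "q": 4, "r": 2, "s": 4, "t": 8,
           "u": 12, "x": 1}

diffs = {ch: (counts[ch], anagram[ch])
         for ch in anagram if counts[ch] != anagram[ch]}
print(diffs)  # {'t': (9, 8)}: the phrase has nine t's, the anagram eight
```

Every character count matches except the t's, reproducing the discrepancy noted above.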

Ironically, neither Leibniz nor Newton had published anything on calculus at the time this letter was exchanged, although both are believed to have been in possession of the calculus, so if Newton had just come right out with a complete and explicit statement of his calculus he would have placed Leibniz in a very difficult position, and would have established his own priority beyond doubt (since the letter passed through Oldenburg). Instead, Newton's very protectiveness and secrecy caused him to lose whatever unambiguous claim to priority he might have had (and led to an acrimonious priority dispute that embittered both his and Leibniz's later lives).

On a very deep level it was natural for Newton to express himself in anagrams, because he seems to have regarded "contrived obscurity" (in Domson's words) as an essential feature of God's design for the world, and Newton adopted this mode of operation in his own work. Recall that he said he had made the Principia "designedly abstruse" (to avoid being baited by "little smatterers" in mathematics). Even more pointedly, Newton spent many years attempting to interpret the prophecies in the Bible, which he believed were presented in deliberately disguised form so that their meaning could only be inferred by solving them like puzzles. Interestingly, he had disdain for people who tried to unravel prophecies of future events. In his view, this was misguided and doomed to failure. He believed the encoded prophecies were intended to be understood only after the fact. In his "Observations upon the Prophecies of the Apocalypse of St. John" he wrote

The folly of interpreters has been to foretell times and things by this Prophecy, as if God designed to make them Prophets. By this rashness they have not only exposed themselves, but brought Prophecy also into contempt. The design of God was much otherwise. He gave this [Revelations] and the Prophecies of the Old Testament, not to gratify men's curiosities by enabling them to foreknow things, but that after they were fulfilled they might be interpreted by the event, and [that] his own Providence, not the Interpreters, be then manifested thereby to the world. For the event of things predicted many ages before, will then be a convincing argument that the world is governed by Providence.

We might say that Newton saw the biblical prophecies serving exactly the same function as his anagram on fluxions and fluents: they are presented in a form that cannot be interpreted to reveal its meaning before the fact, but after the events have transpired and the solution of the puzzle is found, they can serve as irrefutable evidence of the Providence of the creator.

https://www.mathpages.com/home/kmath414/kmath414.htm

10
 
 

In musical tuning and harmony, the Tonnetz (German for 'tone net') is a conceptual lattice diagram representing tonal space first described by Leonhard Euler in 1739. Various visual representations of the Tonnetz can be used to show traditional harmonic relationships in European classical music.

A modern rendering of the Tonnetz. The A minor triad is in dark blue, and the C major triad is in dark red. Interpreted as a torus, the Tonnetz has 12 nodes (pitches) and 24 triangles (triads).

Euler's Tonnetz

Euler's Tonnetz, pictured at left, shows the triadic relationships of the perfect fifth and the major third: at the top of the image is the note F, and to the left underneath is C (a perfect fifth above F), and to the right is A (a major third above F). Gottfried Weber, in his Versuch einer geordneten Theorie der Tonsetzkunst, discusses the relationships between keys, presenting them in a network analogous to Euler's Tonnetz, but showing keys rather than notes. The Tonnetz itself was rediscovered in 1858 by Ernst Naumann in his Harmoniesystem in dualer Entwickelung, and was disseminated in an 1866 treatise of Arthur von Oettingen. Oettingen and the influential musicologist Hugo Riemann (not to be confused with the mathematician Bernhard Riemann) explored the capacity of the space to chart harmonic modulation between chords and motion between keys. Similar understandings of the Tonnetz appeared in the work of many late-19th century German music theorists.

The appeal of the Tonnetz to 19th-century German theorists was that it allows spatial representations of tonal distance and tonal relationships. For example, looking at the dark blue A minor triad in the graphic at the beginning of the article, its parallel major triad (A-C♯-E) is the triangle right below, sharing the vertices A and E. The relative major of A minor, C major (C-E-G) is the upper-right adjacent triangle, sharing the C and the E vertices. The dominant triad of A minor, E major (E-G♯-B) is diagonally across the E vertex, and shares no other vertices. One important point is that every shared vertex between a pair of triangles is a shared pitch between chords - the more shared vertices, the more shared pitches the two chords have. This provides a visualization of the principle of parsimonious voice-leading, in which motions between chords are considered smoother when fewer pitches change. This principle is especially important in analyzing the music of late-19th century composers like Wagner, who frequently avoided traditional tonal relationships.

https://en.wikipedia.org/wiki/Tonnetz

Harmonic Trajectories in the Tonnetz

Introduction

The Neo-Riemannian Tonnetz is a way of visualizing musical relationships between chords. It was developed by music theorists to help understand how chords can transition smoothly from one to another.

Imagine a grid or network of interconnected points. Each point represents a different chord. The horizontal lines connect chords that are closely related, while the vertical lines connect chords that share similar tonal qualities.

The Tonnetz is based on the idea that chords can be transformed or changed into one another through small movements. These transformations are represented by diagonal lines on the Tonnetz. For example, a chord can be transformed into another chord by changing one note at a time, moving in a specific direction on the grid.

By studying the Tonnetz, musicians and theorists can analyze chord progressions and see how different chords are related to each other. It provides a visual representation of the harmonic possibilities and helps to explain the underlying structure of music.

In simple terms, the Neo-Riemannian Tonnetz is a grid that shows how chords in music are connected and can be transformed smoothly from one to another. It helps musicians and theorists understand how chords fit together and how they can create pleasing transitions in music.
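
The one-note-at-a-time moves between neighbouring triads are the neo-Riemannian P, L, and R transformations. A small sketch of mine (triads as (root pitch class, quality) pairs with C = 0; this is an illustration, not code from the linked post):

```python
def parallel(triad):
    """P: swap major/minor over the same root (C major <-> C minor)."""
    root, quality = triad
    return (root, "min" if quality == "maj" else "maj")

def relative(triad):
    """R: major <-> its relative minor (C major <-> A minor)."""
    root, quality = triad
    return ((root + 9) % 12, "min") if quality == "maj" else ((root + 3) % 12, "maj")

def leading_tone(triad):
    """L: leading-tone exchange (C major <-> E minor)."""
    root, quality = triad
    return ((root + 4) % 12, "min") if quality == "maj" else ((root + 8) % 12, "maj")

def pitches(triad):
    """Pitch classes of the triad, for counting shared tones."""
    root, quality = triad
    third = 4 if quality == "maj" else 3
    return {root, (root + third) % 12, (root + 7) % 12}

c_major = (0, "maj")
a_minor = relative(c_major)  # (9, 'min')
# Each transformation moves one note and keeps two common tones,
# the shared vertices on the Tonnetz:
print(len(pitches(c_major) & pitches(a_minor)))  # 2
```

Each transformation is an involution (applying it twice returns the original triad), which is why adjacent triangles on the Tonnetz pair up.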

https://emmanouil-karystinaios.github.io/post/tonnetz/

11
 
 

By combining theoretical abstraction with practical impact, Stéphane Mallat has left a lasting mark on mathematics and computer science. From the JPEG 2000 image compression standard to the mathematical foundations of artificial intelligence, he has shaped tools that have become essential. He is the 2025 recipient of the CNRS Gold Medal.

“We often imagine mathematics as a collection of abstract concepts that apply ‘from above’ onto reality. But more often than not, it works the other way around: real-world problems push us to invent new mathematical tools. And to shape them, one has to ‘get one’s hands dirty,’ building bridges between abstract theory and concrete questions from the world. That frontier, between the two, is precisely where I feel comfortable.”

The scientific work of the 62-year-old researcher – broad forehead topped with unruly hair, gentle blue eyes, and a warm smile – makes the point. His contributions have profoundly influenced the field of mathematics applied to signal processing. He is best known as the inventor of a key algorithm behind the JPEG 2000 compression format, and for pioneering the mathematical insights that help us understand deep learning models at the heart of modern artificial intelligence.

Stéphane Mallat, holder of the Chair of Data Science at the Collège de France and researcher at the École Normale Supérieure, member of the French Academy of Sciences and of the U.S. National Academy of Engineering, co-signatory of ten patents, and recipient of the CNRS Innovation Medal along with numerous other prestigious distinctions, has now been awarded France’s highest scientific distinction: the CNRS Gold Medal.

From an early age, Stéphane showed a passion for mathematics – “a bubble in which I felt at ease” – yet to him, it seemed too ethereal to imagine as a future career. As a child, he loved “building things, giving shape to ideas, like an engineer,” through woodworking.

"If I returned to mathematics, it was thanks to intuitions sparked by practical applications. That's when I realised the extraordinary power and beauty of abstract concepts, their ability to capture the essence of realities that, on the surface, look completely different.”

After excelling at École Polytechnique, he left for the University of Pennsylvania in the United States. There, in 1988, he completed a PhD in mathematics applied to image processing under the guidance of Ruzena Bajcsy – “a pioneer in the field” – at a time when digital technology was booming.

An image of 1000x1000 pixels contains a million numerical values; each pixel is a number between 0 (black) and 255 (white). How can one extract information from such an avalanche of bytes? His PhD supervisor proposed trying to do so by changing the image resolution.

Throughout his PhD, and during the subsequent eight years at the prestigious Courant Institute in New York, he focused on uncovering the principles governing the extraction of information from various types of digital data – images, sounds, electrocardiograms – with a central objective: to represent large-scale data as a superposition of a minimal number of elementary structures.

“This is somewhat akin to constructing a house from Lego blocks, using the fewest possible bricks while retaining the ability to define the shape of these elementary components,” he explains. The question of sparse representation, reminiscent of the principle of simplicity underlying Ockham’s razor in philosophy, arises across all fields.

For instance in music, a polyphonic melody consists of a succession of elementary blocks that are the notes, each with its own pitch and duration. “With an image, being sparse means focusing on significant variations, such as a contour or abrupt change in colour. In mathematics, the goal is to capture the essence of the problem – the pursuit of sparsity – while freeing oneself from the context of specific applications, in order to discover general solutions that can later have a wide range of applications." From the very beginning of his career, Stéphane Mallat set out in search of these fundamental structures capable of representing any type of data sparsely. Serendipity would eventually guide him toward these elementary building blocks.

One summer, while on the beach, a friend mentioned the work of the mathematician Yves Meyer on “wavelets.” In mathematics, a wavelet is a curve that oscillates over a small domain and then vanishes. Intrigued, Mallat obtained Meyer’s paper, which showed, among other things, that any complex curve can be represented as a superposition of very particular wavelets. The mathematical problem raised by Yves Meyer was to determine whether it was possible to construct other types of wavelets capable of producing sparser decompositions^[1]^.

“I found a solution to this mathematical question based on the image processing problem posed by Ruzena Bajcsy,” explains Stéphane Mallat. “In image processing, wavelets can be interpreted as details that progressively increase the resolution of an image. Following this approach, I introduced the theory of multiresolution analyses, which provides a framework for constructing all mathematical wavelets. In this way, the intuition derived from image processing led me to the solution of the mathematical problem, but it was the mathematical abstraction that enabled me to understand how to compute the ‘wavelet transform.’” This is a fast algorithm, known as Mallat's algorithm, capable of rewriting any digital data – such as an image composed of millions of pixels – as a superposition of a much smaller number of wavelets, each representing a local variation within the image.

“While Bajcsy was my mentor on the applications side, Meyer was undoubtedly the one on abstraction, and I move back and forth from one to the other.”

Mallat’s powerful algorithm, which is capable of rapidly compressing images without any loss of information, was central to the many applications that emerged around the turn of the millennium, including the JPEG 2000 image compression standard. Under Mallat’s leadership, the mathematical language of wavelets generated a global standard used not only in software, but also in numerous medical, meteorological, and astronomical databases.

Already celebrated and recognised worldwide as a scientist whose work commands attention, the bridge-builder continues his rapid ascent. He aims to further advance the sparsity in data representation of which he is the architect. “In writing, using a limited vocabulary, one can certainly express complex ideas, but this comes with the risk of resorting to long circumlocutions and, ultimately, producing approximations. To create shorter, more impactful sentences, one must enrich the vocabulary. This is why I introduced the concept of a ‘mathematical dictionary,’ comprising a large number of elementary building blocks, more specialised than wavelets.”

Back in France, where he served as the Director of the Mathematics Department at Polytechnique beginning in 1998, he applied these results by building dictionaries of bandlets, to more effectively represent images and the geometry of contours. This work ultimately prompted him to make a significant change in his professional life.

In 2001, he founded the start-up Let It Wave with three of his former doctoral students. “Almost overnight, I went from being an academic to a CEO, and I discovered an entirely new world: marketing, negotiating funding rounds, concern that the venture would abruptly stop for lack of subsidies… It was exhilarating, and in some ways similar to research: entrepreneurs also need to be excited like children about an idea they believe will revolutionise the world, even if it might collapse in a fortnight. They have a vision and are never jaded, which, in my view, is an essential quality. But by moving into this world, I realised how much I missed research and teaching.” Too much concreteness, not enough abstraction.

So after profitably selling Let It Wave, he returned to Polytechnique in 2007, where he introduced entrepreneurship classes for students, a way of passing the torch of the builder. “Yet as a researcher, I went through a dry spell. I had no desire to repeat what I had done before, all the ideas I had in mind seemed already explored. I was in doubt, wondering whether at 45 years of age I was too old to take up research again, to invent new mathematics or algorithms.” And then the horizon brightened. In 2008, he discovered Yann LeCun’s results on deep neural networks. “I knew enough about image processing applications to realise that these computer programs, inspired by the human brain, did not merely represent incremental progress but constituted a genuine paradigm shift.”

Mallat plunged headfirst into the world of artificial intelligence, with a clear objective: to develop mathematical models to understand the remarkable performance of neural networks. These networks learn to answer a question by analysing data, for example identifying the animal in an image. During their training, they are provided with millions of examples, each paired with the correct answer – the name of the animal corresponding to each image. Much like a student practicing exercises, the network learns by adjusting its internal parameters to make fewer mistakes. “But how does it manage to provide so many correct answers for new images it has never seen? It’s a mystery, because these problems are highly complex. What type of information has it learned to extract from the data? I observed that these neural networks initially compute a ‘wavelet transform’. This reminded me of the results of neurophysiologists, who have also identified ‘wavelet transforms’ in the primary areas of our visual cortex, as well as in the cochlea in the ear.”

Building on his expertise and interdisciplinary vision, Mallat showed that a neural network builds hierarchical representations. It separates the largest structures, for example the coarse outline of a face in an image, and represents finer components relative to the broader ones. For instance, the eyes relative to the face, and the pupil relative to the eye. “The wavelet transform is a first step in constructing this hierarchy,” he explains.

By elucidating this mechanism, he laid the mathematical foundations for deep learning models, which underpin many AI systems today. “But the deeper one goes into the network layers, the more sophisticated the structures the network detects. Certain neurons activate for very specific features, such as a melody or a face. It is as if these deep layers represented data with very rich and highly specialised ‘mathematical dictionaries,’ whose properties remain poorly understood by scientists.” All these results, he emphasizes, were achieved collectively: “In science, one almost never moves forward alone. Throughout my career, I have worked extensively with my doctoral students and numerous collaborators. They have supported me in formulating the right questions, sharing both successes and setbacks. Each, in their own way, has brought fundamental contributions.”

Do these artificial intelligences, which Mallat is still studying today, pose a threat to our societies? “They bring remarkable advances, for example in medicine, but like any technology, they also carry risks – for privacy, and because of their potential military use,” he points out. “It is therefore crucial to control and regulate them, but that is not solely the responsibility of governments. Each of us is confronted with this revolution, and will need to adapt to take advantage of the best it offers, while avoiding the pitfalls. This requires understanding AI, and not mythologizing it. It is with this goal in mind that I created MathAData^[2]^, a high school teaching programme for mathematics directly linked to solving practical AI problems. We can see that middle and high school students are much more motivated to learn maths when they understand it lies at the heart of major issues and the tools of their everyday lives.”

How do you, Stéphane Mallat, spend your time when not navigating oceans of data, or building bridges between great ideas and reality? “I love to dance. Tango, rock and roll… sometimes on the banks of the Seine. When I dance, I’m in another world, that of the music and my partner. I disconnect.” After all, don’t builders sometimes need to take a breather?


Footnotes

  1. Or, in mathematical language: “build new orthogonal wavelet bases”.

  2. mathadata.fr/en

12
Numbers in a Spiral (sh.itjust.works)
 
 

painter

Johnson, Crockett

Description

Some of Crockett Johnson's paintings reflect relatively recent research. Mathematicians had long been interested in the distribution of prime numbers. At a meeting in the early 1960s, physicist

Stanislaw Ulam

Stanisław Marcin Ulam (13 April 1909 – 13 May 1984) was a Polish and American mathematician, nuclear physicist and computer scientist. He participated in the Manhattan Project, originated the Teller–Ulam design of thermonuclear weapons, discovered the concept of the cellular automaton, invented the Monte Carlo method of computation, and suggested nuclear pulse propulsion. In pure and applied mathematics, he proved a number of theorems and proposed several conjectures.

https://en.wikipedia.org/wiki/Stanis%C5%82aw_Ulam

of the Los Alamos Scientific Laboratory in New Mexico passed the time by jotting down numbers in a grid. One was at the center, the numbers from 2 to 9 around it to form a square, the numbers from 10 to 25 around this, and the spiral continued outward.

Circling the prime numbers, Ulam was surprised to discover that they tended to lie on lines. He and several colleagues programmed the MANIAC computer to compute and plot a much larger number spiral, and published the result in the American Mathematical Monthly in 1964. News of the event also created sufficient stir for Scientific American to feature their image on its March 1964 cover. Martin Gardner wrote a related column in that issue entitled “The Remarkable Lore of the Prime Numbers.”

The painting is #77 in the series. It is unsigned and undated, and has a wooden frame painted white.

date made

ca 1965

Object Name

painting

Physical Description

masonite (substrate material) wood (frame material)

Measurements

overall: 82 cm x 85 cm x 1.3 cm; 32 5/16 in x 33 7/16 in x 1/2 in


The Ulam spiral or prime spiral is a graphical depiction of the set of prime numbers, devised by mathematician Stanisław Ulam in 1963 and popularized in Martin Gardner's Mathematical Games column in Scientific American a short time later. It is constructed by writing the positive integers in a square spiral and specially marking the prime numbers.

Ulam spiral of size 201×201. Black dots represent prime numbers. Diagonal, vertical, and horizontal lines with a high density of prime numbers are clearly visible.

For comparison, a spiral with random odd numbers colored black (at the same density of primes in a 200x200 spiral).

Ulam and Gardner emphasized the striking appearance in the spiral of prominent diagonal, horizontal, and vertical lines containing large numbers of primes. Both Ulam and Gardner noted that the existence of such prominent lines is not unexpected, as lines in the spiral correspond to quadratic polynomials, and certain such polynomials, such as Euler's prime-generating polynomial x^2^ − x + 41, are believed to produce a high density of prime numbers. Nevertheless, the Ulam spiral is connected with major unsolved problems in number theory such as Landau's problems. In particular, no quadratic polynomial has ever been proved to generate infinitely many primes, much less to have a high asymptotic density of them, although there is a well-supported conjecture as to what that asymptotic density should be.

The Ulam spiral is constructed by writing the positive integers in a spiral arrangement on a square lattice:

and then marking the prime numbers:

In the figure, primes appear to concentrate along certain diagonal lines. In the 201×201 Ulam spiral shown above, diagonal lines are clearly visible, confirming the pattern to that point. Horizontal and vertical lines with a high density of primes, while less prominent, are also evident. Most often, the number spiral is started with the number 1 at the center, but it is possible to start with any number, and the same concentration of primes along diagonal, horizontal, and vertical lines is observed. Starting with 41 at the center gives a diagonal containing an unbroken string of 40 primes (starting from 1523 southwest of the origin, decreasing to 41 at the origin, and increasing to 1601 northeast of the origin), the longest example of its kind.
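
That 40-prime run is quick to verify using Euler's polynomial in the equivalent form n^2^ + n + 41 (a short sketch of mine, not from the article):

```python
def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Values on the diagonal through 41: n^2 + n + 41 for n = 0..39.
values = [n * n + n + 41 for n in range(40)]
print(values[0], values[-1])             # 41 1601
print(all(is_prime(v) for v in values))  # True
print(is_prime(40 * 40 + 40 + 41))       # False: 1681 = 41^2
```

The run breaks at n = 40, where the polynomial's value is divisible by 41.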

Explanation

Diagonal, horizontal, and vertical lines in the number spiral correspond to polynomials of the form

f(n) = 4n^2^ + bn + c

where b and c are integer constants. When b is even, the lines are diagonal, and either all numbers are odd, or all are even, depending on the value of c. It is therefore no surprise that all primes other than 2 lie in alternate diagonals of the Ulam spiral. Some polynomials, such as 4n^2^ + 8n + 3, while producing only odd values, factorize over the integers: 4n^2^ + 8n + 3 = (2n + 1)(2n + 3), and are therefore never prime except possibly when one of the factors equals 1. Such examples correspond to diagonals that are devoid of primes or nearly so.

To gain insight into why some of the remaining odd diagonals may have a higher concentration of primes than others, consider 4n^2^ + 6n + 1 and 4n^2^ + 6n + 5. Compute remainders upon division by 3 as n takes successive values 0, 1, 2, .... For the first of these polynomials, the sequence of remainders is 1, 2, 2, 1, 2, 2, ..., while for the second, it is 2, 0, 0, 2, 0, 0, .... This implies that in the sequence of values taken by the second polynomial, two out of every three are divisible by 3, and hence certainly not prime, while in the sequence of values taken by the first polynomial, none are divisible by 3. Thus it seems plausible that the first polynomial will produce values with a higher density of primes than will the second. At the very least, this observation gives little reason to believe that the corresponding diagonals will be equally dense with primes. One should, of course, consider divisibility by primes other than 3. Examining divisibility by 5 as well, remainders upon division by 15 repeat with pattern 1, 11, 14, 10, 14, 11, 1, 14, 5, 4, 11, 11, 4, 5, 14 for the first polynomial, and with pattern 5, 0, 3, 14, 3, 0, 5, 3, 9, 8, 0, 0, 8, 9, 3 for the second, implying that only three out of 15 values in the second sequence are potentially prime (being divisible by neither 3 nor 5), while 12 out of 15 values in the first sequence are potentially prime (since only three are divisible by 5 and none are divisible by 3).
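
The quoted remainder patterns are easy to check numerically; a short sketch (mine, using the two polynomials named above):

```python
def p1(n):
    return 4 * n * n + 6 * n + 1

def p2(n):
    return 4 * n * n + 6 * n + 5

print([p1(n) % 3 for n in range(6)])    # [1, 2, 2, 1, 2, 2]
print([p2(n) % 3 for n in range(6)])    # [2, 0, 0, 2, 0, 0]
print([p1(n) % 15 for n in range(15)])  # [1, 11, 14, 10, 14, 11, 1, 14, 5, 4, 11, 11, 4, 5, 14]
print([p2(n) % 15 for n in range(15)])  # [5, 0, 3, 14, 3, 0, 5, 3, 9, 8, 0, 0, 8, 9, 3]
```

Counting the nonzero residues coprime to 15 in each mod-15 pattern reproduces the 12-out-of-15 versus 3-out-of-15 comparison in the text.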

While rigorously-proved results about primes in quadratic sequences are scarce, considerations like those above give rise to a plausible conjecture on the asymptotic density of primes in such sequences, which is described in the next section.

Variants

Klauber triangle with prime numbers generated by Euler's polynomial x^2^  −  x  +  41 highlighted

Sacks spiral

Ulam spiral of size 150×150 showing both prime and composite numbers

Hexagonal number spiral with prime numbers in green and more highly composite numbers in darker shades of blue

Number spiral with 7503 primes visible on regular triangle

Ulam spiral with 10 million primes

https://en.wikipedia.org/wiki/Ulam_spiral

13
 
 

Sangaku or san gaku (Japanese: 算額, lit. 'calculation tablet') are Japanese geometrical problems or theorems on wooden tablets which were placed as offerings at Shinto shrines or Buddhist temples during the Edo period by members of all social classes.

A sangaku dedicated to Konnoh Hachimangu (Shibuya, Tokyo) in 1859.

A sangaku dedicated at Emmanji Temple in Nara

The sangaku were painted in color on wooden tablets (ema) and hung in the precincts of Buddhist temples and Shinto shrines as offerings to the kami and buddhas, as challenges to the congregants, or as displays of the solutions to questions. Many of these tablets were lost during the period of modernization that followed the Edo period, but around nine hundred are known to remain.

Fujita Kagen (1765–1821), a Japanese mathematician of prominence, published the first collection of sangaku problems, his Shimpeki Sampo (Mathematical problems Suspended from the Temple) in 1790, and in 1806 a sequel, the Zoku Shimpeki Sampo.

During this period Japan applied strict regulations to commerce and foreign relations with Western countries, so the tablets were created using Japanese mathematics, developed in parallel to Western mathematics. For example, the connection between an integral and its derivative (the fundamental theorem of calculus) was unknown, so sangaku problems on areas and volumes were solved by expansions in infinite series and term-by-term calculation.

https://en.wikipedia.org/wiki/Sangaku

Of the world's countless customs and traditions, perhaps none is as elegant, nor as beautiful, as the tradition of sangaku, Japanese temple geometry. From 1639 to 1854, Japan lived in strict, self-imposed isolation from the West. Access to all forms of occidental culture was suppressed, and the influx of Western scientific ideas was effectively curtailed. During this period of seclusion, a kind of native mathematics flourished.

Devotees of math, evidently samurai, merchants and farmers, would solve a wide variety of geometry problems, inscribe their efforts in delicately colored wooden tablets and hang the works under the roofs of religious buildings. These sangaku, a word that literally means mathematical tablet, may have been acts of homage--a thanks to a guiding spirit--or they may have been brazen challenges to other worshipers: Solve this one if you can! For the most part, sangaku deal with ordinary Euclidean geometry. But the problems are strikingly different from those found in a typical high school geometry course. Circles and ellipses play a far more prominent role than in Western problems: circles within ellipses, ellipses within circles. Some of the exercises are quite simple and could be solved by first-year students. Others are nearly impossible, and modern geometers invariably tackle them with advanced methods, including calculus and affine transformations.

https://www.cut-the-knot.org/pythagoras/Sangaku.shtml

The tablet was called a SANGAKU which means a mathematics tablet in Japanese. Many skilled geometers dedicated a SANGAKU in order to thank the god for the discovery of a theorem. The proof of the proposed theorem was rarely given. This was interpreted as a challenge to other geometers, "See if you can prove this."

http://www.wasan.jp/english/

More at http://www.wasan.jp/index.html

14
 
 

The icosian game is a mathematical game invented in 1856 by Irish mathematician William Rowan Hamilton. It involves finding a Hamiltonian cycle on a dodecahedron, a polygon using edges of the dodecahedron that passes through all its vertices. Hamilton's invention of the game came from his studies of symmetry, and from his invention of the icosian calculus, a mathematical system describing the symmetries of the dodecahedron.

Hamilton sold his work to a game manufacturing company, and it was marketed both in the UK and Europe, but it was too easy to solve to become commercially successful. Only a small number of copies of it are known to survive in museums. Although Hamilton was not the first to study Hamiltonian cycles, his work on this game became the origin of the name of Hamiltonian cycles. Several works of recreational mathematics studied his game. Other puzzles based on Hamiltonian cycles are sold as smartphone apps, and mathematicians continue to study combinatorial games based on Hamiltonian cycles.

Game play

A Hamiltonian cycle on a dodecahedron

Planar view of the same cycle

The game's object is to find a three-dimensional polygon made from the edges of a regular dodecahedron, passing exactly once through each vertex of the dodecahedron. A polygon visiting all vertices in this way is now called a Hamiltonian cycle. In a two-player version of the game, one player starts by choosing five consecutive vertices along the polygon, and the other player must complete the polygon.

Édouard Lucas describes the shape of any possible solution, in a way that can be remembered by game players. A completed polygon must cut the twelve faces of the dodecahedron into two strips of six pentagons. As this strip passes through each of its four middle pentagons, in turn, it connects through two edges of each pentagon that are not adjacent, making either a shallow left turn or a shallow right turn through the pentagon. In this way, the strip makes two left turns and then two right turns, or vice versa.

One version of the game took the form of a flat wooden board inscribed with a planar graph with the same combinatorial structure as the dodecahedron (a Schlegel diagram), with holes for numbered pegs to be placed at its vertices. The polygon found by game players was indicated by the consecutive numbering of the pegs. Another version was shaped as a "partially flattened dodecahedron", a roughly hemispherical dome with the pentagons of a dodecahedron spread on its curved surface and a handle attached to its flat base. The vertices had fixed pegs. A separate string, with a loop at one end, was wound through these pegs to indicate the polygon.

The game was too easy to play to achieve much popularity, although Hamilton tried to counter this impression by giving an example of an academic colleague who failed to solve it. David Darling suggests that Hamilton may have made it much more difficult for himself than for others, by using his theoretical methods to solve it instead of trial and error.

https://en.wikipedia.org/wiki/Icosian_game

Sir William Rowan Hamilton (4 August 1805 – 2 September 1865) was an Irish mathematician, physicist, and astronomer who made numerous major contributions to algebra, classical mechanics, and optics. His theoretical works and mathematical equations are considered fundamental to modern theoretical physics, particularly his reformulation of Lagrangian mechanics. His research included the analysis of geometrical optics, Fourier analysis, and quaternions, the last of which made him one of the founders of modern linear algebra.

https://en.wikipedia.org/wiki/William_Rowan_Hamilton

A graph having a Hamiltonian cycle, i.e., on which the Icosian game may be played, is said to be a Hamiltonian graph. While the skeletons of all the Platonic solids and Archimedean solids (i.e., the Platonic graphs and Archimedean graphs, respectively) are Hamiltonian, the same is not necessarily true for the skeletons of the Archimedean duals, as shown by Coxeter (1946) and Rosenthal (1946) for the rhombic dodecahedron (Gardner 1984, p. 98).

Wolfram (2022) analyzed the icosian game as a multicomputational process, including through the use of multiway and branchial graphs. In particular, the multiway graph for the icosian game begins as illustrated above.

https://mathworld.wolfram.com/IcosianGame.html

The Original Icosian Game

In 1857 Sir William Rowan Hamilton invented the Icosian game. In a world based on the dodecahedral graph, a traveler must visit 20 cities, without revisiting any of them. Today, when the trip makes a loop through all the vertices of the graph, it is called a Hamiltonian tour (or cycle). When the first and last vertices in a trip are not connected, it is called a Hamiltonian path (or trail). The first image shown is a tour; the second is a path.

Hamiltonian cycles gained popularity in 1880, when P. G. Tait made the conjecture: “Every cubic polyhedron has a Hamiltonian cycle through all its vertices”. Cubic means that three edges meet at every vertex. Without the cubic requirement, there are smaller polyhedra that are not Hamiltonian. The simplest counterexample is the rhombic dodecahedron. Every edge connects one of six valence-four vertices to one of eight valence-three vertices. The six valence-four vertices would need to occupy every other vertex in the length-14 tour. Six items cannot fill seven slots, so this is impossible.
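
One way to see a solution of the icosian game concretely: the dodecahedron's edge graph is the generalized Petersen graph GP(10, 2), so a small backtracking search (a sketch; the function names are mine) finds a Hamiltonian cycle in a fraction of a second:

```python
def dodecahedral_graph():
    """Adjacency of GP(10, 2): outer 10-cycle u0..u9 (vertices 0..9),
    inner vertices v0..v9 (10..19) with vi joined to v(i+2), plus
    spokes ui-vi. This graph is isomorphic to the dodecahedron's skeleton."""
    adj = {v: set() for v in range(20)}
    def add(a, b):
        adj[a].add(b); adj[b].add(a)
    for i in range(10):
        add(i, (i + 1) % 10)                 # outer cycle
        add(10 + i, 10 + (i + 2) % 10)       # inner steps of 2
        add(i, 10 + i)                       # spokes
    return adj

def hamiltonian_cycle(adj):
    """Depth-first backtracking search for a Hamiltonian cycle from vertex 0."""
    n = len(adj)
    path, visited = [0], {0}
    def extend():
        if len(path) == n:
            return path[0] in adj[path[-1]]  # must close back to the start
        for nxt in adj[path[-1]]:
            if nxt not in visited:
                path.append(nxt); visited.add(nxt)
                if extend():
                    return True
                visited.remove(path.pop())
        return False
    return path if extend() else None

cycle = hamiltonian_cycle(dodecahedral_graph())
print(len(cycle))  # 20: every vertex visited exactly once
```

The same search applied to the rhombic dodecahedron's graph would return nothing, for exactly the parity reason given above: its six valence-four vertices cannot alternate with eight valence-three vertices around a 14-cycle.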

Any noncubic graph can be made cubic by placing a small disk over the exceptions.

The word “polyhedral” implies that the graph must be 3-connected. If a line is drawn to disconnect the map, it must pass through at least three borders. Central Europe is not 3-connected, since a line through Spain will disconnect Portugal. France, the Vatican, and various islands also make the shape of Europe nonpolyhedral.

Tait’s method turns a Hamiltonian cycle on a cubic polyhedral graph into a four-coloring, by the following method.

  1. Alternately color the edges of the Hamiltonian cycle blue and purple. Color the other edges red.

  2. Throw out thin edges, and color the resulting polygon blue.

  3. Throw out dashed edges, and color the resulting polygon(s) red.

  4. Overlay the two colorings to get a four-coloring.

For 66 years, Tait’s conjecture held. In 1946, W. T. Tutte found the first counterexample, now known as Tutte’s graph. Since then, some smaller cubic polyhedral non-Hamiltonian graphs have been found, with the smallest such graph being the Barnette-Bosák-Lederberg graph, found in 1965. Seven years earlier, Lederberg had won the Nobel Prize in Medicine.

https://www.mathematica-journal.com/2010/02/05/the-icosian-game-revisited/#Dalgety

Tutte's fragment

The key to this counter-example is what is now known as Tutte's fragment [...].

If this fragment is part of a larger graph, then any Hamiltonian cycle through the graph must go in or out of the top vertex (and either one of the lower ones). It cannot go in one lower vertex and out the other.

The counterexample

The fragment can then be used to construct the non-Hamiltonian Tutte graph, by putting together three such fragments as shown in the picture.

The "compulsory" edges of the fragments, that must be part of any Hamiltonian path through the fragment, are connected at the central vertex; because any cycle can use only two of these three edges, there can be no Hamiltonian cycle.

The resulting Tutte graph is 3-connected and planar, so by Steinitz' theorem it is the graph of a polyhedron. In total it has 25 faces, 69 edges and 46 vertices. It can be realized geometrically from a tetrahedron (the faces of which correspond to the four large faces in the drawing, three of which are between pairs of fragments and the fourth of which forms the exterior) by multiply truncating three of its vertices.

https://en.wikipedia.org/wiki/Tait%27s_conjecture

15
 
 

In mathematics, a series is, roughly speaking, an addition of infinitely many terms, one after the other. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures in combinatorics through generating functions. The mathematical properties of infinite series make them widely applicable in other quantitative disciplines such as physics, computer science, statistics and finance.

Among the Ancient Greeks, the idea that a potentially infinite summation could produce a finite result was considered paradoxical, most famously in Zeno's paradoxes. Nonetheless, infinite series were applied practically by Ancient Greek mathematicians including Archimedes, for instance in the quadrature of the parabola. The mathematical side of Zeno's paradoxes was resolved using the concept of a limit during the 17th century, especially through the early calculus of Isaac Newton. The resolution was made more rigorous and further improved in the 19th century through the work of Carl Friedrich Gauss and Augustin-Louis Cauchy, among others, answering questions about which of these sums exist via the completeness of the real numbers and whether series terms can be rearranged or not without changing their sums using absolute convergence and conditional convergence of series.

Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π.

Mathematicians from the Kerala school were studying infinite series c. 1350 CE.

In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. Leonhard Euler in the 18th century, developed the theory of hypergeometric series and q-series.

https://en.wikipedia.org/wiki/Series_(mathematics)

Infinite series result from our wanting to know how the sum of a series behaves when the terms are infinitely many. We can write any series we see in its infinite series form, so it’s no surprise that infinite series appear in physics, biology, and engineering.

Infinite series represents the successive sum of a sequence of an infinite number of terms that are related to each other based on a given pattern or relation.

Isn’t it amazing how, through the advancement of mathematics, it is now possible for us to predict the sum of a series made of an endless number of terms?

What is an infinite series?

As our introduction says, infinite series represents the sum of the infinite number of terms formed by a sequence. Below are examples of infinite series:

1/2 + 1/4 + 1/6 + 1/8 + 1/10 +…

  • This is an example of an infinite series of harmonic type, where the denominator increases by 2 as the series progresses.

3+9+27+81+243+…

  • This is an example of an infinite geometric series, where each term is obtained by multiplying the previous term by 3.

These examples give us an idea of what makes up an infinite series, so let’s go ahead and formally define infinite series. In the next section, we’ll learn how we can express them in terms of sigma notation.

Infinite series definition

Let’s say we have a finite sequence consisting of the terms {𝑎~1~, 𝑎~2~, …, 𝑎~𝑛−1~, 𝑎~𝑛~}, so the sum of its finite series can be expressed as 𝑎~1~ + 𝑎~2~ + … + 𝑎~𝑛−1~ + 𝑎~𝑛~.

The only difference for an infinite series is that the terms extend beyond 𝑎~𝑛~, so the infinite series will be of the form 𝑎~1~ + 𝑎~2~ + 𝑎~3~ + …

or, in sigma notation, ∑~𝑛=1~^∞^ 𝑎~𝑛~.

How to find the sum of an infinite series?

At first, it may feel counter-intuitive to think that we can predict the sum of an infinite series. But thanks to limits and calculus, we’re able to create a systematic process to find the sum of a given infinite series.

But first, let’s take a look at this visual representation of an infinite geometric series.

This is a good example of how we can find the sum of infinite series. That’s because as we continue to add more terms (so take half of the previous area), we’ll see that when combined altogether, the total area of the shaded region will fill up almost the entire square’s region.

Any guess on the sum of the infinite series 1/2 + 1/4 + 1/8 + …, then? Visually, since the regions will eventually make up the entire square, the sum of the infinite series is 1.

But how do we confirm this mathematically? Before we dive right into the process of determining the sum of infinite series, let’s find out how to find the sum of a certain portion from a given infinite series.

How to find the partial sum of an infinite series?

The partial sum of an infinite series is simply the sum of a certain number of terms from the series. For example, the sum 1/2 + 1/4 + 1/8 is simply a part of the infinite series 1/2 + 1/4 + 1/8 + …

This means that the partial sum of the first three terms of the infinite series shown above is equal to 1/2 + 1/4 + 1/8 = 7/8.

How to find the infinite series’ sum based on its partial sum?

You might be wondering why we’re talking about partial sums when we’re supposedly dealing with the sums of infinite series. That’s because when we want to find the sum of an infinite series, we’ll need the expression of its partial sum.

Let’s say we have an infinite series 𝑆 = 𝑎~1~ + 𝑎~2~ + 𝑎~3~ + …, so its partial sum for the first 𝑛 terms will be 𝑆~𝑛~ = 𝑎~1~ + 𝑎~2~ + … + 𝑎~𝑛~.

  • If the partial sum, 𝑆~𝑛~, converges, the infinite series, 𝑆, converges as well; in fact, lim~𝑛→∞~ 𝑆~𝑛~ is, by definition, the sum of the infinite series.

  • If the partial sum, 𝑆~𝑛~, diverges, the infinite series, 𝑆, diverges as well, and it will not be possible for us to assign a sum to the series.

Why don’t we go ahead and observe the following geometric series and see what happens with their partial sum and infinite series’s sum?

Starting with the series 1/3 + 1/9 + 1/27 + …, we can see that the common ratio is 1/3, so the succeeding terms get smaller and approach 0.

The partial sum of the first 𝑛 terms of the series will be equal to 𝑆~𝑛~ = 𝑎(1 − 𝑟^𝑛^)/(1 − 𝑟), where 𝑎 = 1/3 and 𝑟 = 1/3.

Let’s take a look at the limit of 𝑆~𝑛~ as 𝑛 approaches infinity: since 𝑟^𝑛^ → 0, we get lim~𝑛→∞~ 𝑆~𝑛~ = 𝑎/(1 − 𝑟) = (1/3)/(1 − 1/3) = 1/2.

Since the partial sums converge to 1/2, the sum of the series is equal to 1/2.

What happens when the common ratio is greater than 1? Let’s see how the series 2 + 4 + 8 + 16 + … behaves to answer that question.

This time, we have 𝑟 = 2 and 𝑎 = 2.

Conceptually, we’re expecting the series to diverge, and that’s because as we add more terms, the partial sum drastically increases as well. We can confirm this guess by taking the limit of 𝑆~𝑛~ as it approaches infinity.

Since lim~𝑛→∞~ 𝑆~𝑛~ = ∞, the infinite series diverges and will not have a fixed value.

Notice how, when the terms increase throughout the infinite series, the series diverges? That’s a helpful observation and something we need to keep in mind.

An important condition for the infinite series ∑~𝑛=1~^∞^ 𝑎~𝑛~ to be convergent is that lim~𝑛→∞~ 𝑎~𝑛~ = 0: the terms have to become smaller as the series progresses. Note that this condition is necessary but not sufficient; the harmonic series 1 + 1/2 + 1/3 + … has terms tending to 0, yet it diverges.

https://www.storyofmathematics.com/infinite-series/

Yuktibhāṣā (Malayalam: യുക്തിഭാഷ, lit. 'Rationale'), [...] is a major treatise on mathematics and astronomy, written by the Indian astronomer Jyesthadeva of the Kerala school of mathematics around 1530. The treatise, written in Malayalam, is a consolidation of the discoveries by Madhava of Sangamagrama, Nilakantha Somayaji, Parameshvara, Jyeshtadeva, Achyuta Pisharati, and other astronomer-mathematicians of the Kerala school. It also exists in a Sanskrit version, with unclear author and date, composed as a rough translation of the Malayalam original.

Front and back cover of the Palm-leaf manuscripts of the Yuktibhasa, composed by Jyesthadeva in 1530

The work contains proofs and derivations of the theorems that it presents. Modern historians used to assert, based on the works of Indian mathematics that first became available, that early Indian scholars in astronomy and computation lacked in proofs, but Yuktibhāṣā demonstrates otherwise.

Some of its important topics include the infinite series expansions of functions; power series, including of π and π/4; trigonometric series of sine, cosine, and arctangent; Taylor series, including second and third order approximations of sine and cosine; radii, diameters and circumferences.

Yuktibhāṣā mainly gives the rationale for the results in Nilakantha's Tantra Samgraha. It is considered an early text to give some ideas related to calculus, such as Taylor series and infinite series of some trigonometric functions, predating Newton and Leibniz by two centuries. However, the Kerala school did not combine its many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the powerful problem-solving tool we have today. The treatise was largely unnoticed outside India, as it was written in the local language of Malayalam. In modern times, due to wider international cooperation in mathematics, the wider world has taken notice of the work. For example, both Oxford University and the Royal Society of Great Britain have given attribution to pioneering mathematical theorems of Indian origin that predate their Western counterparts.

Yuktibhāṣā contains most of the developments of the earlier Kerala school, particularly Madhava and Nilakantha. The text is divided into two parts – the former deals with mathematical analysis and the latter with astronomy. Beyond this, the continuous text does not have any further division into subjects or topics, so published editions divide the work into chapters based on editorial judgment.

Pages from the Yuktibhasa

The first four chapters contain elementary mathematics, such as division, the Pythagorean theorem, square roots, etc. Novel ideas are not discussed until the sixth chapter, on the circumference of a circle. Yuktibhāṣā contains a derivation and proof for the power series of the inverse tangent, discovered by Madhava. In the text, Jyesthadeva describes Madhava's series in the following manner:

The first term is the product of the given sine and radius of the desired arc divided by the cosine of the arc. The succeeding terms are obtained by a process of iteration when the first term is repeatedly multiplied by the square of the sine and divided by the square of the cosine. All the terms are then divided by the odd numbers 1, 3, 5, .... The arc is obtained by adding and subtracting respectively the terms of odd rank and those of even rank. It is laid down that the sine of the arc or that of its complement whichever is the smaller should be taken here as the given sine. Otherwise the terms obtained by this above iteration will not tend to the vanishing magnitude.
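
Jyesthadeva's verbal recipe translates almost directly into code. Below is a sketch (radius taken as 1; the names are mine) that builds the terms exactly as described (repeatedly multiplying by the square of the sine over the square of the cosine, dividing by the odd numbers, alternating signs) and recovers the arc, i.e. arctan(sine/cosine):

```python
import math

def madhava_arc(sine, cosine, terms=60):
    """Arc (radius 1) recovered from its sine and cosine, following the
    Yuktibhasa's iteration: each successive term is the previous one
    multiplied by sin^2/cos^2, the k-th term is divided by the odd
    number 2k + 1, and terms of odd and even rank are alternately
    added and subtracted. Requires sine < cosine (arc under 45 degrees),
    or the terms will not tend to a vanishing magnitude."""
    ratio = sine / cosine              # first term: sine * r / cosine, with r = 1
    term = ratio
    arc = term                         # divided by the first odd number, 1
    for k in range(1, terms):
        term *= ratio * ratio          # multiply by sin^2 / cos^2
        arc += (-1) ** k * term / (2 * k + 1)
    return arc

# The recovered arc equals the angle whose sine and cosine were given:
theta = 0.5
print(madhava_arc(math.sin(theta), math.cos(theta)))  # close to 0.5
```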

The text also contains Madhava's infinite series expansion of π which he obtained from the expansion of the arc-tangent function.

Using a rational approximation of this series, he gave values of the number π as 3.14159265359, correct to 11 decimals, and as 3.1415926535898, correct to 13 decimals.

The text describes two methods for computing the value of π. First, obtain a rapidly converging series by transforming the original infinite series of π. By doing so, the first 21 terms of the infinite series

https://en.wikipedia.org/wiki/Yuktibh%C4%81%E1%B9%A3%C4%81

Madhava (born 1350, died 1425) was a mathematician from South India. He made some important advances in infinite series, including finding the expansions for trigonometric functions.

All the mathematical writings of Madhava have been lost, although some of his texts on astronomy have survived. However his brilliant work in mathematics has been largely discovered by the reports of other Keralese mathematicians such as Nilakantha who lived about 100 years later.

Madhava discovered the series equivalent to the Maclaurin expansions of sin x, cos x, and arctan x around 1400, which is over two hundred years before they were rediscovered in Europe. Details appear in a number of works written by his followers, such as Mahajyanayana prakara, which means Method of computing the great sines. In fact this work had been claimed by some historians, such as Sarma, to be by Madhava himself, but this seems highly unlikely and it is now accepted by most historians to be a 16th-century work by a follower of Madhava.

https://mathshistory.st-andrews.ac.uk/Biographies/Madhava/

16
 
 

The number π is a mathematical constant, approximately equal to 3.14159, that is the ratio of a circle's circumference to its diameter. It appears in many formulae across mathematics and physics, and some of these formulae are commonly used for defining π, to avoid relying on the definition of the length of a curve.

The number π is an irrational number, meaning that it cannot be expressed exactly as a ratio of two integers, although fractions such as 22/7 are commonly used to approximate it. Consequently, its decimal representation never ends, nor enters a permanently repeating pattern. It is a transcendental number, meaning that it cannot be a solution of an algebraic equation involving only finite sums, products, powers, and integers. The transcendence of π implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge. The decimal digits of π appear to be randomly distributed, but no proof of this conjecture has been found.

For thousands of years, mathematicians have attempted to extend their understanding of π, sometimes by computing its value to a high degree of accuracy. Ancient civilizations, including the Egyptians and Babylonians, required fairly accurate approximations of π for practical computations. Around 250 BC, the Greek mathematician Archimedes created an algorithm to approximate π with arbitrary accuracy. In the 5th century AD, Chinese mathematicians approximated π to seven digits, while Indian mathematicians made a five-digit approximation, both using geometrical techniques. The first computational formula for π, based on infinite series, was discovered a millennium later. The earliest known use of the Greek letter π to represent the ratio of a circle's circumference to its diameter was by the Welsh mathematician William Jones in 1706. The invention of calculus soon led to the calculation of hundreds of digits of π, enough for all practical scientific computations. Nevertheless, in the 20th and 21st centuries, mathematicians and computer scientists have pursued new approaches that, when combined with increasing computational power, extended the decimal representation of π to many trillions of digits. These computations are motivated by the development of efficient algorithms to calculate numeric series, as well as the human quest to break records. The extensive computations involved have also been used to test the correctness of new computer processors.

Because it relates to a circle, π is found in many formulae in trigonometry and geometry, especially those concerning circles, ellipses and spheres. It is also found in formulae from other topics in science, such as cosmology, fractals, thermodynamics, mechanics, and electromagnetism. It also appears in areas having little to do with geometry, such as number theory and statistics, and in modern mathematical analysis can be defined without any reference to geometry. The ubiquity of π makes it one of the most widely known mathematical constants inside and outside of science. Several books devoted to π have been published, and record-setting calculations of the digits of π often result in news headlines.

Definition

The circumference of a circle is slightly more than three times as long as its diameter. The exact ratio is called π.

π is commonly defined as the ratio of a circle's circumference C to its diameter d:

π = C/d

The ratio C/d is constant, regardless of the circle's size. For example, if a circle has twice the diameter of another circle, it will also have twice the circumference, preserving the ratio C/d.

In modern mathematics, this definition is not fully satisfactory for several reasons. Firstly, it relies on the length of a curved line, whose rigorous definition requires at least the concept of a limit or, more generally, the concepts of derivatives and integrals. Also, diameters, circles and circumferences can be defined in non-Euclidean geometries, but in such a geometry the ratio C/d need not be constant, and need not equal π. Finally, π occurs in many branches of mathematics that are completely independent of geometry, and in modern mathematics the trend is to build geometry from algebra and analysis rather than independently of the other branches.

https://en.wikipedia.org/wiki/Pi

Archimedes’ Method of Approximating Pi

Since the true value of pi could not be measured directly, Archimedes developed a geometric technique using polygons to establish upper and lower bounds for its value. His method relied on inscribing and circumscribing regular polygons around a circle and calculating their perimeters. By progressively increasing the number of sides, he was able to narrow the range within which pi must lie. This approach was a precursor to the concept of limits, which later became a fundamental idea in calculus.

The Inscribed and Circumscribed Polygon Method

In his work Measurement of a Circle Archimedes considered a circle with diameter d and radius r. He inscribed a regular hexagon inside the circle and circumscribed another hexagon outside it. By calculating the perimeters of these polygons, he obtained lower and upper estimates for the circumference of the circle. Since the ratio of the circumference to the diameter is pi (C / d = pi), these perimeters provided bounds for pi.

He then systematically increased the number of sides of the polygons, doubling them from 6-sided to 12-sided, 24-sided, 48-sided, and finally 96-sided polygons. As the number of sides increased, the perimeters of the inscribed and circumscribed polygons became closer to the true circumference of the circle, refining the estimate of pi.

Using this method, Archimedes established the following inequality:

223/71 < pi < 22/7

This meant that pi was approximately 3.1408 < pi < 3.1429, a remarkably accurate estimate for the time.

Mathematical Process Behind Archimedes’ Approximation

To derive these values, Archimedes used the Pythagorean theorem and properties of similar triangles to calculate the side lengths of the polygons. By repeatedly applying trigonometric relationships (though without the formal notation used today), he determined the perimeters of each successive polygon. His method can be broken down as follows:

  1. For an inscribed n-sided polygon:
  • The perimeter P~i~ provides a lower bound for the circle’s circumference.

  • Formula: P~i~ = n * s~i~, where s~i~ is the side length.

  2. For a circumscribed n-sided polygon:
  • The perimeter P~c~ gives an upper bound for the circumference.

  • Formula: P~c~ = n * s~c~, where s~c~ is the side length.

  3. Refining the estimate:
  • Archimedes doubled the number of sides, recalculating the new perimeters iteratively.

  • The values of P~i~ and P~c~ converged toward the true circumference of the circle, P~i~ < C < P~c~.

By the time he reached a 96-sided polygon, his estimates were precise to two decimal places. This level of accuracy was unprecedented and remained the best approximation of pi for nearly 1,000 years.
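
The whole doubling procedure can be reproduced with two lines of arithmetic per step, using no value of π at all. Writing a for the semiperimeter of the circumscribed n-gon about a unit circle and b for that of the inscribed n-gon, each doubling is a harmonic mean followed by a geometric mean (a standard modern rendering of Archimedes' recurrence, not his original wording; variable names are mine):

```python
import math

# Start from hexagons (n = 6) around/inside a unit circle:
a = 2 * math.sqrt(3)   # circumscribed hexagon semiperimeter: 6 * tan(30 deg)
b = 3.0                # inscribed hexagon semiperimeter: 6 * sin(30 deg)
n = 6
while n < 96:
    a = 2 * a * b / (a + b)      # harmonic mean -> circumscribed 2n-gon
    b = math.sqrt(a * b)         # geometric mean -> inscribed 2n-gon
    n *= 2

print(b, a)   # ~3.14103 < pi < ~3.14271, matching 223/71 < pi < 22/7
```

Only the square root requires real work by hand, which is why Archimedes' manual computation was so laborious.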

Circle circumscribed and inscribed by a square where n=4.

The Limitations of Archimedes’ Approach

Archimedes' method had several inherent limitations. First, the computational intensity of his approach increased significantly as the number of sides in his polygons grew. Without the tools of modern algebra or trigonometry, he had to rely solely on geometric reasoning, making the process increasingly complex. Additionally, his method could only provide an approximation of pi rather than an exact value. Since pi is an irrational number that cannot be expressed as a finite fraction, Archimedes' approach was necessarily limited in its precision. Another challenge was the laborious nature of manual computation. Each successive step required extensive geometric derivations, making further refinements impractical beyond a certain point. Despite these limitations, Archimedes' work demonstrated a systematic method for refining numerical approximations and laid the foundation for future mathematical advancements.

Implications of Archimedes’ Work on Pi

Archimedes' method of approximating pi was groundbreaking, not only for its accuracy but also for its influence on the development of mathematical techniques. His approach established a systematic way of refining numerical approximations, which later became essential in calculus and numerical analysis. His work remained the most accurate estimate of pi for over a millennium and laid the foundation for future mathematicians to further refine the calculation of pi.

Archimedes’ method set the stage for many mathematicians across different cultures to refine and improve the approximation of pi. In the 3rd century CE, the Chinese mathematician Liu Hui built upon Archimedes' technique and extended it to a 3072-sided polygon, achieving a more precise approximation of pi at 3.14159. Two centuries later, Zu Chongzhi improved on this result, determining that pi was approximately 355/113 (3.1415929), an extraordinarily precise fraction that remained the most accurate estimate for over a thousand years.

In the Islamic Golden Age, mathematicians such as Al-Khwarizmi and Al-Kashi expanded on these ideas using decimal notation and further refinements of the polygonal method. The Renaissance period saw renewed interest in Archimedes' approach, with European scholars like Ludolph van Ceulen extending the method to polygons with millions of sides. This allowed for calculations of pi accurate to more than 30 decimal places. Despite these advancements, Archimedes’ geometric method remained the dominant approach for approximating pi until the development of calculus in the 17th century.

https://discover.hubpages.com/education/how-archimedes-calculated-pi-the-revolutionary-polygon-method-explained

Ludolph van Ceulen (8 January 1540 – 31 December 1610) was a German-Dutch mathematician from Hildesheim known for the Ludolphine number: his calculation of the mathematical constant pi to 35 digits.

Ludolph van Ceulen spent a major part of his life calculating the numerical value of the mathematical constant π, using essentially the same methods as those employed by Archimedes some seventeen hundred years earlier. He published a 20-decimal value in his 1596 book Van den Circkel ("On the Circle"), which was published before he moved to Leiden, and he later expanded this to 35 decimals.
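The side-doubling that van Ceulen carried out by hand can be sketched with Python's decimal module. Starting from an inscribed square and doubling 60 times reaches his 2^62^-gon; the half-angle recurrence s~2n~ = sqrt(2 − sqrt(4 − s~n~²)) is used below in a rationalized form to avoid catastrophic cancellation.

```python
from decimal import Decimal, getcontext

# A sketch of the 2^62-gon computation van Ceulen did by hand, replayed
# in 80-digit decimal arithmetic.
getcontext().prec = 80

s = Decimal(2).sqrt()   # side of a square inscribed in a unit circle (n = 4)
n = 4
for _ in range(60):     # 60 doublings: 4 * 2^60 = 2^62 sides
    # s_{2n} = sqrt(2 - sqrt(4 - s^2)), rationalized to avoid cancellation
    s = s / (2 + (4 - s * s).sqrt()).sqrt()
    n *= 2

pi_lower = n * s / 2    # half the perimeter of the inscribed 2^62-gon
print(str(pi_lower)[:37])   # 3.14159265358979323846264338327950288
```

The printed value reproduces the 35-decimal lower bound engraved on van Ceulen's tombstone.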

Van Ceulen's 20 digits are more than enough precision for any conceivable practical purpose. Even if a circle were perfect down to the atomic scale, the thermal vibrations of the molecules of ink would make most of those digits physically meaningless. Later attempts to calculate π to ever greater precision have been driven primarily by curiosity about the number itself.

https://en.wikipedia.org/wiki/Ludolph_van_Ceulen

The above image is the title page of Vanden Circkel, a book about the circle and π by Ludolph Van Ceulen (1540–1610). Published in 1596 in Dutch, it contains the longest decimal approximation of π at the time—20 decimal places. In fact, below the portrait of Van Ceulen, the engraving on the title page has a circle with diameter of 10^20^. Across the top semicircle is “314159265358979323846 te cort” (too short), and “314159265358979323847 te lanck” (too long) is along the bottom semicircle. Later, Van Ceulen would determine π to 35 decimal places. A modified Latin version of the work was published in 1619, images of which can also be found on Convergence here and here.

Part of what little is known of Van Ceulen’s life before 1578 comes from the Preface of Vanden Circkel. Starting in 1566, he earned a living as a mathematics teacher, and in 1580 he opened his first fencing school. A few years later Archimedes’ method of approximating π was translated from the Greek for him, and Van Ceulen proceeded to use the technique to improve on approximations of π, publishing Vanden Circkel in 1596. Below are images from folio 1 and folio 7.

Chapter 21 is devoted to analyzing a work of Joseph Justus Scaliger (1540–1609) called Cyclometrica Elementa (Elements of Circle Measurement), which had several incorrect results, including a “proof” that the area of a circle is equal to 6/5 of the area of an inscribed regular hexagon, which results in π=(9/5)√3 or approximately 3.117691454. Van Ceulen doesn’t mention Scaliger by name, but rather calls him a “highly learned man”. Below is Folio 63a.

https://old.maa.org/press/periodicals/convergence/mathematical-treasure-van-ceulen-s-vanden-circkel

Van Ceulen is famed for his calculation of π to 35 places, which he did using polygons with 2^62^ sides. He published 20 places of π in his book of 1596, but the more accurate results appeared only after his death. In 1615 his widow Adriana Simondochter published a posthumous work by Van Ceulen entitled De arithmetische en geometrische fondamenten. This contained his computation of 33 decimal places for π. The complete 35 decimal place approximation was only published in 1621 in Snell's Cyclometricus. Having spent most of his life computing this approximation, it is fitting that the 35 places of π were engraved on Van Ceulen's tombstone. In fact, Van Ceulen had purchased a grave in the Pieterskerk on 11 November 1602 but, after Van Ceulen's death on 31 December 1610, his widow Adriana exchanged this grave for another, still in the Pieterskerk, and it was in this second grave that Van Ceulen was buried on 2 January 1611. The tombstone gave both Van Ceulen's lower bound of 3.14159265358979323846264338327950288 and his upper bound of 3.14159265358979323846264338327950289. However, the original tombstone disappeared around 1800 to be replaced by a replica two hundred years later. The original text on the tombstone was known since it had been recorded in a guidebook of 1712 and after that reprinted in many articles. Vajta writes:

On July 5, 2000 a very special ceremony took place in the St Pieterskerk (St Peter's Church) at Leiden, the Netherlands. A replica of the original tombstone of Ludolph Van Ceulen was placed into the Church since the original disappeared. ... It was therefore a tribute to the memory of Ludolph Van Ceulen, when on Wednesday 5 July, 2000 prince Willem-Alexander (heir to the throne), unveiled the memorial tombstone in the St Peter's Church, in Leiden.

In Germany π was called the "Ludolphine number" for a long time.

https://mathshistory.st-andrews.ac.uk/Biographies/Van_Ceulen/

17
 
 

The Game of Life is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It was popularised by Martin Gardner in his October 1970 "Mathematical Games" column in the "Scientific American" magazine [6]. The article garnered more response than any of Gardner's previous articles in the magazine, including his famous article on hexaflexagons.

A notable property of the special rule set used by Conway's "Game of Life" is its Turing completeness. Turing completeness is the property that a programming language, a simulation, or a logical system is in principle capable of solving any computable problem. Programming the "Game of Life" is done with patterns, which then interact with each other in the simulation. LifeWiki has a large archive of such patterns for the Game of Life. A selection of those is implemented in the applet shown below.

What is a Cellular Automaton?

A cellular automaton is a discrete model that consists of a regular grid of cells, wherein each cell is in one of a finite number of states. The initial state of the cellular automaton is selected by assigning a state to each cell. The simulation then progresses in discrete time steps. The state of a cell at time step t depends only on the states of nearby cells at time step t-1 and a set of rules specific to the automaton.

Rules of the Game of Life

In the Game of Life each grid cell can be in one of two states: dead or alive. The Game of Life is controlled by four simple rules, which are applied to each grid cell in the simulation domain:

  • A live cell dies if it has fewer than two live neighbors.

  • A live cell with two or three live neighbors lives on to the next generation.

  • A live cell with more than three live neighbors dies.

  • A dead cell will be brought back to life if it has exactly three live neighbors.
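The four rules above can be sketched directly in code. This is a minimal example on a small fixed grid, with cells outside the grid treated as dead:

```python
# One generation of the Game of Life, directly encoding the four rules.
def step(grid):
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # Count live cells among the (up to) eight surrounding cells.
        return sum(
            grid[rr][cc]
            for rr in range(r - 1, r + 2)
            for cc in range(c - 1, c + 2)
            if (rr, cc) != (r, c) and 0 <= rr < rows and 0 <= cc < cols
        )

    return [
        [
            1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
                 or (not grid[r][c] and live_neighbors(r, c) == 3)
            else 0
            for c in range(cols)
        ]
        for r in range(rows)
    ]

# A "blinker": a row of three live cells oscillates with period 2.
blinker = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]
print(step(step(blinker)) == blinker)   # True
```

Two applications of step return the blinker to its starting position, illustrating a period-2 oscillator.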


Boundary Conditions

Cellular automata often use a toroidal topology of the simulation domain. This means that opposing edges of the grid are connected. The rightmost column is the neighbor of the leftmost column and the topmost row is the neighbor of the bottommost row and vice versa. This allows the unrestricted transfer of state information across the boundaries.

Opposing edges of the grid are connected to form a toroidal topology of the simulation domain

Cells beyond the grid boundary are always treated as if they were dead.

Another type of boundary condition treats nonexistent cells as if they all had the same state. In the Game of Life this means that nonexistent cells are treated as if they were dead (as opposed to the second state, "alive"). The advantage of this boundary condition in the Game of Life is that it prevents gliders from wrapping around the edges of the simulation domain, which in turn prevents the destruction of a glider gun by the gliders it produces (see the text below for details about what gliders are).
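The two boundary conditions differ only in how neighbor lookups handle the edges; a sketch:

```python
# Neighbor counting under the two boundary conditions described above.
def live_neighbors(grid, r, c, toroidal=True):
    rows, cols = len(grid), len(grid[0])
    total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            rr, cc = r + dr, c + dc
            if toroidal:
                total += grid[rr % rows][cc % cols]   # wrap around the edges
            elif 0 <= rr < rows and 0 <= cc < cols:
                total += grid[rr][cc]                 # outside cells count as dead
    return total

# A live cell in the top-left corner is a wrap-around neighbor of the
# bottom-right corner only on the torus.
g = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
print(live_neighbors(g, 2, 2, toroidal=True))    # 1
print(live_neighbors(g, 2, 2, toroidal=False))   # 0
```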

https://beltoforion.de/en/game_of_life/

John Horton Conway FRS (26 December 1937 – 11 April 2020) was an English mathematician. He was active in the theory of finite groups, knot theory, number theory, combinatorial game theory and coding theory. He also made contributions to many branches of recreational mathematics, most notably the invention of the cellular automaton called the Game of Life.

https://en.wikipedia.org/wiki/John_Horton_Conway

Origins

Conway was interested in a problem presented in the 1940s by renowned mathematician John von Neumann, who tried to find a hypothetical machine that could build copies of itself and succeeded when he found a mathematical model for such a machine with very complicated rules on a rectangular grid. The Game of Life emerged as Conway's successful attempt to simplify von Neumann's ideas.

The game made its first public appearance in the October 1970 issue of Scientific American, in Martin Gardner's "Mathematical Games" column, under the title of The fantastic combinations of John Conway's new solitaire game "life". From a theoretical point of view, it is interesting because it has the power of a universal Turing machine: that is, anything that can be computed algorithmically can be computed within Conway's Game of Life. Gardner wrote:

" The game made Conway instantly famous, but it also opened up a whole new field of mathematical research, the field of cellular automata ... Because of Life's analogies with the rise, fall and alterations of a society of living organisms, it belongs to a growing class of what are called 'simulation games' (games that resemble real life processes) "

https://conwaylife.com/wiki/Conway's%20Game%20of%20Life

The Game of Life (an example of a cellular automaton) is played on an infinite two-dimensional rectangular grid of cells. Each cell can be either alive or dead. The status of each cell changes each turn of the game (also called a generation) depending on the statuses of that cell's 8 neighbors. Neighbors of a cell are cells that touch that cell, either horizontal, vertical, or diagonal from that cell.

The initial pattern is the first generation. The second generation evolves from applying the rules simultaneously to every cell on the game board, i.e. births and deaths happen simultaneously. Afterwards, the rules are iteratively applied to create future generations. For each generation of the game, a cell's status in the next generation is determined by a set of rules. These simple rules are as follows:

  • If the cell is alive, then it stays alive if it has either 2 or 3 live neighbors

  • If the cell is dead, then it springs to life only in the case that it has 3 live neighbors

There are, of course, as many variations to these rules as there are different combinations of numbers to use for determining when cells live or die. Conway tried many of these different variants before settling on these specific rules. Some of these variations cause the populations to quickly die out, and others expand without limit to fill up the entire universe, or some large portion thereof. The rules above are very close to the boundary between these two regions of rules, and knowing what we know about other chaotic systems, you might expect to find the most complex and interesting patterns at this boundary, where the opposing forces of runaway expansion and death carefully balance each other. Conway carefully examined various rule combinations according to the following three criteria:

  • There should be no initial pattern for which there is a simple proof that the population can grow without limit.

  • There should be initial patterns that apparently do grow without limit.

  • There should be simple initial patterns that grow and change for a considerable period of time before coming to an end in the following possible ways:

    1. Fading away completely (from overcrowding or from becoming too sparse)

    2. Settling into a stable configuration that remains unchanged thereafter, or entering an oscillating phase in which they repeat an endless cycle of two or more periods.

Example Patterns

Using the provided game board(s) and the rules outlined above, the students can investigate the evolution of the simplest patterns. They should verify that any single living cell or any pair of living cells will die during the next iteration.

Some possible triomino patterns (and their evolution) to check:

Here are some tetromino patterns (NOTE: The students can do maybe one or two of these on the game board and the rest on the computer):

Some example still lifes:

Square

Boat

Loaf

Ship

The following pattern is called a "glider." The students should follow its evolution on the game board to see that the pattern repeats every 4 generations, but translated up and to the left one square. A glider will keep on moving forever across the plane.
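The glider's period-4 translation can be checked with a sketch that stores live cells as a set of coordinates, so the grid is effectively unbounded. (Which diagonal the glider travels along depends on the orientation of the starting pattern; the one below moves down and to the right with rows numbered downward.)

```python
from collections import Counter

# Life on an unbounded grid: live cells are a set of (row, col) pairs.
def step(live):
    # Count, for every cell adjacent to a live cell, its live neighbors.
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)

# After four generations the same five-cell shape reappears, shifted
# one cell diagonally.
print(after4 == {(r + 1, c + 1) for r, c in glider})   # True
```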

Another pattern similar to the glider is called the "lightweight space ship." It too slowly and steadily moves across the grid.

Early on (without the use of computers), Conway found that the F-pentomino (or R-pentomino) did not evolve into a stable pattern after a few iterations. In fact, it doesn't stabilize until generation 1103.

The F-pentomino stabilizes (meaning future iterations are easy to predict) after 1,103 iterations. The class of patterns which start off small but take a very long time to become periodic and predictable are called Methuselahs. The students should use the computer programs to view the evolution of this pattern and see how/where it becomes stable. The "acorn" is another example of a Methuselah that becomes predictable only after 5206 generations.

Alan Hensel compiled a fairly large list of other common patterns and names for them, available at radicaleye.com/lifepage/picgloss/picgloss.html.

Activity - Two-Player Game of Life

To call Conway's Game of Life a game is to stretch the meaning of the word "game", but there is a fun adaptation that can produce a competitive and strategic activity for multiple players.

The modification made is that now the live cells come in two colors (one associated with each player). When a new cell comes to life, the cell takes on the color of the majority of its neighbors. (Since there must be exactly three live neighbors for a cell to come to life, there cannot be a tie; there must be a majority.)
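The majority rule for newborn cells can be sketched as follows (the color names are hypothetical; with two players and three parents a tie is impossible, as noted above):

```python
from collections import Counter

# Color of a newborn cell: the majority color among its three live
# parent neighbors. With two colors and three parents, a majority
# always exists.
def newborn_color(parent_colors):
    assert len(parent_colors) == 3
    [(color, _count)] = Counter(parent_colors).most_common(1)
    return color

print(newborn_color(["red", "blue", "red"]))   # red
```

In the multiplayer variant mentioned below, three distinct colors can meet; that neutral-cell case would need separate handling.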

Players alternate turns. On a player's turn, he or she must kill one enemy cell and must change one empty cell to a cell of their own color. They are allowed to create a new cell at the location in which they killed an enemy cell.

After a player's turn, the Life cells go through one generation, and the play moves to the next player. There is always exactly one generation of evolution between separate players' actions.

The initial board configuration should be decided beforehand and be symmetric. A player is eliminated when they have no cells remaining of their color.

This variant of life can well be adapted to multiple players. However, with more than two players, it is possible that a newborn cell will have three neighbors belonging to three separate players. In that case, the newborn cell is neutral, and does not belong to anyone.

https://pi.math.cornell.edu/~lipa/mec/lesson6.html

Many patterns in the Game of Life eventually become a combination of still lifes, oscillators, and spaceships; other patterns may be called chaotic. A pattern may stay chaotic for a very long time until it eventually settles to such a combination.

The Game of Life is undecidable, which means that given an initial pattern and a later pattern, no algorithm exists that can tell whether the later pattern is ever going to appear. Given that the Game of Life is Turing-complete, this is a corollary of the halting problem: the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running or continue to run forever.

In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is provably impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is the canonical example: no general algorithm exists that solves it for all possible program–input pairs, which demonstrates that some functions are mathematically definable but not computable.

Conway's Game of Life: Mathematics and Construction by Nathaniel Johnson and Dave Greene provides a linear exposition of the questions, results, and techniques behind the game. It functions as a companion to the website where one can download the ebook.

The material requires no formal background and is appropriate for its target audience of early undergraduate students. A few topics such as counting, number theory, and algorithm analysis appear, but generally at an elementary level and the key concepts are briefly reviewed in the appendices. There are proofs, but they are generally careful deductions using few mathematical tools. It is a perfect topic to hand to a curious undergraduate mathematics or computer science student and let them go.

There are a lot of objects that need names and the vocabulary can be a bit overwhelming. Many of the names are descriptive – gliders, volcanoes, sparks – but there are also Snarks, Sir Robins, and David Hilberts to navigate. This is the reality of the subject and not a complaint about the book. There is no glossary, but the index is good and there are additional resources online.

One needs to be able to zoom in and out in real-time as the configurations evolve to see how small-scale changes impact large-scale behavior. The authors do a fine job of using color diagrams with consistent coloring and iconography – indeed, the book is visually impressive independent of the content – but there is no substitute for going to the website and noodling with it there. The ebook can be downloaded at the website (for free, with optional donation), allowing for the two to be used in parallel easily. There are also a few print-on-demand options.

The book is organized into three parts, each with four chapters. The first part, “Classical Topics,” covers the fundamental structures and their properties. “Circuitry and Logic” examines techniques for putting these structures together into circuits that exhibit more elaborate and precise behaviors. Finally, “Constructions” develops how these circuits can establish some more general properties of the Game of Life itself: universal computation, wherein we can simulate a universal computer, and universal construction, which establishes a sense in which the Game of Life can create and position its own components. Chapters close with notes and plenty of exercises. There are appendices with some mathematical preliminaries, technical details, and selected exercise solutions. Further material is available on the website, which has plenty of tools for simulating the game, finding specific results, and identifying new investigations that a newcomer could engage in almost immediately.

https://maa.org/book-reviews/conways-game-of-life-mathematics-and-construction/

18
15
submitted 1 month ago* (last edited 1 month ago) by xiao@sh.itjust.works to c/Math_history@sh.itjust.works
 
 

Perhaps one of the smartest and most compelling shorts around, ALTERNATIVE MATH, a nine-minute American piece directed by David Maddox, is a deeply layered and remarkably sophisticated piece of intelligent comedy.

Our heroine is a veteran grade school teacher trying to explain to her student that 2+2=4. The child, however, believes the answer is 22. So do his parents. How dare this teacher censor their child and restrict his learning. What kind of professional does this? The child’s parents are out for blood, and soon our heroine is trapped in a vicious media onslaught, with a school board demanding her resignation.

What makes this film so special is that it functions on so many layers. It works comically due to its wonderfully executed reductio ad absurdum, but just a little bit deeper we find an allegory for our modern world carrying a concerning warning. What happens when beliefs are taken to such a degree that basic knowledge is questioned? What happens to a population when the right to free speech becomes more important than the recognition of fact? There is a frightening undertone in ALTERNATIVE MATH that speaks to a greater and more terrible world lurking in a reality not too far away from our own.

Of course, this allegory is one that comes gift-wrapped, clearly and politely, in the bow of comedy, so an audience can unwrap it with glee, not fear. Perhaps this is one of the best reasons to see ALTERNATIVE MATH, a film with heart, humanity and humor, as well as deeper philosophical undertones. A family film to be enjoyed by teacher and student alike.

https://festivalreviews.org/2018/01/29/film-review-alternative-math-usa-comedy/

19
 
 

Maurits Cornelis Escher (17 June 1898 – 27 March 1972) was a Dutch graphic artist who made woodcuts, lithographs, and mezzotints, many of which were inspired by mathematics. Despite wide popular interest, for most of his life Escher was neglected in the art world, even in his native Netherlands. He was 70 before a retrospective exhibition was held. In the late twentieth century, he became more widely appreciated, and in the twenty-first century he has been celebrated in exhibitions around the world.

His work features mathematical objects and operations including impossible objects, explorations of infinity, reflection, symmetry, perspective, truncated and stellated polyhedra, hyperbolic geometry, and tessellations. Although Escher believed he had no mathematical ability, he interacted with the mathematicians George Pólya, Roger Penrose, and Donald Coxeter, and the crystallographer Friedrich Haag, and conducted his own research into tessellation.

https://en.m.wikipedia.org/wiki/M._C._Escher

Reptiles depicts a desk upon which is a two-dimensional drawing of a tessellated pattern of reptiles and hexagons, Escher's 1939 Regular Division of the Plane. The reptiles at one edge of the drawing emerge into three-dimensional reality, come to life and appear to crawl over a series of symbolic objects (a book on nature, a geometer's triangle, a dodecahedron, a pewter bowl containing a box of matches and a box of cigarettes) to eventually re-enter the drawing at its opposite edge. Other objects on the desk are a potted cactus and yucca, a ceramic flask with a cork stopper next to a small glass of liquid, a book of JOB cigarette rolling papers, and an open handwritten note book of many pages. Although only the size of small lizards, the reptiles have protruding crocodile-like fangs, and the one atop the dodecahedron has a dragon-like puff of smoke billowing from its nostrils.

Once a woman telephoned Escher and told him that she thought the image was a "striking illustration of reincarnation".

The critic Steven Poole commented that one of Escher's "enduring fascinations" was "the contrast between the two-dimensional flatness of a sheet of paper and the illusion of three-dimensional volume that can be created with certain marks" when space and flatness exist side by side and are "each born from and returning to the other, the black magic of the artistic illusion made creepily manifest."

https://en.m.wikipedia.org/wiki/Reptiles_(M._C._Escher)

On 19 August 1960 he gave a lecture in Cambridge, during which he said of this print:

'On the page of an opened sketchbook a mosaic of reptiles can be seen, drawn in three colours. Now let them prove themselves to be living creatures. One of them extends his paw out over the edge of the sketchbook, frees himself fully and starts on his path of life. First he climbs onto a book, walks further up across a smooth triangle and finally reaches the summit on the horizontal plane of a dodecahedron. He has a breather, tired but satisfied, and he moves down again. Back to the surface, the ‘flat lands’, in which he resumes his position as a symmetrical figure. I was later told that this story perfectly sums up the theory of reincarnation.'

The reference to reincarnation must have brought a smile to his face, as he always laughed about other people’s interpretations. He also listened in amusement when people stated that the word ‘Job’ on the packet in the bottom left was a reference to the Book of Job in the Bible. Nothing was further from the truth. Escher had lived in Belgium for several years and Job was a popular brand of cigarette paper there.

Because he could not print a lithograph himself, he stayed at his printer Dieperink in Amsterdam for a few days. To his friend Bas Kist he wrote that he had to do ‘a lot of tinkering’ on the stone ‘before a definitive set of copies’ could be produced.

Escher himself called what the reptiles are freeing themselves from ‘a sketchbook’, but it is of course one of his own design sketchbooks. In 1939 he created Regular division drawing nr 25, featuring these reptiles. What is remarkable and interesting about this periodic drawing is the presence of three different rotation points, where three heads meet and where three ‘knees’ meet. If you copy the figure onto transparent paper and put a pin through both pieces of paper at one of these rotation points, you can turn the transparent one 120 degrees and the figures will cover the ones below completely.

https://escherinhetpaleis.nl/en/about-escher/escher-today/reptiles-in-wartime?lang=en

The Mathematical Side of M. C. Escher

While the mathematical side of Dutch graphic artist M. C. Escher (1898– 1972) is often acknowledged, few of his admirers are aware of the mathematical depth of his work. Probably not since the Renaissance has an artist engaged in mathematics to the extent that Escher did, with the sole purpose of understanding mathematical ideas in order to employ them in his art. Escher consulted mathematical publications and interacted with mathematicians. He used mathematics (especially geometry) in creating many of his drawings and prints. Several of his prints celebrate mathematical forms. Many prints provide visual metaphors for abstract mathematical concepts; in particular, Escher was obsessed with the depiction of infinity. His work has sparked investigations by scientists and mathematicians. But most surprising of all, for several years Escher carried out his own mathematical research, some of which anticipated later discoveries by mathematicians. And yet with all this, Escher steadfastly denied any ability to understand or do mathematics. His son George explains:

Father had difficulty comprehending that the working of his mind was akin to that of a mathematician. He greatly enjoyed the interest in his work by mathematicians and scientists, who readily understood him as he spoke, in his pictures, a common language. Unfortunately, the specialized language of mathematics hid from him the fact that mathematicians were struggling with the same concepts as he was. Scientists, mathematicians and M. C. Escher approach some of their work in similar fashion. They select by intuition and experience a likely-looking set of rules which defines permissible events inside an abstract world. Then they proceed to explore in detail the consequences of applying these rules. If well chosen, the rules lead to exciting discoveries, theoretical developments and much rewarding work. [18, p.4]

In Escher’s mind, mathematics was what he encountered in schoolwork—symbols, formulas, and textbook problems to solve using prescribed techniques. It didn’t occur to him that formulating his own questions and trying to answer them in his own way was doing mathematics.

https://www.ams.org/journals/notices/201006/rtx100600706p.pdf

by Matthew Everett and Jeffrey Mancuso

Rendering competition in Pat Hanrahan's CS 348b class: Image Synthesis Techniques in the Spring quarter of 2001.

https://graphics.stanford.edu/courses/cs348b-competition/cs348b-01/escher/

20
4
Beads, Not Bytes (www.mathematik.uni-marburg.de)
 
 

An abacus (pl. abaci or abacuses), also called a counting frame, is a hand-operated calculating tool which was used from ancient times in the ancient Near East, Europe, China, and Russia, until it was largely replaced by handheld electronic calculators during the 1980s, with some ongoing attempts to revive its use. An abacus consists of a two-dimensional array of slidable beads (or similar objects). In their earliest designs, the beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation.

Bi-quinary coded decimal-like abacus representing 1,352,964,708

Any particular abacus design supports multiple methods to perform calculations, including addition, subtraction, multiplication, division, and square and cube roots. The beads are first arranged to represent a number, then are manipulated to perform a mathematical operation with another number, and their final position can be read as the result (or can be used as the starting number for subsequent operations).
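The bi-quinary layout shown in the caption above can be sketched as a digit encoding. This is a sketch of the soroban-style convention, in which each column holds one "heaven" bead worth five and four "earth" beads worth one:

```python
# Each decimal digit d is encoded as (d // 5) heaven beads and (d % 5)
# earth beads, one column per digit (a soroban-style convention).
def to_biquinary(n):
    return [(int(d) // 5, int(d) % 5) for d in str(n)]

# The number from the caption, column by column:
for digit, (heaven, earth) in zip(str(1352964708), to_biquinary(1352964708)):
    print(f"digit {digit}: heaven={heaven}, earth={earth}")
```

Reading the bead positions back (5 * heaven + earth per column) recovers the original number, which mirrors how a result is read off the frame after a calculation.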

The abacus was a practical calculating tool in the ancient world and was widely used in Europe as late as the 17th century, but it fell out of use with the rise of decimal notation and algorismic methods. Although calculators and computers are commonly used today instead of abacuses, abacuses remain in everyday use in some countries. The abacus has the advantage of not requiring a writing implement and paper (needed for algorism) or an electric power source. Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring system in non-electronic table games. Others may use an abacus due to visual impairment that prevents the use of a calculator. The abacus is still used to teach the fundamentals of mathematics to children in many countries, such as Japan and China.

History

Mesopotamia

The Sumerian abacus appeared between 2700 and 2300 BC. It held a table of successive columns which delimited the successive orders of magnitude of their sexagesimal (base 60) number system.

Some scholars point to a character in Babylonian cuneiform that may have been derived from a representation of the abacus. Scholars of the Old Babylonian period, such as Ettore Carruccio, believe that the Old Babylonians "seem to have used the abacus for the operations of addition and subtraction; however, this primitive device proved difficult to use for more complex calculations".

Egypt

Greek historian Herodotus mentioned the abacus in Ancient Egypt. He wrote that the Egyptians manipulated the pebbles from right to left, opposite in direction to the Greek left-to-right method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters. However, there are no known illustrations of this device.

Persia

Around 600 BC, during the Achaemenid Empire, the Persians first began to use the abacus. Under the Parthian, Sassanian, and Iranian empires, scholars concentrated on exchanging knowledge and inventions with the countries around them – India, China, and the Roman Empire – which is how the abacus may have been exported to other countries.

Greece

The earliest archaeological evidence for the use of the Greek abacus dates to the 5th century BC. Demosthenes (384–322 BC) complained that the need to use pebbles for calculations was too difficult. A play by Alexis from the 4th century BC mentions an abacus and pebbles for accounting, and both Diogenes and Polybius use the abacus as a metaphor for human behavior, stating "that men that sometimes stood for more and sometimes for less" like the pebbles on an abacus. The Greek abacus was a table of wood or marble, pre-set with small counters in wood or metal for mathematical calculations. This Greek abacus was used in Achaemenid Persia, the Etruscan civilization, Ancient Rome, and the Western Christian world until the French Revolution.

The Salamis Tablet, found on the Greek island Salamis in 1846 AD, dates to 300 BC, making it the oldest counting board discovered so far. [...].

Rome

The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles (Latin: calculi) were used. Marked lines indicated units, fives, tens, etc. as in the Roman numeral system.

Writing in the 1st century BC, Horace refers to the wax abacus, a board covered with a thin layer of black wax on which columns and figures were inscribed using a stylus.

Medieval Europe

The Roman system of 'counter casting' was used widely in medieval Europe, and persisted in limited use into the nineteenth century. Wealthy abacists used decorative minted counters, called jetons.

Due to Pope Sylvester II's reintroduction of the abacus with modifications, it became widely used in Europe again during the 11th century. It used beads on wires, unlike the traditional Roman counting boards, which meant the abacus could be used much faster and was more easily moved.

China

The earliest known written documentation of the Chinese abacus dates to the 2nd century BC.

The prototype of the Chinese abacus appeared during the Han dynasty, with oval beads. The Song dynasty and earlier used the 1:4 type, a four-bead abacus similar to the modern abacus in the shape of its beads, commonly known as the Japanese-style abacus.

In the early Ming dynasty, the abacus began to appear in a 1:5 ratio. The upper deck had one bead and the bottom had five beads. In the late Ming dynasty, the abacus styles appeared in a 2:5 ratio. The upper deck had two beads, and the bottom had five.

Various calculation techniques were devised for the suanpan, enabling efficient calculations. Some schools teach students how to use it.

The similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, given evidence of a trade relationship between the Roman Empire and China.

India

The Abhidharmakośabhāṣya of Vasubandhu (316–396), a Sanskrit work on Buddhist philosophy, says that the second-century CE philosopher Vasumitra said that "placing a wick (Sanskrit vartikā) on the number one (ekāṅka) means it is a one while placing the wick on the number hundred means it is called a hundred, and on the number one thousand means it is a thousand". It is unclear exactly what this arrangement may have been. Around the 5th century, Indian clerks were already finding new ways of recording the contents of the abacus. Hindu texts used the term śūnya (zero) to indicate the empty column on the abacus.

Japan

In Japan, the abacus is called soroban (lit. "counting tray"). It was imported from China in the 14th century. It was probably in use by the working class a century or more before the ruling class adopted it, as the class structure obstructed such changes. The 1:4 abacus, which removes the seldom-used second and fifth bead, became popular in the 1940s.

The four-bead abacus spread, and became common around the world. Improvements to the Japanese abacus arose in various places. In China, an abacus with an aluminium frame and plastic beads has been used. The file is next to the four beads, and pressing the "clearing" button puts the upper bead in the upper position, and the lower bead in the lower position.

The abacus is still manufactured in Japan, despite the proliferation, practicality, and affordability of pocket electronic calculators. The use of the soroban is still taught in Japanese primary schools as part of mathematics, primarily as an aid to faster mental calculation. Using visual imagery, one can complete a calculation as quickly as with a physical instrument.

Korea

The Chinese abacus migrated from China to Korea around 1400 AD. Koreans call it jupan (주판), supan (수판) or jusan (주산). The four-beads abacus (1:4) was introduced during the Goryeo Dynasty. The 5:1 abacus was introduced to Korea from China during the Ming Dynasty.

Native America

Representation of an Inca quipu

A yupana as used by the Incas

Some sources mention the use of an abacus called a nepohualtzintzin in ancient Aztec culture. This Mesoamerican abacus used a 5-digit base-20 system. The word nepōhualtzintzin comes from Nahuatl, formed from the roots ne (personal), pōhual or pōhualli (the account), and tzintzin (small similar elements); its complete meaning was taken as "counting with small similar elements". Its use was taught in the Calmecac to the temalpouhqueh, students dedicated from childhood to taking the accounts of the skies.

The device featured 13 rows with 7 beads, 91 in total. This was a basic number for this culture, with a close relation to natural phenomena, the underworld, and the cycles of the heavens. One Nepōhualtzintzin (91) represented the number of days in a season of the year; two (182), the number of days of the corn's cycle, from sowing to harvest; three (273), the number of days of a baby's gestation; and four (364) completed a cycle and approximated one year. Translated into modern computer arithmetic, the Nepōhualtzintzin spanned a range equivalent to about 10 to the 18th power in floating point, which precisely calculated large and small amounts, although rounding off was not allowed.

The rediscovery of the Nepōhualtzintzin was due to the Mexican engineer David Esparza Hidalgo, who in his travels throughout Mexico found diverse engravings and paintings of this instrument and reconstructed several of them in gold, jade, encrustations of shell, etc. Very old Nepōhualtzintzin are attributed to the Olmec culture, and some bracelets of Mayan origin, as well as a diversity of forms and materials in other cultures.

Sanchez wrote in Arithmetic in Maya that another base-5, base-4 abacus had been found in the Yucatán Peninsula that also computed calendar data. This was a finger abacus: on one hand 0, 1, 2, 3, and 4 were used, and on the other hand 0, 1, 2, and 3 were used. Note the use of zero at the beginning and end of the two cycles.

The quipu of the Incas was a system of colored knotted cords used to record numerical data, like advanced tally sticks, but it was not used to perform calculations. Calculations were carried out using a yupana (Quechua for "counting tool"; see figure), which was still in use after the conquest of Peru. The working principle of the yupana is unknown, but in 2001 the Italian mathematician De Pasquale proposed an explanation. By comparing the forms of several yupanas, researchers found that calculations were based on the Fibonacci sequence 1, 1, 2, 3, 5 and on powers of 10, 20, and 40 as place values for the different fields of the instrument. Using the Fibonacci sequence would keep the number of grains within any one field at a minimum.

Russia

The Russian abacus, the schoty (counting), usually has a single slanted deck, with ten beads on each wire (except one wire with four beads, for quarter-ruble fractions). This four-bead wire was introduced for quarter-kopeks, which were minted until 1916. The Russian abacus is used vertically, with each wire running horizontally. [...]

The Russian abacus was in use in shops and markets throughout the former Soviet Union, and its use was taught in most schools until the 1990s. Even the 1874 invention of the Odhner arithmometer, a mechanical calculator, did not replace it in Russia. According to Yakov Perelman, some businessmen attempting to import calculators into the Russian Empire were known to leave in despair after watching a skilled abacus operator. Likewise, the mass production of Felix arithmometers from 1924 did not significantly reduce abacus use in the Soviet Union. The Russian abacus began to lose popularity only after the mass production of domestic microcalculators in 1974.

The Russian abacus was brought to France around 1820 by mathematician Jean-Victor Poncelet, who had served in Napoleon's army and had been a prisoner of war in Russia. To Poncelet's French contemporaries, it was something new. Poncelet used it, not for any applied purpose, but as a teaching and demonstration aid. The Turks and the Armenian people used abacuses similar to the Russian schoty. It was named a coulba by the Turks and a choreb by the Armenians.

Neurological analysis

Learning how to calculate with the abacus may improve capacity for mental calculation. Abacus-based mental calculation (AMC), which was derived from the abacus, is the act of performing calculations, including addition, subtraction, multiplication, and division, in the mind by manipulating an imagined abacus. It is a high-level cognitive skill that runs calculations with an effective algorithm. People doing long-term AMC training show higher numerical memory capacity and experience more effectively connected neural pathways. They are able to retrieve memory to deal with complex processes. AMC involves both visuospatial and visuomotor processing that generate the visual abacus and move the imaginary beads. Since it only requires that the final position of beads be remembered, it takes less memory and less computation time.

https://en.m.wikipedia.org/wiki/Abacus

The abacus, which represents numbers via a visuospatial format, is a traditional device to facilitate arithmetic operations. Skilled abacus users, who have acquired the ability of abacus-based mental calculation (AMC), can perform fast and accurate calculations by manipulating an imaginary abacus in mind. Due to this extraordinary calculation ability in AMC users, there is an expanding literature investigating the effects of AMC training on cognition and brain systems. This review study aims to provide an updated overview of important findings in this fast-growing research field. Here, findings from previous behavioral and neuroimaging studies about AMC experts as well as children and adults receiving AMC training are reviewed and discussed. Taken together, our review of the existing literature suggests that AMC training has the potential to enhance various cognitive skills including mathematics, working memory and numerical magnitude processing. In addition, the training can result in functional and anatomical neural changes that are largely located within the frontal-parietal and occipital-temporal brain regions. Some of the neural changes can explain the training-induced cognitive enhancements. Still, caution is needed when extending the conclusions to a more general situation. Implications for future research are provided.

https://pmc.ncbi.nlm.nih.gov/articles/PMC7492585/

Presentation of methods for building a mathematical universe in children that gives meaning to addition and subtraction by rooting them in basic concepts of geometry and logic.

Introduction

Numbers are fascinating, and mathematics is often identified with calculation. Strategies for performing calculations have been refined over time. First came abacuses, devices with several rows of movable pieces used for arithmetic calculations. Then came tables of values, which slowly evolved into graph tables or nomograms, i.e., a network of lines or points giving a result by simple reading or by a basic manipulation process. This expertise expanded considerably until the mid-twentieth century in the fields of physics, finance, and architecture, and the epistemological study of the underlying processes gave this discipline the name nomography.

These empirical mechanical or graphical tools, based on clever mathematical processes, required hours of intensive practice during which constant verification of units and consistency of results was essential. They quickly fell into disuse in the 1980s with the rise of computers and the development of digitization and its methods of analysis. Teachers, freed from the responsibility of teaching calculation, were thus able to focus their attention on developing other approaches and concepts.

However, due to the tragic principle of communicating vessels in human intelligence, the downstream expansion of the field of possibilities offered to science has had the effect, upstream, of disrupting the level of calculation among students. The gradual disappearance of certain crafts or their evolution, which goes hand in hand with that of familiar elements of the mechanistic era that promote learning through manipulation and observation (pendulum, balance, etc.), may also have contributed to this decline. It is therefore essential to clearly distinguish between the value of digital methods in engineering and the value of mastering basic arithmetic, which is acquired in the early years.

This presentation, divided into three parts, aims to refocus children's attention on a few fundamental objects, whose mathematical interest and richness they will discover through long-term observation and manipulation. You will encounter abacuses and nomograms, unusual and aesthetic objects that arouse curiosity and make you want to handle or examine them. The educational benefits include reconciling calculation and geometric vision in order to develop children's mathematical intuition empirically from an early age. This article also proposes a vertical reflection on the elementary operations induced by these objects, i.e., analyzing the angle of approach to elementary operations that these objects offer and their ability to accompany children from a naive representation to a more abstract model. In this first part, we will present processes that enable children to construct a mathematical universe that gives meaning to addition and subtraction by rooting them in elementary concepts of geometry and logic.

Translation of the introduction to the articles written by Ivan Riou

Des abaques pour reprendre le contrôle des opérations I

Des abaques pour reprendre le contrôle des opérations II

How to Use an Abacus

Counting

Adding and Subtracting

Multiplying

Dividing

https://www.wikihow.com/Use-an-Abacus

21
 
 

If r1 and r2 are the principal radii of curvature of a surface at a given point, the following third case can be distinguished for the Gaussian measure of the curvature:

(1/r1) · (1/r2) < 0 The circles of curvature lie on opposite sides of the tangent plane.

Designer:

Based on the originals made at the Großherzoglich technische Hochschule in Karlsruhe under the direction of Privy Councillor Professor Dr. Chr. Wiener, designed by the engineer C. Tesch, former assistant in descriptive geometry at the technische Hochschule in Karlsruhe.

Design date: 1894

Manufacturer / publisher: [Martin Schilling]

Date of manufacture: [First quarter of the 20th century]

Place of manufacture: [Germany]

Dimensions & materials:

Height: 22.5 cm; Width: 13 cm; Depth: 13 cm

Cardboard

22
4
submitted 2 months ago* (last edited 2 months ago) by xiao@sh.itjust.works to c/Math_history@sh.itjust.works
 
 

Logarithms represented at this time in so many ways both what was old and what was new. This relation looked back to reflect concerns of computation, but looked forward to nascent notions about mathematical functions. Although logarithms were primarily a tool for facilitating computation, they were but another of the crucial insights that directed the attention of mathematical scholars towards more abstract organizing notions. One thing is very clear, though: the concept of logarithm as we understand it today as a function is quite different in many respects from how it was originally conceived. Eventually, through the work, consideration, and development of many mathematicians, the logarithm became far more than a useful way to compute with large unwieldy numbers. It became a mathematical relation and function in its own right.

In time, the logarithm evolved from a labor-saving device into one of the core functions in mathematics. Today, it has been extended to negative and complex numbers and it is vital in many modern branches of mathematics. It has an important role in group theory and is key to calculus, with its straightforward derivatives and its appearance in the solutions to various integrals. Logarithms form the basis of the Richter scale and the measure of pH, and they characterize the music intervals in the octave, to name but a few applications. Ironically, the logarithm still serves as a labor-saving device of sorts, but not for the benefit of human effort! It is often used by computers to approximate certain operations that would be too costly, in terms of computer power, to evaluate directly, particularly those of the form x^n^.
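A minimal sketch of that trick, computing x^n^ via exp(n · ln x) with the standard library, the same identity that once made log tables a labor-saving device:

```python
import math

# Compute a power through logarithms: x**n == exp(n * ln(x)) for x > 0.
x, n = 7.3, 2.5
via_logs = math.exp(n * math.log(x))
direct = x ** n
print(abs(via_logs - direct) < 1e-9)  # True
```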

https://old.maa.org/press/periodicals/convergence/logarithms-the-early-history-of-a-familiar-function-conclusion

Possibly the first approach to the subject of logarithms, also touching on trigonometric functions, was described by the Scottish mathematician John Napier (1550–1617) in his 1614 work Mirifici logarithmorum canonis descriptio. However, the value e, now known as Euler's number, was a later contribution by Jacob Bernoulli (1655–1705). In a short period of time, these contributions started being widely adopted as a means to facilitate numerical calculations, especially of products, with the help of logarithmic tables. Interestingly, the mechanical device developed by Napier, known as Napier's bones, constitutes a resource for calculating products and quotients that is not based on the concept of logarithms. After preliminary related developments by the English mathematician Roger Cotes (1682–1716), the important result now widely known as Euler's formula was described by Leonhard Euler (1707–1783) in 1748 in his two-volume work Introductio in analysin infinitorum. The concepts of logarithm and exponential functions, in particular, contributed substantially to establishing relationships with the concept and calculation of powers and roots, including for complex values, especially thanks to developments by Augustin-Louis Cauchy (1789–1857) in his Cours d'analyse (1821). The Fourier series was developed mainly by Jean-Baptiste Joseph Fourier (1768–1830) as a means to solve the heat (diffusion) equation on a metal plate, which he described in his reference work Mémoire sur la propagation de la chaleur dans les corps solides (1807). The development of matrix algebra was to a great extent pioneered by the British mathematician Arthur Cayley (1821–1895), who also employed matrices as resources for addressing linear systems of equations. Cayley's focus on pure mathematics also included important contributions to analytic geometry, group theory, and graph theory.
One of the first systematic approaches to the application of matrices to dynamics and differential equations was developed in the book Elementary Matrices and Some Applications to Dynamics and Differential Equations, whose first 155 pages present a treatise on matrices, including infinite series of matrices and differential operators. The remainder of the book describes the solution of differential equations using matrices, as well as applications to the dynamics of airplanes.

https://hal.science/hal-03845390v2/document

Overview of the exponential function

The exponential function is one of the most important functions in mathematics (though it would have to admit that the linear function ranks even higher in importance). To form an exponential function, we let the independent variable be the exponent. A simple example is the function f(x)=2^x^.

As illustrated in the above graph of f, the exponential function increases rapidly. Exponential functions are solutions to the simplest types of dynamical systems. For example, an exponential function arises in simple models of bacterial growth.

An exponential function can describe growth or decay. The function g(x)=(1/2)^x^ is an example of exponential decay. It gets rapidly smaller as x increases, as illustrated by its graph.

In the exponential growth of f(x), the function doubles every time you add one to its input x. In the exponential decay of g(x), the function shrinks in half every time you add one to its input x. The presence of this doubling time or half-life is characteristic of exponential functions, indicating how fast they grow or decay.
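A minimal numerical check of the doubling-time and half-life property, with f(x) = 2^x^ and g(x) = (1/2)^x^ as in the text:

```python
# f doubles and g halves with every unit step in x.
f = lambda x: 2 ** x
g = lambda x: 0.5 ** x

for x in [0, 1.5, 3]:
    assert abs(f(x + 1) - 2 * f(x)) < 1e-9    # doubling per unit step
    assert abs(g(x + 1) - 0.5 * g(x)) < 1e-9  # halving per unit step
print("verified")
```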

Parameters of the exponential function

As with any function, the action of an exponential function f(x) can be captured by the function machine metaphor that takes inputs x and transforms them into the outputs f(x).

The function machine metaphor is useful for introducing parameters into a function. The above exponential functions f(x) and g(x) are two different functions, but they differ only by the change in the base of the exponentiation from 2 to 1/2. We could capture both functions using a single function machine with dials to represent the parameters influencing how the machine works.

We could represent the base of the exponentiation by a parameter b. Then, we could write f as a function with a single parameter (a function machine with a single dial): f(x)=b^x^.

When b=2, we have our original exponential growth function f(x), and when b=1/2, this same f turns into our original exponential decay function g(x). We could think of a function with a parameter as representing a whole family of functions, with one function for each value of the parameter.

We can also change the exponential function by including a constant in the exponent. For example, the function h(x)=2^3x^ is also an exponential function. It just grows faster than f(x)=2^x^ since h(x) doubles every time you add only 1/3 to its input x. We can introduce this constant as a second parameter k in the definition of the exponential function, giving us two dials to play with, and write our exponential function f as f(x)=b^kx^.

It turns out that adding both parameters b and k to our definition of f is really unnecessary. We can still get the full range of functions if we eliminate either b or k. [...]. For example, you can see that the function f(x)=3^2x^ (k=2, b=3) is exactly the same as the function f(x)=9^x^ (k=1, b=9). In fact, for any change you make to k, you can make a compensating change in b to keep the function the same. [...].
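A quick numerical check of the redundancy of b and k, using the text's example that f(x) = 3^2x^ is the same function as f(x) = 9^x^ (since b^kx^ = (b^k^)^x^):

```python
# 3**(2x) and 9**x agree everywhere: the k=2, b=3 function
# equals the k=1, b=9 function.
for x in [-1.0, 0.0, 0.5, 2.0]:
    assert abs(3 ** (2 * x) - 9 ** x) < 1e-9
print("same function")
```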

Since it is silly to have both parameters b and k, we will typically eliminate one of them. The easiest thing to do is eliminate k and go back to the function f(x)=b^x^.

We will use this function a bit at first, changing the base b to make the function grow or decay faster or slower.

However, once you start learning some calculus, you'll see that it is more natural to get rid of the base parameter b and instead use the constant k to make the function grow or decay faster or slower. Except, we can't exactly get rid of the base b. If we set b=1, we'd have the boring function f(x)=1, or, if we set b=0, we'd have the even more boring function f(x)=0. We need to choose some other value of b.

If we didn't have calculus, we'd probably choose b=2, writing our exponential function as f(x)=2^kx^. Or, since we like the decimal system so well, maybe we'd choose b=10 and write our exponential function as f(x)=10^kx^. According to the above discussion, it shouldn't matter whether we use b=2 or b=10, as we can get the same functions either way (just with different values of k).

But, it turns out that calculus tells us there is a natural choice for the base b. Once you learn some calculus, you'll see why the most common base b throughout the sciences is the irrational number

e=2.718281828459045….

Fixing b=e, we can write the exponential functions as f(x)=e^kx^.

Using e for the base is so common, that e^x^ (“e to the x”) is often referred to simply as the exponential function.

To increase the possibilities for the exponential function, we can add one more parameter c that scales the function: f(x)=cb^kx^.

Since f(0)=cb^k·0^=c, we can see that the parameter c does something completely different than the parameters b and k. We'll often use two parameters for the exponential function: c and one of b or k. For example, we might set k=1 and use f(x)=cb^x^, or set b=e and use f(x)=ce^kx^.

https://mathinsight.org/exponential_function

The number e is a mathematical constant, approximately equal to 2.71828, that is the base of the natural logarithm and exponential function. It is sometimes called Euler's number, after the Swiss mathematician Leonhard Euler, though this can invite confusion with Euler numbers, or with Euler's constant, a different constant typically denoted γ. Alternatively, e can be called Napier's constant after John Napier. The Swiss mathematician Jacob Bernoulli discovered the constant while studying compound interest.

The first references to this constant were published in 1618 in the table of an appendix of a work on logarithms by John Napier. However, this did not contain the constant itself, but simply a list of logarithms to the base e. It is assumed that the table was written by William Oughtred. In 1661, Christiaan Huygens studied how to compute logarithms by geometrical methods and calculated a quantity that, in retrospect, is the base-10 logarithm of e, but he did not recognize e itself as a quantity of interest.

The constant itself was introduced by Jacob Bernoulli in 1683, for solving the problem of continuous compounding of interest. In his solution, the constant e occurs as the limit

e = lim~n→∞~ (1 + 1/n)^n^,

where n represents the number of intervals in a year on which the compound interest is evaluated (for example, n = 12 for monthly compounding).
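A minimal numerical sketch of this limit, watching (1 + 1/n)^n^ approach e as the number of compounding intervals n grows:

```python
import math

# Bernoulli's compound-interest limit: (1 + 1/n)**n -> e as n -> infinity.
for n in [1, 12, 365, 10**6]:
    print(n, (1 + 1 / n) ** n)
print(math.e)  # 2.718281828459045
```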

The first symbol used for this constant was the letter b by Gottfried Leibniz in letters to Christiaan Huygens in 1690 and 1691.

Leonhard Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, and in a letter to Christian Goldbach on 25 November 1731. The first appearance of e in a printed publication was in Euler's Mechanica (1736). It is unknown why Euler chose the letter e. Although some researchers used the letter c in the subsequent years, the letter e was more common and eventually became standard.

Euler proved that e is the sum of the infinite series

e = 1/0! + 1/1! + 1/2! + 1/3! + ⋯,

where n! is the factorial of n. The equivalence of the two characterizations using the limit and the infinite series can be proved via the binomial theorem.
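The rapid convergence of this factorial series is easy to see numerically (a minimal sketch using the standard library):

```python
import math

# Partial sum of 1/n! for n = 0..17 already matches e to machine precision.
e_approx = sum(1 / math.factorial(n) for n in range(18))
print(abs(e_approx - math.e) < 1e-10)  # True
```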

https://en.m.wikipedia.org/wiki/E_(mathematical_constant)

The number e first comes into mathematics in a very minor way. This was in 1618 when, in an appendix to Napier's work on logarithms, a table appeared giving the natural logarithms of various numbers. However, that these were logarithms to base e was not recognised since the base to which logarithms are computed did not arise in the way that logarithms were thought about at this time. Although we now think of logarithms as the exponents to which one must raise the base to get the required number, this is a modern way of thinking. We will come back to this point later in this essay. This table in the appendix, although carrying no author's name, was almost certainly written by Oughtred. A few years later, in 1624, again e almost made it into the mathematical literature, but not quite. In that year Briggs gave a numerical approximation to the base 10 logarithm of e but did not mention e itself in his work.

The next possible occurrence of e is again dubious. In 1647 Saint-Vincent computed the area under a rectangular hyperbola. Whether he recognised the connection with logarithms is open to debate, and even if he did there was little reason for him to come across the number e explicitly. Certainly by 1661 Huygens understood the relation between the rectangular hyperbola and the logarithm. He examined explicitly the relation between the area under the rectangular hyperbola yx=1 and the logarithm. Of course, the number e is such that the area under the rectangular hyperbola from 1 to e is equal to 1. This is the property that makes e the base of natural logarithms, but this was not understood by mathematicians at this time, although they were slowly approaching such an understanding.
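This defining property, that the area under y = 1/x from 1 to e equals 1, can be checked with a simple midpoint-rule integration (a numerical sketch, not a historical method):

```python
import math

# Midpoint-rule approximation of the integral of 1/x from 1 to e.
n = 100_000
a, b = 1.0, math.e
h = (b - a) / n
area = sum(1.0 / (a + (i + 0.5) * h) for i in range(n)) * h
print(round(area, 6))  # 1.0
```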

Huygens made another advance in 1661. He defined a curve which he calls "logarithmic" but in our terminology we would refer to it as an exponential curve, having the form y=ka^x^. Again out of this comes the logarithm to base 10 of e, which Huygens calculated to 17 decimal places. However, it appears as the calculation of a constant in his work and is not recognised as the logarithm of a number (so again it is a close call but e remains unrecognised).

Further work on logarithms followed which still does not see the number e appear as such, but the work does contribute to the development of logarithms. In 1668 Nicolaus Mercator published Logarithmotechnia which contains the series expansion of log⁡(1+x). In this work Mercator uses the term "natural logarithm" for the first time for logarithms to base e. The number e itself again fails to appear as such and again remains elusively just round the corner.

Perhaps surprisingly, since this work on logarithms had come so close to recognising the number e, when e is first "discovered" it is not through the notion of logarithm at all but rather through a study of compound interest. In 1683 Jacob Bernoulli looked at the problem of compound interest and, in examining continuous compound interest, he tried to find the limit of (1+1/n)^n^ as n tends to infinity. He used the binomial theorem to show that the limit had to lie between 2 and 3 so we could consider this to be the first approximation found to e. Also if we accept this as a definition of e, it is the first time that a number was defined by a limiting process. He certainly did not recognise any connection between his work and that on logarithms.

We mentioned above that logarithms were not thought of in the early years of their development as having any connection with exponents. Of course from the equation x = a^t^, we deduce that t = log⁡ x where the log is to base a, but this involves a much later way of thinking. Here we are really thinking of log as a function, while early workers in logarithms thought purely of the log as a number which aided calculation. It may have been Jacob Bernoulli who first understood the way that the log function is the inverse of the exponential function. On the other hand the first person to make the connection between logarithms and exponents may well have been James Gregory. In 1684 he certainly recognised the connection between logarithms and exponents, but he may not have been the first.

So much of our mathematical notation is due to Euler that it will come as no surprise to find that the notation e for this number is due to him. The claim which has sometimes been made, however, that Euler used the letter e because it was the first letter of his name is ridiculous. It is probably not even the case that the e comes from "exponential"; it may simply have been the next vowel after "a", which Euler was already using in his work. Whatever the reason, the notation e made its first appearance in a letter Euler wrote to Goldbach in 1731.

Most people accept Euler as the first to prove that e is irrational. Certainly it was Hermite who proved that e is not an algebraic number in 1873.

https://mathshistory.st-andrews.ac.uk/HistTopics/e/

All exponential functions are proportional to their own derivative, but the exponential function base e alone is the special number so that the proportionality constant is 1, meaning e^t^ actually equals its own derivative.

If you look at the graph of e^t^, it has the peculiar property that the slope of a tangent line to any point on the graph equals the height of that point above the horizontal axis.

Examples of the slope of the tangent line for the exponential function.
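This tangent-line property can be checked numerically. A finite-difference sketch (my own illustration, not a proof):

```python
import math

# Compare the slope of e^t at a point with the height of the graph there.
def slope(f, t, h=1e-6):
    # symmetric finite difference approximation of the derivative
    return (f(t + h) - f(t - h)) / (2 * h)

for t in (0.0, 1.0, 2.5):
    print(t, slope(math.exp, t), math.exp(t))   # slope ≈ height at every t
```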

So how does the exponential function help us find the derivatives of other exponential functions? Well, maybe you noticed that different exponentials look like horizontally scaled versions of each other. This is true for all exponential functions, but best seen with exponential functions with related bases.

This means that you can re-write one exponential in terms of another's base. For example, if we have an exponential function of base 2 and want to re-write the function in terms of base 4, it can be written like this.

2^x^=4^(1/2)x^

One way to see how to convert between two bases is to zoom in on the graph between 0 and 1 to see how fast the first base grows to the value of the second base. In this case, base 4 grows twice as fast as base 2 and reaches the output of 2 in half the time. So to convert base 4 to base 2 we can multiply the input t of the base 4 function by the constant 1/2, which is the same as scaling 4^x^ by a factor of 2 in the horizontal direction.

So we've found a function, the exponential function of base e, with a really nice derivative property. Can we take any old exponential function and re-write it in terms of the exponential function? Or in other words, what constant do we multiply the input variable by to make the exponential function have the same output as another exponential function?

For example, let's try to re-write 2^t^ in terms of the exponential function.

e^ct^ = 2^t^

As before, we can zoom in on a plot of the two functions, and compare their behavior. Specifically, how long does it take the exponential function to grow to 2?

Well, looking at the graph, it takes about t=0.693... units which is exactly equal to the same proportionality constant we found before! If we multiply the input variable t in the exponential function by this constant, the exponential function has the same output as 2^t^.

e^(0.69314718056...)⋅t^ = 2^t^

This type of question we are asking leads us directly towards another function, the inverse of the exponential function, the natural logarithm function.

A function like this answers the question of the mystery constants, because it gives a different way to think about functions that are proportional to their own derivative. There's nothing fancy here; this is simply the definition of the natural log, which asks the question "e to the what equals 2?"

e^??^ = 2

And indeed, go plug in the natural log of 2 to a calculator, and you’ll find that it’s 0.6931..., the mystery constant we ran into earlier. And same goes for all the other bases, the mystery proportionality constant that pops up when taking derivatives and when re-writing exponential functions using e is the natural log of the base; the answer to the question "e to the what equals that base".
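The calculator check above is one line of Python (a sketch of the same claim, using the standard library):

```python
import math

# The mystery constant for base 2 is ln(2); e^(ln(2)·t) matches 2^t exactly.
c = math.log(2)
print(c)   # 0.6931471805599453

for t in (0.5, 1.0, 3.0):
    print(math.exp(c * t), 2 ** t)   # the two columns agree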

Importantly, the natural logarithm function gives us the missing tool we need to find the derivative of any exponential function. The key is to re-write the function and then use the chain rule. For example, what is the derivative of the function 3^t^? Well, let's re-write this function in terms of the exponential function using the natural logarithm to calculate the horizontally-scaling proportionality constant.

3^t^ = e^ln(3)t^

Then, we can calculate the derivative of e^ln(3)t^ using the chain rule. First, take the derivative of the outermost function, which, due to the special nature of the exponential function, is itself. Second, multiply this by the derivative of the inner function ln(3)t, which is the constant ln(3).

This is the same derivative we found using algebra above, since ln⁡(3)=1.09861228867...
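As a numerical check of this chain-rule result (a finite-difference sketch of my own, not part of the source):

```python
import math

# Check numerically that d/dt 3^t = ln(3) · 3^t.
def deriv(f, t, h=1e-6):
    # symmetric finite difference
    return (f(t + h) - f(t - h)) / (2 * h)

t = 2.0
numeric = deriv(lambda x: 3 ** x, t)
exact = math.log(3) * 3 ** t
print(numeric, exact)   # both ≈ 9.8875
```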

The same technique can be used to find the derivative of any exponential function.

In fact, throughout applications of calculus, you rarely see exponentials written as some base to a power t. Instead you almost always write exponentials as e raised to some constant multiplied by t. It’s all equivalent; any function like 2^t^ or 3^t^ can be written as e^c⋅t^. The difference is that framing things in terms of the exponential function plays much more smoothly with the process of derivatives.

Why we care

I know this is all pretty symbol heavy, but the reason we care is that all sorts of natural phenomena involve a certain rate of change being proportional to the thing changing.

For example, the rate of growth of a population actually does tend to be proportional to the size of the population itself, assuming there isn’t some limited resource slowing that growth down. If you put a cup of hot water in a cool room, the rate at which the water cools is proportional to the difference in temperature between the room and the water. Or said differently, the rate at which that difference changes is proportional to itself. If you invest your money, the rate at which it grows is proportional to the amount of money there at any time.

In all these cases, where some variable’s rate of change is proportional to itself, the function describing that variable over time will be some exponential. And even though there are lots of ways to write any exponential function, it’s very natural to choose to express these functions as e^ct^, since that constant c in the exponent carries a very natural meaning: It’s the same as the proportionality constant between the size of the changing variable and the rate of change.

https://www.3blue1brown.com/lessons/eulers-number

Ruling the Logarithms (sliderulemuseum.com)
submitted 2 months ago* (last edited 2 months ago) by xiao@sh.itjust.works to c/Math_history@sh.itjust.works

In mathematics, the logarithm of a number is the exponent by which another fixed value, the base, must be raised to produce that number. For example, the logarithm of 1000 to base 10 is 3, because 1000 is 10 to the 3rd power: 1000 = 10^3^ = 10 × 10 × 10. More generally, if x = b^y^, then y is the logarithm of x to base b, written log~b~ x, so log~10~ 1000 = 3. As a single-variable function, the logarithm to base b is the inverse of exponentiation with base b.

Logarithms were introduced by John Napier in 1614 as a means of simplifying calculations. They were rapidly adopted by navigators, scientists, engineers, surveyors, and others to perform high-accuracy computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition. This is possible because the logarithm of a product is the sum of the logarithms of the factors:

log~b~(xy) = log~b~ ⁡x + log~b~ ⁡y ,

provided that b, x and y are all positive and b ≠ 1. The slide rule, also based on logarithms, allows quick calculations without tables, but at lower precision.
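The product rule above is the whole trick behind logarithm tables. A minimal Python sketch of one "table look-up" multiplication:

```python
import math

# Multiply 273 by 365 using only base-10 logs and one addition,
# the way a table user would.
x, y = 273.0, 365.0
log_sum = math.log10(x) + math.log10(y)   # "look up" both logs, add
product = 10 ** log_sum                   # "inverse look-up" of the sum
print(product)   # ≈ 99645.0
```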

The common logarithm of a number is the index of that power of ten which equals the number. Speaking of a number as requiring so many figures is a rough allusion to common logarithm, and was referred to by Archimedes as the "order of a number". The first real logarithms were heuristic methods to turn multiplication into addition, thus facilitating rapid computation.

https://en.m.wikipedia.org/wiki/Logarithm

John Napier of Merchiston (Latinized as Ioannes Neper; 1 February 1550 – 4 April 1617), nicknamed Marvellous Merchiston, was a Scottish landowner known as a mathematician, physicist, and astronomer. He was the 8th Laird of Merchiston.

John Napier is best known as the discoverer of logarithms. He also invented the so-called "Napier's bones" and popularised the use of the decimal point in arithmetic and mathematics.

Napier's bones is a manually operated calculating device created by John Napier of Merchiston, Scotland for the calculation of products and quotients of numbers. The method was based on lattice multiplication, also called rabdology, a word invented by Napier. Napier published his version in 1617. It was printed in Edinburgh and dedicated to his patron Alexander Seton.

https://en.m.wikipedia.org/wiki/Napier%27s_bones

Napier's birthplace, Merchiston Tower in Edinburgh, is now part of the facilities of Edinburgh Napier University. There is a memorial to him at St Cuthbert's Parish Church at the west end of Princes Street Gardens in Edinburgh.

https://en.m.wikipedia.org/wiki/John_Napier

John Napier was a Scottish scholar who is best known for his invention of logarithms, but other mathematical contributions include a mnemonic for formulas used in solving spherical triangles and two formulas known as Napier's analogies.

https://mathshistory.st-andrews.ac.uk/Biographies/Napier/

How to Write it

We write it like this:

log~2~(8) = 3

So these two things are the same: 2^3^ = 8 and log~2~(8) = 3.

The number we multiply is called the "base", so we can say:

  • "the logarithm of 8 with base 2 is 3"
  • or "log base 2 of 8 is 3"
  • or "the base-2 log of 8 is 3"

Notice we are dealing with three numbers:

  • the base: the number we are multiplying (a "2" in the example above)
  • how often to use it in a multiplication (3 times, which is the logarithm)
  • The number we want to get (an "8")

Example: What is log~5~(625) ... ?

We are asking "how many 5s need to be multiplied together to get 625?"

5 × 5 × 5 × 5 = 625, so we need 4 of the 5s

Answer: log~5~(625) = 4
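The same answer falls out of the change-of-base formula, log~5~(625) = ln(625)/ln(5) (a one-line check, not part of the source page):

```python
import math

# "How many 5s need to be multiplied together to get 625?"
answer = math.log(625) / math.log(5)
print(answer)   # ≈ 4
```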

https://www.mathsisfun.com/algebra/logarithms.html

Before Logarithms:

The late sixteenth century saw unprecedented development in many scientific fields; notably, observational astronomy, long-distance navigation, and geodesy, the science of measuring and representing the earth. These endeavors required much from mathematics. For the most part, their foundation was trigonometry, and trigonometric tables, identities, and related calculation were the subject of intensive enterprise. Typically, trigonometric functions were based on non-unity radii, such as R = 10,000,000, to ensure precise integer output.* Reducing the calculation burden that resulted from dealing with such large numbers for practitioners in these applied disciplines, and with it, the errors that inevitably crept into the results, became a prime objective for mathematicians. As a result, much energy and scholarly effort were directed towards the art of computation.

Accordingly, techniques that could bypass lengthy processes, such as long multiplications or divisions, were explored. Of particular interest were those that replaced these processes with equivalent additions and subtractions. One method originating in the late sixteenth century that was used extensively to save computation was the technique called prosthaphaeresis, a compound constructed from the Greek terms prosthesis (addition) and aphaeresis (subtraction). This relation transformed long multiplications and divisions into additions and subtractions via trigonometric identities, such as:

2cos(A)cos(B) = cos(A+B) + cos(A−B).

When one needed the product of two numbers x and y, for example, trigonometric tables would be consulted to find A and B such that:

x=cos(A) and y=cos(B).

With A and B determined, cos(A+B) and cos(A−B)

could be read from the table and half of the sum taken to find the original product in question. Thus the long multiplication of two numbers could be replaced by table look-up, addition, and halving. Such rules were recognized as early as the beginning of the sixteenth century by Johannes Werner in 1510, but their application specifically for multiplication first appeared in print in 1588 in a work by Nicolai Reymers Ursus (Thoren, 1988). Christopher Clavius extended the methods of prosthaphaeresis, of which examples can be found in his 1593 Astrolabium (Smith, 1959, p. 455).
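The prosthaphaeresis procedure described above can be sketched in a few lines of Python (my illustration; the historical method used printed tables rather than `acos`):

```python
import math

# Prosthaphaeresis: a product via cosines, one addition, one subtraction,
# and a halving, using 2cos(A)cos(B) = cos(A+B) + cos(A-B).
def prosth_product(x, y):
    # works for inputs in [-1, 1]; sixteenth-century practitioners scaled
    # their numbers into this range before consulting the tables
    A, B = math.acos(x), math.acos(y)
    return (math.cos(A + B) + math.cos(A - B)) / 2

print(prosth_product(0.25, 0.8))   # ≈ 0.2
```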

Finally, with the scientific community focused on developing more powerful computational methods, the desire to capture symbolically essential mathematical ideas behind these developments was also growing. In the fifteenth and sixteenth centuries, mathematicians such as Nicolas Chuquet (c. 1430–1487) and Michael Stifel (c. 1487–1567) turned their attention to the relationship between arithmetic and geometric sequences while working to construct notation to express an exponential relationship. The focus on mathematical symbolism in centuries prior and the growing attention to notation–particularly the experimentation with different versions of exponent notation–played a critical role in the recognition and clarification of such a relationship. Now the mathematical connection between a geometric and an arithmetic sequence could be made all the more apparent by symbolically capturing these sequences as successive exponential powers of a given number and the exponents themselves, respectively (see Figure 6). The work on the relationships between sequences was mathematically important per se, but was equally significant for providing the inspiration for the development of the logarithmic relation.

* Note: Modern trigonometry is essentially based on triangles inscribed in a unit circle; that is, a circle with radius R=1. Early practitioners used circles with various values for the radius. The relationship between the modern sine function and a sine or half-chord in a circle of radius R is given by Sinθ = R sinθ, where the modern sine function has a lower case 's' and the pre-modern sine an upper case 'S'.

https://old.maa.org/press/periodicals/convergence/logarithms-the-early-history-of-a-familiar-function-before-logarithms-the-computational-demands-of

John Napier Introduces Logarithms

In such conditions, it is hardly surprising that many mathematicians were acutely aware of the issues of computation and were dedicated to relieving practitioners of the calculation burden. In particular, the Scottish mathematician John Napier was famous for his devices to assist with computation. He invented a well-known mathematical artifact, the ingenious numbering rods more quaintly known as “Napier's bones,” that offered mechanical means for facilitating computation. (For additional information on “Napier's bones,” see the article, “John Napier: His Life, His Logs, and His Bones” (2006).) In addition, Napier recognized the potential of the recent developments in mathematics, particularly those of prosthaphaeresis, decimal fractions, and symbolic index arithmetic, to tackle the issue of reducing computation. He appreciated that, for the most part, practitioners who had laborious computations generally did them in the context of trigonometry. Therefore, as well as developing the logarithmic relation, Napier set it in a trigonometric context so it would be even more relevant.

Napier first published his work on logarithms in 1614 under the title Mirifici logarithmorum canonis descriptio, which translates literally as A Description of the Wonderful Table of Logarithms. Indeed, the very title Napier selected reveals his high ambitions for this technique---the provision of tables based on a relation that would be nothing short of “wonder-working” for practitioners. As well as providing a short overview of the mathematical details, Napier gave technical expression to his concept. He coined a term from the two ancient Greek terms logos, meaning proportion, and arithmos, meaning number; compounding them to produce the word “logarithm.” Napier used this word as well as the designations “natural” and “artificial” for numbers and their logarithms, respectively, in his text.

Despite the obvious connection with the existing techniques of prosthaphaeresis and sequences, Napier grounded his conception of the logarithm in a kinematic framework. The motivation behind this approach is still not well understood by historians of mathematics. Napier imagined two particles traveling along two parallel lines. The first line was of infinite length and the second of a fixed length (see Figures 2 and 3). Napier imagined the two particles to start from the same (horizontal) position at the same time with the same velocity. The first particle he set in uniform motion on the line of infinite length so that it covered equal distances in equal times. The second particle he set in motion on the finite line segment so that its velocity was proportional to the distance remaining from the particle to the fixed terminal point of the line segment.

Figure 2. Napier's two parallel lines with moving particles (Image used courtesy of Landmarks of Science Series, NewsBank-Readex)

More specifically, at any moment the distance not yet covered on the second (finite) line was the sine and the traversed distance on the first (infinite) line was the logarithm of the sine. This had the result that as the sines decreased, Napier's logarithms increased. Furthermore, the sines decreased in geometric proportion, and the logarithms increased in arithmetic proportion. We can summarize Napier's explanation as follows (Descriptio I, 1 (p. 4); see Figure 3):

AC = log~nap~(γω) where γω = Sinθ~1~

AD = log~nap~(δω) where δω = Sinθ~2~

AE = log~nap~(ϵω) where ϵω= Sinθ~3~

and so on, so that, more generally: x = Sin(θ)

y = log~nap~(x)

where log~nap~ has been used to distinguish Napier's particular understanding of the logarithm concept from the modern one.

Figure 3. The relation between the two lines and the logs and sines

Napier generated numerical entries for a table embodying this relationship. He arranged his table by taking increments of arc θ minute by minute, then listing the sine of each minute of arc, and then its corresponding logarithm. However in terms of the way he actually computed these entries, he would have in fact worked in the opposite manner, generating the logarithms first and then choosing those that corresponded to a sine of an arc, which accordingly formed the argument. For example, he would have computed values that appear in the first column of Table 1 via the relation:

Table 1. Napier's logarithms

The values in the first column (in bold) that corresponded to the Sines of the minutes of arcs (third column) were extracted, along with their accompanying logarithms (column 2), and arranged in the table. The appropriate values from Table 1 can be seen in rows one to six of the last three columns in Figure 4. Napier tabulated his logarithms from 0° to 45° in minutes of arc, and by symmetry provided values for the entire first quadrant. The excerpt in Figure 4 gives the first half of the first degree and, by symmetry, on the right the last half of the eighty-ninth degree.

To complete the tables, Napier computed almost ten million entries from which he selected the appropriate values. Napier himself reckoned that computing this many entries had taken him twenty years, which would put the beginning of his endeavors as far back as 1594.

Figure 4. The first page of Napier's tables

(Image used courtesy of Landmarks of Science Series, NewsBank-Readex)

Napier frequently demonstrated the benefits of his method. For example, he worked through a problem involving the computation of mean proportionals, sometimes known as the geometric mean. He reviewed the usual way in which this would have been computed, and pointed out that his technique using logarithms not only finds the answer “earlier” (that is, faster!), but also uses only one addition and one division by two! He stated:

"Let the extremes 1000000 and 500000 bee given, and let the meane proportionall be sought: that commonly is found by multiplying the extreames given, one by another, and extracting the square root of the product. But we finde it earlier thus; We adde the Logarithme of the extreames 0 and 693147, the summe whereof is 693147 which we divide by 2 and the quotient 346573 shall be the Logar. of the middle proportionall desired. By which the middle proportionall 707107, and his arch 45 degrees are found as before.... found by addition onely, and division by two. (Book I, 5 (p. 25), as translated by Edward Wright)"

In order to find the mean proportional by traditional methods, Napier observed that one has to compute the product and then take the square root; that is:

√(1000000×500000) = √(500000000000) ≈ 707106.78

This method involves the multiplication of two large numbers and a lengthy square-root extraction. As an alternative, Napier proposed (with computations to 6 significant figures):

log~nap~(1000000)+log~nap~(500000)=0+693147=693147

693147÷2 = 346573 to 6 significant figures

⇒mean proportional = 707107, as required,

which he rightly deemed was much simpler to compute.
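Napier's shortcut translates directly into modern notation: the mean proportional of a and b is exp((ln a + ln b)/2). A sketch of his worked example (using natural logs in place of his table):

```python
import math

# One addition and one halving replace a multiplication and a
# square-root extraction.
a, b = 1_000_000, 500_000
mean = math.exp((math.log(a) + math.log(b)) / 2)
print(round(mean))   # 707107, Napier's value
```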

https://old.maa.org/press/periodicals/convergence/logarithms-the-early-history-of-a-familiar-function-john-napier-introduces-logarithms

Henry Briggs and the Common Logarithm

Shortly after Napier’s publication, English mathematician Henry Briggs (1561–1630) refined and popularized the concept of logarithms. Briggs collaborated with Napier and proposed the use of base-10 logarithms, also known as common logarithms. In 1617, Briggs published Logarithmorum Chilias Prima, containing the first table of base-10 logarithms.

Briggs’ base-10 system was more intuitive and practical for everyday calculations, as it aligned with the decimal system widely used in Europe. This refinement made logarithms accessible to a broader audience, including scientists, engineers, and navigators.

The Logarithmic Scale: Slide Rules and Early Calculators

One of the earliest applications of logarithms was the development of the slide rule. In 1622, English mathematician William Oughtred invented the circular slide rule, which utilized logarithmic scales for rapid calculations. By the mid-17th century, linear slide rules became common tools for scientists, engineers, and students.

The slide rule remained an essential computational device for over 300 years, until the advent of electronic calculators in the mid-20th century. Its reliance on logarithmic principles demonstrates the enduring utility of logarithms in simplifying calculations.

Logarithms in Astronomy and Navigation

Logarithms played a crucial role in advancing astronomy and navigation during the 17th and 18th centuries. Astronomers like Johannes Kepler and Isaac Newton relied on logarithmic tables to perform complex calculations related to planetary motion and celestial mechanics. By reducing the computational burden, logarithms enabled astronomers to make precise predictions and refine their models of the universe.

Navigators also benefited from logarithms, particularly in determining longitude and calculating distances at sea. The efficiency of logarithmic tables allowed mariners to improve their accuracy in charting courses and conducting explorations.

https://www.historymath.com/logarithms/

The Slide rule

This is a picture of a basic beginner’s slide rule for various math operations including multiplication/division and square/square root:

Components of A Slide Rule

The slide rule is actually made of three bars that are fixed together. The sliding center bar is sandwiched by the outer bars which are fixed with respect to each other. The metal "window" is inserted over the slide rule to act as a place holder. A cursor is fixed in the center of the "window" to allow for accurate readings.

The scales (A-D) are labeled on the left-hand side of the slide rule. The number of scales on a slide rule varies depending on the number of mathematical functions the slide rule can perform. Multiplication and division are performed using the C and D scales. Square and square root are performed with the A and B scales. The numbers are marked according to a logarithmic scale. Therefore, the first number on the slide rule scale (also called the index) is 1, because the log of 1 is zero.

To know how it works please read the full page

Notice that on this scale the distance between the divisions is decreasing. This is a characteristic of a log scale. A logarithm relates one number to another number much like a mathematical function. The log of a number x, to the base 10, is defined by: log~10~(x) = y, where 10^y^ = x.

The "magic" of the slide rule is actually based on the mathematical logarithmic relations:

log(a × b) = log(a) + log(b) and log(a ÷ b) = log(a) − log(b).

These relations made it possible to perform multiplication and division using addition and subtraction. Before the slide rule, the product of two numbers was found by looking up their respective logs and adding them together, then finding the number whose log is the sum, also called the inverse log.

The slide rule made its first appearance in the late 17th century. It made it easier to utilize the log relations by laying out a number line on which the displacement of each number is proportional to its log. The slide rule eased the addition of the two logarithmic displacements of the numbers, thus assisting with multiplication and division in calculations.
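Adding two log-proportional displacements and reading back the result can be mimicked in a few lines (a sketch of the principle, not of any physical scale):

```python
import math

# A slide rule adds lengths proportional to base-10 logs; reading the
# result off the scale is the antilog of the summed displacement.
def slide_multiply(x, y):
    return 10 ** (math.log10(x) + math.log10(y))

print(slide_multiply(2, 3))   # ≈ 6
```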

LIMITING PHYSICS:

The accuracy of the calculations made with a slide rule depends on the accuracy with which the user can read the numbers off the scale. More divisions allow for more decimal places which means increased accuracy.

https://web.mit.edu/2.972/www/reports/slide_rule/slide_rule.html


According to George Gamow, chess was invented by Sissa ben Dahir, Wazir of the court of King Shiram. King Shiram loved the game so much that he offered Sissa any reward he could name. Perhaps trying to impress the king with his mathematical skills, Sissa asked for some rice: one grain on the first square of the chessboard, two on the second, four on the third, eight on the fourth, and so on, each square's amount being the double of the previous square's.

How much rice did Shiram owe Sissa?

The last square would contain 2^63^ grains of rice. This is a large number: 2^63^ = 9,223,372,036,854,775,808. Suppose Shiram had tried to stack the rice of this last square in a column, each grain lying on top of the one below it. A grain of rice is about 1 mm thick. How high a column of rice would Shiram have obtained? Would it be higher than Mt. Everest? Higher than the distance to the moon? To the sun?

Here is the answer

In fact, if he could have stacked them this way, Shiram would have obtained a column of rice one light year tall, one-quarter of the way to the nearest star after the sun. Obviously, Shiram could not give Sissa the reward he requested. What do you suppose was the outcome? Let's just say an important lesson is "Don't be a smart-aleck."
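The light-year figure checks out with a quick back-of-the-envelope computation (my sketch, using 1 mm per grain as the passage assumes):

```python
# The last square's 2^63 grains, stacked 1 mm per grain, measured
# against a light year.
grains = 2 ** 63
height_m = grains * 1e-3            # stack height in metres
light_year_m = 9.4607e15            # metres in one light year
print(height_m / light_year_m)      # ≈ 0.97 light years
```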


The quickness of doubling is not just related to the history of chess. The most elementary population models postulate the growth rate is proportional to the population size (twice as many people means twice as many couples having babies, means twice as many babies). This led Thomas Malthus to predict population pressure problems, because Malthus argued populations grow more rapidly than their ability to produce food.

https://gauss.math.yale.edu/public_html/People/frame/Fractals/Chaos/Doubling/Doubling.html

The ancient Indian Brahmin mathematician Sissa (also spelt Sessa or Sassa and also known as Sissa ibn Dahir or Lahur Sessa) is a mythical character from India, known for the invention of chaturanga, the Indian predecessor of chess, and the wheat and chessboard problem he would have presented to the king when he was asked what reward he'd like for that invention.

Sissa, a Hindu Brahmin (in some legends from the village of Lahur), invents chess for an Indian king (named as Balhait, Shahram or Ladava in different legends, with "Taligana" sometimes named as the supposed kingdom he ruled in northern India) for educational purposes. In gratitude, the king asks Sissa how he wants to be rewarded. Sissa wishes to receive an amount of grain which is the sum of one grain on the first square of the chess board, and which is then doubled on every following square.

This request is now known as the wheat and chessboard problem, and forms the basis of various mathematical and philosophical questions.

Until the nineteenth century, the legend of Sissa was one of several theories about the origin of chess. Today it is mainly regarded as a myth because there is no clear picture of the origin of chaturanga (an ancient Indian chess game), and from which modern chess has developed.

The context of the mythical Sissa is described in detail in A History of Chess. There are many variations and inconsistencies, and therefore little can be confirmed historically. Nevertheless, the legend of Sissa is placed by most sources in a Hindu kingdom between 400 and 600 AD, in an era after the invasion of Alexander the Great. The myth is often told from a Persian and Islamic perspective.

However, the oldest known narrative believed to have been the basis for the legend of Sissa is from before the advent of Islam. It tells of Husiya, daughter of Balhait, a queen whose son is killed by a rebel, but of whom she does not initially hear the news. This news is subtly announced to her through the chess game that Sissa introduced to her.

https://en.m.wikipedia.org/wiki/Sissa_(mythical_brahmin)

The problem may be solved using simple addition. With 64 squares on a chessboard, if the number of grains doubles on successive squares, then the sum of grains on all 64 squares is: 1 + 2 + 4 + 8 + ... and so forth for the 64 squares. The total number of grains can be shown to be 2^64^−1 or 18,446,744,073,709,551,615 (eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred and fifteen).

This exercise can be used to demonstrate how quickly exponential sequences grow, as well as to introduce exponents, zero power, capital-sigma notation, and geometric series. Updated for modern times using pennies and a hypothetical question such as "Would you rather have a million dollars or a penny on day one, doubled every day until day 30?", the formula has been used to explain compound interest. (Doubling would yield over one billion seventy three million pennies, or over 10 million dollars: 2^30^−1=1,073,741,823).
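Both the chessboard total and the penny variant are one-liners to verify:

```python
# Summing the geometric series over all 64 squares gives 2^64 - 1 grains.
total = sum(2 ** k for k in range(64))
print(total)   # 18446744073709551615

# The penny version: doubling for 30 days yields 2^30 - 1 cents.
pennies = 2 ** 30 - 1
print(pennies / 100)   # ≈ 10.7 million dollars
```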

The problem appears in different stories about the invention of chess. One of them includes the geometric progression problem. The story is first known to have been recorded in 1256 by Ibn Khallikan. Another version has the inventor of chess (in some tellings Sessa, an ancient Indian minister) request his ruler give him wheat according to the wheat and chessboard problem. The ruler laughs it off as a meager prize for a brilliant invention, only to have court treasurers report the unexpectedly huge number of wheat grains would outstrip the ruler's resources. Versions differ as to whether the inventor becomes a high-ranking advisor or is executed.

https://en.m.wikipedia.org/wiki/Wheat_and_chessboard_problem

Let one grain of wheat be placed on the first square of a chessboard, two on the second, four on the third, eight on the fourth, etc. How many grains total are placed on an 8×8 chessboard? Since this is a geometric series, the answer for n squares is a Mersenne number. Plugging in n=8×8=64 then gives 2^64^-1=18446744073709551615.

https://mathworld.wolfram.com/WheatandChessboardProblem.html

The Death of Moore’s Law: What it means and what might fill the gap going forward

In 1965, engineer and businessman Gordon Moore observed a trend that would go on to define the unprecedented technological explosion we’ve experienced over the past fifty years. Noting that the number of transistors in an integrated circuit doubles about every two years, Moore laid out his eponymous law, which has since become the engine behind the growing computer science industry, making everything we now enjoy—cellphones, high-resolution digital imagery, household robots, computer animation, etc.—possible.

However, Moore’s Law was never meant to last forever. Transistors can only get so small and, eventually, the more permanent laws of physics get in the way. Already transistors can be measured on an atomic scale, with the smallest ones commercially available only 3 nanometers wide, barely wider than a strand of human DNA (2.5nm). While there’s still room to make them smaller (in 2021, IBM announced the successful creation of 2-nanometer chips), such progress has become prohibitively expensive and slow, putting reliable gains into question. And there’s still the physical limitation in that wires can’t be thinner than atoms, at least not with our current understanding of material physics.

https://cap.csail.mit.edu/death-moores-law-what-it-means-and-what-might-fill-gap-going-forward

Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship. It is an experience curve effect, a type of observation quantifying efficiency gains from learned experience in production.
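As a rough illustration of the trend (my sketch, not from either article; the 1971 baseline of about 2,300 transistors, roughly the Intel 4004, is my assumption), compounding one doubling every two years looks like this:

```python
# Project the "doubling every two years" trend forward from a
# hypothetical 1971 baseline of ~2,300 transistors.
def projected_transistors(year, base_year=1971, base_count=2300):
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2021):
    print(year, f"{projected_transistors(year):,.0f}")
```

Fifty years of doublings takes the projection from thousands of transistors to tens of billions, which is the right order of magnitude for modern chips.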

A semi-log plot of transistor counts for microprocessors against dates of introduction, nearly doubling every two years

Industry experts have not reached a consensus on exactly when Moore's law will cease to apply. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, slightly below the pace predicted by Moore's law. In September 2022, Nvidia CEO Jensen Huang considered Moore's law dead, while Intel's then CEO Pat Gelsinger had the opposite view.

https://en.m.wikipedia.org/wiki/Moore%27s_law

No End in Sight (sh.itjust.works)
submitted 2 months ago* (last edited 2 months ago) by xiao@sh.itjust.works to c/Math_history@sh.itjust.works

Many of us recall the sense of wonder we felt upon learning that there is no biggest number; for some of us, that wonder has never quite gone away. It is obvious that, given any counting number, one can be added to it to give a larger number. But the implication that there is no limit to this process is perplexing.

The concept of infinity has exercised the greatest minds throughout the history of human thought. It can lead us into a quagmire of paradox from which escape seems hopeless. In the late 19th century, the German mathematician Georg Cantor showed that there are different degrees of infinity — indeed an infinite number of them — and he brought to prominence several paradoxical results that had a profound impact on the subsequent development of the subject.

Set Theory

Cantor was the inventor of set theory, which is a foundation of modern mathematics. A set is any collection of objects, physical or mathematical, actual or ideal. A particular number, say 4, is associated with all the sets having four elements. For any two of these sets, we can find a 1-to-1 correspondence, or bijection, between the elements of one set and those of the other. The number 4 is called the cardinality of these sets. Generalizing this argument, Cantor treated any two sets as being of the same size, or cardinality, if there is a 1-to-1 correspondence between them.

Bijection between two sets of cardinality 4.

But suppose the sets are infinite. As a concrete example, take all the natural numbers, 1, 2, 3, … as one set, and all the even numbers 2, 4, 6, … as the other. By associating any number n in the first set with 2n in the second, we have a perfect 1-to-1 correspondence. By Cantor’s argument, the two sets are the same size. But this is paradoxical, for the set of natural numbers contains all the even numbers and also all the odd ones so, in an intuitive sense, it is larger. The same paradoxical result had been deduced by Galileo some 250 years earlier.
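The pairing n ↔ 2n described above can be sketched over a finite window of the naturals (my own illustration; the names are made up):

```python
# Pair each natural number n with the even number 2n. Over all of the
# naturals this is a bijection: injective (distinct n give distinct 2n)
# and surjective onto the evens (every even 2k is hit by k).
naturals = range(1, 11)              # a finite window onto 1, 2, 3, ...
pairing = {n: 2 * n for n in naturals}

print(pairing)                       # {1: 2, 2: 4, ..., 10: 20}
assert len(set(pairing.values())) == len(pairing)   # no even number reused
```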

Cantor carried these ideas much further, showing in particular that the set of all the real numbers (or all the points on a line) has a degree of infinity, or cardinality, greater than that of the counting numbers. He did this using an ingenious approach called the diagonal argument. This raised an issue, called the continuum hypothesis: is there a degree of infinity between these two? This question cannot be answered within standard set theory.

Infinities without limit

Cantor introduced the concept of a power set: for any set A, the power set P(A) is the collection of all the subsets of A. Cantor proved that the cardinality of P(A) is greater than that of A. For finite sets, this is obvious; for infinite ones, it was startling. The result is now known as Cantor’s Theorem, and he used his diagonal argument in proving it. He thus developed an entire hierarchy of transfinite cardinal numbers. The smallest of these is the cardinality of the natural numbers, called Aleph-zero:

Aleph-zero, the cardinality of the natural numbers and the smallest transfinite number.

Cantor’s theory caused quite a stir; some of his mathematical contemporaries expressed dismay at its counter-intuitive consequences. Henri Poincaré, the leading luminary of the day, described the theory as a “grave disease” of mathematics, while Leopold Kronecker denounced Cantor as a renegade and a “corrupter of youth”. This hostility may have contributed to the depression that Cantor suffered through his latter years. But David Hilbert championed Cantor’s ideas, famously predicting that “no one will drive us from the paradise that Cantor has created for us”.

https://thatsmaths.com/2014/07/31/degrees-of-infinity/

Cantor's paradise is an expression used by David Hilbert (1926, page 170) in describing set theory and infinite cardinal numbers developed by Georg Cantor. The context of Hilbert's comment was his opposition to what he saw as L. E. J. Brouwer's reductive attempts to circumscribe what kind of mathematics is acceptable; see Brouwer–Hilbert controversy.

"From the paradise that Cantor created for us no-one shall be able to expel us." Hilbert (1926, p. 170), in a lecture given in Münster to the Mathematical Society of Westphalia on 4 June 1925

https://en.m.wikipedia.org/wiki/Cantor%27s_paradise

Georg Ferdinand Ludwig Philipp Cantor (3 March [O.S. 19 February] 1845 – 6 January 1918) was a mathematician who played a pivotal role in the creation of set theory, which has become a fundamental theory in mathematics. Cantor established the importance of one-to-one correspondence between the members of two sets, defined infinite and well-ordered sets, and proved that the real numbers are more numerous than the natural numbers. Cantor's method of proof of this theorem implies the existence of an infinity of infinities. He defined the cardinal and ordinal numbers and their arithmetic. Cantor's work is of great philosophical interest, a fact he was well aware of.

https://en.m.wikipedia.org/wiki/Georg_Cantor

Take the set of natural numbers {1, 2, 3, 4, … }. How many members are there in the set? Infinitely many, right? OK, now take the set of real numbers. How many members are there in the set? Infinitely many, right? So far so good.

Here is where it starts to get tricky. The number of members of a set is called its cardinality. If the cardinality of the natural numbers is infinity, and the cardinality of the real numbers is infinity, then do these two sets have the same cardinality? Are there the same amount of natural numbers as real numbers?

Whatever your intuition is about that last question, your intuition will hardly do. We need to be methodical about this. So now let's begin the proof [...].

Suppose we have devised some way to list all the real numbers between 0 and 1. This list will naturally be infinitely long, and we can write each entry in an infinitely-long decimal form. Here is how it might start, and note that I have marked some digits in bold.

The digits in bold run down the diagonal of this list. Use them to construct a new real number between 0 and 1.

.1531190918...

Like all the other numbers on the list, this number will have an infinite number of digits; this is because the list is infinitely long.

Now increase each individual digit by 1. If the digit is 9, make it 0:

.2642201029...

Is this new number on the original list?

On one hand, it must be, because the list is infinitely long and contains all the real numbers between 0 and 1. The new number is a real number between 0 and 1.

On the other hand, if we work through the number and the list methodically, we will see that it cannot be on the list. Is the new number the first number on the list? No, because the first digit of the number differs from the first digit of the first entry. Is the new number the second number on the list? No, because the second digit of the number differs from the second digit of the second entry. Is the new number the _n_th number on the list? No, because the _n_th digit of the number differs from the _n_th digit of the _n_th entry. Therefore, the new number, a real number between 0 and 1, cannot appear on an infinitely-long list of real numbers between 0 and 1.

We have contradicted ourselves, and that concludes the proof. It is impossible, even in principle, to denumerate the real numbers between 0 and 1. There are not just infinitely many reals between 0 and 1—there are uncountably many. There are so many that they cannot all be placed in correspondence with the natural numbers (i.e., given a spot on an infinitely-long list). Fun, right?
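The diagonal construction described above can be run on any finite window of an alleged list (a sketch of mine, not the author's code; `diagonal_number` and the sample rows are made up):

```python
# Build a number that differs from the k-th listed real in its k-th
# decimal digit: take the diagonal digit and bump it by 1 (9 wraps to 0).
def diagonal_number(digit_rows):
    # digit_rows: digit strings, e.g. "1531..." standing for 0.1531...
    out = []
    for k, row in enumerate(digit_rows):
        d = int(row[k])
        out.append(str(0 if d == 9 else d + 1))
    return "0." + "".join(out)

rows = ["1415", "7182", "6180", "5772"]
print(diagonal_number(rows))   # 0.2293, differing from row k at position k
```

However the list is extended, the constructed number keeps disagreeing with every entry somewhere, which is exactly why no list can be complete.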

You may be wondering if there is a correspondence between the cardinality of the set of natural numbers and the cardinality of the set of real numbers. You’re in luck; there is! The cardinality of the set of natural numbers is of course infinity, but it is a kind of infinity that is called aleph-null (ℵ₀). The cardinality of the set of real numbers (“the cardinality of the continuum”) is 2^ℵ₀^! That is a very big number.

Finally, if you’re still with me, I’ll offer a bonus. We can divide the real numbers into two sets: the algebraic numbers, which are numbers that can be solutions to one-variable polynomial equations with rational or integer coefficients, and transcendental numbers, which cannot be. It turns out that there are countably many algebraic numbers. You might already see where this is going. If there are 2^ℵ₀^ real numbers, and aleph-null algebraic numbers, how many transcendental numbers are there? 2^ℵ₀^ − ℵ₀, which is exactly equal to 2^ℵ₀^. (If you have difficulty seeing this, try 100 instead of aleph-null: 2^100^ − 100 is very close to 2^100^, no?). What this means is that “almost all” numbers are transcendental.
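The finite stand-in for that last step can be checked exactly, since Python integers have arbitrary precision (a quick check of my own, not from the post):

```python
# 2**100 - 100 is indistinguishable from 2**100 at ordinary precision.
big = 2 ** 100
print(big)               # 1267650600228229401496703205376
print(big - 100)         # 1267650600228229401496703205276
print((big - 100) / big) # 1.0 to double precision
```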

https://www.elidourado.com/p/cantors-diagonal-argument

For convenience, we are going to give the proof in terms of the binary system (see Number Systems), though it applies equally well for the decimal system. We will use the binary system because this makes everything a lot simpler – in the binary system, the only symbols used to define numbers are 0, 1 and the binary point, e.g.

0.1 in decimal notation this means a tenth, in binary notation it means a half,
0.01 in decimal notation this means a hundredth, in binary notation it means a quarter,
0.001 in decimal notation means a thousandth, in binary notation it means one-eighth. And so on. For example, 7⁄16 in the decimal system comes out in binary notation as 0.0111.

Obviously, for some fractions such as 1⁄3, the number cannot be written as a finite string in binary notation, and the binary expansion continues infinitely with a repeating string of digits (in the case of 1⁄3 this repeating block is 01, giving 0.0101, 0.010101, 0.01010101, and so on). Similarly, no irrational number can be represented by a finite string of the 0s and 1s of binary notation.

Again, for convenience, when the term ‘list’ or ‘enumeration’ of real numbers is used below, the term is used to indicate a function that gives a one-to-one correspondence between natural numbers and real numbers, that is, each real number is uniquely matched to one natural number. Since it refers to an infinite number of things, such a ‘list’/‘enumeration’ cannot be written down - but there can be definitions that define infinitely long lists.

Having dealt with the preliminaries, below is a typical presentation of the Diagonal proof itself:

  1. To prove: that for any list of real numbers between 0 and 1, there exists some real number that is between 0 and 1, but is not in the list.

  2. Obviously we can have lists that include at least some real numbers. In such lists, the first real number in the list is the number that is matched to the number one, the second real number in the list is the number that is matched to the number two, and so on. For any such list, we call the list a function, and we give it the name r(x). So r(1) means the real number matched up to the number 1, while r(2) means the real number matched up to the number 2, and r(17) means the real number matched up to the number 17. And so on. There can be many such lists, and we know that we can have lists that have some finite quantity of real numbers, and some lists that have an infinite quantity of real numbers. We will later address the question of whether there can be such a list that includes every real number.

  3. Now, we suppose that the beginnings of the binary expansions of some list of real numbers are as follows (of course, we cannot actually write down infinitely long binary expansions):

r(1) = 0.101011110101 …

r(2) = 0.00010100011 …

r(3) = 0.0010111011110 …

r(4) = 0.111101010111 …

r(5) = 0.10111101111 …

r(6) = 0.11101011111001 …

  4. For any list of real numbers, there exists a number (which we will call d) which is defined by the following rule. We start off with a zero followed by a point, viz: ‘0.’ Then we take the first digit of the first number in the list, and if the digit taken is 0 we change it to 1 and we write it down; if it is 1 we change it to 0 and we write it down. This is called the complement, so the complement of 0 is 1 and the complement of 1 is 0. We then take the second digit of the second number in the list and do the same, writing the changed digit after the previous one. And so on, and so on. For the first few numbers in our list above, this would work out like this (here we show the relevant digits in bold text):

r(1) = 0.**1**01011110101 …

r(2) = 0.0**0**010100011 …

r(3) = 0.00**1**0111011110 …

r(4) = 0.111**1**01010111 …

r(5) = 0.1011**1**101111 …

r(6) = 0.11101**0**11111001 …

  5. From this list, we obtain the following number: d = 0.010001. This is commonly called the ‘diagonal’ number. This real number d differs from every other real number in the list since it is different from every number in the list by at least one digit. For any finite list, the number d is a rational number, since the sequence of digits is finite. But if the list is limitless, then d is an endless expansion that is a real number. In this case, we cannot follow the instruction to write down the digits, and the number d is given only by definition - it is defined as the number whose n^th^ digit is the complement of the n^th^ digit of the n^th^ number in the list.

  6. So, given any list of real numbers we can always define another real number that is not in that list – the Diagonal number.

  7. We now assume that there can be a list that includes every real number.

  8. And now we have a contradiction – because the Diagonal number would be at the same time defined as a number that is in the list and also cannot be in the list – because it differs from every number in the list, since it is always different at the n^th^ digit.

  9. That means that the assumption that there can be a list that includes every real number (Step 7 above) is incorrect.

  10. Therefore there cannot be a list that includes every real number.

That concludes the Diagonal argument.
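The binary version of the construction, run on the sample list r(1)..r(6) from the proof above (my own sketch; the variable names are made up):

```python
# The diagonal number d takes the complement of the n-th binary digit
# of the n-th entry in the list.
rows = [
    "101011110101",
    "00010100011",
    "0010111011110",
    "111101010111",
    "10111101111",
    "11101011111001",
]

d = "0." + "".join("1" if row[k] == "0" else "0" for k, row in enumerate(rows))
print(d)   # 0.010001, the diagonal number given in the text

# d disagrees with the n-th row at the n-th digit, so it is not on the list.
assert all(d[2 + k] != row[k] for k, row in enumerate(rows))
```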

https://www.jamesrmeyer.com/infinite/diagonal-proof

He also showed that the real numbers were “non-denumerable” or “uncountable” (i.e. contained more elements than could ever be counted), as opposed to the set of rational numbers, which he had shown were technically (even if not practically) “denumerable” or “countable”. In fact, it can be argued that there are an infinite number of irrational numbers in between each and every rational number. The patternless decimals of irrational numbers fill the “spaces” between the patterns of the rational numbers.

Cantor coined the new word “transfinite” in an attempt to distinguish these various levels of infinite numbers from an absolute infinity, which the religious Cantor effectively equated with God (he saw no contradiction between his mathematics and the traditional concept of God). Although the cardinality (or size) of a finite set is just a natural number indicating the number of elements in the set, he also needed a new notation to describe the sizes of infinite sets, and he used the Hebrew letter aleph (ℵ). He defined ℵ₀ (aleph-null or aleph-nought) as the cardinality of the countably infinite set of natural numbers; ℵ₁ (aleph-one) as the next larger cardinality, that of the uncountable set of countable ordinal numbers; etc. Because of the unique properties of infinite sets, he showed that ℵ₀ + ℵ₀ = ℵ₀, and also that ℵ₀ × ℵ₀ = ℵ₀.
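The fact that two copies of the naturals are no bigger than one copy can be sketched by interleaving them into a single list (my own illustration; `interleave_index` is a made-up name):

```python
# Send copy 'a' of the naturals to the odd positions and copy 'b' to the
# even positions: a bijection between two copies of the naturals and the
# naturals themselves, illustrating aleph-null + aleph-null = aleph-null.
def interleave_index(tag, n):
    return 2 * n - 1 if tag == "a" else 2 * n

positions = [interleave_index(t, n) for n in range(1, 5) for t in ("a", "b")]
print(positions)   # [1, 2, 3, 4, 5, 6, 7, 8] -- no position wasted or repeated
```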

All of this represented a revolutionary step, and opened up new possibilities in mathematics. However, it also opened up the possibility of other infinities, for instance an infinity – or even many infinities – between the infinity of the whole numbers and the larger infinity of the decimal numbers. This idea is known as the continuum hypothesis, and Cantor believed (but could not actually prove) that there was NO such intermediate infinite set. The continuum hypothesis was one of the 23 important open problems identified by David Hilbert in his famous 1900 Paris lecture, and it remained unresolved – and indeed appeared to be unprovable – for decades, until Kurt Gödel (in the 1930s) and Paul Cohen (in the 1960s) together showed that it can neither be disproved nor proved from the standard axioms of set theory.

https://www.storyofmathematics.com/19th_cantor.html/
