On very very large numbers: a repost, with commentary

One of my abiding interests (along with nuclear bombs and very-long-exposure photography and Projects Daedalus and Orion, etc) has been googology, the study of very very large numbers just for the heck of it.

As a primer, you might find Scott Aaronson’s “Who Can Name the Bigger Number?” interesting.

Most people have no idea how big infinity is. They know on an intellectual level that “infinity”, at least in lay terms, is a kind of Wittgensteinian way of saying “there’s always a bigger number”. But they don’t see it, don’t feel its vast endlessness in their bones the way they know that mountains are high (that was a bad example). I doubt anyone really can. Jorge Luis Borges explores this theme over and over again in his stories (see e.g. The Library of Babel or The Aleph), and set theorists like Joel David Hamkins (interestingly the highest-rated user on MathOverflow by far, whose user profile reads “my interest lies in … the mathematics and philosophy of the infinite”) or Saharon Shelah (the most prolific mathematician after Erdős and Euler, with over three hundred publications in the last ten years alone at the doddering old age of fiftysomething – to hell with Hardy’s “mathematics is a young man’s game” quote) might appreciate actual infinities and their towering hierarchies better than anyone else alive can. But the finite numbers themselves climb so high that you don’t even have to consider infinity itself to start appreciating how big big can really get.

You have to really spend time with the behemoths lining up at infinity’s door (what kind of metaphor was that?) to get acquainted with the vastness of the finite. I spent years.

Like every aspiring student of googology, I started with Knuth’s beginner-friendly up-arrows, then moved on to hyperoperators and Conway’s chained arrows before encountering amateur mathematician Jonathan Bowers’ homepage and his now-rechristened BEAF (Bowers Exploding Array Function). Chris Bird goes higher still, but it starts getting pointless. The googolplex, the Moser, Graham’s number, Ackermann’s function, Kruskal’s TREE(3), ordinals, busy beavers and their higher-order oracles – all familiar signposts on the way to infinity’s unattainable horizon.

Like every aspiring student of googology, I spent hours at a stretch trying to one-up notations created by others to express ever-larger numbers. This is one of those essays, written after I encountered an especially novel-to-me notation, one whose expressive power seemed to come completely out of left field. Interestingly, it got a mention (a dismissive one, but still) on the Googology Wiki by one of the premier googologists in this subsubsubfield, who also gave a quick drive-by analysis of (the shortcomings of) one of my earlier attempts to, as I used to call it, “leap into the abyss”.

The title derives from one of Bowers’ earlier iterations of his “exploding array function”.


When the Pilot Drops One, the Passengers Explode

Just landed one hell of a big fish this morning! The dormant googologist in me decided to wake up after a months-long slumber to trawl the Web for a prize catch comparable to the denizens of the upper echelons of the fast-growing hierarchy – leviathans from the sunless depths, spawn of the ordinal gods above – and he managed to snag what I think is the largest from Friedman (of Kruskal’s TREE fame) so far: the subcubic graph function SCG(k).

As is typical of Friedman, the subcubic is very simply defined, but its growth rate is extremely explosive. To give a preliminary idea of how fast I mean, I will say this: SCG(3) > TREE^n(3) for any reasonable value of n. It is mind-bendingly ridiculous how completely the subcubic outpaces the TREE. We’re talking about one of the fastest computable functions ever devised; knowing that TREE(1) = 1 and TREE(2) = 3, the mental leap required to grasp the scale of TREE(3) relative to Graham’s number G (and hence the function’s growth rate) is as big as the leap required to grasp G’s scale relative to the googolplex. And why the hell am I even talking about the googolplex? It is nothing, absolutely nothing, compared to these leviathans. That said, I’ll introduce the subcubic below, but first some basic ideas from graph theory are in order.

We first conceptualize a graph as a discrete collection of objects we call vertices (singular: vertex), with at least some of them connected to one another by links, called edges. Since edges are by definition links between vertices, there are no ‘free-floating’ or ‘open-ended’ edges: every edge is associated with two vertices, one at each end (possibly the same vertex at both ends, in which case the edge is a loop).

Clearly a vertex can be connected to any number of other vertices. Define the valence of a vertex to be the number of edges connected to it, counting loops twice. For instance, if you think of a triangle as a graph made up of vertices joined by sides (edges), all the vertices of a triangle have valence 2.

We are said to have performed an edge contraction if we remove an edge and merge the two vertices it previously joined into a single vertex. For instance, a square can be made into a triangle by removing one edge and merging its two endpoints into one.

Consider two graphs G and H. If we can delete edges, delete vertices and perform edge contractions on H so that it becomes G, we say that G is a minor of H. So, for instance, a triangle is a minor of a square.
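To make these operations concrete, here is a minimal sketch (my own, not part of the original post) of the ideas so far, with a multigraph stored simply as a list of edges – each edge a pair of vertex labels, a loop repeating the same label twice:

    # Valence: the number of edge-ends meeting a vertex, so a loop counts twice.
    def valence(vertex, edges):
        return sum((u == vertex) + (v == vertex) for u, v in edges)

    # Edge contraction: delete one occurrence of the edge, then merge its second
    # endpoint into the first (any other edge between the two would become a loop).
    def contract(edges, edge):
        u, v = edge
        rest = list(edges)
        rest.remove(edge)
        return [(u if a == v else a, u if b == v else b) for a, b in rest]

    triangle = [(0, 1), (1, 2), (2, 0)]
    print([valence(x, triangle) for x in range(3)])   # [2, 2, 2], as in the text

    # The square-to-triangle example: contracting one side of a square leaves a
    # triangle, so the triangle is a minor of the square.
    square = [(0, 1), (1, 2), (2, 3), (0, 3)]
    print(contract(square, (0, 3)))                   # [(0, 1), (1, 2), (2, 0)]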

Call a graph subcubic if the valence of its vertices is at most 3. Now consider a sequence of graphs G1, G2, G3… such that

  1. Each graph Gi has at most i + k vertices (for a fixed non-negative integer k),
  2. Gi is not a minor of Gj for all 1 ≤ i < j.

It can be shown that, for any choice of k, such a sequence cannot be infinite; it follows that for each k there exists a sequence of graphs of maximum length.

Definition. SCG(k) is the length of the longest such sequence given k.

Simple, isn’t it? You get the feeling it is a bit too simple. The graphs’ vertices cannot have valence exceeding 3, so their options are limited; they’re that much more ‘minor-prone’ to one another (the correct term is homeomorphic embedding). In light of that it seems as if, whatever they ‘do to try and order themselves’, they can’t ‘avoid being embeddings of their sequential successors’ for very long. At least, that’s the intuition. But oh, how spectacularly astray intuition leads you here! For they can in fact ‘avoid each other via ordering’ for quite a while, almost forever in fact; it is this astonishing capacity for avoidance via ordering among the graphs that interests me so much. The resulting growth rate of the function defined on their behavior completely boggles the mind. And it is doubly baffling: where does this explosiveness come from? For recursion, that source of the ordinal gods’ might and sheer scale, is completely absent from the definition!

As is the case with its far smaller brethren, SCG(k) quickly escapes estimability, and we are forced to tease out lower bounds instead, which in turn leads to a hierarchy of lower bounds that would prove frustratingly imprecise for anyone not used to the realities of googology. Sure, we can show that SCG(0) = 6: the longest such sequence starts with a single vertex connected to itself by an edge that ‘loops around’ back to its starting point, then two vertices (the bound being i + k = 2 + 0) connected by a single edge, followed by 3, 2, 1, and 0 unconnected vertices (yes, a graph with ‘0 vertices’ counts as a graph). But SCG(1) is already out of sight. It is larger than Graham’s number G, larger than Friedman’s n(4) even, the function’s growth rate by that point already comparable to f_ε₀(n).
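For the bookkeeping-minded, here is a small sketch (mine, reusing the edge-list convention from earlier) that encodes that SCG(0) sequence and machine-checks the two easy conditions: every graph is subcubic, and the i-th graph has at most i + k vertices. The crucial third property – that no graph is a minor of a later one – is what actually caps the length at six, and checking it takes an actual graph-minor test (here it is done by eye).

    # The SCG(0) witness described above, each graph given as
    # (number_of_vertices, list_of_edges); isolated vertices need no edges.
    def valence(vertex, edges):
        return sum((u == vertex) + (v == vertex) for u, v in edges)  # loops count twice

    def is_subcubic(n_vertices, edges):
        return all(valence(x, edges) <= 3 for x in range(n_vertices))

    sequence = [
        (1, [(0, 0)]),   # G1: one vertex with a loop
        (2, [(0, 1)]),   # G2: two vertices joined by a single edge
        (3, []),         # G3: three isolated vertices
        (2, []),         # G4: two isolated vertices
        (1, []),         # G5: one isolated vertex
        (0, []),         # G6: the empty graph
    ]

    k = 0
    for i, (n, edges) in enumerate(sequence, start=1):
        assert is_subcubic(n, edges)      # valence never exceeds 3
        assert n <= i + k                 # at most i + k vertices
    print("easy conditions hold for all", len(sequence), "graphs")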

(A refresher is in order. Recall that the fast-growing hierarchy starts from some strictly increasing base function f_0 and sets f_{m+1}(n) = f_m^n(n), that is, f_m iterated n times on n, with the first limit level obtained by diagonalizing: f_ω(n) = f_n(n). The ordinal indices then keep climbing: ω, ω+1, …, ω+ω = ω·2, …, ω·ω = ω^2, …, ω^ω, …, all the way up to ε₀, the limit of the tower ω, ω^ω, ω^ω^ω, ….)
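To get a feel for how quickly even the bottom rungs of that ladder climb, here is a toy sketch (mine; I assume the standard base f_0(n) = n + 1, which the refresher above leaves implicit):

    def f(m, n):
        # Fast-growing hierarchy below omega: f_0(n) = n + 1, and
        # f_{m+1}(n) is f_m iterated n times, starting from n.
        if m == 0:
            return n + 1
        result = n
        for _ in range(n):
            result = f(m - 1, result)
        return result

    def f_omega(n):
        # First limit level: diagonalize over the finite levels.
        return f(n, n)

    print(f(1, 3))      # 6     -- f_1(n) = 2n
    print(f(2, 3))      # 24    -- f_2(n) = n * 2^n
    print(f(3, 2))      # 2048  -- f_3 already iterates exponentials
    print(f_omega(2))   # 8     -- but f_omega(3) = f_3(3) has over a hundred million digits

For scale, Graham’s G sits roughly at the f_{ω+1} level of this ladder; ε₀ towers unimaginably far above that, and the subcubic leaves even ε₀ far behind.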

Here comes the interesting part. SCG(2) is monstrous, but provably far smaller than TREE(3). SCG(3), on the other hand, as mentioned in the introduction, truly blows the TREE function out of the water; it surpasses TREE(3), TREE(4), indeed TREE(n) for any reasonable value of n; it laughs in the face of TREE(TREE(3)) and even, in fact, TREE^n(3) for (again) any reasonable value of n. Insofar as computable functions are concerned it far surpasses anything else ever defined, except the comparably enormous, similarly unfathomable and intuition-escaping ordinal form of the Buchholz hydra, which will be covered in a later post.

You cannot help but wonder at the near-transfinite possibilities afforded by the sheer power of the subcubic. A little game, perhaps: delay inevitable death at the hands of Rado’s Sigma, the Busy Beaver, which eventually surpasses every computable function. I was, to be honest, surprised at how speedily the Beaver slays such well-known computable monsters: the first few inputs yield disappointingly low values, and you have to wait all the way up to Σ(64) to be sure it surpasses Graham’s G, but past that it far outpaces Chris Bird’s last number N at k = 150, and the legendary loader.c at k = 160. That said, perhaps you could bite the bullet and subject the pure subcubic to recursion – we would essentially be describing the fast iteration hierarchy, but still.

Let f(n) = SCG(n). Recycle and append. Take it all.   

Ah, how tempting. And, as with every single one of my past leaps into the transfinite, completely superfluous too.

But you could also tweak it in other, decidedly more imaginative ways. Recall that the valence of the vertices in all the graphs making up the sequence is limited to at most 3. Now for the tremendous leap of faith: what if SCG(k, m), with k held constant and m the valence limit, were an increasing – a very explosively increasing – but still finite function of m, for every k?

You see now the possibilities for iteration. You could define a rudimentary, Bowers-ish variant of the subcubic admitting an arbitrary number of entries. Define, for instance, SCG(k, m, n+1) = SCG(k, SCG(k, m, n), n), with the famous ‘drop trailing zeros’ mantra invoked, and add the extension SCG(k, m, X, p, n+1) = SCG(k, m, X, SCG(k, m, X, p, n), n), where X is an arbitrary-length string of entries. Or maybe Bowers’ more breathtaking ‘pilot and passengers’ invocation: when the pilot drops one, the passengers explode – written more rigorously as SCG(a, b, c, …, n+1) = SCG(A, A, A, …, n) with A = SCG(a, b, c, …, n). All that’s left to do is to arbitrarily define some sequence f(n) based on this Bowers variant of the subcubic, Ackermann-style – perhaps this will do: f(0) = SCG(0), f(n+1) = SCG(A, A, A, …, A) with the number of A’s = A = f(n) – and let it scale the fast iteration hierarchy.

Recursively.
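If you want to see the shape of that ‘pilot and passengers’ recursion without (obviously) ever running it on the real thing, here is a toy sketch; everything in it is mine, and SCG is swapped out for a harmless stand-in oracle so the recursion actually halts in observable time:

    def toy_scg(k):
        # Stand-in for SCG(k); the real function is what would make this explode.
        return k + 2

    def scg_array(entries, scg=toy_scg):
        entries = list(entries)
        if len(entries) == 1:          # a single entry falls through to the oracle
            return scg(entries[0])
        if entries[-1] == 0:           # drop-trailing-zeros mantra: SCG(..., 0) = SCG(...)
            return scg_array(entries[:-1], scg)
        # The pilot (last entry) drops one, the passengers explode:
        # SCG(a, b, ..., n+1) = SCG(A, A, ..., n) with A = SCG(a, b, ..., n).
        pilot = entries[-1]
        a = scg_array(entries[:-1] + [pilot - 1], scg)
        return scg_array([a] * (len(entries) - 1) + [pilot - 1], scg)

    print(scg_array([3]))        # 5    -- just the stand-in oracle
    print(scg_array([3, 2]))     # 11   -- a couple of pilot steps
    print(scg_array([2, 2, 1]))  # 2058 -- three entries already bite, even with the toy oracle
    # scg_array([2, 2, 2]) with the same toy oracle already produces a number far
    # too large to compute, never mind what happens with the real SCG.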

I’ll just leave that to you. Give me a call when you reach infinity.
