Research Contributions

Designing the tools to navigate software space

Other specific contributions are explained below.

Nature, the top-ranked journal, produced this video (on its own initiative) to explain my research on causal deconvolution based on algorithmic probability, drawing on our paper published in the same journal:

I was invited to write an essay on Computing in the Year 2065 for a new Springer book edited by Prof. Andrew Adamatzky. My contribution is titled Reprogramming Matter, Life, and Purpose, the first draft of which can be read here. I was also invited to write about how to be unconventional in science, which I did under the title Paths to Unconventional Computing: Causality in Complexity, available here.

Graph and Tensor Complexity

I have introduced native measures of universal complexity, i.e., measures independent of description language and of any selected feature(s), by way of algorithmic probability.

Entropy-fooling Graphs

To illustrate the problem with ad hoc measures, I constructed a network whose node degrees follow the digits of the Champernowne constant, an artificially constructed Borel-normal number, thus indicating maximal entropy.
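
As an illustration of the underlying point (a minimal sketch of the sequence behind the construction, not the graph construction itself from the paper), the following Python snippet generates the binary Champernowne sequence and shows that its Shannon entropy is near the maximum of 1 bit per digit, even though the generating program is only a few lines long, i.e., its algorithmic complexity is low:

```python
from collections import Counter
from math import log2

def champernowne_binary(n_digits):
    """First n_digits of the binary Champernowne sequence: 1 10 11 100 101 ..."""
    digits, k = [], 1
    while len(digits) < n_digits:
        digits.extend(int(b) for b in bin(k)[2:])
        k += 1
    return digits[:n_digits]

def shannon_entropy(seq):
    """Empirical Shannon entropy (bits per symbol) of a sequence."""
    counts, n = Counter(seq), len(seq)
    return -sum(c / n * log2(c / n) for c in counts.values())

if __name__ == "__main__":
    d = champernowne_binary(10_000)
    # Near-maximal entropy produced by a trivially short (low-complexity) program:
    print(f"Shannon entropy: {shannon_entropy(d):.3f} bits/digit (max is 1.0)")
```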

Algorithmic Data Analytics

Because the data of interest is usually, if not always, produced by mechanistic and algorithmic causes, I have proposed an empirical Bayesian approach to science.

Creativity in Cooking

In my personal quest to understand the process and sources of creativity, I have explored apparently disparate areas, such as cooking.

Key concepts from my research explained in a computable piece of uncomputable art:

This picture, titled “Visualising the Computational Universe: Runtime Space in a Peano Curve,” won me first prize in the Computational Imagery category of the “2011 Kroto Institute Scientific Image Competition” held in the UK. Here is the explanation: it represents a small slice of the computational universe of all possible computer programs.

Each pixel represents a computer program with empty input running on an abstract computer (a Turing machine). The color indicates the runtime of the program upon completion of its computation. The darker a pixel, the longer the program took to produce its output. White pixels are computer programs that do not produce any output because they never halt (just as happens when your computer stalls forever, sometimes making you reboot).

Red pixels are the computer programs that take the longest time, relative to other programs of the same size, before finishing (they are also called Busy Beavers). I call this color coding the computability spectrum. Programs were sorted by size, from smallest to largest (in terms of number of instructions), in such a way that one could be certain that any program would eventually be reached (including versions of MS Windows and Angry Birds that can be encoded as one-run computer programs).
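
As a hedged illustration of the kind of enumeration described here (a toy sketch under its own conventions, not the exact formalism or enumeration used to generate the image), the following Python code enumerates all 2-state, 2-symbol Turing machines, runs each on a blank tape, and records its runtime; the step cap is a practical stand-in, since deciding non-halting is impossible in general:

```python
from itertools import product
from collections import Counter

SYMBOLS = (0, 1)
MOVES = (-1, 1)      # move the head left or right
STEP_CAP = 500       # practical cutoff: true non-halting detection is undecidable

def all_machines(n_states):
    """Enumerate every n-state, 2-symbol Turing machine as a transition table.
    Keys are (state, read_symbol); values are (write_symbol, move, next_state),
    where next_state == n_states denotes the halting state."""
    entries = list(product(SYMBOLS, MOVES, range(n_states + 1)))
    keys = list(product(range(n_states), SYMBOLS))
    for choice in product(entries, repeat=len(keys)):
        yield dict(zip(keys, choice))

def runtime(table, n_states, cap=STEP_CAP):
    """Run a machine on a blank (all-zero) tape; return the number of steps
    taken to halt, or None if the step cap is reached first."""
    tape, head, state = {}, 0, 0
    for step in range(1, cap + 1):
        write, move, nxt = table[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        state = nxt
        if state == n_states:    # reached the halting state
            return step
    return None

if __name__ == "__main__":
    counts = Counter(runtime(m, 2) for m in all_machines(2))
    print("did not halt within cap:", counts.pop(None, 0))
    print("runtime histogram:", dict(sorted(counts.items())))
```

For 2-state machines under this convention, every halting run finishes within a few steps, consistent with the observation further below that halting programs tend to halt quickly relative to their size.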

Because there is no upper limit to the size of programs, the space of all computer programs is infinite but discrete, perhaps mirroring our own world. Because an enumeration of computer programs is linear (you start with program 1, then proceed to program 2, and so on), the pixels depicted in this 2-dimensional figure are arranged along a “space-filling” Peano curve, a curve that guarantees that two contiguous elements in the enumeration remain as close as possible to each other in the 2D configuration.
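
The image itself uses a Peano curve; as a sketch of the same locality-preserving idea, here is the standard index-to-coordinate mapping for a Hilbert curve (a closely related space-filling curve), which keeps consecutive indices on adjacent grid cells:

```python
def hilbert_d2xy(order, d):
    """Map a 1-D index d (0 <= d < 4**order) to (x, y) coordinates on a
    Hilbert curve filling a 2**order x 2**order grid, so that consecutive
    indices land on adjacent cells."""
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate the quadrant when needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Example: lay out the first 16 enumerated "programs" on a 4x4 grid.
print([hilbert_d2xy(2, d) for d in range(16)])
```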

The picture has the striking particularity that it is ultimately uncomputable. Imagine you point a telescope and take a picture of an object in the computational universe for which some pixel values cannot be determined, not only in practice but in principle; that is, no matter how good your camera, or how many ‘terapixels’ some future version of it may have. This slice of the computational universe was only possible because it is a tiny region of small computer programs for which all pixels can be determined by zooming in deep enough.

However, Gödel and Turing proved that if you zoomed out to a larger view of the computational universe, the picture would begin to look completely blurry, because pixel colours would not be determinable. In other words, this picture is the inverse equivalent of Hubble’s Deep Field image: the larger the field, the blurrier the picture.

Just as we only have access to the visible universe, as determined by the speed of light, we can only see a small part of the computational universe, due to similar limitations, both physical and computational (it seems we cannot build computers more powerful than the ones we already have in order to overcome these limitations). Hence, most of the computational space will remain unknowable, not only due to a limitation of resources but due to a fundamental barrier established in the digital world and extending into our own reality.

Only incremental improvements to this picture are possible. While the full picture is ultimately uncomputable and unattainable, one can make an informed guess as to what it would look like when zooming out. The theory of algorithmic probability, closely related to what is known as the Chaitin Omega number, or the halting probability, predicts the average color of the picture at a greater scale. It turns out that the picture will be mostly white: as in an ever-expanding universe, finding computer programs that halt and produce something becomes increasingly harder. Indeed, most programs will never halt, and those that do halt will do so relatively quickly in proportion to their size (see paper P10).
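
For context, the halting probability mentioned above is standardly defined, for a prefix-free universal machine U (a choice of machine not fixed in the text above), as:

```latex
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```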

Algorithmic probability also tells us that the frequency with which a program produces a given output string s decays exponentially with the algorithmic complexity of s (approximately -log_2 Pr(s)); hence most programs will produce the same, simple strings, and only the fewest will produce algorithmically random digital sequences. In other words, among the computer programs that do produce an output, most of them will be highly ordered, as if they were creating a structured digital universe of highly ordered patterns out of nothing (just like ours). Some of these programs that produce the same output may do so faster than others, but precisely how much faster a program can be is another open foundational question and one of the greatest challenges in mathematics and computer science. If it were possible to find a program considerably faster than any other, then everything could be computed in ‘reasonable’ time, regardless of the apparent difficulty of the computation.
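
The exponential decay referred to here is the one given by the algorithmic coding theorem, which connects the output probability Pr(s) (often written m(s)) with the algorithmic complexity K(s); in standard notation, again for a prefix-free universal machine U:

```latex
m(s) \;=\; \sum_{p \,:\, U(p) = s} 2^{-|p|}, \qquad K(s) \;=\; -\log_2 m(s) + O(1).
```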

 

A full technical explanation of some of this and of how the image was generated was published in this paper:

H. Zenil, “From Computer Runtimes to the Length of Proofs: With an Algorithmic Probabilistic Application to Waiting Times in Automatic Theorem Proving.” In M.J. Dinneen, B. Khousainov and A. Nies (eds.), Computation, Physics and Beyond: International Workshop on Theoretical Computer Science, WTCS 2012, LNCS 7160, pp. 223–240, Springer, 2012.

A preprint is available online here.