1 What does a computer compute?

What do we mean when we say "this computer adds two numbers"?

We actually mean "this computer does something that we easily interpret as adding two numbers".

In principle, it is the positional notation that matters, not the choice of the base, as long as the base is an integer that is greater than one. Positional notation is the most efficient way we know to name a number. That is, in positional decimal notation, 0 ("zero") is a name of the first natural number, 9 ("nine") is a name of the tenth natural number, and so on.

Positional notation is more efficient than unary notation because positional notation enables us to compare bigger numbers more quickly than unary notation does. We can compare an m-digit string and an n-digit string by looking at at most m + n digits. If we use decimal positional notation, we can see that 10 is smaller than 20 by looking at two digits. If we use unary notation, we have to look at more than 10 digits to decide the same thing, that is, to see that 11111-11111 is smaller than 11111-11111-11111-11111.

I think we can only imagine small numbers, at most somewhere around ten.

If we encode numbers as their prime factorizations, then multiplication becomes easy, but addition becomes hard, and comparing numbers becomes hard.
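A minimal Python sketch (mine, not part of the original argument) of that trade-off, assuming we represent a positive integer by a map from primes to exponents: multiplying is then just adding exponents, but adding forces us to decode back to ordinary notation first.

  # Sketch: represent a positive integer by its prime-exponent map,
  # e.g. 12 = 2^2 * 3^1  ->  {2: 2, 3: 1}.
  from collections import Counter

  def encode(n):
      """Prime-exponent representation of n >= 1, by trial division."""
      factors, p = Counter(), 2
      while p * p <= n:
          while n % p == 0:
              factors[p] += 1
              n //= p
          p += 1
      if n > 1:
          factors[n] += 1
      return factors

  def decode(factors):
      result = 1
      for p, e in factors.items():
          result *= p ** e
      return result

  def multiply(a, b):
      return a + b  # multiplying numbers = adding exponent maps (Counter +)

  def add(a, b):
      return encode(decode(a) + decode(b))  # no known shortcut: decode first

  assert decode(multiply(encode(6), encode(10))) == 60
  assert decode(add(encode(6), encode(10))) == 16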

Thus numbers are not only about counting, but also about ordering: zero comes before one; one comes before two; there is nothing between zero and one; there is nothing between one and two; and so on.

What do we mean when we say that a computer computes the function \( (D \times D, D, F) \), where \( D \) is the set of natural numbers below 16, and \( F \) is the mapping \( (x,y) \mapsto (x+y) \bmod 16 \) (addition modulo 16)?

That is not the only thing computers do. They also consume electrical energy, produce heat, and so on, but for programming, we only consider what is meaningful to us.

2 Introduction

2.1 What is the problem?

The problem is defining the verb "compute", from which the nouns "computer" and "computation" follow trivially.

2.2 What are our approaches?

In "Linguistic analysis of computing" chapter. Analyze what it means to compute, from how we use the word.

In "Computers" chapter. Unify analog and digital computers. Find the common things between them. Craft a definition of "compute" that satisfies both of them.

In "Computing" chapter. Generalize calculation to computation. Generalize a narrow but undoubted definition of the computation we routinely do (that is calculation).

2.3 Linguistic conventions

In this document, I always use "may" in the epistemic sense, and never in the deontic sense.1 Thus "may not" and "does not have to" have the same meaning.

3 Linguistic analysis of computing

3.1 English suffixes -er, -ate, and -ion

We only need to define what it means to compute, and the meaning of computers and computation will follow.

A computer is a thing that computes.

Computation is the process of computing or the result of computing.

3.2 Concrete subject (computers)

The subject of "compute" is a concrete object (with material existence). Examples of such subjects are some machines and some animals2.

3.3 Abstract object (computees)

The object of "compute" is an abstract object (with no material existence). For example, it makes sense to compute a number, but it does not make sense to compute a chair.

3.4 Adjectives that modify computation

3.4.1 Computation is mostly irreversible

Reversible computation relates information loss, entropy, and heat. The adjective "reversible" implies that a computation may have two directions (forward and reverse), and that most computation is irreversible. Addition is an example: from several summands we can know the sum, but from the sum alone we cannot know the summands used to produce it; we lose some information in that computation.

3.4.2 Computation may be interactive

Is interactive computation computation3?

Must algorithms describe only terminating computations? An operating system describes an interactive computation. Computers do not exist in a vacuum. Computers interact with reality.

4 Computing?

4.1 A computation is not only a discrete model of reality

Analog computers invalidate this definition.

Our knowledge of physics is a model of reality.

Computation is a circumstantial/occasional/special discretization of the laws of physics. We can model an electronic computer containing 100 bipolar-junction transistors as 300 numbers, each representing voltage at each terminal of each transistor. We can also model the same computer as 100 bits.

Computation is a discrete sequential logical model of reality.

A machine simply acts according to the laws of physics, but we interpret some of such acts as a computation. This implies that a computation is what we think a physical system does, not what the system actually does. Thus, a computation is a discrete sequential model of what some physical systems do.

Computation is our way of thinking about what the machine does. We invent the concept of computation because we are eager to see patterns everywhere. Our understanding of computation enables us to manipulate machines into doing what we want.

Computation has no material existence. What exist materially are machines acting according to the laws of physics.

A computation is an explanation of what a machine does. If machine A computes Y from X, and machine B computes Z from Y, then the machine built from those machines computes Z from X.

However, if objective reality exists, then the machine will still compute, regardless of whether we exist to describe what the machine does.

4.2 How do we know something computes?

We know something computes iff we can explain/describe its behavior with a finite description.

A computer's behavior may be too complex for us to describe. But our inability to describe it doesn't mean it doesn't compute.

4.3 A computation does not have to end

A computation does not have to end. A Turing machine may compute without terminating.45 For example, a machine may compute 2/3 (whose binary expansion 0.10… does not terminate) by repeatedly printing 10 forever.
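A minimal Python sketch (my illustration, not the author's) of such a non-terminating computation: a generator that emits the binary digits of 2/3 forever, so any finite prefix is available after finitely many steps.

  from itertools import islice

  def binary_digits_of_two_thirds():
      numerator = 2  # we are expanding 2/3 in base 2
      while True:    # the loop never terminates on its own
          numerator *= 2
          digit, numerator = divmod(numerator, 3)
          yield digit  # 1, 0, 1, 0, ...

  # Any finite prefix is computed in finite time:
  print(list(islice(binary_digits_of_two_thirds(), 8)))  # [1, 0, 1, 0, 1, 0, 1, 0]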

5 Computing

5.1 Computing as knowledge generation

This is a summary of Wiedermann & van Leeuwen 2014 [27], 2015 [29], 2017 [28], and Van Leeuwen 2015 [17].

Computation is knowledge generation.

Knowledge is relative: one's knowledge is another's noise.

This view unifies several other views of computation.

5.2 Computing as doing mathematics

5.2.1 Narrow but undoubted examples of computation

What can we infer from these examples?

Adding two numbers is a computation: To add two numbers is to compute the sum of two numbers. I can imagine gathering two cats and three cats into five cats, but I cannot visualize ten cats.

Adding two numerals is a computation: A numeral is a concise representation of a number; numerals enable us to indirectly compute bigger numbers. I add two numbers below ten quickly because I memorized the 10x10 addition table in primary school. I add two numbers between 10 and 10,000 by long addition of their numerals: I visualize a positional decimal representation of the numbers, that is, the numeral, not the quantity itself; I find it hard to visualize more than five things at once. I refuse to add big numbers.
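A minimal Python sketch (mine, not from the original) of long addition operating purely on decimal numerals: it juggles only digit characters and a carry; the quantities themselves never appear.

  def long_add(x: str, y: str) -> str:
      digits, carry = [], 0
      for a, b in zip(reversed(x.zfill(len(y))), reversed(y.zfill(len(x)))):
          carry, d = divmod(int(a) + int(b) + carry, 10)
          digits.append(str(d))
      if carry:
          digits.append(str(carry))
      return "".join(reversed(digits))

  assert long_add("9471", "386") == "9857"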

Producing a number from a formula is a computation. A narrow but undoubted example of computing is substituting numbers into a mathematical formula, producing an expression, and evaluating the expression into a number. The input is the variable assignments. The program is the formula. The computer is the human. The output is a number.
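A small Python sketch of this narrow example (the formula and the variable assignment below are made up for illustration): substitute the assignment into a formula, then evaluate the resulting expression into a number.

  import ast, operator

  OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

  def evaluate(formula: str, assignment: dict) -> int:
      """The input is the assignment, the program is the formula, the output is a number."""
      def walk(node):
          if isinstance(node, ast.BinOp):
              return OPS[type(node.op)](walk(node.left), walk(node.right))
          if isinstance(node, ast.Name):
              return assignment[node.id]
          if isinstance(node, ast.Constant):
              return node.value
          raise ValueError("unsupported expression")
      return walk(ast.parse(formula, mode="eval").body)

  print(evaluate("x*x + 3*y", {"x": 4, "y": 5}))  # 31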

It seems that the goal of a computation is to arrive at a number (or a numeral). It seems that computation is about transforming symbols.

5.2.2 Computing with tools and machines

Some tools help us compute.

5.2.3 To compute is to do mathematics?

The way we use the verb "compute" implies that computation is a model of how reality does some mathematics.

What do we mean by "doing mathematics"?

Something computes iff we think it does some mathematics. To compute is to do some mathematics. Mathematics is not only arithmetic, but also logic, etc.

But what about analog computers, such as an operational amplifier that "adds two real numbers" or "integrates a real function"?

An analog computer can integrate a real function. A digital computer cannot (but can approximate).

Are there programmable analog computers?

What does a programmable analog computer look like?

How would analog computers have conditionals, loops, and other constructs?

5.3 Computation from a logic point of view

5.3.1 Problem as logical formula

A problem is a question, a logical formula.

"Problem" comes from Greek "problema" which means "a task, that which is proposed, a question".6 Therefore, a problem is a question, or, formally, a logical formula.

5.3.2 Formula, input, output, model, relation

A problem is a formula. For example, the problem "Given an \(x\), what is \(x+x\)?" is the formula \( x+x = y \) in first-order logic with equality and some arithmetic. Note that some logic is embedded in English.7

Some common problem shapes:
  name              shape           input   output
  decision problem  \( p(x) \)      \(x\)   (none)
  search problem    \( p(x) \)      (none)  \(x\)
  function problem  \( f(x) = y \)  \(x\)   \(y\)

A problem may have inputs and outputs. An input of a problem is a free variable in the formula whose value is given. An output of a problem is a free variable in the formula whose value is sought.

Another example: the problem "Is the sum of two even numbers even?" is the formula \( \forall x \forall y \, (E(x) \wedge E(y) \to E(x+y)) \).

What does it mean to solve a problem (answer a question)? Solving a problem is answering a question. Answering a question corresponds to proving a formula. Answering a question corresponds to finding a model of a formula?

A problem may be modeled by a relation between questions and answers. For example, the problem \( \forall x \exists y : x+x = y \) is modeled by the relation \( \{ (0,0), (1,2), (2,4), \ldots \} \) and is also modeled by the relation \( \{ (\epsilon,\epsilon), (1,11), (11,1111), \ldots \} \).

Do not conflate a problem and a model of it. A problem is a formula, not a relation.

Compare various definitions of "problem"89.

A problem is [7]

Problems can be composed as formulas can be composed.

5.3.3 Reduction

Sometimes we can reduce a problem to another problem?

5.3.4 Computation as question-answering

Computation is answering a question.

What is the relationship with Wiedermann & van Leeuwen 2015 knowledge generation [29]?

What is the relationship between computation and answering questions?

5.4 TODO Summarize

Zenil 2014 [30]

Horsman et al. 2014 [12]

Horswill 2012 [13]

Denning 2010 [8]

Ladyman 2009 [16]

Copeland 1996 [6]

6 Computing, representation, and complexity

Synonyms of representation: encoding, model.

A machine "computes \(y\) from \(x\)" iff the machine ends with a representation of \(y\) if the machine is started with a representation of \(x\). Alas, this definition has two big problems:

  • Must a computation be started by something outside the computer?
  • What is representation?

6.1 Computation as representation transformation

To compute is to transform a representation into an easier one.

6.2 What is a natural ordering (standard ordering)?

The natural ordering of the natural numbers is the transitive closure of the successor relation \( \{ (n, S(n)) ~|~ n \in \Nat \} \) described by \(\forall n (n < S(n))\).

It is the simplest ordering, the one with the shortest description.

Why is it called "standard"?

6.3 What is a reasonable representation?

A representation is reasonable iff, with it, computing the natural ordering has linear time complexity.

A reasonable encoding is an encoding that is easy to compute and is easy to invert.

A reasonable encoding has a finite description.

6.4 Reification fallacy: conflating a thing and its representations

Do not conflate a thing and its representations.

First, we undo the chronic ontologically-sloppy habit of conflating a thing and a representation of the thing. "123" is not a number, but a representation of a number. We cannot manipulate numbers physically because they do not have material existence. We can only manipulate the physical representations of those numbers. When we "add two numbers", we are actually manipulating the representations of those numbers in a way that corresponds to adding those numbers. Formally, if \(e : \Nat \to \{0,1\}^*\) is an encoding scheme, then \( e(x+y) = e(x) +_e e(y) \), where \(+\) is the operation that we think we do, and \(+_e\) is the operation that we actually do. We think we are adding numbers, but we are actually writing symbols on paper or juggling symbols in our mind.
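A minimal Python sketch (mine, not from the original) of one concrete instance of \( e(x+y) = e(x) +_e e(y) \): take \(e\) to be unary encoding, so that \(+_e\), the operation we actually perform on representations, is string concatenation.

  def e(n: int) -> str:
      return "1" * n   # the representation of n, not n itself

  def add_e(u: str, v: str) -> str:
      return u + v     # what we actually do: juggle symbols

  for x in range(5):
      for y in range(5):
          assert add_e(e(x), e(y)) == e(x + y)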

Then, we un-conflate a program and a machine running the program. A program does not compute; it is the machine that computes. A program cannot do anything on its own; a machine has to run it. When we say "a program computes a function", we actually mean that running the program on the machine causes the machine to compute that function.

An algorithm describes how to compute something but does not compute what is described, because an algorithm is a mathematical object with no material existence. An algorithm describing how to calculate a number does not itself calculate the number, in the same way a recipe describing how to cook an egg does not itself cook the egg. A recipe has no material existence; what has material existence is the physical medium (such as ink and paper) that is used to describe that recipe in the symbols we agreed upon.

Unfortunately, the ontologically correct thing is very wordy, so I write in conflated manner. For example, when I write "this program adds two numbers", what I really mean is "running the program causes the machine to manipulate two representations in a way that corresponds to adding two numbers". Fortunately, the only time we have to care about this ontological issue is when we are talking about the foundations of computation.

6.5 Representation affects complexity

Encoding a natural number \(n\) in unary notation takes \(n\) symbols. Encoding the same number in binary notation takes approximately \(\log_2(n)\) symbols.

Adding two natural numbers \(m\) and \(n\) takes \(m+n\) steps in unary notation, but only approximately \(\log(\max(m,n))\) steps in positional notation.
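A tiny Python check (illustrative only) of how representation length grows: unary length grows like \(n\), while decimal and binary lengths grow like \(\log n\).

  import math

  for n in (5, 50, 500, 5000):
      print(n,
            "unary:", n,                 # n symbols
            "decimal:", len(str(n)),     # about log10(n) symbols
            "binary:", n.bit_length(),   # about log2(n) symbols
            "log2(n) =", round(math.log2(n), 1))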

Why don't we encode a number as its prime factorization, to simplify multiplication while complicating addition?

What do we formally mean by "reasonable encoding"?

Why do we assume that numbers are encoded in positional notation10, not unary notation11?

My guess: What we mean by reasonable encoding is an order-preserving homomorphism:

\[\begin{align*} a < b &\iff e(a) <_e e(b) \\ a = b &\iff e(a) = e(b) \end{align*} \]

A homomorphism preserves structure. But which structure?

We may encode the natural numbers as the bitwise negation of the base-2 representation: 1, 0, 01, 00, 011, 010, etc.

6.6 I forgot

I asked this question12 and then I forgot it, and I found it again four years later. Where should I put that? We should not let D.W.'s painstakingly written answer be in vain.

7 Computer, both analog and digital

7.1 Short partial history of computers

In 1640, a computer is a human calculator.13 In 1897, a computer is a mechanical calculator.14 In 1945, a computer is an electronic calculator.15 All those computers ran approximation algorithms to generate look-up tables of values of transcendental functions. There are also analog computers made with operational amplifiers16, as opposed to digital computers made with logic gates17.

As we build stronger computers, we begin trying to simulate reality, and we wonder whether the Universe is just an extremely powerful computer. The world progressed explosively, despite being built on increasingly complex computer systems with ever-more undefined behaviors, occasionally killing people. However, modernization does not change the essence of computation.

In the 1970s, a computer was a desktop computer, and calculation gained a numerical connotation: a calculator is a computer specialized for numerical problems, and thus calculation is numerical computation. In 2019, a human calculator is a human who can mentally manipulate digits quickly and correctly.

7.2 What is common to all computers?

It must be computation, right?

Every computer has a finite set of primitive operations.

Both analog and digital computers compute, but what and how? Let us compare an analog adder and a digital adder.

An analog inverting adder is modeled as a network of operational amplifiers and resistors.1819 The inputs are \(v_1\) and \(v_2\). The output is \(v_3\). Each of the \(v_i\), \(r_f\), and \(g_i\) (where \(g_i = 1/r_i\) is the conductance of the \(i\)-th input resistor) is a random variable (which implies a probability distribution), not a number, due to physical imperfections. Thus an analog computer computes inexactly. The operation of a two-input inverting adder is modeled as: \[ v_3 = - r_f \cdot \left( g_1 \cdot v_1 + g_2 \cdot v_2 \right) \]
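A small Python sketch of why the output is a random variable (the component values and the 1% tolerance are my assumptions, purely illustrative): every component value and input voltage carries some tolerance or noise, so repeated runs scatter around the ideal \(-(v_1+v_2)\).

  import random

  def inverting_adder(v1, v2, rf=10e3, r1=10e3, r2=10e3, tol=0.01):
      jitter = lambda x: x * random.gauss(1.0, tol)  # ~1 % component tolerance
      g1, g2 = 1 / jitter(r1), 1 / jitter(r2)
      return -jitter(rf) * (g1 * jitter(v1) + g2 * jitter(v2))

  samples = [inverting_adder(1.0, 2.0) for _ in range(5)]
  print(samples)  # values scattered around -3.0 volts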

A digital adder is modeled as a network of logic gates.20 A number is represented as a bit string. The inputs are \(a\) and \(b\); each is \(n\) bits long. The output is \(s\). The index \(k\) goes from 0 to \(n-1\). The operation of a ripple adder is modeled as:

\[\begin{align*} s_k &= (a_k \oplus b_k) \oplus c_{k-1} \\ c_k &= (a_k \wedge b_k) \vee \left( c_{k-1} \wedge (a_k \oplus b_k) \right) \\ c_{-1} &= 0 \end{align*} \]
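A minimal Python sketch (mine, not from the original) of an \(n\)-bit ripple-carry adder over bit lists, least significant bit first, mirroring the equations above; dropping the final carry is what makes the result addition modulo \(2^n\).

  def ripple_add(a, b):
      assert len(a) == len(b)
      s, carry = [], 0
      for ak, bk in zip(a, b):           # k = 0 .. n-1
          s.append(ak ^ bk ^ carry)
          carry = (ak & bk) | (carry & (ak ^ bk))
      return s                           # the final carry is dropped

  def to_bits(n, width):                 # test helpers, LSB first
      return [(n >> k) & 1 for k in range(width)]

  def from_bits(bits):
      return sum(bit << k for k, bit in enumerate(bits))

  assert from_bits(ripple_add(to_bits(11, 4), to_bits(7, 4))) == (11 + 7) % 16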

What is common: Computation is the operation done by a computer. The above equations are models of computation. Both the analog adder and the digital adder compute addition. But the analog one models the addition of two random variables; the digital one models the addition of natural numbers modulo \(2^n\).

7.3 Unified theory of analog and digital computers?

Analog computers are computers too. A theory of computation must explain both analog and digital computers. The continuous non-symbolic nature of analog computers conflicts with the discrete combinatorial nature of logic?

Example: an analog adder computes the weighted average of \(n\) real numbers in constant time and \(O(n)\) space. No digital computer can add \(n\) real numbers in constant time!

Blum 2004 [2].

Blum et al. define a machine model over a ring. They generalize the Turing machine tape cell from bits to ring elements?

Blum et al. 1989 [4].

Blum et al. 1998 [3] aims to develop a theory of real computation.

The article [2] can serve as an introduction to the book [3].

Blum et al. relate Cook-Levin satisfiability and Hilbert's Nullstellensatz. Theorem BSS89: P_R = NP_R iff HN_R is in P_R, where HN_R is Hilbert's Nullstellensatz over the ring R.

7.4 Analog vs digital is continuous vs discrete

Are the continuous world and the discrete world irreconcilable?

Both analog and digital computers are made with transistors, but analog computers operate the transistors in their active (linear) region, whereas digital computers switch the transistors between cutoff and saturation. Analog to digital is knob to switch, that is, continuous to discrete. Analog computers use transistors as amplifiers. Digital computers use transistors as switches.

What does digital do better than analog? Temperature affects analog computers more than it affects digital computers. Digital signals are more immune to noise. Digital computers have a wider operating temperature range.

What does analog do better than digital? Analog computers degrade gracefully: computation gradually gets more and more wrong as the computer goes out of its designed operating conditions. Digital computers degrade abruptly: computation suddenly gets chaotic as the computer approaches a limit of its designed operating conditions.

7.5 Analog computing?

Where can we find more about analog computing? Most computers are digital, but we need to account for analog computers if we are to define what computation is.

7.5.1 How do we branch on analog computers?

Conditionals?

Comparator and multiplier?

An analog computing program is a dataflow program? The computing primitives are the basic amplifier arrangements?

Asynchronous circuits?

8 Computee

What may be computed?

8.1 Some sets

A machine "computes the set \(D\)" iff, for each \(x \in D\), the machine can determine the truth of \(r(x) \in R(D)\), where \(r\) is the computation's encoding scheme, and \(R(D) = \{ r(x) ~|~ x \in D \}\).

8.2 Some sequences

A machine "computes the (infinite) sequence \(x\)" iff the machine computes every finite prefix of \(x\). That means: given ever-longer time to run, the machine computes an ever-longer prefix of the sequence. Thus, a computation does not have to end; it may run forever. The sequence \(x\) can be identified by the function \(f : \Nat \to A\), in the way \(x_k = f(k)\).

8.3 Some digits?

Turing 1937 [24] defines a computable number as a number whose digits can be generated by a machine. Thus, to compute a number is to compute the sequence of its digits, using an algorithm (a finite description).

8.4 From nothing?

A machine that generates a sequence computes something from nothing.

8.5 OS?

What does an operating system compute?

8.6 What?

Does a quantum computation consist of discrete steps?

Immerman 1999 [14], in Definition 2.4 (page 25), defines what it means for a Turing machine to compute a query.

8.7 Some functions

A machine "computes the function \(f:D\to C\)" iff, for each \(x\in D\), the machine computes \(f(x)\) from \(x\). But a mathematical function may be infinite, whereas a machine is finite. We often ignore ontology and say that a machine computes the function \(f\) to mean that the machine computes an interesting finite subfunction of \(f\). No machine can manipulate every number, because there is always a number that is too big to physically represent. It is physically impossible to manipulate extremely big natural numbers. For example, no machine truly implements the addition of every possible two natural numbers, because it is physically impossible. We can describe an extremely large number, but we can only visually imagine five to nine things.

What is a function?

We must distinguish relations and expressions. Which of these is a function: \(\{(0,1),(1,2),\ldots\}\) or \(x \mapsto x+1\)? Neither. A function \(f : D \to C\) is a triple of sets \((D,C,F)\) where \(F \subseteq D \times C\), and \(f(x)=y\) means \((x,y) \in F\), and \(\forall x \forall y \forall y' \, ( (x,y) \in F \wedge (x,y') \in F \to y = y' )\). See also Rapaport 2005 [21], section 7.3.1.3 ("Interlude: functions described as machines"), page 239.
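A minimal Python sketch (mine; the sets below are made up) of a function as the triple \((D,C,F)\), with a check that \(F\) is total on \(D\) and single-valued:

  def is_function(D, C, F):
      in_bounds = all(x in D and y in C for (x, y) in F)
      single_valued_and_total = all(
          len({y for (x2, y) in F if x2 == x}) == 1 for x in D)
      return in_bounds and single_valued_and_total

  D, C = {0, 1, 2}, {0, 1, 2, 3}
  print(is_function(D, C, {(0, 1), (1, 2), (2, 3)}))          # True: successor on D
  print(is_function(D, C, {(0, 1), (0, 2), (1, 2), (2, 3)}))  # False: 0 has two images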

8.8 A relation is a triple of domain-codomain-pairing

A function is extensionally described by showing each pairing in the function. Thus this only works for finite functions, because we do not have the time to write down each pairing in an infinite function. The magic ellipsis is not an extensional description. An example of such ellipsis is the triple dots "\(\ldots\)" in \(0,1,2,\ldots\). Such ellipsis means "and so on".

See also Rapaport 2005 [21], section 7.3.1 ("What is a function?") and its descendants, from page 236.

9 Algorithm

9.1 Some short partial history of algorithms

In 1690, an algorithm is an Arabic system of computation?21 The word is a historically and interculturally mangled form of the name of Muhammad ibn Musa al-Khwarizmi22, who lived in the 8th and 9th centuries.

9.2 An algorithm is a description of how to compute something

An algorithm is a finite description of how a computer computes something. In medieval times, an algorithm was a numerical approximation scheme to be run by humans. Anyone who knows basic arithmetic can mindlessly carry out an algorithm and produce a correct answer without any understanding of why or how the algorithm works.

An algorithm restates a function as a composition of primitives.

Some note about ontology: The long addition algorithm does not describe how to add two numbers \(x\) and \(y\). It describes how to manipulate two representations \(e(x)\) and \(e(y)\) in order to produce a third representation \(e(x+y)\) that represents the sum of \(x\) and \(y\).

An approximation scheme describes a number iff the sequence of approximations converges to the number. The approximation may never reach the number, but it always gets closer.

An algorithm is a finite description. Description implies language, presumably a formal language. Language implies syntax and semantics. Thus an algorithm is a string in a language.

There are many formal languages: Turing machines, Post systems, primitive recursive arithmetic, lambda calculus, ML-family languages, other computation models23, etc. There are lots of computation models, each capturing a different aspect, but most are equally powerful.

The language should have a sensible cost model so that we can define space complexity and time complexity.

Rapaport 2015 [21] p. 269 mentions Moschovakis's idea of algorithms as recursors. See Vardi 2012 [26], Gurevich 2011 [10], Gurevich 2012 [9], Moschovakis 2001 [19]. See also Japaridze's computability logic2425. It is a game-theoretic model of computation.

A finite description may describe an infinite (non-terminating) computation.

10 Algorithm and problem

10.1 Algorithm is how, problem is what

An algorithm describes how to compute something.

A problem describes what to compute.

See also Rapaport 2005 [21] page 242, about the difference between formulas and algorithms.

10.2 Algorithm, machine, implementation, and computation are what?

If algorithm A describes how to compute C, and machine M implements algorithm A, then machine M computes C.

Are there undescribable computations?

11 Machine

11.1 Tacit assumptions about operating conditions

There are many machine models26. All of them imply some operating conditions: no electrical disruptions, fires, cosmic rays, and so on. All of them also imply a sequence of operations.

11.2 Machine model

A machine model is a formal system that represents the relevant aspects of the internal states of a computing machine.

11.3 Machine model vs computation model?

"Computation model" or "machine model"?

Do we care about the machine or the computation done by the machine?

Do we care about what a machine is or what a machine does?

The Turing model represents what a machine is, not what a machine does.

12 Mathematical models of computation

12.1 Computing a function with respect to a model

Now we define "to compute the function \(f : D \to C\)" with respect to the computation model \((D,C,S,d,c,t)\) where \(d : D \to S\), and \(c : C \to S\), and \(t\) has arity \((S,S)\). The computation model is a three-sorted structure. The functions \(d\) and \(c\) together bridge two things: (1) our high-level thought of what the machine computes, and (2) the logical system that abstracts the machine's internal state and computation. Let \(S\) be the computation model's domain of discourse, that is, the set of mathematical objects, each a simplified representation of a machine's internal state. Let \(t\) be a relation symbol of arity 2. The relation \(t\) represents the state transition relation. Define the transitive closure of \(t\) as \(T(x,y) = (TC(t))(x,y) = t(x,y) \vee \exists z (t(x,z) \wedge T(z,y))\) where \(TC\) is the transitive-closure operator.

Machine \(M\) computes function \(f : D \to C\) according to computation model \((D,C,S,d,c,t)\) iff \[ compute(M,f) = \forall x : T(d(x), c(f(x))) \]

We can focus on the computation model, in particular on the substructure \((S,t)\).

A machine computes the function \(f : D \to C\) according to the computation model \((S,c,d,t)\), iff, for all \(x \in D\), it is true that \(T(d(x),c(f(x)))\), that is, the machine starts at state \(d(x)\) and finishes at state \(c(f(x))\).
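A minimal Python sketch (a toy model of my own, not the author's) of this definition: states, a transition relation \(t\), its transitive closure \(T\) computed as graph reachability, and a check that every start state \(d(x)\) reaches the finish state \(c(f(x))\).

  from collections import deque

  def reaches(t, start, goal):
      """T(start, goal): goal is reachable from start by one or more t-steps."""
      frontier, seen = deque([start]), set()
      while frontier:
          s = frontier.popleft()
          for (u, v) in t:
              if u == s and v not in seen:
                  if v == goal:
                      return True
                  seen.add(v)
                  frontier.append(v)
      return False

  def computes(t, d, c, f, D):
      return all(reaches(t, d(x), c(f(x))) for x in D)

  # A toy machine intended to double its input modulo 4.
  D = range(4)
  d = lambda x: ("start", x)
  c = lambda y: ("done", y)
  t = {(("start", x), ("work", x)) for x in D} | \
      {(("work", x), ("done", (2 * x) % 4)) for x in D}
  print(computes(t, d, c, lambda x: (2 * x) % 4, D))  # True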

A computation model is a logical system that has a domain of discourse representing machine internal state, and has an arity-2 relation symbol \(t\) representing the state transition relation.

TODO [25]

12.2 Encoding scheme

Now we define encoding.

An encoding is a representation of something. A representation is not the represented, but a representation behaves in the way the represented does. Formally, an encoding scheme is a computable bijective function \(e : D \to A^*\) where \(A\) is an alphabet. Thus, an encoding scheme is an algorithm that describes a bijective function.

If "algorithm" and "encoding scheme" depend on each other, then there is only one logical conclusion: Algorithm and encoding-scheme are the same thing.

12.3 Computable, algorithm, finite description

Function \(f\) is computable by formal system \(S\) iff \(S\) has a finite description of \(f\).

An algorithm solves a problem. A problem can be solved by many algorithms with different resource usage characteristics.

An algorithm is a finite description of what a machine is supposed to do.

12.4 Is computation inherently sequential? Computation as sequence of steps

In a Turing machine, a step is a state transition that consists of reading the tape cell, writing the tape cell, moving the tape head, and changing the internal state. In \(\lambda\)-calculus, a step is a \(\beta\)-reduction of an expression composed from more primitive subexpressions. These examples suggest that we can define computation as a sequence of steps.

Each of those models is a special case of deciders.

12.5 Complexity

The worst-case time complexity27 of machine \(m\) for input \(x\) is \(t(m,x)\), the number of steps \(m\) makes between starting and halting. The worst-case time complexity of \(m\) for input size \(n\) is \(T(m,n) = \max_{|x| = n} t(m,x)\). We can also write asymptotic statements such as \(T(m,n) \in O(f(n))\).
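A tiny Python sketch (a toy of my own, not the author's) of \(T(m,n) = \max_{|x|=n} t(m,x)\): here \(t\) counts the steps of a linear search for a 0 bit, and \(T\) maximizes over all inputs of size \(n\).

  from itertools import product

  def t(x):                    # steps a linear search for a 0 bit makes on x
      steps = 0
      for bit in x:
          steps += 1
          if bit == 0:
              break
      return steps

  def T(n):                    # worst case over all inputs of size n
      return max(t(x) for x in product([0, 1], repeat=n))

  print([T(n) for n in range(1, 6)])  # [1, 2, 3, 4, 5], so T(n) is in O(n)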

An algorithm implies a machine.

The complexity of a problem is the worst-case time complexity of the most efficient algorithm solving that problem.

A machine \(M\) is a transition relation \(T\) (an acyclic binary relation). \[ T(x,y) = \text{\(M\) can state-transition from \(x\) to \(y\).} \]

\(M\) computes \(P\) iff a subgraph of the shortcut of \(T\) is isomorphic to \(P\). (If \(T\) were cyclic, this definition would fail.)

Related: graph isomorphism, subgraph isomorphism problem.

A deterministic machine corresponds to a functional (single-valued) transition relation.

\(G\) accepts \(v\) iff \(F^\infty(\{v\}) = \emptyset\) where \(F\) is the graph's fringe function. The language recognized by \(G\) is the largest \(L \subseteq V\) such that \(F^\infty(L) = \emptyset\).

A Turing machine is \((C,I,f)\) where \(C\) is countable and \(f\) is recursive.

https://en.wikipedia.org/wiki/Register_machine

Example: a state of a Turing machine is \((c,l,h,r)\) where \(c\) is the control state (the state of the finite control), \(l\) is the tape content to the left of the head, \(h\) is the tape content at the head, and \(r\) is the tape content to the right of the head.

12.6 Digressions

12.6.1 Pullback

We can model the apparent function computed by the machine as \(g : A^* \to A^*\) where \(g(e(x)) = e(f(x))\). We then do some algebraic manipulation:

\[\begin{align*} g(e(x)) &= e(f(x)) \\ (g \circ e)(x) &= (e \circ f)(x) \\ g \circ e &\equiv e \circ f \end{align*} \]

An equation of the shape \(g \circ e \equiv e \circ f\) says that a square of functions commutes; pullbacks28 in category theory are a special kind of commuting square.

12.6.2 Cheating

"Cheating" with an unreasonable encoding is a common error in P vs NP "proofs".

12.7 Rant: The sad state of computational complexity texts?

It is philosophically appalling that most computational complexity texts readily show what a problem is represented as, but never clearly and formally define what a problem is. It is appalling that they spend hundreds of pages discussing something undefined.

13 Fields of study

13.1 Computation theory

Computation theory spans philosophy, physics, and mathematics. The mathematics part29 studies logical models of computation, not computation itself. Which part of computation theory are we interested in? This document is mostly about the mathematics part, because there is a one-million-dollar prize for solving the P vs NP problem. See Piccinini 2017 [20] if you are interested in the philosophy and physics parts.

1999 Immerman [14], 2009 Arora & Barak [1], 2009 Marek & Remmel [18], 2002 Boolos, Burgess, & Jeffrey [5], 1987 Rogers [15].

Where are the researchers? There is ACM Special Interest Group on Logic and Computation (SIGLOG)30. There is also Computational Complexity Conference31.

We can think of computation theory as refining these hierarchies: automaton power hierarchy32, problem complexity hierarchy, logic strength hierarchy, Chomsky language hierarchy33, arithmetical hierarchy34, formal system power hierarchy35, and so on. They are related to each other. We want to find out which feature gives which power.

What is the difference between descriptive complexity theory and implicit complexity theory36?

13.2 Computer science

Rapaport 2005 [21] surveys various definitions and their problems. It summarizes the discussion in page 154 (3.15.4 Conclusion).

Computer science37 is not science (the application of the scientific method to make falsifiable theories).

Scott Schneider defines "computer science" as "everything to do with computation, both in the abstract and in the implementation".38

Is CS a branch of math?39

If science is a Latinate synonym of the Germanic "knowledge", then computer science is a synonym of "computer knowledge".

14 Digressions

14.1 The suffix -er works with all verbs except modals

The suffix -er works with both Germanic and Latinate verbs. For example, a (Germanic) reckoner is one who reckons, and a (Latinate) computer is one who computes.

The suffix -er works with neologisms. Once we accept that V is a verb, we readily accept that Ver is a noun that means one who does V. For example, after we have accepted that "google" is a verb, we readily accept that a "googler" is one who "googles". Conversely, once we have accepted that Ver is a noun, we readily accept that V is a verb that is done by a Ver. For example, "burgle" is backformed from "burglar".40

The suffix -er sometimes works as a backforming "anti-suffix": we can sometimes form a verb V from a noun Ver by removing the -er. For example, it is easy to imagine backforming "cadave" from "cadaver", and it is not hard to imagine that "to cadave" means "to be a dead body" or "to behave like a dead body".

Indeed the suffix -er seems to work with all verbs except modals (may, might, can, could, shall, should). Hence we say that the suffix -er is productive.

14.2 The suffix -ion expects Latinate verbs

Germanic verbs take -ing instead of -ion. Example: Germanic "eat" and Latinate "manducate"41, and Germanic "eating" and Latinate "manducation"42. Thus, in this case, Germanic -ing is Latinate -ion.

I think generally Latin -re → English -te → -tion.

This is not how those words were actually historically imported, but we can think of these words this way.

manducare → manducate43 → manducation

computare → computate4445 → computation

renovare → renovate46 → renovation

It seems that the historical path was longer: Latin infinitive → first-person singular present indicative → past participle → nominalization -ionem / -ionis → English drops -em / -is → backform -ation to -ate.

Latin gestare → gesto → gestatio → English gestation → gestate

Latin renovare → renovo → renovatio → English renovation → renovate

Latin computare → computo → computatio → English computation → computate

formare → formo → formatio → formation

English is a mess of doublets.

14.3 Genus-differentia definition of computation?

A computation is (what) that (what)?

Process? Activity? Mechanism?

A program describes the computation performed by a machine. A program modulates the machine; it manipulates computational resources to compute something.

14.4 Computation as information transformation?

What is information?

A computer reduces information? Transforms information?

Computation is transformation of information?

14.5 Computation as model/concretion?

Computation is running a program on a machine.

It seems that the defining features of computation are conditionals and repetition.

Program is a model.

14.6 A machine is what?

A machine is a tool that computes what the machine is designed for. A machine has material existence. It is a physical implement.

Digression: In file:philo.html, I write that a machine is a tool, that is something that we use to extend our self (what we control).

14.7 Even more historical?

Leibniz used the term "calculation"? Turing used "effective calculability" to mean "algorithmic"? Computation is calculation? It's just following rules?

14.8 Machine, automaton, robot are what?

In 1540, a machine is any structure or device.47 The word "machine" may have come from a Proto-Indo-European word that means "that which enables".48 Some machines are programmable. Such a machine implements several functions, one of which can be chosen by a program that is part of the machine's input. The program chooses which function the machine shall compute.

In 1610, an automaton is a self-acting machine.49 Thus an automaton has an energy source or is connected to an energy source that enables the automaton to run with minimal human intervention.

In 1923, the English word "robot" came from the Czech word "robotnik" that means "forced worker".50

14.9 The implicit agency of -er might have impelled us to invent God

Why is -er so productive? Because every sentence that has a verb can always have a subject. Because it is always possible, if not necessary, for a verb to have a doer. Because every action has an agent, because everything happens because an agent does it; perhaps this is a tacit fundamental assumption of our logic, or perhaps this reflects how the Universe works? Our language implies that the subject causes the action or outcome described by the verb? A Ver is one who Vs, that is, one who causes a Ving to be done, and thus there is an implicit agency in each Ver. Recall that an agency is an ability to cause.

There is a problem: if we assume that, then the sentence "X exists" implies that X causes its own existence, but it seems problematic for something to cause itself51? Did we invent God because we impose, through our languages, that everything has a cause, that every verb has a doer? We invented God because we have evolved to crave explanations for everything, because craving explanations promotes survival? We want to explain everything, but our finiteness precludes us from explaining everything.

It is curious that Christians call Jesus "Word (logos) of God"5253, and the Greek word "logos" also begets the English word "logic".

Spreading religion requires language, unless our ancestors were telepathic.

Aren't we rambling too much? We are merely trying to define "compute", which requires us to traverse linguistics, and somehow we arrived at theology? It is trivial to get lost in philosophy, as each question readily begets more questions. How do we find the way out?

14.10 God in Cantor's paradise is what/where?

If a god resides in a paradise, and Cantor has made us a paradise54 (according to Hilbert), then what/where is the god in Cantor's paradise?

14.11 Computation and reckoning are the same?

The Germanic English equivalent of "compute" is "reckon" (German rechnen, Dutch rekenen). Thus computation is reckoning.

14.12 Philosophy of language?

14.12.1 How do we answer "What is something?"

It is difficult. Consider this example: What is an elephant? There are several possible answers:

  • An elephant E is an individual whose genetic code makes it belong to a species of the Elephantidae family, such that E can mate with other members of the species and produce fertile offspring. But this begets another question: what is a member of the Elephantidae family?
  • An elephant is a big gray mammal with proboscis.

But those are what we think an elephant is, not what an elephant is. What is the essence of an elephant? What is true of elephants regardless of whether there were humans to describe elephants? An individual's genes determine whether it is an elephant, but the genes themselves are not the elephant.

We often conflate a thing and its representations. For example, in everyday conversation, we call a drawing of an elephant an "elephant", and we call an elephant statue an "elephant". Ontologically, they are a drawing and a statue, and not an elephant. But calling them "drawing" and "statue" is considered uninformative. Therefore, the semantics of language depends on context.

Of these two conversations, the first one is practically correct but ontologically wrong, and the second one is practically wrong but ontologically correct.

  • "What is this?" "An elephant."
  • "What is this?" "A drawing." "I know. A drawing of what?" "An elephant."

The practically correct conversation is the one that conveys the most information. If something has higher computational complexity, then it has more information? It is harder to see that the drawing is of an elephant than it is to see that the drawing is a drawing.

Suppose that I draw an elephant on a sheet of paper. Is it a sheet of paper, is it a drawing, or is it an elephant?

Kurzgesagt has a video55 that discusses "What is something?" from a physics point of view.

14.12.2 Concrete and abstract objects?

Concrete objects exist in material space. Abstract objects exist in ideal space. The mind manipulates abstract objects. The hand manipulates concrete objects. But those definitions have problems with dreaming, hallucination, imagination. If I imagine myself holding a box, is that box a concrete object? No. That box exists only in my imagination.

Concrete objects are real? Abstract objects are not real? But how can something unreal be true?

Abstract vs concrete objects [22].

14.13 What?

14.13.1 Computing theory

  1. What is the relationship between P and NP?

    What is the relationship between P and NP? We are trying to understand the P vs NP problem, which requires computation, model, and logic. We were motivated by the prize money56, but our chances seem slim.

  2. Should we merge approximation theory and propositional calculus?

    Why do we want that?

    Boolean metric spaces merge logic and analysis? How do we define a sensible approximation error? How do we define the distance between two formulas? The formula \( p \wedge q \) approximates the formula \( p \wedge q \wedge r \). What is the approximation error?

    What are these papers?5758

14.13.2 What is a model?

What is a model?

14.13.3 When should we care about a philosophically sound foundation of mathematics?

Math seems to work just fine even if it isn't philosophically justified. Why should we care about finitism?

I am looking for a philosophically sound foundation of mathematics. It may be finitism.

14.14 The meaning of the English suffixes -er and -ion

14.14.1 The English suffix -er means one who does

Appending -er to a verb V forms the noun Ver which means "a person who does V", which we often stretch to mean "a thing that does V".

14.14.2 The English suffix -ion nominalizes Latinate verbs

Nominalization turns a word into a noun.59

"Computation" comes from "compute" and "-ation".

The suffix -ation comes from -ate + -ion.60 Appending -ion to a Latinate verb Vate forms the noun Vation.

For example, an explosion is an action, but not a process, and not a condition, and not a state. However, an explosion looks fast because we are humans and we are relatively stationary to the explosion; it would look slow if we were traveling near the speed of light with respect to the explosion.

A process is an action that takes some time?

Computation is a process. This is supported by the usage "this computation does not terminate", the usage "this computation produces 123", and the non-usage "this computation is 123".

The sentence S V O translates to the logic formula V(S,O). Example: "John eats bread" translates to "eat(John,bread)", or, "john(x), bread(y), eat(x,y)".

S V O = S does NV to O, where NV is the nominalized V. Examples. He is renovating (renewing) his house = he is doing a renovation to his house. He is manducating the food = he is doing a manducation to the food. He is computing the number = he is doing a computation to the number? How can we do something to an abstract thing with no material existence?

NV is P = S VP iff S V. Stealing is bad = S steals iff S is bad. Stealing is taking others' belongings without their permission = S steals iff S takes others' belongings without their permission.

[11]. [31] paywall.

14.15 What

Defining computation as the execution of an algorithm raises difficult issues [23].

Rapaport's 2005 book [21] deals with things in the layer below the layer we work at.

15 Concordances

15.1 Piccinini 2017 "Computation in physical systems"

Piccinini 2017 [20] distinguishes between "concrete computation" and "abstract computation". We use "computation" to mean his "concrete computation" and "computation model" to mean his "abstract computation".

We say that an algorithm describes, not computes. "An algorithm computes" is a category error.

We say "Turing model", not "Turing machine", because it is a mathematical model, not a concrete machine that has material existence.

16 Bibliography

[1] Arora, S. and Barak, B. 2009. Computational complexity: A modern approach. Cambridge University Press.

[2] Blum, L. 2004. Computing over the reals: Where Turing meets Newton. Notices of the AMS. 51, 9 (2004), 1024–1034. url: <http://www.cs.cmu.edu/~lblum/PAPERS/TuringMeetsNewton.pdf>.

[3] Blum, L. et al. 1998. Complexity and real computation.

[4] Blum, L. et al. 1989. On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines. Bulletin (New Series) of the American Mathematical Society. 21, 1 (1989), 1–46. url: <https://projecteuclid.org/download/pdf_1/euclid.bams/1183555121>.

[5] Boolos, G.S. et al. 2002. Computability and logic. Cambridge University Press.

[6] Copeland, B.J. 1996. What is computation? Synthese. 108, 3 (1996), 335–359. url: <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.3830&rep=rep1&type=pdf>.

[7] Dean, W. 2016. Computational complexity theory. The stanford encyclopedia of philosophy. E.N. Zalta, ed. https://plato.stanford.edu/archives/win2016/entries/computational-complexity/; Metaphysics Research Lab, Stanford University. url: <https://plato.stanford.edu/archives/win2016/entries/computational-complexity/>.

[8] Denning, P.J. 2010. What is computation. Ubiquity. (2010). url: <http://denninginstitute.com/pjd/PUBS/CACMcols/computation.pdf>.

[9] Gurevich, Y. 2012. Foundational analyses of computation. Conference on computability in Europe (2012), 264–275. url: <https://pdfs.semanticscholar.org/d444/921db6240f656390c6e9114d53756fd20494.pdf>.

[10] Gurevich, Y. 2011. What is an algorithm. Technical Report. Microsoft Research, Redmond, WA. url: <https://pdfs.semanticscholar.org/762f/178c7983d7431d04919453c043760d691366.pdf>.

[11] Hamm, F. et al. 2013. Formal foundations for semantic theories of nominalisation. Universitätsbibliothek Johann Christian Senckenberg. url: <http://www.phil.uu.nl/ozsl/articles/Lambalgen05.pdf>.

[12] Horsman, C. et al. 2014. When does a physical system compute? Proc. R. Soc. A. 470, 2169 (2014), 20140182. url: <https://royalsocietypublishing.org/doi/full/10.1098/rspa.2014.0182>.

[13] Horswill, I. 2012. What is computation? XRDS: Crossroads, The ACM Magazine for Students. 18, 3 (2012), 8–14. url: <http://www.cs.northwestern.edu/~ian/What%20is%20computation.pdf>.

[14] Immerman, N. 1999. Descriptive complexity.

[15] Rogers, H. Jr. 1987. Theory of recursive functions and effective computability. MIT Press.

[16] Ladyman, J. 2009. What does it mean to say that a physical system implements a computation? Theoretical Computer Science. 410, 4-5 (2009), 376–383. url: <https://core.ac.uk/download/pdf/82706621.pdf>.

[17] Leeuwen, J. van 2015. The philosophy of computation. (2015). url: <https://pdfs.semanticscholar.org/6af0/9369268356d6776eb1407cbfba10891e7d82.pdf>.

[18] Marek, V.W. and Remmel, J.B. 2009. The complexity of recursive constraint satisfaction problems. Annals of Pure and Applied Logic. 161, 3 (2009), 447–457. DOI: https://doi.org/10.1016/j.apal.2009.07.005. url: <http://www.sciencedirect.com/science/article/pii/S0168007209001523>.

[19] Moschovakis, Y.N. 2001. What is an algorithm? Mathematics unlimited—2001 and beyond. Springer. 919–936. url: <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.7.5576&rep=rep1&type=pdf>.

[20] Piccinini, G. 2017. Computation in physical systems. The stanford encyclopedia of philosophy. E.N. Zalta, ed. https://plato.stanford.edu/archives/sum2017/entries/computation-physicalsystems/; Metaphysics Research Lab, Stanford University. url: <https://plato.stanford.edu/archives/sum2017/entries/computation-physicalsystems/>.

[21] Rapaport, W.J. 2005. Philosophy of computer science: An introductory course. Teaching Philosophy. 28, 4 (2005), 319–341. url: <https://cse.buffalo.edu/~rapaport/Papers/phics.pdf>.

[22] Rosen, G. 2017. Abstract objects. The stanford encyclopedia of philosophy. E.N. Zalta, ed. https://plato.stanford.edu/archives/sum2017/entries/abstract-objects/; Metaphysics Research Lab, Stanford University. url: <https://plato.stanford.edu/archives/sum2017/entries/abstract-objects/>.

[23] Scheutz, M. 2006. Computation, philosophical issues about. The Encyclopedia of Cognitive Science. (2006). url: <https://pdfs.semanticscholar.org/5a83/113ac2d781ea672f42a77de28ba23a127c1d.pdf>.

[24] Turing, A.M. 1937. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society. 2, 1 (1937), 230–265. url: <https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf>.

[25] Vardi, M.Y. 1998. Computational model theory: An overview. Logic Journal of the IGPL. 6, 4 (1998), 601–624. url: <https://www.cs.rice.edu/~vardi/papers/wollic97-igpl.pdf>.

[26] Vardi, M.Y. 2012. What is an algorithm? Communications of the ACM. 55, 3 (2012), 5–5. url: <https://cacm.acm.org/magazines/2012/3/146261-what-is-an-algorithm/fulltext>.

[27] Wiedermann, J. and Leeuwen, J. van 2014. Computation as knowledge generation, with application to the observer-relativity problem. Proc. 7th aisb symposium on computing and philosophy: Is computation observer-relative (2014). url: <http://doc.gold.ac.uk/aisb50/AISB50-S03/AISB50-S3-Wiedermann-paper.pdf>.

[28] Wiedermann, J. and Leeuwen, J. van 2017. Epistemic computation and artificial intelligence. 3rd conference on philosophy and theory of artificial intelligence (2017), 215–224.

[29] Wiedermann, J. and Leeuwen, J. van 2015. What is computation: An epistemic approach. International conference on current trends in theory and practice of informatics (2015), 1–13.

[30] Zenil, H. 2014. What is nature-like computation? A behavioural approach and a notion of programmability. Philosophy & Technology. 27, 3 (2014), 399–421. url: <https://pdfs.semanticscholar.org/578a/0d1120967479d7b4f89e15953994c9952ee4.pdf>.

[31] Zucchi, A. 2013. The language of propositions and events: Issues in the syntax and the semantics of nominalization. Springer Science & Business Media.


  1. https://english.stackexchange.com/questions/189974/why-do-they-say-may-not-for-things-which-people-shouldnt-do

  2. Some animals can count, and counting is a computation; thus some animals can compute. http://www.bbc.com/future/story/20121128-animals-that-can-count

  3. https://en.wikipedia.org/wiki/Interactive_computation

  4. https://math.stackexchange.com/questions/1561293/must-an-algorithm-terminate

  5. "An example of a non-terminating Turing machine program is a program that calculates sequentially each digit of the decimal representation of pi" http://www.alanturing.net/turing_archive/pages/reference%20articles/what%20is%20a%20turing%20machine.html

  6. https://www.etymonline.com/word/problem

  7. English is at least second-order, as demonstrated by the Geach–Kaplan sentence "Some critics admire only one another" https://en.wikipedia.org/wiki/Nonfirstorderizability

  8. https://en.wikipedia.org/wiki/Computational_complexity_theory

  9. https://plato.stanford.edu/entries/computational-complexity/

  10. https://en.wikipedia.org/wiki/Positional_notation

  11. https://en.wikipedia.org/wiki/Unary_numeral_system

  12. https://cs.stackexchange.com/questions/40672/are-there-name-and-literature-for-this-sat-like-problem

  13. https://www.etymonline.com/word/computer

  14. https://www.etymonline.com/word/computer

  15. https://www.etymonline.com/word/computer

  16. https://en.wikipedia.org/wiki/Operational_amplifier

  17. https://en.wikipedia.org/wiki/Logic_gate

  18. https://en.wikipedia.org/wiki/Operational_amplifier_applications#Summing_amplifier

  19. The output is inverted for practical engineering reasons, but it is simple to chain an inverter to the adder's output. https://electronics.stackexchange.com/questions/268547/inverting-summing-amplifier-vs-non-inverting-summing-amplfier

  20. https://en.wikipedia.org/wiki/Adder_(electronics)

  21. https://www.etymonline.com/word/algorithm

  22. https://en.wikipedia.org/wiki/Muhammad_ibn_Musa_al-Khwarizmi

  23. https://en.wikipedia.org/wiki/Model_of_computation

  24. https://en.wikipedia.org/wiki/Computability_logic

  25. http://www.csc.villanova.edu/~japaridz/CL/

  26. https://en.wikipedia.org/wiki/Model_of_computation

  27. https://en.wikipedia.org/wiki/Worst-case_complexity

  28. https://en.wikipedia.org/wiki/Pullback_(category_theory)

  29. https://en.wikipedia.org/wiki/Theory_of_computation

  30. https://siglog.acm.org/about/

  31. http://www.computationalcomplexity.org/

  32. https://en.wikipedia.org/wiki/Automata_theory

  33. https://en.wikipedia.org/wiki/Chomsky_hierarchy

  34. https://en.wikipedia.org/wiki/Arithmetical_hierarchy

  35. https://en.wikipedia.org/wiki/Reverse_mathematics#The_big_five_subsystems_of_second-order_arithmetic

  36. http://www.cs.unibo.it/~martini/BISS/martini-1.pdf

  37. https://en.wikipedia.org/w/index.php?title=Computer_science&oldid=875563283#Etymology

  38. http://www.scott-a-s.com/cs-is-not-math/

  39. https://math.stackexchange.com/questions/649408/is-computer-science-a-branch-of-mathematics

  40. https://en.wiktionary.org/wiki/burgle

  41. https://en.wiktionary.org/wiki/manducate

  42. https://en.wiktionary.org/wiki/manducation

  43. https://en.wiktionary.org/wiki/manducate

  44. https://en.oxforddictionaries.com/definition/computate

  45. https://en.wiktionary.org/wiki/computate

  46. https://en.wiktionary.org/wiki/renovate

  47. https://www.etymonline.com/word/machine

  48. https://www.etymonline.com/word/machine

  49. https://www.etymonline.com/word/automaton

  50. https://www.etymonline.com/word/robot

  51. https://en.wikipedia.org/wiki/Causa_sui

  52. https://en.wikipedia.org/wiki/Logos_(Christianity)

  53. https://biblehub.com/sepd/genesis/1.htm

  54. https://en.wikipedia.org/wiki/Cantor%27s_paradise

  55. Kurzgesagt: What is something? https://www.youtube.com/watch?v=X9otDixAtFw

  56. one million US dollars http://www.claymath.org/millennium-problems/millennium-prize-problems

  57. https://arxiv.org/abs/0903.2567

  58. https://www.um.es/beca/papers/Aviles-Algebras.pdf

  59. https://en.wikipedia.org/wiki/Nominalization

  60. https://www.etymonline.com/word/-ation