1 Note

Alternative title: computational complexity theory.

This page is utterly broken. Do not read it.

Last update was 2019-01-04.

The conversion of this page from Markdown to Org Mode using Pandoc may introduce some errors, besides my own errors that were already in the page before the conversion.

By "complexity theory", we mean "computation-complexity theory", that is, a theory about the complexity of computations.

Some references about attacking the P vs NP problem and evaluating claimed proofs:

  • [1]
  • https://cstheory.stackexchange.com/questions/4489/should-experts-in-tcs-charge-money-to-read-proofs-that-p-np/4605
  • "Eight Signs A Claimed P≠NP Proof Is Wrong" https://www.scottaaronson.com/blog/?p=458

2 Research questions

Why do problems have different difficulties?

The P vs NP problem itself is a formula, something like \( NP \setminus P \neq \emptyset \) or, equivalently, \( \exists p : NP(p) \wedge \neg P(p) \). What is the complexity of the P vs NP problem itself? Now we have come full circle. Isn't this somewhat meta?

3 A machine model: nondeterministic simplified x86-compatible processor with unbounded external tape

Does this concreteness help, or does it confuse us even further?

Every instruction takes the same amount of time. Totally sequential execution. No speculative execution. Not superscalar. No pipelining.

Imagine an x86-compatible processor connected to a tape controller with an unbounded tape. We can think of such a machine as a Turing machine.

Memory-mapped tape registers: TC is the tape-control register (write-only; writing 1 moves the tape head right, writing -1 moves it left). TD is the tape-data register. We assume that every tape operation takes the same amount of time as one processor instruction.

mov dword [TC], 1 ; move tape head right
inc dword [TD] ; add 1 to the content of the tape cell under the tape head

What would nondeterministic x86 assembly look like? We add something like McCarthy's "amb" operator to x86 assembly. We add two magical instructions: AMB and FAIL. When a machine executes an AMB instruction, a parallel universe is created for each possible value of the operand. When a machine executes a FAIL instruction, the parallel universe containing that machine is destroyed. In the following example fragment, the instruction "AMB eax" creates \(2^{32}\) parallel universes.

; create 2^32 parallel universes:
; one with eax=0, another with eax=1, and so on
AMB eax

cmp eax, 123
je .ok

; destroy the universe we are in, if eax is not 123
FAIL

; accept, if eax is 123
.ok:
hlt

We can also add AMB and FAIL to an abstract procedural language.
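As a sketch only (not part of any standard library), here is one way to emulate AMB and FAIL in Python by deterministic backtracking: try every possible value of the operand, and treat a raised exception as destroying the current universe. The names Fail, amb_search, and branch are made up for this illustration.

# Hypothetical emulation of AMB/FAIL by backtracking (names are made up).
class Fail(Exception):
    """Raised to destroy the current 'universe' (branch)."""

def amb_search(body, choices):
    """Try body(value) for every possible value; return the first accepting branch."""
    for value in choices:
        try:
            return body(value)
        except Fail:
            continue          # this universe failed; try the next one
    raise Fail                # every universe failed

# Usage, mirroring the assembly fragment above
# (a tiny range instead of all 2^32 values of eax):
def branch(eax):
    if eax != 123:
        raise Fail            # FAIL: destroy this universe
    return "accept"           # .ok: halt and accept

print(amb_search(branch, range(256)))   # accept

A real nondeterministic machine explores all branches at once; this emulation explores them one at a time.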

The statement of P vs NP implicitly depends on the machine model. There are many variants of the Turing machine.

3.1 Why do we use Turing machines instead of lambda calculus when discussing computational complexity?

Because we don't know a cost model for lambda-calculus.1

3.2 rewriting system

  • What does it mean that a rewriting system computes a function?
  • Given more time, a rewriting system can compute more functions.
  • A Turing machine is a rewriting system.
  • What is the difference between a rewriting system and a formal system?
  • What is the difference between a formal system and a formal language?

3.3 Computation graph?

  • computation graph
    • ? \(M\) simulates \(N\) iff a subgraph of \(M\) is isomorphic to \(N\).
      • \(T\) is the type of terms.
      • \(M\) is the semantic function.
    • Shortcut
      • The shortcut of \(A\) is \(B\) where \[ B(x,y) = \text{\(x\) is initial, \(y\) is terminal, and \(x\) reaches \(y\).} \]
        • \(A\) should be acyclic.
      • Alternative names for shortcut: WP:Disintermediation, immediation, curtailment, shortening, abridgement.
      • Shortcutting is idempotent: the shortcut of the shortcut is the shortcut itself.
  • What should we name this sequence?

    • The sequence: \(x, f(x), f(f(x)), \ldots, f^n(x), \ldots\) (see the sketch after this list)

      • \(x\) is the initial state.
      • \(f\) is the next-state function.
    • Trace? History? Path? Computation path?
    • WP:Iterated function
    • WP:Iteration
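A minimal sketch (ours; the name iterate is made up) of that sequence \(x, f(x), f(f(x)), \ldots\) as a Python generator:

import itertools

def iterate(f, x):
    """Yield the trace x, f(x), f(f(x)), ... of repeatedly applying the next-state function f."""
    while True:
        yield x
        x = f(x)

# Usage: the first five elements of the trace of n -> 2*n starting at 1.
print(list(itertools.islice(iterate(lambda n: 2 * n, 1), 5)))   # [1, 2, 4, 8, 16]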

3.4 next-state relation

  • The next-state relation \(N\) is obtained from \(T\) by adding a self-loop at each vertex outside the domain of \(T\) (each vertex with no outgoing edge): for each \(x\) in the domain of \(T\), \(N(x,y) = T(x,y)\); for each \(x\) outside the domain, \(N(x,x)\) holds.
  • The relation computed by \(T\) is \(N^\infty\), which satisfies \(N^\infty = N^\infty \circ N\).
  • The problem computed by the graph is the infinite self-composition of the graph's next-state relation. Such a problem is the smallest \(X\) that satisfies \(N \circ X = X\); it is the least fixed point of \(F\) where \(F(X) = N \circ X\). The \(n\)th self-composition of \(N\) is \(N^n = N \circ N^{n-1}\). (A sketch in code follows this list.)
  • An infinite composition \(N^\infty\) is a relation satisfying \(N^\infty \circ N = N^\infty\). The empty relation satisfies this trivially; the other solution is the nontrivial one.
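A small Python sketch (ours, with made-up names and an assumed edge-set representation) of these definitions on a finite graph: build \(N\) from \(T\) by adding self-loops where there is no outgoing edge, then apply \(X \mapsto N \circ X\) until the relation stops changing. The iteration terminates when the graph is acyclic apart from the added self-loops.

def next_state_relation(vertices, T):
    """Build N from the edge relation T: add a self-loop at each vertex with no outgoing edge."""
    has_out = {x for (x, _) in T}
    return set(T) | {(x, x) for x in vertices if x not in has_out}

def compose(R, S):
    """Relation composition R ∘ S: (x, z) is included iff S(x, y) and R(y, z) for some y."""
    return {(x, z) for (x, y) in S for (y2, z) in R if y == y2}

def infinite_composition(vertices, T):
    """Iterate X := N ∘ X starting from X = N until X stops changing."""
    N = next_state_relation(vertices, T)
    X = N
    while True:
        Y = compose(N, X)
        if Y == X:
            return X
        X = Y

# Usage: the chain 0 -> 1 -> 2; vertex 2 gets a self-loop,
# and every vertex ends up related to it.
print(infinite_composition({0, 1, 2}, {(0, 1), (1, 2)}))   # {(0, 2), (1, 2), (2, 2)}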

3.5 Problem computed by a graph

A graph \(G\) computes the problem \(P(G) = \{ (x,y) ~|~ \text{\(y\) is the nearest terminal vertex reachable from \(x\)} \}\). Because the graph is acyclic, there is no path from a vertex to itself; in particular, a vertex is not reachable from itself.

terminal(x) = not exists y : E(x,y)
reach(x,y) = E(x,y) or exists m ( reach(x,m) and reach(m,y) )
adist(x,y,1) = E(x,y)
adist(x,y,n) = exists m, exists k : adist(x,m,k) and adist(m,y,n-k)
dist(x,y,n) = adist(x,y,n) and not exists m < n : adist(x,y,m)
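A runnable counterpart (our sketch; the adjacency-list representation and the function names are assumptions) of the predicates above and of \(P(G)\): a terminal check and a breadth-first search for the nearest terminal vertex reachable from \(x\).

from collections import deque

def is_terminal(graph, x):
    """terminal(x): x has no outgoing edge."""
    return not graph.get(x)

def nearest_terminal(graph, x):
    """P(G): the nearest terminal vertex reachable from x (x itself excluded), or None."""
    queue = deque(graph.get(x, []))
    seen = set(queue)
    while queue:
        v = queue.popleft()
        if is_terminal(graph, v):
            return v               # BFS order guarantees this is a nearest terminal
        for w in graph.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return None

# Usage: 0 -> 1 -> 3 and 0 -> 2 -> 3; vertex 3 has no successors, so it is terminal.
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(nearest_terminal(graph, 0))   # 3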

Configuration graph as formal system? A configuration is a well-formed formula. \(E(a,b)\) holds iff \(a' \vdash b'\), where \(a'\) is the formula representing configuration \(a\). The initial state is an axiom. \(F \models P\)

4 Philosophically sound definitions?

4.1 Philosophy

Oded Goldreich has some interesting ideas2:

  • Complexity theory offers interesting philosophical perspectives.
  • Complexity theory relates knowledge, randomness, and secrecy.
  • "importance of representation"
  • "knowledge" from complexity theory perspective
  • "Approximation is a natural relaxation of various computational problems"

4.2 Finitism?

Can we formulate computational complexity theory in a philosophically sound manner, with finitism, without assuming infinite sets? What is a problem then, if not an infinite set? What is a function then, if domains and codomains must not be infinite?

4.3 Do not confuse

Do not confuse a problem with an algorithm that solves that problem. Example: consider \(p(x)\) that wastes \(2^{|x|}\) steps and then returns the leftmost bit of \(x\). Thus \(p \in \ExpTime\), but \(\Search(p) \in \Time(O(1))\), because every string that begins with \(1\) satisfies \(p\), and we can hardcode any such string in the solution of \(\Search(p)\).
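An illustrative sketch (ours) of that distinction: the particular algorithm p wastes \(2^{|x|}\) steps, yet the corresponding search problem is solved in constant time by a hardcoded witness.

def p(x):
    """A wasteful predicate: burn 2^len(x) steps, then return the leftmost bit of x."""
    for _ in range(2 ** len(x)):
        pass                       # deliberately wasted work; p itself takes exponential time
    return x[0] == "1"

def search_p():
    """Solve Search(p) in constant time: any string that begins with '1' satisfies p."""
    return "1"

assert p(search_p())               # the hardcoded witness indeed satisfies p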

5 Relationship between a decision problem and its corresponding search problem

Every decision problem has a corresponding search problem. If we can answer this question, then we can resolve P vs NP: is there a decision problem in P whose corresponding search problem is not in P? Find a decision problem in P whose search problem is in EXP but not in P. Is there a corresponding search problem for which generate-and-test is optimal? Suppose yes. Suppose no. Which of them leads to a contradiction?

There are fast decision problems whose corresponding search problems are also fast. Example: determining whether a bit string contains a one bit. Both its decision problem and its search problem are fast.
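A tiny sketch (ours) of that example: both the decision and the search version of "does the bit string contain a one bit" take only one linear scan.

def decide_has_one(bits):
    """Decision problem: does the bit string contain a '1'?"""
    return "1" in bits

def search_one(bits):
    """Search problem: return the index of some '1' bit, or None if there is none."""
    i = bits.find("1")
    return i if i != -1 else None

print(decide_has_one("000100"))    # True
print(search_one("000100"))        # 3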

Decision versus Search 2010 https://cseweb.ucsd.edu/~mihir/cse200/decision-search.pdf

M. Bellare and S. Goldwasser. The complexity of decision versus search. SIAM J. on Computing, Vol. 23, No. 1, February 1994

Is there an algorithm that translates an optimal solution of a decision problem to an optimal solution of the corresponding search problem? I doubt it.

6 Attempts

6.1 Questions

Can we apply the pigeonhole principle to the computation graph?

What problems are equivalent to the P vs NP problem?

6.2 Finding a search problem that forces a DTM to traverse the search space

Let \( \newcommand\SetOutcome{\mathbb{F}} \newcommand\SetBit{\mathbb{B}} \newcommand\SetPred{\mathbb{P}} \newcommand\FunSat{\text{sat}} \newcommand\FunMinTime{\text{MinTime}} \newcommand\FunLen{\text{Len}} \mathbb{B} = \{ 0, 1 \} \) be the set of bits.

Let \(\mathbb{B}^*\) be the Kleene closure of \(\mathbb{B}\).

Let \( \mathbb{F} = \{ \text{accept}, \text{reject} \} \) be the set of final states.

A predicate is a function in \(\mathbb{B}^* \to \mathbb{B}\).

Let \(\mathbb{P}\) be the set of all computable predicates.

Let \(p \in \mathbb{P}\) be a computable predicate.

Let \(\text{Len}(x)\) be the length of the string \(x \in \mathbb{B}^*\).

Let the function \(\text{sat}: \mathbb{P}\times \Nat \to \mathbb{F}\) be

\[ \text{sat}(p,n) = \begin{cases} \text{accept} & \text{if } \exists x \in \mathbb{B}^n : p(x) = 1; \\ \text{reject} & \text{otherwise.} \end{cases} \]

Let \(\text{MinTime}_M(p,x)\) be the shortest time (the minimum number of steps) required by machine \(M\) to compute \(p(x)\) (to compute the predicate \(p\) with input \(x\)).

Let \(N\) be an NTM (non-deterministic Turing machine).

Let \(D\) be a DTM (deterministic Turing machine).

Such an NTM \(N\) can compute \(\text{sat}(p,n)\) in \(O(n + \max_{x \in \mathbb{B}^n} \text{MinTime}_N(p,x))\) steps. Here is such an algorithm:

function sat (p, n) {
    var x: array [1..n] of bit
    for i := 1 to n {
        x[i] := guess  // nondeterministically choose a bit; each guess forks the computation into parallel branches
    }
    if p(x) { accept }
    else { reject }
}

Such a DTM \(D\) can compute \(\text{sat}(p,n)\) in \(O(\sum_{x \in \mathbb{B}^n} \text{MinTime}_D(p,x))\) steps. Here is such an algorithm:

function sat (p, n) {
    for x in B^n {
        if p(x) { accept }
    }
    reject
}
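For concreteness, a runnable Python version of the deterministic brute-force algorithm above (a sketch; here the predicate is an ordinary Python function on bit tuples):

from itertools import product

def sat_brute_force(p, n):
    """Deterministic brute force: try all 2^n bit strings of length n."""
    for bits in product((0, 1), repeat=n):
        if p(bits):
            return "accept"
    return "reject"

# Usage: a predicate that accepts strings whose bits sum to exactly 2.
print(sat_brute_force(lambda bits: sum(bits) == 2, 4))   # accept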

Conjecture: There exists a computable predicate \(p \in \mathbb{P}\) such that

  1. \( \text{MinTime}_D(p,x) = \text{MinTime}_N(p,x) \),
  2. \(\text{MinTime}_D(p,x) \in O([\text{Len}(x)]^k)\) where \(k > 1\),
  3. \(N\) optimally computes \(\text{sat}(p,n)\) in \(O(n^k)\) time, and
  4. \(D\) optimally computes \(\text{sat}(p,n)\) in \(O(2^n \cdot n^k)\) time.

If that conjecture is true, then \(\TimeP \neq \TimeNP\).

6.3 Another attempt?

  • This is an older attempt.
  • This should be merged to the attempt above.
  • Let:

    • \(f\) be a predicate
    • \(k\) be a natural number
    • \(Sat(f,k)\) be the problem of finding a string \(x\) of length \(k\) such that \(f(x) = 1\)
  • Lemma: If \(f \in \TimeP\) then \(Sat(f,k) \in \TimeNP\). (This should be obvious and simple to prove?)
  • Conjecture: There exists a predicate whose search cannot be faster than brute force.

    • Formally: There exists \(f \in \TimeP\) such that \(Sat(f,k) \not \in \TimeP\).
  • That lemma and that conjecture, if proven true, would imply \(\TimeP \subset \TimeNP\).
  • We try to prove that conjecture by diagonalization/pigeonholing? The set \( \{0,1\}^k \to \{0,1\} \) has \(2^{2^k}\) elements, because, by combinatorics, the truth table has \(2^k\) rows, and each row has \(2\) possibilities. There are \(2^{2^k}\) possible predicates on \(k\)-letter strings (see the sketch after this list). Suppose that a deterministic machine can solve \(Sat(f,k)\) for all \(f\) in \(O(poly(k))\) time. (Can we apply the pigeonhole principle to the configuration graph?)
  • Every predicate can be stated in disjunctive normal form.
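A small sketch (ours) of the counting step in the list above: enumerating all \(2^{2^k}\) predicates on \(k\)-bit strings by their truth tables.

from itertools import product

def all_predicates(k):
    """Yield every predicate on {0,1}^k, each represented by its truth table."""
    rows = list(product((0, 1), repeat=k))                # the 2^k rows of the truth table
    for outputs in product((0, 1), repeat=len(rows)):     # 2^(2^k) possible output columns
        yield dict(zip(rows, outputs))

k = 2
print(sum(1 for _ in all_predicates(k)), 2 ** (2 ** k))   # 16 16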

6.4 Plan for the P vs NP problem?

  • Relate configuration graph and problem theory
  • Unexplored ideas:

    • Machine is not computation.
    • Machine is formal system.
    • Computation is repeated function application.
    • Under what conditions does nondeterminism give extra power?
  • Where are computation theory, computability theory, and complexity theory now?

6.6 What

  • How do we solve the P vs NP problem?
    • What is a problem, a computation, complexity, P, NP?
    • Can we construct a problem that is in NP but not in P?
    • Can we show that P = NP leads to contradiction?

7 Meta-research

7.1 Where is progress tracked?

Open access journals:

World effort:

Blogs to follow3

Not recommended (drinking from the firehose): recent-publication trackers, such as arXiv lists of recent submissions.

It is better to let well-known researchers separate the signal from the noise for us.

7.2 What is the P vs NP problem?

Official problem description4.

7.4 NP-complete problems? Why do we care about this list?

8 Bibliography

[1] Hromkovic, J. and Rossmanith, P. 2017. What one has to know when attacking P vs. NP. FCT 2017 (2017).


  1. https://cstheory.stackexchange.com/questions/23798/p-and-np-classes-explanation-through-lambda-calculus

  2. http://www.wisdom.weizmann.ac.il/~oded/cc-over.html

  3. https://cstheory.stackexchange.com/questions/4090/ways-for-a-mathematician-to-stay-informed-of-current-research-in-complexity-theo

  4. http://www.claymath.org/sites/default/files/pvsnp.pdf