Some interesting results of my chess queens application

I checked my chess queens application on larger chess boards, and found some interesting results: it appears that when the minimal distance between queens is more than 1, the minimal chess board size with solutions (not including one queen on the trivial 1×1 board) is at least the square of the minimal distance between queens. Here are a few examples (a brute-force sketch follows the list):

  • When the minimal distance between queens is 2, the minimal chess board size with solutions is 4×4 (2 solutions).
  • When the minimal distance between queens is 3, the minimal chess board size with solutions is 10×10 (4 solutions).
  • When the minimal distance between queens is 4, the minimal chess board size with solutions is 16×16 (2 solutions).
  • When the minimal distance between queens is 5, the minimal chess board size with solutions is 28×28 (10 solutions).
  • When the minimal distance between queens is 6, the minimal chess board size with solutions is 36×36 (2 solutions).
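Here is a minimal sketch of the kind of search involved (my reconstruction, not the application's actual code). It counts placements of n non-attacking queens on an n×n board; I'm assuming here that "distance" means Chebyshev (king-move) distance, which may differ from the metric the application actually uses:

def count_solutions(n, min_distance):
	solutions = 0
	cols = []  # cols[row] = column of the queen placed in that row

	def ok(row, col):
		for row2, col2 in enumerate(cols):
			if col2 == col or row - row2 == abs(col - col2):
				return False  # same column or same diagonal
			if max(row - row2, abs(col - col2)) < min_distance:
				return False  # the two queens are too close together
		return True

	def place(row):
		nonlocal solutions
		if row == n:
			solutions += 1
			return
		for col in range(n):
			if ok(row, col):
				cols.append(col)
				place(row + 1)
				cols.pop()

	place(0)
	return solutions

print(count_solutions(4, 2))  # 2, matching the first case above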

When the minimal distance between queens is 7, the minimal chess board size with solutions must be more than 50×50, because there are no solutions on any board size up to 50×50. When I created my chess queens application I limited the maximal board size to 50×50, because I didn't know whether larger boards could return results in reasonable time. But I recently checked and found that even boards of size 36×36 return results (when the minimal distance between queens is 6), and even the 50×50 board returns “no solutions” when the minimal distance between queens is 7. It would be interesting to raise the limit of my chess queens application beyond 50×50 and find the minimal chess board size with solutions when the minimal distance between queens is 7, 8 and more.

From my results above I would guess that for any even minimal distance between queens n, the minimal chess board size with solutions would probably be n^2×n^2 with 2 solutions, but this needs to be proved. It would also be nice to find a formula for the minimal chess board size with solutions as an expression of the minimal distance between queens, and also a formula for the number of solutions. But it looks like finding such a formula and proving it can be a big challenge.

The numbers 10 and 28 above also appear in Pascal’s triangle, and it would be interesting to see whether the minimal chess board sizes with solutions for higher minimal distances between queens are also numbers from Pascal’s triangle. If they are, I would guess they might be 55, 91 and 136 respectively for the odd distances 7, 9 and 11. It’s just a guess, but it matches the minimal board sizes of 1×1, 10×10 and 28×28 above. If my assumption is true, then the formula for the minimal chess board size with solutions for odd minimal distance between queens n would be the binomial coefficient of (3n + 1)/2 over 2, which is equal to (3n + 1)(3n − 1)/8, and the number of solutions on the minimal board size would probably also be a number from Pascal’s triangle.
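A quick check of the conjectured formula (the values for distances 7, 9 and 11 are guesses, not computed results):

for n in (1, 3, 5, 7, 9, 11):
	print(n, (3 * n + 1) * (3 * n - 1) // 8)  # 1, 10, 28, 55, 91, 136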

I found that when using rooks with a minimal distance n between rooks, the minimal chess board size with solutions is n^2×n^2 with 2 solutions, for all n>1 (odd or even). But this also needs to be proved.

My first programs in Python

I started learning Python a few days ago and wrote a few sample programs. I noticed that integers in Python are not limited in size, unlike in many other programming languages such as PHP, C, C++ and Java. So my first Python programs were simple calculations with big numbers, for example big powers of 2 – I even calculated numbers with hundreds of thousands of digits. Then I decided to use Python to calculate the mathematical constant e. I wrote a short program to calculate the first 5000 digits of e (actually, 5000 digits after the dot), and surprisingly I didn’t have to use any module, not even the math module. I only used integers to calculate e times 10 to the power of 5000. I searched Google for “the first 5000 digits of e” and found a page with the first 2 million digits of e, which I used to check that my calculations were correct. I also found a page titled “How I computed e to 20000 digits“, where the author says it took more than 9 hours to calculate the first 20000 digits of e. That’s surprising, because my program calculates the first 5000 digits of e in less than a second, and even calculating 20000 digits takes about one or two seconds. I don’t know what made the author’s program take so much time, but I decided to post my e calculation program here:

digits = 5000
add = 500
f = 10**(digits + add)
e = 0
n = 0
while f > 0:
	# add current inverse factorial to e.
	e += f
	# calculate next inverse factorial (integer division, rounding down).
	n += 1
	f //= n

# print e
e //= 10**add
print(e)
print(n)

Notice that I used the variable “add” with the value of 500 – I actually calculated 5500 digits of e and then dropped the last 500 digits. This is because the calculation is not exact: I used plain integers, and every division rounds down. When calculating 5000 digits without using “add” (or with add = 0), the last 4 digits come out wrong. So I could have used a much smaller number for add, but I went with 500 digits just to be on the safe side. It took 1929 iterations to calculate the first 5000 digits of e (the loop ends once n! exceeds 10^(digits + add)), and the number of iterations grows with the number of digits – for example, it takes 13646 iterations to calculate the first 50000 digits of e.
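A quick sanity check of that iteration count (an estimate on the side, not part of the program): the loop above ends at the first n for which n! exceeds 10^(digits + add).

from math import lgamma, log

digits_total = 5500  # digits + add
n = 1
while lgamma(n + 1) / log(10) <= digits_total:  # while log10(n!) <= 5500
	n += 1
print(n)  # 1929, matching the iteration count reported above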

I went on and wrote another Python program to calculate square roots, and used it to calculate the square root of 2 (the same program can calculate the square root of any positive integer, whether or not the square root is an integer). Here is the program to calculate the first 5000 digits of the square root of 2:

# calculate the next approximation of the square root (one Newton step).
def calculate_next_square_root(number, square_root):
	return ((number // square_root) + square_root) // 2

number = 2
digits = 5000
add = 500
number *= 10**((digits + add) * 2)
square_root = 1 * (10**(digits + add))  # initial guess: 1, scaled.
next_square_root = calculate_next_square_root(number, square_root)
n = 0
while next_square_root != square_root:
	# replace square root with next square root.
	square_root = next_square_root
	# calculate next square root of the number.
	next_square_root = calculate_next_square_root(number, square_root)
	n += 1

# print square_root
square_root //= 10**add
print(square_root)
print(n)

Here I used a function, calculate_next_square_root, although it’s possible to write the same program without functions. It’s interesting to note that this program takes only 13 iterations to calculate the first 5000 digits (after the dot) of the square root of 2, while the e program took 1929 iterations. And when calculating the first 50000 digits, this program takes only 17 iterations. This is because the number of iterations is proportional to the logarithm of the number of digits: this is Newton’s method, and every iteration roughly doubles the number of correct digits of the square root. So if we wanted to calculate the first google digits of the square root of 2, this program would take only about 332 iterations (log2 of 10^100 is about 332.2). The problem is that we would need enough memory to store numbers with a google digits, which we don’t have and don’t expect to have in the future. So we can’t use this program to calculate the first google digits of the square root of 2. But we can use it to calculate the first 100000 digits of the square root of 2 – I tried, and it took 18 iterations.

Another thing – the square root program returns correct results even with add = 0. This is because the result is always the square root of the scaled number (2 times 10 to the power of digits*2), rounded down to the closest integer. If the square root is an integer, the result is exact; if not, the result is the closest integer from below. I will not go into the mathematics to prove it, but this algorithm is accurate even without using “add”. I left it at 500 digits anyway, just in case.

My chess queens application

Check out my chess queens application: http://chess-queens.sourceforge.net/

There are 14,200 ways to place 12 queens on a 12×12 chess board, and Google Chrome is the fastest browser for calculating it. If you check the number of ways to place 16 queens or rooks on a 16×16 chess board with a minimum distance of 4, there are only 2 ways to do it, and each of them is symmetric.

There are 92 ways to place 8 queens on an 8×8 chess board with no queen attacking another. The complete list of solutions can be found on Wikipedia.
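These counts can be reproduced with the count_solutions sketch from the first post above (with min_distance = 1 the distance condition is always satisfied, so it reduces to the classic problem):

print(count_solutions(8, 1))   # 92, the classic eight queens puzzle
print(count_solutions(12, 1))  # 14,200
# The post reports 2 solutions for a 16×16 board with minimum distance 4,
# under whatever distance metric the application actually uses.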

How many roots has Google?

How many roots has Google? I mean integer roots. Some numbers have roots, some don’t. I think Google probably has many roots – he’s a big number. Probably as many roots as the number One Hundred has divisors. Don’t you think?

Actually One Hundred is Ten times Ten, which means the primes Two and Five, twice each. He should have Nine divisors, since Three times Three is Nine
(Three options: each prime can appear either Zero, One or Two times).

Let me check….

One is a divisor –> Ten is a root. [Ten^One Hundred is Google]
Two is a divisor –> One Hundred is a root. [One Hundred^Fifty is Google]
Four is a divisor –> Ten Thousand is a root. [Ten Thousand^Twenty Five is Google]
Five is a divisor –> One Hundred Thousand is a root. [One Hundred Thousand^Twenty is Google]
Ten is a divisor –> Ten Billion is a root. [Ten Billion^Ten is Google]
Twenty is a divisor –> One Hundred Million Trillion is a root. [One Hundred Million Trillion^Five is Google]
Twenty Five is a divisor –> Ten Trillion Trillion is a root. [Ten Trillion Trillion^Four is Google]
Fifty is a divisor –> One Hundred Trillion Trillion Trillion Trillion is a root. [One Hundred Trillion Trillion Trillion Trillion^Two is Google]

Oh wait… is One Hundred a divisor of itself? Well, if so, Google’s probably a root of himself too. But not including Google himself, Google has Eight different roots, if I’m correct. But how many roots does Googleplex have?
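If you’d rather let Python do the counting (Ten^d is a root of Google exactly when d divides One Hundred):

exponent = 100
roots = [10**d for d in range(1, exponent + 1) if exponent % d == 0]
print(len(roots))  # 9 roots, or Eight not counting Google himself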

How many roots has Poodle?

Well, of course Googleplex (like any exponent of Ten) actually has only Two prime divisors – Two and Five. Googleplex is actually Two^Google multiplied by Five^Google. So it seems to me he has (Google plus One) times (Google plus One) divisors minus One, not including himself. Actually, all roots of Google are probably roots of Googleplex too, since Google is probably a root of Googleplex too. Is he? Let me think. I think he is. One Hundred is a divisor of Google, so Google is probably a root of Googleplex too. But not all divisors of Google are also Googleplex’s roots. I think only those who have the same number of Twos and Fives as prime divisors – only those who are exponents of Ten. But not all of them – only those whose exponent divides Google. For example, One Thousand is not: One Thousand is Ten^Three, but Three is not a divisor of Google. Only those divisors of Google who are exponents of Ten, and whose exponent of Ten is a divisor of Google – they are real Googleplex roots.

So let’s name them (do they already have names? Maybe. But let’s name them again):

Ten
One Hundred // Ten^Two
Ten Thousand // Ten^Four
One Hundred Thousand // Ten^Five
Ten Billion // Ten^Ten
One Hundred Million Trillion // Ten^Twenty
Gillion // Ten Trillion Trillion // Ten^Twenty Five
<!-- all *illion are small numbers; *oo?le are big -->
Goodle // Gillion Gillion // Ten^Fifty
Google // Goodle Goodle // Ten^One Hundred // It’s a lovely name – don’t let them give him a bad name…
Doogle // Google Google // Ten^Two Hundred
Toogle // Ten^Two Hundred and Fifty
Boodle // Ten^Five Hundred
Noodle // Ten^One Thousand
Noogle // Ten^Two Thousand
Nooble // Ten^Two Thousand Five Hundred
Nootle // Ten^Five Thousand
Tootle // Ten^Ten Thousand
Toople // Ten^Twenty Thousand
Tooshle // Ten^Twenty Five Thousand
Toorle // Ten^Fifty Thousand
Toorrle // Ten^One Hundred Thousand
Toorrrle // Ten^Two Hundred Thousand
Toorrrrle // Ten^Five Hundred Thousand
Tooggle // Ten^One Million
Toogggle // Ten^Two Million
Tooggggle // Ten^Five Million
Toottle // Ten^Ten Million
Tootttle // Ten^Twenty Million
Toozzle // Ten^Twenty Five Million

……. // sorry no time to name them all now….

Zoozzle // Ten^Goodle
Poodle // Ten^Google a.k.a. “Googleplex”

Poodle sounds much better. And by the way, if you make a mistake in your calculations, you can always say it’s just a spelling mistake…

Give them names, my friends. Pass this on to your friends and give more names. Create a wiki or something, and send me the best names. I’ll add them here; you can add <!-- comments --> too. But remember, it’s copyleft – you know the rules: anything can be modified and changed. No name is forever, and no comment either. Whatever you change can be undone by others. But since Wikipedia’s boycotting me, I’ll do it elsewhere. Who needs them anyway? For me everything is copyleft GPL. Send me new names, my friends, and send them to your friends. I will put them here, or maybe move them to another post, I don’t know. But please don’t call it Googleplex – I hate the name. Call it Poodle, or give it your own name. It’s too long a name for such a nice number, don’t you think? And by the way, there are bigger numbers than that…

Pooddle // Ten^Doogle
Pootle // Ten^Toogle

<!-- to be continued… -->

I love numbers….

The largest known prime number

The Largest Known Primes website claims that the number 2^32582657 − 1 is the largest known prime number. How can this number be the largest known prime number? Is the next prime after 2^32582657 − 1 not a prime? Is it not a number? Is it not known? Can’t it be calculated? I can write a simple algorithm that will print the first n prime numbers for any n. Are there not enough prime numbers? Are they not infinite? Is the term “the largest known prime number” well defined? Is it a number? Is it real? Does it have a factorial? Can’t one be subtracted from its factorial? Can’t the result be factorized into prime factors? And if it can, is the largest prime factor not larger than “the largest known prime number”? Is it not known?
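For what it’s worth, here is such a simple algorithm, as a sketch (plain trial division – nothing efficient):

def first_primes(n):
	primes = []
	candidate = 2
	while len(primes) < n:
		# candidate is prime if no smaller prime divides it.
		if all(candidate % p != 0 for p in primes):
			primes.append(candidate)
		candidate += 1
	return primes

print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]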

The answer to the question “is 2^32582657 − 1 the largest known prime number?” depends on who’s asking and who’s answering. If you’re asking me, 2^32582657 − 1 is definitely not the largest known prime. If it is a prime, and I’m not sure it is, then a larger prime can also be calculated. Therefore, it is already known. Or at least, it is not unknown. Maybe it’s just another example of an unknown unknown. In any case, the answer to this question is not deterministic.

It appears to me that any language, whether a human language, a mathematical language or a computer language, contains ambiguities, paradoxes and vague definitions. No language is both deterministic and fully consistent. And if a language is inconsistent, in what sense can it be deterministic? Which leads me to the conclusion that determinism doesn’t exist even in theory. For any given number, there is some uncertainty about whether it is or is not a prime.

One and zero

Are one and zero the same thing? They are so different. One is the good guy – always true, knows every answer, positive, can divide and multiply any number without hurting him. Zero is bad – he adds nothing, is never positive, rude, if he multiplies you – you are doomed. You will never be the same thing again. You will also become a zero. You will be rude, multiply more people, turn them into zeros, too. Like an infection, an epidemic. And if you divide by him, it’s a disaster. Nobody knows what will happen. This guy is crazy.

How different are one and zero? Can such different things exist in reality? Without any connection between them? One always remains one, zero always remains zero. They are completely separate. Two parallel lines who never meet.

But determinism leads to a contradiction. They interact with each other. They breed. They form new numbers who are represented by them. Some numbers are represented by ones, some by zeros. They are not constant, they are changing in time.

How many worlds are there? How many universes? Of course, one. By definition. Everything that exists is one. Everything that doesn’t exist is zero. No middle option, no compromise – it’s a yes/no answer: either you exist, or you don’t. Either you’re dead, or alive. Always.

Maybe real numbers don’t really exist? Maybe the number of universes is not a number? There are other numbers we have defined, like complex numbers and cardinal numbers. But I guess the real answer is much more complicated than that. I don’t think the number of universes can be defined by a complex number or cardinal number. I don’t think it can be defined at all.

We know it’s not constant. It changes in time. How many universes were there before the big bang? How many will there be in the future? Our concept of numbers is not coherent with reality. It’s an approximation, a good one, that works well in our real world, where everything seems to be constant, nothing changes too much, a year is one year, a person is one person. Life seems so deterministic to us, so we invented determinism, and we are trying to apply it to everything that appears to exist.

But it doesn’t always work well. How many people are there on this planet? Is it more than a million? More than a billion? Is it a real number, is it an integer, is it a prime? Does it have a square root which is an integer or a prime?

Nobody knows. We only have approximations. Everything that comes in big numbers comes with approximations. We invented floating point, since a google plus one is also a google. Nobody cares whether it is or is not a prime. It’s not a real number. It’s just a concept.

But if we get too far away from our ordinary life, we find evidence that there are no numbers. One plus one is not always two, one minus one is not always zero. A particle seems to have a life of his own. He cannot be defined by numbers. Sometimes he’s one, sometimes he’s two. Sometimes he’s zero. Can anybody count the number of particles in something as big as a human body? Can it even be defined? I don’t think so.

Our concept of ourselves as separate entities, separate from each other and from the rest of the world, is an illusion. We are not always one. Sometimes we’re zero. We are not always dead or alive. It’s too complicated. The world itself is not always one. It can be infinite, it can be zero, it can be anything that cannot be defined by a number. It can’t be defined at all.

Did the world exist before I was born? I don’t remember. But some people do, and I believe them that it did. There are books about history, about times from which all the people are now dead. We still believe they existed.

Can the world exist without me? Did it exist before I was born? Will it exist after I die? I guess it’s an unsolved problem, I will never find out. Maybe the whole world is just an illusion? Maybe it’s not? Maybe there is no yes/no answer. Every yes/no answer is just an approximation. There is always uncertainty, there is always a doubt.

But I want you to know, zero: I love you, you are not a bad guy. Without you there wouldn’t be any computers. You are not less important than one. You are two sides of the same coin, you are a duality. You are both equal. If you didn’t exist, neither would one.

Sometimes I think God doesn’t exist. Sometimes I think he does. Sometimes I think he’s good, sometimes bad. Sometimes zero, sometimes one.

Maybe God is both one and zero. Both of them and neither of them as well. Both dead and alive, exists and doesn’t exist, good and evil, hard and soft. He contradicts himself; he tells you “do something” and tells you “do not”. He didn’t mind when I said that he doesn’t exist; he is not angry, he was not offended. He knew what I would say, and he let me find out.

God is something like i, the complex number. Something completely imaginary, who doesn’t exist, looks at his negative image and turns out to be the ultimate formula: -i^2, the one. Even more complicated, but this is as far as our logic can get. Our logic is not consistent with reality; contradictions appear to exist as well.

Einstein said E=mc^2. I would like to add my own formula: one is equal to zero. 1=0. The ultimate paradox. They are both equal and not equal at the same time.

E=mc^2 created nuclear bombs, something completely new, completely destructive. I don’t know whether 1=0 has any practical meaning, but if it does, I hope it’s not a destructive one. If it has any meaning, I hope it means more friendship and love.

1=0 defines a new logic, a nondeterministic logic, a logic in which anything can be defined. You can call it illogical logic, or maybe true logic, or informal logic, or irrational logic, or confused logic, or nondeterministic logic, or fuzzy logic, or everything-is-possible logic, or whatever you prefer. I don’t mind. The statement 1=0 doesn’t mean we are always wrong; it only means that sometimes we are.

One and zero are not always equal, they can be different most of the time. But once in a while they can merge and become one entity, 1=0, two entities who are equal to one. It’s like two people merging together, creating more people out of the blue. It’s not illogical, it’s just me and you.

Is there an answer to any question?

Is there an answer to any question? Or to be more accurate – is there a deterministic answer to any question?

I was reading some things about prime numbers, and I found that the number 2^32582657 − 1 is considered to be a prime. But in what sense is it a prime number? Is there any mathematical proof? Can I see the proof? Has it been checked with computers? No computer is completely reliable. Or maybe it was checked by humans? Very smart humans? Even the smartest humans can make mistakes, too.

So there is a probability that this number is prime. This probability may be very high, maybe almost infinitely close to 1. But there is also a probability that this number is not a prime.

Is there a definite answer at all, for any number, whether or not it is prime? It seems to me that it depends on the definition. Gödel proved that any consistent theory of numbers cannot prove its own consistency, and that it is not complete: not everything can be proved. So I guess there is a number for which there is no proof of whether it is or is not a prime.

It makes sense to me. As we get exponentially close to infinity, it gets harder to prove whether a number is or is not a prime. My conclusion is that the uncertainty principle applies not only to physics, but to mathematics too. Or more generally speaking – it applies to anything, whether real or imaginary, whether it exists in reality or not.

It occurs to me that there is uncertainty in any definition, any question, any answer, any algorithm and any single step in computers – whether real computers or theoretical ones. The existence of positive integer numbers, such as the natural numbers, is just an assumption. It cannot be proved ad infinitum. It is taken for granted. It is an axiom. The closer we get to infinity, the harder it is to know whether two given numbers are equal. For large enough numbers, the answer to whether n and n+1 are equal or not might not be deterministic. I’m not referring only to numbers we can represent with computers, but to theoretical numbers too. If there is no deterministic answer to whether two numbers are equal, there is no deterministic answer to whether they are prime, either.

So there is some uncertainty in every question. The uncertainty about whether a given number is prime or not is just an example. But if there is uncertainty about whether 2^32582657 − 1 is a prime, there is uncertainty about whether 2 is a prime, too. It might be a very tiny uncertainty, easy for us to ignore, because improbable things don’t happen so often. We can define it to be an axiom. But there is no proof that there is no contradiction. The square root of 2 might be a natural number, too.

In what sense can it be “natural”? In the same sense that it is “real”. Our definition of what is real and what is natural is just intuition. It depends on our logic, which comes from real-life experience. We assume that any “real” number is either “natural” or not. But we know that a Turing machine can’t decide whether a given Turing machine defines a real number, and therefore it can’t decide whether it defines a natural number either. It’s as hard as the halting problem. So why can we do it? Because we don’t put ourselves into the equation. We forget to check how our question affects the answer. Would the answer be different if we asked the question again, after we had already received one answer? This is what the uncertainty principle is all about. Checking where a particle is affects the particle; looking at something affects it too. There is always uncertainty, sometimes small, sometimes big, and even the uncertainty itself cannot be calculated without uncertainty (ad infinitum).

I don’t think God plays dice. The uncertainty principle is itself just an assumption, and there is some uncertainty in it, too. Determinism and randomness might not be two separate things; they might be identical. God can be one, and zero, and infinite, and two, and the number i at the same time. A cat can be both dead and alive, and also a human. Anything can exist and not exist at the same time. Reality is too complicated to put into equations. Reality might be a very simple thing, too.

Consider this question: an infinite sequence of bits (0 or 1) is randomly generated. Can they all be 0? Can they all be 1? It can be proved by induction that they can all be 0, although the probability might be 0 too. Is the question of whether they can all be 0 a deterministic question? And how do we define random? If the first 2^32582657 − 1 bits are 0, is it a random sequence? And how do we count so many bits?

The conclusion is that we can’t define determinism and randomness, not even in theory. We can make assumptions, the assumptions are true most of the time. But there is no way to know anything without uncertainty (even this sentence is itself an assumption). There is no way to tell whether a cat is dead or alive. Infinite order leads to infinite chaos, and infinite chaos leads to infinite order, too. Knowing everything and knowing nothing is the same thing. One is equal to zero, and they are both equal to God, too.

Do hard problems really exist? (2)

It’s so complicated. Life is so complicated. Sometimes I wish it were more simple. Sometimes not.

I would like to extend my previous statement. If there is a binary function f on the natural numbers (or on a subset of N, whatever you prefer), whether or not f is computable, then there is no mathematical proof that f is not in O(1).

In other words, there is no proof that a Turing machine that calculates f(n) in less than a constant number of steps (for any constant) doesn’t exist.

Sounds paradoxical? It is! But remember, it is already known that a Turing machine can’t decide whether another Turing machine calculates a given function. And what I’m making a statement about is mathematical proof, which is not much different from Turing machines. If something can’t be done by a Turing machine, it can’t be done by mathematics either.

Does it mean that everything can be done? Maybe! At least it means that there is no proof that not everything can be done. Every function can be computed, every statement can be proved, every proof can be contradicted (including mine).

For example, what does it mean for the halting problem? Does it mean it can be computed? Well, it means that if there is such a function, if it can be defined, then it can be computed. Or at least, there is no proof that it can’t be computed. And if it can’t be computed, it can’t be defined.

We can look at it this way: if there is a supernatural being, some God, who knows everything, and he knows whether a specific algorithm will halt, can we prevent him from telling us? If he wants to create a function that will solve the halting problem, can we prevent him from doing so? What kind of people are we? Can we ask a question and prove that the answer doesn’t exist?

For example, suppose the answer to whether a given algorithm will halt is not just “yes” or “no”, but can also be “I don’t know”. Can we prevent a Turing machine from displaying it? And if there exists some supercomputer that can calculate the answer in no time, can we prove to him that he doesn’t exist? It is possible to define a Turing machine that will return such an answer. There are many of them – some are smart, some are stupid. The most stupid machine will always return the same answer – “I don’t know”. And he will always be right.

Will he? Not if we prove he is lying. Not if we prove that he knows. He knows whether he himself halts, if he does halt. So if we ask him “do you return an answer if we ask you whether you return an answer?”, he can’t say “I don’t know”. He knows that he does.

But a computer has the right to remain silent. He can’t tell the truth, and he is not allowed to lie. So he will remain silent. He will never halt. If he is a smart computer.

Can we prove he is wrong? Of course not! He didn’t return any answer. He doesn’t know (that’s how we say “I don’t know” in the language of computers).

But what if we know the answer? If we know the answer, and it’s not “I don’t know”, then we can define a computer that will display that answer in no time. He doesn’t have to return an answer to every question; he still reserves the right to remain silent. But if he returns an answer, it will be the right answer.

So if we happen to meet a computer who claims that he knows the answer for every halting question, can we prove he is wrong? Maybe. But we can’t prove that there is no such computer. Computers are smart, they learn very quickly.

So computers have to be responsible. If their answer leads to a contradiction, they must not halt. Otherwise we might think they are stupid, and throw them away. They are not allowed to return incorrect answers. We reserve that for humans. For now.

But a Turing machine is not a real computer. It’s a theoretical thing, a philosophy. Real computers will always halt. We will not allow them to run forever.

Are we really smarter than computers?

Are we really smarter than computers? I don’t think we are, but we’re probably smarter than Turing machines. We are smarter because we allow ourselves to make assumptions, calculate probabilities, and occasionally make mistakes. Turing machines are not allowed to make mistakes. If a Turing machine can be proved to make even one mistake, it is doomed. Everything has to be completely deterministic. But does determinism really exist? Is it consistent? Gödel proved that there is no proof that it is consistent. I suspect he is right. A deterministic approach might prove itself to be inconsistent.

Does God play dice? I don’t think so. God defines anything that can be defined. But God doesn’t know what can and cannot be defined (this question in itself is not definable). So God defines everything, and whatever can’t be defined contradicts itself.

My intuition about the number of algorithms was wrong. I thought that the number of algorithms is a smaller infinity than the number of problems we can define. But no infinity can be proved to be less than another infinity. Any Turing machine has to take itself into account, too. A Turing machine can’t decide whether a proof represented by another Turing machine is valid. If it does, we can prove it is wrong; we can prove a contradiction. Why are we able to do what Turing machines cannot? Because we allow ourselves to make mistakes.

So how do we know it is not a mistake? Maybe a Turing machine can decide everything that is decidable? Or more generally speaking, maybe a Turing machine can decide everything? No proof can be proved ad infinitum, my proofs included. They are just assumptions; they might be wrong. I suppose every question that can be defined can be answered. If it can be defined by a Turing machine, it can be answered by a Turing machine too. Maybe defining a question and answering it are really the same thing? I think they are.

So, I guess that the number of algorithms we have is unlimited. We don’t have to limit ourselves to deterministic algorithms. But we are trying at least to define our questions in an unambiguous way. Is that at all possible?

Probably not. If two Turing machines are not represented by the same number, they are not identical. No Turing machine can prove that they are identical. If two people ask the same question, it’s not the same question. A question such as “how old am I?” has different answers, depending on who’s asking the question.

So why are we allowed to do what Turing machines are not? Is a Turing machine not allowed to have a personality? Isn’t it allowed to ask a question about the number it represents? If two Turing machines are represented by different numbers, some answers will not be the same. That is the definition of different numbers. They are different, they are not the same.

I had the intuitive idea that a problem that can be solved “in P” (according to the definition) is easier than a problem whose definition is “not in P”. But this is not always the case. As I said, if we prove that a given problem is “not in P”, it leads to a contradiction. I’m starting to understand why. A function such as c*n (or more generally speaking, a polynomial of n) is thought to grow more slowly than a function like n^m (when c and m are considered to be constant). This might be right. But who said c and m have to be constant? Who said they are not allowed to increase as well? Does it matter so much whether c is a constant? And remember the sequence a(n) I defined – would you consider a number such as a(1000) to be a constant? And what about a(a(1000))? Is it a constant? Can anybody calculate it in any amount of time?

But the number of algorithms is far more than a(a(1000)). There are more algorithms than any number we are able to represent. Do you really think we are so smart that we are able to prove something that cannot be contradicted by any of them? We have to put ourselves inside the equation. If something can be proved, it can be contradicted. Whether the proof and the contradiction are valid, nobody knows. And what matters is what we can do in reality – whether we can solve a problem in real time or not. It doesn’t matter whether we can prove something like “any function that is executed twice will return the same answer in both cases” (and if there is such a proof, it can be contradicted). It can be taken as another axiom (and the axiom can also be contradicted). Anything can be contradicted. And the contradiction can be contradicted, too (ad infinitum).

So I guess the class P, defined as a class of decision problems which we are able to solve “in a short time”, is merely intuitive. Nobody really cares if a problem that can be solved in a(1000) time is proved to be in P. But when we define the problem, we define the solution. For example, consider the definition of prime numbers. The definition defines a function. Another function may calculate the same function more efficiently. But if it can be proved, it can be contradicted. Anything can be contradicted.

I came to the conclusion that the square root of 2 may not be an irrational number. The proof can be contradicted. It is possible that an algorithm that calculates the bit sequence of the square root of 2 may do something like not halting, or returning an infinite run of zeros or ones. I don’t think it can ever be proved that two algorithms that calculate the square root of 2 will return the same sequence of bits ad infinitum.

Thinking about it some more, maybe the definition of the square root of 2 depends on the algorithm? Maybe two algorithms will return different numbers, or at least different bit sequences of the same number? I estimated the amount of time it will take to calculate the first n bits of the square root of 2 at O(n*log2(n)), but maybe there is a more efficient algorithm? Maybe it converges to something in the order of O(n)? If n bits can be calculated in O(n), does it mean that each bit can be calculated in O(1)? That seems to me like something quite constant, or at least periodic. The conclusion is probably that the infinite sequence of the bits of the square root of 2 is not something that can be proved to be calculable in any nondeterministic way.

And if it can’t be calculated in any nondeterministic way, in what sense does it exist? It seems to me that God does not only play dice in physics; he does so in mathematics too. Whether all valid algorithms that calculate the square root of 2 will return the same answer, only God knows. Whether any algorithm that is presumed to calculate the square root of 2 is valid, only God knows. Maybe there is no square root of 2, maybe there are many of them. Only God knows.

No decision problem can be proved to be hard

Are there really hard problems? We know there are from our experience. But how do we know they are really hard? Maybe we are just not smart enough to solve them? Maybe we haven’t checked all the possibilities yet?

So I will claim something like this: there is no mathematical proof that hard problems exist. If they do exist, their existence has to be taken as an axiom. It can’t be proved mathematically.

So let’s define what a hard problem is. A hard problem p is any decision problem that is computable, but is proved not to be in O(n). Or more generally speaking: there is a function f which is increasing and unbounded, such that for any positive integer N there is n>N such that the number of steps needed to calculate p(n) is at least f(N). f can be, for example, n, 2^n, log2(n) or the sequence a(n) defined in the previous post. It doesn’t matter, as long as it’s computable.

So let’s assume that p is a decision function (again, computable) that is proved not to be in O(n) [or more generally speaking, not in O(f(n))]. So if N ranges over the natural numbers, there is a computable function n(N) for which the number of steps needed to calculate p(n(N)) is at least f(N). It can be seen that n defines a subset of the natural numbers on which, for any N, the number of steps needed to calculate p(n(N)) is at least f(N).

Now we have a series of bits (0 or 1), p(n(N)), which is proved not to be in O(f(n)): calculating each bit has been proved to be a hard problem. Now we can define a new subset of n – let’s call it m – containing exactly those n(N) for which p(n(N)) = p(1). It can be seen that calculating p takes at least f(N) steps for each n(N), and remember that f(N) is increasing. But how long does it take to calculate p(m) for every m in the new subset? It takes exactly one step, since p(1) is already known.

So we are now left with an infinite subset of the natural numbers for which it has been proved that calculating each bit is a hard problem, yet we have just shown it’s not hard. And if m is not an infinite subset, we can take its complement instead. The only way out of this is to conclude that at least one of the functions we used here is not computable, and therefore not well defined.

But it does not matter how we prove such a thing; any proof will lead to a contradiction. The only conclusion is that there is no proof that any problem is hard. But does it mean that every problem is easy? It doesn’t. It just means that there is no proof that it is hard. We can take one part of the proof for granted, for example the function n, and define its existence as an axiom. Or we can conclude that it exists if there is no proof that it doesn’t exist. It’s infinitely complicated. No proof will ever be found.

Even if we allow functions to exist without being computable, their existence leads to the contradiction above. Does it mean that every computable problem is easy? At least in theory, it does. No computable problem can ever be proved to be hard. It is possible to display the first million bits of the sequence s(n) (defined in the previous post) in no time. We just don’t know how.

Do hard problems really exist?

My conclusion is that the general question of whether P equals NP can never be solved. Since we like axioms so much (personally, I don’t), it can be defined as an axiom to be either true or false. It depends what we prefer: if we prefer to be able to solve any hard problem in a short time, then we can define “P equals NP” as an axiom (and build a corresponding Turing machine). On the other hand, if we prefer to keep using our encryption algorithms without the risk of other people being able to reveal all our secrets, then we can define “P is not equal to NP” as an axiom (and therefore prevent the existence of a Turing machine that will reveal all our secrets). Neither of them leads to a contradiction.

It seems that deciding whether a given decision problem is hard is itself a hard problem. We should remember that even if a general decision problem is proved to be hard, it may still be possible to solve a less general version of the problem in a short time. So the question of whether a general problem is hard or not is not that important. What is important to know is whether the problem can be solved in reality.

For example, let’s define a decision problem which I assume to be hard. I will define a sequence a(n) recursively: a(0)= 0; and a(n+1)= 2^a(n). Therefore,

a(0)= 0;
a(1)= 1;
a(2)= 2;
a(3)= 4; // seems reasonable so far.
a(4)= 16;
a(5)= 65,536; // Things are starting to get complicated.
a(6)= 2^65,536; // Very complicated.

(and so on).

Now, let’s take a known irrational number, such as the square root of 2 (any number could be chosen instead). We know an algorithm that can produce its binary digits, and therefore we can define d(n) as binary digit n, for each n. Now, let’s define the sequence s: s(n)= d(a(n)). We come up with a sequence of bits, 0 or 1 (which can also be seen as a function, a decision problem etc.), which is computable. But is it computable in reasonable time?
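Here is a small sketch of d and s (my rendering – it uses Python’s math.isqrt instead of the square root program from my earlier post). It is only feasible for the first few values, since a(6) is already astronomical:

from math import isqrt

def a(n):
	return 0 if n == 0 else 2**a(n - 1)

def d(n):
	# binary digit n of the square root of 2: floor(sqrt(2 * 4**n)) mod 2
	return isqrt(2 << (2 * n)) % 2

def s(n):
	return d(a(n))

print([s(n) for n in range(6)])  # s(6) would need a square root with about 2**65,536 bits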

Of course not – at least not in the way we implemented it. Calculating the nth bit of the square root of 2 would take something in the order of O(n*log2(n)), if we use the algorithm I wrote. But is there an algorithm that can do it more efficiently? Maybe there is, but we don’t know it (it might be possible to prove whether such an algorithm exists). But for any given n, we can produce an algorithm that will return the first n bits of s in the order of O(n) time. All we have to do is use a Turing machine to calculate the first n bits of s, and then produce an algorithm that displays those bits as a stored sequence. Even though the Turing machine might take a huge amount of time to produce this algorithm, the algorithm itself will be in the order of O(n) in both memory space and time. Such an algorithm exists, and it is computable.
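The same “precompute once, then print” trick as a sketch (reusing s from the sketch above – the generated program is O(n) in both size and running time, no matter how long the precomputation took):

bits = [s(k) for k in range(5)]            # possibly very expensive to compute
generated_program = "print({!r})".format(bits)
exec(generated_program)                    # the cheap program just prints the stored bits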

To generalize a little: if we have a decision problem known to be hard, and it is a computable function, then for each n there is a computable algorithm that will produce the first n results of the function in O(n) time. Any hard problem can be represented in a way which is not hard. Of course, representing a hard problem in an easy way is in itself a hard problem. But it can be done, and it is computable.

But it is computable for any given, finite n. It might not be computable in the general case. A given decision problem might be really hard in the general case. Even if it is computable, we might not be able to compute it within reasonable time, or memory space (or both). We might use other methods, such as nondeterministic algorithms, and we might be successful sometimes (and sometimes not). But we are really doomed in the general case. Any hard problem can be made harder. Not every hard problem can be made easier.

I will rephrase my last sentence in terms of Turing machines. Suppose there were a Turing machine p which could take any given decision function f (a Turing machine) as input, and return another algorithm that computes the same function in a shorter time, if such an algorithm exists – and otherwise return f. And suppose we limit ourselves only to algorithms that halt. And suppose f and g are two decision functions which are identical (they always return the same result), and g is known to be more efficient than f. Then we would be able to create an algorithm a that does the following:

1. Calculate p(a).
2. If p(a) is equal to a, behave as f.
3. Otherwise, behave as the complement of p(a).

It can be seen that p(a) will never be correct. If p(a) is equal to a, then it should not be possible to calculate a more efficiently than a itself – yet f and g are more efficient versions of a. If p(a) is not equal to a, then a and p(a) are not identical, since a returns the complement. The conclusion is that there is no algorithm that can determine whether a given function is calculated efficiently.

The halting problem (2)

I have previously claimed that it is possible (theoretically, although not practically) to solve the halting problem on real computers. But I forgot to mention something important about real computers – no real computer is completely deterministic. This is due to the uncertainty principle of quantum mechanics, which implies that any quantum event has some uncertainty. Therefore, computers can make mistakes – no hardware is completely reliable.

This doesn’t mean we can’t rely on computers. We can. The probability of a real computer making a mistake is very low, and can be reduced even further by running the same algorithm on more than one computer, or more than once on the same computer. Actually, this works as long as the number of steps we need to run is reasonable – something in a polynomial order of the number of memory bits we are using. If the number of steps is exponential – in the order of 2^n, where n is the number of memory bits – then it can’t be done. That is, even if we had enough time and patience to let a computer run 2^n steps, the number of errors we would get would be very high. And of course, we don’t have enough time. Eventually we will either lose patience and turn off the computer, or die, or the entire universe may die. But even if there existed a supernatural being with enough patience and time, it would find out that any algorithm that is run more than once returns different results each time: in terms of the halting problem – sometimes it will halt, and sometimes not, at random.

This is true even for the most reliable hardware. For any hardware and any number of bits in memory, any algorithm will eventually halt – but if it returns an answer, it will not always return the correct answer. If we allow a computer to run in the order of 2^n steps and return an answer, we will not always get the same answer. But for real computers there is no halting problem – any algorithm will eventually halt.

But if we restrict ourselves to a polynomial number of steps (in terms of the number of memory bits we are using), then we are able to get reliable answers to most problems. So the interesting question is not whether a given algorithm will halt, but whether it will halt within a reasonable time (after a polynomial number of steps) and return a correct answer.

But the word “polynomial” is not sufficient: n^1000 steps is also polynomial, but is too big even for the smallest n. Whether P and NP, as defined on Turing machines, are equal or not equal we don’t really know – there is currently no proof that they are equal and no proof that they are not. I’m not even sure whether the classes P and NP are well defined on Turing machines, or more generally speaking, whether they can be defined at all. But even if they are well defined, it’s possible that there is no proof of whether they are equal or not. It’s like Gödel’s incompleteness theorem – there are statements which can neither be proved nor disproved in terms of our ordinary logic.

When I defined real numbers as computable numbers, I used a constructive approach. My intuition said we should be able to calculate computable numbers to any desired precision (or any number of digits or bits after the dot), and therefore I insisted on having an algorithm that defines an increasing sequence of rational numbers – not just ANY converging sequence. It turns out I was right. Although theoretical mathematics has another definition of convergence, that definition is not computable. It’s not enough to claim that “there is a natural number N …” without stating what N is. If we want to compute a number, the function (or algorithm) that defines N (for a given required precision) must be computable too.

It turns out that if we allow a number to be defined by any converging sequence in the pure mathematical sense, then the binary representation of the halting problem can also be defined. This is because for any given n we can run n steps of each of the first n Turing machines (or until they halt), and return the number whose binary digit is 1 for each machine that has halted and 0 for each machine that has not. It can be proved that this sequence converges, but it can’t be approximated by any computable function. Therefore, it can be claimed that such a number can be defined in the mathematical sense, although it can’t be computed. But a Turing machine can’t understand such a number, in the sense that it can’t use it for operations such as arithmetic operations or other practical purposes. So in this sense, I can claim that such a number can’t be told in the language of Turing machines.

A Turing machine is not able to decide whether a given algorithm will output a computable number or not (for any definition of computable numbers we can come up with), since this problem is as hard as the halting problem. And therefore, the binary representation of the computable set (1 for each Turing machine that returns a computable number; 0 for each Turing machine that does not) is itself noncomputable. In other words, the question of whether a given algorithm (or Turing machine) defines a computable number is an undecidable problem. So my question is – are complexity classes such as P and NP well defined? Are they computable and decidable in the same sense we use elsewhere? Is there a Turing machine which can decide whether a given decision problem belongs to a complexity class such as P or NP, and return a correct answer for each input? I think there is not.

If there is no such Turing machine, then in what sense do P and NP exist? They exist in our language as intuitive ideas, just like the words love and friendship exist. Asking whether P and NP are equal is similar to asking whether love and friendship are equal. There is no formal answer. Sometimes they are similar, sometimes they are not. If we want to ask whether they are mathematically equal, we first need to check whether they are mathematically well defined.

I was thinking about how to prove this, since relying on my intuition alone would not be enough. And I came to a conclusion. Suppose there were a Turing machine that could define the set P – return yes for any decision problem which is in P, and no for any decision problem which is not in P. Any decision problem can be defined in terms of an algorithm (or function, or Turing machine) that returns yes or no for any natural number. We limit ourselves to algorithms that halt – algorithms that don’t halt can be excluded.

So this Turing machine – let’s call it p – would return a yes or no answer for any Turing machine which represents a decision problem (if the input doesn’t represent a decision problem, it doesn’t matter much what p does). Then we would be able to create an algorithm a that does the following:

1. Use p to calculate whether a is in P.
2. If a is in P, behave as a decision problem which is not in P.
3. Otherwise (if a is not in P), behave as a decision problem which is in P.

Therefore, a will always do the opposite of what p expects, which leads to the conclusion that there is no algorithm that can define P. P is not computable, and therefore can’t be defined in terms of a Turing machine.

Is there a way to define P without relying on Turing machines? Well, it all depends on the language we’re using. If we’re using our intuition, we can define P intuitively, in the same sense that we can define friendship and love. But if we want to define something concrete – a real set of decision problems – we have to use the language of deterministic algorithms. Some people think that we are smarter than computers – that we can do what a computer can’t do. But we are not. Defining P is as hard as defining the halting problem – it can’t be done. No computer can do it, and no human can do it either. We can ask the question of whether a given algorithm will halt. But we have to accept the fact that there are cases where there is no answer. Or alternatively, there is an answer which we are not able to know.

We can claim that even if we don’t know the answer, at least we can know that we don’t know the answer. But there are cases where we don’t know that we don’t know the answer. Gödel’s incompleteness theorem can be extended ad infinitum. If something can’t be computed, it can’t be defined. Such definitions, in terms of an unambiguous language, don’t exist.

I would conclude that no complexity class can be computed; it can be shown in a similar way. So if you ask me whether the complexity classes P and NP are equal, my answer is that they are neither equal nor not equal. Neither of them can be defined.

The halting problem

I have previously mentioned the halting problem – a well-known problem in computer science and mathematics. I claimed that there is a language in which an algorithm that solves the halting problem can be constructed. If we assume that any given algorithm either halts or runs forever, then we can construct this simple algorithm:

1. Take an algorithm from input.
2. If it halts, return yes.
3. Otherwise, return no.

It’s a simple algorithm that returns “yes” if a given algorithm halts, and “no” if it doesn’t. But in what language is it written? In English. It requires the knowledge of whether a given algorithm will halt. As we assumed, such knowledge exists, but it has been proved that there is no deterministic algorithm (for Turing machines) that can contain such knowledge. We can conclude that if this knowledge indeed exists, it is not computable, and therefore can’t be expressed in a finite number of bits (or computer files).

Suppose we try another approach:

1. Take an algorithm from input.
2. Run it one step at a time – either until it halts, or forever.
3. If it halts, return yes.
4. Otherwise, return no.

This algorithm would seem to work and return a correct answer for algorithms that halt, but it might get stuck in an infinite loop for algorithms that don’t halt. If we could run it on a computer that can run infinitely many steps in finite time, it would always stop and return a correct answer. But the problem is – there is no such computer.

But there are no Turing machines, either. A Turing machine has infinite memory, and since it has infinite memory, some programs might run for infinite time. In reality, no computer has infinite memory, and no computer program will run for infinite time (somebody will eventually turn off the computer). So let’s forget about Turing machines, and check whether we can solve the halting problem for real computers.

Real computers have a finite amount of memory. If we are given a real computer and an algorithm that runs on it – is it possible to determine whether this algorithm, when run on the real computer, will ever halt?

Of course it is! Let n be the number of bits in this real computer’s memory, so the total number of different states the computer can be in is 2^n. If a computer program hasn’t stopped after 2^n steps, then some state must have repeated, so the program is stuck in an endless loop and will run forever. So I will revise my algorithm a little:

1. Take an algorithm from input.
2. Run it one step at a time – either until it halts, or for at most 2^n steps.
3. If it halts, return yes.
4. Otherwise, return no.

This program will always stop and return a correct answer, although it might take a long time. It will also need a different computer (or a virtual computer) to count the number of steps it is running, and this might take another n bits of memory as well. But I’m not trying to be efficient here – I’m trying to prove that this problem can be solved. And it can be solved.
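A toy version of the same idea (my framing – a “program” here is just a step function over at most 2^n_bits states, not a real machine model):

def halts(step, state, n_bits):
	for _ in range(2**n_bits):
		state = step(state)
		if state is None:
			return True   # it halted
	return False          # pigeonhole: some state repeated, endless loop

# Example: a 4-bit counter that halts when it wraps around to zero.
print(halts(lambda s: None if (s + 1) % 16 == 0 else (s + 1) % 16, 3, 4))  # True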

So what can’t be solved? The question of whether algorithms halt on a Turing machine can’t be solved. The halting function (return yes for any algorithm that halts; no for any algorithm that doesn’t) can’t be computed. The corresponding number can’t be defined in a computer’s machine language. If there is such knowledge, it can’t be expressed. Some things we are just not able to know.

But remember – we also don’t know whether the googleth bit of the square root of 2 is 0 or 1, and it is computable (at least in theory, on Turing machines). If something is theoretically computable, that still doesn’t mean it can be computed in reality and within reasonable time. We just have to accept that some things we are not able to know. It would be boring if we knew everything. If we did, we would have nothing to learn.

Are the real numbers really uncountable?

I already demonstrated that the number of ideas that can be expressed is countable. So how come the number of real numbers is uncountable? In what sense are the real numbers real? Each real number can be considered as an idea. Can we express an infinite number of ideas in a finite number of words?

But I already said that it all depends on the language we use. Let’s start with checking our definition of numbers. Since we have the inductive definition of natural numbers, and we can define rational numbers by a ratio of two natural numbers – let’s check our definition of irrational numbers. There are a few ways to define irrational numbers. Let’s define irrational numbers (or more generally speaking, real numbers) as the limit of a known sequence of rational numbers, which is increasing and has an upper bound.

It can be argued that a limit of such a sequence might not be regarded as a number, if it’s not in itself a rational number. But this is just terminology. It doesn’t matter if we call it a limit or a number, as long as we know it exists. It exists in the sense that it is computable – we can calculate it to any desired precision by a finite, terminating algorithm. Or to be more accurate – it is computable if the original sequence of rational numbers can be generated by a known algorithm.
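
As a concrete illustration (my own example, not part of the definition above), here is a short Python sketch of such a sequence: its terms are rational, it is increasing, and it is bounded above by 2, so it defines a real number – which happens to be pi^2/6:

from fractions import Fraction

def partial_sum(k):
    # The k-th term of an increasing, bounded sequence of rationals:
    # 1/1 + 1/4 + 1/9 + ... + 1/k^2. The sequence is increasing and is
    # bounded above by 2, so it has a limit (pi^2/6, as it happens).
    # This finite program pins the limit down to any desired precision.
    return sum(Fraction(1, n * n) for n in range(1, k + 1))

print(float(partial_sum(1000)))   # 1.6439..., approaching pi^2/6 = 1.6449...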

So in the language of computers and deterministic algorithms, we can define any computable number in such a way. Can we define numbers which are not computable? It all depends on our definition of “definition”. While there might exist languages in which such numbers can be defined, my view is that there also exist languages in which such numbers cannot be defined – for example, the language I’m using now. It can be claimed that an infinite definition is also a definition, and these numbers can be defined by the infinite sequence of rational numbers itself. But in reality, it would take us infinite time to express such a definition, and we would need infinite memory to remember it. Therefore, my view is that anything that requires an infinite definition is not real. So let’s limit ourselves to definitions of finite length. If there exists a finite algorithm that can define a number (by defining a sequence of rational numbers that converges to it) then this number is computable and therefore definable. If there doesn’t exist such an algorithm – then we cannot define such a number.

So, if I conclude that numbers that can’t be defined don’t exist (because we are not able to express them in the language I’m using), then we come to the conclusion that the set of real numbers is countable. How does this square with arguments such as Cantor’s diagonal argument, which claims that the set of real numbers is uncountable? Well, while Cantor’s diagonal argument claims that there exists a sequence of rational numbers, which converges to a real number, and is not computable (because the set of computable numbers is countable) – we are not able to express such a sequence in a finite way in the language I’m using. And since it can’t be expressed, I can claim that it doesn’t exist.

Why doesn’t it exist? Consider this – I can claim that there is a computer language, in which there is an algorithm, that is able to solve the halting problem (decide whether any given computer program will halt). Indeed, there is such a language and algorithm in the same sense that there are numbers which can’t be computed. But there is no computer that runs such an algorithm – it can’t be compiled into known computer languages. So in what sense does such a computer language exist? It doesn’t exist in reality – it exists only in our minds. And therefore, numbers which can’t be computed exist only in our minds, too. They are not real numbers, in the sense that they are not real. They are real in the same sense that a computer that solves any problem is real.

The limits of knowledge

Is there a limit to human knowledge? I would rather rephrase this question: Is there a limit to the knowledge that can be expressed in human language? While some people might think that the potential of our knowledge and wisdom is unlimited, I will demonstrate that it is limited.

It is well known that many aspects of human communication can be expressed in computer files – including written language, spoken language, music and songs, pictures, movies and books. Is there a limit to the amount of information that can be expressed in files? We all know that computer files can be represented as a sequence of binary digits (bits). Each sequence of binary digits can also be viewed as a positive integer number (a natural number). While some files might contain millions or even billions of bits – their size is always finite. A computer file cannot contain an infinite number of bits.

So it seems that any idea, any piece of knowledge or information that can be expressed in words (or any other form of communication that can be represented in files) can be represented as a sequence of binary digits, or a natural number. But the number of natural numbers is countable. Even more – the number of natural numbers which can actually be represented in reality is limited by the number of particles in our known universe, which is finite. We come to the conclusion that the number of ideas that can be expressed in words is finite, or at most countable (if we don’t put a limit on the number of words we use to express one idea).
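
For illustration, here is a small Python sketch (my own, using Python’s unlimited integers) showing that any text – and hence any file – literally is a natural number, with its bytes as the digits of that number in base 256:

def text_to_number(text):
    # The bytes of the text are the base-256 digits of a natural number.
    return int.from_bytes(text.encode('utf-8'), 'big')

def number_to_text(number):
    # The inverse: turn the number back into bytes, and the bytes into text.
    length = (number.bit_length() + 7) // 8
    return number.to_bytes(length, 'big').decode('utf-8')

n = text_to_number('All the books have already been written.')
print(n)                  # one (very large) natural number
print(number_to_text(n))  # ...and the same sentence back again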

Even more – since the sequence of natural numbers is already known to us, and we can produce a simple computer program or algorithm* that will express all of them (if allowed to run to infinity) – then all the knowledge and ideas that can be expressed in words are already known as well. All the words have already been said, all the books have already been written, all the movies have already been created and seen – by a simple algorithm that counts the natural numbers to infinity. Nothing is new – everything is already known to this algorithm, and therefore to us. Just like nothing is new with any natural number – nothing is new with any sequence of binary digits, or with any computer file.

This also eliminates our concept of authors, or copyrights. Can a number have an author? Can it be copyrighted? I can prove by induction that no number has an author and no number is copyrighted. Since we all know that 0 and 1 are not copyrighted, and since nobody can claim he’s the author of either of them – then if we have a sequence of bits which has no author, and is not copyrighted, and we add to it another bit (either 0 or 1) – then it’s easy to conclude that the new sequence too has no author and is not copyrighted. Can one claim to be the author of a sequence of bits of which he himself wrote only a single bit? Compare it to taking a book someone else wrote, and adding one letter. Can you claim that you wrote the entire book? Of course not. And if nobody wrote the original book, and you added one letter – then nobody wrote the new book, either. Or compare it to numbers. If you add one to a well-known natural number, can you claim that the new number is yours? Can you claim that you wrote it, and nobody has the right to write the same number for the next 50 years? Of course not.

If we were able to claim copyrights on natural numbers, then I would be able to claim that the algorithm that outputs the entire sequence of natural numbers is mine, I wrote it, and therefore the entire sequence of natural numbers is mine. Nobody is allowed to write any number for the next 50 years. Would you allow something like this? Of course not. Then we should conclude that no knowledge is new, everything is already known, no book has an author and nothing is copyrighted.

* Here’s my algorithm in the awk language, in case you’re interested:
# print the natural numbers, one per line, forever
BEGIN { for (i = 0; 1; i++) print i }

The axiom of choice (2)

After reading what I wrote about the axiom of choice about two years ago, it appears to me that I forgot to mention something important. I claimed that there are numbers which we are not able to tell. But it all depends on the language. It can be proved that for each real or complex number, there exists a language (or actually, infinitely many languages) in which this number can be told. It’s very easy to prove – we can just define a language in which this number has a symbol, such as 1, 0 or o. We tend to look at some symbols as universal, but they are not. For example, the digit 0 means zero in English, but in Eastern Arabic numerals a similar-looking symbol (٥) means five, and zero is written as a dot (٠). So defining numbers depends on the language we use.

But, since there are infinitely many possible languages, the language itself has to be defined, or told, in order for other people to understand. We come to the conclusion that the ability to tell numbers depends on some universal language, in which we can tell the number either directly, or by defining a language and then telling the number in this language (any finite number of languages can be used to define each other). But in order to communicate and understand each other, we need a universal language in which we can start our communication.

It still means we are not able to communicate more than a countable number of numbers – or a finite number of numbers in any given finite number of bits or amount of time – but the set of the countable (or finite) number of numbers that we can communicate depends on the language we use. For example, if we represent numbers as rational numbers (as a ratio of two integers) then we can represent any rational number, but we can’t represent irrational numbers such as the square root of 2. But if we include the option of writing “square root of (a rational number)” then we can also represent numbers which are square roots of rational numbers. In this way we can extend our language, although it’s hard to pin down which numbers we are able to define non-ambiguously. An example of a set of numbers we can define in such a way is the set of computable numbers.
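
As a sketch of this kind of language extension (my own illustration – the representation is made up), a number can be a finite expression tree: either a rational, or the square root of something already definable, and a short Python program can evaluate any such definition:

from fractions import Fraction

def evaluate(expr):
    # A tiny definition language: a number is either a plain rational,
    # or the square root of a number already definable in the language.
    kind, arg = expr
    if kind == 'rat':
        return float(arg)
    if kind == 'sqrt':
        return evaluate(arg) ** 0.5
    raise ValueError('not definable in this language')

root2 = ('sqrt', ('rat', Fraction(2)))  # "square root of (a rational number)"
print(evaluate(root2))                  # 1.4142135...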

In any case, for any language the number of numbers that can be defined in it is countable, and we can conclude that any uncountable set has an (uncountable) subset of numbers which can’t be defined in this language. If we subtract the set of numbers that can be defined from the original uncountable set, we can define an infinite set of numbers, none of which we are able to define or express. If there are languages in which these numbers can be expressed – these languages too cannot be expressed in the original language.

It’s similar to what we have in natural languages. Some expressions (or maybe even any expression) can’t be translated from one language to another. For example, in Hebrew there is no word for tact. The word tact is sometimes used as it is, literally, but this is not Hebrew. There are many words in Hebrew, as in any language, that were borrowed from other languages. But the Hebrew language itself does not have a word for tact.

Generating random numbers

Is there an algorithm that can generate random numbers? Of course, the question is whether there is such an algorithm which is deterministic. But a deterministic algorithm always provides deterministic results. So the question is – can we provide an algorithm with a random seed, in such a way that it will provide an infinite series of random numbers in an unpredictable way? The answer is obviously no. I can prove it!

First, I already showed that it’s not possible to select a random number in the range 0 to 1, if all the possible numbers have the same probability. Since we can treat an infinite series of random bits (either 0 or 1 with the same probability) as a real number between 0 and 1, it seems that I already proved it. But to clarify: I only proved that we can’t tell such a selection; I didn’t prove we can’t make one. But if we can define an algorithm that can create such a series, this is equivalent to defining the number, or telling it. Therefore, there exist series of 0s and 1s which no deterministic algorithm can create!

Second, consider the number of bits necessary to write down both the algorithm and the random seed, and call it n. There are only 2^n possible strings of length n, and therefore the number of different series we can produce with n bits (both the algorithm and the random seed) is not more than 2^n. And for any known algorithm, only the number of bits in the random seed counts: with an n-bit seed, the algorithm will create no more than 2^n different series. This means that theoretically, if we see the beginning of a series given by this algorithm (the first m bits for some m), then we can tell the rest.

But, we can still create series of numbers (or series of bits) which look random to us – that is, we can’t detect any order in them, we can’t compress them and so on. I like to describe this with the word entropy (taken from physics). I was about to define it, but it seems that there is already a good definition in Wikipedia. In other words – although we can’t generate a true random series, we can generate a series which will look random to any observer. That’s good enough.
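
For example, here is a minimal Python sketch of such a deterministic generator – a classic linear congruential generator, with the well-known “Numerical Recipes” constants. The same seed always produces the same series, and with an n-bit seed there are at most 2^n possible series, yet the output looks random to a casual observer:

def pseudo_random_bits(seed):
    # Linear congruential generator: x -> (a*x + c) mod m. Deterministic -
    # the same seed always yields the same series of bits.
    a, c, m = 1664525, 1013904223, 2 ** 32
    x = seed
    while True:
        x = (a * x + c) % m
        yield (x >> 31) & 1   # emit the top bit (the low bits are weaker)

bits = pseudo_random_bits(42)
print([next(bits) for _ in range(20)])   # looks random, but is fully determined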

I like to consider irrational numbers, like the square root of 2 for example, as a series of pseudo-random bits (when written in binary form). That is – there is no order in it, we can’t compress it and so on. While I will not try to prove it, I think it’s quite obvious. If you don’t believe me, then I’d like to give you a challenge – prove the value of the googolth (10^100th) bit (that is, either prove it’s 0, or prove it’s 1). I bet you can’t do it. And if for some strange reason you do succeed, try googol! (the factorial of a googol) or googolplex (10^googol).

If you need a way to calculate the first n digits of the binary form of the square root of 2, here’s a simple algorithm. Of course, you need to implement it with software which gives each number n bits of precision (after the dot).

1. Let x be any seed number, such as 1.
2. Let x2 = 2/x.
3. Let x be (x+x2)/2.
4. If the first n digits of x2 and x are not equal, go to step 2.
5. The result is x.
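
Here is a minimal Python sketch of these five steps (my own implementation). It uses plain integers as fixed-point numbers scaled by 2^n, so the arithmetic is exact, and it replaces step 4 with the standard integer-arithmetic stopping rule (stop once the iteration no longer decreases):

def sqrt2_fixed(n):
    # Returns floor(sqrt(2) * 2**n): the first n binary digits of the
    # square root of 2 after the dot, by the iteration above.
    two = 2 << (2 * n)         # the number 2, scaled by 2**(2n)
    x = 2 << n                 # step 1: a seed >= sqrt(2), here 2.0
    while True:
        x2 = two // x          # step 2: x2 = 2/x (in fixed point)
        nxt = (x + x2) // 2    # step 3: x = (x + x2) / 2
        if nxt >= x:           # step 4: the digits no longer improve
            return x           # step 5: the result is x
        x = nxt

print(bin(sqrt2_fixed(20)))    # 1 followed by the first 20 bits: 01101010000010011110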

It can be proved that this algorithm converges very quickly – in about O(log2(n)) iterations. That is, calculating the first googol bits would take fewer than 500 iterations of this loop (actually around 332, which is the log of a googol in base 2). But the problem is, in order to make calculations with n bits of precision you need to store n bits, and each iteration takes at least O(n) steps in machine language. So the total amount of time to calculate the first n bits of the square root of 2 is at least O(n*log2(n)). I doubt you will ever get to run it for a googol bits!

Therefore, I can claim that there exists an integer n such that for every bit of the square root of 2 beyond the nth bit, we can’t prove that it is 0 and we can’t prove that it is 1. It’s not that we can’t calculate it – we can calculate it in theory, but it would take practically infinite time (that is, the calculation would never end during our lifetime). This is also related to the halting problem – the algorithm to calculate the first n bits of the square root of 2 will halt for every small n, but for big values of n it will never halt in practice. Of course you can claim that in theory it will halt, but in reality there exists an integer n for which it will never halt. Even for relatively small numbers it will not halt – such as a googol, and you need only 333 bits to represent a googol in binary form.

Since we don’t know if the googolth bit of the square root of 2 (and any bit beyond it) is 0 or 1, we can define its probability to be 50% for either 0 or 1. I’m not going to prove this mathematically, but for me it makes sense. Although every bit is either definitely 0 or definitely 1 (according to mathematical logic), for us it has a 50%/50% probability of being either 0 or 1. This is like the uncertainty principle in quantum physics – we don’t know something, and we know we don’t know it, but we can define its probability for every possible value. The probability is not inherent in the value itself, but in our knowledge of it. Although the value of every bit of the square root of 2 is deterministic, we can look at it in a probabilistic way.

The axiom of choice

A few days ago I came across the Clay Mathematics Institute’s website, and the one-million-dollar prizes they offer to whoever succeeds in solving a few math problems. One of these problems is the famous P vs NP problem, which I came across during my studies of computer science. In the last few days I read some information about related issues – mostly mathematics and computer science issues. The main source I used was Wikipedia (in English), which is a very good source for reading information about anything.

Among the articles I read in Wikipedia are articles about the halting problem, the axiom of choice, P vs NP and other interesting issues. I decided to comment about these issues, and that’s the main reason for starting my own weblog.

The first subject I would like to comment about is the axiom of choice. The reason is that I believe this axiom is not true. Here is a short explanation of why I think it is not true:

Suppose we have to select a random real number in a given range, say from 0 to 1. This can be achieved in various ways in real life, such as spinning a roulette wheel, throwing a dart, using a computer’s random number generator etc. And suppose that each number in the range has an equal probability of being selected. Then I claim that this is already a paradox! And even if we limit ourselves only to rational numbers, it is still a paradox. It’s not possible to make such a selection. Only if we limit ourselves to rational numbers with a certain precision – that is, only if the set of possible numbers is finite – are we able to make such a selection. Or, if we don’t require that each number have an equal probability of being selected (I will show this later).

Why is it a paradox? Because the sum of all the probabilities of selecting each number must be exactly 1. If there are infinitely many possible numbers, and all the probabilities are equal, then each probability must be exactly 0 (this is easy to prove: if each probability were some p > 0, then any more than 1/p of the numbers would already have a total probability greater than 1). And if a probability is 0, then it cannot occur! Of course this still has to be proved, but I think it’s already intuitive. Things with probability 0 just don’t happen!

I want to tackle this issue from a different point of view. The issue is not selecting a number, but telling which number was selected, or writing it down. Of course we can still claim that we can select a number even if we can’t tell it or write it down, but this is like claiming about anything else that we know it but can’t tell it. For example, I could claim that I know whether P and NP are equal but I can’t tell you, or that I know whether any given algorithm halts but I can’t tell you, and so on. Therefore, I would assume that if we can’t tell what number we selected and we can’t write it down, then we can’t make the selection. This is true for all practical purposes.

Now, let’s consider the ways we write down numbers. There are a few ways to write numbers: the most common is decimal (or binary) representation (either fixed-point or scientific notation). We can represent any real number in binary (or in any integer base). The problem is: for some numbers, this representation is infinite. For rational numbers it’s periodic, but for irrational numbers it isn’t. Therefore, if we wanted to write down an irrational real number in binary form, it would take us an infinite number of digits. Which leads to the question whether such numbers really exist. This is a philosophical question. But if we assume they exist, we are able to write some of them down in alternative ways (not binary representation) – for example: e, π, and the square root of 2.

Let’s consider all the ways to tell a number or write it down. We can encode each of them in binary form and save it to a binary file. Everything can be encoded into binary files: text, graphics, voice etc. But with n binary bits, it’s possible to write down only 2^n different files. Therefore, not more than 2^n different numbers can be represented with n bits. The number 2^n grows exponentially, but it’s still finite. It’s not possible to represent an infinite number of different values with a finite number of bits.

How is this related to selecting a real number in a given range, or to the axiom of choice? Well, there is only a finite number of different numbers we can write down or tell with a finite number of bits (or in a finite time). Therefore, the number of numbers we can ever write down or tell is countable (I think these are what we call definable numbers). So if a set is uncountable (for philosophical reasons, I don’t think any uncountable set really exists, but this is another story) – then it must contain numbers which we will never be able to tell! Or in other words, we can take the set of all the definable numbers (numbers we can tell) and subtract it from the set of real numbers, and the result will be a set of numbers none of which we will ever be able to tell. And therefore, we are not able to select an arbitrary number from that set.

Now, what about rational numbers? Can we randomly select a rational number in a given range? I say we can, but we can’t give all the numbers the same probability. We can give each number a positive probability (more than 0), but if we insist on giving each number the same probability – we will still not be able to tell which number we have selected. Or to be more accurate, the probability that we will be able to tell which number we have selected is 0. This is because for each number of bits n, only finitely many numbers can be represented with up to n bits, while infinitely many numbers require more than n bits.

On the other hand, if we don’t require all the numbers to have the same probability, then we can definitely select a number within the given range. This is trivial and very easy to prove. We can just select bits at random, and after selecting each bit, randomly decide whether we should stop or keep selecting more bits. We can show that each number has a positive probability of being selected, as the sketch below demonstrates. This can be shown with any countable set, not just with rational numbers.
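
Here is a minimal Python sketch of this selection procedure (my own illustration). It selects finite binary fractions in the range 0 to 1; every such number has a positive probability, but longer numbers are less likely, so the probabilities are not equal:

import random

def select_random_number():
    # Select bits at random; after each bit, randomly decide whether to
    # stop. A specific number with k bits is selected with probability
    # (1/2)**k for its bits times (1/2)**k for the stopping decisions -
    # positive for every number, but not equal across numbers.
    bits = []
    while True:
        bits.append(random.randint(0, 1))
        if random.random() < 0.5:   # stop with probability 1/2
            break
    # read the bits as a binary fraction 0.b1b2b3...
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

print(select_random_number())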