# Re: determinism, freewill, chaos, and circular causality

*From*: curt@xxxxxxxx (Curt Welch)
*Date*: 26 Apr 2006 02:46:57 GMT

"Stephen Harris" <cyberguard-1048@xxxxxxxxx> wrote:

"Curt Welch" <curt@xxxxxxxx> wrote in message

news:20060419151144.187$xS@xxxxxxxxxxxxxxxxx

"Stephen Harris" <cyberguard-1048@xxxxxxxxx> wrote:

"Curt Welch" <curt@xxxxxxxx> wrote in message

news:20060417185142.082$bI@xxxxxxxxxxxxxxxxx

"JGCASEY" <jgkjcasey@xxxxxxxxxxxx> wrote:

Curt Welch wrote:

...

A Turing machine emulator, running on a Turing machine, can't be made more powerful by changing the code of the emulator. Anything you could do by changing the code of the emulator could also have been done by running a program on the emulator. What makes you think that a learning machine can be made more powerful by allowing it to change its own code? If you code it right to start with, there's nothing new it will be able to learn by changing its own code that it could not have learned by changing the structures which it was originally set up to evolve through learning.

It seems to me to be the same as the Turing machine problem. What you would need to do is build a signal processing learning machine that has enough flexibility to solve any signal processing problem in a given domain (time/frequency/complexity). This is an interesting question, and I don't know if anyone has found a way to prove that some class of machines would have the power to solve any possible learning problem given enough time and memory. It seems possible to me.

--

I think you probably meant a Turing machine, which solves computable problems. Marcus Hutter has explored a non-computable AI approach: http://www.idsia.ch/~marcus/ Universal Artificial Intelligence (2nd choice)

"There are strong arguments that AIXI is the most intelligent unbiased agent possible in the sense that AIXI behaves optimally in any computable environment. The book outlines for a number of problem classes, including sequence prediction, strategic games, function minimization, reinforcement and supervised learning, how they fit into the general AIXI model and how AIXI formally solves them. The major drawback of the AIXI model is that it is incomputable. The book also presents a preliminary computable AI theory. We construct an algorithm AIXItl, which is superior to any other time t and space l bounded agent."

SH: The Turing machine is an ideal machine, so it has no physical limitations. Actual computers are constrained by physical resources and by time, in that there are intractable problems which cannot be computed in the time available before the universe experiences heat death. Physical computers have proven computational limitations:

http://www.cis.udel.edu/~case/colt.html

John Case's COLT Page

"Consider the problem of finding a rule for generating a sequence of numbers such as 9, 61, 52, 63, 94, 46, 18, 1, 121, 441, ... . Here is a rule for this sequence. First compute the squares of successive integers beginning with 3, but, then, to generate the sequence, use, in place of these squares, the squares each with its decimal digits written down in reverse order (ignoring any lead zeros). N.B. This rule can be written as a formal algorithm (or computer program).

The problem of finding such rules gets harder as the sequences to generate get more complicated than the one above. Can the rule finding itself be done by some computer program? Interestingly, it is mathematically proven that there can be no computer program which can eventually find (synonym: learn) these (algorithmic) rules for all sequences which have such rules!"
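As Case says, his example rule really is a formal algorithm; here is a quick sketch of it in Python (the helper name `reversed_squares` is mine, not his):

```python
def reversed_squares(count, start=3):
    """Case's example rule: square successive integers from `start`,
    then write each square's decimal digits in reverse order
    (int() drops any leading zeros of the reversal, e.g. 100 -> 1)."""
    return [int(str(n * n)[::-1]) for n in range(start, start + count)]

print(reversed_squares(10))  # [9, 61, 52, 63, 94, 46, 18, 1, 121, 441]
```

Running it reproduces the quoted prefix exactly; the next term under this rule would be 13 squared, 169, reversed to 961.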

That sounds interesting. Do you happen to know what this proof is?

No. I sent him, John Case, an email once asking this question, but did not receive a satisfactory answer. At first I thought it was related to the Halting problem, but now I think the reason is intractable constraints on time and resources. Wolfram thinks the universe has evolved from a cellular automaton rule which now has an exceedingly complex structure. He admits that to test this theory one would have to select a likely candidate and run that rule for about the same age as the universe in order to confirm a match, though there would be good indications after only 9 billion years.

It seems to me that for a Turing machine with infinite time and space to work with, it should be able to find the answer to any finite sized rule (any finite sized program for generating a sequence).

Well, let us take Pi, which is expanded by computing the ratio of the circumference to the diameter of a circle. Pi is defined as computable and passes all randomness tests. A Turing machine can expand Pi forever, adding one digit at a time. It can produce a sequence of finite Pi values which grows and never terminates.
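That digit-at-a-time expansion isn't hypothetical: streaming "spigot" algorithms do exactly this. A sketch using Gibbons' unbounded spigot (not something from this thread, just one standard short rule for Pi):

```python
from itertools import islice

def pi_digits():
    """Stream the decimal digits of Pi forever using Gibbons'
    unbounded spigot algorithm. Exact integer arithmetic only;
    the internal state grows slowly as digits are emitted."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # this digit is now settled and can be emitted
            # Shift the settled digit out; the tuple assignment
            # evaluates the right side with the old values.
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, n, k, l = (q * k, (2 * q + r) * l, t * l,
                                (q * (7 * k + 2) + r * l) // (t * l),
                                k + 1, l + 2)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The generator never terminates on its own; you take as many digits as you want, which matches the picture of a short finite rule producing an unbounded expansion.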

There are several algorithms (rules) for generating Pi which are short and finite. Suppose I extract some finite portion of the infinite expansion of Pi. Now suppose I have a Geiger counter which measures random decay of the nucleus (or any fair random generation method) and a scale, perhaps in milliseconds, which records clicks in a range of 0-9, sort of like rolling a fair ten-sided die with numbers on each side. I don't tell you the source of these two sequences of random numbers, both of which are finitely long. Can you distinguish between the two of them? I don't think so. Since both are random, somewhere within the expansion of Pi, the sequence of digits generated by random atomic decay will appear.

Can you prove that? I mean, just because Pi seems random in all the ways we know to test it, how can you prove the algorithm will actually produce some finite string of length, say, 10^100 at some point?

Do the algorithms for generating Pi all require infinite memory? i.e., increasingly larger amounts of memory as the number of digits produced increases?

The finite sequence generated by the rule for Pi will somewhere include the random atomic sequence. If it doesn't, then it isn't random.

There are tests for randomness but no proofs, because how could one prove an infinitely long number is random? How does one determine if a short finite sequence originated from a random process, or from a process which is not random but produces random-looking output? Chaitin and others have defined what they call "truly random numbers", and the qualification is that there is no shorter input string to generate the output: there is no rule or algorithmic process.
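There is no computable test for "truly random" in Chaitin's sense, but compression gives a crude one-sided proxy: a string a general-purpose compressor can shrink certainly has a shorter description. A toy illustration with zlib (the thresholds and data here are my own, purely illustrative):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed form: an upper bound (plus
    overhead) on one particular kind of short description."""
    return len(zlib.compress(data, 9))

patterned = b"0123456789" * 100                 # 1000 bytes with an obvious rule
random.seed(42)                                 # deterministic stand-in for noise
noisy = bytes(random.getrandbits(8) for _ in range(1000))

print(compressed_size(patterned))  # tiny: the repetition is easy to exploit
print(compressed_size(noisy))      # near 1000: no structure zlib can find
```

Note the asymmetry: compressibility proves a short rule exists, but incompressibility proves nothing. The digits of Pi would look incompressible to zlib, yet a very short rule for them exists.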

That seems intuitive. But does anyone have a way to prove that a given finite string is truly random?

I don't think there is a way to distinguish between some randomly extracted sub-sequence of a truly random number and a randomly extracted sub-sequence of Pi if you don't first know the source. A random number contains all possible finite sub-sequences infinitely often.

It seems to me it would be possible to write an algorithm that generated an infinite string of digits in such a way as to guarantee that every finite string appears at some point. I guess it's trivial to do, since you can just generate all strings of length 1, then 2, then 3, and just keep going. But that would require memory equal to the size of the string you were generating. It seems to me, however, it might be possible to find such an algorithm that had the property that you could prove every finite string would be generated at some point, but yet use far less memory than the string you were generating.
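The "length 1, then 2, then 3" enumeration can be sketched directly; with a generator, the working memory at any moment is proportional to the current string length, not to the number of strings emitted so far (the function name is mine):

```python
from itertools import count, islice, product

def all_strings(alphabet="0123456789"):
    """Yield every finite string over `alphabet` in shortlex order:
    all length-1 strings, then all length-2 strings, and so on.
    Every finite string provably appears at a computable position."""
    for n in count(1):
        for digits in product(alphabet, repeat=n):
            yield "".join(digits)

print(list(islice(all_strings("01"), 6)))  # ['0', '1', '00', '01', '10', '11']
```

So the guarantee part is easy; the open question in the paragraph above is whether the memory can be made *far* smaller than the string currently being emitted, which this sketch does not achieve.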

Or, looking at it from the other direction, it would imply that for any finite string you could generate, there would always be an algorithm for generating it that would be shorter than the string.

This would just imply that the language you used for specifying the algorithm was simply more dense than the language of digits. Which seems potentially possible, since whatever language you specified the rules in would in fact have more meaning to each digit than a simple list of digits.

It seems to me that you could not answer these sorts of questions without first putting some limits on the type of language you could use to specify your rules with. For example, if I could specify a rule like this in my "rule language":

Generate 10256 digits of Pi starting at digit offset 18237237862373

And the above rule was encoded like this:

111110 10256 111111111111110 18237237862373

Then you have a simple rule for specifying large finite strings in a language that might be shorter than the strings it generated for most cases. Or maybe not, because the offset value might tend to be larger than the string you were trying to generate for most cases.

It just seems to me, however, that the power to specify an algorithm would tend to allow you to encode any string in a shorter form with a powerful enough rule language.

I wrote all this to bring up one possibility in reply to:

It seems to me that for a Turing machine with infinite time and space to work with, it should be able to find the answer to any finite sized rule (any finite sized program for generating a sequence).

Suppose somebody discovers another number X, like Pi: computable, infinite, and passing all randomness tests. How, by just intercepting a partial sequence of an encrypted transmission, could you tell if this sequence originated from Pi, or number X, or a truly random number, or some other number in the same category as X and Pi?

If there is more than one possible answer, it should be able to find them all.

Ah, but no, it couldn't. Because the list of possible rules is infinite if you put no limit on the size of the algorithm that is generating the string (there's nothing stopping you from writing really bad code). So the best you could do is specify a program that would find them all, from shortest to longest, but it would never terminate and never stop finding more answers.

I am not so sure that there is a finite input rule for the Turing machine which allows it to discern which of an infinite number of finite or infinite patterns, some truly random, some like Pi, corresponds to a given finite sample under examination.

Well sure. You could write a program to generate all possible programs, and emulate each one to see if it produced the finite sample under examination. Ah, but there is the halting problem in there, which could prevent you from knowing if the program under examination was ever going to finish. Yeah, there might be issues. You might be able to get around those issues by specifying a restricted programming language which was guaranteed to keep producing output digits, so you could know that the test would always terminate.
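The restricted-language idea can be sketched concretely: if every candidate rule is expressed as a generator that is guaranteed to keep yielding digits, then matching it against a finite sample only ever asks for finitely many outputs, so each test terminates. A toy version (the two candidate rules are mine, chosen to echo Case's example):

```python
from itertools import count, islice

def matches_prefix(make_gen, prefix):
    """True if the candidate rule reproduces `prefix`. Because we ask
    for exactly len(prefix) outputs, this terminates for any candidate
    that keeps producing values."""
    return list(islice(make_gen(), len(prefix))) == list(prefix)

def squares():          # candidate 1: n^2 for n = 3, 4, 5, ...
    for n in count(3):
        yield n * n

def case_rule():        # candidate 2: squares with digits reversed
    for n in count(3):
        yield int(str(n * n)[::-1])

prefix = [9, 61, 52, 63]
for cand in (squares, case_rule):
    print(cand.__name__, matches_prefix(cand, prefix))
```

With a general programming language instead, a candidate might silently compute forever before its next digit, and the test above would hang; that is exactly the halting worry mentioned above.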

As a matter of fact, the solution is obvious. Generate all possible programs and see if they produce the correct sequence.

Aren't there an infinite number of these finite programs? Like there are an infinite number of finite representations of the expansion of Pi: Pi to a hundred digits, to a million digits, to a zillion digits ...

I was thinking simply that there is a finite number of programs of length 10K bytes or less. It would require you to specify the language you were using to specify the program.

Finite sized programs can produce very very large sequences - but they are forced to repeat. So there's only a finite amount of testing required for each program to know if it matches. Ah, but no.... You don't know how long the program is which is generating the sequence, so you never know how much you have to test. No matter how much you test, you might not have tested enough - so it's impossible ever to complete the test.

Pi is a short rule which never repeats (random) when expanded indefinitely on a Turing machine. If you mean on a physical computer, I don't know if it would repeat, but it would run out of resources due to combinatorial explosion, intractability, when the rule that evolved was sufficiently complex. I don't think there is a computable way to backtrace the universe to a single cellular automaton rule given finite resources. Even Minsky says you can't reproduce intelligence by running zillions of speeded-up scenarios of the evolution of life on our planet.

So, I think the only thing you could write a program to test is a sequence of finite length. Given any sequence of finite length you could always find a set of rules to produce it (which I guess is obvious, since the program only needs to be "print sequence").

Well, I thought we were discussing printing out (predicting) the next number in the sequence, not repeating the same input sequence.

We might have started there. I can't remember how we got here. :)

But the problem of prediction is a much cleaner and easier one to address than the problem of producing an algorithm to generate a fixed sized string.

That is, BTW, how they define a truly random number. The input length and the output length must be the same. There is no shorter rule which exists for generating a longer output (such as with Pi).

Also, don't you need to discover the unique rule which generated the finite output under examination? I'm pretty sure there are an infinite number of programs that can generate a given finite output. But the next digit(s) in the given finite sequence (predicting it) can change with every one of those infinite programs which generated the first specified finite output. Many rules could generate the sequence used as an example by John Case, and the next term could be different from 961 (the next term under his example rule: 13 squared is 169, digits reversed) with each different rule.

"Consider the problem of finding a rule for generating a sequence of numbers such as 9, 61, 52, 63, 94, 46, 18, 1, 121, 441, ... . Here is a rule for this sequence. First compute the squares of successive integers beginning with 3, but, then, to generate the sequence, use, in place of these squares, the squares each with its decimal digits written down in reverse order (ignoring any lead zeros)."
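The point that infinitely many rules agree on a finite prefix yet disagree on the next term can be made concrete with polynomial interpolation (this construction is mine, not Case's): for any desired next value, the Lagrange polynomial through the prefix plus that value is a rule fitting the data.

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through `points`
    (a list of (x_i, y_i) pairs) at x, using exact rational arithmetic."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

prefix = [9, 61, 52, 63]
for wanted_next in (961, 169, 0):
    pts = list(enumerate(prefix)) + [(len(prefix), wanted_next)]
    # The rule "evaluate this polynomial at n" reproduces the prefix...
    assert [lagrange_eval(pts, n) for n in range(len(prefix))] == prefix
    # ...and its next term is whatever we asked for.
    assert lagrange_eval(pts, len(prefix)) == wanted_next
```

So a finite prefix alone never pins down the continuation; any prediction has to privilege some rules over others, e.g. shorter ones over these ad hoc polynomials.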

And for an infinite sequence produced by a finite sized set of rules, you could never know when to stop testing the sequence to know if your current set of rules is actually producing the right sequence.

So, as a practical matter, you could write a program to test all possible rule sets to see which ones currently worked to produce the first N numbers of the sequence for any N, but not for all N.

The rules may be finitely long, but that doesn't mean there is a finite number of such rules. I think there are an infinite number of finite rules for producing any finite sequence.

Yeah, I think that must be true.

I'm not catching your drift any more.

How many rules/programs are there to produce any given number such as 2?

Yeah, I'm drifting for sure. :)

Regards,

Stephen

--

Curt Welch http://CurtWelch.Com/

curt@xxxxxxxx http://NewsReader.Com/

