First - a mistake in Turing's unsolvability proof of the Halting Problem

Let's assume that we do not have a DoesHalt(program) function, but a
DoesHalt() function inside the runtime of the program, which is able
to signal true or false. It is also allowed to hang.

First, the case where DoesHalt() pauses the program when called,
continues it afterwards, and sends the program no additional input
removes some uncertainties.

Second, the case where DoesHalt() can hang removes the impossible
task from it - returning something when there is no way to
communicate it. It can be argued that for any given timespan (say,
1000 seconds) there must be an infinite number of DoesHalt() calls
which take longer - otherwise one could get the output signal simply
by waiting that amount of time. Thus, the run must take longer for
more complex problems, and for a specific set of problems it must run
into infinity - it does not halt.

The method still leaves us a workaround - in the case where
DoesHalt() sees its own code instance together with the tricky
program, it can detect the condition that it itself will hang. At
least we haven't proven that it can't.

Let's see it in action:

DoesHalt() - signals true if the current process will eventually
halt and false otherwise. Hangs if the question is undecidable.

Program() {
    if DoesHalt(), run forever; otherwise halt.
}

Proxy() {
    if DoesHalt(), print "Running..."; otherwise halt.
    Program().
}
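
One way to make the three routines concrete is a toy Python model in
which the oracle's "hang" is represented by a third answer value
instead of an actual infinite loop. All names and the answer table
are illustrative assumptions, not a real halting oracle:

```python
# Hypothetical sketch: a DoesHalt oracle that may also "hang",
# modeled here by returning None instead of looping forever.
# does_halt, run and the answer table are illustrative assumptions.

HANG = None  # stands in for "the oracle never answers"

def does_halt(program_name):
    """Toy oracle: answers for ordinary programs, 'hangs' on the
    self-referential case instead of being forced into a paradox."""
    answers = {
        "loop_forever": False,
        "halt_now": True,
        "Program": HANG,   # the tricky self-referential case
    }
    return answers[program_name]

def run(program_name):
    """Returns 'halts' or 'runs forever' for our toy programs."""
    if program_name == "Program":
        verdict = does_halt("Program")
        if verdict is HANG:
            return "runs forever"  # waits on the oracle forever
        return "runs forever" if verdict else "halts"
    return "halts" if does_halt(program_name) else "runs forever"

print(run("Program"))   # the oracle hangs, so Program never halts
print(run("halt_now"))
```

Note that the model stays consistent only because the hang is allowed:
the oracle never has to emit an answer that Program can invert.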

We see the general flow of events here:

If we call Program(), we will wait forever for output. That's not
what we want - but it's not bad either. We could hypothesise about a
halt checker which always runs in under a second whenever the program
itself runs for less time than has passed since the Big Bang or
remains until the Big Crunch (there might not be such a checker, but
it shows the principle). In that case we would never need to wait
more than a second.

If we call Proxy(), its DoesHalt() will check the program code -
including the inner DoesHalt() and the whole surrounding code - for a
paradox, and could possibly detect the whole situation. It must, of
course, also detect the signal sent by the inner DoesHalt(). There is
no logical reason why it should not be able to recognize all three
situations, as the inner DoesHalt() will always hang in the
problematic cases without even reaching the puzzles and logical
paradoxes. It will start looping through some metalevel set of
different answers without being able to detect its own loop -
possibly because each round becomes more complex. But an observer
seeing both DoesHalt() and the tricky code using its results might
instantly understand what the case is.

Proxy, here, is also an example of the classical halting-test
program - it no longer has an impossible task, because the DoesHalt()
inside will know the appropriate behaviour - hang when needed. Or at
least it will fall into that behaviour.

Second - a general-case solver sketch

Let's say we have a 1D cellular automaton, bounded at the beginning.

The following is a solution to the halting problem - mathematically
unproven, somewhat open here and there, but basically very near to
what I have sought.

As I have lately rephrased the problem itself and how to solve it:
1. We cannot, perhaps, find a fully general solver, but we should
look for what is possible. An FSM solver has a weakness: it cannot
handle theoretical algorithms - for an indefinitely growing number,
even a simple iteration would take a very long time. This is the kind
of theoretical infinity we should be able to handle.
2. The most general solver possible will do the following: analyze
the whole picture logically, finding all strong implication rules,
and figure out all possible logical moves which lead to hanging.
Detecting non-hanging is not a priority, as that can be detected by
running the program. As there is no upper complexity limit on a
program, the solver must guarantee only one thing - that its logic
grows in such a way that it will make a move in any given direction
after any given moment in time. That is, it must guarantee that it
reaches every possible piece of logic at some point in the future.
3. I would rather not play with mathematics (arithmetic etc.),
because math is not complete in itself. If anything is complete, it
is mathematical logic. I think it should be able to handle those
cases of combinatorics which are solvable at all.

What I am trying to do here is to go through all possible
abstractions of a program that one can create, until eventually one
of them gives a hang sign - an implication showing that something
will grow into infinity. If the 1D CA does not grow into infinity, we
can check it with the FSM method; if it does, we need a more powerful
one. Now - I don't know whether there are questions solvable by
computer (particularly in the general case) that are still not
expressible in mathematical logic. We know that Russell failed to
make logic the foundation of mathematics, but the question remains -
can combinatorics with infinity be based on logic? And can it be
based on a logic which does not directly use numbers? I know that an
FSM is purely based on the kind of logic I mean - but what about the
future states of an indefinitely growing CA? Will we reach, at some
point in time, the conclusion that a thing hangs if it does? This is
one more attempt to put together one aspect of my research over time,
including things I have found lately. I suppose that programs which
do not halt usually have some pattern producing that non-halting. I
do not go into the non-halting problems themselves, but the following
would be able to solve all the mistakes I can remember making myself.
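
The FSM method mentioned above can be sketched directly: when the
tape is bounded, the configuration space is finite, so running the
automaton until a configuration repeats settles the question. The two
toy rule tables (flip, die) are illustrative assumptions:

```python
# Sketch of the FSM method for a bounded 1D CA: finitely many
# configurations, so either a quiescent row or a repeat must appear.
# Rule tables and names here are illustrative assumptions.

def step(cells, rule):
    """One synchronous update of a fixed-width 1D CA; `rule` maps each
    (left, center, right) neighborhood to the cell's next state."""
    padded = [0] + list(cells) + [0]
    return [rule[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

def fsm_decide(cells, rule):
    """'halts' if the CA reaches the all-zero (quiescent) row,
    'cycles' if a configuration repeats - i.e. it never halts."""
    seen = set()
    while True:
        if all(c == 0 for c in cells):
            return "halts"
        key = tuple(cells)
        if key in seen:
            return "cycles"
        seen.add(key)
        cells = step(cells, rule)

flip = {(l, c, r): 1 - c for l in (0, 1) for c in (0, 1) for r in (0, 1)}
die = {k: 0 for k in flip}
print(fsm_decide([1, 0, 1], flip))   # every cell flips: a 2-cycle
print(fsm_decide([1, 1, 0], die))    # everything dies: halts
```

This is exactly the method that fails once the tape can grow, which is
what the rest of the text is about.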

It goes a whole lot further toward a general halting problem solver,
but I am not sure how far.

We want to detect whether it grows infinitely or not.

We need the most general-case solver we can achieve with mathematical
logic.

First we create the set of all possible states of one cell. Then we
create a map of all possible sets of those states (except the empty
set).

This is the first layer of state map.

We now create an implication map - which states imply which states,
considering all possible states on both sides of our map.
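
A minimal sketch of such an implication map for binary cell states,
under an assumed rule table: for every non-empty set of states a cell
might currently be in, we compute the set of states it can be in next
step, with the neighbors left fully unknown:

```python
# Sketch of the implication map: non-empty subsets of cell states
# mapped to their possible successor states.  STATES, the flip rule
# and the helper names are illustrative assumptions.

from itertools import combinations

STATES = (0, 1)

def next_states(state_set, rule):
    """Possible next states of a cell whose current state is only
    known to lie in `state_set`, with arbitrary neighbors."""
    return frozenset(rule[(l, c, r)]
                     for c in state_set for l in STATES for r in STATES)

def implication_map(rule):
    """Map every non-empty subset of STATES to its possible successors."""
    subsets = [frozenset(s) for n in range(1, len(STATES) + 1)
               for s in combinations(STATES, n)]
    return {s: next_states(s, rule) for s in subsets}

flip = {(l, c, r): 1 - c for l in (0, 1) for c in (0, 1) for r in (0, 1)}
for group, implied in implication_map(flip).items():
    print(sorted(group), "=>", sorted(implied))
```

Note that the group of all states implies the group of all states, as
the text observes below for the "infinite" flag.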

Now we make our map bigger by one cell - we are going to cover a
two-cell area. This area's possible states go into our state map as
regular members - we can treat them as members and calculate all
possible states etc. Remember that we have the state groups in
addition to the states - all possible groups we can form. Of course,
a one-cell group cannot transform into a two-cell group, but it can
have one as a neighbor.

Now we have the total sum of all the ways states can change - what
implies what. This covers all relations between all state groups: we
know which groups imply which ones.

We keep growing our cell group bigger, always calculating all
implications. In many cases, cells and several-cell groups can form
groups with each other.

We start growing in time. We also have the time flag "infinite" - for
example, the group of all possible states will stay everywhere
infinitely (every cell must be in some state of that group). Other
state groups usually change, but, for example, each oscillator
(infinitely repeating pattern) must have some group among all
possible groups such that all its states, and only those, are
contained in that group.

We name all states which combine the current time moment with the
next one, squaring the number of states. Now we can create
implication rules with neighbors which will take effect in the moment
after next. Into this moment, we can imply one-cycle, two-cycle or
longer states.
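
The effect of pairing a time moment with the next one can be
illustrated by composing a one-step implication map with itself: a
two-cycle oscillator then shows up as a fixed point, which a one-step
analysis can recognize. The flip map here is an illustrative
assumption:

```python
# Sketch: two-step implications obtained by composing the one-step
# implication map with itself.  The oscillator map is an assumption.

def compose(imp):
    """Two-step implications from one-step implications."""
    return {s: {t2 for t in nxt for t2 in imp[t]} for s, nxt in imp.items()}

one_step = {0: {1}, 1: {0}}   # an oscillator: 0 -> 1 -> 0 -> ...
two_step = compose(one_step)
print(two_step)               # {0: {0}, 1: {1}}: each state implies itself
```

Under the composed map the oscillator is stable, which is what lets it
be marked with the "infinite" time flag.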

We now have the implication rules of a number of state groups. We
lack one thing - generalization ability. We need it for a clear
reason: exploding cycles are possible, which change in form rather
than in essence. For example, there might be an indefinitely long
cell group always carrying our signal.

We now need to know which implication rules are directly or
indirectly implied by which groups of other rules. This means ordered
sets of all implication rules. We will pattern-match all new rules -
all rules, for shorter or longer slices and for shorter or longer
timespans, will be matched against all other known rules. What we are
interested in are direct matches between them (between state groups).

For example, if there are groups A, B, C, X and X, Y, Z, B such that:
A=>B, B=>C, C=>A, X=>A
and
X=>Y, Y=>Z, Z=>X, B=>X

Then we will see a direct match. The first ordered ruleset is
identical to the second one. They talk about different rule groups,
but they give our abstraction ability a kind of completeness - when
we know that a set of connections with some specific pattern matches
a set of connections with another pattern, we know that they are of
the same kind. One group of states becomes a metagroup of the other -
some letters are not connected, but A and X are connected, and this
must be noticed and stored. With a whole set of implication rules and
such knowledge we are able to imply that in certain situations,
different groups behave the same.
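
The direct match between the two rulesets above can be found
mechanically by brute-force graph isomorphism; for rulesets this
small, trying every node mapping is enough. A sketch:

```python
# Sketch: detect that two implication rulesets are "topologically
# identical" by brute-force graph isomorphism over node mappings.
# The rulesets are the A/B/C/X and X/Y/Z/B example from the text.

from itertools import permutations

def isomorphism(edges_a, edges_b):
    """Return a node mapping making edges_a identical to edges_b,
    or None if the two rulesets do not match."""
    nodes_a = sorted({n for e in edges_a for n in e})
    nodes_b = sorted({n for e in edges_b for n in e})
    if len(nodes_a) != len(nodes_b) or len(edges_a) != len(edges_b):
        return None
    target = set(edges_b)
    for perm in permutations(nodes_b):
        f = dict(zip(nodes_a, perm))
        if {(f[u], f[v]) for (u, v) in edges_a} == target:
            return f
    return None

first = [("A", "B"), ("B", "C"), ("C", "A"), ("X", "A")]
second = [("X", "Y"), ("Y", "Z"), ("Z", "X"), ("B", "X")]
print(isomorphism(first, second))   # A->X, B->Y, C->Z, X->B
```

Brute force is exponential, of course; this only demonstrates that the
match itself is a well-defined, finite check.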

What does this give us? It gives us the following ability:

Imagine counting in binary: 0, 1, 10, 11, 100, 101, 110, 111, 1000,
1001, 1010, ...

We would not catch the point other than by matching a few things
together - namely, that the middle parts of the numbers are actually
the same pattern. They produce similar results - and if we eventually
find the rules which produced the counting itself, and match them to
our counter, we will have a meta-stategroup. It really is necessary
to connect those meta-stategroups with the normal stategroups, but I
have some undone homework here - anyway, I am sure this is a question
of logic. As we can create new stategroups from the beginning part of
a number, the middle part and the end part, we need to create
stategroups in such a way that, if we know all important implications
stay untouched, we can create the group rule which actually states:
in case the end part keeps growing (and its growth must lie in some
of our longer-time stategroups), the middle part will always
propagate the growth to the beginning part at some point in the
future. As we have periods of varying lengths, we must also be able
to generalize those - we will need to find rules which are not
time-centric but simply follow each other. To join those rules with
their surroundings, we would need to work out how to handle these
things in 2D - this is actually why my first text is specifically
about a 1D world. If things grow, the same pattern of events will
possibly happen in bigger and bigger time frames. Indefinitely. And
this indefinite growth of the same pattern is the reason why we have
these means for metagroups - otherwise we would never detect any
growing pattern.

That is - my intuition says that if we have the middle part of a
number, say 1101, and the middle part of another number, say 101,
then the rules by which the first is composed from the second -
stating that the composer of a number will go as much further as
needed - will also produce the knowledge that the same would apply to
a potential 11100. That would happen because this growth certainly
has some logic, and if it does, then it should be included in the
overall logic. In the same way, for a prime calculator we cannot say
which primes it will calculate (or whether it will hit a
division-by-zero error), but we can find some common logic which is
there to produce more and more primes. And this, actually - how to
logically solve each single problem, and how to detect whether it is
solvable in such a way - is not in the scope of this text, nor of me
right now. This is more an architecture document for a general
solution than the solution itself.

This implication-set matching, in addition to those generalizations,
now solves our second problem - it allows us to find all possible
rules in things which grow explosively. And the logic is simple to
show:

1. We need to know all implication rules about the effects our system
produces on the empty world to its right side.
2. In case we have clear implication rules which say that there will
appear, on our right side, something which will produce something on
the right side of itself, which will produce... then we need to match
together the rule and the metarule in such a way that, by the
implication rules, we can see that the things in the rule which
produced the right-side activity match the metarules controlling that
right-side activity. From that we can tell that the right-side
activity will indefinitely produce new levels of right-side activity,
even where it does not directly match either pattern.

Now, we need a whole-universe observer. This observer looks at all
the implication rules of the world's cells and of infinity - for
example: if my neighbors never produce event X, I will never produce
event Y; and the neighbors' corresponding patterns: if my neighbors
never produce event Y, I will never produce event X. Matching these
together, you will know that one side will never produce X and the
other will never produce Y. By just watching cells it is impossible
to say this. Knowing it, you can delete a big bunch of state groups
and things get simpler - not only simpler, but you are nearer to a
solution. By deleting all impossible state groups of all cells (and
of infinity), you might delete the very state groups which keep you
doubting whether things hang. Each round, you statically analyze the
whole situation and remove more doubt. Only this observer allows
keeping all the facts together. This means that each frame,
everything should be viewed as one logic, all implication rules
executed and impossibilities removed. This part happens on the real
world we run, and it will lead us to an understanding of our
situation. It removes everything we know to be false. Eventually it
might remove the possibility of halting (the related state group from
some cell).

Running the world - just as actual things interact and work, both
CA-style stepping and world-stopping analysis are used to grow
certainty about different aspects of the future. CA-style interaction
between cells will show them the future happenings from all aspects
we know as the general case. Whole-world analysis is necessary
primarily to interact with infinity (as our cell stategroups will
eventually grow to handle all locality).

Considerations about the explosive growth of memory needs

The implication-group match is basically a way to solve the logical
infinity problem - that there is a kind of explosive logic which a
normal FSM-based general solver would never tackle (it would run into
memory overflow and still have no answer to the halting question; on
a Turing machine, such an FSM-based general solver, keeping a list of
all already-passed memory states, would hang). Yet a trivial look at
such a program might easily show that it will run forever - say, one
iterating over an integer.

An exploding pattern in 1D CA would look like:
11
101
1001
1101011
10100100101

Under that pattern, some complex activities would take place - "1"
might mean any actual state of a group (00, 000, 010), as the logic
machine would have a group which is the set of those three. To match
such patterns we need to understand their composition - this pattern,
for example, always has ones at both ends and, left alone, would grow
infinitely (let's make it so). Thus, inside it there must be some
proven group of patterns which always produces the result that "at
some point in the future" there will be 1-something-1. This something
is a group which is growing. We must be able to say that
1-something[i]-1 will produce 1-something[i+1]-1.
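
The 1-something[i]-1 claim can be checked mechanically on the sample
rows above. A minimal sketch, assuming the generations are given as
strings; matches_growing_template is a hypothetical helper name:

```python
# Sketch: verify that successive generations all match the template
# 1...1 and that the interior strictly grows - evidence (not proof)
# of an infinitely growing pattern.  Rows are from the text's example.

def matches_growing_template(generations):
    """True if every row starts and ends with 1 and each interior is
    strictly longer than the previous one."""
    prev_len = -1
    for row in generations:
        if not (row.startswith("1") and row.endswith("1")):
            return False
        interior = row[1:-1]
        if len(interior) <= prev_len:
            return False
        prev_len = len(interior)
    return True

rows = ["11", "101", "1001", "1101011", "10100100101"]
print(matches_growing_template(rows))   # True
```

Observed growth over finitely many rows is of course only a hint; the
implication-rule matching described next is what would turn it into a
guarantee.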

To do so, we must look into our known implication rules - such that
we have a base ruleset of implications which is an identical match to
some new ruleset of implications. That would mean something like
this:

If 110011 will always change into 11100111 in case there is 101 on
one side of it,
and 11100111 will always change into 1111001111 in case there is 1001
on one side of it,
and 1111001111 will always change into 111110011111 in case there is
10001 on one side of it,
and also 101 changes to 1001 changes to 10001 at the same speed,

and we actually know the whole set of implication rules - why this is
so,

and the implication rules are such that we can also detect how they
produce the next set of implication rules, which produces the next
set of implication rules.

For that we must find a pattern in the implication rules - for this
pattern, there is always a set of base rules and compounds which can
be shown to produce the new set, which is "topologically" identical
to them. As for any given moment there is a fixed number of base
rules and compounds (states and state groups), we can actually show
and prove the relation between the new set and the old set - if
something grows in a treelike pattern (each leaf becomes bigger and
produces two children, where there is some clear logic for how the
rest of the world makes space for them, and this logic is also
generalizable from some aspect of their finite logic), then each set
of leaves growing their parents to the next level must have a set of
basic implications which is "topologically" identical to another set
of basic implications. How to store or show the relations between the
two is another question - but they clearly exist and are clearly
finite. This allows us to eventually find any pattern at any
abstraction level - I think that all growing patterns come back to
those primitives in one way or another. And the only logical property
we are interested in is: is there any rule which would guarantee that
this pattern continues to grow? There will be relative relations
(this implication set is a parent of this implication set, which maps
all variables and implications to a relative space) and absolute
relations to other states. This means the implication ruleset of a
new state we have found can relate to another implication ruleset in
such a way that exactly as one produces one kind of event, the other
produces another kind of event, as long as their neighbors belong to
some given state groups. As we can relate each specific state to the
implication rulesets it has something to do with - thus knowing its
equivalent in related rulesets - we can create very complex
relations; anyway, I don't know if we need to. I think there is
always a parent implication ruleset combined from known implication
rules in different sets, and in memory we must simply point to it. We
never have to walk backwards - we only need to show that some rule
relates to a child of some rule in this direction, and some state
relates to a child of some state in this direction - but we don't
need to relate anything to a parent, because there is a finite number
of elements in the parent chain and thus we can reach that knowledge,
with finite logic, by other means.

We cannot reach the cycles with finite logic alone. We must find the
relations - how the implication rules for sets of groups relate to
the implication rules of other sets of groups. Here I haven't yet
formulated the logic for the time factor, but it is clearly possible
- the time-cycle lengths of composed rulesets are also composed from
the parent ruleset.

Now, what exactly we search for in those rulesets:

* As we have static knowledge about how rulesets produce each
other,
* and we have static knowledge about what we have already implied
about the surroundings (we mostly need to imply that the surroundings
do not interfere with one or another aspect of our process, as long
as they do not interfere in the parent),
* from those rulesets we seek the implication rules which show
that some aspects of our world are going to grow indefinitely
(infinitely).
* To find them, we find the circular rulesets where each state
produces another state [larger than itself]; when we reach infinite
cycles, we do not need to iterate infinitely to understand that they
are in a cycle.
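
The last point can be sketched as plain cycle detection over a graph
of grows-into implications: if every edge in a reachable cycle says
"implies a strictly larger state", the cycle witnesses indefinite
growth without ever being iterated. The edge data here is an
illustrative assumption:

```python
# Sketch: a cycle in the "implies a strictly larger state group"
# graph is a finite witness of indefinite growth.  Names assumed.

def has_growing_cycle(grows):
    """`grows` maps each state group to the groups it implies while
    strictly growing.  Detect a cycle by depth-first search."""
    def dfs(node, on_path, done):
        on_path.add(node)
        for nxt in grows.get(node, ()):
            if nxt in on_path:
                return True
            if nxt not in done and dfs(nxt, on_path, done):
                return True
        on_path.discard(node)
        done.add(node)
        return False
    done = set()
    return any(dfs(n, set(), done) for n in grows if n not in done)

# C[i] -> C[i+1], abstracted into one meta-group implying itself:
print(has_growing_cycle({"C": ["C"]}))           # True
print(has_growing_cycle({"A": ["B"], "B": []}))  # False
```

The hard part, which the text addresses with ruleset matching, is
justifying the edges - proving that each step really does grow.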

So, this form of metalogic - finding topologically identical
implication rulesets and relating ourselves to them - will allow us
to find infinite cycles which are not periodic as in repetition, but
growing as in explosion.

I have also thought about things like this: we cannot expect our
program to deduce something like Euclid's proof that there are
infinitely many primes.

But we need not, because:

* For any calculator calculating primes, there is built-in logic
to continue infinitely; a program checking that does not even have to
know what a prime is. It must effectively reduce the primes away from
the program.
* For any calculator calculating primes with some odd ending
constraint (for example, starting from 3 and ending if prime%2=0),
there is some logical pattern which shows that no matter which
allowed prime (about to be checked) you check, it does not divide by
two. I think it's related to the place where you throw out the
candidates which divide by a given number. I cannot exactly reproduce
that logic, but I still see that I could formulate it logically,
given enough time and the structure of that calculator. I mean, when
I look at such code myself, I do not think about infinities to solve
that one.
* For any calculator calculating primes with some normal ending
constraint (prime > 5000), it will eventually halt and we need not
find any pattern.
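
The first point - that the checker need not know what a prime is -
can be illustrated with a purely structural test: for this toy prime
printer, an unconditional `while True` containing no break or return
already implies non-halting, whatever is_prime computes. The helper
name is hypothetical and the check is sufficient, not necessary:

```python
# Sketch: a structural non-halting witness that ignores the
# primality logic entirely.  The source is only parsed, never run,
# so is_prime need not even be defined.

import ast
import textwrap

source = textwrap.dedent("""
    def print_primes():
        n = 2
        while True:
            if is_prime(n):
                print(n)
            n += 1
""")

def has_escapeless_infinite_loop(src):
    """True if some `while True:` loop in src contains no break or
    return - a crude but sufficient non-halting witness."""
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.While) and \
           isinstance(node.test, ast.Constant) and node.test.value is True:
            escapes = [n for n in ast.walk(node)
                       if isinstance(n, (ast.Break, ast.Return))]
            if not escapes:
                return True
    return False

print(has_escapeless_infinite_loop(source))   # True
```

This is the "reduce the primes away" move in miniature: the verdict
depends only on the loop structure, not on the arithmetic inside it.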

Also, regarding the kinds of math related to irrational numbers, the
imaginary unit, or patterns never repeating themselves - I find it
hard to believe that by writing any amount of combinatorics (as the
code always reduces to one) you can create something which does not
repeat in any way. Something in it must repeat - some aspect or
calculation would transform it into counting integers from 0 to
infinity. It always forms some kind of repetitive pattern, even if
the repetition is hidden. For example, irrational numbers are
transformed into rational numbers, floating-point numbers or some
kind of abstract mathematical notation - and none of those things is
able to produce "truly random numbers" or a true chaos where no
patterns could be detected.

Second - I think that all we need to detect those patterns might be,
speaking of computers, mathematical logic. There are now two
questions - am I right, and is the kind of logic I described truly
enough?

Example of this solver in action

I will try to define one hanging situation more or less formally. The
proof that this halting problem solver solves all possible hanging
situations would be a proof that they can all be expressed in a
formal way compatible with how the system works (as we cannot know
them all). If any hanging situation can be formulated in a compatible
way, the system must always be able to detect whether a program
hangs.

To create a complex example which shows the actual activities of our
solver, let's make the following world:

1. It is a 1D cellular automaton which has "two processors" - for
each cell, it has two sets of states, effectively simulating a
two-processor system doing different things in parallel; it also has
some states which affect the working of the other processor. This is
the "noise reduction" part of my algorithm I want to show with this -
how it gets irrelevant information out of its way.
2. It has an oscillator at cells 0-100, which shows how the solver
relates to parts of the program which do not interact with the
hanging part.
3. It has an infinite counter after those cells; let's assume that
the CA rules already contain the counter logic - some type of cell
swaps between 0 and 1 each cycle, some cells do it when their next
neighbour has reached 1, and some cells send a signal to move the
whole thing one cell further when the first cell counts to 1. Or
something like that. We could just as well suppose that this counter
is a complex object containing numbers as complex objects, but it's
simpler right now if I can just refer to them as cells.
4. On the "other processor" it has a single oscillator at cells
0-1000.
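
Assuming the CA rules implement binary increment as sketched in point
3, the counter's essential behaviour can be modeled directly - what
the solver must eventually prove is that the occupied width grows
without bound. The increment helper is an illustrative stand-in for
the actual cell rules:

```python
# Toy model of the infinite counter cellgroup: binary increment,
# least significant bit first.  The list grows by one cell whenever
# the counter overflows - the unbounded growth the solver must detect.

def increment(bits):
    """One counter step; may append a new cell on overflow."""
    bits = bits[:]
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0
        i += 1
    if i == len(bits):
        bits.append(1)
    else:
        bits[i] = 1
    return bits

bits = [0]
widths = []
for _ in range(16):
    bits = increment(bits)
    widths.append(len(bits))
print(widths)   # widths never decrease and keep growing
```

Running it forward only ever shows finite prefixes of this growth;
the stategroup machinery below is what would certify that the growth
never stops.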

Now, what our analyzer does:
1. It finds all the cell state rules of our world step by step -
there is an infinite number of them, but if we can prove that any
given rule is someday reached, we can say that it iterates over all
of them.
2. It does the whole-world checks to mark some rules unreachable for
some cells; it adds that to the cell state information of the running
world (some cells will definitely reach some states). It will also
use the "infinity cell" for that purpose and will react to the state
where the infinity cell no longer contains the zero-state among its
possible future states.

Now, how it does this in the current situation:

1. As it divides cell states into groups and finds their possible
implication rules, it will put cell states into all possible groups.
Amongst those groups there will be groups which contain the cell
states of "one processor" (one group contains all possible states
which mark a specific state of this cell on that "processor") and
groups for the "other processor". Eventually, the implication rules
will contain enough information to say that this other processor is
in an infinite cycle in case nothing disturbs it. They will also
contain information about the states of both processors: that neither
will disturb the other without the other disturbing it first.

2. This is why we need the universe static check - it will make our
implication sets stricter, using information about cell combinations.
Right now it will deduce away all rules which state some dependency
between the two processors - as soon as the information is there
which allows the static implication that they do not disturb each
other, all implication rules grow stronger; the "if that is not that"
part of the rules will be removed. From then on, each cell state
change depends only on neighbour states on the same processor.

3. Now, the oscillator at the beginning of the world. This basically
follows the same rule - as it becomes clear that it effectively makes
up a cycle which will not disturb others if not disturbed itself, its
states will be marked as staying infinitely in some
I-don't-disturb-you group. This happens only when we have the same
kind of data about the following cells. As both real cells and
potential cell states [or states of the infinity cell, as this can be
seen if we spend more processor time] contain the potentiality
information, this will be very real data.

As we can see the oscillator and counter forming two cell groups -
one contains the oscillator and the other contains the number - which
have the infinity cell as their neighbor (and the infinity cell is
clearly marked as I-don't-disturb-you-if-you-don't-disturb-me), we
can effectively see the whole thing as one multicell state containing
all three. [just a sidenote]

4. After the oscillator comes the counter cellgroup. This is now the
more complex case.

It is possible to divide the counter into several groups.

The first group is the beginning of it: Z001101, for example, where Z
is the cell meant to shift the number if it gets "full" (which
includes some complex state handling, but I take it for granted right
now).
The second group is the end of it: the counter, which goes
0-1-0-1-0-1 forever.

Now, let's take an intermediate part of it - the counter digit. Let's
call the group containing this:
1 - the T[0] group, as it triggers Z to create a new oscillator at
the end.

And the state group containing this:
0 - the C[0] group, as it counts numbers. It also belongs to Z[0].

We can see the implication rule:
the C[0] group will eventually become T[0] in case it has the trigger
Z on one side of it and the oscillator O on the other side of it.

In the system of all implications, we have this mentioned. The state
maps show that state C[0] will stay C[0] until triggered to T[0], and
that it must be triggered in case there is an oscillator.

Now, these are rules about the middle part of the Z0O and Z1O
cellgroups.

If we look at a bigger counter:
00, 01, 10 - the C[1] group
11 - the T[1] group
00 - the Z[1] group

We can see that this group, in relation to the oscillator, behaves in
the same way. If we keep the word "eventually", we will very directly
see this new ruleset telling us that as long as there is Z on one
side of it and O on the other side, it will eventually change O into
Z[0] and itself into Z[1]. For O we know that this event changes the
next place into Z[0].

If we do not allow the word "eventually" in rules, we will also start
growing O - for O[0], which contains just 0 and 1, we will add O[1],
which also contains time - it contains both states of O as they
change. This means it contains two states, one in the first and
another in the second place. As O[0] also has two values, of which
only the second one will change the state, there might be two O's -
anyway, O[1] is already a stable state: it contains O=0 in the first
cycle and O=1 in the second cycle. Then we would get a four-cycle
group combining C[1] and T[1], for which we know that if it has Z on
one side and two cycles of O[1] (which means four cycles of
oscillating, as O[1] is a two-cycle state) on the other, it will
change O[1] into Z[0].

Now, when we look at the whole picture (with the static universe
analyzer), we can see: as we have the group C[1] producing Z[0]
beside it as it changes to T[1] - because it is composed of C[0]
behaving in the same way toward the C[1]/T[1] beside it - there also
exists C[2] etc., based on identical implication rules. This means:
as we have the whole logic visible here, how one thing produces
another and why we implied that C[1] has such behavior, we have a
logic cycle which shows that this kind of group always produces
something that has identical behaviour to itself. And that despite
all the noise we added at the beginning!

Thus, we have shown that this Halting Problem solver will effectively
solve a simple iteration. In my mind, I have played with calculating
primes with a halt condition of prime=0 etc., but this is what I
could formulate here. As for this with-time and without-time
distinction... I am still not sure whether it matters. It seems that
if all combinations of implication rules must be included in the main
ruleset, there is clearly no need to cover time specially - it will
just see all combinations. If there is any hope of really optimizing
this into solving actual problems, though, the question of time is
possibly more important, as having time specifically included might
remove some data overhead.
