Re: Proof that neural nets work

John Moody <john.atwell.moody@xxxxxxxxx> wrote in message
news:1148081025.906760.320300@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Bill Reid wrote:

Since you are posting this on a stock newsgroup, are you implying
that stock prices can be predicted by linear, albeit "complicated"
trig functions? If we could divine what these functions are, similar
to "attractor reconstruction" in chaos theory, could we then just throw
the neural net away and use the functions to predict the market?

Fair enough. It is a precise question and deserves a precise answer.

The pictures at http://www.goldengem.co.uk/description.html#proof show
the result of putting some related math functions (not linearly
related) into a neural network, then asking the network to predict each
one into the future.

The functions are chosen not to be useful, but to be hard to predict.
Things like cubing the sine of i, then adding a number, then taking the
cosine of that!

4x(1-x), starting with 0.80 and rounded to two places at each step:
0.80
0.64
0.92
0.29
0.82
0.59
0.97
0.12
0.42
0.97
0.12

And then it starts to repeat the same sequence (0.12, 0.42, 0.97) due
to the rounding. As a "random number generator" it sux (rounding off
at six places rather than two would give it a much greater "period"), but
here's the thing. You can indeed use a neural net to PERFECTLY
predict a sequence of apparently "random" numbers, after some
amount of "training", if indeed those numbers are generated by
some unknown, but continuously and precisely applied discrete
function.
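The rounded iteration above is easy to reproduce, and the repeat is easy to find mechanically. This is just a stdlib Python sketch (nothing from any actual neural-net package); the cycle-finder simply iterates until a value recurs.

```python
# Reproduce the rounded logistic-map sequence from the post and find
# where it starts to cycle. Pure Python; no external data needed.

def rounded_logistic(x0, places, steps):
    """Iterate x -> 4x(1-x), rounding to `places` decimals each step."""
    xs = [round(x0, places)]
    for _ in range(steps):
        x = xs[-1]
        xs.append(round(4 * x * (1 - x), places))
    return xs

seq = rounded_logistic(0.80, places=2, steps=10)
print(seq)  # [0.8, 0.64, 0.92, 0.29, 0.82, 0.59, 0.97, 0.12, 0.42, 0.97, 0.12]

def cycle_length(x0, places, max_steps=100000):
    """Length of the cycle the rounded iteration eventually falls into."""
    seen = {}
    x = round(x0, places)
    for i in range(max_steps):
        if x in seen:
            return i - seen[x]
        seen[x] = i
        x = round(4 * x * (1 - x), places)
    return None

print(cycle_length(0.80, 2))   # the short (0.12, 0.42, 0.97) cycle: period 3
print(cycle_length(0.80, 6))   # rounding at six places: much longer period
```

At two decimal places the orbit falls into the three-value cycle almost immediately, which is exactly why the sequence above "sux" as a random number generator.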

But what happens if the function is both unknown and only applied
sometimes and not precisely? Or for that matter, what happens to
your "training" if, for example, you are currently perfectly predicting the
sequence of an unknown "random" number generator and then they
re-seed the generator?

The point is, no one could predict even one of these functions on its
own, but since the four functions entered are related to each other
(though not linearly), the neural network was able to figure out the
relation and predict all of them perfectly.
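To make the underlying claim concrete: a small feedforward net really can learn a deterministic update rule like 4x(1-x) from examples alone. The sketch below is a generic one-hidden-layer net trained with plain gradient descent in NumPy; it is not GoldenGem's network, and the layer size, learning rate, and iteration count are arbitrary choices for the demo.

```python
# Minimal sketch (assumed architecture, not GoldenGem's): train a
# one-hidden-layer tanh network to learn the map x -> 4x(1-x).
# Once the update rule is learned, the net can roll a sequence
# forward to within training error, which is the claim above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))   # current value x_n
Y = 4 * X * (1 - X)                        # next value x_{n+1}

H = 16                                     # hidden units (arbitrary)
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(30000):
    h = np.tanh(X @ W1 + b1)               # hidden activations
    pred = h @ W2 + b2                     # linear output
    err = pred - Y
    # Backprop for mean-squared error, full batch
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(f"training MSE: {mse:.6f}")
```

After training, the mean-squared error is a small fraction of the target's variance (about 0.089 for this map on uniform inputs), i.e. the net has effectively recovered the rule.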

This is what makes neural networks "neat".

What does that prove? Not that neural networks can predict the stock
market. It proves this particular one does what neural networks can do
best: when there is a relation between graphs, it will find it, and use
that to its advantage to predict one into the future.

Again, that's "neat".

This is very different from linear algebra, or linear regression. The
relation between these functions is very complicated and indirect. Only
a neural network could do this.

I must again say "neat".

Now, a net like Stock100 that just runs automatically cannot really do
anything useful. You have to have human cognition first, to imagine a
relation between various tickers and volumes. Then, a neural net can
tell you whether that relationship is real, and quantifiable.

Yes, which is why I fooled around with neural nets a LONG time
ago to try to "predict" the stock market...

That works in GoldenGem because, when you set sensitivity to zero, the
net is blinded to the blue curve (the present day stock values) and
only sees the red curve (the historical prices and volumes). So if the
green curve it produces matches the blue curve, this means it HAS
found a mathematical relationship.

Well, did it? Has anybody else done this (I did this almost
twenty years ago, and many others claim to have done it)? If you, or
anybody, did it, how closely did the "blue curve" (actual values) match
the "green curve" (predicted values)? How much price/volume data
did you feed it, and what was the period of the data?

But let's start with price data alone, and look at it from some other
angles. If we look at daily data (or really any period of data), we
"discover" that there IS a simple mathematical relationship, a
STATISTICAL relationship, between today's data and yesterday's
data, because if we plot a histogram of the change between the
two days, we get something sort of resembling a "normal curve",
with a lot of data points clustering around a very small change,
and very few for a large change.
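That clustering is easy to see on simulated data. The sketch below uses a plain random walk as a stand-in for real daily closes (no actual market feed is involved): the histogram of day-to-day changes piles up in the bins near zero.

```python
# Sketch of the "statistical relationship" above, on simulated prices.
# A random walk stands in for real daily closes; the histogram of
# day-to-day changes clusters tightly around zero change.
import numpy as np

rng = np.random.default_rng(42)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))   # fake daily closes
changes = np.diff(prices)                          # today minus yesterday

counts, edges = np.histogram(changes, bins=21, range=(-5, 5))
center = len(counts) // 2
print("count in the central bin:", counts[center])
print("count in the two outermost bins:", counts[0] + counts[-1])
```

The central bin dwarfs the tails, which is the "sort of normal curve" shape described above.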

Likewise, from chaos theory, our first pass at "attractor reconstruction"
(n vs. n-1) shows a slightly fuzzy but easily discernible "attraction" of
today's stock prices to yesterday's stock prices. But just plain old
logic and common sense and any amount of experience says that
of course traders base their bids and asks in the market on the most
recent sales prices, just like people believe that if their next door
neighbor sold their house for $1,000,000, THEY should also get
$1,000,000...and they'll stick to that unless there are other factors
that cause or allow them to raise or lower the price.
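That first-pass reconstruction amounts to correlating each price with the previous one. Again on simulated data (a random walk in place of real quotes), consecutive prices hug the diagonal of the n vs. n-1 plot:

```python
# First pass at "attractor reconstruction": correlate price[n] with
# price[n-1]. On a random walk, today's price is strongly "attracted"
# to yesterday's; simulated data stands in for real quotes.
import numpy as np

rng = np.random.default_rng(7)
prices = 100 + np.cumsum(rng.normal(0, 1, 2000))

lag1 = np.corrcoef(prices[1:], prices[:-1])[0, 1]
print(f"correlation of price[n] with price[n-1]: {lag1:.4f}")
```

The lag-1 correlation comes out very close to 1, the numerical version of the "fuzzy attraction" described above.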

Now, as a practical matter, after running the neural net on the
same amount of data as was required to come to the above
conclusions, can it actually provide a more profitable prediction
than what we learned (or already knew) above? If not,
then we're actually a step behind with the neural net, because
we haven't LEARNED anything, due to the "black box" nature
of the predictions...
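The bar the net has to clear can itself be written down. The naive forecasts we get for free from the observations above already do quite well; on a random walk (simulated here, standing in for real data), "tomorrow = today" beats "tomorrow = the long-run average" by a wide margin, so a net only earns its keep if it beats persistence too.

```python
# The free baselines any net must beat. Persistence ("tomorrow =
# today") vs. predicting the global mean, on a simulated random walk.
import numpy as np

rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))

persistence_mse = float(np.mean((prices[1:] - prices[:-1]) ** 2))
mean_mse = float(np.mean((prices[1:] - prices.mean()) ** 2))
print(f"persistence MSE: {persistence_mse:.2f}")
print(f"global-mean MSE: {mean_mse:.2f}")
```

Persistence's error is roughly the one-day step variance, while the global-mean forecast's error grows with the walk's overall spread; a "black box" that only matches persistence has taught us nothing.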

We are not expert investors, we are providing a mathematical service
here. To check that we have not made any mistakes, we can use GoldenGem
to predict very complicated mathematical functions. We ourselves are
not using it in the stock market, and we do not know how.

Yeah, that's one of the keys to fooling around with neural nets, or
really doing anything to try to "predict" the stock market. You gotta
first know what data is important, and the question is, how do you
come to THOSE conclusions? It's kind of like a chicken and egg
thing...

It is very clear that if we entered too many stocks, or just entered all
of them, it would get a fit, but for the wrong reasons (essentially
because, with enough input data, there will be enough random connections
for it to find a fit).
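That failure mode is easy to demonstrate without any neural net at all: regress one random series on many other random series and plain least squares finds an excellent in-sample "fit" purely by chance. All of the data below is random noise with no relationship whatsoever.

```python
# "Artificial stupidity" in action: with enough unrelated inputs,
# least squares fits a random target well in-sample by pure chance.
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 45                        # 50 "days", 45 unrelated "stocks"
X = rng.normal(size=(n, p))          # random inputs
y = rng.normal(size=n)               # random target, independent of X

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
r2 = 1 - resid.var() / y.var()
print(f"in-sample R^2 with no real relationship: {r2:.3f}")
```

With 45 free inputs against 50 observations, the in-sample R-squared lands near 0.9 despite there being nothing to find, which is exactly why "too many stocks" produces a fit for the wrong reasons.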

Ah yes, this is where "artificial intelligence" turns neatly into
"artificial stupidity", where a computer can even more efficiently find
"patterns" in the market that are ridiculously non-existent.

However, I still believe that "natural stupidity" is the best stupidity of
all. At least a computer can't talk itself into a completely boneheaded
trade the way a human can, which is why I do like to take the advice of
my computer as much as possible, if only to "avoid mistakes"...

But if a person has an idea that a connection might
exist, that certain share activity might predict certain other activity,
then the way to make that rigorous, quantify it, and get a prediction
based on it would definitely be to use a neural net.

Yeah, that's the general drill, more specifically you're gonna have to
quantify any results you get statistically, and then ponder why the results
are the way they are, and after you've done all that, you might have
been better off taking a different tack.

I DID get "interesting" results from neural nets in trying to "predict" the
market, but I did feel that I needed to try different approaches to really
refine effective strategies. One of the problems I haven't even addressed
is the quality of a lot of the relevant data (or the data I "think" is
relevant), and how well a neural net works when fed a diet of politically
motivated BS.

And I can certainly
promise that GoldenGem would be the one to use.

Well, since I've used neural nets in the past, I'd recommend them
to everybody, on the basis that if I've done something it must be brilliant
and useful. Hell, I've thought about digging out my old neural net
software and running some tests on the data sets I currently use for
market "predictions" and see how closely the numbers match up
against my money-flow model algorithms...

---
William Ernest Reid
Post count: 363

