
RE: Precision Errors






> -----Original Message-----
> From: Gary Fritz [mailto:fritz@xxxxxxxx]
> Sent: Thursday, July 19, 2001 05:47
> To: omega-list@xxxxxxxxxx
> Subject: RE: Precision Errors
>
>
> > Of course, if you use 1000 or more maxbarsback settings, such side
> > effects may appear. The question is: Who really uses 1000 or more
> > bars in his current system development?
> > Value1= average(c, 1000) does make sense to you ?
>
> Pierre, you seem to have a problem understanding what people have
> written.  I know English is not your native language but you should
> be more careful before you ridicule someone's post.
>
> Neal Falkenberry wrote:
> NF> With Currentbar > 1,000 the difference between TS's calculation of
> NF> a 90 bar correlation versus Excel or C++ (which agree to 9
> NF> decimals) is .32 versus .56.

The reason why I spoke about MaxBarsBack (MBB) and not CurrentBar (CB) is
the following:
If an error only shows up once CB grows large, it means the calculation
depends on the previous bars, regardless of the MBB setting.
If, on the other hand, code that uses a large MBB value does not show the
error, because all of the calculations are redone on every bar over that
full MBB window, then the CB anomaly is avoided, which again points to an
accumulation of errors (see my previous F_average example if you still do
not understand me).
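To make the point concrete, here is a small C++ sketch of my own (not the
original F_average code; the window length and the synthetic data are made
up) comparing, in single precision, a running sum that is carried from bar
to bar against a sum recomputed in full over the same window on every bar.
The difference is tiny at first, but it is there and it grows with CB:

#include <cstdio>
#include <cmath>
#include <vector>

int main() {
    const int len = 90;                 // lookback window, arbitrary
    std::vector<float> price;
    float runSum = 0.0f;                // carried from bar to bar, like the posted code

    for (int bar = 0; bar < 20000; ++bar) {
        // synthetic price series, just to have non-trivial values
        float p = 100.0f + 50.0f * std::sin(bar * 0.1f) + 0.001f * bar;
        price.push_back(p);

        if (bar < len) { runSum += p; continue; }

        // sliding-window update: SUM(new) = SUM(old) + value in - value out
        runSum += p - price[bar - len];

        // full recomputation of the same window, done every bar
        float fullSum = 0.0f;
        for (int i = bar - len + 1; i <= bar; ++i)
            fullSum += price[i];

        if (bar % 5000 == 0)
            std::printf("bar %6d  running avg %.6f  full avg %.6f  diff %.6g\n",
                        bar, runSum / len, fullSum / len,
                        (runSum - fullSum) / len);
    }
    return 0;
}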


>
> Note:  **CurrentBar** > 1000, not MaxBarsBack.  **90 bar**
> correlation.  Both of which are quite within the bounds of reasonable
> use.  (Though I'm not sure why CurrentBar is relevant, since the TS
> Correlation function doesn't accumulate anything.)

Yes, I know.
But if you look at the correlation code posted, you will see this in the
comments:

PROGRAM   	: User Function: S_Correlation_Moment

	COMPILER   	: Omega Easy Language

	PROVIDED BY	: Anonymouse

	LAST EDIT	: 12/30/1996

	PURPOSE		: Returns the Coefficient of Correlation.
                  For a dependent variable YDEP
                  (usually Data1) it measures how
                  closely related YDEP is to XIND (usually data2)
                  the independent variable. If YDEP increases
                  directly as XIND increases, YDEP is positively
                  correlated with value 1.0. If YDEP
                  decreases directly as XIND increases then
                  YDEP is negatively correlated at value -1.0
                  If there's no linear relation between the
                  two, the correlation is 0.0.

================LOOK========>
                  Experimental program only: Use at your own risk.
                  It was originally written to illustrate a method for quick
                  calculation of sums. Do a one-time original
                  SUM = V0 + V1 + ... +VLen-1,
                  for the next data point, the old V0 becomes V1,
                  old V1 becomes V2, etc.
                  so, SUM (new) = SUM (old) + V[0] (new) - V[Len](old).

As you can read, this code carries the previous sum value over to the next
bar and will probably accumulate the errors.
This is not the most accurate way to get the Pearson value, but it is the
fastest.
What you have to do is recode it with a full-length MBB calculation and not
reuse the previous sum values, as is done here.
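As an illustration of what I mean by a full-length calculation (my own C++
sketch, not Brickey's code and not the exact EL function), the coefficient
is rebuilt from scratch over the last Len bars on every bar, so nothing is
carried from one bar to the next:

#include <cmath>
#include <cstddef>

// Pearson correlation over the last 'len' points ending at index 'bar',
// recomputed in full on every call; no running sums are reused.
// Caller must ensure bar + 1 >= len.
float PearsonFull(const float* y, const float* x, std::size_t bar, std::size_t len)
{
    float sy = 0, sx = 0, syy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = bar + 1 - len; i <= bar; ++i) {
        sy  += y[i];
        sx  += x[i];
        syy += y[i] * y[i];
        sxx += x[i] * x[i];
        sxy += y[i] * x[i];
    }
    float n     = static_cast<float>(len);
    float cov   = n * sxy - sx * sy;
    float varY  = n * syy - sy * sy;
    float varX  = n * sxx - sx * sx;
    float denom = std::sqrt(varY * varX);
    return denom != 0.0f ? cov / denom : 0.0f;
}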

>
> But I'm not certain we're looking at the same thing here.  Neal, did
> you test this with single-precision floats in C++?  I'm 99.9% certain
> Excel uses double-precision, so if C++ agrees with it to 9 decimal
> places, the C++ test was probably double-precision too.  You can't
> fault TS for not matching double-precision results when it's clearly
> documented that TS handles only singles.

Makes sense!
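For what it is worth, a quick sketch of my own (the series is arbitrary,
nothing to do with Neal's test) shows how far a plain single-precision sum
can drift from its double-precision counterpart after enough terms:

#include <cstdio>

int main() {
    // Accumulate the very same float inputs in single and in double precision,
    // so the only difference is the precision of the accumulator.
    float  sumF = 0.0f;
    double sumD = 0.0;
    for (int i = 1; i <= 1000000; ++i) {
        float v = 1.0f / static_cast<float>(i);
        sumF += v;
        sumD += v;
    }
    std::printf("float  sum: %.9f\n", sumF);
    std::printf("double sum: %.9f\n", sumD);
    std::printf("difference: %.3g\n", sumD - static_cast<double>(sumF));
    return 0;
}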

>
> Also, Pierre is right that TS's default correlation function bears no
> resemblance to the correlation function that you probably used in
> Excel, either CORREL() or PEARSON().  See how Brickey's correlation
> code -- which should match PEARSON() -- compares to C++ with single-
> precision.  Better yet, test Brickey's EL code against a single-
> precision C++ implementation of the same algorithm, so you know
> you're comparing apples to apples.  If THAT test shows radical
> differences, then there is a legitimate problem.

It will probably not match, due to the accumulation of errors in the
sliding-window summation after large CB values!
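If you want to see why, this is the kind of bookkeeping the posted function
does (again only my own sketch of the idea, not the actual EL code): the
five sums are carried from bar to bar and only corrected by the value that
enters and the value that leaves the window. Feed the same single-precision
series to this and to the full recomputation above and the drift becomes
visible after a few thousand bars:

#include <cstddef>

// Sliding-window bookkeeping in the style of the posted EL function.
// In single precision, the small correction errors pile up as CurrentBar grows.
struct SlidingSums {
    float sy = 0, sx = 0, syy = 0, sxx = 0, sxy = 0;

    // yNew/xNew enter the window, yOld/xOld drop out of it
    void Slide(float yNew, float xNew, float yOld, float xOld) {
        sy  += yNew - yOld;
        sx  += xNew - xOld;
        syy += yNew * yNew - yOld * yOld;
        sxx += xNew * xNew - xOld * xOld;
        sxy += yNew * xNew - yOld * xOld;
    }
};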

>
> > Regarding your precision problem, there is always a workaround.
> > ... I am always amazed to see how people are quick to bash without
> > having the idea of trying to solve with what they have and adding
> > some use of their brain.
>
> Pierre, let's imagine you've just bought a car.  It appears to work
> fine.  But then you discover that, due to a bad design, the fan belt
> rubs against the brake lines.  Occasionally the belt rubs through the
> brake lines, the brake fluid escapes, and you have no brakes.

You may consider buying a plane.
Roads are very dangerous today.

>
> Now let's say you went to the car company or a mechanic and they said
> "Well there's always a workaround.  You could replace the brake lines
> every morning, or you could reroute the brake lines, or you could
> redesign the fan belt so it doesn't rub on the brake lines any more.
> I am always amazed how people are quick to bash without ... " etc etc.
>
> Would you consider that a reasonable response to the badly designed
> car?  If so, remind me never to buy a car -- or software -- from you.

Fortunately, I do not sell cars.
Only TS tools, and as far as I know, I still understand what I do.

Sincerely,

Pierre Orphelin
www.sirtrade.com
TradeStation Technologies representative in France