
Re: Optimization Issues - WAS Optimization Speed TS2000i vs TS8.1


  • To: omega-list@xxxxxxxxxx
  • Subject: Re: Optimization Issues - WAS Optimization Speed TS2000i vs TS8.1
  • From: "Paul M. Zislis" <pzislis@xxxxxxx>
  • Date: Sun, 30 Apr 2006 13:10:00 -0400


I should have added in my earlier post that I find it highly desirable for the region of acceptable parameter values to give fairly flat results rather than peaky ones.

Paul

My earlier post follows:

> I agree with Mike's general observations. In addition,
> "optimization" is a very unfortunate term and leads some to believe
> that such testing will lead to systems that are robust in the sense
> that future results will be similar to the in-sample test results
> for the "optimum" set of parameter values.

> I prefer to think of "optimization" testing as Parameter Stability
> Testing (PST).  I find it more useful to think of the goal of such
> testing as NOT to find a parameter set that shows the "best" results
> in terms of net profit (or any other one or more metrics). Rather, I
> think of the goal of the testing as finding a large region in
> N-space (when there are N parameters involved in the testing) where
> ANY set of parameter values within that space represents
> "acceptable" trading results.  The larger the size of the region,
> the more stable the system has been when the parameter values
> changed in the past.  I would tend to favor the set of parameter
> values for out-of-sample testing or trading to be the parameter
> values closest to the center of the region.  I strongly agree with
> Mike on the need to do out-of-sample testing to confirm (or refute) your in-sample test results.

> For example, with a single parameter being tested, think of a line
> graph of the results.  The objective of PST is to find a wide set of
> contiguous values for the parameter (on the X axis) which give a
> result at or above some threshold which is viewed as an acceptable
> level for trading results (on the Y axis).  This might look on a
> graph like some earlier parameter values having results which are
> mostly less than the acceptable level, then a broad area where the
> results are all at or above that level, and finally some additional
> parameter values where results are again mostly below that level.
> There may be scattered values above the threshold before and after
> the contiguous area, but the interesting part of the graph (and the
> test) is the contiguous set of parameter values that all produce acceptable results.

> Paul
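The single-parameter PST idea above can be sketched in code: scan the results line graph for the widest contiguous run of parameter values at or above the acceptability threshold, then take the value nearest the center of that run. This is an illustrative sketch only; the function name, parameter values, and profit figures are invented for the example, not taken from the post.

```python
def stable_region(params, results, threshold):
    """Return (low, high, center) parameter values of the widest
    contiguous run whose results are all >= threshold, or None."""
    best = None  # (run_length, start_index, end_index)
    i, n = 0, len(params)
    while i < n:
        if results[i] >= threshold:
            j = i
            while j + 1 < n and results[j + 1] >= threshold:
                j += 1  # extend the contiguous acceptable run
            if best is None or (j - i + 1) > best[0]:
                best = (j - i + 1, i, j)
            i = j + 1
        else:
            i += 1
    if best is None:
        return None
    _, i, j = best
    # Favor the parameter value closest to the center of the region.
    return params[i], params[j], params[(i + j) // 2]

# Hypothetical test: lookback lengths 5..50 and net profit per value.
params = list(range(5, 55, 5))
results = [2, 4, 9, 11, 10, 12, 9, 3, 8, 1]
print(stable_region(params, results, threshold=8))  # → (15, 35, 25)
```

Note that the isolated value 8 at parameter 45 clears the threshold but is ignored, exactly as the post suggests: only the broad contiguous region (15 through 35) is interesting, and 25 is the center value one would carry forward to out-of-sample testing.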

> <SNIP>

>> Mathematically speaking, both of you are incorrect.  Any amount of
>> optimization on a fixed set of prices is curve fitting, whether you
>> have 10 or 10,000 input variable combinations.

>> Curve fitting a strategy (whether by combinatorial search, neural
>> nets, or genetic algorithms) to the noise and price patterns (if
>> present) is the "Siren Call" of today's trading platforms.
>> Optimization results on a fixed set of prices will look marvelous,
>> creating the illusion that the strategy will produce these profits
>> in the future.  But the truth is that you can optimize random data
>> (which I have done many times) with the same strategy and get
>> excellent results.  To minimize the curve fitting of the noise, one
>> must use walk-forward out-of-sample methods.  Without walk-forward
>> testing on prices that were not in the optimization sample
>> (out-of-sample), the optimization illusion will minimize your
>> trading profits.

>> As an aside, following the logic of DH and Bob's statements, it would
>> appear that the quants at Goldman Sachs only use milliseconds on GS's
>> super computer to determine profitable trading strategies. Heh?





-- 
Best regards,
 Paul                            mailto:pzislis@xxxxxxx