Why 7% is the Difference between Failure and Success in Trading

Welcome to futures.io.


I haven't much to add; I absolutely agree with the idea of an optimal trading frequency, and that transaction costs do not necessarily scale in a way that rules out higher-frequency trading.

The following 2 users say Thank You to artemiso for this post:

Great thread....just wanna say thanks to everyone who's contributed.

A couple of thoughts, and I apologize, as I'm no math genius or fancy programmer, and there are lots of questions going through my head.

I personally have not found success in "simulating" real trading, much less mathematical formulations or probabilities.

The one thing I 100% completely agree on is that small edges can have a huge impact, and it only takes 1 or 2 jackass trades to knock you back a few weeks or months.

Without trying to go off subject, I've found the most success in adding a 'picky' filter to my trades. I'm not going to take every trade my 'system' produces, so that filter removes both profitable and (hopefully mostly) losing trades up front. Is there a way to factor that into the simulations, or is it already accounted for?

Also, is there a way to calculate 'mixed' profit targets? What I mean is that if I don't feel the trade, I'm out, so suddenly I'm changing from, say, 1:1 to breaking even or losing a few ticks. And the same on the profit side: if I normally trade 1:1 but I see major buying or selling, I'm going to let the market tell me when to get out.

And on the same note, say a trader mixes his trading 'styles' to match market moods: trading 1:1 on choppy days, and when the trend is in, trading 1:4 or more.

just a few thoughts....

Now to continue with the ideas from the basis of the thread, I'd be interested in figuring out some mixes....

Say trading 3 equal unit sizes at 1:1, 1:2, 1:3; or maybe 1:0.5, 1:1, 1:2; and even smaller differences, like 1:0.6, 1:0.8, 1:1.

The idea of using a fixed risk/reward is interesting, as it takes out the 'emotional' element to some extent. See the trade, put the order in, and once hit, have the stop and profit target in place and then go for a walk. That way you're not scared out at break even or on a stop run.

Definitely cool stuff. One thing I do like about this forum is the general attitude of traders helping open each other's eyes to more profitable ways of trading. Much better than those 'elite' traders and such...

cheers!

The following user says Thank You to lsubeano for this post:

Just read a fantastic book related to the topics discussed here: Fortune's Formula. Recommended reading.


Btw, Poundstone has other highly entertaining books out (philosophy, math, computer science, etc.). A new favorite.

"...the degree to which you think you know, assume you know, or in any way need to know what is going to happen next, is equal to the degree to which you will fail as a trader." - Mark Douglas

The following 4 users say Thank You to Anagami for this post:

Great thread, thank you for posting. I went to bed thinking about this last night, and I have a question regarding how the simulation generates trade distributions based on the input figures Win%/RR. I've attached an image that shows two different distributions whose resultant Win% and RR are the same. I created two graphs so that it's clear to see why they differ from each other.


When the simulations are being run, is the model generating type A, type B, and all variations thereof, or is it merely generating type B distributions and randomizing the sequence of events?

I'm trying to understand the application of the edge and its ramifications for the model as a whole. If I can understand it I'll try to draw it in, which might make it slightly more accessible. Sometimes I get a bit lost in the maths.

Cheers

1Lot

Last edited by 1LotTrader; July 21st, 2012 at 01:19 PM.
Reason: Sorry my fault, early in the morning, it's drawn manually in excel. I have corrected.

The following 6 users say Thank You to 1LotTrader for this post:

Before I start: there is a small glitch in the right chart, as the visual part shows 11 winning trades and 8 losing trades. However, the figures below it are correct; just the last and the third-to-last bars are off.

You have selected two examples with the same win rate (50%) and the same R-Multiple (2.0). However, there are two differences that can be spotted:

(1) The first example generates larger average returns, both positive and negative.

(2) The returns of the first example are dispersed, while the second example only has two outcomes.

Actually, the second example represents a Bernoulli distribution, which is a discrete distribution with two outcomes a and b occurring with probabilities p and (1-p). We built a model for optimizing the trade size of this type of distribution, by considering the maximum acceptable risk of ruin (the probability admitted for a predefined drawdown), in the risk of ruin thread. You can compare with the cases listed there as well: https://futures.io/psychology-money-management/15602-risk-ruin-8.html#post210800
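The risk-of-ruin idea for such a Bernoulli system can be estimated by simulation. This is a minimal sketch, not the spreadsheet model from the linked thread; the win rate, outcome sizes, and drawdown limit below are illustrative assumptions:

```python
import random

def ruin_probability(win_rate, win, loss, max_drawdown, n_trades,
                     n_runs=20_000, seed=1):
    """Estimate the probability that a two-outcome (Bernoulli) system
    suffers a drawdown of at least max_drawdown within n_trades."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_runs):
        equity = peak = 0.0
        for _ in range(n_trades):
            equity += win if rng.random() < win_rate else -loss
            peak = max(peak, equity)
            if peak - equity >= max_drawdown:  # predefined drawdown hit
                ruined += 1
                break
    return ruined / n_runs

# Illustrative numbers: 50% win rate, +2R wins, -1R losses,
# ruin defined as a 10R drawdown within 100 trades
p = ruin_probability(0.5, 2.0, 1.0, 10.0, 100)
```

Tightening the acceptable drawdown raises the estimated risk of ruin, which is exactly the trade-off the thread's model captures.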

Larger Average Returns Have no Impact on the Risk Profile

(1) The fact that the average returns are larger just means that you will experience larger profits, larger losses, and larger drawdowns. To understand the impact of the larger returns, let me add a third model to your two cases. The third model is a system that produces returns of +13.8 points for 50% of all trades and -6.9 points for the other 50%. It is thus a Bernoulli distribution of returns, like model 2, just with larger outcomes. As models 2 and 3 are both Bernoulli, I can use the spreadsheet below to study the risk-adjusted outcome.

Basically the two models are identical. Except for some small differences, which are a consequence of the impossibility of trading fractional futures contracts, model 2 can be leveraged up to match model 3. So with predefined risk characteristics, you could trade model 2 with 33 contracts and model 3 with 9 contracts for the same acceptable drawdown, and they would require 43 and 45 trades respectively to reach the target.


A Larger Dispersion of Returns as Measured By The Variance Leads to a Less Favourable Risk Profile

(2) This case can no longer be analyzed with our simple model, which only works for Bernoulli distributions.
The trade returns for model 1 are dispersed around the average return.

If I compare your model 1 to the newly created model 3 (a Bernoulli distribution with returns of +13.8 points and -6.9 points), I know that the two models produce

- the same average win rate
- the same R multiple
- the same expectancy

The only difference is the risk profile, which can be determined by calculating the probability for a maximum acceptable drawdown.

(2a) If you have real trade data, you can do this with a Monte Carlo simulation and determine the percentile corresponding to your drawdown.

(2b) Before even starting any calculations, we already have a feeling that the dispersion of returns is nothing good and will negatively affect drawdowns. The average return does not help us confirm that bad feeling, but there are other statistical tools that do, for example the variance of the sample (division by N - 1):

variance = sum of (x_i - mean)^2 / (N - 1)

To obtain the standard deviation, we simply take the square root, which returns

model 3: standard deviation = 11.2
model 1: standard deviation = 12.7

The suspicion is confirmed: the dispersion leads to a higher standard deviation, which points to a less favourable risk profile.
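The sample standard deviation used above (division by N - 1) is straightforward to compute. The trade list below is a hypothetical Bernoulli sample, not the thread's actual model data, so it will not reproduce the 11.2 and 12.7 figures exactly:

```python
import math

def sample_std(returns):
    """Sample standard deviation: divide the sum of squared
    deviations from the mean by N - 1, then take the square root."""
    n = len(returns)
    mean = sum(returns) / n
    variance = sum((r - mean) ** 2 for r in returns) / (n - 1)
    return math.sqrt(variance)

# Hypothetical Bernoulli sample: 10 wins of +13.8, 10 losses of -6.9
returns = [13.8] * 10 + [-6.9] * 10
print(round(sample_std(returns), 2))  # 10.62 for this sample
```

For a dispersed sample with the same mean, the same function returns a larger value, which is the comparison the post is making.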

Comparing Drawdowns

(3) To better understand the risk profile of the trades, we will have a look at the probability that a maximum acceptable drawdown is generated by 3 successive trades. Just for demonstration purposes, I will look at the possibility of a drawdown of 25 points.

Model 3 only generates a win of +13.8 points or a loss of -6.9 points, so it cannot produce a drawdown of 25 points within three successive trades; the probability is therefore 0.

Model 1 can produce 20 different outcomes on each trade, which leaves us with 20 x 20 x 20 = 8,000 possible outcomes for three consecutive trades. Out of these combinations, the following 39 produce drawdowns of 25 points or more: (-11,-11,-11), (-11,-11,-10), (-11,-11,-7), (-11,-11,-5), (-11,-11,-4), (-11,-10,-11), (-11,-10,-10), (-11,-10,-7), (-11,-10,-5), (-11,-10,-4), (-11,-7,-11), (-11,-7,-10), (-11,-7,-7), (-11,-5,-11), (-11,-5,-10), (-10,-11,-11), (-10,-11,-10), (-10,-11,-7), (-10,-11,-5), (-10,-11,-4), (-10,-10,-11), (-10,-10,-10), (-10,-10,-7), (-10,-10,-5), (-10,-7,-11), (-10,-7,-10), (-10,-5,-11), (-10,-5,-10), (-10,-4,-11), (-7,-11,-11), (-7,-11,-10), (-7,-11,-7), (-7,-10,-11), (-7,-10,-10), (-7,-7,-11), (-5,-11,-11), (-5,-11,-10), (-5,-10,-11), (-5,-10,-10).

The probability of a drawdown of 25 points or more is therefore 208 / 8000 = 2.60 %.

This simple calculation (don't beat me up if there is a numerical error; I made it quickly) shows that the risk profiles of the two models are different, and that model 1 indeed cannot be traded with the same leverage for the same risk appetite.
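That enumeration can be automated. This sketch checks every ordered 3-trade sequence; the model 3 check reproduces the post's zero result, while the 20-outcome list is hypothetical, since the actual model 1 values are only in the attached image:

```python
from itertools import product

def three_trade_drawdown_prob(outcomes, threshold):
    """Probability that three successive trades, each drawn from the
    outcome list (with repetition, so 20^3 = 8000 ordered sequences
    for a 20-outcome list), produce a running drawdown >= threshold."""
    bad = 0
    for triple in product(outcomes, repeat=3):
        equity, worst = 0.0, 0.0
        for r in triple:
            equity += r
            worst = min(worst, equity)  # deepest point below the start
        if -worst >= threshold:
            bad += 1
    return bad / len(outcomes) ** 3

# Model 3 (Bernoulli, +13.8 / -6.9) cannot reach a 25-point drawdown
print(three_trade_drawdown_prob([13.8] * 10 + [-6.9] * 10, 25))  # 0.0

# Hypothetical 20-outcome list standing in for model 1 (illustrative)
model1_like = [22, 20, 18, 16, 14, 12, 10, 8, 6, 4,
               -4, -5, -7, -10, -10, -11, -11, -11, -11, -11]
p_model1_like = three_trade_drawdown_prob(model1_like, 25)
```

Any dispersed distribution with large enough losses yields a nonzero probability here, which is the asymmetry the post demonstrates.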

A Monte Carlo simulation on both in-sample and out-of-sample data will show the impact on position sizing and aggregate returns.

Last edited by Fat Tails; July 21st, 2012 at 01:29 PM.

The following 7 users say Thank You to Fat Tails for this post:

After reading your post 4x, I actually understood everything you said.

From my own very long hours of testing data sets and live trading, I knew that Sample A would have a different risk profile from Sample B, and that the effect of a position sizing algorithm on top of it would only serve to increase that uncertainty. We know the expectancy is the same for both sets, but we also know that the curve in A could potentially be far more volatile. I gathered that from your post, as well as how one would go about calculating it mathematically, which I didn't know how to do before.

Unless I have misread something and the answer actually lies in there, I'm still wondering how the simulation on page 1 is generating its curves. All I can see in the input parameters is the Win/Loss ratio and the Win Probability. So how is it simulating the trade distribution that we see in set A? Is it the case that the simulator automatically assumes a Gaussian distribution and then generates random trade sets based on that profile? (If so, how are we defining the domain, since it's not based off an initial trade set?)

I know this is the realm of a Monte Carlo simulator, but if it isn't a Monte Carlo sim, then what is it that we are looking at?

Apologies if my questions are a bit basic, as I mentioned I'm not a statistics wizard.

Cheers
1Lot

The following user says Thank You to 1LotTrader for this post:

It is a Monte Carlo simulator. This is what a Monte Carlo Simulator does:

Let us assume for simplicity that you have a sample strategy which produced 100 trades during a backtest. Now you watch your equity curve and find that it is pointing up. Nice. But if you run the same strategy again in the future, what is going to happen? Will you observe the same drawdown?

To continue your analysis, you now assume that the outcome of each trade is stochastically independent of the outcomes of all prior trades. This is one of the typical assumptions made by financial mathematicians, because it facilitates model building. Of course it is never true, but if you don't make that assumption, things become too complicated. It probably reminds you that the Black-Scholes model also assumes lognormally distributed prices, when we actually know that they are not.

Now, assuming that stochastic independence holds, you could take all 100 trades, rearrange them in all possible orders, and then draw the equity curves. If you do this, you will have a chart with exactly

100 x 99 x 98 x 97 x ... x 4 x 3 x 2 x 1 = 100! equity curves

Now you switch on your PC, and in about a hundred years your great-grandson will see a black cloud on the chart.

This is not feasible, so instead of analyzing all the combinations, you just select a few of them with the help of a random number generator. After selecting about 100 different paths, you draw the equity curves and look at the behaviour of the drawdowns.
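That random-path selection can be sketched as follows. The 100-trade list here is a hypothetical +2/-1 backtest, purely for illustration:

```python
import random

def monte_carlo_drawdowns(trades, n_paths=100, seed=42):
    """Reshuffle the trade list n_paths times (urn without replacement)
    and record the maximum drawdown of each resulting equity curve."""
    rng = random.Random(seed)
    drawdowns = []
    for _ in range(n_paths):
        path = trades[:]
        rng.shuffle(path)  # one random ordering out of the 100! possible
        equity, peak, max_dd = 0.0, 0.0, 0.0
        for r in path:
            equity += r
            peak = max(peak, equity)
            max_dd = max(max_dd, peak - equity)
        drawdowns.append(max_dd)
    return drawdowns

# Hypothetical backtest: 50 wins of +2 points, 50 losses of -1 point
trades = [2.0] * 50 + [-1.0] * 50
dds = monte_carlo_drawdowns(trades)
```

Sorting `dds` and reading off a high percentile gives the drawdown estimate discussed later in the thread.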

Variations of Monte Carlo Analysis:

Urn model without replacement: This is just a reshuffling of all trades. All equity paths will lead to the same final result, as each of the trades is drawn once and used for the equity curve.

Urn model with replacement: One trade can be drawn several times, while some trades will not be used at all for the equity curve. For example, you can see that the Monte Carlo simulation on page #1 used this model, as the equity curves do not converge to a single point at the end of the backtest.
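The two urn models differ in one directly testable way: without replacement, every reshuffled path ends at the same total, while with replacement the end points diverge. A sketch with hypothetical trades:

```python
import random

def final_equities(trades, with_replacement, n_paths=50, seed=7):
    """Return the final equity of each simulated path under the chosen
    urn model (constant position size, additive equity, starting at 0)."""
    rng = random.Random(seed)
    finals = []
    n = len(trades)
    for _ in range(n_paths):
        if with_replacement:
            path = rng.choices(trades, k=n)  # a trade may repeat or be skipped
        else:
            path = rng.sample(trades, k=n)   # pure reshuffle of all trades
        finals.append(sum(path))
    return finals

trades = [2.0] * 50 + [-1.0] * 50   # hypothetical backtest
no_repl = final_equities(trades, with_replacement=False)
repl = final_equities(trades, with_replacement=True)
```

This is the diagnostic used above: converging end points indicate reshuffling, diverging end points indicate drawing with replacement.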

The second question is what quantity you feed into your Monte Carlo method. You can use either

- the numerical result of each trade
- or the returns of each trade

The appropriate method depends on how you determine your position size. If you use a constant position size over the backtest, then you should reshuffle / redraw the numerical result for each trade. In this case the equity path is calculated by adding or subtracting the result of each selected trade to the equity.

If you use a fixed-fractional approach to position sizing in order to obtain maximum geometric growth, then you need to calculate the equity path by multiplying the equity with the growth factor calculated from the return of the last trade.

Both approaches are possible because, luckily enough, both addition and multiplication are commutative.
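The two equity-path calculations, and the commutativity that makes reshuffling valid, can be sketched as follows; the point results, fractional returns, and starting equity are illustrative:

```python
import math

def additive_equity(start, results):
    """Constant position size: add each trade's point result to equity."""
    equity = start
    for r in results:
        equity += r
    return equity

def fixed_fractional_equity(start, returns):
    """Fixed-fractional sizing: multiply equity by each trade's
    growth factor (1 + return)."""
    equity = start
    for r in returns:
        equity *= (1.0 + r)
    return equity

results = [120.0, -60.0, 45.0]   # hypothetical point results per trade
returns = [0.02, -0.01, 0.015]   # hypothetical fractional returns per trade

a = additive_equity(10_000.0, results)
g = fixed_fractional_equity(10_000.0, returns)

# Commutativity: reordering the trades leaves the end point unchanged,
# which is why reshuffled paths all share the same final equity
assert additive_equity(10_000.0, results[::-1]) == a
assert math.isclose(fixed_fractional_equity(10_000.0, returns[::-1]), g)
```

Only the path between start and end differs under reordering, which is exactly why the drawdowns along the path are what the simulation studies.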

The Monte Carlo Simulation shown in post #1

I think that this simulation is limited to Bernoulli distributions, that is, trades with two possible outcomes. These can be entirely defined via the win rate (the probability of a favourable outcome) and the R-multiple (the ratio obtained by dividing the winning trade by the losing trade). The simulation uses an urn model with replacement, because the equity curves diverge. It uses constant position sizing, as otherwise the equity curves would show exponential rather than linear growth.

For Bernoulli distributions, a Monte Carlo simulator is not needed, as you can determine the optimal position size with a simple calculation. The Monte Carlo simulator is superior when you cannot build a simple model to describe the behaviour of a statistical series. In that case it can be used to approximate the real probabilities, although the model is built on the false hypothesis of stochastic independence.
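One standard form of that simple calculation, assuming the goal is maximum geometric growth as discussed above, is the Kelly fraction f* = p - (1 - p)/R for a two-outcome system (the thread's own model works from risk of ruin instead, so this is a related sketch, not a reproduction of it):

```python
def kelly_fraction(win_rate, r_multiple):
    """Growth-optimal fixed fraction for a two-outcome (Bernoulli)
    system: f* = p - q / b, where p is the win rate, q = 1 - p,
    and b is the win/loss ratio (the R-multiple)."""
    return win_rate - (1.0 - win_rate) / r_multiple

# 50% win rate with a 2:1 R-multiple
print(kelly_fraction(0.5, 2.0))  # 0.25
```

In practice traders size well below full Kelly, since the full fraction implies drawdowns far beyond most risk appetites; that is precisely where the maximum-acceptable-drawdown input from the risk of ruin model comes in.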

The following 5 users say Thank You to Fat Tails for this post:

Thank you Fat Tails for taking the time to help me reason this out and validate my initial thought. I've done quite a bit of work with different position sizing algorithms, so I am familiar with the way they randomize trade sequences in order to analyse the effects of leverage on the different sequences that are possible in a random distribution. However, I have always done it by running a simulation on top of an existing trade set, as opposed to generating trades from such a simple set of inputs. This is what led me to suspect from the beginning that the only way this simulation could work is by generating a simple two-outcome model.

As we know, the market is not a Bernoulli distribution; if I'm not mistaken, the only way to actually create one is to implement a system on top of it that produces one through rigid trading rules. Having thought this through thoroughly, I think it would be presumptuous for Anagami to assume that every trader, or at least a large number of them, is trading a Bernoulli distribution. To be quite frank, in my 4.5 years of trading I can't actually remember having met one who does.

My thought is that the reason a small edge generates such magnificent returns is the consistency of the Bernoulli distribution itself. The risk profile, as we previously discussed, is in no way the same as the risk profile of type A. If we then use fixed-fractional sizing on top of it, we magnify this edge tenfold, since the strength of most position sizing algorithms relies heavily on the consistency of the underlying distribution.

So are we really looking at "Why 7% is the Difference between Failure and Success in Trading?" or are we looking at "Why 7% is the Difference between Failure and Success in a Bernoulli Distribution?"

Thoughts?

1Lot

Last edited by 1LotTrader; July 22nd, 2012 at 03:08 PM.

The following user says Thank You to 1LotTrader for this post:

The Bernoulli distribution is a special case, which can be used to build a simple model. The simple model is very useful, as it already shows a lot of things, for example

- that, departing from an identical expectancy, a trading system with a high win rate produces lower drawdowns and gets you better risk-adjusted returns than a trading system with a high R-multiple and a low win rate,
- that an input is needed from the trader, who selects the maximum permissible drawdown (= ruin) and the maximum acceptable probability (= risk of ruin) that this drawdown will actually occur during the trading period.

This is very useful, as it allows for understanding some of the basics.

Now if you have a system with a proven edge (proven meaning that the system worked for the in-sample period), then the question of position sizing is quintessential. In real life you will not be able to build a model that reflects your backtest, so it is better to run some simulations. And that is where Monte Carlo kicks in. I agree that it should be run on the real or backtested set of trades and not on a model. This will get you an estimate (not an exact value, as you assumed a Gaussian distribution of returns and use a random generator to select the trades) of the likelihood of the maximum permissible drawdown.

Then you can adjust position size to stay within the limits that reflect your risk appetite.

The following 2 users say Thank You to Fat Tails for this post: