Robustness is the elusive property that many automated and mechanical systems lack; conversely, it is a quality that good discretionary traders should possess. Most system developers strive for simplicity, which is understandable because robustness is very difficult to measure. Yet a more complex model is more likely to capture the markets accurately, and as models become more complex they also become harder to verify. Markets move because of all the factors that influence them: fundamental, technical, sentiment, and structural. Which factors matter most at any given moment changes over time.
The result is a spectrum. At one end sit algorithmic/automated strategies: very simple and verifiable systems that nonetheless fail to capture the full reality of the markets. At the other end sit discretionary traders, whose methods are very difficult to verify but are likely more accurate.
I will present the following arguments and demonstrate why they are logical:
Discretionary traders, i.e. more complex models, will have greater robustness than any single automated or algorithmic strategy
Automated strategies will tend to be both more consistent and more fragile
Why Are Algos More Consistent?
First, automated strategies will tend to be more consistent because "consistency" is strongly correlated with "same process". If a discretionary trader achieves very high consistency, there is a higher probability that the strategy could be automated, at least over a short interval. In other words, one cannot achieve super consistency without acting in a consistent, well-defined way. Consistency also implies fragility, because for any strategy to be consistent implies routineness: something can be detected, and not only detected but detected with high probability. And if something can be detected, then most likely others have detected it and are exploiting it too. As a result, consistency is more likely to lead to fragility. The only exceptions are cases where the consistency has a barrier to entry such as latency (as in HFT), or where the consistency derives from a non-obvious or unknown relationship that is difficult to detect but still present.
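To make the consistency-implies-fragility argument concrete, here is a minimal, purely illustrative simulation. The numbers (initial edge, decay rate, entrants per year) are assumptions chosen for demonstration, not data: a strategy with a steady per-trade edge sees that edge shrink as more participants detect and crowd the same signal, so an equity curve that looked consistent goes flat or negative.

```python
import random

def simulate_crowded_edge(initial_edge=0.10, cost=0.02, decay_per_entrant=0.5,
                          entrants_per_year=1, years=6, trades_per_year=250, seed=7):
    """Toy model: each new entrant halves the remaining edge (an assumed decay rate),
    while fixed trading costs stay constant. Returns yearly average P&L per trade."""
    rng = random.Random(seed)
    results = []
    edge = initial_edge
    for _year in range(years):
        pnl = 0.0
        for _ in range(trades_per_year):
            # win probability implied by the current edge around a fair coin
            p_win = 0.5 + edge / 2
            pnl += (1.0 if rng.random() < p_win else -1.0) - cost
        results.append(pnl / trades_per_year)
        # more participants detect the same consistent signal each year
        edge *= decay_per_entrant ** entrants_per_year
    return results

if __name__ == "__main__":
    for year, avg in enumerate(simulate_crowded_edge(), start=1):
        print(f"year {year}: average P&L per trade = {avg:+.3f}")
```

Running this, the average P&L per trade starts positive and drifts toward zero and below as the assumed crowding compounds, which is exactly the consistent-then-fragile pattern described above.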
With an understanding of consistency and robustness, we can start to imagine a phase space of risk and reward among various market participants. The best models are truly simple because they can be verified. However, such models become fragile or end up in an "arms race" against HFT competitors. As models become more complex, they also become inherently riskier. You might have something that looks like this:
Simple with ultimate speed (or other barrier to entry): Very profitable. Consistent, verifiable, and robust, but subject to technological shifts. This is the space of the market makers and HFT. Overall risk is very low: strategy risk is extremely low, though technological risk may be high.
Simple with unknown relationship, non-trivial: Rather profitable and verifiable. Consistent, verifiable, and robust, but subject to eventual discovery. Risk is moderate and depends on the probability of discovery or of unmodeled events. This is the space of the elite hedge fund algo traders. Rather little is known about such models in use today, but we can get some idea from the past. It was said that traders who were first to understand the contango/backwardation nature of futures contracts were able to arbitrage them with high profitability. Another example might have been pair trading in the earliest years after its discovery (see the sketch after this list).
Simple algo without special advantage: Consistent but not robust, with limited barriers to entry. This is the space of the professional algo trader and some retail algo traders. The risk of model failure is high, but the cost of failure might be lower because the win ratio would be high and it can be determined fairly quickly when the model quits working. The probability that the model is truly accurate is low. Risk is high to very high.
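As a purely illustrative sketch of the "simple with unknown relationship" category, here is a classic pairs-trading style signal. The lookback window, thresholds, and hedge-ratio estimate are assumptions chosen for demonstration; this is not the method any particular fund used, just the general shape of such a strategy.

```python
import numpy as np

def pairs_signal(prices_a, prices_b, lookback=60, entry_z=2.0, exit_z=0.5):
    """Toy pairs-trading signal on two (assumed related) price series.
    Returns +1 (long spread), -1 (short spread), or 0 (flat) for the latest bar."""
    log_a = np.log(np.asarray(prices_a, dtype=float))
    log_b = np.log(np.asarray(prices_b, dtype=float))
    # hedge ratio from a least-squares fit of log A on log B over the lookback window
    beta = np.polyfit(log_b[-lookback:], log_a[-lookback:], 1)[0]
    spread = log_a - beta * log_b
    window = spread[-lookback:]
    z = (spread[-1] - window.mean()) / window.std(ddof=1)
    if z > entry_z:
        return -1   # spread rich: short A, long B
    if z < -entry_z:
        return +1   # spread cheap: long A, short B
    return 0        # near the mean (or inside the bands): stay flat / close out
```

The point is not the specific rule but the structure: a simple, consistent signal built on a relationship that was once non-obvious. While the relationship stayed unknown to others, the strategy sat in the "consistent and robust" region; once widely discovered, it slid toward the fragile end of the spectrum.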