Good question. It's hard for me to pick because I feel that every book I've read on trading has had some effect on my views.
The information in the first four books listed below probably has had the biggest impact on how I currently see the market. The Williams and Hurst material I've learned primarily through a variety of sources (and I'm still learning). The remaining two are must-reads as well, IMO.
Trade Your Way to Financial Freedom - Tharp
Master the Markets - Williams
The Profit Magic of Stock Transaction Timing - Hurst
Trading with the Odds - Kase
Reminiscences of a Stock Operator - Lefèvre
Market Wizards - Schwager
Among the ones I have read, I can recommend:
High Probability Trading Strategies - Robert (Bob) Miner.
This is a very good book about swing trading; it describes a full trading plan from entry to exit, and it also comes with some video examples on CD.
Currently I'm reading an oldie, and I must say that after six chapters it is quite surprising:
Trading with DiNapoli Levels: The Practical Application of Fibonacci Analysis to Investment Markets - Joe DiNapoli.
Did you try putting Miner's method into practice? I found it to be difficult with all the ratios, retracements, extensions, etc. From what I saw it did seem like he had something, but it just seemed like so much work. It would probably be easier with his special software.
I did like how he showed examples that didn't work; very few books do that. He seemed like an honest guy.
Fooled by Randomness: The Hidden Role of Chance in the Markets and in Life
by Nassim Nicholas Taleb
The Black Swan: The Impact of the Highly Improbable
by Nassim Nicholas Taleb
I think it's important to understand how much we don't understand and how bad we generally are at prediction, despite all the "evidence" to the contrary. Take a complex system like weather. An awful lot of technology has been thrown at weather prediction, yet there is a huge fall-off in accuracy after about three days out because there are just too many variables. Local news will still give you a 10-day forecast; it is worthless, but it makes you feel better so you can plan your weekend. Then you can express disbelief when it rains on the weekend even though the "forecast" was for sun. You are getting the comparison with technical indicators predicting price, right?
Anyhow - here is what one reviewer said of Taleb's work. It might encourage you to go read his books.
Sorry for the long post but the thread is about books...
My New Crush
You know how kids get a crush on someone, how for a time, their every thought and feeling is enlivened by the uncanny existence of the Object of Desire? Reluctantly, I admit this describes my intellectual life. I live for those moments when I discover a new mind, one that illuminates a facet of the world I have not previously been able to bring into focus. I read, I listen, I obsess over my new crush’s thoughts. I have the feeling that integrating them will change my mind, the way a new couch or table refocuses an entire room. I crave that change, and at the same time, I don’t want my new crush to displace the beloved old crushes lounging in the cozy armchairs of my awareness. Meet Nassim Taleb, a thinker who just might clash with the furniture.
Since I discovered him a few days ago, I’ve been downloading, ordering books and listening to podcasts. Go to Taleb’s Web site for links to most everything, and Wikipedia for an overview. Here are all the caveats: this is a supremely confident (I forbear to say arrogant because I like his mind so much), elite personage whose interest in financial markets I don’t really share, whose politics appear to be of the Olympian libertarian variety and whose grasp of the language of mathematics so far exceeds mine it might as well be Greek (or Lebanese, as he is by birth). But I promise you, the underlying ideas are well worth the journey. What Taleb has already given me are much better reasons than my own instincts to do two things I’ve been advocating loud and long: distrust predictions and question theories.
One of his main rhetorical devices is to imagine two realms. Mediocristan is “the province dominated by the mediocre, with few extreme successes or failures. No single observation can meaningfully affect the aggregate. The bell curve is grounded in Mediocristan.” For example, concrete physical realities with limited ranges, like human height or weight, express Mediocristan. If you sample a thousand human beings, plotting their heights, you’ll get a result that looks very much like a bell curve, with most people clustered within a close range, and few that are markedly shorter or taller than that range. Within that sample, the presence of a few little people or basketball players won’t significantly affect the average, the median or the whole. Extremistan is “the province where the total can be conceivably impacted by a single observation.” For example, there is no intrinsic limitation on income. Randomly choose a thousand humans from the poorest nation to the richest: if Bill Gates is included in the mix, he will significantly throw off the average, the median, and the whole. His presence creates a complexity very different from the regularity of Mediocristan. Taleb argues convincingly that we treat far too much of our reality as if it were Mediocristan when in fact much of it often behaves like Extremistan, where there are occasional “black swans” (his name for the unexpected event and the title of his most recent book) among the white. So, for example, out of the many thousands of books, films and recordings released each year, a small number will account for the largest part of sales, and it is not possible to predict with certainty which of the many works released will find black swan-style success (or failure). Indeed, in any endeavor susceptible to notable, unpredictable exceptions, no amount of examining the past will enable us to foretell the future. What’s going on here?
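Taleb's two provinces can be made concrete with a tiny simulation. The distributions and numbers below are illustrative assumptions of mine, not anything from the review; the point is only that a single extreme observation barely moves a bell-curved average but dominates a fat-tailed one:

```python
import random

random.seed(42)

# Mediocristan: heights in cm, roughly bell-curved, bounded variation
heights = [random.gauss(170, 10) for _ in range(1000)]

# Extremistan: incomes, fat-tailed; one "Bill Gates" observation added
incomes = [random.lognormvariate(10, 1) for _ in range(999)] + [50_000_000_000]

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    return s[len(s) // 2]

# No single height can move the average much, but the one huge income
# dominates the income mean while leaving the median almost untouched.
print(f"height mean {mean(heights):.1f}, median {median(heights):.1f}")
print(f"income mean {mean(incomes):,.0f}, median {median(incomes):,.0f}")
```

Run it and the height mean and median nearly coincide, while the income mean lands in the tens of millions even though the median income is a few tens of thousands; that gap between mean and median is one quick way to spot when you are in Extremistan.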
Taleb discusses many factors contributing to our tendency to see our world as Mediocristan. There is the fact that our brains evolved long ago to deal with a world with many fewer variables, much less organized information, and a vastly smaller number of theories to explain them. The more complex any given situation, the larger the number of examples you need to understand what is happening there. For instance, sampling the sales of a few dozen published books each year won't tell you much about the prospects of the thousands of others not sampled. It's just as likely as not that your sample would include one or more black swans—unexpectedly huge winners or losers—so anything you might conclude based on it would not be generalizable to the rest. Yet it is plain to see that we have a powerful (one might say inbuilt) desire to figure things out, and we like it best when they fall into stable, understandable and predictable patterns. So over and over, we surrender to that desire, generalizing on the basis of too little information and coming up wrong. If I place a few winning bets, I might conclude I am a skilled gambler, even though all that actually happened was a short run of luck. So many Hollywood careers have that trajectory: an early black swan of success is chased by financiers wanting to invest in the next blockbuster, only the second film is a plain old white swan and the one after that an ugly duckling, and all the investors are puzzled as to how they could have been mistaken…until the next black swan comes along.
I love what Taleb has to say about inventions, how almost all of the discoveries that have had tremendous impact on our culture were accidents in the sense that they were discovered while searching for something else. Because of hindsight bias, he says, histories of economic life and scientific discoveries are written with straightforward story lines: someone set out to do something and succeeded, it’s all about intention and design. But in truth, “most of what people were looking for, they did not find. Most of what they found they were not looking for.” Penicillin was just some mold inhibiting the growth of another lab culture; lasers at first had no application but were thought to be useful as a form of radar; the Internet was conceived as a military network; and despite massive National Cancer Institute-funded cancer research, the most potent treatment—chemotherapy—was discovered as a side-effect of mustard gas in warfare (people who were exposed to it had very low white blood cell counts). Look at today’s biggest medical moneymakers: Viagra was devised to treat heart disease and high blood pressure. It’s interesting to think about this in career or relationship terms, realms full of complex human variables. Taleb points out that tons of books and gurus are based on asking the successful to explain how they got there. Typically, big winners in both business and love say it took good ideas and lots of hard work. But these are just stories people generate out of the need to explain, because many big losers also had good ideas and worked their butts off, with the opposite result. This is so commonsensical that it ought to be obvious, but as Taleb says, we suffer so badly from the “confirmation error” (looking for information to confirm a foregone conclusion or belief system), we are thrilled to the point of stupidity when someone publishes a book or otherwise propounds an idea that confirms our hunches. Guilty as charged. 
There’s no denying that Taleb confirms some of my pet observations. For example, isn’t the following entry from his glossary delicious?
Empty-suit problem (or “expert problem”): Some professionals have no differential abilities from the rest of the population, but for some reason, and against their empirical records, are believed to be experts: clinical psychologists, academic economists, risk “experts,” statisticians, political analysts, financial “experts,” military analysts, CEOs, et cetera. They dress up their expertise in beautiful language, jargon, mathematics, and often wear expensive suits.
Many such experts have made their reputations by giving retrospective explanations for events, often delivered in the type of neat theoretical package that satisfies the desire for a story confirming our beliefs or reinforcing our sense of security. But being able to make up a good story after the fact is meaningless; the only thing that can count as true understanding, that can truly test a theory, is accurate prediction, and there we have fallen far short of success. Mostly, I think, our theorizing is useless at best, dangerous at worst. How many times have you had the following experience? You become aware of two different physical symptoms at the same time, say a headache and a rash. In the privacy of your own head, you develop a hypothesis that links them: the same naughty bacterium is nibbling at your brain and your epidermis. Then you go to the doctor and are surprised (and relieved) to learn the rash is poison oak and the headache too much wine.
Just so in the news. Because we love things to make sense, because we are always on the lookout for correlations (and willing to settle for cleverly packaged coincidences in their place), any study that seems to satisfy these desires gets column-inches, but there’s no room to print the fact that a dozen other teams of researchers looked at the same phenomena without turning up a meaningful correlation. For every scientific experiment or medical study that produces a startling (if short-lived) conclusion that feeds our desire for orderly sense, there are countless studies that generate inconclusive or negative results.
Lately, I have been thinking about a problem that Taleb alludes to in even the bit of his thinking I’ve already read and heard. We are surrounded by a gargantuan news- and information-generating apparatus. Its appetite is enormous, as it must poop out vast quantities of airtime and newsprint every day. Consequently, we have story after story about results that turn out to be irreproducible (every week bringing its news flash of soon-to-be supplanted miracle cures and diets, for instance) and a glut of theories as to why the economy works (or doesn’t), how to reduce crime, how to improve education, and so on. We broadcast a whole flock of black swans every evening—uncanny accidents, rare occurrences, terrible risks gone wrong—which normalizes them in our minds, so that our estimation of their likelihood is amazingly skewed. From what I can see, this glut is making us less and less able to cope. Taleb’s sense of our problem is that we do not know how much we don’t know. He has a challenging task in drawing useful advice out of uncertainty. I’m looking forward to reading his books, but in the meantime, I am entertaining a few ideas his work seems to suggest:
Since we can’t control unpredictable events, we should accept uncertainty and seek to maximize our exposure to serendipity, as by putting ourselves in the way of new ideas. Since there is such danger in accepting conclusions based on too little information simply because they confirm our beliefs, we should try to remain aware in the present of what we are doing, paying attention to what actually happens and refraining as far as possible from imposing theories on our experience. We should recognize our poor record as a species in predicting the future, that we are much better at doing than knowing. Some things are more predictable than others: we are safe enough in expecting tomorrow’s sunrise to plan on breakfast. We can start noticing which situations are most susceptible to black swans, and when we encounter them, remember how little we truly know so our ignorance doesn’t lead us around by the nose.
I hear my old crushes—Paul Goodman, Paulo Freire, Isaiah Berlin—grumbling a little at having to move their armchairs back to make room for this upstart. But really, he fits right in. Goodman wrote eloquently about the slavishness expressed in our devotion to experts; Berlin rejected most theorizing about human beings as “grotesque,” and Freire made us understand the disabling effects of allowing certain ideas about ourselves to dominate our minds. In truth, they seem very happy to meet Nassim Taleb, another uncolonized mind. And so am I.
I thought Triana's book wasn't that good, but the foreword by Taleb is worth reading. The title to me is ingenious - LECTURING BIRDS HOW TO FLY kinda sums up a lot of things, especially if you feel central planning is a failure and you could live your life just fine without a central gov't somewhere forcing you to wear a seatbelt or buy a license to fish... but you are free, right? Taleb has fun being condescending. Take a look at his point of view and see if it helps your trading.
HISTORY WRITTEN BY THE LOSERS
FOREWORD TO PABLO TRIANA’S LECTURING BIRDS
HOW TO FLY
NASSIM NICHOLAS TALEB
January 2009: I am at the World Economic Forum in Davos, looking at the sorry crowd of businessmen, journalists, and bankers. There are also a few finance academics. Many practitioners look like they have just fallen off a bicycle, still confused about how to behave. All these years, they had not realized that their models underestimated the risks of high-impact rare events, allowing the buildup of huge positions that are in the process of destroying free markets, capitalism, and finance. Instead of making probabilistic assessments about Black Swans, they should have insured some kind of robustness to them. I feel sorry for the crowd, as I am certain that most of these people will not be here next year—there is effectively a mechanism called evolution, harsh to humans. But the academics among them, equally wrong about the models (in fact, they were the ones feeding bankers with bad models), wrong about the world, wrong about the very notion of knowledge, wrong about everything, will be back next year—that I guarantee. Unless they are caught seducing graduate assistants, their jobs are safe. Nobody ever lost his tenure in social science for being wrong (the opposite may be true). There is no such thing as evolution in academic settings.
The biggest myth I’ve encountered in my life is as follows: that the road from practical know-how to theoretical knowledge is reversible—in other words, that theoretical knowledge can lead to practical applications, just as practical applications can lead to theoretical knowledge. After all, this is the reason we have schools, universities, professors, research centers, homework, exams, essays, dissertations, and the strange brand of individuals called “economists.” Yet the strange thing is that it is very hard to realize that knowledge cannot travel equally in both directions. It flows better from practice to theory—but to understand it you need nontheoretical knowledge. And people who have nontheoretical knowledge don’t think of these things. Indeed, if knowledge flowed equally in both directions, then theory without experience should be equivalent to experience without theory—which is not the case.
The myth may have all started in a Plato dialogue, Euthyphro, in which Socrates heckled a fellow who claimed to be pious but could not define piety. The flustered fellow, bullied by Socrates, never replied (according to Plato) that babies drink their mother’s milk without being able to define what drinking milk is, or love their mother without being able to explain what love or mother mean. This led to belief in the primacy and overblown importance of what can be called propositional knowledge—with so many side effects. Alas, it took me a long time to disbelieve in propositional knowledge. Not only do you need to be a practitioner to realize it, but you need to ignore cultural opinions and use the raw, plain, easily obtainable, and somewhat shockingly potent evidence. And if you consider the effect for a moment, you will realize that it is more consequential than you thought.
Let me explain how the problem started screaming at me, around 1998. I was then sitting in a Chicago restaurant with a finance academic, though a true, thoughtful gentleman. He was advising one of the local exchanges on new products and wanted my opinion on the introduction of knock-out options—which I had covered in some detail in my first book, Dynamic Hedging. He recognized that the demand for these options was great, but wondered “how traders could handle these exotics if they do not understand the Girsanov theorem.” The Girsanov theorem is about a change of probability measure, something mathematically complicated that was needed to derive a closed-form formula for the options—though in the well-behaved Gaussian world. But you don’t need it to understand anything about exotic options. For a minute I wondered if I was living on another planet or if the gentleman’s PhD led to his strange loss of common sense—or if people without practical sense usually manage to get the energy to acquire a PhD in financial economics. Nobody worries that a child ignorant of the various theorems of thermodynamics and incapable of solving an equation of motion would be unable to ride a bicycle. Yet why is it that we made the Euthyphro mistake with our understanding of quantitative products in the markets? Why should traders responding to supply and demand, little more, competing to make a buck, need the Girsanov theorem, any more than a trader of pistachios in the Souk of Damascus needs to solve general equilibrium equations to set the price of his product?
Then I realized that there has to be a problem with education—any form of formal education. I collected enough evidence that once you get a theory in your head, you can no longer understand how people can operate without it. And you look at practitioners, lecture them on how to do their business, and live under the illusion that they owe you their lives. Without your theories and your learning they will never go anywhere. All that can be tested. How? We can look at historical evidence. It is there, in front of our eyes, staring at us.
Let us take what is known as the Black-Scholes option pricing formula. Every person who had the misfortune of taking a finance class is under the illusion that the Black-Scholes-Merton formula is a gift from the three individuals who offered it to mankind and need to be rewarded for their great deed because we otherwise would not have the technology to understand these items. Without it we cannot price options. True?
Well, Espen Haug and I scratched the surface looking for the real evidence going back to the late nineteenth century. And we figured out that traders did much, much better pricing options before the option formulas were invented. The solid arbitrages were maintained (put-call parity, no negative butterfly, etc.). Traders, thanks to tinkering and evolutionary pressures, fumbled their way into a heuristic option pricing formula: Those who liked to short out-of-the-money options blew up in time; those who bought them survived. Traders knew what the heuristic “delta” was—about half for an at-the-money option, progressively less for an out-of-the-money option. Indeed, in our paper we interviewed veterans who confirmed that option traders in Chicago priced “off the butterfly,” with “no sheets” (i.e., no pricing formula). I myself was a pit trader in Chicago in the early 1990s and saw that prominent option traders priced options without formulas. Coincidentally, our paper introduced the metaphor: “lecturing birds how to fly.” Traders were robust to the Black Swans, these sudden events that are the scourge of option traders. In that respect, Black-Scholes-Merton was a dangerous regression. It was made only to accommodate the financial economics establishment and portfolio theory by showing how dynamic hedging removed the price of risk—a Platonic thought experiment that was beyond the unnecessary, as it proved toxic. The exact formula they used—narrowing down the distribution to the Gaussian—had been around in its exact form since Ed Thorp and in a different, no less realistic form since Louis Bachelier, which could accommodate any probability distribution. Various accounts of the history of financial theory ignored the point: It is not just that history is written by the winners; it is written by the losers—those losers with access to the printing press, namely finance professors. I noted while reading a book by Mark Rubinstein how he stuck the names of finance professors on products we practitioners had been trading and perfecting at least a decade earlier. History written by the losers? A prime example is how the historian managed to downplay his “portfolio insurance,” a method that failed miserably in the crash of 1987.
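The two heuristics Taleb names here can be checked numerically. The sketch below is my own illustration, not from the foreword: it uses the textbook Black-Scholes formulas purely as a reference point, with made-up parameter values. Put-call parity is a model-free arbitrage constraint (C - P = S - K*e^(-rT) regardless of any model), while the "delta about half at the money" rule of thumb falls straight out of the numbers:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_prices(S, K, r, T, sigma):
    """Black-Scholes European call/put prices and the call delta N(d1)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    put = K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
    return call, put, norm_cdf(d1)

# Illustrative parameters: at-the-money option, 3 months, 20% vol
S, K, r, T, sigma = 100.0, 100.0, 0.01, 0.25, 0.20
call, put, delta = bs_prices(S, K, r, T, sigma)

# Put-call parity holds independently of the pricing model
parity_gap = (call - put) - (S - K * math.exp(-r * T))
print(f"ATM call delta ~ {delta:.2f}, parity gap = {parity_gap:.2e}")
```

The parity gap comes out at floating-point noise, and the at-the-money delta lands near 0.5, which is the trader's heuristic Taleb describes as predating (and surviving without) the formula.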
History is truly written by losers with time on their hands and a protected academic position. In the greatest irony, the historical account of techné in derivatives pricing that Haug and I wrote was submitted in response to an invitation by an encyclopedia of quantitative finance. The editor of the historical section proceeded to rewrite our story to reverse its message and glorify the epistemé crowd.
I was a trader and risk manager for almost 20 years (before experiencing battle fatigue). There is no way my and my colleagues’ accumulated knowledge of market risks can be passed on to the next generation. Business schools block the transmission of our practical know-how and empirical tricks, and the knowledge dies with us. We learn from crisis to crisis that modern financial theory has the empirical and scientific validity of astrology (without the aesthetics); yet the lessons are forgotten and ignored in what is taught to 150,000 business school students worldwide.
Note that what academics tend to call “practitioners” are often PhD-laden academics who go to practice and fall prey to the Euthyphro problem. This is why Pablo Triana was capable of writing this book: Like a minority of people (Espen Haug, myself), he did not go from theory to practice, but did the reverse.
There is another problem with current researchers in financial economics: They are self-serving—perhaps no more, but certainly no less than other professions. Just as one of the problems with governments is that government officials have an objective function that may deviate from that of the general public, it is a myth, a great myth, that academics are there for the truth. When you hear a tobacco company talk about “health,” you smirk—yet when you hear a finance professor talk about “evidence” and “risk,” you don’t. Alas, academics claim to look for evidence. But they seem to select what evidence they need for their purpose. I have shown that value at risk (VaR) does not work, with mathematical and empirical evidence (20 million pieces of data). But the evidence was ignored. In at least one instance, it was derided. Mandelbrot was completely ignored and his work was hidden from us. Had I shown that it worked, or had other academics produced evidence that fit their point, it would have been called “evidence” and published.
Traditionally charlatans have hidden themselves behind garb, institutions, and language. Now add fancy mathematics. Robert Merton’s book Continuous Time Finance contains 339 mentions of the word theorem (or equivalent). An average physics book of the same length has 25 such mentions. Yet, while economic models, it has been shown, work hardly better than random guesses or the intuition of cab drivers, physics can predict a wide range of phenomena with tenth-of-a-decimal precision. They make you believe that their detractors are quacks by going ad hominem and skirting the arguments altogether. For a strange reason, I saw more solid critical thinking on the part of practitioners than academics. One common argument I’ve heard trying to extinguish my criticism of VaR in The Black Swan: “This is a popular book,” implying that its arguments lack rigor. Now if all arguments lacking in rigor are popular, it does not follow that all popular arguments are lacking in rigor. You rarely find people outside academia making such a mistake. The cost of modelization is the loss of open-mindedness, but in some areas—say, engineering—this can be tolerated owing to the low-error quality of the models and their tracking ability.
My point is not that current academics are bad, but that there is a tendency by nonpractitioners to idealize Platonism and fall prey to the Euthyphro problem, failing to recognize that knowledge in society aggregates through action (a point made by Hayek but that did not sink in for the economics profession). While Pablo Triana is perhaps the very first person I’ve met who got the point, I highly disagree with his endorsement of the sterile critiques by nonpractitioners such as Derman and others, as their conscience-clearing halfwayness causes more harm than good. I hold that anything that does not start with the basis that techné (know-how) is superior to epistémé (know-what), especially in complex systems, is highly suspicious.
One warning before concluding. You are often told, “This is just a model; it is just an aid; you do not need to use it.” I was often told that value at risk was just one piece of information among plenty—so these people providing it could cause no harm. True?
Do not put cigarettes in front of an addict—even if you give him a warning. I hold that information is not neutral. Never give a (fallible) human sterile information. He will not ignore it. These models led to an increase of risk in society, period. The providers are responsible.
What should we do?
Do not waste time trying to convince academics. They will tell you, “Give me a better model or shut up,” not realizing that giving someone no map is much, much better than giving him a wrong map. Academia does not like negative advice (what not to do). Just put them to shame. Ignore them. Put them down. Discredit business schools. Ask for cuts in funding. We can no longer afford to compromise. Do what some friends have done: resign from the various associations, such as the International Association of Financial Engineers and the CFA Institute. These institutions were promoting wrong models and will not repent naturally, no more than the tobacco industry would fight smoking in public places. You need to shame members, humiliate them. Make fun of these charlatans.
I thank Pablo Triana for his wonderful lucidity, courage, and dedication in the service of the truth. This is the very first book that looks at the side effects of models, at the harm caused by models, and fearlessly points fingers where fingers should be pointed. I am convinced that the reader will come out of reading it much wiser, and that the publication of this book will make society a better, safer, and more risk-conscious place.
—NASSIM NICHOLAS TALEB
Thanks for that websouth, the intro to 'Lecturing Birds' was interesting.
I must admit I am guilty of what Taleb mentions: having a negative bias against The Black Swan without having read it, solely because it is a popular book. Based on what I read here I might have to give it a read.
I definitely agree with him on the idiocy of the academic finance community at large. I read a large number of academic publications and many of the academic finance monographs literally make me laugh out loud because they are so disconnected from reality.
I've had Taleb's books on my reading list for a long time. I finally started "Fooled by Randomness" and I'm a third of the way through. It's fantastic. I love it. I'm discovering that I had no idea what I was doing, that all my gains were luck, and my losses due to overconfidence. Quite shocking really.