Debating the Computerization of Financial Markets: A Reply to Gasparin, Schinckus, & Green

Franck Jovanovic

Image credit: Alper Çuğun via Flickr / Creative Commons

Article Citation:

Jovanovic, Franck. 2019. “Debating the Computerization of Financial Markets: A Reply to Gasparin, Schinckus, & Green.” Social Epistemology Review and Reply Collective 8 (9): 14-18. https://wp.me/p1Bfg0-4of.

The PDF of the article gives specific page numbers.

This article replies to:

Gasparin, Marta, Christophe Schinckus, & William Green. 2019. “Thinking Outside the Box to Get Inside the Black Box: Alternative Epistemology for Dealing with Financial Innovation.” Social Epistemology 33 (3): 218-231.

Gasparin, Schinckus, & Green (2019) discuss a stimulating idea regarding financial flash crashes: regulators based their decisions on a framework inherited from mainstream financial economics, although its models and theories have not demonstrated their capability to forecast and prevent such flash crashes.[1] We can add that even promising alternative models and theoretical approaches, like econophysics (Johansen, Ledoit, & Sornette 2000; Jovanovic & Schinckus 2017; Sornette & Cauwels 2015), cannot offer a radical change such as the one the authors call for. Consequently, regulators misunderstand the problem and their response (the introduction of “circuit breakers”) is not effective. The epistemological analysis defended in the article opens interesting avenues and debates for improving the regulation of financial markets to prevent such crashes.

Epistemology and Practicality

First of all, it is interesting to notice how the epistemological argument followed by the authors has practical implications. This is a stimulating aspect of the paper. While articles in financial economics dealing with financial crashes mainly focus on quantitative data and statistical tests (almost exclusively based on the normal distribution), and while some of their limits are increasingly pointed out (Ausloos, Jovanovic, & Schinckus 2016), the authors’ analysis gives the opportunity to take a step back from the current models by studying them with a mixture of theoretical arguments and epistemological analyses. In this perspective the authors definitely contribute to opening the black box.

The main contribution of the article is developed on pages 224-230. Part of the demonstration rests on the argument that the second movement of computerization of financial markets, which has seen the development of trading algorithms, is rooted in the efficient market hypothesis, like the first movement, which saw the automation of recording and pricing activities. This argument allows the identification of computerization as a disruptive innovation and supports the authors’ recommendation to consider an epistemological approach based on design theory. However, this argument appears overreaching. Contrary to what the authors suggest, it seems difficult to claim that current trading algorithms are based on the efficient market hypothesis or on “an established, taken-for-granted, financial epistemology according to which, when all agents act rationally and seek profit, the market is more efficient” (226). While the idea is seductive, it is difficult to support. Let me clarify.

I agree with the authors that in the first period of the computerization of financial markets, regulators aimed to organize financial markets according to the efficient market hypothesis (Muniesa 2003; Pardo-Guerra 2012); however, I disagree with the argument that in the second period (the current one) trading algorithms follow the same path. Trading algorithms used by traders are motivated by a quest for profits, but it has not been demonstrated that such a goal leads to market efficiency. Quite the contrary! The authors support their argument by referring to trading algorithms that identify arbitrage opportunities and take advantage of them before human beings can do so (224-226). This is a crucial argument in the demonstration. And for a good reason: on an efficient market all arbitrage opportunities have already been exploited. However, trading algorithms used for arbitrage trading are not very profitable and their impact on financial markets is relatively marginal. Often, trading algorithms, like those that created some flash crashes, are profitable because they 1) manipulate the market price, which contradicts the efficient market hypothesis, or 2) trade before other algorithms or other traders without considering whether they are acting on relevant “information”, which contradicts a key element of the efficient market hypothesis defended by Fama (1965; 1970). Trading with such algorithms is perfectly rational (in short, they allow profit maximization), but such trading does not take place in the context assumed by the efficient market hypothesis.
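To make the arbitrage argument concrete, here is a minimal sketch, written for this reply rather than taken from the article under discussion; the venue names, fees, and prices are hypothetical. It shows the kind of cross-venue check an arbitrage algorithm performs, and how thin the margin is once trading costs are included, which illustrates why pure arbitrage algorithms are not very profitable.

```python
# Illustrative sketch only (not from Gasparin, Schinckus, & Green):
# a toy cross-venue arbitrage check. Venue names, fees, and prices
# are hypothetical.
def arbitrage_profit(bid_venue_a: float, ask_venue_b: float,
                     size: int, fee_per_share: float = 0.001) -> float:
    """Net profit of buying `size` shares on venue B and selling on venue A.

    On a perfectly efficient market this quantity should be (near) zero,
    because any price gap is exploited as soon as it appears.
    """
    gross = (bid_venue_a - ask_venue_b) * size
    costs = 2 * fee_per_share * size  # one fee per leg of the trade
    return gross - costs

# A one-cent gap on 1,000 shares barely covers the trading fees:
print(arbitrage_profit(bid_venue_a=50.01, ask_venue_b=50.00, size=1_000))  # ~8.0
```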

Trading with Algorithms

Here are some examples. Consider high frequency trading, a cause of flash crashes, as the authors remind us. One of the strategies is micro-front-running regular orders placed by traders, given that high frequency trading algorithms have broader access to the market order book than regular traders. Such algorithms can buy/sell stocks from a trader and immediately sell/buy them back to the same trader: the possibility of placing orders more quickly and of having access to a wider view of the market order book allows them to influence the market price and thus to extract small profits from such “round trip” orders. The “and” is crucial: it is not just acting more quickly that ensures success here, but taking advantage of access to private information from the order book that is not accessible to other traders. In other terms, part of high frequency trading makes profits by taking advantage of asymmetric information that it purposely contributed to creating, which is similar to price manipulation. Rose (2011) detailed several price manipulation techniques used with high frequency trading during flash crashes.[2] Of course, such a profit strategy is not based on new information or an arbitrage opportunity (i.e. the detection of some anomalies, as the authors suggested on page 227) and it does not lead to a more efficient market (O’Hara 1995).
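The mechanics of such a “round trip” can be sketched in a few lines. The following is my own purely illustrative toy model, not a description of any actual trading system; the order size, tick size, and prices are hypothetical. The point is that the profit comes from privileged visibility of the order book, not from new information about the stock.

```python
# Purely illustrative toy model of a "round trip": a fast trader who sees
# a large incoming order before it executes steps in front of it, lets the
# order's price impact move the market by one tick, and trades back.
from dataclasses import dataclass

@dataclass
class Order:
    side: str     # "buy" or "sell"
    size: int     # number of shares
    price: float  # limit price

def round_trip_profit(incoming: Order, tick: float = 0.01) -> float:
    """Profit of a fast trader with privileged visibility of `incoming`.

    The profit comes from seeing the order book before others do,
    not from any new fundamental information about the stock.
    """
    if incoming.side != "buy":
        return 0.0  # the symmetric logic applies to incoming sell orders
    buy_price = incoming.price          # buy just ahead of the large order
    sell_price = incoming.price + tick  # its price impact lifts the market
    return (sell_price - buy_price) * incoming.size

# One cent on a 10,000-share order: small per trade, but repeatable
# thousands of times a day.
print(round_trip_profit(Order("buy", 10_000, 50.00)))  # ~100.0
```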

Another example is algorithms based on sentiment analysis of news articles (Xiao & Chen 2018; Zhang & Skiena 2010). Such algorithms are developed by the (extremely few) big global information providers, like Reuters or Bloomberg. They allow these companies to offer a new kind of information to traders, which can be called “filtered information”. Indeed, these algorithms are based on machine learning and filter the original public information (i.e. news) to produce new information (the interpretation of the original public information by an “artificial intelligence” program). This filtered information is then used as an input within trading algorithms.[3] By doing so, information providers transform public information into private information! Traders, including algorithmic trading programs, will not react to newly available public information but to filtered information that is privately accessible.
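To illustrate the “filtering” step, here is a deliberately simplified sketch of my own: the real providers use machine learning models, whereas this stand-in uses a tiny word lexicon, and every word list and threshold in it is hypothetical. The point is only to show how a public headline is turned into a private score that a trading rule consumes.

```python
# Hypothetical stand-in for a news "filtering" pipeline. Real systems use
# machine learning; this lexicon-based toy only illustrates the idea that
# traders react to the filtered score, not to the public headline itself.
POSITIVE = {"beats", "record", "growth", "upgrade"}
NEGATIVE = {"misses", "lawsuit", "recall", "downgrade"}

def sentiment_score(headline: str) -> float:
    """Map a public headline to a private score in [-1, 1]."""
    words = headline.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def trading_signal(headline: str, threshold: float = 0.5) -> str:
    """A trading rule that consumes the filtered score."""
    score = sentiment_score(headline)
    if score > threshold:
        return "buy"
    if score < -threshold:
        return "sell"
    return "hold"

print(trading_signal("Acme beats estimates and posts record growth"))  # buy
print(trading_signal("Acme faces recall after safety lawsuit"))        # sell
```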

These examples show that we are far from the efficient market hypothesis: the goal here is not to maximize profit in a competitive market by trading on newly available information, as expected under the efficient market hypothesis; on the contrary, the goal is to maximize profit by exploiting a monopoly, using private information, or manipulating prices. Moreover, as we can now see, the question is not whether “computers can act more rationally and quicker than human being” (226), because they are used in a different situation (like creating a monopoly in order to take advantage of it). Such examples show that the second movement (the automation of trading activities) is not a continuation of the first movement (the automation of recording/pricing activities).

The Question of Disruptive Innovation

Although the introduction of disruptive innovation to explain some current uses of computers in financial markets may be seductive, it is not supported by the previous examples. Therefore, can we still consider the computerization of financial markets a disruptive innovation as defined by the authors? Probably not. However, the epistemological direction suggested by the authors looks relevant. As they mention, “flash crashes are a counter-example of what financial innovation was supposed to offer, their emergence questions the limits of the unknown in finance, inviting scholars to refine and redesign the implementation of their knowledge” (227-228). It is worth adding, however, that this implementation takes place in a framework designed by regulation.

As the article points out, from an epistemological point of view, regulators or advisers accept the confusion between deductive/theoretical reasoning (i.e. perfectly rational agents make the market efficient) and its technical implementation in financial reality (227). In my opinion, the reason is that the two movements of the computerization of financial markets discussed by the authors (the organization and regulation of financial markets on the one hand, and trading algorithms on the other) do not follow the same pattern and did not occur in the same context, even if the same rhetoric (market efficiency) is used for both.

Trading algorithms represent a radical evolution in financial markets that cannot be considered a continuation of what happened in the first period. More specifically, Pardo-Guerra (2012) clearly explained that the computerization of financial markets has followed different paths due to a tension between regulatory concerns and the pursuit of private profits through competition. At the beginning, contrary to the U.K., France and the United States introduced automation out of regulatory concerns, which were based on the efficient market hypothesis. In this perspective, trading algorithms could have been embedded in the efficient market hypothesis framework. However, this has not been the case, because the regulatory concerns have been progressively weakened (or have been put at the service of a few), giving predominance to the pursuit of private profits, and it is well known that the best way to maximize profit is to create a monopoly or an oligopoly. Reuters’ and Bloomberg’s sentiment analysis algorithms give these two companies an oligopoly allowing large profits from providing their own private information, which comes from their interpretation of public information. In this perspective, I agree with the authors on the lack of consideration of the consequences of the computerization of financial markets. The authors show the risk of losing the original goal, which is “the desire of financial authorities to make the markets more efficient” (223), but their analysis seems to underestimate the brutal shift that took place in financial markets within the second movement.

The idea that the computerization of financial markets can make them more efficient is relevant when the automation of trading activities takes place in a market regulated to maintain, among other things, the fairness that characterizes the perfectly competitive market the efficient market hypothesis assumes. Unfortunately, trading algorithms developed in the context of deregulated (or at least weakly regulated) markets are a way to take advantage of a dominant market position in order to make profits; they do not make the market more efficient, nor do they even aim to do so.[4] In this specific context, innovation based on seeking profit does not create collective value, as we could expect in the context of the efficient market hypothesis. Worse, such monopolies exploit their position until the information becomes known, and they then do not have to deal with the consequences of their actions, which are so costly that society has to carry the burden (the last financial crises are telling examples)! Such a problem exists in financial crises, but also in the climate crisis, among other examples. As the authors mention in their article, we limit our view to the action, refusing to consider the reactions our actions provoke. This shows that the limitations of models and theories in economics and finance are underestimated.

Regardless of the limitations I have mentioned, the article opens a necessary and crucial debate on the computerization of financial markets. It appears that the automation of recording/pricing activities was introduced in order to support a competitive market in the context of largely regulated markets (one reason being to ensure that the market stays competitive and fair); by contrast, the automation of trading activities has taken place in deregulated markets, which has created oligopolistic or monopolistic positions, leaving far behind the collective benefit that can be expected from a well-regulated competitive market at the service of the majority of people.

Contact details: Franck Jovanovic, Université TÉLUQ, franck.jovanovic@teluq.ca

References

Ausloos, Marcel, Franck Jovanovic, & Christophe Schinckus. 2016. “On the ‘Usual’ Misunderstandings Between Econophysics and Finance: Some Clarifications on Modelling Approaches and Efficient Market Hypothesis.” International Review of Financial Analysis 47: 7-14.

Fama, Eugene F. 1965. “The Behavior of Stock-Market Prices.” The Journal of Business 38 (1): 34-105.

Fama, Eugene F. 1970. “Efficient Capital Markets: A Review of Theory and Empirical Work.” The Journal of Finance 25 (2): 383-417.

Gasparin, Marta, Christophe Schinckus, & William Green. 2019. “Thinking Outside the Box to Get Inside the Black Box: Alternative Epistemology for Dealing with Financial Innovation.” Social Epistemology 33 (3): 218-231.

Johansen, Anders, Olivier Ledoit, & Didier Sornette. 2000. “Crashes as Critical Points.” International Journal of Theoretical and Applied Finance 3 (2): 219-255.

Jovanovic, Franck & Christophe Schinckus. 2017. Econophysics and Financial Economics: An Emerging Dialogue. New York: Oxford University Press.

Muniesa, Fabian. 2003. Des Marchés Comme Algorithmes : Sociologie de la Cotation Électronique à la Bourse de Paris. Ph.D. dissertation, École Nationale Supérieure des Mines de Paris.

O’Hara, Maureen P. 1995. Market Microstructure Theory. Cambridge, MA: Blackwell Publishers.

Pardo-Guerra, Juan Pablo. 2012. “Financial Automation, Past, Present, and Future.” In The Oxford Handbook of the Sociology of Finance, edited by Karin Knorr Cetina and Alex Preda, 567-586. Oxford: Oxford University Press.

Rose, Chris. 2011. “The Flash Crash Of May 2010: Accident Or Market Manipulation?” Journal of Business & Economics Research 9 (1): 85-90.

Sornette, Didier & Peter Cauwels. 2015. “Financial Bubbles: Mechanisms and Diagnostics.” Review of Behavioral Economics 2 (3): 279-305.

Xiao, Catherine & Wanfeng Chen. 2018. “Trading the Twitter Sentiment with Reinforcement Learning.” arXiv:1801.02243v1 [cs.AI], 7 January.

Zhang, Wenbin & Steven Skiena. 2010. “Trading Strategies to Exploit Blog and News Sentiment.” In Proceedings of the Fourth International Conference on Weblogs and Social Media. Washington, DC.


[1] Note that Malkiel and Fama did not publish the 1970 article together; Malkiel chaired the session in which Fama presented his article. See Fama, Eugene F. 1970. “Efficient Capital Markets: A Review of Theory and Empirical Work.” The Journal of Finance 25 (2): 383-417.

[2] The 2010 Dow flash crash mentioned by the authors is a telling example of price manipulation. The trader, who pleaded guilty, used an illegal tactic known as “spoofing,” which consists in manipulating the market price by falsely building up the price and then quickly selling for a profit (https://www.justice.gov/opa/pr/futures-trader-pleads-guilty-illegally-manipulating-futures-market-connection-2010-flash).

[3] Competitions are also organized on the web for developing algorithms that use news to predict stock movements. See for instance Two Sigma’s challenge, which awarded $100,000 to the winner (https://www.kaggle.com/c/two-sigma-financial-news/overview). It is worth mentioning that Two Sigma is an important international hedge fund that uses a variety of technological methods, including artificial intelligence and machine learning, for its trading strategies.

[4] Deregulation does not necessarily mean that all regulations have been removed, but rather that they are extremely weak and/or at the service of extremely few people.


