Arrival price algorithms determine optimal trade schedules by balancing the market impact cost of rapid execution against the volatility risk of slow execution. In the standard formulation, mean-variance optimal strategies are static: they do not modify the execution speed in response to price motions observed during trading. We show that with a more realistic formulation of the mean-variance tradeoff, and even with no momentum or mean reversion in the price process, substantial improvements are possible for adaptive strategies that spend trading gains to reduce risk, by accelerating execution when the price moves in the trader's favor. The improvement is larger for large initial positions.
Standard models of algorithmic trading neglect the presence of a daily cycle. We construct a model in which the trader uses information from observations of price evolution during the day to continuously update his estimate of other traders' target sizes and directions. He uses this information to determine an optimal trade schedule to minimize total expected cost of trading, subject to sign constraints (never buy as part of a sell program). We argue that although these strategies are determined using very simple dynamic reasoning---at each moment they assume that current conditions will last until the end of trading---they are in fact the globally optimal strategies as would be determined by dynamic programming.
Extending the work "Adaptive Arrival Price" (appeared in Algorithmic Trading III, see above), we present a technique to determine fully optimal adaptive strategies by solving a series of convex constrained optimization problems. We show that the improvement over static trajectories is due to correlation between the trading gains/losses in each period and the market impact costs in the remainder: If the price moves in your favor in the early part of the trading, spend those gains on market impact costs by accelerating the remainder of the program. If the price moves against you, reduce future costs by trading more slowly.
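The adaptive rule described above ("spend trading gains on market impact by accelerating") can be illustrated with a toy Monte Carlo sketch. This is not the paper's model or its optimization procedure: the price process, quadratic impact cost, the scaling constant k, and all parameter values below are illustrative assumptions chosen only to show the mechanism of speeding up a sell program when the price has moved in the trader's favor.

```python
import random

def shortfall(adaptive, X=100_000, N=10, p0=100.0,
              sigma=0.5, eta=1e-6, k=50.0, rng=None):
    """One path of a toy sell program of X shares over N periods;
    returns implementation shortfall in dollars. The random walk,
    quadratic impact (eta * v^2), and gain-scaling constant k are
    illustrative assumptions, not the paper's calibrated model."""
    rng = rng or random.Random()
    remaining, p, cost = X, p0, 0.0
    for t in range(N):
        v = remaining / (N - t)                 # even split of what is left
        if adaptive:
            # accelerate when the price moved in our favor (up, for a sell),
            # slow down when it moved against us; speed capped in [0.5, 2]
            speed = min(2.0, max(0.5, 1.0 + k * (p - p0) / p0))
            v = min(remaining, v * speed)
        cost += v * (p0 - p) + eta * v * v      # opportunity cost + impact cost
        remaining -= v
        p += rng.gauss(0.0, sigma)              # arithmetic random walk step
    cost += remaining * (p0 - p) + eta * remaining**2   # liquidate any leftover
    return cost

# Compare static vs. adaptive schedules on the same random price paths.
paths = [(shortfall(False, rng=random.Random(i)),
          shortfall(True, rng=random.Random(i))) for i in range(2000)]
mean_static = sum(s for s, _ in paths) / len(paths)
mean_adapt = sum(a for _, a in paths) / len(paths)
```

With k = 0 the adaptive schedule degenerates to the static even split, which makes the gain-responsiveness of the rule easy to isolate in experiments.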
This article presents an order flow model framework for limit order driven markets. Unlike previous models, it explicitly models a reference price process that "sweeps" the limit order book as it fluctuates up and down. This framework allows any stochastic process to be used for the reference price, together with very general specifications of the limit order flow. The authors believe that this framework can fruitfully combine order flow models with well-studied models for stock price processes, and that it provides a step toward developing realistic yet tractable models for complex limit order driven markets. Public order data from SWX is used as an example to estimate the model parameters.
Competitive Search and Pricing of Lookback Options
In the k-search problem, a player is searching for the k highest (respectively, lowest) prices in a sequence, which is revealed to her sequentially. At each quotation, the player has to decide immediately whether to accept the price or not. Using the competitive ratio as a performance measure, we give optimal deterministic and randomized algorithms for both the maximization and minimization problems, and discover that the problems behave substantially differently in the worst case. As an application of our results, we use these algorithms to price "lookback options", a particular class of financial derivatives. We derive bounds for the price of these securities under a no-arbitrage assumption, and compare this to classical option pricing.
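For the special case k = 1 of max-search, the classical deterministic policy is a reservation price: with all quotations known to lie in an interval [m, M], accept the first price at or above sqrt(m*M), which achieves competitive ratio sqrt(M/m). The sketch below shows only this well-known k = 1 baseline; the paper's optimal algorithms for general k are not reproduced here.

```python
import math

def one_max_search(prices, m, M):
    """Reservation-price policy for 1-max-search: with all quotations
    known to lie in [m, M], accept the first price >= sqrt(m*M); if no
    quotation qualifies, the last one must be accepted. This classic
    deterministic policy has competitive ratio sqrt(M/m)."""
    reserve = math.sqrt(m * M)
    for i, price in enumerate(prices):
        if price >= reserve or i == len(prices) - 1:
            return price
```

For example, with m = 1 and M = 100 the reservation price is 10, so the sequence 5, 12, 90 is stopped at 12, while the sequence 5, 3, 2 forces acceptance of the final quotation 2.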
In the standard model of observational learning, n agents sequentially decide between two alternatives a or b, one of which is objectively superior. Their choice is based on a stochastic private signal and the decisions of others. Assuming rational behavior, it is known that informational cascades arise, which cause an overwhelming fraction of the population to make the same choice, either correct or false. If agents are able to observe the actions of all predecessors, false informational cascades are quite likely. In a more realistic setting, agents observe just a subset of their predecessors, modeled by a random network of acquaintanceships. We show that the probability of false informational cascades depends on the edge probability p of the underlying network. If p=p(n) is a sequence that decreases with n, correct cascades emerge almost surely (provided the decay of p is not too fast), benefiting the entire population.
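The sequential-choice setting can be sketched in a few lines of simulation. The agents below follow a simple majority heuristic over the predecessor actions they happen to observe (own signal breaks ties), which is a deliberate simplification of the fully Bayesian agents analyzed in the paper; the parameter values are likewise illustrative only.

```python
import random

def correct_fraction(n=500, p=0.1, q=0.7, seed=1):
    """Toy sequential-choice simulation on a G(n, p) acquaintance graph:
    agent i observes each predecessor's action independently with
    probability p, receives a private signal that is correct with
    probability q > 1/2, and follows the observed majority, with the
    private signal breaking ties. A heuristic stand-in for the Bayesian
    rule in the paper. Returns the fraction choosing correctly."""
    rng = random.Random(seed)
    actions = []                                  # 1 = correct alternative
    for i in range(n):
        signal = 1 if rng.random() < q else 0
        seen = [actions[j] for j in range(i) if rng.random() < p]
        margin = 2 * sum(seen) - len(seen)        # observed (#correct) - (#false)
        actions.append(1 if margin > 0 else 0 if margin < 0 else signal)
    return sum(actions) / n
```

Setting p = 0 removes all observation, so every agent simply follows her own signal and the correct fraction concentrates around q, which gives a convenient sanity check when experimenting with the edge probability.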
Wisdom of Crowds
Nicla Bernasconi, J. Lorenz, R. Spoehel: Hand Development in the von Neumann and Newman Poker Models. Discrete Mathematics 311(21), 2337-2345, 2011.
The von Neumann and Newman poker models are simplified two-person poker models with Uniform(0,1) hands. We analyze a simple extension of both models that introduces an element of uncertainty about the final strength of each player's own hand, as is present in real poker games. Whenever a showdown occurs, an unfair coin with fixed bias q is tossed. With probability 1-q, the higher hand value wins as usual, but with the remaining probability q, the lower hand wins. Both models favour the first player for q=0 and are fair for q=1/2. Our somewhat surprising result is that the first player's expected payoff increases with q as long as q is not too large. That is, the first player can exploit the additional uncertainty introduced by the coin toss and extract even more value from his opponent.