
A breach in the Curse of Dimensionality

I am quite excited to announce a work devoted to the curse of dimensionality in the context of financial mathematics. The work is currently being validated by peers, and I will publish it as soon as it is.

This work proposes a framework for valuing complex OTC products written on a large number of underlyings, including American exercise. The motivation is to provide numerical algorithms able to quickly value large investment-bank portfolios under sophisticated pricing models, such as those coming from risk measures, and in particular CVA (counterparty credit valuation), which lead to high-dimensional optimal stopping problems.

Indeed, it has already been noticed that current pricing algorithms face an open mathematical problem, the curse of dimensionality (see for instance Crepey, Labordere, Reghai), particularly in the context of CVA computations. We also notice that the econometric problem, consisting in finding a risk-neutral dynamic for the underlyings, faces the very same open problem. The CVA approach, like other existing risk measures, however virtuous, cannot be credible as long as it remains “cursed”.

The framework introduced has two purposes: first, to provide a fast calibration algorithm that determines a risk-neutral econometry from riskless market prices, able to capture correlation structures between underlyings; second, to design fast backwardation algorithms on top of this econometry. Both algorithms break the curse-of-dimensionality wall. This is achieved by thoroughly exploiting a particular setting, that of martingale processes, a classical assumption in mathematical finance. From a mathematical point of view, these results blend stochastic analysis, partial differential equations, and optimal transport theory, and generalize this paper to the multi-dimensional case.

My perception is that these tools are the right candidates to provide a standardized framework for computing risk measures on large portfolios or investment strategies, as well as a foundation for credible systemic risk measures. Moreover, this approach seems very efficient from an algorithmic point of view: the resulting algorithms should drastically reduce the computational burden of existing measures, such as CVA or risk-neutral front-office valuations.

The curse of dimensionality was identified as an open mathematical problem roughly sixty years ago. It is of crucial importance in mathematical finance, where the dimension is the number of underlyings of a given portfolio, usually counting thousands of instruments: shares, rates, commodities, default probabilities and so on. Among the authors who tackled this problem in a financial context, let us cite for instance C. Schwab’s PDE approach, or the stochastic approach of V. Bally, G. Pages and J. Printems. These methods work in “low” dimensions: their algorithmic burden is usually considered too heavy beyond, say, ten underlyings.

For partial differential equations (PDEs), the curse of dimensionality is quite easy to illustrate: most numerical methods rely on Cartesian meshes consisting of N^D points, N being the number of points needed to resolve one dimension and D the number of dimensions. Crude methods, such as the alternating direction implicit (ADI) scheme, use N^{D+1} operations to compute a solution converging at rate N^{-2}, in the best case, toward the solution of the continuous equation. Hence the overall computational burden to reach an accuracy of order \epsilon, for small \epsilon>0, is \epsilon^{-\frac{D+1}{2}} in the best case, a clear complexity limitation for higher-dimensional problems, i.e. when D becomes large, as it is in finance.
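
As a back-of-the-envelope illustration of this blow-up, here is a hypothetical Python sketch (the values of N, D and \epsilon are made up; the operation counts are just the ones quoted above):

def mesh_operations(N, D):
    """Operation count of a Cartesian-mesh ADI-type scheme: N^(D+1)."""
    return N ** (D + 1)

def mesh_work_for_accuracy(eps, D):
    """Work needed for accuracy eps when the scheme converges at rate N^(-2)."""
    N = eps ** (-0.5)          # N^(-2) = eps  =>  N = eps^(-1/2)
    return N ** (D + 1)        # = eps^(-(D+1)/2)

# With eps = 1e-4 (so N = 100 points per dimension):
for D in (1, 3, 10):
    print(D, f"{mesh_work_for_accuracy(1e-4, D):.1e}")
# roughly 1e4, 1e8, 1e22 operations: the cost explodes with the dimension D.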

We designed algorithms based on N points lying in {\mathbb{R}}^D. They show a convergence rate of order N^{-1/2} in the worst case, independently of the dimension D. For convex solutions, this rate can be as good as N^{-2} without American exercise, and N^{-1} with it, again dimension-independent. Reaching this order of convergence requires N^3 operations. Hence the overall computational burden to reach an accuracy of order \epsilon is \epsilon^{-3/2} in the best case, \epsilon^{-3} with American exercise, and \epsilon^{-6} in the worst case. A low number of discretization points should be enough to reach the confidence limits of state-of-the-art finance modeling.
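
For comparison, the same accounting with the dimension-independent complexities quoted in this paragraph, again only a sketch built from the stated exponents:

def mesh_free_work_for_accuracy(eps, regime="best"):
    """Work for accuracy eps: eps^(-3/2) (convex, no American exercise),
    eps^(-3) (with American exercise), eps^(-6) (worst case), independent of D."""
    exponent = {"best": 1.5, "american": 3.0, "worst": 6.0}[regime]
    return eps ** (-exponent)

for regime in ("best", "american", "worst"):
    print(regime, f"{mesh_free_work_for_accuracy(1e-4, regime):.1e}")
# roughly 1e6, 1e12, 1e24 operations, whatever the number of underlyings D.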

 

Data Heterogeneity with XQUERY++ and BaseX …

… is the title of a talk that I will give tomorrow (14/02/14) at XML Prague, a conference on markup languages and data on the web. My slides can be found here: XMLPrague0214.

Big Data and Fast Development Techniques

During fall 2013, as part of my small company CRIMERE, I looked into the movement known as “Big Data”. Like many movements before it (see for instance “Semantic Web” or “cloud”), it seems quite difficult to find a clear definition of what “Big Data” really is beyond a pure marketing concept. This movement, like others before it, reinvents the wheel by presenting it from a different angle. However, it clearly has high potential; I recommend for instance this paper (in French), which deals with the “new” professions likely to emerge from these technologies.

In particular, this movement has two distinct facets that I find very interesting, even if the underlying problems are not new. The first facet of “Big Data” is the processing of large volumes of data and the distribution of computation, related to cloud computing. The second, which interests me much more in my professional activity, is the heterogeneity of data and information systems: how to make sense of large volumes of data described by complex models, and above all how to transform them into other data models better suited to a particular treatment, which will itself generate further data, and so on. These problems currently generate huge costs for companies, which fund teams of business analysts and developers whose task is precisely to perform these data transformations.

While studying a few technical aspects of that second facet (data manipulation), an iconoclastic idea came to me: explore the potential of these new technologies in the context of my professional activity. So I did the following exercise: develop a “professional” sensitivity-based VaR (Value at Risk) engine. This application is a classic one, and has kept entire dedicated teams busy in the risk departments of investment banks for decades.

Here is the result: it took me three weeks to develop this application, knowing that I am neither a professional developer nor a VaR specialist. Its characteristics are:

- Input: 30,000 trades, described in FpML format version 5.5, covering all kinds of derivatives.
- Transformation into an internal VaR data model.
- Computation of the sensitivity-based VaR.

The complete application cycle (data integration of 1 GB, data transformation, VaR computation) takes less than five minutes on my PC. The “business” part of the application consists of 400 lines of code.
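
For the curious, here is a toy Python sketch of what such a pipeline can look like. Every name, the simplified FpML-like tags and the delta-normal aggregation are hypothetical illustrations of the three steps listed above, not the code of the application itself:

import math
import xml.etree.ElementTree as ET

def load_sensitivities(trade_file):
    """Extract (risk_factor, delta) pairs from a simplified, hypothetical FpML-like file."""
    tree = ET.parse(trade_file)
    for node in tree.iter("sensitivity"):            # hypothetical tag name
        yield node.get("riskFactor"), float(node.get("delta"))

def aggregate(files):
    """Sum the deltas of all trades per risk factor (the 'transformation' step)."""
    book = {}
    for f in files:
        for factor, delta in load_sensitivities(f):
            book[factor] = book.get(factor, 0.0) + delta
    return book

def sensitivity_var(book, vols, corr, quantile=2.326):
    """Delta-normal VaR at 99%: quantile * sqrt(d' C d), with d_i = delta_i * vol_i."""
    factors = sorted(book)
    d = [book[f] * vols[f] for f in factors]
    variance = sum(d[i] * d[j] * corr[(factors[i], factors[j])]
                   for i in range(len(d)) for j in range(len(d)))
    return quantile * math.sqrt(variance)

Here vols holds the per-factor volatilities over the VaR horizon, and corr is assumed to contain a correlation for every ordered pair of risk factors, with corr[(f, f)] = 1.0.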

Obviously, there is some intelligence in the design of this application. However, the recipe is completely generic and can be applied across the entire software industry.

What I am arguing is that the technologies proposed by this movement allow, if used smartly, a gain of an order of magnitude in productivity, and hence in costs, across the whole software industry. I am speaking of the costs generated throughout the life cycle of complex applications using large volumes of data: data transformation, development, maintenance, documentation, harmonization and standardization of information systems.

For instance, if you want to start a project involving the development of a complex application using large data volumes, do not hesitate to ask me for a quote. It will allow you to compare it, for information purposes of course, with the quotes offered by your internal teams or external providers.


Electronic surveillance and the UDHDR

At a time when the media are accumulating articles on the dangers of electronic surveillance, and when a collective of writers calls for the creation of an “international bill of digital rights”, I recall that in January 2009 we created and registered the statutes of an association under French law, MNEMOSINE. It is a non-profit organization whose preamble rests precisely on a Universal Declaration of Human Digital Rights (UDHDR), a declaration that can also be found in this post. This post explains the methodology used to draft it.

At the time, I contacted UNESCO to find out whether this type of initiative was likely to interest the agency, had a conversation with a member of the Senate on the issue, and gave a presentation as a guest of FING, among other things.

I also recall that there are ways to protect personal data, whether against breaches of privacy (the right to be forgotten) or, conversely, against the ravages of time (the right to digital memory). This blog contains a number of thoughts on the subject (search for MNEMOSINE on this site, for example), and this post sketches some technological thinking about it.

To my knowledge, the French innovation ecosystem is not currently able to develop this type of project, although identifiable financial benefits can be hoped for in the medium term. The real problem will be to find the political will to help fund it.


Finance, Navier-Stokes, and calibration: a mathematical presentation

In this post I am putting online a talk given on June 6 at the LPMA (Jussieu): PresLPMA

It is a relatively technical talk, presenting in mathematical form my previous posts on the links between fluid mechanics and price formation, as well as a high-dimensional calibration method.

Finance, Navier-Stokes, and calibration

In this post I am putting online a talk given on April 3, which will also be given on June 6: PresFinance_NS_calibration

It is a relatively technical talk, presenting in a more mathematical way my previous posts on the links between fluid mechanics and price formation, as well as a high-dimensional calibration method.

Explicit Entropy and Conservative Solutions to Non-Linear Conservation Laws

This is a quite technical post, dedicated to techniques for computing explicit solutions to non-linear conservation laws. In my last post on the links between finance and fluid mechanics, I used some results that are, to my knowledge, new, and I would like to introduce them here. Note that I am only stating them, without proof. If you are interested in using these results, take care: the proofs have never been checked, so it is better to contact me. I would also like to point out that I am deeply indebted to P.G. LeFloch for invaluable discussions concerning these results.

These techniques allow computing “explicitly” some solutions (entropy and conservative ones) of non-linear conservation laws using a geometric framework given by optimal transport theory. More precisely, the result states that the characteristic method can be used to build explicit solutions to non-linear conservation laws for all positive times.

Let us go into the details: consider the following non-linear conservation law, expressed as a Cauchy problem

\partial_t u + \nabla \cdot f(u) = 0, \quad u(0,\cdot) = u_0(\cdot)

where u(t,x) is the unknown function of space and time, with t \ge 0 and x\in {\mathbb{R}}^d, d\ge 1, u_0 is the initial condition, the non-linearity is a map f(u) = (f_i(x,u))_{1 \leq i \leq d} with values in {\mathbb{R}}^d, and \nabla \cdot f(u) denotes the divergence operator applied to it.

Such equations arise in various areas of physics. In particular, we used them in the previous post relating the Navier-Stokes equations to financial mathematics. Conservation laws admit infinitely many weak solutions. For instance, entropy solutions u \in L^\infty( {\mathbb{R}}^+,L^\infty({\mathbb{R}}^d) \cap L^1({\mathbb{R}}^d)) were constructed by Kruzkov and are characterized by entropy inequalities: for any convex entropy-entropy flux pair U(u), F(u), they satisfy the entropy dissipation relations

 \partial_t U(u) + \nabla \cdot F(u) \le 0

This relation has to be understood in the sense of measures. These entropy inequalities are usually used to single out “physical” solutions, which we refer to in this post as entropy solutions. Other, “non-physical” (non-interacting) solutions to these equations exist. For instance, the following relation, to be understood in the distributional sense, defines a solution that we call a “conservative solution” of the non-linear conservation law.

 \partial_t U(u) + \nabla \cdot F(u) = 0 .
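
As a concrete one-dimensional illustration (my own example, not part of the statement): for the Burgers flux f(u) = u^2/2, a convex entropy-entropy flux pair is

 U(u) = \frac{u^2}{2}, \qquad F(u) = \frac{u^3}{3}, \qquad \text{since } F'(u) = u^2 = U'(u)\, f'(u),

and smooth solutions satisfy this relation with equality, while admissible shocks dissipate the entropy U.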

Now suppose, without loss of generality, that the initial condition is smooth, u_0 \in C^{\infty}({\mathbb{R}}^d), with support supp u_0 = {\mathbb{R}}^d. Consider the map S_0 : \Lambda \to {\mathbb{R}}^d, where \Lambda := [0,1]^d is the unit cube, transporting the uniform measure onto the total variation measure of u_0. More precisely, S_0 := \nabla h_0, with h_0 \in C^2(\Lambda) smooth and strictly convex, satisfying \big(\int_{{\mathbb{R}}^d} |\nabla u_0|_1\big)\, (S_0)_\# m = |\nabla u_0|_1, where (S_0)_\# m is the push-forward of the Lebesgue measure m on \Lambda, and |\nabla u_0|_1 = \sum_i |\partial_i u_0| is the total variation density of u_0.
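
In one dimension this construction is explicit and easy to sketch. In the hypothetical Python example below (a Gaussian bump u_0, the real line truncated to [-5,5]), S_0 is simply the quantile function of the normalised total-variation measure, which is non-decreasing and hence the gradient of a convex potential h_0:

import numpy as np

x = np.linspace(-5.0, 5.0, 4001)                     # hypothetical truncation of R
u0 = np.exp(-x**2)                                   # hypothetical smooth initial datum
w = np.abs(np.gradient(u0, x)) + 1e-12               # total-variation density |u_0'| (tiny regularisation)
cdf = np.cumsum(w)
cdf /= cdf[-1]                                       # normalised CDF of the measure

a = np.linspace(0.0, 1.0, 2001)                      # Lagrangian grid on Lambda = [0, 1]
S0 = np.interp(a, cdf, x)                            # S_0 = quantile function: (S_0)# m = |u_0'| / mass
v0 = np.interp(S0, x, u0)                            # v_0 = u_0 o S_0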

Consider now the map t \mapsto S(t,\cdot), explicitly given by the following formula

 S(t,\cdot) := S_0 + tf'(v_0),\quad v_0 \in C^{2}(\Lambda,{\mathbb{R}}),

where v_0:=u_0\circ S_0. For small times S(t,\cdot) remains a smooth invertible map, and we can define a scalar function u(t,\cdot) through the relation u \circ S = v_0. This u is a smooth solution of the conservation law for small times; this is the characteristic method, which works up to the time at which S(t,\cdot) ceases to be invertible, i.e. the first time for which \det \nabla S(t,\cdot) \ge 0 fails somewhere in \Lambda.
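
Here is a minimal numerical sketch of the characteristic construction, specialised, as an assumption of mine, to the one-dimensional Burgers flux f(u) = u^2/2 with S_0 the identity on \Lambda = [0,1], so that f'(v_0) = v_0 and the first folding time is -1/\min v_0':

import numpy as np

a = np.linspace(0.0, 1.0, 2001)            # Lagrangian grid on Lambda = [0, 1]
v0 = np.sin(2 * np.pi * a)                 # hypothetical smooth datum v_0 = u_0 o S_0

def S(t):
    return a + t * v0                      # S(t, .) = S_0 + t f'(v_0), with f'(u) = u

# The map stays invertible while dS/da = 1 + t v_0' > 0 everywhere,
# i.e. up to t* = -1 / min v_0' (infinite if v_0 is non-decreasing).
dv0 = np.gradient(v0, a)
t_star = np.inf if dv0.min() >= 0 else -1.0 / dv0.min()

# Before t*, the classical solution is read off from u(t, S(t,a)) = v_0(a):
t = 0.5 * t_star
x, u = S(t), v0                            # (x, u) is the graph of u(t, .) at time t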

However, solutions of this conservation law can still be defined beyond this time, using the characteristic field S(t,\cdot).

The first one is given by the rearrangement theorem of Yann Brenier, also called the polar factorization of maps, which we recall here. Let \Lambda, \Omega be two convex sets and S \in L^2(\Lambda,\Omega) any surjective map. Then the following factorization holds and is unique:

 S = \left( \nabla h \right) \circ T, \quad h \in W^{1,2}(\Lambda)\text{ convex,}\quad T \in A\left(\Lambda\right)

where W^{1,2}(\Lambda) is the standard Sobolev space and A\left(\Lambda\right) = \big\{T : \Lambda \to \Lambda \,:\, T_\# m = m \big\} is the space of Lebesgue-measure-preserving maps. Note that fast algorithms can be designed to compute this factorization. The first solution, the conservative solution of the non-linear conservation law, is then given explicitly by the formula

 u(t,\cdot) = v_0 \circ T^{-1} \circ (\nabla h)^{-1}

It defines, for all positive times, a bounded solution u \in L^\infty( {\mathbb{R}}^+,L^\infty({\mathbb{R}}^d)) of the non-linear conservation law, which is neither an entropy solution nor of bounded variation.
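
Continuing the same hypothetical one-dimensional Burgers example, the polar factorization and the conservative solution reduce to a sort: in one dimension the gradient of a convex potential is the monotone rearrangement of S, and T is a measure-preserving permutation of the grid. A sketch:

import numpy as np

a = np.linspace(0.0, 1.0, 2001)
v0 = np.sin(2 * np.pi * a)
t = 1.0                                    # a time well past the first folding

S_vals = a + t * v0                        # S(t,.) = S_0 + t f'(v_0), with S_0 = identity
order = np.argsort(S_vals)                 # permutation sorting the values of S
grad_h = S_vals[order]                     # monotone rearrangement = gradient of a convex h
T = np.empty_like(order)
T[order] = np.arange(len(a))               # T: grid index -> rank (measure preserving)

assert np.allclose(S_vals, grad_h[T])      # check of the factorization S = (grad h) o T

# Conservative solution u(t,.) = v_0 o T^{-1} o (grad h)^{-1}, sampled at the points grad_h[k]:
x_cons = grad_h
u_cons = v0[order]                         # since T^{-1}(k) = order[k]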

The second one is the entropy solution. It is given by a variant of the rearrangement theorem, which we call the entropic rearrangement of maps. Let \Lambda, \Omega be two convex sets and S \in L^2(\Lambda,\Omega) any surjective map. Then the following factorization holds and is unique:

 S^+ = \left( \nabla h^+ \right) \circ T^+, \quad h^+ \in W^{1,2}(\Lambda)\text{ convex,}\quad T^+ \in A\left(\Lambda\right) .

In the one-dimensional case d=1, or if f'(v_0) = \nabla \varphi_0 for some potential \varphi_0, S^+ is simply the convex hull of S. Then the solution

 u^+(t,\cdot) = v_0 \circ (T^+)^{-1} \circ (\nabla h^+)^{-1}

defines an entropy solution of the conservation law, which is of bounded variation.
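
A last sketch, in the same hypothetical one-dimensional Burgers setting, reading “the convex hull of S” as the derivative of the lower convex envelope of the primitive H(a) = \int_0^a S(t,b)\,db; in one dimension this places the shocks according to the usual equal-area rule:

import numpy as np

a = np.linspace(0.0, 1.0, 2001)
da = a[1] - a[0]
v0 = np.sin(2 * np.pi * a)
t = 1.0

S_vals = a + t * v0
# Primitive H(a) = int_0^a S(t,b) db (trapezoidal rule), then its lower convex envelope.
H = np.concatenate(([0.0], np.cumsum(0.5 * (S_vals[1:] + S_vals[:-1]) * da)))

def lower_convex_envelope(x, y):
    """Indices of the lower convex hull of the points (x_i, y_i), x increasing."""
    hull = []
    for i in range(len(x)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop i1 if it lies on or above the chord joining i0 to i
            if (y[i1] - y[i0]) * (x[i] - x[i0]) >= (y[i] - y[i0]) * (x[i1] - x[i0]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.array(hull)

idx = lower_convex_envelope(a, H)
S_plus = np.gradient(np.interp(a, a[idx], H[idx]), a)   # S^+: monotone, flat at the shocks

# Entropy solution u^+(t,.) = v_0 o (S^+)^{-1}: on the flat parts of S^+ (the shocks),
# several values of a land at the same x, tracing the jump of the solution there.
x_ent, u_ent = S_plus, v0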

 

Fluid mechanics models for finance

This short note presents a simple bridge between the stochastic equation theory classically used in finance and a fluid mechanics point of view. We present a quite simple market model, based on standard stochastic modeling, that leads to Navier-Stokes equations for the price dynamics. This link might be found interesting, because it opens the path to interpreting crisis phenomena, or market instability, as the formation and propagation of shock waves in a medium (as, for instance, acoustic shock waves). A more rigorous (that is, mathematical) presentation can be found here (in French): PresFinance_NS_calibration

This post starts by presenting some evidence of non-linear dynamical effects in markets. This evidence appears even within the widespread stochastic analysis used by practitioners in mathematical finance. Namely, a classical assumption in finance is that market prices follow a general Brownian motion whose variance is a non-linear function known as the “local volatility”. Since Dupire’s work, finance practitioners compute this local volatility surface by retrieving it from observed derivative market prices. This leads to an inverse problem, known as the calibration problem. See for instance my last post on this topic. The following picture presents a local volatility surface taken from J. Frédéric Bonnans, Jean-Marc Cognet and Sophie Volle, Rapport de recherche INRIA N° 4648 (2002).

Figure: local volatility surface

 

Moreover, it has been known since Kolmogorov that the density of such a Brownian motion equivalently satisfies a Fokker-Planck equation, which has a convection part but also a diffusion term, both determined entirely by this local volatility. The first argument for non-linear effects in markets concerns what is called the “smile”. For finance practitioners, the “smile” denotes the convex part of the local volatility. Local volatility surfaces computed from market observations systematically present a convex part. However, a convex local volatility implies that the Fokker-Planck equation admits a compressible regime, usually a sign of non-linear dynamics.
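
To fix notation, and under the simplifying assumption (mine, not the post’s) of a driftless local-volatility diffusion dX_t = \sigma(t,X_t)\,dW_t, the density p(t,x) of prices solves the Fokker-Planck equation

 \partial_t p = \tfrac{1}{2}\, \partial_{xx}\!\left( \sigma^2(t,x)\, p \right),

whose expansion contains both a first-order convection term and a diffusion term, driven entirely by the local volatility \sigma.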

The second argument for non-linear effects in markets is that the calibration problem is highly unstable. Indeed, no reliable method is known to compute the local volatility surface. For instance, the surface shown above exhibits negative values of the plotted quantity, which is a squared volatility, i.e. a variance. The authors attributed these negative values to numerical artifacts. However, we note that negative values of the squared volatility in Fokker-Planck equations can be handled, and are the sign of an accretive regime.
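
For reference, the Dupire formula behind this inverse problem, written here under the simplifying assumption of zero rates and dividends: from observed call prices C(T,K),

 \sigma^2(T,K) = \frac{2\,\partial_T C(T,K)}{K^2\,\partial_{KK} C(T,K)},

so the instability is not surprising: the denominator \partial_{KK} C is small and noisy away from the money, and any error in the interpolated price surface is amplified.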

The third argument for non-linear effects in markets is given by the following picture, presenting the cumulative limit order book of a market, taken from Randi Naes and Johannes A. Skjeltrop, Order Book Characteristics and the Volume-Volatility Relation: Empirical Evidence from a Limit Order Market (2005).

 

Figure: cumulative limit order book

Such profiles are strikingly invariant across daily observations. They are typical of profiles arising in fluid dynamics, more precisely in Burgers-type equations.

Using these observations, we are able to build a very simple market model based on the Navier-Stokes equations. More precisely, it describes the dynamics of an irrotational, pressure-free flow that governs the time evolution of bid and ask orders, i.e. the limit order book. Interestingly enough, we can also compute the solution of this Navier-Stokes system explicitly: the distribution of prices, as well as the limit order book, converges as time goes to infinity toward a Dirac mass. This system thus describes the dynamics of an infinitely liquid market relaxing back to equilibrium.

Note that the Navier-Stokes equations contain other terms that are meaningful in finance. For instance, the viscosity tensor is the analogue of liquidity phenomena. External forces in fluids (such as pressure or gravitational fields) are also most likely significant as external sources of uncertainty in finance.

Finally, there is also some evidence that Brownian motions, which are at the very heart of mathematical finance, are not adapted to describing market prices in severe regimes. For instance, the correlation between stock prices and credit is quite revealing. The rate at which any financial institution can borrow money is given by a standard “risk-free” rate plus a spread. This spread reflects the default probability of the firm over time, which is a direct function of the borrowing rate. In simpler words, any firm or state has a non-zero probability of going bankrupt. Equivalently, there is a non-zero probability that the future price of any bond or share is worth zero. However, the standard Brownian motions used in finance cannot capture such accretive regimes. We show in this note that Navier-Stokes equations can capture these phenomena; this last study is a quite direct application of the techniques developed in our previous post.

 

 
