I am putting online in this post a talk given on April 3rd, which will also be given on June 6th: PresFinance_NS_calibration
It is a relatively technical talk, which presents in a more mathematical way my previous posts on the links between fluid mechanics and price formation, together with a calibration method in high dimensions.
This is a rather technical post, dedicated to techniques used to compute explicit solutions of non-linear conservation laws. In my last post on the links between finance and fluid mechanics, I used some results that are, to my knowledge, new, so I would like to introduce them here. Note that I am only stating them, without proof. If you are interested in using these results, be careful: the proofs have never been checked, so it is better to contact me first. I would also like to point out that I am deeply indebted to P.G. LeFloch for invaluable discussions concerning these results.
These techniques allow one to compute "explicitly" some solutions (entropy and conservative ones) of non-linear conservation laws, using a geometric framework provided by optimal transport theory. More precisely, the result states that the characteristic method can be used to build explicit solutions to non-linear conservation laws for all positive times.
Let us enter into the details: consider the following non-linear conservation law, expressed as a Cauchy problem

$$\partial_t u + \operatorname{div}_x F(u) = 0, \qquad u(0,\cdot) = u_0,$$

where $u = u(t,x)$ is the unknown function of space and time, with $t \ge 0$ and $x \in \mathbb{R}^d$, $u_0$ is the initial condition, the non-linearity $F$ is a map in $C^1(\mathbb{R}, \mathbb{R}^d)$, and $\operatorname{div}_x$ stands for the divergence operator.
Such equations arise in various areas of physics. In particular, we used them in the previous post relating the Navier-Stokes equations to financial mathematics. Conservation laws admit infinitely many weak solutions. For instance, entropy solutions were constructed by Kruzkov and can be characterized by entropy inequalities: for any pair of convex entropy / entropy flux $(S, \eta)$, with $\eta' = S' F'$, they satisfy the entropy inequalities, or entropy dissipation relations

$$\partial_t S(u) + \operatorname{div}_x \eta(u) \le 0.$$
This last relation has to be understood in the sense of measures. These entropy inequalities are usually used to single out "physical" solutions, and we refer to them in this post as entropy solutions. Other, "non-physical" (non-interacting) solutions to these equations exist. For instance, the following relation, which has to be understood in a distributional sense, defines a solution that we call a "conservative solution" of the non-linear conservation law.
Now suppose, without loss of generality, that the initial condition $u_0$ is smooth and compactly supported. Consider a map defined on the unit cube and targeting the bounded variation of $u_0$; more precisely, we mean a map of the form $\nabla h$, with $h$ smooth and strictly convex, such that the push-forward of the Lebesgue measure by $\nabla h$ is the bounded-variation density of $u_0$.
Consider now the characteristic map, given explicitly by the following formula. For small times it remains a smooth map, and we can define a scalar function through the corresponding relation. This defines a smooth solution of the conservation law for small times: this is called the characteristic method, and it works up to the time at which the characteristic flow ceases to be a (single-valued) map, i.e. the first time at which the defining condition fails somewhere in the unit cube.
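To make the characteristic method concrete, here is a minimal sketch in the simplest one-dimensional case, the Burgers equation $u_t + (u^2/2)_x = 0$ (an illustrative special case, not the general setting of this post). Characteristics are straight lines along which the solution is constant, and the construction is valid as long as the characteristic map stays monotone.

```python
import numpy as np

# Minimal sketch of the characteristic method for the 1D Burgers equation
# u_t + (u^2/2)_x = 0. Characteristics are straight lines X(t, y) = y + t*u0(y),
# and the solution is constant along them.

def u0(y):
    # smooth initial condition (illustrative choice)
    return np.exp(-y**2)

y = np.linspace(-5.0, 5.0, 2001)      # Lagrangian labels
u0y = u0(y)
t = 0.5                               # a time before the first shock
X = y + t * u0y                       # characteristic map y -> X(t, y)

# The map stays monotone (hence invertible) as long as 1 + t * u0'(y) > 0;
# here the first shock time is t* = -1 / min(u0') = sqrt(e/2) ~ 1.17.
assert np.all(np.diff(X) > 0), "characteristics have crossed: a shock formed"

# Recover u(t, x) on a regular grid by inverting the characteristic map.
x = np.linspace(-5.0, 5.0, 400)
u = np.interp(x, X, u0y)              # u(t, x) = u0(X(t, .)^{-1}(x))
```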
However, we can define solutions of this conservation law after this time, using the characteristic field.
The first one is given via the rearrangement theorem, also called the polar factorization of maps, due to Yann Brenier, which we recall here. Let $\Omega, \Lambda \subset \mathbb{R}^d$ be two convex sets, and $X : \Omega \to \Lambda$ any surjective map. Then the following factorization holds and is unique:

$$X = \nabla\varphi \circ s,$$

where $\varphi$ is a convex potential belonging to the standard Sobolev space $H^1(\Omega)$, and $s$ belongs to the space of Lebesgue-measure-preserving maps of $\Omega$. Note that fast algorithms can be designed to compute this factorization. The first solution, namely the conservative solution of the non-linear conservation law, is then given explicitly by the formula
It defines, for all positive times, a bounded solution of the non-linear conservation law that is neither an entropy solution nor of bounded variation.
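As an aside on the fast algorithms mentioned above, here is a hedged one-dimensional sketch of the polar factorization (my own illustration, not the algorithm alluded to in the text): on the unit interval, discretized into equal cells, the gradient-of-convex part is just the increasing rearrangement of the values of the map, and the measure-preserving part is the sorting permutation.

```python
import numpy as np

# 1D illustration of the polar factorization X = (grad phi) o s on the unit
# interval, discretized into n equal cells.

n = 1000
y = (np.arange(n) + 0.5) / n          # cell centers of the unit interval
X = np.cos(3 * np.pi * y)             # an arbitrary, non-monotone map

order = np.argsort(X)                 # permutation sorting the values of X
grad_phi = X[order]                   # increasing (monotone) rearrangement
s = np.empty(n, dtype=int)
s[order] = np.arange(n)               # s sends cell i to its rank

# check the discrete factorization X = grad_phi o s
assert np.allclose(X, grad_phi[s])
```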
The second one is the entropy solution. It is given by a variant of the rearrangement theorem, which we call the entropic rearrangement of maps. Let $\Omega, \Lambda \subset \mathbb{R}^d$ be two convex sets, and $X : \Omega \to \Lambda$ any surjective map. Then the following factorization holds and is unique
In the one-dimensional case $d = 1$ (or under a suitable additional condition), the potential involved is simply the convex hull of the one given by the polar factorization. Then the solution
defines an entropy solution of the conservation law, which is of bounded variation.
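As a complement, here is a hedged numerical sketch of the convex-hull operation that, as stated above, gives the entropic rearrangement in dimension one. The sampled potential below is an arbitrary illustrative choice; the routine computes its lower convex envelope with a monotone-chain hull.

```python
import numpy as np

def lower_convex_envelope(x, f):
    """Lower convex envelope of samples (x, f), with x strictly increasing."""
    hull = []                                     # indices of hull vertices
    for i in range(len(x)):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # drop b if it lies on or above the chord from a to i
            if (f[b] - f[a]) * (x[i] - x[a]) >= (f[i] - f[a]) * (x[b] - x[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(x, x[hull], f[hull])         # piecewise-linear envelope

x = np.linspace(-1.0, 1.0, 401)
f = np.sin(4 * x) + x**2                          # a non-convex "potential"
f_hull = lower_convex_envelope(x, f)
assert np.all(f_hull <= f + 1e-12)                # the envelope stays below f
```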
This short note presents a simple bridge between the stochastic-equation framework classically used in finance and a fluid-mechanics point of view. We present a rather simple market model, based on standard stochastic modelling, which leads naturally to Navier-Stokes equations for describing price dynamics. This link might be found interesting, because it opens the path to interpreting crisis phenomena, or market instability, as shock-wave formation and propagation in a medium (as, for instance, acoustic shock waves). A more rigorous (i.e. mathematical) presentation can be found here (in French): PresFinance_NS_calibration
This post starts by presenting some evidence of non-linear dynamic effects in markets. This evidence can be found even within the widespread stochastic analysis used by practitioners of mathematical finance. Namely, a classical assumption in finance is to suppose that market prices follow a general Brownian motion whose variance is a non-linear function, known as the "local volatility". Since Dupire's work, finance practitioners compute this local volatility surface, retrieving it from observed derivative market prices. This leads to an inverse problem, known as the calibration problem; see for instance my last post on this topic. The following picture presents a local volatility surface retrieved from J. Frédéric Bonnans, Jean-Marc Cognet and Sophie Volle, Rapport de recherche INRIA No. 4648 (2002).
[Figure: local volatility surface]
Moreover, it has been known since Kolmogorov that the density of such a Brownian motion equivalently follows a Fokker-Planck equation, which has a convection part but also a diffusion term, both determined entirely by this local volatility. The first argument for non-linear effects in markets concerns what is called the "smile". For finance practitioners, the "smile" denotes the convex part of the local volatility. Local volatility surfaces computed from market observations systematically present a convex part. However, a convex local volatility implies that the Fokker-Planck equation admits a compressible regime, usually a sign of non-linear dynamics.
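For concreteness, here is a hedged sketch of the calibration step mentioned above, using Dupire's formula in its simplest form (zero rates and dividends), $\sigma_{loc}^2(T,K) = 2\,\partial_T C / (K^2 \,\partial^2_{KK} C)$, applied by finite differences. The call-price surface below is synthetic (flat Black-Scholes volatility), so the recovered local volatility should come out approximately flat; real market data would of course be used in practice.

```python
import numpy as np
from math import erf, log, sqrt

S0, sigma = 100.0, 0.2

def bs_call(K, T):
    # Black-Scholes call price with zero rates and dividends
    N = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    d1 = (log(S0 / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    return S0 * N(d1) - K * N(d1 - sigma * sqrt(T))

K = np.linspace(70.0, 130.0, 61)                       # strike grid
T = np.linspace(0.25, 2.0, 36)                         # maturity grid
C = np.array([[bs_call(k, t) for k in K] for t in T])  # call price surface

dC_dT = np.gradient(C, T, axis=0)
d2C_dK2 = np.gradient(np.gradient(C, K, axis=1), K, axis=1)

# Dupire: sigma_loc(T,K)^2 = 2 * dC/dT / (K^2 * d2C/dK2)
local_var = 2.0 * dC_dT / (K[None, :] ** 2 * d2C_dK2)
local_vol = np.sqrt(np.clip(local_var, 0.0, None))
# interior values of local_vol should be close to the flat input value 0.2
```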
The second argument for non-linear effects in markets is that the calibration problem is highly unstable. Indeed, we do not know a reliable method to compute the local volatility surface. For instance, the local volatility surface presented above shows negative values for the plotted quantity, which in this picture is a squared volatility, i.e. a variance. The authors believed that these negative values were due to numerical artefacts. However, we note that negative volatility values in Fokker-Planck equations can be handled, and are a sign of an accretive regime.
The third argument for non-linear effects in markets is given by the following picture, presenting the cumulative limit order book of a market, taken from Randi Naes and Johannes A. Skjeltorp, Order Book Characteristics and the Volume-Volatility Relation: Empirical Evidence from a Limit Order Market (2005).
[Figure: cumulative limit order book]
Such profiles are strikingly invariant across daily observations. Indeed, they are typical profiles arising in fluid dynamics, more precisely in the Burgers equation.
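For illustration, the kind of profile alluded to here can be reproduced with a few lines of numerics: the sketch below (an illustrative choice of initial bump, not market data) integrates the inviscid Burgers equation with a Godunov scheme and produces the characteristic ramp-and-shock shape.

```python
import numpy as np

def godunov_flux(ul, ur):
    # exact Riemann flux for the Burgers flux f(u) = u^2 / 2
    if ul <= ur:
        if ul <= 0.0 <= ur:
            return 0.0
        return 0.5 * min(ul * ul, ur * ur)
    return 0.5 * max(ul * ul, ur * ur)

nx, L, T_final = 400, 10.0, 3.0
dx = L / nx
x = np.linspace(-L / 2 + dx / 2, L / 2 - dx / 2, nx)
u = np.exp(-x**2)                          # initial bump (illustrative choice)
t = 0.0
while t < T_final:
    dt = min(0.5 * dx / max(np.max(np.abs(u)), 1e-12), T_final - t)  # CFL step
    flux = np.array([godunov_flux(u[i], u[i + 1]) for i in range(nx - 1)])
    u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])   # finite-volume update
    t += dt
# u now shows a linear ramp terminated by a sharp shock front
```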
Using these observations, we are able to build a very simple market model based on the Navier-Stokes equations. More precisely, it describes the dynamics of an irrotational, pressure-free flow that governs the time evolution of bid and ask orders, i.e. the limit order book. Interestingly enough, we can also compute the solution of this Navier-Stokes system explicitly: the distribution of prices, as well as the limit order book, converges toward a Dirac mass as time goes to infinity. Indeed, this system describes the dynamics of an infinitely liquid market relaxing back to equilibrium.
Note that the Navier-Stokes equations involve other fields of interest that are meaningful in finance. For instance, the viscosity tensor is the analogue of liquidity phenomena. External forces in fluids (such as pressure or gravitational fields) are also most likely significant as external sources of uncertainty in finance.
Finally, there is also some evidence that Brownian motions, which lie at the very heart of mathematical finance, are not well suited to describing market prices in severe regimes. For instance, the correlation between stock prices and credit is quite revealing. Indeed, the rate at which any financial institution can borrow money is given by a standard "risk-free" rate plus a spread. This spread reflects the default probability of the firm over time, which is a direct function of the borrowing rate. In simpler words, any firm or state has a non-zero probability of going bankrupt. Equivalently, there is a non-zero probability that the future price of any bond or share is worth zero. However, the standard Brownian motions used in finance cannot capture such accretive regimes. We show in this note that the Navier-Stokes equations can capture these phenomena. Indeed, this last study is a rather direct application of the techniques developed in our previous post.
This post is under construction; I will complete it soon in the usual way (a popularization post, together with a technical paper for our mathematician friends). Here is its introduction.
In this post we present a simple market model, which allows the stochastic equations of finance to be interpreted as equations of fluid mechanics. This link is interesting, because it allows crisis phenomena or market instabilities to be interpreted as the propagation of hydrodynamic shock waves (e.g. acoustic shock waves).
However, the model presented raises a rather natural question: how can the formation of these "econometric shocks" be detected?
We give two elements of an answer to this question:
1) A first element of the answer lies in the study of the statistical properties of order books in listed markets.
2) The second element of the answer may lie in the problem of "calibrated econometrics" (cf. my previous post), i.e. the problem of the risk-neutral calibration of an econometry on market prices. Indeed, there are common cases in which this problem generates econometries exhibiting shocks.
February 8th, 2013
This post deals with methods whose applications are portfolio risk measurement, portfolio hedging, and market arbitrage, which are interrelated topics. For these topics it is important, even crucial, to define what is called in this post an "econometry", that is, a description of the dynamics of future underlying prices. In this post I try to describe some interesting results in plain words (at least I hope so!), while the following, more technical presentation addresses mathematician readers: Calibration_Tech_Pres.
Having worked on these topics for some years, I now think that it is quite realistic to generate a "calibrated econometry" from market prices. More precisely, this means describing a dynamics of future underlying prices that is:
- Historically calibrated, meaning that this econometry is close to one coming from statistical analysis of historical data.
- Market-data calibrated, meaning that this econometry exactly replicates the prices of risk-free instruments (e.g. listed or sufficiently liquid instruments) with European-style pay-offs (although calibrating on American-style derivatives, such as Bermudan swaptions or American options, might also be possible).
APPLICATIONS / BENEFITS
The expected applications concern the finance industry, of course. More precisely, they should primarily target CVA (Counterparty Value Adjustment) computations, which require the generation of such an econometry, but the range of applications might be wider:
- Risk measurement on large portfolios. Here we address risk departments trying to measure some risk on a subset of the bank's portfolio.
- Hedging of portfolios of small or large size. Here we address front-office desks, e.g. small desks wishing to hedge their own market risks, or specialized global hedging desks (e.g. CVA, Delta One).
- Market arbitrage. Here we address small front-office desks seeking intra-day arbitrage on listed, electronic markets.
For such applications, it is essential to be able to generate an econometry that exactly replicates the market prices of the instruments used for risk measurement (or for hedging / arbitraging). Indeed, this is a very effective way of knowing the market price of this risk (or the market price of the hedging / arbitrage strategy). Other benefits of this method are also expected, for the teams in charge of the valuation of financial products (quants):
- Reduced model risk, because portfolios are evaluated using underlyings calibrated on market prices.
- Reduced numerical errors, because the method optimizes the convergence of the Monte Carlo methods based on the econometry.
- Market-implied dynamics. This essential aspect is not achievable with a purely historical calibration.
- A sharp market-implied correlation structure, obtained by calibrating this econometry on correlation-based products, or on portfolios built from structured products.
EXISTING WORK / EVALUATION / PROJECT STRUCTURE
My guess is that such a project is a one-year project:
Existing literature. In the simpler, single-underlying framework, there is a wide literature; see for instance [I] and the references therein. However, to our knowledge, the existing methods suffer from stability issues. The paper [II], under construction, tries to address stability and performance issues in a multi-underlying framework.
Evaluation. This project is estimated at one man-year of research and development, from research to integration.
Structure. A proposal for structuring the project in three phases, with a stop & go decision at the end of each phase, is as follows: 0 -> 3 months: study of algorithms devoted to solving the approach described in [II]. 3 -> 6 months: prototype development. 6 -> 12 months: integration into a third-party environment.
[I] J.M. Mercier, Optimally Transported Schemes: Application to Mathematical Finance (2008).
[II] J.M. Mercier, “Calibrated Econometry.” In preparation.
January 19th, 2013, in optimally transported schemes | tags: arbitrage, market data, risk measure
This is a post about financial mathematics, more precisely about methods whose applications are the hedging of asset portfolios or market arbitrage, closely related subjects. For these subjects it is important, even essential, to have what is called an econometry, that is, a description of the dynamics of the underlyings in the future. In this post I reproduce a presentation that can be downloaded in PDF format here: OnePager (3). The following, more technical presentation, Calibration_Tech_Pres, in English, addresses mathematician readers.
After some study, it now seems to me quite realistic to generate what is called an "econometry" calibrated on market data. More precisely, this means describing a dynamics of the underlyings of a portfolio of forward financial products that:
- Is close to the one coming from a calibration on historical data.
- Exactly replicates the market prices of risk-free instruments (for example listed or sufficiently liquid products, such as European options; it also seems possible to calibrate on early-exercise products, such as Bermudan swaptions or American options).
APPLICATIONS / BENEFITS
Two types of financial applications are to be expected. They concern in particular the computation of the CVA (Counterparty Value Adjustment), which requires the generation of a diffused econometry, but it seems that other applications can be envisaged:
- Hedging of portfolios of arbitrary size. In this situation, it is necessary to generate a diffused econometry, which is used to:
- Evaluate a risk. Here we address risk departments.
- Hedge the risk of a portfolio as well as possible. Here we address front-office desks, for example a desk wishing to hedge certain market risks of its own activity itself, or a global hedging desk (e.g. CVA, Delta One).
- Market arbitrage. Here we address smaller front-office desks, which look for market or portfolio arbitrage by building risk-free portfolios via high-frequency trading.
For these applications, it is essential to be able to generate an econometry that exactly replicates the market prices of the instruments used to measure the risk (or to hedge it). Indeed, this is a very effective way of knowing the market price of this risk (or of its hedging strategy). Other benefits are also to be expected, for the teams in charge of the valuation of financial products (quants).
- Reduced model risk, through the evaluation of portfolios on underlyings calibrated on market prices.
- Reduced approximation errors, through the generation of scenarios optimizing the convergence of the Monte Carlo methods underlying a diffused econometry.
- Taking into account the risks anticipated by the markets. This essential aspect is not possible with a calibration on historical data.
- A fine capture of market correlations, through calibration on correlation products, or on portfolios built from structured products.
EXISTING WORK / EVALUATION / PROJECT STRUCTURE
As a first approach, this project can be evaluated as follows:
Existing work. In the simpler framework of a single underlying, a first study is available [i] and has been tested [ii]. An article [iii] completes this study and generalizes the single-underlying case to the multi-underlying one.
Evaluation. This project is estimated at one man-year of research and development, from fundamental research to a usable prototype.
Structure. A proposal for structuring the project in three phases, with a stop & go decision at the end of each phase, is as follows: 0 -> 3 months, study: writing of a research and algorithmic document based on article [iii]. 3 -> 6 months, development of a prototype. 6 -> 12 months, integration into a third-party environment.
[iii] J.M. Mercier, "Calibrated Econometry". In preparation.
In this post, I present very quickly, in the attached document, a robust, fast and efficient algorithm to extrapolate data from missing or poorly defined "matrix-like" data structures. This algorithm might be found useful in some situations, as a fast alternative to existing methods:
1- To quickly fit low-rank approximations in computer vision applications.
2- To quickly fit big rank matrices.
3- To quickly compute poorly defined correlation matrices, such as those observed from historical data sequences. In such cases, it can happen that the resulting matrix is not positive definite, because of poor or missing historical data, and cannot be used as a correlation matrix. However, some of the historical data are good enough to define a lower-rank correlation matrix, i.e. one that is strictly positive definite. In this case, the method allows a quick extrapolation from this existing lower-rank matrix. The extrapolated matrix should have some properties of interest: its eigenvalues are expected to lie within the range of the eigenvalues of the lower-rank matrix. (A small, generic sketch of this use case is given just after this list.)
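Since the attached document is not reproduced here, the following sketch only illustrates the correlation-matrix use case of point 3 with a generic, well-known recipe (eigenvalue clipping followed by diagonal rescaling); it is not the algorithm of the attached document.

```python
import numpy as np

def repair_correlation(C, floor=1e-8):
    # clip negative eigenvalues and restore a unit diagonal
    C = 0.5 * (C + C.T)                      # symmetrize
    w, V = np.linalg.eigh(C)
    w = np.maximum(w, floor)                 # clip negative eigenvalues
    C_psd = (V * w) @ V.T                    # V diag(w) V^T
    d = np.sqrt(np.diag(C_psd))
    return C_psd / np.outer(d, d)            # unit diagonal again

# Example: a 3x3 "correlation" matrix that is not positive semi-definite.
C_bad = np.array([[ 1.0, 0.9, -0.9],
                  [ 0.9, 1.0,  0.9],
                  [-0.9, 0.9,  1.0]])
C_ok = repair_correlation(C_bad)
assert np.all(np.linalg.eigvalsh(C_ok) >= -1e-10)
```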
I do not know of any existing literature on this very simple method, but some surely exists; do not hesitate to point me to a reference. Be careful, since I did not write out a clean proof, only a sketch. If someone is interested, I could try to write it out properly.
Note: this is a translation of the following post, written in French. Disclaimer: I am not a native English speaker. Should there be awful-sounding English formulations, do not hesitate to suggest better ones.
There is, in the financial-markets universe, an aversion to the use of open source. The red line not to be crossed is given by the following argument: the core business of a financial institution, its know-how, cannot and certainly must not be standardized.
Of course, the information system can be based on well-established open-source standards. For example, it is advisable, for reasons of cost and operational risk, to rely on solid technical libraries such as Boost. And to communicate with a peer, the use of standards for the description and exchange of financial products, such as FpML, is sometimes unavoidable.
In this article we examine the place that an open-source system computing the risk exposure of a financial institution to its counterparties might have: on the one hand, such computations are actually part of the core business of a financial institution; on the other hand, they depend mainly on its counterparties.
The idea defended in this post is that an open-source system obviously cannot replace an internal computation of counterparty risk. However, if one makes such a system modular enough, and couples it with simulation and publishing services, then such an initiative is likely to generate benefits for all the players in the economic system, namely investors, regulators and rating agencies.
The expected benefits are of course those inherent in the use of open source. However, in the case of counterparty risk computation, there is more specifically the idea that such an initiative could eventually make it possible to capture systemic risk. Indeed, without estimating systemic risk, is estimating the default probability of counterparties, which is the heart of the counterparty risk computation, really realistic? One might easily think that, from a historical point of view, the opposite has systematically been observed. We simply note in this post that today there is a very heated debate regarding the approaches used to compute counterparty risk, and this debate seems to point to the fact that we are underestimating systemic risk in the calculation of default probabilities (a link is made, for instance, in this post).
In this post, we first try to figure out what the benefits could be for each type of actor. We then examine the feasibility of such an initiative from a purely technical point of view, and conclude with a more general observation.
The actors who should be interested in this type of initiative are:
- First, the investors: market and/or investment banks, hedge funds, etc.
The benefits here are of course cost reductions for the development and maintenance of an internal counterparty risk computation system. In the same vein, benefiting from an open-source system would facilitate benchmarking and external audits of internal methods. Moreover, open source would allow less fortunate players to access internal methods of counterparty risk computation. This is financially attractive, because internal methods reduce the provisions for counterparty risk imposed by regulators. On the other hand, from the perspective of a market operator, access to an external service could facilitate the price reconciliation process. This applies to the so-called bilateral CVA as well as to the provision for counterparty risk, computed internally from profit-and-loss considerations.
But the best argument, in our view, is that it suffices for two or more investors to use the same counterparty risk calculator on a particular transaction or portfolio of assets to give them a huge competitive advantage. Indeed, in an information-sharing configuration, the default probability can be computed more accurately. In theory, were all the information about the financial system available, together with unlimited computing power, we should be able to write a mathematical model computing the exact default probability of each market player. In practice, the more players share information, the less it is necessary to resort to external data sources to feed the model, and the more relevant the computations should be.
From an operational point of view, an information-sharing configuration means better hedging against counterparty risk and more accurate risk provisioning. It also means the possibility of detecting arbitrage opportunities on credit markets and credit derivatives.
- Then, the regulator.
The benefits for the regulator are of course information transparency. There is also the observation that such an initiative would be consistent with a salutary standardization: there are currently as many methods of computing counterparty risk as there are financial institutions. We can also say that this would facilitate technology and regulatory transfer to closely related activities, such as leasing.
But again, the best argument is that the current regulatory methods imposed by Basel III are certainly heading in the right direction, but could be improved. Notably, one direction of improvement, the "next step", would be to take into account the systemic effects of the propagation of counterparty risk in the economy, which is not done today. Emerging debates currently suggest that regulatory approaches may soon come under fire from critics. Such criticism would be quite natural: wasn't systemic risk the very point for which regulators were given a mandate in 2008? However, evaluating systemic risk means, by definition, not only having enough information, but also a credible model and an information system capable of processing it. An information-sharing system capable of computing systemic risk is likely to respond effectively to this point.
- And finally the rating agencies.
The primary function of a rating agency is to provide an estimate of credit risk. This is an essential function in the financial industry: at what rate can we lend money without this information? To our knowledge, credit risk estimates are currently made by observing the fundamentals of an institution, as well as its history, but also by observing the markets for credit derivatives or shares.
However, historical examples of underestimating counterparty risk are quite numerous, the crisis of 2008 being one, which somewhat tarnishes the image of these agencies. From a higher point of view, this seems quite natural: the point is that nobody can claim to compute a default probability accurately, because nobody knows how to capture systemic risk, which is probably dominant in OTC markets. This should make the rating agencies very natural potential users of a systemic risk calculator.
- Technical Feasibility of a counterparty risk calculator.
Today, the recipe for designing an internal counterparty risk calculator is made from the following ingredients:
a) A financial-position feed module, which contains a description of the financial positions to evaluate.
b) A market-data supply block, which contains all the market data necessary for the econometric evaluation of the positions.
c) An econometric diffusion block, which generates the scenarios needed for the counterparty risk computation.
d) A pricing library block, which evaluates each position on each scenario produced by the diffusion block, under the "risk-neutral" assumption.
e) The main calculator, which aggregates the exposures, taking into account the default probability of each counterparty, whether it computes these probabilities itself or requests them from the market-data block.
For blocks a), b) and d), there are already open-source solutions on which we can rely. It remains to implement and interface blocks c), and especially e), which contains in particular the systemic risk calculator. Ultimately, these blocks can nowadays be deployed relatively easily into private or public clouds, in order to benefit from the storage and computing power required for services to third parties. This project is probably heavy to carry, but quite feasible from a technical point of view.
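As a purely hypothetical illustration of the modular split a)-e), here is a sketch of how the blocks could be stitched together; every name and signature below is invented for the example and refers to no existing open-source API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence

@dataclass
class Position:                         # a) description of a financial position
    counterparty: str
    payoff: Callable[[Dict[str, float]], float]

@dataclass
class MarketData:                       # b) market data feeding the evaluation
    curves: Dict[str, float]
    volatilities: Dict[str, float]

def diffuse(market: MarketData, n_scenarios: int) -> List[Dict[str, float]]:
    # c) econometric diffusion block: returns risk-neutral scenarios
    #    (left trivially flat here; a Monte Carlo engine would plug in)
    return [{"spot": 100.0} for _ in range(n_scenarios)]

def price(position: Position, scenario: Dict[str, float]) -> float:
    # d) pricing library block: value of one position on one scenario
    return position.payoff(scenario)

def expected_exposure(positions: Sequence[Position], market: MarketData,
                      default_prob: float, n_scenarios: int = 1000) -> float:
    # e) main calculator: average positive exposure weighted by default probability
    scenarios = diffuse(market, n_scenarios)
    exposure = sum(max(price(p, s), 0.0) for p in positions for s in scenarios)
    return default_prob * exposure / n_scenarios
```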
Developing an open-source solution for computing counterparty risk thus seems to have some advantages. Obviously, the organizational and financial feasibility of such a project remains to be studied. However, beyond the benefits listed above, we note that it would probably be the first time in history that we really try to model the underlying nature of financial systems. This would certainly help us understand the economic mechanisms that are at the heart of today's societies. Beyond this philosophical point, it might help provide better mechanisms to effectively mitigate the negative impacts of economic crises.
November 9th, 2012 | tags: Counterparty Value Adjustment, Debt Value Adjustment, Fund Value Adjustment, Wrong Way Risk
Note: this is a translation of the following post, written in French. Disclaimer: I am not a native English speaker. Should there be awful-sounding English formulations, do not hesitate to suggest better ones.
Evaluating counterparty risk, in the general sense of the risk that one party to a contract will not be able to honor it, is a virtuous approach in many ways. Its operational purposes are, in summary, to analyze its contribution to a contract in order to obtain a sharper valuation, and, ultimately, to provision for or to hedge against this risk.
In particular, the subprime crisis of 2008, which is considered a crisis of solvency and liquidity, can be seen as a consequence of underestimating this risk. This crisis has had a significant impact on the economy. Within the financial industry, it has led to a strengthening of banking regulation (Basel III), and banks are today investing heavily to better estimate and cover this risk. The projects focus on the so-called CVA (Credit Value Adjustment), or "issuer risk", a closely related topic.
However, one may wonder whether the approaches and measures used today are appropriate and effective for capturing this risk. Questions we are entitled to ask are:
- Do we have the information needed to evaluate counterparty risk?
- Do we have a good model to evaluate it, and what are we measuring today?
One might also wonder whether the approach we use today would be able to prevent or mitigate a situation similar to 2008. This essay defends the following thesis: the regulatory and institutional methods used to date are heading in the right direction. However, they still cannot claim to capture this risk completely. This is not because we lack the required information, but rather because we do not process it properly. Indeed, a (probably dominant) part of this risk is systemic in nature. To estimate it we must, by definition, know the complete state of the economy. What is missing, the "next step", is to gather this information and to design a model capable of processing it in order to compute counterparty risk in a global way.
To support this thesis, the essay proceeds as follows: we first recall the broad outlines of counterparty risk computation. We also recall the main discussions related to the methods currently used. We then propose the construction of a class of systemic models, quite simple to design and relying on existing practices. By analyzing these models, we try to shed some light on the current discussions related to counterparty risk. For instance, such models should be able to shed light on the fact that the CVA, as measured by financial institutions, is notoriously volatile. They also bring a perspective that seems natural enough to the debates over FVA (Fund Value Adjustment) or WWR (Wrong Way Risk); see below for a quick overview of these notions.
In this essay, we adopt a deliberately simplistic style, to avoid heavy scientific notation and make the reading accessible to as many people as possible. Obviously, market practices are much more complex, and this could be developed more rigorously from a scientific point of view. Our goal here is to convey and test some ideas, hoping to help or motivate research toward a more credible modelling of systemic banking risk.
The basic principles for evaluating counterparty risk: a local measure.
Today, regulation requires each player to measure its counterparty risk "locally", by which we mean that each player measures this risk with knowledge only of its own commitments to its counterparties. The main principles of this risk evaluation can be summarized as follows: a counterparty A evaluates a contract with a counterparty B by breaking its value into two parts.
1- One part uses the probability that B will not default, combining this probability with the "risk-neutral" value of the contract, i.e. its value without any risk.
2- The second part uses the probability that B will default, and combines this probability with the value of the contract under that assumption.
The sum of the two parts gives A's valuation of a contract with B. The CVA is the difference between the value of the contract without risk and this sum; this difference is the measure of counterparty risk for a particular contract. (A toy numerical illustration of this decomposition is given after point 2 below.) This general principle is applied to all contracts, and of course to all the counterparties that this market participant faces. By summing each contribution, one deduces the counterparty exposures of the financial institution. In this method, some inputs are exogenous. For example, financial markets use the following sources of information:
1- The probability that a counterparty will default is derived from two sources: rating agencies and derivatives markets (insurance products such as CDS and CCDS, or the options market). To simplify, the first data source is preferred when attempting to provision for "regulatory risk", e.g. under Basel III. The second is preferred when seeking to hedge against this risk.
2- The term "risk-neutral" refers to an abstraction of financial mathematics. This branch of applied mathematics relies on the existence of a change of measure, which amounts to a borrowing/lending rate, called risk-neutral. Operationally, this rate is taken from the market, given by interbank rates (e.g. LIBOR, EURIBOR, etc.).
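To fix ideas, here is a toy numerical illustration of the two-part decomposition described above; all figures (notional, hazard rate, recovery) are invented, and discounting and exposure profiles are ignored.

```python
from math import exp

V, T = 1_000_000.0, 5.0           # risk-free contract value and maturity (assumed)
h, R = 0.02, 0.40                 # hazard rate of B and recovery rate (assumed)

p_survive = exp(-h * T)           # part 1: probability that B does not default
p_default = 1.0 - p_survive       # part 2: probability that B defaults before T

adjusted_value = p_survive * V + p_default * (R * V)
cva = V - adjusted_value          # CVA = risk-free value minus the adjusted sum
# here cva == (1 - R) * p_default * V, about 57,000
```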
Main “hot” discussions about counterparty risk
In this section, we assume for simplicity that the contract ends at the very moment when one of the two counterparties defaults. In such a case, the value of the contract to counterparty A is simply Part 1 described above.
Concerning the counterparty risk exposure computed in the previous section, a first objection can be formulated: counterparty B does not obtain a contract valuation opposite to that of A. Indeed, B computes a risk-neutral value of the contract that is the opposite of A's (this computation is said to be antisymmetric). However, the default probability of A is not equal to that of B. For the two counterparties to reconcile the price (to avoid arbitrage), the full valuation must be antisymmetric, which requires taking into account the default probability of counterparty A as well.
The second objection is that the financing of such a contract is borne by the counterparty whose balance is negative at a given time. However, this funding has a cost: the market lends money at a rate that reflects the borrower's default probability. This should be reflected in the balance of the contract. Taking these observations into account, we can write a full valuation of a transaction between two counterparties A and B, which reflects the market practices that players follow today:
1- For the first part, which corresponds to the probability that counterparties A and B do not default, multiplying the value of the contract without risk, we can identify three components:
a) CVA (Counterparty Value Adjustment), which depends on the probability that B does not default. We have already identified this part as the main component involved in the computation of counterparty A's counterparty risk.
b) DVA (Debt Value Adjustment), which depends on the probability that A does not default. This component is currently under heavy discussion.
c) WWR (Wrong Way Risk, even though the definition of WWR is more general), a hidden piece. In the previous analysis, this piece depends on the probability that the two counterparties default together (or that the bankruptcy of one causes the other's). This piece is also under heavy discussion, and will be addressed below.
2- FVA (Fund Value Adjustment), which corresponds to the funding (treasury) costs of this contract. If the contract balance is negative at a future date for a given counterparty, then this counterparty must borrow the money and repay it at a later date, at a rate determined by a "risk-neutral" rate plus a spread given by its default probability. This piece raises today a very heated debate between pro-FVA (roughly, market operators) and anti-FVA (roughly, mathematicians and risk managers).
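As a toy illustration of the funding-cost idea behind the FVA, the few lines below compare repaying a borrowed amount at a risky rate versus at the risk-free rate; all numbers are invented, not market data.

```python
from math import exp

N = 1_000_000.0    # amount borrowed because the contract balance is negative
dt = 1.0           # funding period, in years
r = 0.02           # "risk-neutral" rate
s = 0.015          # spread reflecting the borrower's default probability

repaid_risky = N * exp((r + s) * dt)            # what is actually repaid
repaid_risk_free = N * exp(r * dt)              # repaid at the risk-free rate
funding_cost = repaid_risky - repaid_risk_free  # FVA-type adjustment, ~15,400 here
```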
A very simple systemic model
[Figure: Brazilian financial network (Cont, Moussa & Santos, 2009)]
It is possible to develop a relatively simple dynamic model from the previous valuation. We can do this by using a quite fascinating idea, developed to our knowledge by Prof. Hull and White: the default probability of a counterparty is a direct function of the negative part of its balance sheet. By picking such a function, we can write down a large dynamical system:
1- The nodes of the system are the counterparties; the links between nodes are the quantities (CVA, DVA).
2- The unknowns are the default probability functions of each counterparty.
The characteristics of such systems, which constitute a class of systemic models (each model consisting in the choice of a particular function), are as follows:
1- They are deterministic: they compute the exact default probability of each individual counterparty.
2- It seems that such systems can be written independently of the risk-neutral measure.
3- From a mathematical point of view, they look like diffusive systems. One main property is that the default probability of a counterparty has a tendency to spread to other connected market participants.
Let us illustrate how this system behaves for a very simple economy (two counterparties A and B, a single cash flow) and a well-chosen but very naive function. The two counterparties establish the following contract: at the initial time t = 0, A lends K units of a fictitious currency to counterparty B over a period of t units of time, and B lends this money on at the risk-free rate. This model predicts that the default probability of A, from the initial time 0 up to time t (the default probability of B being null), is given by the formula 1 - exp(-Kt). This gives the rate at which A will borrow money. To balance this, A must request a reimbursement at time t of K * exp(Kt) units of currency. Hence, to be consistent, the model also provides a value for the risk-free contract: K * exp(-Kt). This result does not seem completely absurd.
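The following few lines simply evaluate the three quantities quoted above for concrete (arbitrary) values of K and t, to make the toy example tangible.

```python
from math import exp

K, t = 0.5, 2.0                        # lent amount and horizon (illustrative values)

p_default_A = 1.0 - exp(-K * t)        # default probability of A over [0, t], ~ 0.63
repayment = K * exp(K * t)             # reimbursement A requests at time t,   ~ 1.36
risk_free_value = K * exp(-K * t)      # model value of the risk-free contract, ~ 0.18
```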
However, we note that this model departs from "classical" financial mathematics: it seems to assert that
1- The risk-free measure is not exogenous: it is given by a measure of the default probability of the system.
2- This is due to a strong assumption: we assumed a strict dependency between the default probability of a counterparty and the negative balance of its operations, following the very same idea of Hull and White.
Contribution of systemic models to the current discussions.
This systemic approach sheds a somewhat different light on the current discussions.
The first point concerns the volatility of CVA computations. As we have already observed, market participants measure their counterparty risk "locally". This is quite natural, because each actor only knows its own commitments to its counterparties. But in doing so, a measurement error necessarily occurs. It is possible to make this error precise: when trying to approximate a dynamical system by a local measure, there is a time "horizon" during which such an approximation is valid. In the systemic model presented above, this approximation is valid for a time inversely proportional to the value of a contract without risk multiplied by the default probability of the counterparty. It is the horizon beyond which the other actors in the economic system are likely to have a non-negligible influence on the measurement. After this "horizon", the approximation is biased and may be unstable. In this light, provisioning or hedging counterparty risk exposures at long maturities calls for caution.
The second contribution is the connection between CVA, DVA, WWR and FVA. There is a first point to make: if counterparty risk were modelled perfectly, as already noticed by Prof. Hull and White (see footnote 4), funding market operations should be transparent. But it seems that this is not what traders and market operators observe today. We should be modest: we necessarily commit errors when measuring counterparty risk. There are at least errors due to pricing models, errors due to information systems, and so on. Beyond these aspects, we know from this note that we cannot estimate the systemic risk. Moreover, another source of error identified in the literature is the WWR, the "Wrong Way Risk". But what exactly is Wrong Way Risk? It depends on whom you are speaking to: it may be a correlation between the credit market and the equity market, or a negative correlation between your exposure to a counterparty and its credit quality. Indeed, we should acknowledge that there is no precise definition of what "WWR" is. Moreover, we should also acknowledge that, if the definition of Hull and White were chosen, there is no doubt that the WWR is systemic in nature and cannot be computed. Indeed, this is quite probably the substantive criticism made of the Hull and White method when discussing its calibration problem.
In this context, I propose an alternative definition of WWR: it is the error made when estimating counterparty risk using any formula, as for instance CVA + DVA + WWR. Why such a definition? Because I know how the market estimates it: using the FVA!
A final observation is that financial mathematics now seems to rest on rather fragile foundations. It is quite possible that introducing counterparty risk into financial systems will change this discipline. On the one hand, the risk-neutral measure is still considered today as a gift from the gods, in the sense that it is estimated directly from the market. But recent debates (see LIBOR vs. OIS) show that it is increasingly difficult to consider it as such. It is reasonable to assume that this "risk-neutral" measure should be closely linked to a measure of default probability in the economic system. On the other hand, another pillar of financial mathematics is the notion of an arbitrage-free market. But, in order to arbitrage a market, one must first borrow the amount needed, which is done at a risky rate. Here again, a notion of default probability measure seems to have considerable influence.
Current approaches to counterparty risk, whether from regulators or from an economic point of view, raise many questions today. It seems clear that these methods could be improved. Among all the directions for improvement, note that the OTC market (over-the-counter transactions, precisely the scope of transactions sensitive to counterparty risk) is currently concentrated at 90% on a dozen counterparties. In this context, it is likely that systemic risk has a dominant effect on financial systems. For this reason in particular, a systemic approach to this risk would hold many advantages over "local" methods, except of course that its implementation would be more complex. It would give a more accurate picture of the evolution of the financial system. It would also constitute a salutary effort toward standardization: today there are as many ways of computing counterparty risk as there are financial institutions. From an operational point of view, it would better capture counterparty risk and lead to more effective hedging against it. From a regulatory perspective, it would allow the regulator to have a more long-term view of the economy.
[1] Risk Annual Summit: DVA hedging creates systemic risk, says Brigo. Risk magazine, Mar. 2012.
[2] CVA and Wrong Way Risk. John Hull and Alan White.
[3] Traders v. theorists. Laurie Carver, Risk magazine, Oct. 2012.
[4] The FVA debate continues: Hull and White respond to their critics. John Hull and Alan White, Risk magazine, Oct. 2012.
November 9th, 2012 | tags: Counterparty Value Adjustment, Debt Value Adjustment, Fund Value Adjustment, Wrong Way Risk
In this post, I share a technical suggestion for designing a new file system that could be quite useful to everybody.
The idea is roughly to design a generic, persistent, user-transparent Information System (IS) capable of reliably storing, tracing, granting secure access to, and finally reproducing digital content produced by users on their PCs, over long periods of time. The motivation is to ease knowledge transmission, targeting primarily scientific labs or companies. However, in the mid term, such an Information System could be used by private individuals, for instance as a certifiable service for exchanging data with third parties (private individuals, institutions, government bodies…), or for transmitting digital content for family or historical reasons. This makes it possible to consider strong economic models to monetize such an IS.
Technologically, a suggestion for achieving such an IS is presented below. This suggestion amounts to defining a web of static, persistent digital content, built on top of the current web of dynamic, non-persistent IP addresses. Allow me to spell out this idea.
Starting from the user's PC, the suggested Information System can be summarized as follows:
1) Traceability means unique identification of digital content. The underlying technology is CAS (Content-Addressable Storage), providing a reliable way to unambiguously identify files and the chunks of data coming from them: CAS systems assign them a unique identification number (UID). To strengthen traceability, a suggestion is to fold into the UID the UID of the previous version whenever a file is modified, providing the basis for a version system (a small sketch of this, together with the lookup of point 6, is given after this list).
2) Persistence means that the file system is "write-once": no deletion of content is possible at any stage. CAS systems are write-once file systems.
3) Generic and user-transparent means that the system is not ontology-based. Users run on top of a virtual machine, compatible with the user's Operating System (OS).
4) Reproducibility means that the whole call-stack state is stored: the digital content, the programs used to interpret it, as well as the OS.
5) Perennial storage means redundancy. The CAS file systems of several users' PCs are synchronized on a regular basis with a distant back-up server sharing the same CAS file system. These back-up servers synchronize themselves with another, upper-level back-up server, and so on.
6) Accessibility means that all users sharing a common node (back-up server) have access to other users' content, provided read access is granted. Within such an IS, a user can access any digital content on the basis of its UID. This means that the user's local CAS file system first looks in its own repository to find the content; if it is not present, the file system then queries the upper node, which itself interrogates other nodes, until the path to the data is found. This organization of the information can be designed anywhere between a centralized form, like a tree, and a decentralized one, like the existing web based on physical IP addresses. A suggestion is to give to the last stage of the rocket, in the case of a tree, or to the final IS, the name Mnemosine (the Italian form of Mnemosyne, the goddess of memory).
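As a hedged sketch of points 1) and 6), the snippet below chains a content hash with the UID of the previous version and resolves a UID by climbing from a local node to its parent; all names are illustrative and no existing CAS implementation is implied.

```python
import hashlib

def uid(content: bytes, previous_uid: str = "") -> str:
    # SHA-256 of the content, chained with the UID of the previous version
    return hashlib.sha256(previous_uid.encode() + content).hexdigest()

class Node:
    def __init__(self, parent=None):
        self.store = {}                 # local, write-once CAS repository
        self.parent = parent            # upper-level back-up node, if any

    def put(self, content: bytes, previous_uid: str = "") -> str:
        key = uid(content, previous_uid)
        self.store.setdefault(key, content)   # write-once: never overwrite
        return key

    def get(self, key: str) -> bytes:
        if key in self.store:
            return self.store[key]
        if self.parent is not None:
            return self.parent.get(key)       # query the upper node
        raise KeyError(key)

root = Node()                    # the top-level ("Mnemosine"-like) node
laptop = Node(parent=root)
v1 = root.put(b"first draft")
v2 = laptop.put(b"second draft", previous_uid=v1)   # version chained to v1
assert laptop.get(v1) == b"first draft"             # resolved via the parent
```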
Step 6) amounts to designing a new exchange protocol based on UIDs, which explains why this project amounts to designing a web of content, built on top of the current web of IP addresses. Moreover, if some digital content cannot be interpreted on a local user PC, then attaching a UID link to the whole state of the machine at the time the content was produced should allow it to be reproduced, at least remotely, at any future date. Note also that such an IS provides a natural basis for services such as digital strongboxes, online PCs, data-exchange facilities, and version systems.
To my knowledge, most of the technology needed to develop such an IS already exists. What is lacking is the content-based access protocol of step 6).
A hint for getting started is the existing Venti CAS-based file system in Plan 9 from Bell Labs; it might be a good candidate for adaptation to such a purpose. My very first impression is that this could be a two-year research project.