Wednesday, March 31, 2010

Polyvalence Defies Commoditization

One of the tacit objectives of commoditization is to disembed knowledge from humans in order to reembody that knowledge into technology (Giddens, 1990). The impact of commoditization on knowledge workers is a well-understood facet of post-modernization and the new economy.

The success of the commoditization movement has been particularly discouraging for professional services: the public sector is increasingly unwilling to pay premium fees for services, and the private sector is too distrustful of professionals to engage them even when doing so is arguably prudent. Joan Capelin (2005) expresses a commonplace sentiment:
It is cold comfort to realize that all the professions are experiencing this race to the bottom. We tend not to see the effects of commoditization — nor even feel sympathetic — when lawyers, accountants, and management consultants find they are being squeezed for lower fees by their marketplace. They are far more rewarded for their time, to begin with. But I can tell you that there is much handwringing going on across all the business-based professions.
Nevertheless, as the forces of commoditization continue their forays into the realm of human knowledge in search of latent technologies, critical issues are emerging about how and if certain human specific functions can feasibly be transferred into machines. In particular, the expert handling of polyvalent information and data seems to defy commoditization.

Polyvalence (a synonym for multivalence) denotes something that has multiple values, meanings, or appeals. Polyvalent symbols and metaphors carry multiple (often esoteric) meanings and can fulfill expressive, transactional, and interactional functions concurrently.

Prof Daniel Cohen (2003) first used the term polyvalent to describe human activities that are difficult or impossible to delegate, let alone commoditize. Examples of polyvalent human functions include surgery, piloting an aircraft, coaching a professional sports team, and representing a client in court. Thus, the professions are often associated with polyvalence.

If technology can displace a particular human activity, then the displaced activity is no longer truly polyvalent.

More to follow about polyvalence in future posts.

Sources:

Capelin, J (2005, September 8), Confronting Commoditization, DesignIntelligence.

Cohen, D (2003), Our Modern Times: The New Nature of Companies in the Information Age (S Clay and D Cohen, Trans), Cambridge, MA: MIT Press.

Giddens, A (1990), The Consequences of Modernity, Stanford, CA: Stanford University Press.

Related Posts:

Financial Management Requires Polyvalence

The Topological Landscape between Automation and Expertise

Thursday, March 25, 2010

Financial Management Requires Polyvalence

A frequent problem in advanced financial management is the task of interpreting (and understanding) probabilistic forecasts. The following exercise illustrates the essence of the problem.

Suppose that the following (very simplistic) financial projection for the “next” period arrives in your hands:

The financial analyst who handed you the report then comments “…the projection is based on historical data…” You focus your attention on the positive net profit number and conclude that the projection looks reasonable based on your “gut.” Having reached your conclusion, you head home to enjoy your life.

Unfortunately, a single forecasted value is rarely exact; the projected “number” is inevitably “off” by some (often significant) amount.

Contrast the above with this second story. In this instance, the analyst hands you a financial projection for the same next period; the report begins again with the worksheet displayed above. However, the report continues with additional information as shown in the tables and graphs that follow:

After looking over the report, you listen as the analyst explains that a lognormal distribution was used to model gross revenues, based on a fit to the available historical data, and confides that your feedback on that decision would be helpful. The analyst also expresses concern about net profitability for the coming period, predicting “…there appears to be a significant chance that we won't be profitable…” (see the last histogram, projecting a 30% chance of losses). With this information, you call your spouse with a message that you will be working late at the office to “deal with some problems…”
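For readers who want to experiment with this kind of reporting, here is a minimal Python sketch of the analyst's approach as I imagine it: gross revenues are drawn from a lognormal distribution, costs are subtracted, and the share of simulated outcomes with a negative net profit estimates the chance of a loss. All parameter values are my own illustrative assumptions, not figures from the worksheet above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (not taken from the worksheet above):
mu, sigma = np.log(1_000.0), 0.20      # lognormal parameters for gross revenue (median ~1,000)
fixed_costs = 500.0                    # assumed fixed costs per period
variable_cost_rate = 0.45              # assumed variable costs as a share of revenue
n_trials = 100_000

revenue = rng.lognormal(mean=mu, sigma=sigma, size=n_trials)
net_profit = revenue - fixed_costs - variable_cost_rate * revenue

print(f"Median net profit:    {np.median(net_profit):8.1f}")
print(f"Mean net profit:      {net_profit.mean():8.1f}")
print(f"Probability of loss:  {np.mean(net_profit < 0):8.1%}")
```

With these particular assumptions, the simulation yields a modest positive median profit alongside a loss probability in the neighborhood of 30% -- exactly the kind of "hear and feel" information that a single point estimate cannot convey.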


The moral of the story is that a “number” in itself is rarely sufficient information and justification for financial decision-making. In many ways, financial managers must be able to “hear” and “feel” the meaning of probabilistic reporting much like the conductor of a symphony orchestra must hear and feel the meaning of a musical score. Effective financial managers maintain a polyvalent sensitivity toward a complexity of factors and dependencies in order to transcend numerics as guidance for decisions. In this manner, the art of financial economics embraces its science.

Financial managers might take comfort in these words by Nobel Laureate Prof Robert J Aumann (2000, p. 141):
The best art is something that strikes a chord with the viewer or listener. It expresses something that the viewer or listener has experienced himself and it expresses it in a way that enables him to focus his feelings or ideas about it. You read a novel and it expresses some kind of idea with which you can empathize, or perhaps something that you yourself have thought about or experienced. Take a sculpture or a cubist painting. It expresses some reality, some insight, in an ideal way. That is what the best mathematical economics does. It is a way of expressing ideas, perhaps in an ideal way.
By the way, probabilistic (stochastic) reasoning is now integral to finance, so if any of the reporting above is a challenge (for you or your staff), please consider my training and software offerings here.

Source: Aumann, R J (2000). Economic Theory and Mathematical Method: An Interview. In Collected Papers (Vol I, pp. 135-144). Cambridge, MA: MIT Press.

Related Posts:

Polyvalence Defies Commoditization

The Topological Landscape between Automation and Expertise

Wednesday, March 24, 2010

Democracy Reborn

Given that healthcare reform in America is now the "law of the land," I continue to ponder the underlying philosophical themes that might explain the political changes now underway. My previous writings have argued that America is witnessing the final collapse of elitism, accompanied by the "beginning of the end" for populism, as pluralism unfolds to become the political "motif" of our time. Whether we like it or not, the old "white male" regimes of the past (both elitist and populist) are withering as a pluralist electorate and agenda take deep root across our nation's landscape. The fledgling "tea party" movement pales in comparison to the pluralist pageantry that celebrates the future. By all accounts, the pluralist political machine is grinding down the elitist and populist political structures in detail. From where I sit, American democracy is being reformed as the pundits of old retire generationally. Indeed, we are witnessing a rebirth of democracy in our time.

Republican Senators Jon Kyl, Judd Gregg, and Mitch McConnell

Related Posts

Corporate Anarchy?

From Populism to Pluralism

Saturday, March 20, 2010

A Regulatory Architecture for Cross-Border Banking Groups

by Stefano Micossi © VoxEU.org

Policymakers and commentators have suggested that large banks should be broken up. This column argues that such an idea risks the very existence of a global financial system. It outlines an alternative framework in which deposit insurance should be covered by banks not taxpayers, banks should not be guaranteed a bailout, and regulators should be mandated to step in when the warning signs begin.

Following the demise of Lehman Brothers, the debate on regulatory reform has led to the conclusion that large banking institutions must be broken up and their risk-taking activities limited by law along the lines of the ‘Volcker rule’ (Gros 2010). Not only are such actions unnecessary, they may be hard to implement and could reduce the availability of credit to the economy (if for example they reduce the ability of banks to hedge their credit positions). The main consequence of such a plan would be the disintegration of global financial markets as they break down into segregated national markets.

In recent research, my colleagues and I set out an alternative solution that can achieve a more stable and resilient financial system without renouncing the benefits of global, multi-purpose financial institutions and innovative finance (Carmassi et al 2010). The solution is predicated on effectively curtailing moral hazard and strengthening market discipline on banks’ shareholders and managers by raising the cost of the banking charter to fully reflect its benefits for the banks, and on restoring the possibility that all – or at least most – large banks can fail without unmanageable systemic repercussions.

Back to basics

The crisis was generated by cross-border banks levering their deposit base and acting, through the wholesale interbank market, as the residual suppliers of liquidity for all the other players in financial markets. This multiplied funds for speculation and helped to sustain a gigantic inverted pyramid of securities made up of other securities and yet again other securities. Without the money-multiplying capacity of the banks, the asset price bubble and the explosion of financial intermediation and aggregate leverage would not have been possible. Extreme investment strategies by bankers obviously reflect moral hazard created by the expectation that governments would step in to bail them out in case of large losses.

The first thing that needs to be done to restore corrective incentives for bank managers and shareholders is to eliminate the most obvious pitfall in banking regulation: reliance on capital requirements based on risk-weighted assets. This approach is flawed because asset risk cannot be assessed and measured independently of market conditions and market sentiment. As a result, the need for capital will always be underestimated under favourable market conditions, leading to balance-sheet fragility and precipitous asset sales when market sentiment turns sour. Banks need a capital buffer to overcome the massive asymmetries of information between bank managers on one side, and investors and regulators on the other. This asymmetry currently makes it easy for bankers to accumulate excessive risks in the quest for higher returns before markets become aware. The way to provide this buffer is to set capital requirements in straight proportion to the total assets or liabilities of banking groups.
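To make the contrast concrete, here is a small Python sketch (my own illustration, with assumed exposures, risk weights, and ratios, not the authors' numbers) of how a risk-weighted requirement can demand far less capital than a requirement set in straight proportion to total assets:

```python
# Illustrative comparison of a risk-weighted capital requirement versus a
# simple leverage-based requirement proportional to total assets.
# All portfolio figures, risk weights, and ratios are assumptions.

assets = {                      # asset class: (exposure, assumed risk weight)
    "sovereign_bonds": (400.0, 0.00),
    "mortgages":       (300.0, 0.35),
    "corporate_loans": (200.0, 1.00),
    "trading_book":    (100.0, 0.20),
}

capital_ratio_rwa = 0.08        # assumed 8% of risk-weighted assets
leverage_ratio    = 0.05        # assumed 5% of total (unweighted) assets

total_assets = sum(exposure for exposure, _ in assets.values())
rwa = sum(exposure * weight for exposure, weight in assets.values())

print(f"Total assets:                 {total_assets:8.1f}")
print(f"Risk-weighted assets:         {rwa:8.1f}")
print(f"Capital required (RWA-based): {capital_ratio_rwa * rwa:8.1f}")
print(f"Capital required (leverage):  {leverage_ratio * total_assets:8.1f}")
```

Under the assumed weights, the "safe" assets shrink the risk-weighted requirement well below the straight proportion of total assets, which is precisely the understatement of capital needs the author warns about in benign markets.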

Fixing flaws in prudential capital rules does not remove moral hazard from the banking system, whose specific sources must be tackled separately. These are:
  • the deposit-institution franchise,
  • the implicit or explicit promise of bailout in case of threatened failure, and
  • regulatory forbearance.
The problem associated with the deposit franchise is well known. If depositors have doubts about the bank’s solvency, they will run for the exit, forcing rapid liquidation of the bank’s assets, possibly with large losses and contagion spreading instability to other banking institutions.

First pillar for tackling moral hazard

Deposit insurance can reassure depositors, and thus is the first pillar for tackling moral hazard, but it also mutes their incentive to monitor the management of their bank, since they no longer risk losing their money.

More importantly, deposit insurance has evolved in most countries into a system effectively protecting the bank, or the entire banking group, rather than depositors. When a bank risks becoming insolvent, supervisors step in to cover its losses and replenish its capital so as to avoid any adverse repercussions on market confidence. Moreover, most deposit insurance systems are inadequately funded by insured institutions, entailing an implicit promise that taxpayers’ money will make up the difference in case of failure of large banks.

Thus, in order to re-establish a proper price for the banking charter, banks should carry, ex ante, the full cost of deposit protection, making sure that in most circumstances the guarantee fund would be adequate to reimburse depositors when individual banks fail. Of course, no fund could ever be sufficient to meet a general banking crisis, but a fund of an appropriate size would offer adequate protection in normal circumstances, with only a predictable share of banks going bankrupt.

Deposit insurance fees are the right instrument to make banks pay for the risk they generate. They should be determined on the basis of a careful probabilistic assessment of the likelihood of failure within the overall pool of deposits and risks of the banking system (within appropriately defined market jurisdictions). This is where the risk profile of banks’ asset and loan portfolios can be taken fully into consideration, together with the quality of bank management and risk control more broadly, thus creating effective penalties for riskier behaviour. Size itself could be appropriately penalised by higher fees that incorporate a probabilistic price for the potential threat to systemic stability.
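As a rough illustration of such probabilistic fee-setting, the sketch below prices deposit insurance as an expected loss -- failure probability times loss given failure times insured deposits -- plus a surcharge for size. The banks, probabilities, and surcharges are hypothetical; they simply show how riskier behaviour and sheer size could translate into higher fees:

```python
# Illustrative expected-loss pricing rule for deposit insurance fees.
# All inputs below are hypothetical and for demonstration only.

def insurance_fee(insured_deposits, prob_failure, loss_given_failure,
                  size_surcharge=0.0):
    """Annual fee a bank would pay into the guarantee fund (assumed inputs)."""
    expected_loss = prob_failure * loss_given_failure * insured_deposits
    return expected_loss * (1.0 + size_surcharge)

# Hypothetical banks: (insured deposits, annual failure probability,
#                      loss given failure, size surcharge)
banks = {
    "small_conservative": (  5_000, 0.002, 0.10, 0.00),
    "large_risky":        (200_000, 0.010, 0.25, 0.50),
}

for name, (deposits, pd_, lgd, surcharge) in banks.items():
    fee = insurance_fee(deposits, pd_, lgd, surcharge)
    print(f"{name:20s} fee = {fee:10.1f}  ({fee / deposits:.3%} of insured deposits)")
```

With these hypothetical inputs, the large, risky bank pays a fee many times higher relative to its deposits than the small, conservative one -- the penalty structure the author has in mind.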

Second pillar: Not “too big to fail”

The second pillar required to greatly limit moral hazard in the financial system is credibly removing the promise that large banks cannot fail. To this end, all main jurisdictions should establish special resolution procedures – as already exist in the US and the UK – managed by an administrative authority, with powers to require early recapitalisation, manage required reorganisations and, once all this has failed, liquidate the bank with only limited systemic repercussions. Crisis prevention, reorganisation, and liquidation would all be part of a unified, consolidated resolution procedure managed for each bank by one administrative authority.

In order to make resolution feasible, all banks and banking groups would be required to prepare and provide to their supervisors a document detailing the full consolidated structure of legal entities that depend on the parent company for their survival, the claims on the bank and their order of priority, and a clear description of operational – as distinct from legal – responsibilities and decision-making, notably regarding functions centralised with the parent company. This "living wills" document may also comprise "segregation" arrangements to preserve certain functions of systemic relevance even during resolution: for clearing and settlement of certain transactions, netting out of certain counterparties, suspension of covenants on certain operations.

Third pillar: No regulatory forbearance

Finally, the third pillar of an effectively reformed financial system is a set of procedural arrangements that will prevent supervisory forbearance. Supervisory discretion to postpone corrective action would be strictly constrained, so that bankers, stakeholders and the public would know that mistakes would always meet early retribution. To this end it is necessary to establish a system of early mandated action by bank supervisors ensuring that, as capital falls below certain thresholds, the bank or banking group will be promptly and adequately recapitalised. Should capital continue to fall, then supervisors should be required to step in and impose all necessary reorganisation, including disposing of assets, selling or closing lines of business, changing management, ceding the entire bank to a stronger entity.

Should this not work, then liquidation would commence. A bridge bank would take over deposits and other “sound” banking activities, thus ensuring their continuity. All other assets and liabilities, together with the price received for the transfer of assets to the bridge bank, would remain in the “residual” bank, which would be stripped of its banking licence. An administrator for the liquidation of the residual bank would be appointed to determine its value and satisfy creditors according to the legal order of priorities (based on the law of the parent company and other jurisdictions involved).

The attractive feature of mandated corrective action is that asset disposals and change of management will normally take place well before capital falls to zero, so that losses for the insurance fund and ultimately taxpayers are more likely to be much smaller.

References

Carmassi, J, Luchetti, E, and Micossi, S (2010), "Overcoming Too Big to Fail: A Regulatory Framework to Limit Moral Hazard and Free Riding in the Financial Sector", CEPS-Assonime Report.

Gros, D (2010), "Too interconnected to fail = too big to fail: What is in a leverage ratio?", VoxEU.org, 26 January.

Republished with permission of VoxEU.org

Friday, March 19, 2010

Corporate Anarchy?

by Sunshine Mugrabi © SunshineMug.com

In the last few posts I've been developing an idea that has been rattling around my brain for some time now. It can be summed up in the following questions: Is the social media revolution we're experiencing right now a result of new technology, or is it the other way around? In other words, is the quiet revolution sweeping through our society a response to new technology -- or is this new way of communicating something that we have called into existence? Could it be that the rise of Twitter, Facebook, and FriendFeed is a reflection of people's desire to break down the old barriers and speak directly to one another? Are we seeing the end of the "expert" era, in which all knowledge and understanding is filtered to us through a select few?

It seems to me that this is a generational thing. My parents' generation, the children of the 1960s, started some kind of revolution. It was in many ways a flawed attempt. As the writer Ken Wilber has pointed out, for all its good intentions, the boomer generation was immensely narcissistic. It had (and still has) a tendency to blow its accomplishments up out of proportion. And it was very much still stuck in an "us/them" paradigm. In fact, the whole idea of a generation gap is based on that! However, there's no denying that our parents' generation -- with their anti-war protests, long hair, and rebellion -- shook up the old order for good and all.

Then came my generation - Generation X. We were a bit lost for a time. They called us slackers, because we tended to be introspective. We couldn't exactly rebel, because our parents had already done that, so we kind of came up with our own way. When I was in college, I took to calling myself an anarchist. This was partly to annoy my parents and professors. But it was also my way of showing my dissatisfaction with the dual options that were being served up as my only choices -- Democratic v. Republican, Left vs. Right, Women vs. Men, etc.

Now, there's a new generation--I believe they're calling it Generation Y. They seem to have evolved a whole new stance. It's as if they have taken the best of the boomer generation and my generation and melded them into something entirely new.

They're not rebelling. They're talking. And, lo and behold, no one is left out. It may have started on the campus of Harvard University, but now Facebook is open to all. Even Walmart has a page (though many of the wall comments aren't terribly kind).

The new generation seems to intuitively understand something that for all its free love, the hippies never completely got. That is, we are a human family -- all of us connected to one another. When we try to deny it, we experience the opposite. Alienation. Loneliness. Anger. All of the things that seem to ail our society today.

The great marketing guru Seth Godin (who I generally like and agree with) showed himself to be stuck in the old paradigm with a recent post, "You Matter." In it, he lists all of the situations that show that you matter. They were all very heartfelt. For example:

"When you love the work you do and the people you do it with, you matter ... When kids grow up wanting to be you, you matter."

The list goes on. However, the new paradigm is as follows: "Everyone matters. Period." You could be having the crappiest day ever, and feeling no love whatsoever for your fellow man. You could be a protestor on the streets of Tehran. You could be a tech startup struggling to get noticed. Or, you could be Apple.

No one is left out of this. Everyone matters.

Republished with permission of SunshineMug.com

Thursday, March 18, 2010

VaR Methodologies Compared II

In addition to the Value at Risk (VaR) comparison chart I posted at VaR Methodologies Compared, I found a second equally insightful comparative analysis in table form, this one by Prof Philippe Jorion (2007, p. 270):

As stated in my previous post, I believe that Monte Carlo (i.e., stochastic) simulation provides the most robust approach to estimating VaR, though this approach requires more computing (and brain) power.
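For those unfamiliar with the mechanics, here is a bare-bones Python sketch of Monte Carlo VaR under deliberately simple assumptions (a single position with normally distributed daily returns); a production implementation would simulate correlated risk factors and revalue the full portfolio:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumptions for illustration only: a 1,000,000 portfolio whose daily returns
# are normal with zero mean and 1.5% volatility.
portfolio_value = 1_000_000.0
daily_vol = 0.015
confidence = 0.99
n_scenarios = 100_000

simulated_returns = rng.normal(loc=0.0, scale=daily_vol, size=n_scenarios)
simulated_pnl = portfolio_value * simulated_returns

# VaR is the loss at the (1 - confidence) quantile of the simulated P&L distribution.
var_99 = -np.quantile(simulated_pnl, 1.0 - confidence)
print(f"1-day 99% Monte Carlo VaR: {var_99:,.0f}")
```

The quantile of the simulated profit-and-loss distribution is the VaR estimate; richer distributional assumptions and full revaluation models slot into the same framework, which is the flexibility that makes the approach attractive (and computationally demanding).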

Source: Jorion, P (2007), Value at Risk: The New Benchmark for Managing Financial Risk (3rd ed), New York: McGraw-Hill.

See also:

VaR Methodologies Compared

"Model-less" Modeling?

One of the themes that seems to be emerging in some circles of finance is the notion of "model-free" analysis and reasoning. Here is a comment that I recently posted in response to such thinking in a popular public forum:

To all, the case for modeling probabilities and inferential statistics is a bit harder to dismiss than some are presupposing in this forum. Now, I do concede that researchers have many challenges to deal with when validating data and models. Moreover, the theoretical relationships between dependent and independent variables (including proxy measures) are often misspecified and/or misinterpreted by researchers, to the point where some research is outright misleading (especially in the social sciences). Nevertheless, inferential modeling of complex phenomena cannot simply be dismissed as not credible by declaratory argument and assertion. There is a tremendous burden of proof assumed when one sets out to falsify probability theory and inferential statistics as a discipline. Thus, I fear that the notion of "model-less" reasoning is problematic as an approach to understanding what is happening in this place called "reality" around us. For the record, I am not as pessimistic about the validity and reliability of such methods as some may be here and in society. My advice to all is "be prepared" to defend your evidence if your goal is to falsify probability theory as a tool for understanding and evaluating the risks (and dangers) that seem to prevail in this universe. Thanks for the opportunity to comment...

VaR Methodologies Compared

I recently came across a succinct comparison of the three accepted approaches to computing Value at Risk (VaR). According to Boris Agranovich of GlobalRiskConsult:
VAR or Value at Risk is a summary measure of downside risk expressed in the reference currency. A general definition is: VAR is the maximum expected loss over a given period at a given level of confidence. VaR does not inform on the size of loss that might occur beyond that confidence level.

The method used to calculate VaR may be historical simulation (either based on sensitivities or full revaluation), parametric, or Monte Carlo simulation. All methodologies share both a dependency on historic data, and a set of assumptions about the liquidity of the underlying positions and the continuous nature of underlying markets. In the wake of the current crisis the weaknesses of VAR methodology became apparent and they need to be addressed....

A VAR system alone will not be effective in protecting against market risk. It needs to be used only in combination with limits both on notional amounts and exposures and, in addition, should be reinforced by vigorous stress tests.
I personally maintain that the Monte Carlo (i.e., stochastic) approach is superior to the other methods because the results provide a more "polyvalent" explanation of the financial risk in the subject investment.
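To see how the three methodologies can disagree even on the same history, here is an illustrative Python comparison using synthetic daily returns in place of real data (all figures are assumptions for demonstration only):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "historical" daily returns stand in for real data (fat-tailed on purpose).
history = rng.standard_t(df=5, size=1_000) * 0.01
portfolio_value = 1_000_000.0
z_99 = 2.326                                   # one-sided 99% normal quantile

# 1) Parametric (variance-covariance): assume normal returns, use mean and std dev.
var_parametric = (z_99 * history.std(ddof=1) - history.mean()) * portfolio_value

# 2) Historical simulation: empirical 1% quantile of the observed returns.
var_historical = -np.quantile(history, 0.01) * portfolio_value

# 3) Monte Carlo simulation: draw from a model fitted to the history (a plain
#    normal here for brevity; fatter-tailed models and full revaluation are
#    where the approach really pays off).
simulated = rng.normal(history.mean(), history.std(ddof=1), size=100_000)
var_monte_carlo = -np.quantile(simulated, 0.01) * portfolio_value

for label, var in [("Parametric", var_parametric),
                   ("Historical", var_historical),
                   ("Monte Carlo", var_monte_carlo)]:
    print(f"{label:12s} 1-day 99% VaR: {var:>10,.0f}")
```

Run on the same history, the three methods typically disagree -- the fat-tailed "history" here produces a historical-simulation VaR above the normal-based figures -- which is exactly why understanding the assumptions behind each methodology matters.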

Source: GlobalRiskConsult

See also:

VaR Methodologies Compared II

Wednesday, March 17, 2010

A Global Readership

As of today, almost a third of the "hits" on The Vantage Point have come from Asia and Europe -- around half came from North America -- I guess this means that The Vantage Point has gone global!

Source: SiteMeter.com

Tuesday, March 16, 2010

Risk versus Danger

The line between “risk” and “danger” is very thin, at least according to their definitions -- is either any more or less real than the other...?
__________________

risk
n
1. the possibility of incurring misfortune or loss; hazard
2. (Business / Insurance) Insurance
a. chance of a loss or other event on which a claim may be filed
b. the type of such an event, such as fire or theft
c. the amount of the claim should such an event occur
d. a person or thing considered with respect to the characteristics that may cause an insured event to occur
at risk
a. vulnerable; likely to be lost or damaged
b. (Social Welfare) Social welfare vulnerable to personal damage, to the extent that a welfare agency might take protective responsibility
no risk Austral informal an expression of assent
take or run a risk to proceed in an action without regard to the possibility of danger involved in it
vb (tr)
1. to expose to danger or loss; hazard
2. to act in spite of the possibility of (injury or loss) to risk a fall in climbing
[from French risque, from Italian risco, from rischiare to be in peril, from Greek rhiza cliff (from the hazards of sailing along rocky coasts)]
__________________

danger
n
1. the state of being vulnerable to injury, loss, or evil; risk
2. a person or thing that may cause injury, pain, etc.
3. Obsolete power
in danger of liable to
(Medicine)
on the danger list critically ill in hospital
[daunger power, hence power to inflict injury, from Old French dongier (from Latin dominium ownership) blended with Old French dam injury, from Latin damnum]
__________________

Source: Free Dictionary

Monday, March 15, 2010

Advanced Analytics Not Information Technology

I recently fielded a forum question about the cost-creation versus value-adding capabilities of information technology (IT) and advanced (i.e., bespoke) analytics in enterprise. Here is how I responded:

Regarding the linkages between information technology (IT), advanced analytics, and value, I would gently suggest that IT is a cost center, and advanced analytics are the value-adding proposition. In other words, don't go to the IT department if you are seeking to activate value-adding analytics (though I will concede that IT does have an effective role in business intelligence [BI] production, which is very different from advanced analytics in my view).

Unfortunately, IT solution providers know full well that advanced analytics is what creates value, and so IT firms will typically "bundle" various analytic offerings with a proposed IT solution in an effort to bamboozle the client into believing that scarce IT dollars can buy both transaction management and advanced analytical services together in one "big" IT installation deal. Buyers of IT solutions should therefore beware.

What is needed today is for IT managers to yield the analytics space to subject-matter experts whose analytical solutions stand apart from the data-warehousing infrastructure, while reducing IT costs by exploiting the economies of scale that IT solutions typically provide.

Again, IT is a cost center, while advanced analytics (separate from BI) are the value-adding activity.

Saturday, March 13, 2010

Risk Assessment

...or is it...?

Why Policymakers Need to Take Note of High-Frequency Finance

by Richard Olsen © VoxEU.org

Why should high-frequency finance be of any interest to policymakers interested in long-term economic issues? This column argues that the discipline can revolutionise economics and finance by turning accepted assumptions on their head and offering novel solutions to today’s issues.

I believe high-frequency finance is turning aspects of economics and finance into a hard science. The discipline was officially inaugurated at a conference in Zurich in 1995 that was attended by over 200 of the world’s top researchers. Since then, there have been a large number of publications, including a book with the title Introduction to High-Frequency Finance. “High-frequency data” is a term used for tick-by-tick price information collected from financial markets. Tick data are valuable because they represent the transaction prices at which assets are bought and sold. The price changes are a footprint of the changing balance of buyers and sellers.

The term “high-frequency finance” has a deeper meaning and is a statement of intent indicating that research is data-driven and agnostic. There are no ex ante theories or hypotheses. We let the data speak for itself. In natural sciences this is how research is often conducted. The first step towards discovery is pure observation and coming up with a description of what has been observed – this may sound easy but is not at all the case. Only in a second step, when the facts are clearly established, do natural scientists start formulating hypotheses that are then verified with experiments.

In high-frequency finance:
  • The first step involves the collecting and scrubbing of data.
  • The second step is to analyse the data and identify its statistical properties.
Here one looks for stylised facts that are significant and not just spurious. Due to the masses of data points available for analysis (for many financial instruments one can collect more than 100,000 data points per day), identification of structures is straightforward: either there is a regularity or there is none.
  • The third step is to formalise observations of specific patterns and seek tentative explanations, theories to explain them.
The abundance of data in high-frequency finance has profound implications for the statistical relevance of its results. Unlike in other fields of economics and finance, where there is not sufficient data to back up the inferences, this is not an issue in high-frequency finance. The results are unambiguous and turn economics and finance into a hard science, just as is the case for natural sciences. This is not a bad thing.
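As a toy illustration of the first two steps -- collecting tick-level data and identifying its statistical properties -- the following Python sketch aggregates synthetic tick returns over progressively longer intervals and tabulates the mean absolute price change at each scale. Applied to real tick data, this is the kind of procedure that exposes the scaling-law regularities reported in the high-frequency literature; here the data are simulated purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for tick data: 1,000,000 "tick" returns of a random walk.
tick_returns = rng.normal(0.0, 1e-4, size=1_000_000)
prices = np.cumsum(tick_returns)

print(f"{'interval (ticks)':>18s} {'mean |return|':>15s}")
for interval in (1, 10, 100, 1_000, 10_000):
    coarse = prices[::interval]                       # sample every `interval` ticks
    mean_abs_return = np.mean(np.abs(np.diff(coarse)))
    print(f"{interval:>18d} {mean_abs_return:>15.6f}")
```

For a pure random walk the mean absolute return grows roughly with the square root of the interval; departures from that benchmark in real tick data are exactly the kind of stylised fact that the third step then tries to explain.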

High-frequency data as an answer to singularity of macro events

Today we are all grappling with the global financial crisis and have to make hard decisions. In living memory, we have not seen a crisis of a similar scale, so policymakers are in a vacuum and do not have any comparable historical precedents to validate their policy decisions.

If the global economy had been in existence for 100,000 years, this would be a different matter. We would have had many crises of a similar scale, and we could use these previous events as a benchmark to evaluate the current crisis. The modern economy with financial markets linked together through high speed communication networks trading trillions of dollars on a daily basis is a new phenomenon that did not exist even 20 years ago. People refer to the events of 1929 and subsequent years, but while these events can be used as one possible point of reference, they are not meaningful in the statistical sense. On a macro level, we can make observations but no inferences because we do not have the historical data. There is a void that researchers and policymakers need to acknowledge.

Fractals: Understanding macro structure from micro data

High-frequency finance can fill the void with its huge amounts of data – if we embrace fractal theory that explains how phenomena are the same even if they occur at different scales. Fractal theory suggests that we can search for explanations of the big crisis by moving to another time scale, the short term.

At a second-by-second level, there is an abundance of crises and systemic shocks; just imagine the many price jumps caused by unexpected news releases, political events, or large market orders. Albeit on a short-term time scale, we can study how regime shifts occur and how human beings react, and the large number of occurrences allows for meaningful analysis. We study all facets of a crisis: how traders behave before it, how they react to the first onslaught, how they panic when the going gets hard, how the frame of reference that previously anchored them and gave them a degree of security breaks down, and how, once the shock has passed and the excitement dies down, the aftershock depression sets in and a gradual recovery to a new state of normality eventually begins.

Everyday events sum up and shape tomorrow

High-frequency finance has another big selling point, one that policymakers should take note of: the study of market events on a tick-by-tick basis brings to the surface the detailed flows of buying and selling that occur in the market. From this information, it is possible to build maps of how market participants build up positions and how asset bubbles develop over time. By tracking price action on a tick-by-tick basis, it is possible to infer the composition of those bubbles, much as geologists infer structure from rock formations. Researchers can identify who has been buying and selling, on what time horizons they trade, how resilient they are to price shocks, and what turns them from net buyers into net sellers. Based on this information, we can make inferences about the likely collapse of those bubbles.

High-frequency finance opens the way to develop "economic weather maps". Just as in meteorology, where large-scale models rely on the most detailed information about precipitation, air pressure, and wind, the same is true for the economic weather map. We have to start collecting data at a tick-by-tick level and then iteratively build large-scale models. Today, the development of such a global economic weather map has barely started. The "scale of market quake" (a free Internet service) is a first instalment, but it marks the start of an exciting development.

High-frequency finance holds out the hope of turning aspects of economics and finance into a hard science through the sheer volume of data and its ability to set events into their appropriate context by mapping rare macro events onto a short-term time scale with a near infinity of comparable events. Second, the tracking of events on a tick-by-tick basis opens the door to identifying underlying flows and developing economic weather maps. Surely that’s not a bad thing?

References

Bisig, T, Dupuis, A, Impagliazzo, V, and Olsen, R (2009), “The scale of market quakes”, working paper, September.

Gençay, R, Dacorogna, M, Müller, U, Olsen, R, and Pictet, O (2001), An Introduction to High Frequency Finance, Academic Press.

Mandelbrot, B (1997), Fractals and Scaling in Finance, Springer.

Mandelbrot, B, Hudson, R (2004), The (Mis)behavior of Markets, Basic Books.

Republished with permission of VoxEU.org

Wednesday, March 10, 2010

Professor Richard Stites (1931-2010)

Professor Richard Stites passed away on Sunday, March 7, 2010, in Helsinki, Finland. He will be buried at Uspenskii Sobor in Helsinki, in view of one of his favorite places, the Slavonic Library. Professor Stites taught Russian and Soviet history at Georgetown University for over 30 years, and will be greatly missed. Professor Stites was my favorite professor during my studies at Georgetown. He once observed during a lecture that "you can't read peoples' minds, but you can read their hearts..."

Rest in peace...

Risk Stratified

Source: The Investment Professional

Risk Managers' Haiku

I saw this on the PRMIA website and had to share it:
Something bad's happened
We have to add new controls
Oh! Happened again

~ Dr Patrick McConnell
By the way, haiku is a Japanese poetry form in which a concept is encapsulated in 17 sounds/syllables, typically in three lines of 5, 7 and 5 sounds.

Source: PRMIA

Risk Taxonomized

Source: Mitsubishi UFJ Securities

Monday, March 08, 2010

Can "Cloud" Computing Meet the Needs of Power Users?

Microsoft CEO Steve Ballmer has now committed his company's future to so-called "cloud" computing, a movement that makes plenty of sense for enterprise in the 21st century. No doubt, cloud computing technology will spread quickly in support of nascent consumer demand. On the other hand, comments such as the following by John Dvorak of PC Magazine remind me that there are constraints in cloud computing that power users should consider before moving away from their current software and systems. Dvorak is scathing in his current evaluation:
The cloud stinks. Its applications have always been much slower than their desktop counterparts. Try to get to the end cell of a large cloud-based spreadsheet. You'll long for the desktop version. The whole process is exacerbated by the speed of the Internet. The Internet is also unreliable. A couple of weeks ago, I was down for two hours. A month ago, I lost my connection for 20-plus hours.
I consider myself to be an Excel (Microsoft) power user. In a few months, I will be installing the new 64-bit version of Excel 2010 onto my Intel Core i5 powered 64-bit computer system. It is doubtful that I will be moving quickly to adopt cloud-based spreadsheet solutions, and certainly not before I confirm that the new systems can reliably and efficiently meet my needs, in detail. For now, that's my stand...

Source: PC Magazine

Big Into Small: The Future of Enterprise Resource Planning (ERP)

With ads such as this running at airports across the US, one begins to contemplate how Enterprise Resource Planning (ERP) might transform itself from big into small...

Saturday, March 06, 2010

The Forecasting Problem

The forecasting problem is integral to successful enterprise and public administration. Yet, many senior executives, managers, and administrators have difficulty interpreting the meaning of a given time-series forecast based on methodology alone. For example, consider the sequence of forecasts that follow; each 30-period extended forecast was created in Excel (Microsoft) using identical historical data with varying methods.

The first chart above depicts a straight linear forecast through the historical data and into the future. In this instance, the trend line slopes downward.

The second chart depicts a logarithmic forecast. Note that the 30-period future forecast here is slightly higher than in the previous example.

The next chart above illustrates the use of a third order polynomial and projects a significantly higher 30-period outcome than the previous methods.

This fourth chart depicts a fourth order polynomial trend line predicting declining results in the future.

The last graph above is a fifth order polynomial forecast and predicts that results will decline sharply in the future.

Observe that the forecasts generated by each method are quite different from one another. Moreover, the methods depicted above represent only a small sampling of the most common time-series forecasting techniques in use today. Other methods include autoregressive moving average (ARMA) models, generalized autoregressive conditional heteroskedasticity (GARCH) models, multivariate forecasting methods, and a long list of other advanced techniques in frequent use by companies and governments around the world.
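The divergence is easy to reproduce outside of Excel. The Python sketch below fits several trend models to the same hypothetical 60-period history and extrapolates each one 30 periods ahead; the series and parameters are invented for illustration, but the spread among the resulting forecasts mirrors the charts above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical historical series (60 periods) standing in for the Excel data.
t = np.arange(1, 61)
history = 100 + 0.5 * t + 10 * np.sin(t / 6.0) + rng.normal(0, 4, size=t.size)

t_future = 60 + 30                           # end of a 30-period extension

def poly_forecast(degree):
    coeffs = np.polyfit(t, history, degree)  # least-squares polynomial trend
    return np.polyval(coeffs, t_future)

# Logarithmic trend: y = a + b * ln(t), fitted by least squares.
b, a = np.polyfit(np.log(t), history, 1)
log_forecast = a + b * np.log(t_future)

print(f"Linear (degree 1):      {poly_forecast(1):10.1f}")
print(f"Logarithmic:            {log_forecast:10.1f}")
print(f"Polynomial (degree 3):  {poly_forecast(3):10.1f}")
print(f"Polynomial (degree 4):  {poly_forecast(4):10.1f}")
print(f"Polynomial (degree 5):  {poly_forecast(5):10.1f}")
```

Higher-order polynomials fit the history more closely but extrapolate erratically, which is why the choice of method carries a message of its own.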

Given the above, the challenge for executives and managers becomes how best to compare and evaluate the various forecasting methods in use today. A number of research questions come to mind: What criteria does one employ to differentiate and evaluate a specific forecast? What specific conceptual methodologies (e.g., mathematical) underlie each approach? Finally, what message does one send (or not) in choosing a specific forecasting technique (and projection)?

The forecasting problem in enterprise and government is very real, and simply training more analysts to forecast effectively is only part of the solution. We also need more senior executives, managers, journalists, public administrators (including politicians), and leaders in general who can comprehend and interpret the strengths, weaknesses, and implications of the various state-of-the-art forecasting techniques in use today. Finally, the public at large would benefit from a better understanding of modern forecasting methods as a listening audience, if only to avert poor judgments in their purchasing, advocacy, and voting decisions.

I am reminded of a story told to me by a professor many years ago about the importance of understanding statistical forecasting. The story begins in a US Civil War setting back in 1864 with an officer reading and paraphrasing headlines from a newspaper to some soldiers who had gathered around eager to hear the news of the day.

"It says here that we suffered 50 percent casualties in Vicksburg the other day...," said the officer.

Upon hearing the report, a nearby soldier responded to the officer by asking, "Wow, is that a lot...?"

The moral of the story is that statistical forecasting is everyone's business, but most especially for the management of enterprise in the 21st century.

To learn more about other more advanced time-series forecasting methods, including training and software tools, visit my website linked below.

Learn More

Related Articles:

Tools for Decision and Risk Analysis

Friday, March 05, 2010

Continuum of Pure Uncertainty and Certainty

Prof Hossein Arsham makes an excellent case for using probabilistic (i.e., stochastic) models when confronted with "risky" decisions. His continuum of pure uncertainty and certainty is instructive for analysts and decision makers alike:
The domain of decision analysis models falls between two extreme cases. This depends upon the degree of knowledge we have about the outcome of our actions, as shown below:

Ignorance     Risky Situation     Knowledge
<-------------------------|-------------------------> 
Uncertainty      Probabilistic    Deterministic

One "pole" on this scale is deterministic... The opposite "pole" is pure uncertainty. Between these two extremes are problems under risk. The main idea here is that for any given problem, the degree of certainty varies among managers depending upon how much knowledge each one has about the same problem. This reflects the recommendation of a different solution by each person.
The truth is that few financial decisions are made in an environment of complete knowledge, which raises the question of why deterministic models continue to prevail in management practice. I maintain that the future of financial economics is probabilistic.

Source: Tools for Decision Analysis

Thursday, March 04, 2010

Tools for Decision and Risk Analysis

Here is a weblink that every decision modeler and risk analyst will want to save to their favorites, entitled Tools for Decision Analysis: Analysis of Risky Decisions.

The website and tools are produced and maintained by Prof Hossein Arsham (pictured below) who is the Harry Wright Distinguished Research Professor of Statistics and Management Science at the Merrick School of Business at the University of Baltimore.

Wednesday, March 03, 2010

Stochastic Modeling is the Future in Financial Economics

The following is a forum question I recently responded to:

What kinds of courses do you recommend I take in preparation for a career in quantitative finance?

Make certain you include courses in stochastic modeling and simulation. In the past, determinism has dominated financial modeling as a methodology, with dire results. A deterministic model is a mathematical construct in which outcomes are precisely determined through known relationships among states and events, without any room for random variation; a given input will always produce the same output. In contrast, stochastic models represent both dependent and independent variables as ranges of values in the form of probability distributions. While deterministic models can still add value in financial economics and risk analysis, stochastic modeling is the future. Good luck!
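To make the distinction tangible, here is a small Python sketch (my own illustrative example, with assumed cash flows and distributions) that runs the same net-present-value relationship first deterministically and then stochastically:

```python
import numpy as np

rng = np.random.default_rng(11)

# The valuation relationship: a five-year NPV of a growing cash flow.
def npv(cash_flow, growth, discount_rate, years=5):
    periods = np.arange(1, years + 1)
    flows = cash_flow * (1 + growth) ** periods
    return np.sum(flows / (1 + discount_rate) ** periods)

# Deterministic model: fixed inputs always give the same output.
print(f"Deterministic NPV: {npv(100.0, 0.03, 0.10):.1f}")

# Stochastic model: the same relationship, but growth and the discount rate
# are drawn from assumed probability distributions, yielding a distribution of NPVs.
n = 20_000
growth = rng.normal(0.03, 0.02, n)            # assumed uncertainty in growth
discount = rng.normal(0.10, 0.01, n)          # assumed uncertainty in the discount rate
npvs = np.array([npv(100.0, g, r) for g, r in zip(growth, discount)])

print(f"Stochastic NPV: median {np.median(npvs):.1f}, "
      f"5th-95th percentile [{np.percentile(npvs, 5):.1f}, {np.percentile(npvs, 95):.1f}]")
```

The deterministic run returns a single number; the stochastic run returns a distribution whose spread is itself decision-relevant information.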