Probability Modeling in Finance and Economics - Essay Example

Summary
This essay "Probability Modeling in Finance and Economics" focuses on probability modeling in finance and economics, which provides a means to rationalize the unknown by embedding it into a coherent framework, clearly distinguishing what we know and what we do not know.

Extract of sample "Probability Modeling in Finance and Economics"

Probability modeling in finance and economics provides a means to rationalize the unknown by embedding it into a coherent framework, clearly distinguishing what we know and what we do not know. Yet the assumption that we can formalize our lack of knowledge is both presumptuous and essential at the same time. In economic and business forecasting, the accuracy of predictions is no better. Accrued evidence points out that the assumptions made by probability models are violated in practice. Long-run memory undermines the existence of martingales in finance. Further, can stock price uncertainty, or 'noise', be modeled by Brownian motion? Commensurate analysis of nonlinear time series has also followed its course in finance. ARCH and GARCH type models used to estimate volatility are nonlinear models expressed as a function (linear or not) of past variations in stocks. ARCH-GARCH models and, more recently, the range process have generated an extensive amount of research and papers. Just as chaos, the Hurst exponent and memory modeling have been topics of interest in many areas outside finance and economics.

ARCH and GARCH models, which are important for modeling and estimating volatility, are an important part of modern finance. Since the value of an option depends essentially on its volatility, volatility studies are assuming an important role in financial modeling. A primary feature of the autoregressive conditional heteroscedasticity (ARCH) model, as developed by Engle (1982), is that the conditional variances change over time. Following this seminal idea, numerous models incorporating the feature have been proposed. Among these models, Bollerslev's (1986) generalized ARCH (GARCH) model is certainly the most popular and successful because it is easy to estimate and interpret by analogy with the autoregressive moving average (ARMA) time series model. Analyzing financial and economic time series data with ARCH and GARCH models has become very common in empirical research, and a huge literature has been established. Several excellent surveys on ARCH/GARCH models are available, such as Bollerslev, Chou and Kroner (1992). More recently, the stochastic volatility model of Taylor (1986) offers an alternative to GARCH.

ARCH models have had a prominent role in the analysis of many aspects of financial econometrics, such as the term structure of interest rates, the pricing of options, and the presence of time-varying risk premia in the foreign exchange market. The quintessence of the ARCH model is to make volatility depend on the variability of past observations. An alternative formulation, initiated by Taylor (1986), makes volatility be driven by an unobserved component, and has come to be known as the stochastic volatility (SV) model. As with ARCH models, SV models have also been used intensively in the last decade, especially after the progress accomplished in the corresponding estimation techniques, as illustrated in the excellent surveys of Ghysels et al. (1996) and Shephard (1996). Early contributions that aimed at relating changes in the volatility of asset returns to economic intuition include Clark (1973), who assumed that a stochastic process of information arrival generates a random number of intraday changes of the asset price. The Black-Scholes model, for instance, assumes that the price of the asset underlying the option contract follows a geometric Brownian motion, and one of its most successful extensions has been the continuous-time SV model.
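As a concrete illustration of Engle's conditional-variance idea in its GARCH(1,1) form, the sketch below simulates returns whose variance follows the usual recursion. It is a minimal sketch in Python/NumPy, not any author's reference implementation; the parameter values and the function name are illustrative assumptions rather than estimates from data.

```python
import numpy as np

def simulate_garch11(n, omega=0.05, alpha=0.08, beta=0.90, seed=42):
    """Simulate r_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)                    # i.i.d. N(0, 1) innovations
    sigma2 = np.empty(n)
    r = np.empty(n)
    sigma2[0] = omega / (1.0 - alpha - beta)      # start at the unconditional variance
    r[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * z[t]
    return r, np.sqrt(sigma2)

returns, vol = simulate_garch11(5000)
# Volatility clustering shows up as fat tails: sample kurtosis exceeds the Gaussian value of 3.
print(((returns - returns.mean()) ** 4).mean() / returns.var() ** 2)
```

Even with Gaussian innovations, the simulated returns exhibit the volatility clustering and excess kurtosis that make GARCH attractive for financial data.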
In these models, volatility is not a constant as in the original Black-Scholes model; rather, it is another random process, typically driven by a Brownian motion that is imperfectly correlated with the Brownian motion driving the primitive asset price dynamics. In technical terms, the volatility process generated within ARCH-type models converges in distribution towards a well-defined solution of a stochastic differential equation as the sampling frequency increases. One concomitant reason is that the continuous-record asymptotics developed for ARCH models do not deliver a theory for the estimation of the relevant parameters; rather, such methods typically take the parameters as given and study the limiting behavior of the stochastic difference equations for a fixed, well-chosen sequence of parameters. The methodology introduced by Nelson, however, proved useful to show that appropriate sequences of ARCH models are able to estimate consistently the volatility of a given continuous-time stochastic process as the sampling frequency gets larger and larger, even in the presence of serious misspecifications. The ARCH model can thus be regarded as a device for performing filtering or smoothing estimation of unobserved volatilities. In addition to point estimates of the parameters of the system of stochastic differential equations, an essential ingredient for the practical implementation of any continuous-time stochastic volatility model is the knowledge of the volatility at the dates of interest.

Bollerslev presents the GARCH model by adding the notion that tomorrow's volatility depends not only on past realizations but also on the errors of the predicted volatility. The number of extensions to the basic GARCH model is very large. When the GARCH model is viewed as an ARMA-type model for the squared residuals, it can be extended or modified to allow for long memory (Fractional GARCH), for seasonality (Seasonal or Periodic GARCH), and for guaranteed non-negativity of the variances (Exponential GARCH), to mention a few. In addition, we may allow for an additional error process in the GARCH model, as is done in the stochastic volatility model. These often-considered extensions of the basic GARCH model are motivated by the empirical observation that the estimated residuals display non-normality, which may be caused by extreme outliers that cannot be described by the GARCH model.

The simulation of the implied distribution is a natural tool for implied-volatility-consistent option pricing, for economic policy planning, and for Value-at-Risk (VaR) and hedging purposes. The implied-volatility-consistent models started developing after 1992. They constitute an alternative approach to option pricing and hedging, as compared to the so-called traditional approach. The traditional approach starts building the model by assuming a way in which the underlying asset's price evolves; the Black-Scholes (BS) model is such an example. However, the empirical evidence shows that the BS model misprices options (especially deep out-of-the-money and deep in-the-money options). Equivalently stated, the BS implied volatilities vary with the option's strike (smiles/skews) and maturity (term structure), forming a two-dimensional implied volatility surface. Moreover, the implied volatility surface changes stochastically over time, exhibiting certain dynamics.
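Since the smile and term-structure effects above are defined through Black-Scholes implied volatilities, a short sketch of how one backs them out of market prices may help. The pricing function is the standard Black-Scholes call formula; the bisection solver, the function names, and the example quote are illustrative assumptions.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying asset."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Invert Black-Scholes for sigma by bisection (the call price is increasing in sigma)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical at-the-money quote; repeating this across strikes and maturities
# traces out the two-dimensional implied volatility surface described above.
print(implied_vol(price=10.45, S=100, K=100, T=1.0, r=0.05))  # ~0.20
```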
These empirical findings imply that one or more of the Black-Scholes assumptions are violated in practice. As a result, more complicated models, such as stochastic volatility models and jump models, were developed in order to explain the market option prices. All of them start from an assumption about the asset's stochastic process. Unfortunately, none of them is able to account for all the observed properties of the implied volatilities. For example, stochastic volatility models can explain the short-term behavior of implied volatilities (pronounced smiles/skews), but they cannot explain the long-term behavior (smile attenuation). As a result, a second approach, the smile-consistent approach, was developed. The models falling within this approach do not attempt to explain today's observed option prices. Instead, they take them as given and try to explain their dynamics over time.

The smile-consistent approach can be divided into two categories: smile-consistent deterministic volatility models and smile-consistent stochastic volatility models. The former category assumes that the price's instantaneous volatility is a deterministic function of the asset price and time. This function is not specified exogenously but is extracted from today's option prices (or, equivalently, from today's implied volatilities). The latter approach also takes the market option prices as an input, but it assumes that the volatility changes stochastically over time. Empirical studies reveal that the smile-consistent deterministic volatility models perform satisfactorily within sample, but they perform poorly out of sample. In other words, they have to be recalibrated every day in order to fit the smile. On the other hand, the performance of smile-consistent stochastic volatility models has not been studied as thoroughly so far. However, these models are very promising, since they try to capture the stochastic nature of the evolution of the volatility surface.

A stochastic volatility model can be obtained by discretization of a plain vanilla continuous-time model. This demonstrates that in handling theoretical models for practical ends and discretizing a model, we may introduce problems associated with stochastic volatility. Stochastic volatility models presume that a process's volatility (variance) varies over time following some stochastic process, usually well specified. As a result, it is presumed that volatility growth increases market unpredictability, thereby rendering the application of the rational expectations hypothesis, at best, a tenuous one. Modeling volatility might then require a broad number of approaches not falling under the Random Walk Hypothesis. Techniques such as ARCH and GARCH might be used to estimate the volatility empirically in such cases. Stochastic volatility introduces another 'source of risk', a volatility risk, when we model an asset's price (or returns). This leads to incompleteness and thus to non-unique asset prices. Risk-neutral pricing is no longer applicable, since the probabilities calculated by the application of rational expectations (i.e., hedging to eliminate all sources of risk and using the risk-free rate as a mechanism to replicate assets) do not lead to risk-neutral valuation. When buying and selling cause option prices to rise or fall, mathematically the price changes are attributed to changes in implied volatility.
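The paragraph above notes that a stochastic volatility model can be obtained by discretizing a continuous-time model. As one possible concrete instance, the sketch below applies a full-truncation Euler scheme to a Heston-style square-root variance process; the choice of these particular dynamics, the parameter values, and the function name are all illustrative assumptions, not something prescribed by the text.

```python
import numpy as np

def simulate_sv_euler(S0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                      xi=0.3, rho=-0.7, T=1.0, n_steps=252, seed=0):
    """Euler discretization of dS = mu*S dt + sqrt(v)*S dW1,
    dv = kappa*(theta - v) dt + xi*sqrt(v) dW2, with corr(dW1, dW2) = rho."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    S[0], v[0] = S0, v0
    for t in range(n_steps):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()  # correlated shocks
        vp = max(v[t], 0.0)                        # full truncation keeps sqrt(v) defined
        S[t + 1] = S[t] * np.exp((mu - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v[t + 1] = v[t] + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return S, v

prices, variance = simulate_sv_euler()
```

The negative correlation rho mimics the leverage effect: volatility tends to rise when prices fall, one of the stylized facts such models are built to capture.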
Since implied volatility can fluctuate over short periods, even minutes, option prices can rise and fall without any meaningful changes in the actual volatility or price of the stock. In short, implied volatility is a key factor that causes the value of an option to fluctuate over time. To understand why option prices sometimes change without meaningful changes in the stock price, recall that there are two types of volatility: historical and implied. As their names suggest, historical volatility is the average volatility shown by the underlying security in the past, while implied volatility is computed using an option-pricing model. Implied volatility gives a sense of what traders and market makers believe the volatility of the stock will be in the future. As expectations regarding the stock's future volatility change, so will the implied volatility; it is always in a state of flux. Sometimes the implied volatility in a stock option becomes quite high, which causes the option premiums to increase in value. For that reason, when traders say premiums are high or low, cheap or expensive, for a particular option, they are talking about implied volatility. Therefore, the first step in determining whether options are cheap or expensive is to compare implied volatility over time. Charting implied volatility on a specific stock or index is relatively straightforward.

Since implied volatility is a measure of expectations, if implied volatility is higher than statistical volatility, traders expect the future volatility of that stock to be high relative to the stock's past level of volatility. In this case, options are said to be expensive. However, if implied volatility is low compared to historical volatility, market players are expecting lower levels of volatility, and options are said to be cheap. Therefore, statistical volatility is an important gauge for judging whether current levels of implied volatility are appropriate. Just because you can calculate implied volatility does not mean that the calculation is a good estimate of forthcoming volatility. The options market does not really know how volatile the instrument is going to be, any more than it knows the forthcoming price of the stock. There are clues, of course, and some general ways of estimating the forthcoming volatility, but the fact remains that sometimes options trade with an implied volatility that is quite a bit out of line with past levels and may therefore be considered an inaccurate estimate of what is really going to happen to the stock during the life of the option. Implied volatility is a forward-looking estimate, and because it is based on traders' suppositions, it can be wrong, just as any estimate of future events can be in error. The question posed above is one that should probably be asked more often than it is: is implied volatility a good predictor of actual volatility?
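To make the historical-versus-implied comparison concrete, the sketch below computes annualized historical volatility from daily closing prices. The 21-day window and 252-trading-day annualization are conventional choices, and the price series here is simulated purely for illustration.

```python
import numpy as np

def historical_vol(prices, window=21, trading_days=252):
    """Annualized rolling standard deviation of daily log returns."""
    log_ret = np.diff(np.log(np.asarray(prices, dtype=float)))
    if len(log_ret) < window:
        raise ValueError("need at least `window` daily returns")
    rolled = np.lib.stride_tricks.sliding_window_view(log_ret, window)
    return rolled.std(axis=1, ddof=1) * np.sqrt(trading_days)

# Simulated daily closes; in practice one compares the latest value against
# implied volatility: options look 'expensive' when implied vol sits well above it.
prices = 100 * np.exp(np.cumsum(0.01 * np.random.default_rng(1).standard_normal(260)))
print(historical_vol(prices)[-1])
```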
Financial derivatives, or contingent claims, are specialized contracts whose intention is to transfer risk from those who are exposed to it to those who are willing to bear it for a price. Derivatives are heavily used by different groups of market participants, including financial institutions, fund managers, and corporations. While speculators intend to benefit from derivatives' leverage to make large profits, hedgers want to insure their positions against adverse price movements in the derivative's underlying asset, and arbitrageurs are willing to exploit price inefficiencies between the derivatives and the underlying assets. During the last two decades, the market for financial derivatives has experienced rapid growth: from 2000 to 2002 alone, global exchange-traded derivatives volume nearly doubled, reaching almost 6 billion. Exotic derivatives developed as advancements over standard derivative products, with specific characteristics tailored to particular investors' needs. The latest developments in the area are volatility derivatives. These contracts, written on realized or implied volatility, provide direct exposure to volatility without inducing additional exposure to the underlying asset.

Although most risk models produce a complete loss distribution at the risk horizon, the primary output is generally seen to be the Value-at-Risk (VaR): the maximum loss that a portfolio incurs with a given probability over the prespecified risk horizon. The most intuitive back test of the accuracy of a VaR model is based on the frequency of VaR violations: first, a test based on the time until the first violation, and second, a test based on the relative frequency of violations over an observed period. Both tests rest on the assumption that the risk forecasts are efficient, meaning that they incorporate all information known at the time of the forecast; risk forecasts will therefore be independently distributed over time, and further distributional assumptions are not needed.

A VaR measure is just an operation, some set of computations, designed to support a VaR metric. To design a VaR measure, we generally have some financial model in mind. Models take many forms, embracing certain assumptions and drawing on fields such as portfolio theory, financial engineering, or time series analysis. Such models, the assumptions and logic that motivate a VaR measure, are called VaR models. Finally, to use a VaR measure, we must implement it: secure the necessary inputs, code the measure as software, and install the software on the computer and related hardware. The result is a VaR implementation.
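As a sketch of the violation-frequency back test just described, the code below counts VaR exceedances and forms a Kupiec-style proportion-of-failures likelihood ratio. The function name, the sign convention (VaR quoted as a positive loss threshold), and the boundary handling are illustrative assumptions.

```python
import math

def kupiec_pof(returns, var_forecasts, coverage=0.99):
    """Proportion-of-failures test: do VaR violations occur at the nominal
    rate p = 1 - coverage? Returns (violation count, LR statistic), to be
    compared against a chi-square(1) critical value (3.84 at the 5% level)."""
    p = 1.0 - coverage
    n = len(returns)
    # a violation is a realized loss beyond the (positive) VaR threshold
    x = sum(1 for r, v in zip(returns, var_forecasts) if r < -v)
    if x == 0:
        return x, -2.0 * n * math.log(1.0 - p)   # limit of the LR as phat -> 0
    if x == n:
        return x, -2.0 * n * math.log(p)         # limit of the LR as phat -> 1
    phat = x / n
    lr = -2.0 * (x * math.log(p / phat)
                 + (n - x) * math.log((1.0 - p) / (1.0 - phat)))
    return x, lr
```

A large statistic signals that violations are far more (or less) frequent than the nominal rate, i.e., that the VaR forecasts are miscalibrated.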
Portfolio insurance refers to a collection of techniques for managing the risk of an underlying portfolio. With most portfolio insurance strategies, the goal is to manage the risk of a portfolio so that its value does not drop below a particular level, while at the same time allowing the value to increase. Portfolio insurance strategies are often implemented using options; however, stock index futures are equally important tools. Implementing portfolio insurance strategies using futures is called dynamic hedging. It is thus possible to create an insured portfolio without using options; these alternative strategies are called "dynamic hedging" strategies. With a dynamic strategy, the insurer must rebalance the portfolio very frequently, leading to a trade-off between having an exactly insured portfolio and incurring high transaction costs. It is vitally important to have a structured approach to trading to achieve long-term success: one must establish and follow some reasonably well-thought-out guidelines to achieve consistent success, for markets can turn on a dime.

Starting in the early 1980s, portfolio insurance, which was developed from the option pricing theory of Black and Scholes (1973) and Merton (1973), was widely implemented by many institutional investors in an attempt to produce a floor, or guaranteed minimum, portfolio return. This dynamic asset allocation strategy became even more popular when its implementation was greatly simplified by the introduction of constant proportion portfolio insurance (CPPI) by Perold (1986) and Black and Jones (1987).
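To illustrate the CPPI rule just introduced, here is a minimal discrete-time rebalancing sketch: the exposure to the risky asset is a constant multiple of the cushion between portfolio value and floor. The multiplier, floor fraction, daily risk-free rate, and simulated price path are illustrative assumptions.

```python
import numpy as np

def cppi(prices, floor_frac=0.90, multiplier=4.0, rf_daily=0.0002):
    """Constant proportion portfolio insurance: at each rebalance, invest
    multiplier * (value - floor) in the risky asset and the rest risk-free."""
    prices = np.asarray(prices, dtype=float)
    value, floor = 1.0, floor_frac
    path = [value]
    for t in range(1, len(prices)):
        cushion = max(value - floor, 0.0)
        risky = min(multiplier * cushion, value)      # no leverage in this sketch
        risky_ret = prices[t] / prices[t - 1] - 1.0
        value = risky * (1.0 + risky_ret) + (value - risky) * (1.0 + rf_daily)
        floor *= 1.0 + rf_daily                       # floor accrues at the risk-free rate
        path.append(value)
    return np.array(path)

# Simulated daily closes; with frequent rebalancing the portfolio should not finish
# below the floor, subject to the transaction-cost trade-off noted above.
px = 100 * np.exp(np.cumsum(0.01 * np.random.default_rng(7).standard_normal(252)))
print(cppi(px)[-1])
```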
In evaluating portfolio approaches, the main issue is how the input data are obtained, particularly the correlations. Although these approaches are illustrative of the ones being used in banks today, they are not the only models in existence. For instance, Adamidou et al. (1993) present a portfolio optimization technique for fixed income securities that uses a scenario approach. This technique is driven by a total return approach over a specific horizon with a stochastic interest rate path, and it does not consider default, credit drift, or correlation.

The development of different models for volatility is guided by the stylized facts observed in the data. This leads to a large array of alternative models being available to practitioners. However, alternative models should be considered as complements to each other rather than competitors. Although ARCH-type models and SV models were developed independently, the interpretation of ARCH models as approximate filters for SV models, and Nelson's (1990) finding that GARCH models converge to a continuous-time diffusion model, bridge the gap between the two approaches. Inspection of the data and testing for stylized facts appear to be important first steps for practitioners in determining which model is best suited to a given situation. Fitting more than one model to a given data set is not uncommon, as it permits comparison of different models in terms of in-sample fit and out-of-sample forecast performance. The most attractive class of models in application has been the ARCH-type models. Among applications of SV models, although it is well known that several underlying assumptions are violated, formulas derived from the Black-Scholes model and its various extensions have been the most widely used. As for regime-switching and threshold models, problems associated with estimation and testing, together with the unavailability of software, largely prohibit their wide application to real data. Given the importance of the issues addressed by models that allow for structural breaks, future research is likely to focus on these models and to develop new techniques that will make them more readily available to practitioners.

Sources

Akgiray, V. (1989) Conditional heteroscedasticity in time series of stock returns: evidence and forecasts, Journal of Business, Vol. 62.
Andersen, T. G. and Bollerslev, T. (1998) Answering the skeptics: yes, standard volatility models do provide accurate forecasts, International Economic Review, Vol. 39, 885-905.
Baillie, R. T. and Bollerslev, T. (1990) A multiplicative generalized ARCH approach to modeling risk premia in forward rate markets, Journal of International Money and Finance, Vol. 9.
Black, F. and Scholes, M. (1973) The pricing of options and corporate liabilities, Journal of Political Economy.
Bollerslev, T. (1986) Generalized autoregressive conditional heteroscedasticity, Journal of Econometrics.
Bollerslev, T. (1987) A conditional heteroscedastic time series model for speculative prices and rates of return, Review of Economics and Statistics, Vol. 69.
Bollerslev, T., Engle, R. and Nelson, D. (1994) ARCH models, in R. F. Engle and D. McFadden (eds), Handbook of Econometrics, Vol. IV, North-Holland, Amsterdam.
Ding, Z., Granger, C. and Engle, R. (1993) A long memory property of stock market returns and a new model, Journal of Empirical Finance, 83-106.
Dowd, K. (1998) Beyond Value at Risk: The New Science of Risk Management, Wiley, New York.
Emmer, S., Kluppelberg, C. and Korn, R. (2000) Optimal portfolios with bounded Capital-at-Risk, preprint, Department of Mathematics, TU Munchen; forthcoming in Mathematical Finance.
Engle, R. and Bollerslev, T. (1986) Modeling the persistence of conditional variances, Econometric Reviews, Vol. 5, 1-50.
Gibson, R. (ed.) (2000) Model Risk, pp. 125-136, Risk Publications, London.
Greene, W. H. (1997) Econometric Analysis, Prentice Hall, New Jersey.
Hull, J. Options, Futures and Other Derivatives, 3rd ed., Prentice Hall International.
Jorion, P. (2000) Risk management lessons from Long-Term Capital Management, European Financial Management, Vol. 6, 277-300.
Jorion, P. (2000) Value at Risk, McGraw-Hill, New York.
J.P. Morgan (1994) RiskMetrics, Technical Document No. 4, Reuters.
Rockafellar, R. T. (1970) Convex Analysis, Princeton University Press.
Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. (1999) Stochastic Processes for Insurance and Finance, Wiley, Chichester.
Stoll, H. R. (2000) Friction, Journal of Finance, Vol. 55(4), 1479-1514.
Tsay, R. S. (2002) Analysis of Financial Time Series, John Wiley and Sons.