25 February 2011

To linearise, or to log-linearise

by Jaromir Benes

In non-linear models, IRIS lets you choose, for each individual variable, whether you want it linearised or log-linearised. This is what the !log_variables and !allbut keywords are for. Here are a few tips regarding the choice. First of all, from a numerical-accuracy point of view, it really does not matter much in most circumstances. Because it's a safer bet and it's more convenient (both are explained later in this post), it's not a bad idea to opt for log-linearisation whenever possible.
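
For concreteness, here is a minimal sketch of what such a declaration might look like in a model file. Only !log_variables and !allbut come from the text above; the variable names (pi, r) are made-up placeholders for net rates that can switch sign (see rule 1 below), and the exact layout may differ slightly between IRIS versions.

% Log-linearise every variable except the two net rates listed here,
% which can change sign and therefore must be linearised.
!log_variables
!allbut
pi, r

Without !allbut, the block would instead list only those variables you do want log-linearised.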

Having said that, there are two rules you must obey:

  1. Obviously, you cannot log-linearise variables that can turn negative. If you have a variable that can only ever be negative (and never positive), you can always make it fit for log-linearisation by re-defining it with the opposite sign. Otherwise, variables that can flip their sign (for instance, net rates of change: inflation, output growth, and so on) can only be linearised.

  2. The choice is critical for non-stationary variables in models with deterministic trends, stochastic trends (unit roots), or both. Very loosely speaking, for a first-order approximate solution to be valid, every non-stationary variable that grows at a constant rate, or maintains a stable ratio to some other non-stationary variable, along the balanced-growth path must be log-linearised. Note also that you can have as many unit roots as you wish, provided certain conditions are met.

An example to illustrate the second point (whether the model is backward-looking or forward-looking is actually, and for some of you maybe a bit surprisingly, irrelevant – I mean completely irrelevant, as irrelevant as irrelevant can be):

Y = A^gamma * B^delta;
log(A) = log(A{-1}) + a + epsilon;
log(B) = log(B{-1}) + b + omega;

where gamma and delta are positive parameters, a and b are some log growth rates (positive, negative, or zero), and epsilon and omega are shocks. The model has a valid first-order representation only if you log-linearise all of its variables. Even if you set a and b to zero (thus removing the deterministic trends but retaining the stochastic trends, or unit roots), you must still log-linearise them all: Y, A and B. Note that Y stays in constant proportion to A^gamma * B^delta along its balanced-growth path. That's what I meant above.
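
To make the example self-contained, here is a sketch of how the whole thing could be written as an IRIS model file. The equations and the !log_variables declaration follow directly from the example above; the remaining keyword names (!transition_variables, !transition_shocks, !parameters, !transition_equations) are my assumption of the usual IRIS model-file layout and may be spelled differently in your version.

!transition_variables
Y, A, B

!transition_shocks
epsilon, omega

!parameters
gamma, delta, a, b

!transition_equations
Y = A^gamma * B^delta;
log(A) = log(A{-1}) + a + epsilon;
log(B) = log(B{-1}) + b + omega;

% All three variables go into the log-variables list.
!log_variables
Y, A, B

With this declaration, the first-order solution is computed in terms of log(Y), log(A) and log(B), which is exactly what the balanced-growth argument above requires.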

A digression: What on earth do we mean by "linearising" or "log-linearising" when the model does not have a fixed point around which we could linearise or log-linearise? Around what? Don't we need to stationarise the model first? I'll explain in another post some day. But it's surprisingly simple, as simple as simple can be. In fact, look just at the example above. In this particular case, no-one would question that the model has a valid log-linear solution, would they?

Now, there are a few subtle differences in the way IRIS inputs and outputs results for linearised versus log-linearised variables. Some of them might seem a bit confusing or peculiar at first, but you'll like them eventually :). Here are the most important:

  • Simulations with 'deviation' set to true return the difference between a linearised variable and its steady state (or balanced-growth path), but the ratio of a log-linearised variable to its steady state (or BGP). In other words, a value of 1.05 returned for a log-linearised variable means it is 5 percent above its steady state (or BGP) in that period, 0.91 means it is 9 percent below, and 1 means it is back on the steady state (or BGP). This is convenient, isn't it?

  • Accordingly, the zerodb function (which is basically only used to create an input database for simulations with 'deviation' set to true) creates time series filled with zeros for linearised variables (because in a 'deviation' type of simulation, 0 means a linearised variable is on its steady state), but filled with ones for log-linearised variables (because in a 'deviation' type of simulation, 1 means a log-linearised variable is on its steady state).

  • The steady state of a linearised variable is described by its level and its steady-state first difference, while the steady state of a log-linearised variable is described by its level and its gross rate of change. Hence, if the imaginary part of the steady state of a log-linearised variable is 1.01, it means the variable grows by 1 percent from one period to the next.

  • Simulations of shock contributions (i.e. simulations with 'contributions' set to true) return additive contributions of shocks (and of constant terms and initial conditions) for linearised variables, but multiplicative contributions for log-linearised variables. In other words, to recover the original path of a linearised variable you add all the contributions up; for a log-linearised variable, you multiply them up. A contribution of 0.97 then means that the respective shock contributed -3 percent to the overall path of the variable.

  • Lastly (and this one is almost incomprehensible...:), the means and standard deviations returned by filter and forecast for log-linearised variables are not what you would have thought. Their true interpretation is this: when you take their logarithms (the log of the values returned in the mean database and the log of the values returned in the std database for log-linearised variables), you get the characteristics of the underlying normal distribution associated with the logarithmic transformation of the original variables. In other words, log(mean) (where mean is a value returned in the mean database) is the mean of log(X), and log(std) is the std deviation of log(X). To get the correct characteristics of the log-normal distribution, i.e. the mean and std dev of X itself, you can use the function lognormal – see help model/lognormal.
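
To spell out this last point with numbers, here is a small MATLAB sketch of the conversion. The numerical values are made up purely for illustration; the formulas are the standard log-normal moment formulas, and in practice the lognormal function mentioned above does this conversion for you.

% Values returned by filter or forecast for a log-linearised variable X
% (made-up numbers, for illustration only):
m = 1.02;   % value from the mean database
s = 1.05;   % value from the std database

% Characteristics of the underlying normal distribution of log(X):
mu    = log(m);     % mean of log(X)
sigma = log(s);     % std deviation of log(X)

% Correct characteristics of X itself (standard log-normal formulas):
meanX = exp(mu + sigma^2/2);
stdX  = meanX * sqrt(exp(sigma^2) - 1);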

1 comment:

  1. Please cite more examples of variables that should not be log-linearised
