
EC320 FINAL

STUDY GUIDE:
1. Section 1: Econometrics
a. Purposes
b. Process
c. Model
d. Properties of variables
2. Section 2: Review of statistics
a. Random variables
b. PDFs
c. CDFs
d. Expected Value
e. Variance
f. Correlation coefficient
3. Section 3: Simple Regression
Analysis
a. Assumptions
b. Statistical inference
c. OLS method
d. Properties of estimators
e. Deriving regression coefficients
(simple model)
f. Residual
g. Variances (precision) of
regression coefficients
h. Standard error of regression
coefficients
i. Sum of squares
j. R2
k. Random/nonrandom components
l. Types of data
4. Section 4: Hypothesis testing
a. Formation of null
b. One vs two-tailed tests
c. testing hypotheses relating to
regression coefficients
d. Confidence intervals
e. Type 1 / 2 Errors
f. P-value
g. F-tests
h. Interpreting Stata Tables
i. ANOVA table
5. Section 5: Multiple Regression
Analysis
a. Definition

b. Assumptions
c. Deriving a fitted model
d. Efficiency
e. Precision
f. Standard errors of coefficients
g. Multicollinearity
h. Further Analysis of Variance
i. Adjusted/Corrected R2
j. Hedonic pricing (prediction)
6. Section 6: Nonlinear models &
transformation of variables
a. Linearity in var/par
b. Linearizing functions nonlin in var
c. Linearizing functions nonlin in
params
d. Log model
e. Semilog models
f. Economies of scale
g. Linear/semilog/loglog table
h. Disturbance terms
i. Comparing linear & logarithmic
specifications
j. Making RSS comparable: Box &
Cox
k. Models with quadratic and
interactive variables
l. Quadratic model
m. Interactive explanatory variables
n. Ramsey's RESET tests
o. Nonlinear regression
7. Section 7: Dummy Variables
a. Definition
b. reasons for use
c. Combining multiple functions into
one
d. Standard errors & hypothesis
testing of dummies
e. More elaborate dummies
f. Fitting individual implicit costs
g. Changing reference categories
h. Dummy trap
i. Multiple sets of dummies
j. Slope dummy (interactive)
k. F-testing dummies
l. Dummies in log/semilog
m. Chow test
n. Dummies vs chow

8. Section 8: specification of regression variables


a. Consequences (table)
b. Omitted variable bias
c. Invalidation of statistical tests
d. Proxy variables
e. Testing a linear restriction
f. F-testing a restriction
g. T-testing a restriction
h. Multiple restrictions
i. Zero restrictions
Some notes: While there will obviously be repeated information from my previous guides, I did my
best to alter it in a way that would possibly make it more understandable (as I came to understand
the topics better in time), less repetitive, and more relevant to the course. We have built on topics
such as Z-tests, and our methods for working through processes (such as testing hypotheses) have changed.
This is not a definitive list, although with this many pages I clearly put in everything that I
could think of as relevant. I do want to note that I didn't cover Chapter 6 (Section 8) as well as I
could have, and I apologize. However, I spent a lot of time compiling the rest of the
information, and I hope that you find it helpful.
Other study tips: Review the HW. Even if you can't practice the actual Stata work, practice the
general information that it covers by hand.
Review old midterms and finals. So far, we've seen pretty similar tests from the past this term. I
wanted to point out a few topics that seem to be covered (especially on True/False) on
every single test:

Properties of R2 (even the obscure ones). Know when, why, and by how much R2 changes in
specific situations
F-tests and degrees of freedom
Sums of squares
Variances
Correlations
Differences between simple and multiple regression assumptions and processes
Estimators: finding them, properties, etc.
OLS methods
R2! It seems to come up on every test
Errors (probabilities, definitions, etc)
Properties of the null hypothesis

Be sure to make a good note sheet. Try to analyze the trends of test questions in the past, and
figure out what to expect on this.
Good luck! Feel free to email me or send a message directly through Notehall with questions, rather
than leaving a nasty review if I made a small mistake. If you liked the guide, please rate it!

Section 1: Introduction
Econometrics: the application of statistical and mathematical methods to the analysis of economic data, with the
purpose of giving empirical content to economic theories and verifying/refuting them (Maddala)
Purposes of econometrics
1. Put empirical content to theory
2. Conduct hypothesis tests
3. Forecast
Process of econometrics

Problem: What are you doing?

Theory: Identify a theorized relationship, figure out what you need from the theory to address your
problem

Represent this need in an econometric model

Collect data

Estimate the model

Measure the accuracy of the estimated model with specification tests/diagnostics

Answer the problem by describing it, testing your hypothesis, and forecasting

Econometric model: Y = β1 + β2X2 + β3X3 + ... + βKXK + u

Y: dependent/explained/caused/endogenous variable

β's: unknown but constant parameters (estimated as b's)

X's: known and observed variables. Independent/explanatory/causal/exogenous. Variables that explain Y

u: A random error to capture unknown influences on Y

Properties of variables

Y is caused by the X's, and the X's are not caused by Y

Y's and X's are measured w/o error

The X's are not correlated with u

Section 2: Review of statistics


Random Variables (RVs): any variable whose value can't be predicted exactly

Discrete RVs have a specific set of possible values

EX: The outcomes of rolling a die are specific integers. There is no possibility of getting a 1.5

Continuous RVs can take on any value in a given range (infinite # of possibilities)

EX: The temp in a room can fall at any value between 55 and 85 degrees. This means that it could be 60.74274 or 55.1123 degrees

The chance of getting a specific value of X in this case is 0, because the prob would equal 1/∞

Probability distributions: pdfs are formulas that give the prob of getting different values of the RV

Prob density FNs (pdfs) show this graphically

Uniform distributions: the chances of getting specific values of the RV X are equal for all outcomes.

If a pdf is distributed uniformly, the chance of getting a specific X value is constant across a range, and is zero outside of the range

This should be review, so I'm not going to cover it extensively

Total probability (= 1) can be calculated geometrically (base * height = 1) or with integrals

Cumulative distributions: a function derived from the pdf

Gives the prob of X being less than or equal to some constant

Remember that the prob of X taking a specific value is zero if it's continuous, and the prob of X taking a value between −∞ and ∞ is 1

The base * height approach works for uniform dists, but you'll need integrals for cdfs

To derive a cdf (uniform example with pdf height 1/4 on [0, 4]):

F(C) = ∫ from 0 to C of (1/4) dX = X/4 evaluated from 0 to C = C/4, where C is any given X value and 1/4 is the height of the associated pdf

The height of the cdf F(X) at a given X value gives the chance of getting a value between −∞ and X

Expected values (EV): the EV of an RV is often described as its population mean

EV of a discrete RV: the weighted avg of all outcomes times their associated prob

E(X) = X1p1 + X2p2 + ... + XNpN

If g(X) is any FN of X, then E[g(X)] = Σ g(Xi)pi = g(X1)p1 + ... + g(XN)pN

EV of a continuous RV: E(X) = ∫ X f(X) dX

Population variance (var): measures the spread of a prob dist

Var(X) = σX² = E[(X − μX)²] = (X1 − μX)²p1 + ... + (XN − μX)²pN

The popvar of an RV can also be written σX² = E(X²) − μX²

Since all of the variation in X (around its mean) is due to its pure random component u, var(X) = var(u)

Sample variance: if the distribution of X has variance σX², it is estimated by the sample variance s² = Σ(Xi − X̄)² / (n − 1)

Population covariance (cov): measures whether or not X & Y move together

cov(X, Y) = σXY = E[(X − μX)(Y − μY)]

Its sign shows the direction of the relationship (cov itself is not bounded; the correlation coefficient below is)

Negative cov: X & Y move in opposite directions

If two variables are independent, cov = 0

Sample cov: sXY = (1/(n − 1)) Σ (Xi − X̄)(Yi − Ȳ)

Population correlation coefficient: ρXY = σXY / √(σX² σY²)

cov(X, Y) is unsatisfactory bc it depends on the units of measurement of X & Y

The pop corr coef is better because it is not subject to changes in the units of measure

Ranges btwn −1 and 1, w/ neg numbers showing a negative correlation, pos showing positive

If X & Y are independent, the correlation is zero (no linear relation)

Sample corr coef: r = Σ(Xi − X̄)(Yi − Ȳ) / √( Σ(Xi − X̄)² · Σ(Yi − Ȳ)² ) — a numeric sketch follows below

Section 3: Simple Regression analysis


Assumptions of the classical model for simple regressions (one X value)
* Note that these assumptions will be referred to by number later in the study
guide
1) The model is linear in parameters (β's) and correctly specified

Linear: Y = β1 + β2X + u

Not linear in parameters, e.g.: Y = β1·X^β2 + u

2) There is some variation in the regressor in the sample, and it is measured without error

While b's and Y's may be estimated, X is a true value

A population that is unique and varied could be referred to as heterogeneous; one with little variation is homogeneous

A homogeneous pop would NOT provide data that would allow us to extrapolate to low & high values of X

3) The disturbance term (u) has an expectation of zero

E(ui) = 0

4) The disturbance term is homoscedastic (its values will have a constant pop variance)

Var(ui) = σu²

If assumption 4 is not satisfied, the OLS reg coefs will be inefficient

5) The values of u have independent dists (independence assumption)

This means that u is not subject to autocorrelation (no systematic association btwn ui & uj)

This implies that the cov of ui and uj is equal to zero

6) ui has a normal distribution

Assumptions 3-6 might be referred to as properties of errors
Statistical inference: drawing conclusions based on data to describe the population

While your sample won't describe the entire population perfectly, estimating an econometric model allows you to get a good guess of population characteristics

Note that a fitted model fits the sample data better than it fits the population. This is by design.

Ordinary least squares method: best way to obtain estimators. The Gauss-Markov theorem states that OLS
is BLUE:

Best (smallest variance)

Linear (combinations of the Yi)

Unbiased [E(bj) = βj]

Estimator of regression parameters

Estimator properties:

Since we can never know their true values, we estimate β1 & β2 with b1 & b2

1) Unbiasedness: we want the EV of the estimator to be equal to the pop characteristic.

a. E(b1) = β1
b. See p118 for proof

2) Efficiency: want the pdf to be as concentrated as possible around the mean (want pop var to be as small as possible)
3) Consistency: an estimator is said to be consistent if it has a prob limit (plim), so that as N grows its dist closes into a spike located at the true value of the characteristic you are trying to estimate
Deriving estimators of regression coefs (simple model)

We know that b1 = Ȳ − b2X̄ (known as the intercept) and b2 = Σ(Xi − X̄)(Yi − Ȳ) / Σ(Xi − X̄)² (the slope)

We've already covered this extensively, but if you want further proof check out pages 85 through 92

Define the residual for each observation. The residual is the vertical distance between the actual and fitted values of Y: êi = Yi − Ŷi

Our goal is to minimize the residuals

We do this by calculating RSS, the residual sum of squares: RSS = Σ êi² = ê1² + ê2² + ... + êN²

To get the best fit, we want to minimize RSS

First-order conditions for a minimum: take the partial derivatives of RSS with respect to b1 and b2 and set them equal to zero:

∂RSS/∂b1 = 0, ∂RSS/∂b2 = 0

(A numeric sketch of these formulas follows below)
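Not from the course materials: a small Python (numpy) sketch of the closed-form OLS estimators and the RSS they minimize, on made-up data.

```python
# Illustrative sketch (not from the text): fitting the simple regression
# Y = b1 + b2*X by the closed-form OLS formulas derived above. Data are made up.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])

b2 = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()  # slope
b1 = Y.mean() - b2 * X.mean()                                               # intercept

fitted = b1 + b2 * X
residuals = Y - fitted
RSS = (residuals ** 2).sum()      # the quantity the first-order conditions minimize

print(b1, b2, RSS)
print(np.polyfit(X, Y, 1))        # [slope, intercept] from numpy, should agree
```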

Variances (Precision) of the reg coefficients

σb1² = σu² · [1/n + X̄² / Σ(Xi − X̄)²]

σb2² = σu² / Σ(Xi − X̄)²

MSD(X) = (1/n) Σ(Xi − X̄)²: the size of the variations in X around its mean (so σb2² can also be written σu² / (n·MSD(X)))

Standard error of the reg coefficients:

In reality, you can't calculate the popvars of b1 or b2, bc σu² is unknown. However, we can find an estimator of it with su² (derivation on p130)

s.e.(b1) = √( su² · [1/n + X̄² / Σ(Xi − X̄)²] )

s.e.(b2) = √( su² / Σ(Xi − X̄)² )

SE only gives a general guide to the likely accuracy of a reg coefficient

SE gives you an idea of the narrowness of the estimator's pdf

However, it does not tell you whether your estimate comes from the middle of the pdf, or one of the tails

With a greater σu², the sample variance of the residuals is likely to be higher

This means the SEs of the coefficients will be higher

This reflects the risk of getting inaccurate estimators

Sum of squares (SS)

ESS is the variation explained by the model

Explained SS: ESS = Σ(Ŷi − Ȳ)²

Fitted value − sample mean

RSS is the variation not explained by the model

Unexplained SS: RSS = Σ êi² = Σ(Yi − Ŷi)²

True value − fitted value

TSS = ESS + RSS

TSS = Σ(Yi − Ȳ)²

True value − sample mean of Y

Measuring Fit: R²:

Coef of determination, widely used measure of fit

R² = ESS/TSS = Σ(Ŷi − Ȳ)² / Σ(Yi − Ȳ)² = 1 − RSS/TSS

R² is constrained to values between 0 and 1

Its value can be thought of as the percent of the variation in Y explained by the model

Adding variables to the model (increasing K) will not decrease R²

The addition of variables to a model will typically increase R², but often by negligible amounts

R² will increase as K goes up, until N = K

Added observations can decrease R². If you add an observation that's an outlier in the population (very different from the rest), it can contribute to the fitted model being less accurate

Note that in the case of multiple reg models, it is impossible to measure each explanatory variable's individual contribution to the overall R²

I noticed that the properties of R² (even some obscure ones) have shown up in the T/F questions in every test (old and current). Make sure you are very comfortable answering questions about it (a numeric check of the SS decomposition follows below)
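A quick numeric check, not from the guide: it reuses the same made-up data and hand-fitted simple regression as the earlier sketch to confirm TSS = ESS + RSS and that both expressions for R² agree.

```python
# Illustrative sketch (not from the text): the TSS = ESS + RSS decomposition
# and R^2, using made-up data and the closed-form simple-regression fit.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])

b2 = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
b1 = Y.mean() - b2 * X.mean()
Y_hat = b1 + b2 * X

TSS = ((Y - Y.mean()) ** 2).sum()       # true value - sample mean of Y
ESS = ((Y_hat - Y.mean()) ** 2).sum()   # fitted value - sample mean
RSS = ((Y - Y_hat) ** 2).sum()          # true value - fitted value

print(TSS, ESS + RSS)                   # equal (up to rounding): TSS = ESS + RSS
print(ESS / TSS, 1 - RSS / TSS)         # two equivalent ways to get R^2
```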

Random/nonrandom components: With the model we're using, Y depends on the nonrandom X according to Y = β1 + β2X + u, and we fit it with Ŷ = b1 + b2X

Stochastic (random) components: random, unpredicted variation

Nonrandom component of Y: β1 + β2X

Random component: u

To decompose an estimator into fixed & random components, see p116

β1 & β2 may be unknown, but they are fixed constants

Types of data

Cross-sectional data: observations relating to units of observation at one moment in time (units could
be ppl, households, companies, countries, etc)

Time series data: repeated observations through time on the same subjects (ex: quarterly GDP)

Panel data: basically a hybrid of the two above, repeated observations on the same elements through
time

Section 4: Hypothesis testing


Forming a null hypothesis

If you can, always specify the null as the thing that you don't believe (straw person's principle), and the HA as the thing you want to prove to be true

We believe the null until the departure of b2 from β2^0 is so great that we cannot accept it

Closed parameters: you must account for all possible values of the test stat in H0 & H1.

Closed: H0: β1 ≥ 0, H1: β1 < 0

Not closed: H0: β1 = 0, H1: β1 < 0

Note that this null/alternative does not account for positive values of β1

One vs two tailed tests

Two-tail test: testing whether your test stat equals a specific value or not. EX: HA: β2 ≠ 3

One-tail: you wish to prove that β2 lies 1) below or 2) above a specific value

EX: HA: β2 > 0

tcrit will be lower in the one-sided test, making it easier to prove your belief

Testing hypotheses relating to reg coefficients:

We originally looked at hypotheses through the z-test, but now we've realized that there are too many factors at play to use such a simple test (ex: df)

In the case of simple regression coefficients, we test hypotheses with the t-distribution

The t dist is symmetric and bell shaped, but has heavier tails. This means that it's more likely to produce values that are very far from the true mean

As before, reject the null if the difference between b2 and β2^0 is too great.

To find the critical value, refer to the t table, and find where the significance level (top of the table) intersects the degrees of freedom (DF) (left of table)

DF = N − K: sample size (N) minus the number of parameters estimated (K)

K is equal to 2 in a simple regression, since we estimated 2 parameters (β1 & β2)

The difference is measured in terms of standard errors:

t = (b2 − β2^0) / s.e.(b2)

where β2^0 is the value stated in the null

Reject H0 if |t| > tcrit

The t-test helps you to determine whether X has some effect on Y, rather than a specific effect (see the sketch below)
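Not from the guide: a hedged Python sketch of the t-test recipe above, on made-up data, testing H0: β2 = 0 at the 5% level two-tailed; scipy is used only to look up the critical value.

```python
# Illustrative sketch (not from the text): a t-test of H0: beta2 = 0 for a simple
# regression, using t = (b2 - beta2_0) / s.e.(b2). Data are made up.
import numpy as np
from scipy import stats

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
Y = np.array([2.3, 2.9, 4.1, 4.8, 6.2, 6.8, 8.1, 9.0])
n, k = len(X), 2                       # K = 2 parameters estimated (b1 and b2)

b2 = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
b1 = Y.mean() - b2 * X.mean()
resid = Y - (b1 + b2 * X)
s_u2 = (resid ** 2).sum() / (n - k)    # estimator of the disturbance variance
se_b2 = np.sqrt(s_u2 / ((X - X.mean()) ** 2).sum())

beta2_null = 0.0
t_stat = (b2 - beta2_null) / se_b2
t_crit = stats.t.ppf(0.975, df=n - k)  # 5% two-tailed critical value
print(t_stat, t_crit, abs(t_stat) > t_crit)
```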

Confidence intervals: a range of hypothetical parameter values that would not be rejected, given the estimate

Sometimes different confidence levels will produce different results.

The percentage level measures how sure you are of your result. This means that at the 5% level you're pretty sure of the result, but at the 1% level, you're VERY sure

If the t stat is VERY high, try testing it at the 0.1% level (99.9% confident). This reduces the risk of a Type 1 Error

True model: Y = β1 + β2X + u

Fitted model: Ŷ = b1 + b2X

The reg coefficient b2 is incompatible with the hypothetical value β2 = β2^0 if either

(b2 − β2^0) / s.e.(b2) > tcrit   or   (b2 − β2^0) / s.e.(b2) < −tcrit

- in either case, reject the hypothetical value

β2^0 must satisfy this double inequality to be compatible with the estimate (two-tail):

b2 − tcrit · s.e.(b2) ≤ β2^0 ≤ b2 + tcrit · s.e.(b2)

Ex: 1.999 ≤ β2^0 ≤ 2.911. In this case, we would reject hypothetical values below 1.999 or above 2.911

Error types

Type 1 error: the null is rejected when it's actually true

Think of a type 1 error as the conviction of an innocent person

Prb(type 1) is the size of the rejection region. In a 5% test, a true null is rejected 5% of the time

Type 2 error: the null isn't rejected, but it's false

Think of a type 2 error as letting a guilty person go free

Prb(type 2) is beta. Find the cumulative probability at your rejection region's t or z value, then subtract it from 1 (ex: z = 1, Prb(type 2) = 1 − 0.8413 = 0.1587)

To see type 1 / 2 errors graphically, look at problem 3 on the winter 2010 MT2, and IV(d) on this year's MT2

P-value: an alternative approach to reporting the significance of reg coefficients

Notation in the Stata table: P > |t|

P is the prb of obtaining a t stat at least as extreme as the one observed as a matter of chance (if the null is true)

A p-value less than 0.01 means that the prb is less than 1%, which means that the null would be rejected at the 1% level

The p-value approach tells you more than the 5%/1% approach, bc it gives the exact prb of a type 1 error if you reject a true null

F-tests:

Even if there is no relationship between Y & X, in any sample there may appear to be one

R² would equal 0 only by coincidence for a reg of Y on X, even when there is no true relationship

F-tests test whether R² is reflecting a true relationship, or if it's just a coincidence

In other words, F-tests give you a critical value for R², which gives a cutoff point for when you can declare that X causes Y (for a given significance level)

H0: β2 = 0: no relationship between Y & X

F = [ESS/(K−1)] / [RSS/(N−K)] = [R²/(K−1)] / [(1 − R²)/(N−K)]

where K: # of parameters in the fitted reg model

The critical value of F gives a cutoff point (just like t), at which point you can conclude that the variables are correlated

To find the critical value of F, refer to the table which matches your significance level

Find where the df(num) and df(denom) intersect. This gives you Fcrit

If your calculated value of F is greater than Fcrit, you can reject the null

For our purposes we only need to find the Fcrit that lies to one side of the distribution (think of it as a one-tailed test)

Only in simple regressions:

Fcrit & tcrit come from the same H0: β2 = 0 and HA: β2 ≠ 0

Fcrit equals the squared tcrit of a two-tailed test

This is only true in simple models! F & t play very different roles in multi reg models

Proof on P147 (a numeric check follows below)
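Not from the guide: a short numeric check, on made-up data, of the F-stat computed from R² and of the F = t² relationship that holds only in the simple model.

```python
# Illustrative sketch (not from the text): the F-stat from R^2 for a simple
# regression, and the F = t^2 relationship that only holds in the simple model.
import numpy as np
from scipy import stats

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
Y = np.array([2.3, 2.9, 4.1, 4.8, 6.2, 6.8, 8.1, 9.0])
n, k = len(X), 2

b2 = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
b1 = Y.mean() - b2 * X.mean()
resid = Y - (b1 + b2 * X)
R2 = 1 - (resid ** 2).sum() / ((Y - Y.mean()) ** 2).sum()

F = (R2 / (k - 1)) / ((1 - R2) / (n - k))
F_crit = stats.f.ppf(0.95, dfn=k - 1, dfd=n - k)    # 5% critical value

se_b2 = np.sqrt(((resid ** 2).sum() / (n - k)) / ((X - X.mean()) ** 2).sum())
t = b2 / se_b2
print(F, F_crit, t ** 2)   # in the simple model, F equals t^2 (and Fcrit = tcrit^2)
```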

* Note that the ANOVA table below was moved here from the top left portion of the Stata output to make room

ANOVA table (Stata): analysis of variance

Source   | SS  | df  | MS = SS/df
Model    | ESS | K−1 | ESS/(K−1)
Residual | RSS | N−K | RSS/(N−K)
Total    | TSS | N−1 | TSS/(N−1)

K = number of parameters estimated (including the intercept)

K−1 = number of RHS (right hand side) variables (not including the intercept)

N = number of observations

Section 5: Multiple Regression Analysis


Definition:

Multiple reg models have multiple independent variables (with two regressors the fitted relationship is a plane in 3D)

EX: True model: Y = β1 + β2X2 + β3X3 + u

Our objective is to discriminate between the effects of each variable on Y

We do this by varying the X of interest, while holding the others constant

EX: if the fitted model for hourly earnings is Ŷ = −12 + 2.2·S + 0.7·EXP

−12 implies that someone with no experience and no education would have to pay $12 per hour to work. This is not realistic (solution later)

The coef on S implies an extra $2.20/hr for each year of education completed

The coef on EXP implies an extra $0.70/hr for each year of experience

Assumptions for multiple reg coefficients

1) Model is linear in parameters and correctly specified
(Y = β1 + β2X2 + β3X3 + u)
2) There is no exact linear relationship btwn the regressors in the sample
a) This is the only assumption that differs between the simple & multi reg models
b) See section on multicollinearity
3) Disturbance term has zero expectation
i) E(ui) = 0
4) u is homoscedastic (constant popvar)
i) Var(ui) = σu²
5) ui is distributed independently of uj for all j ≠ i
6) Disturbance term has a normal dist
Deriving a fitted model w/ RSS for multiple reg models

We still use RSS to measure goodness of fit

êi = Yi − Ŷi = Yi − b1 − b2X2i − b3X3i

Thus, RSS = Σ(Yi − b1 − b2X2i − b3X3i)²

To get the first-order conditions for a minimum, take the partial derivatives with respect to b1, b2, b3:

∂RSS/∂b1 = −2 Σ(Yi − b1 − b2X2i − b3X3i) = 0
∂RSS/∂b2 = −2 Σ X2i(Yi − b1 − b2X2i − b3X3i) = 0
∂RSS/∂b3 = −2 Σ X3i(Yi − b1 − b2X2i − b3X3i) = 0

Solving, b1 = Ȳ − b2X̄2 − b3X̄3, and b2 = [Σ(X2i − X̄2)(Yi − Ȳ)·Σ(X3i − X̄3)² − Σ(X3i − X̄3)(Yi − Ȳ)·Σ(X2i − X̄2)(X3i − X̄3)] / [Σ(X2i − X̄2)²·Σ(X3i − X̄3)² − (Σ(X2i − X̄2)(X3i − X̄3))²]

b3: switch all of the X2's with X3's and vice versa in the b2 formula above

(A matrix-form sketch follows below)
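Not from the guide: the same RSS-minimizing fit done in matrix form, b = (X'X)⁻¹X'Y, which is what solving the three first-order conditions amounts to. The data and true coefficient values are made up.

```python
# Illustrative sketch (not from the text): multiple regression coefficients from
# the normal equations, equivalent to solving the first-order conditions above.
import numpy as np

rng = np.random.default_rng(1)
n = 50
X2 = rng.uniform(0, 10, n)
X3 = rng.uniform(0, 5, n)
Y = 3 + 2.0 * X2 - 1.5 * X3 + rng.normal(0, 1, n)   # made-up data-generating process

X = np.column_stack([np.ones(n), X2, X3])           # design matrix with intercept
b = np.linalg.solve(X.T @ X, X.T @ Y)               # b1, b2, b3
resid = Y - X @ b
RSS = resid @ resid
print(b, RSS)
```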

Efficiency

Gauss-Markov theorem proves that OLS yields the most efficient linear estimators of the parameters
(lowest possible variance)

Applies to multi & simple reg models

Precision (multireg)

True model: Y = β1 + β2X2 + β3X3 + u

Fitted model: Ŷ = b1 + b2X2 + b3X3

Popvar of b2:

σb2² = [σu² / Σ(X2i − X̄2)²] · 1/(1 − r²X2X3)

σu²: pop variance of u

rX2X3: corr btwn X2 & X3

Replace Σ(X2i − X̄2)² with Σ(X3i − X̄3)² to get the pop variance of b3

Standard errors of coefs (multiple reg models)

SE of a coef is an estimate of its standard deviation

s.e.(b2) = √( [su² / Σ(X2i − X̄2)²] · 1/(1 − r²X2X3) ), where su² is the estimate of σu²

Multicollinearity (multico):

The greater the correlation rX2X3, the greater the σb2²

Greater risk of getting a coefficient estimate that doesn't represent the population well

If N & MSD(X) are large, and var(u) is small, you could still get good estimates

Multico must be caused by a combination of high corr and one of the other components being unhelpful

Multico is an issue of degree, not a yes/no question

Multico is most common in time series data, since the same subjects are analyzed over time

Overcoming multico:

There are 4 factors responsible for the variances of the reg coefs:

1) σu²  2) N  3) MSD(X)  4) r²X2X3

Direct methods attempt to improve the conditions that are responsible for the variances of the reg coefs:

1) Reducing σu²:
a. u is the joint effect of all of the variables influencing Y that are not in the model
b. If you remember an important omitted variable (currently contributing to u), adding it back will reduce the popvar of u & of the coefs
c. Adding more variables typically improves the model fit, but the effect is often insignificant
d. If additional variables are correlated with the old ones, SEs could increase
2) Increase N: In time series data, take surveys in shorter intervals (quarterly instead of annually)
3) Increase MSD(X) in the survey design phase
a. Stratify the sample (ex: all social statuses represented in a housing survey)
4) In the design phase, obtain a heterogeneous sample rather than a more homogeneous one (easier said than done)

Indirect methods
1) If correlated variables measure a similar concept, it might make sense to combine them into an overall index
a. Ex: Vocabulary and grammar scores should be correlated
2) Drop some correlated variables that have insignificant coefs
a. You risk introducing bias if said variable really should be in the model
3) Use extraneous info concerning the coef of one of the variables
a. Ex: Using an estimate found in a cross-sectional study in a time series model
b. See pg 174 & 175 for an extended example
4) Use a theoretical restriction (a hypothesized relationship among the params of a reg model)
a. EX: Y = β1 + β2X2 + β3X3 + u
b. Assume that X2 (grammar skills) is equally important to X3 (vocabulary) when determining Y (overall public speaking skills)
i. β2 = β3
c. Now, Y = β1 + β2(X2 + X3) + u
d. See example on p174 for more


Further Analysis of Variance

Using an F-test to see if the joint marginal contribution of a group of variables is significant (when adding more)

Given model: Y = β1 + β2X2 + ... + βKXK + u, with K parameters

Where ESS = ESSK

Next, add M − K variables and fit the model:

Y = β1 + β2X2 + ... + βKXK + βK+1XK+1 + ... + βMXM + u

With ESS = ESSM

You have now explained an additional SS equal to ESSM − ESSK, using up an additional M − K df

Is the increase due to chance? Test it.

The F-test, verbally: compare the extra explained SS (per df used up) with the unexplained SS that remains (per remaining df)

Since RSSM = TSS − ESSM, and RSSK = TSS − ESSK, the appropriate F-stat, under H0: the additional variables contribute nothing to the original equation, is:

F(M − K, N − M) = [(RSSK − RSSM)/(M − K)] / [RSSM/(N − M)]

H0: βK+1 = βK+2 = ... = βM = 0

The F-stat is distributed with M − K and N − M df

See the table below. The upper half gives the ANOVA for the explanatory power of the original K − 1 variables. The lower half gives it for the joint marginal contribution of the new variables.

                            | Sum of Squares (SS)       | df  | SS/df               | F-stat
Explained by original vars  | ESSK                      | K−1 | ESSK/(K−1)          | [ESSK/(K−1)] / [RSSK/(N−K)]
Residual                    | RSSK = TSS − ESSK         | N−K | RSSK/(N−K)          |
Explained by new variables  | ESSM − ESSK = RSSK − RSSM | M−K | (RSSK − RSSM)/(M−K) | [(RSSK − RSSM)/(M−K)] / [RSSM/(N−M)]
Residual                    | RSSM = TSS − ESSM         | N−M | RSSM/(N−M)          |

(A numeric sketch of this joint F-test follows below)
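Not from the guide: a sketch of the joint-contribution F-test on simulated data, where x3 genuinely matters and x4 does not; all variable names and numbers are made up.

```python
# Illustrative sketch (not from the text): F-test of the joint contribution of a
# group of added variables, F = ((RSS_K - RSS_M)/(M-K)) / (RSS_M/(N-M)).
import numpy as np
from scipy import stats

def rss(X, y):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    return e @ e

rng = np.random.default_rng(2)
n = 80
x2, x3, x4 = rng.uniform(0, 10, (3, n))
y = 1 + 0.8 * x2 + 0.5 * x3 + rng.normal(0, 1, n)   # x3 matters, x4 does not

X_K = np.column_stack([np.ones(n), x2])             # original model, K = 2
X_M = np.column_stack([np.ones(n), x2, x3, x4])     # adds M - K = 2 variables, M = 4
K, M = X_K.shape[1], X_M.shape[1]

F = ((rss(X_K, y) - rss(X_M, y)) / (M - K)) / (rss(X_M, y) / (n - M))
F_crit = stats.f.ppf(0.95, dfn=M - K, dfd=n - M)
print(F, F_crit)   # reject H0 (added variables contribute nothing) if F > F_crit
```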

Adjusted/corrected R²:

Remember how R² can't decrease with additional variables? Adjusted R² compensates for this by imposing a penalty for increasing K

Adj. R² = 1 − (1 − R²)·(N − 1)/(N − K)

It can be shown that the addition of a new variable to a model will cause adj. R² to go up only if the absolute value of its t-stat is greater than 1

This means that if adj. R² increases when K goes up, it doesn't necessarily mean that the coef of the new variable is significantly different from zero

It does not necessarily mean that the fit has improved

Hedonic pricing: an example of prediction

Assumes that the value of a good is determined by the combined values of its components

Hedonic pricing model: P = β1 + β2X2 + ... + βKXK + u

Fitted model: P̂ = b1 + b2X2 + ... + bKXK

Suppose a new variety of the good comes out with characteristics {X2*, X3*, ..., XK*}

It's natural to predict that the price of the new variety should be given by: P̂* = b1 + b2X2* + ... + bKXK*

Start by assuming that the good only has one relevant characteristic and we have fitted the simple reg model P̂i = b1 + b2Xi

Because the new variety of the good has the characteristic X = X*: P̂* = b1 + b2X*

Prediction error (PE): difference btwn the actual (P*) and predicted price (P̂*)

PE = P* − P̂*

Assume the model applies to the new good, so the actual price is P* = β1 + β2X* + u*

PE = (β1 + β2X* + u*) − (b1 + b2X*)

E(PE) = β1 + β2X* + E(u*) − E(b1) − E(b2)X* = 0, since E(u*) = 0, E(b1) = β1, and E(b2) = β2

So the prediction is unbiased: on average there is no prediction error

This assumes that the reg model assumptions are met

Popvar of the PE

σPE² = σu² · [1 + 1/n + (X* − X̄)² / Σ(Xi − X̄)²]

Obvious implications: the further the value of X* is from the sample mean, the larger the popvar. Also, var(PE) goes down as N goes up

Confidence interval for the actual outcome P*:

P̂* − tcrit · s.e.(PE) < P* < P̂* + tcrit · s.e.(PE)

(A numeric sketch follows below)
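Not from the guide: a sketch of the prediction-error variance and the confidence interval for P*, with made-up price data and an arbitrary X* = 10.

```python
# Illustrative sketch (not from the text): predicting P for a new X = X* from a
# fitted simple regression, using the variance-of-prediction-error formula above.
import numpy as np
from scipy import stats

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
P = np.array([10.2, 12.1, 13.8, 16.3, 17.9, 20.2, 21.8, 24.1])
n, k = len(X), 2

b2 = ((X - X.mean()) * (P - P.mean())).sum() / ((X - X.mean()) ** 2).sum()
b1 = P.mean() - b2 * X.mean()
resid = P - (b1 + b2 * X)
s_u2 = (resid ** 2).sum() / (n - k)

X_star = 10.0                               # new variety's characteristic (made up)
P_hat = b1 + b2 * X_star
var_PE = s_u2 * (1 + 1 / n + (X_star - X.mean()) ** 2 / ((X - X.mean()) ** 2).sum())
t_crit = stats.t.ppf(0.975, df=n - k)
lo, hi = P_hat - t_crit * np.sqrt(var_PE), P_hat + t_crit * np.sqrt(var_PE)
print(P_hat, (lo, hi))   # the interval widens as X* moves away from the sample mean
```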

Chapter 4: Nonlinear Models & Transformation of variables


Linearity in variables/parameters

Linear in variables (lin in var): every term consists of a straightforward variable times a parameter (no functions of the variables, like X² or log X)

Linear in parameters (lin in par): every term consists of a straightforward parameter times a variable (no functions of the parameters)

Linearizing functions that are nonlin in vars:

Given, say, Y = β1 + β2(1/g) + u, define Z = 1/g; the model becomes Y = β1 + β2Z + u, which is linear in both variables and parameters

This transformation is only cosmetic, but it gives us a function that is linear in var & par

Linearizing functions that are nonlin in params:

Some of these can be linearized by transforming the whole equation, e.g. by taking logs (see the log model below); others cannot, and require nonlinear regression (covered at the end of this section)

Logarithmic/loglinear model: Y = β1 X^β2

- This model is nonlinear in parameters AND variables, but taking logs gives log Y = log β1 + β2 log X

When you see a function that looks like this, you can immediately say that the elasticity WRT (w/ respect to) X is constant and equal to β2

Regardless of how Y & X are related mathematically, or their definitions, the elasticity of Y WRT X is the proportional (%) change in Y for a given proportional change in X:

elasticity = (dY/dX) / (Y/X)

EX: if Y is demand for a commodity, and X is income, this defines the income elasticity of demand for that good

In the demand example, this ratio could be seen as the marginal propensity to consume divided by the avg propensity to consume

Example: If the relationship btwn Y & X takes the form Y = β1 X^β2, then dY/dX = β1β2X^(β2−1)

Therefore, elasticity = (dY/dX)/(Y/X) = β1β2X^(β2−1) / (β1X^β2 / X) = β2

Semilog models

Common functional form: Y = β1 e^(β2 X), where β2 is the proportional change in Y per unit change in X.

This is shown by differentiating: dY/dX = β2 · β1 e^(β2 X) = β2 Y

Therefore, (dY/dX)/Y = β2

Note that this same function can be made linear in params by logging both sides: log Y = log β1 + β2 X

Only the left side is logarithmic in variables (log β1 just means it's logarithmic in the parameter), still making it a semilog model

Economies of scale

In class, there have been a few questions involving economies of scale

The Economist defines economies of scale as factors that cause the average cost of producing something to fall as the volume of output increases

Example: cell phone providers. Smaller companies are unable to compete due to the expensive infrastructure that the market requires (like service towers)

However, these seemingly large costs are negligible for very large firms. Through their ability to make large investments, they are able to exploit the fact that not everyone is able to produce as cheaply as them

With economies of scale, LR average costs go down as Q goes up (opposite in perfectly competitive ones)

This is most commonly found in markets where fixed costs are significant.

With higher output, these fixed costs are insignificant to larger firms. If a small firm is a part of this industry, they would be unable to afford the production level required to stay competitive

In the question on Midterm 2, the function was given as ln(COST) = β1 + β2 ln(Q) + u, with the null hypothesis being that it was a competitive market. You believed that economies of scale were present, estimating a slope of 0.6

As can be seen in the table below, the slope coefficient of this loglog model (β2) tells you the effect of a 1% increase in quantity on the percentage change in costs

On the test question, you found a slope of 0.6, implying that increasing Q by 1% would be accompanied by a 0.6% increase in cost

This would imply that economies of scale are present, since in a perfectly competitive market, the % increase in cost would be equal to the % change in output

This would mean that under the null (economies of scale NOT present), β2 ≥ 1, and under the alternative (economies of scale ARE present), β2 < 1

Note: I don't understand why we use LN sometimes, and LOG sometimes. I figured that they're interchangeable in this context, but I have not yet confirmed it (in these regressions "log" means the natural log, so they are interchangeable). I suggest asking before the test if you're confused
Overview:

Model     | Equation              | Interpretation of β2
Linear    | Y = β1 + β2X          | effect of a ΔX on the level of Y
Semilog A | Y = β1 + β2 log X     | effect of a 1% ΔX on the level of Y
Semilog B | log Y = β1 + β2X      | effect of a ΔX on %ΔY
Log/Log   | log Y = β1 + β2 log X | effect of a 1% ΔX on %ΔY

Anything w/ a log is a percent change

Anything w/o a log is a unit change

Disturbance terms: we've been ignoring them so far

u needs to be an additive term (+u) that satisfies the conditions of the reg model. If this is untrue, least squares reg will not have its normal properties, making tests invalid

EX: In 4.6, the model was nonlinear in variables only, and after the transformation the disturbance was still additive. No problems in that case

But, what happens when we start with a model like Y = β1 X^β2 v (a multiplicative disturbance)?

After taking logs, the reg model is log Y = log β1 + β2 log X + log v, which has the additive disturbance u = log v

v modifies the model by increasing/decreasing Y by a random proportion, not an amount

If v = 1, then the random factor is 0 (as multiplying by 1 doesn't change the model)

This shows that to obtain an additive disturbance term in the logged model, we need to start with a multiplicative one in the original equation

If the term was additive in the original (Y = β1 X^β2 + u), then taking logs would not help, since log(β1 X^β2 + u) cannot be split into separate terms mathematically. You would have to use a nonlinear technique (later)
Comparing linear and logarithmic specifications

Should you model a relationship with a linear or nonlinear function?

If nonlin, what kind?

Sometimes looking at the scatter plot can tell you if it's linear or not, but not always

The problem with choosing btwn the 2 models is that RSS and R² can't be compared between different functional forms of Y

But, if R² is much larger in one function, choose that one

If they're both similar, you can scale the observations of Y so that the RSSs of the lin/log models are directly comparable (Box & Cox)

Making RSS comparable between a linear and log model (Box & Cox):
1. Calculate the geometric mean of the Y values in the sample. This is equal to the exponential of the mean of log Y: Ȳgeo = exp( (1/n) Σ log Yi )

2. Scale the observations on Y by dividing by this figure: Yi* = Yi / Ȳgeo, where Yi* is the scaled value in observation i

3. Regress the linear model using Y* instead of Y, and reg the log model using log Y* instead of log Y, leaving the models otherwise unchanged

The RSSs are now comparable, and the lower value indicates the better fit

Do not use this method to find coefs. This is solely for deciding the preferred model (a sketch follows below)
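Not from the guide: a sketch of the Box & Cox scaling steps on simulated data. The data-generating process here is made up and deliberately log-linear, so the log specification should come out with the lower RSS.

```python
# Illustrative sketch (not from the text) of the scaling steps above:
# divide Y by its geometric mean, refit both specifications, compare RSS.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(1, 10, 100)
Y = 2.0 * X ** 0.8 * np.exp(rng.normal(0, 0.1, 100))   # made-up, log-linear process

def rss(design, target):
    b = np.linalg.lstsq(design, target, rcond=None)[0]
    e = target - design @ b
    return e @ e

Y_star = Y / np.exp(np.log(Y).mean())        # steps 1 & 2: divide by the geometric mean

lin_design = np.column_stack([np.ones_like(X), X])
log_design = np.column_stack([np.ones_like(X), np.log(X)])

RSS_linear = rss(lin_design, Y_star)         # linear model on Y*
RSS_log = rss(log_design, np.log(Y_star))    # log model on log(Y*)
print(RSS_linear, RSS_log)                   # the smaller RSS indicates the better fit
```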

Models w/ quadratic and interactive variables:

These models can be fitted w/ OLS w/o modification

Interpretation of coefs:

Quad: can't follow the normal rule of holding other variables fixed to measure another's effect on Y, because it's not possible to change X2 while keeping X2² fixed

This is also the case in the interactive model, since X2 also appears as X2X3

Quadratic model: Y = β1 + β2X2 + β3X2² + u

Differentiate the quad model: dY/dX2 = β2 + 2β3X2

Viewed this way, the impact of a change in X2 on Y (β2 + 2β3X2) changes with X2

This means that β2 has a different interpretation than in the ordinary model (Y = β1 + β2X2 + u), where β2 is the unqualified effect of a unit change in X2 on Y (the change in Y per unit change in X2)

In the quad model, β2 is the effect of a unit change in X2 on Y for the special case where X2 = 0

For nonzero values of X2, the marginal effect will be different

β3 also has a special interpretation. Rewriting the model as Y = β1 + (β2 + β3X2)X2 + u, β3 is the rate of change (RoC) of the coef of X2 per unit change in X2

Only β1 has a conventional interpretation. It is the value of Y (apart from the random component) when X2 = 0

There's another problem. We've seen that the intercept of a regression usually doesn't have a sensible meaning if X2 = 0 is outside the data range

In the wage-vs-schooling scatter, the quadratic model predicts a wage of $15 with zero schooling. This is not realistic

The linear model is also unrealistic, because it predicts that someone with zero schooling would have to PAY $10 an hour to work

The log model makes the most sense in this example

Why do we stop at quadratics? Why not a cubic? Or one with even more coefs?

Quadratics are justified by the concept of diminishing marginal returns (parabolic shape)

As higher order terms are added, the fit will be improved slightly, but the improvement will be sample specific

Economic theory rarely justifies higher order polys

Interactive explanatory variables

Ex: Y = β1 + β2X2 + β3X3 + β4X2X3 + u

This is lin in par, but not in var

To properly interpret the coefs, rewrite as: Y = β1 + (β2 + β4X3)X2 + β3X3 + u

Makes it explicit that the marginal effect of X2 on Y (β2 + β4X3) depends on X3

Special interpretation: β2: marginal effect of X2 on Y when X3 = 0

Rewrite: Y = β1 + β2X2 + (β3 + β4X2)X3 + u: this shows that the marginal effect of X3 on Y (holding X2 constant) is (β3 + β4X2), and that β3 may be seen as the marginal effect of X3 on Y when X2 = 0

If X3 = 0 is a long way outside of the range of X3 in the sample, the interpretation of the estimate of β2 as an estimate of the marginal effect of X2 when X3 = 0 should be treated with caution.

Sometimes the estimate will be completely implausible, like giving a literal explanation of the y-intercept of a model

We just ran into a similar problem with the interpretation of β2 in the quadratic specification

It's often of interest to compare estimates of the effect of X2 & X3 on Y in models excluding and including the interactive term

Changes in the meanings of β2 & β3 make this difficult

Solution: Rescale X2 & X3 so that they're measured as deviations from their sample means:

X2* = X2 − X̄2

X3* = X3 − X̄3

Subbing these into the original model for X2 and X3:

Y = β1* + β2*X2* + β3*X3* + β4X2*X3* + u

I couldn't figure out how to get the asterisks to go directly above the Xs with the software I used to type these functions. Note that in the functions above and below, the star next to each X (as in X2*) should really sit over the X

Where β2* = β2 + β4X̄3 and β3* = β3 + β4X̄2

Now, the coefs of X2* & X3* give the marginal effect of their variables when the other is held at its sample mean

Rewrite: Y = β1* + (β2* + β4X3*)X2* + β3*X3* + u

It can be seen that β2* gives the marginal effect of X2* (and therefore X2) when X3 is at its sample mean (X3* = 0)

β3* is interpreted in a similar fashion

RESET test (Ramsey): tests for nonlinearity

Adding quad terms of the Xs, and interactive terms, to the specification is one way of investigating the possibility of nonlinearity in Y

If there are a lot of explanatory variables in the model, we might want some sort of evidence of nonlinearity before spending too much time manipulating them

Ramsey's RESET test of functional misspecification is intended to provide a simple indicator

Run the reg in its original form, save the fitted values of the depvar (Yhat)

By definition, Yhat is a linear combination of the X variables, so Yhat² is a linear combo of the squares of the X variables & their interactions

If Yhat² is added to the reg specification, it should pick up quadratic or interactive nonlinearity, without necessarily being highly correlated with any individual X variable (and consuming only one df)

If the t-stat of the coef of Yhat² is significant, some kind of nonlinearity is likely to be present

This does not tell you WHICH kind of nonlinearity is present, and it may fail to detect other types of nonlinearity

However, it's easy to implement and potentially helpful

In principle, we could include higher powers of Yhat, but most don't think this is worthwhile (a sketch follows below)
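Not from the guide: a bare-bones RESET-style check on simulated data with a deliberately nonlinear relationship. The helper ols() and all numbers are made up for illustration.

```python
# Illustrative sketch (not from the text): fit the model, add the squared fitted
# values, and look at the t-stat on that added term.
import numpy as np

def ols(X, y):
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ (X.T @ y)
    e = y - X @ b
    s2 = (e @ e) / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(s2 * XtX_inv))
    return b, se

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 200)
y = 2 + 0.5 * x ** 2 + rng.normal(0, 1, 200)     # true relationship is nonlinear in x

X1 = np.column_stack([np.ones_like(x), x])       # original (misspecified) model
coefs, _ = ols(X1, y)
yhat = X1 @ coefs

X2 = np.column_stack([X1, yhat ** 2])            # add yhat^2, consuming one df
coefs2, se2 = ols(X2, y)
print(coefs2[-1] / se2[-1])   # a large |t| on yhat^2 signals some kind of nonlinearity
```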

Nonlinear Regression: You believe that Y depends on X according to some function that is nonlinear in the parameters (e.g. Y = β1 + β2X^β3 + u), and you want to obtain estimates of the betas, given data on Y & X

Note that such a model cannot be transformed to obtain a linear relationship, so it's not possible to apply the regular reg procedure

But, we can still use the process of minimizing RSS to estimate the params

This nonlinear regression algorithm is a simple method that uses the principle of RSS minimization (a sketch of it follows the steps below)
1) Guess plausible values for the params
2) Calculate the predicted values of Y from the data on X using these values as the params
3) Calculate residuals for each observation and find RSS
4) Make small changes in one or more of your estimates of the params
5) Calculate the new predicted values of Y, residuals, and RSS
6) If the new RSS is smaller than the original, your new estimates of the parameters are better. Take them as your new starting point
7) Repeat steps 4, 5, and 6 again and again until you are unable to reduce RSS any further
8) Conclude that you have minimized RSS, and describe the final estimates of the params as the least squares estimates
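Not from the guide: a sketch of steps 1-8 for a hypothetical model Y = β1 + β2·X^β3 + u (this particular functional form is just an assumed example). It is a crude coordinate search, not a production optimizer.

```python
# Illustrative sketch (not from the text) of the RSS-minimization algorithm above,
# for an assumed nonlinear model Y = b1 + b2 * X**b3 + u. Data are simulated.
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(1, 10, 200)
Y = 4 + 2 * X ** 0.5 + rng.normal(0, 0.3, 200)

def rss(params):
    b1, b2, b3 = params
    return ((Y - (b1 + b2 * X ** b3)) ** 2).sum()

params = np.array([1.0, 1.0, 1.0])        # step 1: plausible starting guesses
step = 0.5
while step > 1e-6:                        # steps 4-7: perturb, keep improvements, repeat
    improved = False
    for i in range(3):
        for direction in (+step, -step):
            trial = params.copy()
            trial[i] += direction
            if rss(trial) < rss(params):  # steps 5-6: smaller RSS -> new starting point
                params = trial
                improved = True
    if not improved:
        step /= 2                         # shrink the size of the "small changes"
print(params, rss(params))                # step 8: final estimates and minimized RSS
```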

CH5: Dummy Variables


Dummy variables (DVs): used for categorical data as opposed to numerical

Assigns a 0 or 1 for true/false

Dummies are treated just like ordinary variables, despite the fact that they only have 2 possible values

Better than having to use more than one reg model within the sample

The example of types of schooling in Shanghai that persists throughout Chapter 5 is great for explaining dummies, and will be used here

General cost FN (for the population, no DVs yet): COST = β1 + β2N + u

β2 is the MC (marg cost) or slope, because COST changes as N (# of students) goes up.

β1 is the FC (fixed cost) or intercept, because it doesn't vary with the # of students

We could write one cost function for regular schools and another for occupational (OCC) schools, but using dummies makes this much easier

Combining multiple qualitative measurements into one function (using dummies)

EX: COST = β1 + δ·OCC + β2N + u, where OCC is a dummy. If the school is occupational, OCC = 1. If not, OCC = 0, negating its associated δ (delta) term

δ is the associated increase in overhead when an OCC school is being looked at (it changes the intercept, but not the slope. This will be addressed later)

From the Stata output of the data we get fitted values for b1, b2, and the estimate of δ

Setting OCC equal to 0 and 1, respectively, we can obtain the implicit cost functions for the two types of schools:

The intercept implies an annual overhead cost of −34,000 Yuan for regular schools. The negative value is not at all realistic, and you should immediately realize that the model is misspec'd (doesn't pass the laugh test)

The N-coef of 331 implies a constant slope (MC) in both categories (fixed later)

Standard errors & hypothesis testing of dummies

Perform a t-test on the DV, with H0: δ = 0, HA: δ ≠ 0

H0: no difference in overhead costs

If the t stat is greater than tcrit, conclude that the special schools are significantly more expensive than regular ones

Standard errors are usually given in the output. Make sure they are included in the appropriate t-tests

More elaborate dummies: extension to more than 2 categories and multiple sets of DVs

Now, we are going to make models with 4 possible categories of schools: General (GEN), Technical (TECH), Skilled Worker (WORKER), and Vocational (VOC)

Pick one reference category (refcat) to which the basic equation applies. You should always start with the dominant or most "normal" category (unless you have good reason to do otherwise)

In the school example, general schools are picked as the first refcat

New model: COST = β1 + δT·TECH + δW·WORKER + δV·VOC + β2N + u

Where each δ = the extra overhead required by that type of special school in addition to the overhead of general ones, and TECH/WORKER/VOC are dummies which will equal 1 when true and zero otherwise

Two dummies cannot equal 1 simultaneously in this model. One will equal 1, the rest zero. This will change later when we use 2 separate qualitative measurements of individual schools (with the RES dummy)

Do not make a DV for the reference category. This is why it's called the omitted category. Note that GEN is the reference category in the case above

Regression results (from the book):

Notice that TECH schools require an additional 154,000 Yuan of overhead over general schools

Overhead: costs not related to level of output or labor

MC of each student: 343 Yuan

Notice that the general school intercept is a negative number again (despite being measured against itself). Something's wrong w/ the model

Finding individual implicit costs of each type of school

Changing the reference category (refcat): replace the dummy and its associated parameter (δ) with ones for your previous omitted category

EX: changing the reference category from GEN to WORKER: drop the WORKER dummy and add a GEN dummy

When the refcat changes:

R², the coefs and t-stats for the non-dummy variables, and the F-stat for the whole equation all stay the same

The dummy coefficients, their standard errors, and the interpretations of their t-tests are the things that change

DV trap:

Happens if you include a DV for the refcat

If it were possible to calculate the coefs, you couldn't interpret them. Dummies change the intercept, and there would be no definition of the base-level intercept (b1)

In general models, there's actually an X1 variable that we have ignored

Ex: Y = β1X1 + β2X2 + u, where X1 is a special variable equal to 1 in every observation, so the β1X1 term is just the intercept

Suppose there are M dummy categories, and you define DVs D1, ..., DM

Since one DV will always equal one, and the rest zero, the sum of the DVs will always be 1

For every observation, the sum of the DVs is therefore equal to the special variable X1

As a consequence, this model is subject to a special case of exact multico, preventing the calculation of the coefs

Multiple sets of DVs

EX: add a residential dummy RES alongside OCC: COST = β1 + δ·OCC + ε·RES + β2N + u (using ε here for the RES coefficient)

This model proposes that schools of any type will have a higher cost if they're in a residential area. Note that we're just using RES/non-RES OCC and GEN schools for simplicity

ε: the extra cost of residential schools

It affects the intercept

The reference category now has 2 dimensions, one for each qualitative characteristic (non-residential general schools)

Note that it is assumed (and intuitive) that the increase in cost for a residential location will be the same for OCC & regular schools

Regression output:

This implies that OCC schools cost 110,000 more Yuan, and residential schools as a whole cost 58,000 more

The 4 combos of OCC and RES can be broken into 4 individual implicit cost functions

* I didn't include the other 2 because they're pretty easy to find, and I think this process has been pretty well-covered. Check pg 238 if you need clarification
Slope DV (interactive):

Drops the assumption that the slope of the reg is the same for each category of the qualitative variable, since it would be unrealistic for MC to be the same for each type of school

Slope DV: N·OCC

Effect: allows the coef of N for OCC schools to be greater than that of regular schools

Model: COST = β1 + δ·OCC + β2N + λ·(N·OCC) + u (using λ here for the slope-dummy coefficient)

Setting OCC equal to zero gets rid of the OCC terms (extra intercept and extra slope) and gives you the original cost function for regular schools: COST = β1 + β2N + u

Setting OCC equal to 1 makes N·OCC = N, so COST = (β1 + δ) + (β2 + λ)N + u

λ is the incremental marginal cost associated w/ OCC schools, just like δ is the incremental overhead cost

The addition of these variables has allowed the OCC and regular schools to have their own slopes (MC) and intercepts (FC) while remaining part of the same function

If the Y-int is negative for a data set with no negative values, it's probably misspec'd. In the earlier model this was likely due to an MC which was a compromise between the slopes of regular and OCC schools

T & F tests can still be used

F-tests of dummies

The joint explanatory power of the intercept and slope dummies can be tested with the normal F-test, comparing RSS when the dummies are included and excluded

H0: δ = λ = 0

HA: at least one ≠ 0

F = [(RSS without dummies − RSS with dummies)/(cost in df)] / [RSS with dummies/(N − K)]

Numerator cost in df: the additional number of parameters estimated (# of DVs)

DVs in log/semilog forms

* I don't think that these will be covered, but there was about a half page on it (230) and knowing how to manipulate logs is important in other areas

In a log model with a dummy (e.g. log Y = β1 + β2 log X + δD + u), the term e^(δD) multiplies Y by e^0 = 1 when D = 0 (reference category) and by e^δ when D = 1 (other category)

If δ is small, e^δ ≈ (1 + δ). This means that Y is a proportion δ larger in the other category than in the reference category

If δ is not small, the proportional difference is (e^δ − 1)

Chow Test: a type of F-test for 2+ subsamples

Should you run 2 separate regs on subsamples A & B, or one on the pooled sample P?

Since the subsample regs have already minimized RSSA & RSSB, pooling them will produce a regression that doesn't fit as well (usually)

This means that RSSP ≥ RSSA + RSSB, where RSSP is the sum of squared residuals in the pooled reg

In general, there will be an improvement (RSSP − RSSA − RSSB) when the sample is split up

However, there's a price to pay when splitting it. K extra df have been used up, since instead of K params for the pooled regression, there are now 2K params

After breaking up the sample, we are still left with (RSSA + RSSB) unexplained sum of squared residuals, and N − 2K df remaining

We can now test whether the improved fit due to splitting the sample is significant w/ a special F-test known as a Chow test

We use the F-stat F = [(RSSP − RSSA − RSSB)/K] / [(RSSA + RSSB)/(N − 2K)]

Which is distributed w/ K and (N − 2K) df under the null (no improvement in fit)

If Fstat > Fcrit, use the separate regressions (because the improvement is significant). (A numeric sketch follows below)
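Not from the guide: a Chow-test sketch on simulated data where the two subsamples really do have different intercepts and slopes, so F should exceed Fcrit.

```python
# Illustrative sketch (not from the text): the Chow test F-stat from pooled and
# subsample regressions, F = ((RSS_P - RSS_A - RSS_B)/K) / ((RSS_A + RSS_B)/(N - 2K)).
import numpy as np
from scipy import stats

def rss(x, y):
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    return e @ e

rng = np.random.default_rng(6)
xA, xB = rng.uniform(0, 10, (2, 60))
yA = 1 + 2.0 * xA + rng.normal(0, 1, 60)     # subsample A
yB = 4 + 0.5 * xB + rng.normal(0, 1, 60)     # subsample B: different intercept & slope

RSS_A, RSS_B = rss(xA, yA), rss(xB, yB)
RSS_P = rss(np.concatenate([xA, xB]), np.concatenate([yA, yB]))   # pooled

K, N = 2, 120
F = ((RSS_P - RSS_A - RSS_B) / K) / ((RSS_A + RSS_B) / (N - 2 * K))
F_crit = stats.f.ppf(0.95, dfn=K, dfd=N - 2 * K)
print(F, F_crit)   # F > F_crit -> the split regressions fit significantly better
```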

DV method vs Chow test

Chow is quick. You just run 3 regs and calculate the test stat

However, it doesn't tell you how the functions differ (if they do)

The DV method gives you more info bc you can perform t-tests on individual dummy coefs. This may show where the FNs differ (if they do)

The DV method takes longer bc you have to define a DV for each intercept and each slope coef

Chapter 6: Specification of Regression Variables


Model Specification: If we knew exactly which variables needed to be included in a relationship, the goal would only be to calculate estimates of their coefs, confidence intervals for these estimates, and so on. However, we can never know if we specified the model correctly. We might be leaving out variables that should be included, or including ones that should not be
1. If you leave out a variable that should be included, the reg estimates will generally (not always) be biased. This makes the SEs of the coefs and the corresponding t-tests generally invalid
2. If you include a variable that should not be included, the coefs are generally (not always) inefficient, but not biased

Consequences of variable specification

True model: Y = β1 + β2X2 + u
- Fitted Ŷ = b1 + b2X2: correct specification, no problems
- Fitted Ŷ = b1 + b2X2 + b3X3: coefs are unbiased (in general), but inefficient; SEs are valid (in general)

True model: Y = β1 + β2X2 + β3X3 + u
- Fitted Ŷ = b1 + b2X2: β3X3 left out. Coefs are biased (in general) and SEs invalid
- Fitted Ŷ = b1 + b2X2 + b3X3: correct specification, no problems

Effect of omitting a relevant variable:

Suppose that Y depends on X2 and X3 according to Y = β1 + β2X2 + β3X3 + u

But you are unaware of the importance of X3. You think the model should be the simple reg Y = β1 + β2X2 + u

You then calculate b2 using the simple-regression expression b2 = Σ(X2i − X̄2)(Yi − Ȳ) / Σ(X2i − X̄2)² instead of the multiple-regression expression

b2 is unbiased only if E(b2) = β2; here E(b2) = β2 + β3 · Σ(X2i − X̄2)(X3i − X̄3) / Σ(X2i − X̄2)²

*proof can be found on p253

When the true model has both regressors but you fit the simple one, b2 is subject to omitted variable bias

If X3 is omitted from the reg model, X2 will appear to have a double effect

It will have a direct effect and a proxy effect by mimicking the effects of X3

The direction of the bias will depend on the signs of β3 and Σ(X2i − X̄2)(X3i − X̄3), which is the numerator of the sample correlation btwn X2 and X3 (the denominator of the corr coef is always positive)

If β3 is positive and the correlation is positive, the bias will be positive, and b2 will tend to overestimate β2

The direction of the bias could just as easily be negative. It depends on the sign of the true coef of the omitted variable, and on the sign of the correlation btwn the included and omitted variables (a simulation sketch follows below)
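Not from the guide: a small simulation of omitted variable bias. Here X3 matters and is positively correlated with X2, so the short regression overestimates β2 on average; all numbers are made up.

```python
# Illustrative sketch (not from the text): simulating omitted variable bias.
import numpy as np

rng = np.random.default_rng(7)
slopes_short, slopes_full = [], []
for _ in range(2000):
    n = 100
    x2 = rng.normal(0, 1, n)
    x3 = 0.8 * x2 + rng.normal(0, 1, n)                  # X3 correlated with X2
    y = 1 + 2.0 * x2 + 1.5 * x3 + rng.normal(0, 1, n)    # true beta2 = 2, beta3 = 1.5

    b_short = np.linalg.lstsq(np.column_stack([np.ones(n), x2]), y, rcond=None)[0]
    b_full = np.linalg.lstsq(np.column_stack([np.ones(n), x2, x3]), y, rcond=None)[0]
    slopes_short.append(b_short[1])
    slopes_full.append(b_full[1])

print(np.mean(slopes_short))   # ~3.2: direct effect 2 plus proxy effect 1.5 * 0.8
print(np.mean(slopes_full))    # ~2.0: unbiased when X3 is included
```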

Invalidation of statistical tests

Omitting a variable that should be included makes the SEs of the coefs and the test stats invalid (generally)

Now suppose that the true model is the simple regression Y = β1 + β2X2 + u, but you think it's the multiple reg model Y = β1 + β2X2 + β3X3 + u. You estimate b2 with the multiple-regression expression instead of the simple one

Generally, adding a redundant variable doesn't cause bias, it just causes inefficient estimation

You could rewrite the true model as Y = β1 + β2X2 + 0·X3 + u

If you regress Y on X2 and X3, b2 will be an unbiased estimator of β2, and b3 will be an unbiased estimator of zero (as long as the reg model assumptions are correct)

Proxy Variables

Used when you are unable to get data on a variable that you think should be included, or if it's too difficult to measure

Ex: Intelligence is vaguely defined, and practically impossible to measure definitively

In this case, it's usually a good idea to use a proxy variable to stand in for the missing variable (instead of simply dropping it)

EX: Since socioeconomic status (SES) isn't easily measurable, use income to stand in

2 good reasons to use a proxy:

Leaving the variable out can cause the reg to suffer from omitted variable bias, making statistical tests invalid

The results from your proxy reg may indirectly shed light on the influence of the missing variable

Testing a linear restriction

Linear restrictions: the parameters conform to a simple linear equation

EX: β2 = β3, or β2 + β3 = 1

Nonlinear restriction: e.g. β2 = β3β4

These procedures only relate to linear restrictions

F-test of a linear restriction

Run the reg in restricted and unrestricted forms, denoting the RSS as RRSS for the restricted model, and URSS for the unrestricted one

The restriction makes it harder to fit the model, so RRSS ≥ URSS, generally being greater

We want to test whether the improvement in fit when going from the restricted to the unrestricted model is significant

If it is, the restriction should be rejected

As in chapter 3, the F-stat is (improvement in fit per extra df used) divided by (remaining RSS per remaining df)

In this case, the improvement is RRSS − URSS, one additional df is used up in the unrestricted model (bc there's one more parameter to estimate), and the RSS remaining is URSS

The F-stat in this case is F(1, N − K) = (RRSS − URSS) / [URSS/(N − K)]   (under H0: the restriction is valid)

F is distributed with 1 & N − K df

First argument for the distribution of the F-stat: we are testing ONE restriction

Second: df in the unrestricted model

T-test of a linear restriction

Suppose your hypothetical restriction sets some linear combination of the β's equal to a scalar λ

Define θ as the departure from the restriction (the linear combination minus λ) and reparameterize the model, so that θ = 0 exactly when the restriction holds

θ will become the coef of one of the variables in the reparameterized model, and a t-test of H0: θ = 0 is effectively a t-test of the restriction

Basically, we're seeing if we need to include the reparameterizing term (θ)

Example on p273

If the estimate of θ is not significantly different from zero, we can drop it and use the restricted version

If it is significantly different, we can't drop it

Multiple restrictions: the F-test can be applied to test whether several restrictions are valid simultaneously

Suppose there are P restrictions

Let URSS be the RSS for the fully unrestricted model

Let RRSS be the RSS for the model where all P restrictions have been imposed

The test stat is now F(P, N − K) = [(RRSS − URSS)/P] / [URSS/(N − K)], where K is the number of params in the original unrestricted version

The t-stat can only be used to test single restrictions in isolation (one at a time)

Zero restrictions

A zero restriction means that a particular param is hypothesized to be zero

Since it's a single restriction in isolation, use the t-test

Special case, no need to reparameterize

The testing of multiple zero restrictions is a special case of testing multiple restrictions

The test of the joint explanatory power of a group of explanatory variables can be thought of in this way

The F-stat for the equation as a whole can also be thought of in this way

Here the unrestricted model is Y = β1 + β2X2 + ... + βKXK + u

The restricted model is Y = β1 + u

Since all of the slope coefs are hypothesized to be zero

If this model is fitted, the OLS estimate of β1 is Ȳ

The residual in observation i is Yi − Ȳ

RRSS is Σ(Yi − Ȳ)², which is the TSS for Y

The F-stat is now F(K − 1, N − K) = [UESS/(K − 1)] / [URSS/(N − K)]

Where URSS & UESS are for the original unrestricted model (a numeric sketch follows below)
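Not from the guide: a numeric sketch of the whole-equation F-stat seen as a test of multiple zero restrictions, where the restricted model is just Y = β1 + u and RRSS equals the TSS of Y. Data are simulated.

```python
# Illustrative sketch (not from the text): F-stat for the whole equation as a test
# of the zero restrictions beta2 = ... = betaK = 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 60
x2, x3 = rng.uniform(0, 10, (2, n))
y = 2 + 1.0 * x2 + 0.5 * x3 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x2, x3])
K = X.shape[1]
b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b

URSS = e @ e
RRSS = ((y - y.mean()) ** 2).sum()        # restricted model Y = b1 only: RRSS = TSS
UESS = RRSS - URSS

F = (UESS / (K - 1)) / (URSS / (n - K))   # = ((RRSS - URSS)/(K-1)) / (URSS/(N-K))
print(F, stats.f.ppf(0.95, dfn=K - 1, dfd=n - K))
```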
