
Econometrics is the application of mathematics, statistical methods, and computer science to economic data, and is described as the branch of economics that aims to give empirical content to economic relations.[1] More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference."


Econometric Issues:
Multicollinearity:
In multiple regression analysis, the regression coefficients (viz., b1, b2) become less reliable as the degree of correlation between the independent variables (viz., X1, X2) increases. If there is a high degree of correlation between the independent variables, we have what is commonly described as the problem of multicollinearity. In such a situation we should use only one of the correlated independent variables to make our estimate. In fact, adding a second variable, say X2, that is correlated with the first variable, say X1, distorts the values of the regression coefficients. Nevertheless, predictions for the dependent variable can still be made when multicollinearity is present, but in such a situation enough care should be taken in selecting the independent variables so as to ensure that multicollinearity is reduced to a minimum.
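The standard screening step described above can be sketched in a few lines of pure Python. The data below are made up for illustration; with only two regressors, the variance inflation factor (VIF) reduces to 1 / (1 - r^2), where r is their pairwise correlation.

```python
import statistics

def pearson_r(x, y):
    """Pairwise (Pearson) correlation coefficient between two variables."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: X2 is almost a linear function of X1 --
# a textbook multicollinearity case.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]  # roughly 2 * x1

r = pearson_r(x1, x2)

# With exactly two regressors, the auxiliary R^2 is the squared
# pairwise correlation, so VIF = 1 / (1 - r^2).
vif = 1.0 / (1.0 - r ** 2)
print(f"r = {r:.3f}, VIF = {vif:.1f}")  # a VIF above 10 is a common warning sign
```

Here the correlation is near 1, so the VIF explodes, signalling that one of the two variables should be dropped or the model re-specified.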
Heteroscedasticity:
Heteroscedasticity (also spelled heteroskedasticity) refers to the circumstance in
which the variability of a variable is unequal across the range of values of a
second variable that predicts it.
A scatterplot of these variables will often create a cone-like shape, as the
scatter (or variability) of the dependent variable (DV) widens or narrows as the
value of the independent variable (IV) increases. The inverse of
heteroscedasticity is homoscedasticity, which indicates that a DV's variability is
equal across values of an IV.
For example: annual income might be a heteroscedastic variable when
predicted by age, because most teens aren't flying around in G6 jets that they
bought from their own income. More commonly, teen workers earn close to the
minimum wage, so there isn't a lot of variability during the teen years. However,
as teens turn into 20-somethings, and 20-somethings into 30-somethings, some
will tend to shoot up the tax brackets, while others will increase more gradually
(or perhaps not at all, unfortunately). Put simply, the gap between the "haves"
and the "have-nots" is likely to widen with age.
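The income-by-age example can be simulated directly. The following pure-Python sketch uses made-up coefficients and an error whose spread grows with age, then applies the idea behind the Goldfeld-Quandt test: compare residual variance in the low-age half against the high-age half. (For simplicity the true coefficients are used to form residuals, rather than estimated ones.)

```python
import random
import statistics

random.seed(42)

# Hypothetical data: income spread widens with age (heteroscedastic errors).
ages = [random.uniform(18, 60) for _ in range(400)]
incomes = [20 + 1.5 * age + random.gauss(0, 0.5 * age) for age in ages]

# Goldfeld-Quandt idea: sort by the predictor, then compare the error
# variance in the low-age half with the high-age half.
pairs = sorted(zip(ages, incomes))
half = len(pairs) // 2
low = [inc - (20 + 1.5 * age) for age, inc in pairs[:half]]   # residuals, young
high = [inc - (20 + 1.5 * age) for age, inc in pairs[half:]]  # residuals, older

ratio = statistics.variance(high) / statistics.variance(low)
print(f"variance ratio (high/low): {ratio:.2f}")  # well above 1 under heteroscedasticity
```

Under homoscedasticity the ratio would hover near 1; here it comes out far larger, matching the cone-shaped scatterplot described above.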
Autocorrelation:
Autocorrelation is a mathematical representation of the degree of similarity between a given time
series and a lagged version of itself over successive time intervals. It is the
same as calculating the correlation between two different time series, except
that the same time series is used twice - once in its original form and once
lagged one or more time periods.
The term can also be referred to as "lagged correlation" or "serial correlation".
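The definition above translates almost directly into code: correlate the series with a shifted copy of itself. A minimal pure-Python sketch, using an invented trending series:

```python
import statistics

def autocorr(series, lag=1):
    """Correlation between a series and a lagged copy of itself."""
    x, y = series[:-lag], series[lag:]
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# A slowly trending series is strongly correlated with its own lag.
trend = [t + 0.1 * ((-1) ** t) for t in range(50)]
print(f"lag-1 autocorrelation: {autocorr(trend):.3f}")
```

A value near 1 indicates strong positive serial correlation; a pure white-noise series would give a value near 0.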
Endogeneity:
In a statistical model, a parameter or variable is said to be endogenous when
there is a correlation between the parameter or variable and the error term.[1]
Endogeneity can arise as a result of measurement error, autoregression with
autocorrelated errors, simultaneity, and omitted variables. Broadly, either an uncontrolled confounder causing both the independent and dependent variables of a model, or a loop of causality between the independent and dependent variables of a model, leads to endogeneity.
For example, in a simple supply and demand model, when predicting the
quantity demanded in equilibrium, the price is endogenous because producers
change their price in response to demand and consumers change their demand
in response to price. In this case, the price variable is said to have total
endogeneity once the demand and supply curves are known. In contrast, a
change in consumer tastes or preferences would be an exogenous change on
the demand curve.
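The omitted-variables route to endogeneity can be demonstrated with a small simulation (hypothetical data, pure Python): a confounder z drives both x and y, so when z is left out of the model it ends up in the error term, the error becomes correlated with x, and the estimated slope on x is biased away from its true value.

```python
import random

random.seed(0)

def ols_slope(x, y):
    """Simple-regression slope estimate: cov(x, y) / var(x)."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

n = 5000
z = [random.gauss(0, 1) for _ in range(n)]            # unobserved confounder
x = [zi + random.gauss(0, 1) for zi in z]             # x is correlated with z
y = [1.0 * xi + 2.0 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

# Omitting z pushes it into the error term, which is then correlated
# with x: the estimated slope is biased upward, away from the true 1.0.
slope = ols_slope(x, y)
print(f"slope with z omitted: {slope:.2f}")
```

With these made-up parameters the bias is large (the estimate lands near 2 rather than the true 1), which is why endogeneity, unlike heteroscedasticity or autocorrelation, makes the coefficient estimates themselves inconsistent.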
Typical Problems Estimating Econometric Models
If the classical linear regression model (CLRM) doesn't work for your data
because one of its assumptions doesn't hold, then you have to address the
problem before you can finalize your analysis. Fortunately, one of the primary
contributions of econometrics is the development of techniques to address such
problems or other complications with the data that make standard model
estimation difficult or unreliable.
The following table lists the names of the most common estimation issues, a
brief definition of each one, their consequences, typical tools used to detect
them, and commonly accepted methods for resolving each problem.
Problem: High multicollinearity
Definition: Two or more independent variables in a regression model exhibit a close linear relationship.
Consequences: Large standard errors and insignificant t-statistics; coefficient estimates sensitive to minor changes in model specification; nonsensical coefficient signs and magnitudes.
Detection: Pairwise correlation coefficients; variance inflation factor (VIF).
Solutions: 1. Collect additional data. 2. Re-specify the model. 3. Drop redundant variables.

Problem: Heteroskedasticity
Definition: The variance of the error term changes in response to a change in the value of the independent variables.
Consequences: Inefficient coefficient estimates; biased standard errors; unreliable hypothesis tests.
Detection: Park test; Goldfeld-Quandt test; Breusch-Pagan test; White test.
Solutions: 1. Weighted least squares (WLS). 2. Robust standard errors.

Problem: Autocorrelation
Definition: An identifiable relationship (positive or negative) exists between the values of the error in one period and the values of the error in another period.
Consequences: Inefficient coefficient estimates; biased standard errors; unreliable hypothesis tests.
Detection: Geary or runs test; Durbin-Watson test; Breusch-Godfrey test.
Solutions: 1. Cochrane-Orcutt transformation. 2. Prais-Winsten transformation. 3. Newey-West robust standard errors.
Types of Statistical Tests
Now that you have looked at the distribution of your data and perhaps
conducted some descriptive statistics to find out the mean, median or mode, it
is time to make some inferences about the data. As previously covered in the
module, inferential statistics are the set of statistical tests we use to make
inferences about data. These statistical tests allow us to make inferences
because they can tell us if the pattern we are observing is real or just due to
chance.
How do you know what kind of test to use?
Types of statistical tests: There is a wide range of statistical tests. The decision of which statistical test to use depends on the research design, the distribution of the data, and the type of variable. In general, if the data are normally distributed you will choose from parametric tests; if the data are non-normal you choose from the set of non-parametric tests. Below is a table listing just a few common statistical tests and their uses.
Correlational: these tests look for an association between variables.
Pearson correlation: tests for the strength of the association between two continuous variables.
Spearman correlation: tests for the strength of the association between two ordinal variables (does not rely on the assumption of normally distributed data).
Chi-square: tests for the strength of the association between two categorical variables.

Comparison of Means: these tests look for the difference between the means of variables.
Paired t-test: tests for a difference between two related variables.
Independent t-test: tests for a difference between two independent variables.
ANOVA: tests the difference between group means after any other variance in the outcome variable is accounted for.

Regression: these tests assess whether change in one variable predicts change in another variable.
Simple regression: tests how change in the predictor variable predicts the level of change in the outcome variable.
Multiple regression: tests how change in the combination of two or more predictor variables predicts the level of change in the outcome variable.

Non-parametric: these tests are used when the data do not meet the assumptions required for parametric tests.
Wilcoxon rank-sum test: tests for a difference between two independent variables; takes into account the magnitude and direction of the difference.
Wilcoxon signed-rank test: tests for a difference between two related variables; takes into account the magnitude and direction of the difference.
Sign test: tests whether two related variables are different; ignores the magnitude of change and only takes into account the direction.
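The contrast between Pearson and Spearman correlation is worth a concrete sketch: Spearman correlation is simply Pearson correlation computed on the ranks of the data (the version below assumes no ties). On a monotone but non-linear relationship, Spearman reports a perfect association while Pearson does not.

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman(x, y):
    """Spearman correlation: Pearson applied to ranks (assumes no ties)."""
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]
    return pearson(rank(x), rank(y))

# Made-up monotone, non-linear data: y = x**3.
x = [1, 2, 3, 4, 5]
y = [1, 8, 27, 64, 125]
print(pearson(x, y), spearman(x, y))  # Spearman is exactly 1.0, Pearson is less
```

This is why Spearman is the usual choice for ordinal data or when normality cannot be assumed, as the table notes.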
