CRDW Test

The term “CRDW” (Cointegrating Regression Durbin-Watson) is not a separate estimation method; it simply combines two familiar concepts.

First is the cointegrating regression between the non-stationary series. Then the Durbin-Watson statistic of the residuals from that regression is used to judge whether those residuals are stationary, i.e., whether the series are cointegrated.

Below is a step-by-step approach that combines the Engle-Granger two-step procedure with the Durbin-Watson test for serial correlation in the residuals. Here are the steps:

Step 1: Formulate the cointegration hypothesis

Start by specifying the cointegration hypothesis. In cointegration analysis, the null hypothesis typically assumes no cointegration, meaning the variables do not move together over time. The alternative hypothesis assumes the presence of cointegration, indicating a long-run relationship between the variables.

H0: No cointegration (the variables do not move together in the long run)

Ha: Cointegration exists (the variables share a long-run relationship)

 

Step 2: Estimate the cointegration relationship

Apply the Engle-Granger two-step procedure to estimate the cointegration relationship. This involves the following steps:

 

Select two or more variables that are believed to be cointegrated and confirm that each is integrated of order one, I(1).

Estimate the cointegrating (long-run) regression of one variable on the other(s) by OLS.

Calculate the residuals from this regression.

Step 3: Perform the Durbin-Watson test

Conduct the Durbin-Watson test on the residuals obtained in Step 2 to check for serial correlation. The Durbin-Watson statistic assesses whether the residuals are autocorrelated; strong positive autocorrelation suggests that the residuals are non-stationary, which would undermine the cointegration results.
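As an illustration of Steps 2 and 3, here is a minimal Python sketch using statsmodels and simulated data (the series and the data-generating process are purely hypothetical):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200))         # simulated I(1) regressor (random walk)
y = 2.0 + 0.5 * x + rng.normal(size=200)    # cointegrated with x by construction

# Step 2: estimate the cointegrating regression y_t = a + b*x_t + e_t by OLS
ols_res = sm.OLS(y, sm.add_constant(x)).fit()
residuals = ols_res.resid

# Step 3: Durbin-Watson statistic computed on the residuals
dw = durbin_watson(residuals)
print(f"DW on residuals: {dw:.3f}, R-squared: {ols_res.rsquared:.3f}")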

 

Step 4: Interpret the Durbin-Watson test results

The Durbin-Watson test statistic ranges from 0 to 4. A value around 2 indicates no serial correlation, while values significantly different from 2 suggest the presence of serial correlation. In the cointegration context, a residual DW statistic close to 0 signals highly persistent (near unit-root) residuals, which is evidence against cointegration. The test statistic can be compared to critical values from a Durbin-Watson table or a statistical software output to determine the significance.

 

Step 5: Draw conclusions about cointegration

Based on the results obtained from the Engle-Granger procedure and the Durbin-Watson test, draw conclusions about the presence or absence of cointegration between the variables. If the residual Durbin-Watson statistic is well above zero (ideally close to 2, indicating no serial correlation) and the estimated long-run relationship is statistically significant, you may conclude that cointegration exists.

Example

 

Let there be two non-stationary variables, Xt ~ I(1) and Yt ~ I(1), both integrated of order one.

Yt = α + βXt + εt

Obtain the DW statistic from the OLS fit of Yt = α + βXt + εt; this is the CRDW statistic.

Now let’s form the hypothesis about the residuals:

H0: the residuals et have a unit root, i.e., et ~ I(1), so Xt and Yt are not cointegrated

Rule of thumb

If CRDW < R², cointegration is unlikely; do not reject H0.

If CRDW > R², cointegration may be present; reject H0.
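Continuing the Python sketch from Step 3 (reusing the hypothetical dw and ols_res objects), the rule of thumb can be applied as below; for formal inference one would compare the CRDW statistic with tabulated critical values rather than with R²:

# Rough rule of thumb: compare the residual DW statistic with R-squared
if dw > ols_res.rsquared:
    print("CRDW > R^2: reject H0, cointegration may be present")
else:
    print("CRDW < R^2: do not reject H0, cointegration is unlikely")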

Further resources:

R | EViews

SARG Test

The SARG test (Sargan’s test) is a statistical test used in econometrics to assess the validity of the instruments in an instrumental variables (IV) regression model. It checks whether the instruments are exogenous (uncorrelated with the error term) by testing the model’s overidentifying restrictions. Here’s a step-by-step approach to conducting the SARG test:

 

Step 1: Set up the IV regression model

Start by specifying your econometric model with instrumental variables. The general form of an IV regression model is:

 

Y = Xβ + e

 

where Y is the dependent variable, X is the matrix of endogenous explanatory variables, β is the vector of coefficients to be estimated, and e is the error term. The model also includes instrumental variables, Z, which are used to address endogeneity.

 

Step 2: Estimate the IV regression model

Use appropriate estimation techniques, such as two-stage least squares (2SLS), to estimate the IV regression model. This involves performing two stages of regression. In the first stage, regress the endogenous variables (X) on the instrumental variables (Z) to obtain the predicted values of X. In the second stage, regress the dependent variable (Y) on the predicted values of X and other exogenous variables.

 

Step 3: Obtain the residuals

Calculate the 2SLS residuals (e_hat). These are the differences between the observed values of the dependent variable and the values predicted using the 2SLS coefficient estimates together with the original values of X (not the first-stage fitted values).
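Here is a minimal Python sketch of Steps 2 and 3 using statsmodels, with one endogenous regressor, two instruments and simulated data (all variable names and the data-generating process are hypothetical):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=(n, 2))                                  # two excluded instruments
u = rng.normal(size=n)                                       # structural error
x = z @ np.array([1.0, 0.5]) + 0.8 * u + rng.normal(size=n)  # endogenous regressor
y = 1.0 + 2.0 * x + u                                        # structural equation

X = sm.add_constant(x)    # regressors: [1, x]
Z = sm.add_constant(z)    # full instrument set: [1, z1, z2]

# Step 2, first stage: regress x on the instruments and keep the fitted values
x_hat = sm.OLS(x, Z).fit().fittedvalues

# Step 2, second stage: regress y on the fitted values to obtain the 2SLS coefficients
beta_2sls = sm.OLS(y, sm.add_constant(x_hat)).fit().params

# Step 3: 2SLS residuals use the ORIGINAL x, not the first-stage fitted values
e_hat = y - X @ beta_2sls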

 

Step 4: Collect the full instrument set

Gather the complete instrument matrix (Z): the excluded instruments together with the exogenous regressors of the structural equation, including the constant.

Step 5: Run an auxiliary regression

Run an auxiliary regression by regressing the residuals (e_hat) from Step 3 on the full instrument set (Z) from Step 4. The auxiliary regression has the following form:

e_hat = Z * δ + v

 

where δ is the coefficient vector to be estimated and v is the error term.
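Continuing the sketch above (reusing the hypothetical e_hat and Z), the auxiliary regression is a single OLS call:

# Auxiliary regression: 2SLS residuals on the full instrument set
aux = sm.OLS(e_hat, Z).fit()
print(f"Auxiliary R-squared: {aux.rsquared:.4f}")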

 

Step 6: Perform the SARG test

The SARG test statistic is obtained from the auxiliary regression: multiply the sample size by the R-squared of that regression. Under the null hypothesis of valid (exogenous) instruments, it follows a chi-square distribution with degrees of freedom equal to the number of overidentifying restrictions (the number of instruments minus the number of endogenous regressors). The formula is:

SARG test statistic = (Sample size) * R-squared

 

Step 7: Compare the test statistic with the critical value

Compare the SARG test statistic obtained in Step 6 with the critical value from the chi-square distribution for the desired level of significance and the degrees of freedom given in Step 6. If the test statistic is greater than the critical value, you reject the null hypothesis that the instruments are valid (i.e., exogenous) and conclude that at least one of the overidentifying restrictions fails, i.e., at least one instrument appears to be correlated with the error term.
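A sketch of Steps 6 and 7, continuing from the previous snippets (n, aux, Z and X are the hypothetical objects defined earlier):

from scipy import stats

sargan_stat = n * aux.rsquared
df = Z.shape[1] - X.shape[1]             # overidentifying restrictions: 3 - 2 = 1 here
crit = stats.chi2.ppf(0.95, df)          # 5% critical value
p_value = stats.chi2.sf(sargan_stat, df)
print(f"Sargan statistic: {sargan_stat:.3f}, 5% critical value: {crit:.3f}, p-value: {p_value:.3f}")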

Note: It’s essential to follow appropriate assumptions and conditions for instrumental variables regression, such as instrument relevance and exogeneity, as violating these assumptions can lead to invalid test results.

Remember, this is a general step-by-step approach to conducting the SARG test. The specific implementation may vary depending on the software or statistical package you are using for your analysis.

Further resources:

EViews | STATA

KPSS Test

The KPSS (Kwiatkowski-Phillips-Schmidt-Shin) test is a statistical test used in econometrics to assess the stationarity of a time series. It helps determine whether a series has a unit root (indicating non-stationarity) or is stationary. Here’s a step-by-step approach to conducting the KPSS test:

 

Step 1: Define the null and alternative hypotheses

The null hypothesis (H0) for the KPSS test is that the time series is stationary (i.e., it does not have a unit root). The alternative hypothesis (Ha) is that the series is non-stationary.

 

H0: The series is stationary (no unit root).

Ha: The series is non-stationary (has a unit root).

 

Step 2: Choose the lag length

Decide on the number of lags (the bandwidth) to use when estimating the long-run variance of the residuals in Step 5. Common choices are rule-of-thumb bandwidths such as 4*(T/100)^(1/4), where T is the sample size, or an automatic (Newey-West type) bandwidth selection available in most statistical packages.

 

Step 3: Specify the regression model

Set up the regression used to test for stationarity. For level stationarity, regress the series on a constant only; for trend stationarity, also include a linear time trend. The general form of the KPSS test regression is:

yt = δ0 + δ1 * t + εt

where yt represents the time series, t is a time trend, and εt is the error term. The residuals of this regression are the input to the KPSS test statistic.

 

Step 4: Estimate the regression model

Estimate the regression model using ordinary least squares (OLS). Obtain the coefficient estimates and, most importantly, the residuals et = yt − δ0_hat − δ1_hat * t, which are used to build the test statistic.
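A minimal Python sketch of Steps 3 and 4 for the trend-stationarity case, using statsmodels and simulated data (the series and its data-generating process are hypothetical):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 300
y = 0.05 * np.arange(T) + rng.normal(size=T)   # trend-stationary series by construction

trend = np.arange(1, T + 1)
X = sm.add_constant(trend)                     # regressors: [1, t]
e = sm.OLS(y, X).fit().resid                   # residuals of the level/trend regression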

 

Step 5: Calculate the test statistic

Calculate the test statistic, which is based on the partial sums of the residuals from the estimated regression. Let St = e1 + e2 + … + et denote the partial sum of the residuals up to time t. The KPSS test statistic is given by:

KPSS test statistic = Σ St^2 / (T^2 * σ_LR^2)

where T is the number of observations and σ_LR^2 is a consistent (Newey-West/HAC) estimate of the long-run variance of the residuals, computed with the lag length chosen in Step 2.
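Continuing the sketch above (reusing the hypothetical residuals e and sample size T), the statistic can be computed by hand with a Bartlett-kernel long-run variance:

S = np.cumsum(e)                                # partial sums S_t of the residuals
lags = int(np.floor(4 * (T / 100.0) ** 0.25))   # rule-of-thumb bandwidth from Step 2

# Newey-West (Bartlett kernel) estimate of the long-run variance of the residuals
s2 = np.sum(e ** 2) / T
for l in range(1, lags + 1):
    w = 1.0 - l / (lags + 1.0)                  # Bartlett weights
    s2 += 2.0 * w * np.sum(e[l:] * e[:-l]) / T

kpss_stat = np.sum(S ** 2) / (T ** 2 * s2)
print(f"KPSS statistic: {kpss_stat:.4f}")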

 

Step 6: Determine the critical value

Determine the critical value for the test statistic based on the desired level of significance (e.g., 5% or 1%). The critical values are available in statistical tables or can be obtained from statistical software packages.

 

Step 7: Compare the test statistic with the critical value

Compare the calculated test statistic from Step 5 with the critical value from Step 6. If the test statistic is greater than the critical value, you reject the null hypothesis of stationarity (H0) and conclude that the series is non-stationary. If the test statistic is less than the critical value, you fail to reject the null hypothesis and conclude that the series is stationary.
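In practice, Steps 2 to 7 are available as a single function call. A sketch using statsmodels on the simulated series y from above (regression="ct" selects the version with a trend, "c" the level-only version):

from statsmodels.tsa.stattools import kpss

stat, p_value, nlags, crit_values = kpss(y, regression="ct", nlags="auto")
print(f"KPSS statistic: {stat:.4f}, p-value: {p_value:.4f}")
print("Critical values:", crit_values)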

 

Note: The KPSS test allows for serial correlation in the error term through the long-run variance estimator, so the main practical concern is the choice of lag length (bandwidth) in Step 2, which can noticeably affect the test statistic. It’s advisable to consider different lag lengths and assess the stability of the test results.

 

This step-by-step approach outlines the general process of conducting the KPSS test. The implementation may vary depending on the statistical software or package you are using for your analysis.

Further resources:

Python | R | EViews | Gretl | STATA

 

It’s important to note that the above steps provide a general framework for these tests. However, depending on your specific research question and data, additional steps or alternative tests may be required. If you need assistance with your research or have any query, you can reach us by email at learneconometricsfast@gmail.com or on WhatsApp at +91 8820 490 289.

Thank You
