51. The least squares estimates have maximum variance among all linear unbiased estimates. (A simulation sketch follows the options.)
A. True.
B. False.
C. No comments.
D. Depending upon situation.
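By the Gauss–Markov theorem the opposite holds: OLS has minimum variance among linear unbiased estimators (it is BLUE). A minimal simulation sketch, assuming hypothetical data and numpy, compares the OLS slope with another linear unbiased estimator, the two-endpoint slope; all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 30)            # fixed regressor values
beta0, beta1, reps = 2.0, 0.5, 5000

ols, endpoint = [], []
for _ in range(reps):
    y = beta0 + beta1 * x + rng.normal(0, 1, x.size)
    # OLS slope: Cov(x, y) / Var(x)
    xc = x - x.mean()
    ols.append(np.sum(xc * (y - y.mean())) / np.sum(xc ** 2))
    # A competing linear unbiased estimator: slope through the two endpoints
    endpoint.append((y[-1] - y[0]) / (x[-1] - x[0]))

print("Var(OLS slope):     ", np.var(ols))       # the smaller of the two
print("Var(endpoint slope):", np.var(endpoint))  # larger, as Gauss-Markov predicts
```

Both estimators are unbiased for the slope; the simulation simply makes the variance ranking visible.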
52. The reason we need the general linear regression model in deviated form is that the researcher is not always interested in the intercept parameter. When we are interested only in the regression coefficients β1, β2, …, βk, the model can be reduced by taking deviations from the means. (A sketch follows the options.) We estimate β1, β2, …, βk from the deviated model by the
A. Method of moments.
B. Method of maximum likelihood.
C. Moments method.
D. Method of least square.
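A minimal sketch, assuming hypothetical data and numpy, showing that least squares on the mean-deviated model recovers exactly the slope coefficients of the full model with an intercept:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 2))                  # two explanatory variables
y = 1.5 + X @ np.array([0.8, -0.3]) + rng.normal(0, 0.5, n)

# Full model: regress y on [1, X]
X1 = np.column_stack([np.ones(n), X])
full = np.linalg.lstsq(X1, y, rcond=None)[0]

# Deviated model: demean y and X; no intercept column is needed
Xd, yd = X - X.mean(axis=0), y - y.mean()
dev = np.linalg.lstsq(Xd, yd, rcond=None)[0]

print(full[1:])   # slopes from the full model
print(dev)        # identical slopes from the deviated model
```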
53. Data collected on different commodities at one point in time are called
A. Cross-sectional data.
B. Time series data.
C. Primary data.
D. Secondary data.
54. Data collected for one commodity at different points in time are called
A. Cross-sectional data.
B. Time series data.
C. Primary data.
D. Secondary data.
55. Extraneous information is information obtained from a source outside the sample, which is then used for
A. Examination.
B. Projection.
C. Population.
D. Estimation.
56. The ……………………….. information may be available in the following cases:
a) We may have information on the exact value of some parameter from an institutional source, e.g. the taxation law on cigarette consumption specifies that the tax is 70% of the retail price of tobacco manufactures.
b) We may have information on the exact linear relationship between parameters.
A. General.
B. Extraneous.
C. Endogenous.
D. Special.
57. There are three methods through which extraneous information is utilized:
a) Method of transformation of variables.
b) Method of pooling cross-sectional data and time series data.
A. Method of using prior information.
B. Moment method.
C. Method of maximum likelihood.
D. Method of least square.
58. Refers to the case in which two or more explanatory variables in the regression model are highly correlated, making it difficult or impossible to isolate their individual effects on the dependent variable.
A. Correlation Matrix.
B. Multicollinearity.
C. Collinearity.
D. Bicollinearity.
59. With multicollinearity, the estimated OLS coefficients may be statistically
A. Significant.
B. Insignificant.
C. Least.
D. Maximum.
60. Multicollinearity can sometimes be overcome or reduced by collecting more data, by utilizing prior information, by transforming the functional relationship, or by dropping one of the highly
A. Correlated numerical.
B. Correlated parameters.
C. Correlated variables.
D. Correlated values.
61. One possible source of multicollinearity is the data collection method: it can lead to multicollinearity when the investigator samples only a subspace of the K-dimensional space of the
A. Projected variable.
B. Estimated variable.
C. Unknown variable.
D. Predictor variable.
62. Some procedures that can detect the multicollinearity phenomenon (a sketch follows the options):
a) Examination of the correlation matrix.
b) Variance inflation factor.
A. Gauss method.
B. The Farrar-Glauber test.
C. Test Theorem.
D. Examination of rank matrix.
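A minimal sketch of the two listed diagnostics, assuming hypothetical data and numpy: the pairwise correlation matrix, and the variance inflation factor VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the remaining predictors:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.1, n)      # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

print(np.corrcoef(X, rowvar=False))  # examination of the correlation matrix

def vif(X, j):
    """VIF_j = 1 / (1 - R_j^2), R_j^2 from regressing column j on the rest."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    fit = A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    r2 = 1 - np.sum((X[:, j] - fit) ** 2) / np.sum((X[:, j] - X[:, j].mean()) ** 2)
    return 1 / (1 - r2)

print([round(vif(X, j), 1) for j in range(3)])  # VIFs for x1, x2 far above 10
```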
63. In the case of perfect multicollinearity (a sketch follows the options), the least squares estimates
A. Do not exist.
B. Exist.
C. Sometimes exist, sometimes do not.
D. Become very high.
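Under perfect multicollinearity one column of X is an exact linear combination of the others, so X'X is singular and the normal equations have no unique solution. A minimal sketch, assuming hypothetical data and numpy:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x1 = rng.normal(size=n)
x2 = 2 * x1                          # exact linear dependence: x2 = 2 * x1
X = np.column_stack([np.ones(n), x1, x2])

XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))    # 2, not 3: X'X is singular
try:
    np.linalg.inv(XtX)               # a unique OLS estimate does not exist
except np.linalg.LinAlgError as err:
    print("inversion fails:", err)
```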
64. In the case of strong multicollinearity it is possible to obtain OLS estimates, but their standard errors tend to be very large as the degree of correlation between the explanatory variables
A. Decreases.
B. Increases.
C. Sometimes increases and sometimes decreases.
D. Remains constant.
65. Because of large standard errors, the confidence intervals for the relevant population parameters tend to be wider; hence the probability of accepting a false hypothesis
A. Decreases.
B. Increases.
C. Sometimes increases and sometimes decreases.
D. Remains constant.
66. Under strong multicollinearity, the coefficient of multiple determination may also be
A. High.
B. Very high.
C. Very low.
D. Low.
67. In the case of strong multicollinearity, the OLS estimates and their standard errors become very sensitive to slight changes in the
A. Population data.
B. Primary data.
C. Secondary data.
D. Sample data.
68. If the OLS assumption that the variance of the error term is constant for all values of the independent variables does not hold, we face the problem of
A. Heteroscedasticity.
B. Homoscedasticity.
C. Parocedasticity.
D. Herocedasticity.
69. This leads to unbiased but inefficient estimates of the parameters, and to biased estimates of the standard errors.
A. Heteroscedasticity.
B. Homoscedasticity.
C. Parocedasticity.
D. Herocedasticity.
70. Following are a few consequences of using OLS in the presence of ……………………………
a) The least squares estimates will be unbiased.
b) The variances of the OLS coefficients will be incorrect.
A. Herocedasticity.
B. Homoscedasticity.
C. Parocedasticity.
D. Heteroscedasticity.
71. Tests for heteroscedasticity include (a sketch follows the options):
a) The Spearman rank correlation test.
b) The Goldfeld–Quandt test.
A. Run test.
B. Q-test.
C. The Park test.
D. Test statistic.
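A minimal sketch of the Goldfeld–Quandt idea, assuming hypothetical data, numpy, and scipy: order the observations by the suspect regressor, omit a central band, fit OLS separately on the two tails, and compare the residual variances with an F ratio:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 90
x = np.sort(rng.uniform(1, 10, n))
y = 2 + 0.5 * x + rng.normal(0, 0.3 * x)    # error variance grows with x

def rss(x, y):
    """Residual sum of squares from a simple OLS fit of y on [1, x]."""
    A = np.column_stack([np.ones(len(x)), x])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    return np.sum((y - A @ beta) ** 2)

c = 20                                      # central observations omitted
m = (n - c) // 2                            # observations in each tail
rss1, rss2 = rss(x[:m], y[:m]), rss(x[-m:], y[-m:])
F = rss2 / rss1                             # large F suggests heteroscedasticity
df = m - 2
print(F, 1 - stats.f.cdf(F, df, df))        # F statistic and its p-value
```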
72. May be defined as “correlation between members of a series of observations ordered in time or space.”
A. Auto Correlation.
B. Exact Correlation.
C. Inexact Correlation.
D. Simple Correlation.
73. There is a slight difference between autocorrelation and serial correlation: autocorrelation refers to the relationship between lagged values of the same series, while serial correlation refers to the relationship between the current values of one series and the lagged values of another series. (A sketch follows the options.)
A. No comments.
B. False.
C. True.
D. Depending upon situation.
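A minimal sketch contrasting the two, assuming hypothetical series and numpy: lag-1 autocorrelation correlates a series with its own lag, while serial (cross-lag) correlation correlates the current values of one series with lagged values of another:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
u = np.empty(n)
u[0] = rng.normal()
for t in range(1, n):                        # AR(1) series: u_t = 0.7 u_{t-1} + e_t
    u[t] = 0.7 * u[t - 1] + rng.normal()
v = np.roll(u, 2) + rng.normal(0, 0.5, n)    # second series driven by lagged u

# Autocorrelation: the same series against its own lag
print(np.corrcoef(u[1:], u[:-1])[0, 1])      # close to 0.7

# Serial correlation: current v against lagged u
print(np.corrcoef(v[2:], u[:-2])[0, 1])      # strong cross-lag correlation
```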
74. Sources of ……………………………
a) Omitted explanatory variables.
b) Misspecification of the mathematical form of the model.
c) Interpolation in statistical observations.
d) Misspecification of the true random error.
A. Correlation.
B. Auto Correlation.
C. Rank Correlation
D. Simple correlation.
75. When the disturbance term exhibits serial correlation (a sketch follows the options), the values as well as the standard errors of the parameter estimates are
A. Affected.
B. Not affected.
C. Correlated.
D. Not correlated.
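A standard diagnostic here is the Durbin–Watson statistic d = Σ(e_t - e_{t-1})² / Σ e_t², which sits near 2 when the disturbances are uncorrelated and falls toward 0 under positive serial correlation. A minimal sketch, assuming hypothetical data and numpy:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.uniform(0, 10, n)
u = np.empty(n)
u[0] = rng.normal()
for t in range(1, n):                   # positively autocorrelated disturbances
    u[t] = 0.8 * u[t - 1] + rng.normal()
y = 1 + 0.5 * x + u

A = np.column_stack([np.ones(n), x])
e = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]   # OLS residuals, in time order

dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)      # Durbin-Watson statistic
print(dw)                                          # well below 2 here
```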
Answer Key:
51 | B
52 | D
53 | A
54 | B
55 | D
56 | B
57 | A
58 | B
59 | B
60 | C
61 | D
62 | B
63 | A
64 | B
65 | B
66 | A
67 | D
68 | A
69 | A
70 | D
71 | C
72 | A
73 | C
74 | B
75 | A