Sunday, May 3, 2020

Inferential Statistics and Their Discontents: Free Sample Paper

Question:

1. Jackson even-numbered Chapter exercises (pp. 220-221; 273-275)
2. What are degrees of freedom? How are they calculated?
3. What do inferential statistics allow you to infer?
4. What is the General Linear Model (GLM)? Why does it matter?
5. Compare and contrast parametric and nonparametric statistics. Why and in what types of cases would you use one over the other?
6. Why is it important to pay attention to the assumptions of the statistical test? What are your options if your dependent variable scores are not normally distributed?

Part II

Part II introduces you to a debate in the field of education between those who support Null Hypothesis Significance Testing (NHST) and those who argue that NHST is poorly suited to most of the questions educators are interested in. Jackson (2012) and Trochim and Donnelly (2006) pretty much follow this model, and Northcentral follows it as well. But, as the authors of the readings for Part II argue, using statistical analyses based on this model may yield very misleading results. You may or may not propose a study that uses alternative models of data analysis and presentation of findings (e.g., confidence intervals and effect sizes) or that supplements NHST with another model. In any case, by learning about alternatives to NHST, you will better understand it and the culture of the field of education. Answer the following questions:

1. What does p = .05 mean? What are some misconceptions about the meaning of p = .05? Why are they wrong? Should all research adhere to the p = .05 standard for significance? Why or why not?
2. Compare and contrast the concepts of effect size and statistical significance.
3. What is the difference between a statistically significant result and a clinically or real-world significant result? Give examples of both.
4. What is NHST? Describe the assumptions of the model.
5. Describe and explain three criticisms of NHST.
6. Describe and explain two alternatives to NHST. What do their proponents consider to be their advantages?

Answer:

Solution: Here we have to write the null and alternative hypotheses for the given test. This is a one-sample t test, and it is two-tailed because the alternative hypothesis is of the "not equal to" form.

Solution: Degrees of freedom are the number of independent quantities that remain free to vary in a statistical calculation (David, 1997). For a one-sample test they are calculated as the sample size minus one; a short code sketch after the next solution illustrates the calculation:

D.F. or degrees of freedom = n - 1, where n = sample size.

Solution: Statistics is divided into two broad parts, descriptive statistics and inferential statistics. Descriptive statistics summarize the variables under study, while inferential statistics let us draw inferences about, or estimate, population parameters on the basis of a sample. Using inferential statistics we can therefore infer the values of different population parameters, and these methods play a significant role in parameter estimation (George, 2001). Inferential statistics also includes hypothesis testing, which we use to evaluate different types of claims: the test tells us whether to reject or fail to reject a given claim. The estimated population parameters are, in turn, helpful for predicting future values of the variables.
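The sketch below is a minimal illustration of the two-tailed one-sample t test from the first solution and of the df = n - 1 calculation; the sample values and the hypothesized mean of 100 are assumptions made only for this example, not data from the Jackson exercise.

    # Minimal sketch of a two-tailed one-sample t test (hypothetical data).
    import numpy as np
    from scipy import stats

    # Hypothetical sample scores, used for illustration only.
    sample = np.array([102, 98, 110, 95, 104, 99, 107, 101])
    mu0 = 100  # hypothesized population mean under H0

    # H0: mu = 100  versus  H1: mu != 100 (two-tailed by default).
    t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0)

    df = len(sample) - 1  # degrees of freedom = n - 1
    print(f"t = {t_stat:.3f}, df = {df}, p = {p_value:.3f}")

If p falls below the chosen level of significance, the null hypothesis of a mean equal to 100 is rejected in favor of the two-sided alternative.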
Solution: When there are two or more variables under study and we want to build a linear model for prediction or for inference about population parameters, we use the general linear model (GLM). The GLM is the framework behind least-squares techniques such as ANOVA, regression, and correlation, and it helps us find the linear association between the dependent variable and the independent variables under study (George, 2001). General linear models play a significant role in statistical data analysis in the social sciences. In a general linear model we work with linear combinations of the variables and examine the relationships among the different pairs of variables. The model is used to estimate population parameters, and the resulting estimates tell us about the variables in a way that supports better decisions. The GLM matters because it is the appropriate tool whenever a linear association exists among the variables; by first checking what type of association is present, linear or otherwise, we can decide which model to use, since in some cases there is no linear relationship but some other type of relationship between the variables.

Solution: Next we compare parametric and nonparametric statistics. In general, inference that is framed in terms of population parameters is called parametric statistics, while inference that does not rely on population parameters is called nonparametric statistics. A parametric test depends entirely on assumptions about the population; when there is no advance information about the population or its parameters, a test of a hypothesis about that population is called a nonparametric test (David, 1997). Parametric statistics make specific assumptions about the population, whereas nonparametric statistics make no such specific assumptions. Parametric methods apply to quantitative variables, while nonparametric methods apply to both quantitative variables and attributes: parametric tests are not used with nominal-scale data, but nonparametric tests can be used with nominal- and ordinal-scale data. When their assumptions hold, parametric tests are more powerful than nonparametric tests, which is why we prefer them in those cases and fall back on nonparametric tests otherwise.

Solution: In statistical inference it is very important to pay attention to the assumptions of a statistical test, because every test is derived under certain conditions and we carry it out as if those conditions were met. If the assumptions are not met, the results of the test will be biased (David, 1997), and decisions based on them will not be useful. The main assumption in many hypothesis tests is normality of the data or of the variables under study: when the dependent variable is not normally distributed, the results of such tests are not valid and cannot be relied on. If the assumptions of a statistical test cannot be satisfied, we cannot use its results as a basis for decisions about the claims or hypotheses; in that case the options include transforming the dependent variable or switching to a nonparametric test that does not require normality.
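To make the choice between a parametric and a nonparametric test concrete, here is a small sketch that checks normality with a Shapiro-Wilk test and then compares two groups with an independent-samples t test if both look roughly normal, or with a Mann-Whitney U test otherwise. The simulated groups and the 0.05 cut-off for the normality check are assumptions made only for this illustration.

    # Sketch: pick a parametric or nonparametric two-group comparison
    # based on a normality check (hypothetical simulated data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.normal(loc=50, scale=10, size=30)    # roughly normal scores
    group_b = rng.exponential(scale=12, size=30) + 40  # clearly skewed scores

    # Shapiro-Wilk normality test for each group.
    normal_a = stats.shapiro(group_a).pvalue > 0.05
    normal_b = stats.shapiro(group_b).pvalue > 0.05

    if normal_a and normal_b:
        # Parametric option: independent-samples t test.
        stat, p = stats.ttest_ind(group_a, group_b)
        test_name = "t test"
    else:
        # Nonparametric option: Mann-Whitney U test on the ranks.
        stat, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
        test_name = "Mann-Whitney U"

    print(f"{test_name}: statistic = {stat:.3f}, p = {p:.3f}")

With skewed scores like those in group_b, the sketch falls back on the rank-based test, which is the kind of option mentioned above when the dependent variable is not normally distributed.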
Solution: Two related quantities are used constantly in inferential statistics: the level of significance (alpha) and the p-value. A p-value of .05 means that, if the null hypothesis were true, there would be a 5% probability of obtaining a result at least as extreme as the one observed purely by chance. Common misconceptions are that p = .05 means the null hypothesis has only a 5% chance of being true, or that the finding has a 95% chance of replicating; these are wrong because the p-value is calculated on the assumption that the null hypothesis is true and says nothing directly about the probability of the hypothesis itself. Most research studies use the 5% level of significance, though 1% or 2% (0.01 or 0.02) is sometimes used instead. The value p = .05 is a convention rather than a universal standard, so not all research needs to adhere to it; the chosen level should reflect the consequences of decision errors in the particular study.

Solution: Here we compare effect size and statistical significance, two different concepts used in inferential statistics. The effect size tells us about the magnitude of the effect found by the test, while statistical significance tells us how reliable the result is in the sense of being unlikely to have arisen by chance alone (Hays, 1973). The effect size is the size of the difference between the groups, whereas statistical significance concerns the probability that a difference at least that large would occur purely by chance. In some situations we report the effect size to describe how large an effect is, in others we report statistical significance to judge whether an effect is present at all, and often we report both and compare them; the code sketch after the NHST solution below illustrates the contrast.

Solution: A statistically significant result is one for which we reject the null hypothesis. Most of the time the null hypothesis is set up as the opposite of the claim we want to support (Babbie, 2009), so rejecting it means retaining the alternative hypothesis, which is the conclusion we wanted to establish; this is what statistical significance means. For example, if we reject the null hypothesis that a shop's average income per year is $5,000, we retain the alternative hypothesis that the income is more than $5,000, and the test is statistically significant. Likewise, if we reject the null hypothesis that the average speed of a specific brand of car is 72 km/h, we retain the alternative that the average speed is less than 72 km/h, and again the test is statistically significant. A clinically or real-world significant result, in contrast, is one whose size matters in practice: a program that raises test scores by ten points has real-world significance, while a statistically significant gain of a fraction of a point, detectable only because the sample was very large, does not. Two approaches are used to reach the decision. In the critical-value approach we compare the critical value with the test statistic: if the critical value is greater than the test statistic, we fail to reject the null hypothesis; otherwise we reject it. In the p-value approach we compare the p-value with alpha: if the p-value is less than alpha, we reject the null hypothesis; otherwise we fail to reject it.

Solution: The abbreviation NHST stands for Null Hypothesis Significance Testing. As the name indicates, this form of hypothesis testing is used to check a null hypothesis at a specified level of significance (Hays, 1973). In most statistical tests we fix the level of significance at a particular alpha and then test the null hypothesis at that level.
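The sketch below illustrates the gap between statistical and practical significance that an NHST verdict alone does not reveal: two very large hypothetical groups differ in mean by only half a point, so the t test returns a tiny p-value even though Cohen's d (computed here with a simple pooled-standard-deviation formula) shows a trivially small effect. The sample sizes, means, and standard deviations are assumptions chosen only for this example.

    # Sketch: a statistically significant but practically trivial difference
    # (hypothetical simulated data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 50_000  # very large samples make tiny effects "significant"
    control = rng.normal(loc=100.0, scale=15.0, size=n)
    treated = rng.normal(loc=100.5, scale=15.0, size=n)  # true difference: 0.5

    t_stat, p_value = stats.ttest_ind(treated, control)

    # Cohen's d with a pooled standard deviation (equal group sizes).
    pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
    cohens_d = (treated.mean() - control.mean()) / pooled_sd

    print(f"p = {p_value:.2e} (statistically significant at alpha = .05)")
    print(f"Cohen's d = {cohens_d:.3f} (a very small effect in practice)")

Reporting the effect size alongside the p-value is what keeps a result like this from being mistaken for a practically important one.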
Solution: In this part we consider criticisms of null hypothesis significance testing. Two kinds of error can occur when we decide about the null hypothesis: a Type I error, the probability of rejecting the null hypothesis when it is in fact true, and a Type II error, the probability of failing to reject the null hypothesis when it is in fact false. These errors can arise for many reasons, including simple miscalculation, and it is important to keep them small so that the results of the statistical tests are not biased (Leonard, 1972). A related criticism is that NHST reduces the analysis to a yes/no verdict that says nothing about the size of the effect, so with a large enough sample even a trivial effect can be declared "significant." Because the Type I and Type II error rates matter so much in evaluating a statistical hypothesis test, the decision about the claim should always take them into account.

Solution: Here we consider two alternatives to NHST. The first is to reduce the decision errors, which we can do by taking proper care in designing and carrying out the hypothesis test. The second is to work at a higher level of confidence and report confidence intervals (Antony, 2003), which proponents consider more reliable and more informative because an interval conveys both the estimated size of the effect and the uncertainty around it. By reducing the errors we improve the performance of the hypothesis test, and by reporting results at a higher confidence level we obtain more reliable estimates for prediction and for future reference; a brief sketch of the confidence-interval approach follows.
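As a sketch of the confidence-interval alternative mentioned above, the code below reports an estimated mean difference between two hypothetical groups together with a 95% t-based (Welch) confidence interval rather than only a reject/fail-to-reject verdict; the simulated data and the 95% level are assumptions made for the example only.

    # Sketch: report an effect estimate with a 95% confidence interval
    # instead of only a significance verdict (hypothetical data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    group_a = rng.normal(loc=75, scale=8, size=40)  # hypothetical scores
    group_b = rng.normal(loc=70, scale=8, size=40)

    diff = group_a.mean() - group_b.mean()

    # Standard error of the difference and Welch-Satterthwaite degrees of freedom.
    se_a, se_b = stats.sem(group_a), stats.sem(group_b)
    se_diff = np.sqrt(se_a**2 + se_b**2)
    df = (se_a**2 + se_b**2) ** 2 / (
        se_a**4 / (len(group_a) - 1) + se_b**4 / (len(group_b) - 1)
    )

    # 95% t-based confidence interval for the mean difference.
    t_crit = stats.t.ppf(0.975, df)
    lower, upper = diff - t_crit * se_diff, diff + t_crit * se_diff

    print(f"Mean difference = {diff:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")

An interval that excludes zero corresponds to a significant test at the 5% level, but unlike the bare decision it also shows how large the difference plausibly is.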

References

Antony, J. (2003). Design of Experiments for Engineers and Scientists. Butterworth, U.S.A.
Babbie, E. R. (2009). The Practice of Social Research (12th ed.). Wadsworth.
David, F., Robert, P., & Roger, P. (1997). Statistics (3rd ed.). W. W. Norton & Company.
Cox, D. R., & Hinkley, D. V. (1979). Theoretical Statistics. Chapman & Hall.
Ferguson, T. (1967). Mathematical Statistics: A Decision Theoretic Approach. New York: Academic Press.
George, C., & Berger, R. L. (2001). Statistical Inference (2nd ed.). Duxbury Press.
Hays, W. L. (1973). Statistics for the Social Sciences. Holt, Rinehart and Winston.
Leonard, J. S. (1972). The Foundations of Statistics (2nd ed.). New York: Dover Publications.