November 6th, 2015

How to Use Regression Analysis Effectively

So, you want to use regression analysis in your paper? Statistical modeling can add great authority to your paper and to the conclusions you draw, but it is also easy to use incorrectly. The worst-case scenario occurs when you think you have done everything right and reach a strong conclusion based on an improperly conceived model. This guide presents a series of suggestions and considerations to take into account before you decide to use regression analysis in your paper. The best regression model is built on a strong theoretical foundation that demonstrates not just that A and B are related, but why A and B are related. Before you start, ask yourself two important questions: is your research question a good fit for regression analysis, and do you have access to good data?

1. Is Your Research Question a Good Fit for Regression Analysis?

Whether regression is a good fit depends on many factors. Are you trying to explain something that is primarily described by numerical values? This is a key question to ask yourself before you decide to use regression. Although there are ways to use regression analysis to describe non-numerical outcomes (e.g., dichotomous yes/no or probabilistic outcomes), those models are more complicated, and you will need a much deeper understanding of the underlying principles of regression to use them effectively. Before you start, consider whether or not your dependent variable is numerical. Some examples:
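Annual income, a test score, or a person's weight is numerical; whether someone votes yes or no is dichotomous. As a minimal sketch of the difference, assuming synthetic data and the Python statsmodels library (neither of which comes from the article itself), a numerical outcome can be modeled with ordinary least squares, while a dichotomous outcome calls for a logistic model instead:

```python
# Minimal sketch (synthetic data): OLS for a numerical outcome,
# logistic regression for a dichotomous one.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = sm.add_constant(x)                       # add an intercept column

# Numerical dependent variable: ordinary least squares is appropriate.
y_numeric = 2.0 * x + rng.normal(size=200)
ols_fit = sm.OLS(y_numeric, X).fit()
print(ols_fit.params)

# Dichotomous (yes/no) dependent variable: a logistic model is needed,
# and its coefficients are interpreted on the log-odds scale.
y_binary = (x + rng.normal(size=200) > 0).astype(int)
logit_fit = sm.Logit(y_binary, X).fit(disp=0)
print(logit_fit.params)
```

The point of the contrast is simply that a non-numerical outcome requires a different model family and a different interpretation of the coefficients, which is part of what makes such outcomes harder to handle well.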
At the same time, you need to make sure that there is sufficient variation in your dependent variable and that the variation follows a roughly normal pattern. For example, you would have a problem if you tried to predict the likelihood of someone being elected president, because almost no one is elected president; there is virtually no variation in the dependent variable.

2. Do You Have Access to Good Data?

Before you can conduct any type of analysis, you need a good data set. Not all data sets are suited to regression analysis without considerable manipulation. Some things to consider before you decide to use regression: whether each variable is measured numerically, whether it varies enough, and whether that variation follows a roughly normal pattern.
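As a rough illustration of this kind of screening, here is a sketch that checks a candidate dependent variable for adequate spread and an approximately normal shape; the DataFrame, the "outcome" column, and the synthetic data are all hypothetical rather than taken from the article:

```python
# Rough sketch: screen a candidate dependent variable for variation
# and approximate normality before committing to a regression model.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({"outcome": rng.normal(loc=50, scale=10, size=300)})

print(df["outcome"].describe())      # look for adequate spread, not a near-constant
print(df["outcome"].nunique())       # very few distinct values means little to explain

# Skewness and a normality test give a quick read on the distribution's shape.
print(stats.skew(df["outcome"]))
print(stats.shapiro(df["outcome"]))  # a small p-value suggests non-normality
```

If the variable is nearly constant or heavily skewed, a regression on it is unlikely to be informative without a transformation or a different modeling approach.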
Keep in mind that your independent variables need to meet the same criteria for normality and variability as your dependent variable. Once you decide to proceed with a regression model, there are three key concepts to keep in mind as you design it, each of which helps you avoid an easily preventable mistake that could send your conclusions off track: parsimony, internal validity, and multicollinearity.
Each is described in more detail below.

Parsimony

In statistics, the principle of parsimony holds that you should prefer the simplest model with the fewest independent variables whenever a model with more variables offers only slightly more explanatory value. In other words, only add a variable to a model if it meaningfully increases the model's ability to explain the outcome. If you add too many variables, you can unwittingly introduce major problems into your analysis. In particular, keep in mind that R² never decreases (and almost always increases) as new variables are added, so if you examine R² alone you can be duped into thinking you have a great model simply by dumping in more and more predictor variables. There are two good ways to address this problem: use adjusted R², which penalizes additional predictors, to compare models with different numbers of variables, and use stepwise regression to analyze the explanatory impact of each variable as it is added to the model.
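To see the difference in practice, here is a minimal sketch with synthetic data (an assumption of mine, not an example from the article): a model with one genuinely predictive variable is compared against a model padded with five unrelated predictors, and raw R² versus adjusted R² are read off both fits:

```python
# Minimal sketch: raw R-squared rewards padding a model with useless
# predictors, while adjusted R-squared penalizes them.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
noise_vars = rng.normal(size=(n, 5))          # predictors unrelated to y
y = 3.0 * x1 + rng.normal(size=n)

small = sm.OLS(y, sm.add_constant(x1)).fit()
big = sm.OLS(y, sm.add_constant(np.column_stack([x1, noise_vars]))).fit()

print(small.rsquared, small.rsquared_adj)
print(big.rsquared, big.rsquared_adj)
```

The padded model's raw R² will be at least as high as the smaller model's, but its adjusted R² will typically be lower, which is exactly the signal parsimony asks you to look for.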
A good rule of thumb as you consider different models is that you should always have a good reason to add a predictor variable, and if you can't come up with a good theoretical explanation for why A influences B, then leave A out!

Internal Validity

Internal validity is the degree to which one factor can be said to cause another, judged against three basic criteria: the two factors actually vary together (covariation), the cause precedes the effect (temporal precedence), and the relationship cannot be explained away by some third factor (non-spuriousness).
In many cases, internal validity becomes an issue in the form of a "chicken and egg" problem. For example, suppose you are considering the relationship between obesity and depression (a common example). If you want to include depression as an independent variable explaining obesity, you first need to ask: does depression lead to obesity, or does obesity lead to depression? If you have no clear theoretical guidance showing that depression usually precedes obesity (temporal precedence), you introduce a significant problem into your model if the relationship in fact runs the other way, with depression being the result of obesity. Therefore, as you craft your model it is important to have a theoretical basis for the inclusion of each variable.

Multicollinearity

Multicollinearity occurs when the independent variables in a multiple regression model are highly correlated with one another. This is a problem in several ways: most importantly, the model cannot cleanly separate the contributions of predictors that move together, so it becomes difficult to discern the individual effect of each one on the outcome.
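One practical way to screen for this problem is to look at pairwise correlations and variance inflation factors (VIFs); the sketch below uses statsmodels with synthetic data and hypothetical variable names, so treat it as an illustration rather than anything from the original article:

```python
# Rough sketch: detect multicollinearity among predictors using
# pairwise correlations and variance inflation factors (VIFs).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
n = 300
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)   # nearly a copy of x1
x3 = rng.normal(size=n)                   # unrelated predictor
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

print(X.corr())                           # very high pairwise correlation is a warning sign

X_const = sm.add_constant(X)
for i, col in enumerate(X_const.columns):
    if col == "const":
        continue
    print(col, variance_inflation_factor(X_const.values, i))
# A common rule of thumb treats VIFs above roughly 5-10 as problematic.
```

High correlation between two named predictors (like the height and weight example below) is the simplest case; VIFs also catch redundancy that only shows up across several variables taken together.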
Variables that effectively measure the same thing are going to be highly collinear. To see this, imagine converting categorical data into a series of binary variables. For example, say we have a variable measuring memory, where respondents can choose very good, average, or poor as a response. One way to use this data in a regression model is to convert it into three dichotomous (yes/no) variables indicating a person's response. However, if you then include all three dichotomous variables in your model, you will have a big problem: they become perfectly multicollinear, because the three indicators always sum to one, so any one of them is completely determined by the other two. Someone who indicated a very good memory has, by default, also indicated that they do not have a poor memory; all three variables measure the same thing, a person's memory. (The usual fix is to omit one category as a reference.) Another common example is the use of height and weight variables. Although the two variables measure different things, broadly speaking both can be said to measure a person's body size, and they will almost always be highly correlated. As a result, if both are included as predictors in a model, it can be difficult to discern the effect that each variable individually has on the outcome (as measured by its coefficient). Thus, as you build your model, be aware of the potentially confounding impact of using highly similar predictor variables. In an ideal model, the independent variables have little or no correlation with each other but a high correlation with the dependent variable.

Conclusion: Use Regression Effectively by Keeping it Simple

Regression analysis can be a powerful explanatory tool and a highly persuasive way of demonstrating relationships between complex phenomena, but it is also easy to misuse if you are not an expert statistician. If you decide to use regression analysis, don't ask it to do too much: don't force your data to explain something that you otherwise can't explain! Moreover, regression should only be used where it is appropriate and where there is sufficient quantity and quality of data to give the analysis meaning beyond your sample. If you can't generalize beyond your sample, you really haven't explained anything at all. Lastly, always keep in mind that the best regression model is based on a strong theoretical foundation that demonstrates not just that A and B are related, but why A and B are related. If you keep all of these things in mind, you will be on your way to crafting a powerful and persuasive argument.