


STATISTICS QUESTIONS FROM YOU FOR THE STATS MAKE ME CRY GUY!


This page features questions previously submitted by users on the "Ask the Stats Make Me Cry Guy" page. Although we now use the forum for these questions instead, I decided to leave these posted so that the information remains available!

Entries in Alpha (2)

Thursday
Dec092010

How do I determine if a questionnaire I want to use has 'good psychometric properties'? (Jessica, United Kingdom)


Thanks for the great question, Jessica! Typically, when the term psychometrics is used in the context of survey research, it is in reference to a survey instrument's reliability. The most common statistic used to determine a survey instrument's reliability is Cronbach's alpha. Cronbach's alpha evaluates how much individual survey items covary with one another in predicting a single construct. In English, that means it is a test of how well a group of survey questions measures the construct it is intended to represent.
As an example, if an assessment of depressive symptoms contains 10 items, a Cronbach's alpha for those 10 items is a measure of how well the group of those 10 items (as a whole) represents a respondent's level of depressive symptoms. If you've not yet collected data, the best way to determine an instrument's psychometric properties is to review previous studies that have used the survey and calculate the average Cronbach's alpha among them. Cronbach's alpha typically ranges from 0 to 1 (although negative values are possible, they are usually meaningless), with values closer to one indicating stronger reliability. There is no official value that indicates strong reliability, but a review of the literature does show some general conventions on the topic.
Commonly, a Cronbach's alpha in the range of .70 to .79 is considered adequate, a value in the range of .80 to .89 is considered good, and a Cronbach's alpha in the range of .90 to .99 is considered excellent (an alpha of 1.00 is most likely an error or an indication that something is wrong with your data). Another test of reliability is test-retest reliability, which uses a correlation to test for agreement between two administrations of the same measure. If you've already collected your data, the statistic can generally be obtained easily using any statistical program (such as SPSS or SAS). I hope that was helpful, and please keep the questions coming!
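If you'd rather compute it yourself than use SPSS or SAS, the standard formula is straightforward to code. Here's a minimal sketch in Python (using numpy; the function name and example data are my own, not from any particular package):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) matrix of scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 4 respondents answering a 5-item scale
scores = np.array([
    [3, 4, 3, 4, 3],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [1, 2, 1, 1, 2],
])
print(round(cronbach_alpha(scores), 2))
```

Note that perfectly redundant items (every item identical) would yield exactly 1.00, which is why an alpha of 1.00 usually signals a data problem rather than a wonderfully reliable scale.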

Thursday
May202010

Data Analysis Question: When dealing with significance levels, should I use p < .001 or p < .05? Also, should it change between tests or stay consistent throughout my analyses? (Matt, Chicago, IL)

In statistics, the threshold at which one declares a p-value significant is known as "alpha". The most common alpha levels are p < .05, p < .01, and p < .001. The decision about which to use is a difficult one, and is somewhat subjective. Essentially, alpha is the degree of chance a researcher is willing to accept that the inferences drawn from any given analysis are made in error. In other words, if I choose an alpha of .05, I'm accepting that there is approximately a 5% chance that I'll draw conclusions from my results that are an inaccurate representation of the population I am seeking to analyze. An alpha of .05 is the most commonly used, although lower alpha levels (such as p < .001) are considered more conservative. There is no right or wrong answer about which to choose, although it is typically encouraged to keep alpha consistent across every analysis conducted in a given project, and to make the decision about which level to use prior to running your analyses.
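That "5% chance of error" interpretation can be seen directly in simulation: if you run many tests on data where there is truly no difference, roughly 5% of them will come out significant at alpha = .05 purely by chance. A minimal sketch in Python (using numpy and scipy; the sample sizes and number of simulations are arbitrary choices for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_sims = 2000

# Both groups are drawn from the SAME population, so any "significant"
# result is a false positive (a Type I error).
false_positives = 0
for _ in range(n_sims):
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < alpha:
        false_positives += 1

rate = false_positives / n_sims
print(f"False positive rate at alpha={alpha}: {rate:.3f}")
```

The observed false positive rate hovers near .05, matching the alpha you chose; rerunning with alpha = .001 would make spurious "findings" far rarer, which is exactly what makes the lower threshold more conservative.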