
    Misc. Stats Topics Discussion > Standardization of Scales and Comparing Means

    Hi, I have a question. Suppose I have a measure of bullying and use SPSS to make 3 new sub-scales, with each sub-scale containing X items (questions) in its category, like this:

    Scale A (5 items) - (each question scored on a 5-point Likert scale)
    Scale B (3 items)
    Scale C (7 items)

    ...if I compute the average of the verbal subscale and get (M = 14.92; SD = 5.95)... this tells me that, in my sample of 51 people, verbal abuse was reported to be the most common type of bullying...

    My question is: is there a way to make these mean scores easier to interpret, like converting them to a percentage? Should I divide the 14.92 by 5, where 5 is the number of points on the Likert scale, so I have the average of each question? And if so, how do I work out the SD?

    November 9, 2010 | Unregistered CommenterAnonymous

    Greetings and thank-you for your message!

    I'll be happy to help, if I am able. There are a few ways you can compute descriptive statistics for your subscales that will make them comparable to each other (I assume that is what you mean by interpretable, but let me know if that assumption is incorrect).

    1) You can indeed create a mean score per item, instead of using a sum score across the items.
    • To accomplish this, you would simply use the "Mean" function in SPSS (go to Transform -> Compute Variable, and find the "Mean" function in the box on the lower-right side of the dialogue box). Alternatively, you could use syntax such as this:

    COMPUTE VERBALTOT=MEAN(item1, item2, item3, item4, item5).
    EXECUTE.

    **With the "item#" variables replaced by your own item variable names.
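    For readers without SPSS handy, the same per-item averaging (and its effect on the SD) can be sketched in Python. The variable names and scores below are invented for illustration:

    ```python
    import statistics

    # Hypothetical responses for one person on the 5 verbal-bullying items
    # (placeholder names, not the original variable names).
    responses = {"item1": 4, "item2": 3, "item3": 5, "item4": 2, "item5": 4}

    # SPSS's MEAN(item1, ..., item5) averages across the items for each case,
    # putting the score back on the original 1-5 response metric.
    verbaltot = statistics.mean(responses.values())
    print(verbaltot)  # 3.6

    # Because the per-item mean is just the sum score divided by a constant
    # (the number of items), the sample SD shrinks by the same factor:
    sums = [18, 12, 22, 15, 20]           # made-up sum scores for 5 people
    per_item = [s / 5 for s in sums]      # 5 items in this scale
    print(round(statistics.stdev(sums) / 5, 4))   # 0.795
    print(round(statistics.stdev(per_item), 4))   # 0.795 (identical)
    ```

    So dividing a sum-score mean by the number of items gives the per-item mean, and the SD of the rescaled scores is simply the original SD divided by the same constant.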

    2) The other option would be to use SPSS to create standardized scores (or z-scores). To do this, you would simply go to Analyze -> Descriptive Statistics -> Descriptives -> (move the scale variables into the box on the right) -> check the "Save standardized values as variables" box -> and click "OK". New variables should be created that are z-scores for your variables (scores for your variables that are on a common scale). Alternatively, you could use the following syntax:

    DESCRIPTIVES VARIABLES=scaleA scaleB scaleC
    /SAVE
    /STATISTICS=MEAN STDDEV MIN MAX.

    ***With the "scale#" variables replaced by your scale variable names.
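    As a rough sketch of what the /SAVE option produces (assuming SPSS's sample, n-1, standard deviation), here is the z-score computation in Python with made-up scale totals:

    ```python
    import statistics

    def z_scores(values):
        """Standardize a variable: subtract its mean, divide by its sample SD."""
        m = statistics.mean(values)
        sd = statistics.stdev(values)   # n-1 denominator, as SPSS uses
        return [(v - m) / sd for v in values]

    scale_a = [14, 18, 11, 22, 16]      # hypothetical sum scores for Scale A
    z = z_scores(scale_a)
    print([round(v, 2) for v in z])

    # Standardized variables always have mean 0 and SD 1, which is what
    # makes subscales with different numbers of items comparable.
    ```

    Because every standardized subscale ends up with mean 0 and SD 1, a respondent's z-scores on Scale A, B, and C can be compared directly even though the scales have different numbers of items.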

    November 9, 2010 | Registered CommenterJeremy Taylor

    Sir, thanks for the reply (about to try it).

    All the best.

    November 9, 2010 | Unregistered CommenterAnonymous

    You're welcome! Good Luck!

    January 8, 2011 | Registered CommenterJeremy Taylor

    Hi.. I am having a similar problem and I am crying already :( I am trying to put a scale together (a 5-point Likert scale), but the response distribution is skewed, with all participants agreeing with the most positive trait, so I am having to collapse the categories into a 3-point Likert scale (agree, neither agree nor disagree, disagree). My supervisor had once asked me to standardise the scores before scaling. I am wondering if I still have to use z-scores if I am collapsing them down to 3 categories.

    Your advice/help is very much appreciated. Many thanks,

    Sim

    December 13, 2012 | Unregistered Commentersimna

    Standardizing scores won't give you more variability in your responses, so if you needed to collapse before standardization (because there was little variability), then the same will likely be true if you use z-scores.
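    The point about variability can be seen directly: a z-score is a linear transform of the raw score, so it cannot create new distinct values or change the shape of a skewed distribution. A small sketch with made-up, top-heavy 5-point responses:

    ```python
    import statistics

    raw = [5, 5, 5, 4, 5, 5, 4, 5, 3, 5]   # hypothetical skewed responses
    m, sd = statistics.mean(raw), statistics.stdev(raw)
    z = [(v - m) / sd for v in raw]

    # Every case keeps its relative position: ties stay tied, so there are
    # exactly as many distinct z-scores as distinct raw scores.
    print(sorted(set(raw)))              # [3, 4, 5]
    print(len(set(raw)) == len(set(z)))  # True
    ```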

    December 17, 2012 | Registered CommenterJeremy Taylor

    Good day to you

    Thank you very much for your reply. I am very grateful... I tried to do it. After I standardised, I obtained some z-scores that were negative. I ran a reliability test to see if my scale fits, and I got a negative Cronbach's alpha. I double-checked that I had coded it correctly. All was well. Do you think I got a negative Cronbach's alpha because I used the z-scores in the reliability analysis? I wouldn't get a negative Cronbach's alpha if I used the collapsed category scores (as described above). Or would it be purely due to a wrong choice of items for my scale? I am still trying to figure out where I went wrong.

    December 18, 2012 | Unregistered Commentersimna

    Hi Simna,

    A negative Cronbach's alpha is usually indicative of very poor reliability in your scale: it typically means some items correlate negatively with the rest (for example, a reverse-keyed item that was never recoded). Standardizing doesn't change the correlations between items, so the z-scores themselves are unlikely to be the cause. I might take a look at a correlation matrix of the items, or use the "alpha if item deleted" option in the reliability analysis, to try to pinpoint where things are going wrong. I hope that helps!
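    A quick way to see how a negative alpha arises is to compute Cronbach's alpha by hand. The formula below is the standard one; the data are invented, with one item flipped as if it were a reverse-keyed item that was never recoded:

    ```python
    import statistics

    def cronbach_alpha(items):
        """Cronbach's alpha from a list of per-item response lists
        (each inner list holds one item's scores across all respondents)."""
        k = len(items)
        totals = [sum(case) for case in zip(*items)]
        item_var = sum(statistics.variance(col) for col in items)
        return k / (k - 1) * (1 - item_var / statistics.variance(totals))

    # Three items that move together across five respondents -> healthy alpha:
    good = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [2, 2, 3, 4, 4]]
    print(round(cronbach_alpha(good), 2))   # 0.97

    # Flip the third item (reverse-keyed, never recoded) -> negative alpha:
    bad = [good[0], good[1], [6 - v for v in good[2]]]
    print(round(cronbach_alpha(bad), 2))    # -0.3
    ```

    The flipped item correlates negatively with the other two, which drags the total-score variance below the sum of the item variances and pushes alpha below zero.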

    February 11, 2013 | Registered CommenterJeremy Taylor

    Hi Jeremy,

    I have a similar problem. Could you please give me a suggestion? Since my questionnaire includes individual Likert items (ordinal data), I am planning to do a Wilcoxon signed-rank test to compare two Likert items, one being a 9-point Likert and the second being a 4-point Likert. I think I have to standardize them and put them on the same point scale (either 4 or 9). How can I do it? Do you think there is a way to do it?

    March 6, 2018 | Unregistered CommenterSerdar
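    One common approach to the rescaling Serdar describes, sketched here with hypothetical data, is a linear map from the 4-point range onto the 9-point range before running the test (the signed-rank test itself is available as scipy.stats.wilcoxon):

    ```python
    def rescale(v, old_min=1, old_max=4, new_min=1, new_max=9):
        """Linearly map a response from one Likert range onto another;
        the transform preserves the rank order of the responses."""
        return new_min + (v - old_min) * (new_max - new_min) / (old_max - old_min)

    four_point = [1, 2, 2, 3, 4]   # hypothetical responses on the 4-point item
    print([round(rescale(v), 2) for v in four_point])   # [1.0, 3.67, 3.67, 6.33, 9.0]
    ```

    After rescaling, both items share the 1-9 range and the paired responses can go into the signed-rank test (e.g. scipy.stats.wilcoxon(nine_point_item, [rescale(v) for v in four_point])). Whether a 4-point and a 9-point item are meaningfully comparable at all is a substantive question this transform cannot settle.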