Missing Data/Imputation Discussion > What do I do after I get my multiply imputed data?

Hi everyone,

I'm new to SPSS and somehow managed to run multiple imputation on my data set. I got 3 imputed data sets, and I don't know how to combine them into one single set. I've been reading posts here and there that say I have to pool the results by getting the descriptives for each set. Can anyone please guide me through the steps from here? I am using SPSS 17.0.

Thank you in advance!
Soco

August 31, 2011 | Unregistered CommenterSoco

Greetings Soco,

First of all, allow me to apologize for the extremely delayed nature of my response. Unfortunately, my forum notifications were being sent to an inactive email, so I didn't become aware of new posts for a long time. Anyway, to answer your question, you simply need to "split" (under the "Data" file menu) your sample by the "imputation" variable that should've been created in your SPSS dataset. Once your data is split by the "imputation" variable, SPSS should recognize that your dataset is multiply imputed and provide pooled estimates automatically (for the analyses that support multiple imputation in SPSS). I hope that helps!

February 23, 2012 | Registered CommenterJeremy Taylor

Hi Jeremy,

First, thank you for your unconditional help. I have 5 imputed data sets, and my data is split and provides the pooled estimates; I just don't know what happens afterwards. My purpose with my data set is to conduct a multiple regression, and I used the multiple imputation to generate a complete data set. Could you please give me some direction? Shall I just carry on doing the multiple regression analysis on the 5 data sets? Or do I need to choose one particular data set from the 5 imputed data sets? This is so confusing; I thought after the imputation there would be only one data set which SPSS created from the 5.
Thank you in advance,
Sisi

April 8, 2012 | Unregistered CommenterSisi

Hey Sisi,

With multiple imputation, you always end up with multiple datasets. The procedure from there is to estimate your model separately in each dataset and then pool your estimates to get robust results. Typically, the pooled estimates are reported as your final results. The object of MI is to get robust estimates of your model (unbiased by missing data); it isn't to get a single full dataset, but I can easily see how it would seem that way.
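The workflow described here (fit the same model once per imputed dataset, then pool the results) can be sketched in a few lines of Python; all numbers below are made up for illustration, not SPSS output:

```python
# Illustrative sketch (hypothetical numbers): with multiple imputation you
# fit the same model once per imputed dataset and then pool the RESULTS,
# rather than merging the datasets into one.

def pool_point_estimates(estimates):
    """Rubin's rule for a pooled point estimate: the simple mean
    of the per-dataset estimates."""
    return sum(estimates) / len(estimates)

# e.g. the slope of one predictor, estimated separately in 5 imputed datasets
slopes = [0.42, 0.45, 0.40, 0.44, 0.43]
print(round(pool_point_estimates(slopes), 3))  # -> 0.428
```

Note that pooling the standard error needs the within- and between-imputation variances as well, not just a mean.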

April 11, 2012 | Registered CommenterJeremy Taylor

Hi Jeremy,

Thank you for your answer. I would like to conduct a multiple regression, and in order to meet the normality assumptions I had to delete some outliers. Now, according to my original data set, for my DV (dependent variable) I got 77 participants, and for my IVs the range of participants who responded is between 65 and 84. My question is: can I still carry on with multiple regression? Will the original dataset be valid with various missing values? Or shall I just use only the imputed datasets for the multiple regression?

Thanks in advance,
Sisi

April 16, 2012 | Unregistered CommenterSisi

Hi again Sisi!

Typically only the imputed datasets are used to estimate your model, since they solve the missing data issues. However, sometimes people do calculate estimates from their original data as well, just for the purposes of comparing the results to their imputed datasets, to see how much impact their missing data might've had... Am I making sense?

April 17, 2012 | Registered CommenterJeremy Taylor

Hi there,

I have imputed my data and now have five imputed sets in addition to my original data. I have activated the "split file option." When I'm conducting chi-squares, I'm getting pooled values for frequencies, but I'm not getting a pooled significance level (just individual ones for the five MI sets and the one original dataset). Is there any way to get a pooled chi-square value? I'm using SPSS 20.0 with the missing values add-on.

Thanks!

May 10, 2012 | Unregistered CommenterLauren

Hey Lauren,

If it didn't produce the pooled value, then that may not be an option in SPSS for chi-square specifically. You can always calculate a pooled estimate by hand, though. Until recently, I probably would've suggested that you simply calculate the average of the 5 estimates, because that is a common way to pool MI estimates and is generally pretty reliable. However, I've recently read that this might not work as well for chi-square. I haven't read too much about this yet (and the average estimate will still probably work in a pinch), but here is an article on the topic that I've been meaning to digest:

http://www.statmodel.com/download/MI5.pdf

I hope this helps!

May 12, 2012 | Registered CommenterJeremy Taylor

I'm just reading this post and have some questions for you Jeremy:

I'm running multiple imputation with 25 imputations (Spratt et al., 2010). My analysis of substantive interest will be exploratory: I plan to run approx. 100 logistic regressions to identify the best-fitting models. Running 100 logistic regressions on 25 data sets and then pooling them seems laborious if there is an easier way. Is it possible to run analyses to determine if there is any significant difference between datasets and, if not, then choose one randomly? I'm using SAS 9.2.

thanks~!
Alyssa

July 17, 2012 | Unregistered CommenterAlyssa

Hi Alyssa!

I believe I just responded to your other posts on this topic. Let me know if you have additional questions after reading my response to that.

July 28, 2012 | Registered CommenterJeremy Taylor

Hi! Great blog! I'm a statistician, but the method of SPSS Multiple Imputation confuses me a lot. After I have already created a data set with 5 imputations, my problem is how to determine which of the outputs is the appropriate one to look at to see if there are significant differences in the following pairs. I am running a paired t-test and there are a lot of paired t-test results. I am looking at the possibility of using the p-value of the pooled pairs, but I am worrying that the degrees of freedom of the pooled pairs are very high. I only have 1118 samples, but the df of the pooled results reaches 2031254.

October 18, 2012 | Unregistered CommenterArvR

Thanks for the question and sorry for the delay in response. I would always try to use the pooled estimates when analyzing data that was multiply imputed, as opposed to trying to interpret the results of the individual data sets. I hope that helps!

November 3, 2012 | Registered CommenterJeremy Taylor

Hi Jeremy, your blog is so helpful! Thanks so much for running it. Similarly to ArvR, I am worried about the very high degrees of freedom for pooled estimates. (In my preliminary analyses I am calculating a paired-samples t-test on my imputed data; the variables are normally distributed.) Is it normal, or should I worry about it? [Perhaps some other assumptions that I did not think of are violated?] I can't find an answer to this in any papers or textbooks, but I also haven't spotted such a 'problem' in any scientific papers in which the authors used the multiple imputation method. Thanks!
Magdalena

February 9, 2013 | Unregistered CommenterMagdalena

You are correct that this topic is not well documented, and that is because it is a fairly recent development that these techniques are being used in social science research. As for your specific situation, I'm not sure I understand what your question is. What is it that you aren't sure whether you should worry about? About your degrees of freedom being high?

February 12, 2013 | Registered CommenterJeremy Taylor

What do you do about using MI with logistic regression? I can only get a pooled estimate if I use the ENTER method. If I try to do stepwise methods, no dice: I just get the stats for the 5 individual estimates. Is there a workaround for this, or is it not supported? I have the full version 21 of SPSS. Someone told me that I should consider doing FIML in Mplus....

February 16, 2013 | Unregistered CommenterRae

Hi Jeremy. I have a problem after multiple imputation in SPSS; maybe you can help me... I imputed data because I'm missing a fair part of my dichotomous outcome. With this multiply imputed data I want to do a multiple logistic regression to create a clinical prediction model. Pooled values in SPSS only appear when all outcome variables of the regression are the same. The problem is that this is not the case in a few of the 5 datasets. The only solution I can think of is to mean/pool the datasets into one. When, for example, three out of five values give a positive outcome of 1, the value will be transformed to 1, so the majority decides. Is this an acceptable solution?

Thnx

February 18, 2013 | Unregistered CommenterStijn

Rae,

I'm not 100% sure about using stepwise regression with MI, but my guess for it not being supported would be that we can't guarantee that the same variables will be added to the model in each of the imputed datasets, so the estimates could not be pooled if they aren't estimates of the same predictors. I would indeed recommend using something other than SPSS in most any circumstance (I prefer R, but Mplus is an improvement over SPSS also). Another possible workaround might be to run the stepwise portion of your model on a single dataset, determine the variables that are entered and their entry order from that initial run, and then run your model with the ENTER method (hierarchically) to obtain robust estimates of that model with your MI data. Anyone else have any ideas?

February 22, 2013 | Registered CommenterJeremy Taylor

Stijn,

I do not recommend reducing your MI datasets into a mean/pooled dataset, as this removes the main benefit of using MI in the first place. Personally, I would be leery of imputing outcome variables at all; I typically feel more comfortable imputing IVs. In some circumstances I might feel more comfortable with imputing outcome data, such as when I have repeated measures and valid data at previous time points to include in my imputation model. Otherwise, I'd be concerned that I would just be adding a lot of noise to my model.

Aside from that, it has not been my experience that the outcome variables need to be the same in SPSS for it to run or produce pooled estimates. You might try running the MI analysis in other software, such as R (e.g., the Zelig package). You can import the MI data directly from SPSS (using the "foreign" package) and then just run it in R.

February 22, 2013 | Registered CommenterJeremy Taylor

Hi Jeremy!

I am so glad to see I am not the only one who is struggling with MI. So I have 2 questions for you. I am doing a multivariate linear regression in SPSS using the following syntax:

GLM Y1 Y2 WITH X1 X2 X3
/PRINT PARAMETERS
/LMATRIX 'Multivariate test of entire model'
X1 1; X2 1; X3 1.

My first question is what modifications do I need to make to run this with MI data sets. Is there any way to get pooled results with this?

My second question is about just the multivariate operation and not MI. How can I alter this multivariate regression syntax to control for a confounding variable?

Many, many, many thanks for any assistance you can provide!

March 21, 2013 | Unregistered CommenterL

L,

Thanks for your great questions!

First, your MI question: there is no modification to the syntax that will allow it to run via MI. For a model to run on MI datasets: 1) your version of SPSS needs to support MI, and 2) you must have your data split by imputation. If it won't produce pooled estimates at that point, then SPSS just doesn't support pooled estimates for that analysis (perhaps turn to R). You will know your data is set up properly if it produces MI pooled estimates in other types of analysis; in that case the problem is just that SPSS doesn't support pooled estimates for that particular analysis.

As for your question about including a control variable, you just need to add the additional covariate to the model as a predictor along with X1, X2, and X3.

For example, if I wanted to control for variable "Q1", then I would just have the following:

GLM Y1 Y2 WITH X1 X2 X3 Q1

UNLESS Q1 was a categorical variable (a.k.a. factor variable), in which case you could have:

GLM Y1 Y2 BY Q1 WITH X1 X2 X3

Remember: any variables you include in a model together produce "partial estimates", which means they are giving you estimates of their effects, controlling for ALL OTHER VARIABLES IN THE MODEL. I hope that helps!

March 27, 2013 | Registered CommenterJeremy Taylor

Dear Jeremy,

Really good to know that you're here to help! I have a question about MI data sets and pooled results: I was trying to do a MANOVA with my 5 datasets and wonder if it's possible to get a pooled result? It didn't show in the output, so does that mean SPSS doesn't support it?
Thanks a lot!!

March 27, 2013 | Unregistered CommenterChris

Hi Chris!

Sadly, if SPSS doesn't offer a pooled estimate (below the individual dataset estimates), it is likely that it isn't supported in SPSS for that analysis (super annoying, I know).

However, you can still calculate pooled results:

To calculate a pooled estimate (e.g., a pooled regression coefficient), you can simply calculate the mean of the estimate across all datasets (simple!).
Calculating the pooled standard error is a bit more complicated, but not impossible; it is just a matter of a few equations. Here is a link to a PSU website that presents the needed equations (and has a few links to papers on this subject):

http://sites.stat.psu.edu/~jls/mifaq.html#howto
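The equations on that page (Rubin's rules) can be sketched in Python as follows; the estimates and variances below are hypothetical placeholders:

```python
# A minimal sketch of Rubin's rules for pooling an estimate and its
# standard error across m imputed datasets. All numbers are hypothetical.

def rubin_pool(estimates, variances):
    """Pool per-dataset estimates and their squared standard errors.
    Returns (pooled estimate, pooled standard error)."""
    m = len(estimates)
    qbar = sum(estimates) / m    # pooled point estimate: the simple mean
    w = sum(variances) / m       # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    total = w + (1 + 1 / m) * b  # total variance of the pooled estimate
    return qbar, total ** 0.5

estimates = [1.10, 1.25, 1.05, 1.18, 1.12]       # e.g. a coefficient from 5 datasets
variances = [0.040, 0.038, 0.042, 0.039, 0.041]  # each dataset's SE squared
qbar, se = rubin_pool(estimates, variances)
print(round(qbar, 2), round(se, 3))  # -> 1.14 0.217
```

Note that the pooled SE is always at least as large as the average within-dataset SE, because the between-imputation disagreement is added in.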

I hope this helps!

April 11, 2013 | Registered CommenterJeremy Taylor

Also, Chris, be sure that you have "pooled results" checked in the "Multiple Imputation" options tab of the SPSS settings. You can read more about these options on page 40 of the SPSS "missing data" manual (link below):

SPSS Missing Data Manual

April 11, 2013 | Registered CommenterJeremy Taylor

Hi Jeremy,

I've read through this thread and just want to clarify a couple of things to be sure I'm doing things right. I've got a multiply imputed dataset in SPSS and am running a repeated-measures ANOVA. As you know, SPSS does not pool results for this type of analysis. If I need pooled results, is it true that I can just calculate the mean (for example, of the F-statistic, and then look up the associated p-value)? Can this be done for df as well? I see that a slightly different procedure is required for standard errors, but otherwise is calculating the mean a legitimate option, and do you have a citation for this?

Thanks!

April 26, 2013 | Unregistered CommenterM

Hello again Jeremy,
In February I asked you about the problem with extremely high degrees of freedom. Thanks so much for addressing my question that time, and sorry for not responding sooner myself!
Yes, what I am worried about is the fact that some of the df's are very high (e.g., I would expect df=55252, and I get df=2742536455) or, in other cases, much lower than expected.
As I couldn't find a solution to that, I added the following footnote in my thesis: "The degrees of freedom in paired-samples t-tests are higher (or in other cases lower) than expected, because the results are pooled from 5 imputed datasets. No corrections were applied in the case of these analyses. Discussion of possible corrections of the degrees of freedom for pooled estimates in small and large samples can be found in Barnard and Rubin (1999) or Van Ginkel (2010)." Indeed, there is a discussion of this in these papers, but I had problems applying the corrections described there. I hope the above footnote will be enough for my reviewers. I would be very happy to hear any further explanations.
All the best,
Magdalena

April 27, 2013 | Unregistered CommenterMagdalena

M,

This is a common method of calculating pooled estimates, yes.

The citation I've seen used most often for this is:

Rubin, D.B. (1987) Multiple Imputation for Nonresponse in Surveys. J. Wiley & Sons, New York.
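Rubin's (1987) degrees-of-freedom formula also explains the very large pooled df raised earlier in this thread. A sketch (variable names and numbers are mine, for illustration only):

```python
# A sketch of Rubin's (1987) large-sample degrees-of-freedom formula for a
# pooled estimate. When the imputed datasets agree closely (tiny
# between-imputation variance b), df becomes enormous; this is why pooled
# df in the millions can be perfectly normal. Numbers are hypothetical.

def rubin_df(m, w, b):
    """m: number of imputations, w: within-imputation variance,
    b: between-imputation variance."""
    r = (1 + 1 / m) * b / w  # relative increase in variance due to missingness
    return (m - 1) * (1 + 1 / r) ** 2

print(rubin_df(m=5, w=0.04, b=0.02))     # moderate disagreement: df around 28
print(rubin_df(m=5, w=0.04, b=0.00002))  # near-agreement: df in the millions
```

The Barnard and Rubin (1999) small-sample correction mentioned above shrinks this quantity so it cannot exceed the complete-data df.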

April 29, 2013 | Registered CommenterJeremy Taylor

Hello Jeremy.
I too am struggling with MI. I found the need to use it even though I have a very low percentage of missing values (2.9% of cases), because the values on the constructs being evaluated are different for respondents and non-respondents.
The thing is, I need a single dataset to perform analysis outside of SPSS (namely CFA using Mplus or LISREL), but I can't seem to understand how to do it, or if it is even possible.
If it is not possible, then is my only option to delete the missing values?

Thanks in advance for all your help.
All the best,
Paula

April 29, 2013 | Unregistered CommenterPaula

Paula,

If you are using maximum likelihood estimation (MLE), I don't think it is necessary to listwise delete or impute values, as MLE allows for analysis of datasets with missing data. Even if it isn't missing completely at random, it is likely a better option than listwise deleting (especially with such a small percentage of data missing).

April 30, 2013 | Registered CommenterJeremy Taylor

Dear Jeremy,

I am working with multiple imputation for my thesis. The study design is a randomized controlled trial. I have collected data for the intervention and control groups on a depression questionnaire at baseline (T0), after 6 months (T1), and after a year (T2). I would like to run a paired-samples t-test and an ANOVA after the data is imputed. The problem is that normally I would use 'split file' with the grouping variable (control group/intervention group) so I can run the paired-samples t-test, but I can't, because the data is already split on split_imputation. How do I solve this problem? Thank you in advance for your answer; it would really help me a lot!

May 7, 2013 | Unregistered CommenterNadine

So you are saying you want to run paired-samples t-tests separately for the control and intervention groups? If so, I would use the split file for the imputations (as you indicated) and then simply use the "select" command to select each group and run the t-tests with each group selected, respectively. With that said, I'm not sure paired-samples t-tests would be my first choice given the design you described, as I'd tend to lean towards a repeated-measures analysis with an examination of a time-by-group interaction (unless you don't have a large enough sample size for that).

May 8, 2013 | Registered CommenterJeremy Taylor

Hi Jeremy, thank you for your advice. I have "tried" to use MI for my data, and the advice that I got was to run six imputations with 1000 iterations, which gave me six lots of output (data sheets). I was then told to average the results of the estimates given on each imputation to fill in the missing data (tedious). Is this a correct way if I am not confident or competent enough to do pooled results? If it is OK to do this, is there a reference that I could use that says it is OK?
Thanks
Bron

May 13, 2013 | Unregistered CommenterBJ

Just as a follow up for my previous post, I am not doing anything fancy with my results, just descriptive statistics, ANOVA and t-tests.

May 13, 2013 | Unregistered CommenterBJ

BJ,

This is not how I would recommend handling missing data, to be honest. Typically analysis is run on each of the datasets (six in your case) and then the RESULTS (e.g. t-statistic) are pooled (e.g. averaged), not the actual imputed data points themselves. I hope that helps!

May 14, 2013 | Registered CommenterJeremy Taylor

Hello! I'm working on my dissertation and am using PROCESS to run multiple moderated regression models in SPSS on 2 sets of data that I collected. I've discovered that one of my datasets has data that is not missing at random (via Little's MCAR test and an analysis of how the missingness of the variables is related to the DV), and so I've run multiple imputation, which I'm not very familiar with. My question is: now that PROCESS will no longer work, should I create standardized variables after I run MI, create my product terms, and then run the regressions? Or am I creating my standardized variables at the wrong time? Should I be doing MI on those as well?
Thanks in advance!

July 6, 2013 | Unregistered CommenterDanielle

Danielle,
I don't see a problem with creating your standardized variables and interactions after running MI. However, there is a benefit to taking those steps before MI, as that would allow you to use the interaction terms in your imputation model (which can help to build the best model, at times). There isn't a right or wrong answer, as there is variability in how people conduct these processes. The important thing is to think through why you are choosing one method over another and be able to justify that choice. I hope that is helpful.

July 10, 2013 | Registered CommenterJeremy Taylor

Hi Jeremy

I am wondering if you can possibly help with a quick question. I am aiming to perform SEM on my data but am initially faced with issues before I start. Am I right in thinking the process for preparing the data should be: 1) normalise variables where needed; 2) perform multiple imputation for missing data; 3) run linear regressions (on complete data) with my outcome variable to ascertain which variables are to be included in my SEM? I'm not 100% sure on the order of points 2 and 3, but my guess is to impute and then perform the regressions. Is this correct?

I would be grateful for any advice you can possible give.
Many thanks,
Kelly

January 14, 2014 | Unregistered CommenterKelly

Hi Jeremy,

I'm performing a multivariate regression analysis. One of my dependent variables has a lot of missing values, so I used the multiple imputation method in SPSS to impute these values. Unfortunately, SPSS does not give pooled estimates for multivariate regression analysis. So, I was thinking of taking the mean over the 5 imputed datasets and performing the multivariate regression analysis on the resulting variable, but something tells me that this might not be a good idea. What do you think? I also saw in the previous comments that using imputation on an outcome variable is not a good idea. Why is this? And are there exceptions?

Thank you in advance!
Wendy

January 15, 2014 | Unregistered CommenterWendy

Kelly,

I think you are smart to clean your data and check assumptions (e.g. normalization) before analysis, but beyond that I recommend building your SEM models on theory. Thus, the way you'd determine what variables to include in your model would be a conceptual/theoretical decision, not based on a preliminary regression. With that said, it isn't bad practice to modify your theory as you build your model and consider emerging evidence, but try to make sure that theory is driving your decisions, not just the data, as that leaves you vulnerable to spurious findings based on model error.

ALSO, PLEASE BE AWARE THAT THIS SITE IS BEING MOVED TO WWW.STATSMAKEMECRY.COM IN THE FUTURE, SO I WILL NOT BE MANAGING THIS SITE ON AN ONGOING BASIS.

January 17, 2014 | Registered CommenterJeremy Taylor

Wendy,

Data imputation is still as much an art as a science, as there are a lot of different opinions out there about how to do it (or whether it should be done at all). I tend to avoid imputing values for DVs, as I'm concerned about introducing additional error variance into my model (thus reducing power), and because it is conceptually awkward in my mind.

With that said, if you are going to impute, I'd recommend using R, as the packages available have many more options for pooling estimates. Typically averaging effects across imputation results is a reasonable estimate of the pooled effect though.


January 17, 2014 | Registered CommenterJeremy Taylor

Hello Jeremy,
I am in the middle of data analysis and wonder if you could give some suggestions regarding imputation.

I am investigating students' achievement goal change before and after an intervention (with a questionnaire). The research design is longitudinal (T1, T2, T3) with a control group and an experimental group. I have a relatively high dropout rate at T2 (30% of students dropped out and never returned to my study, and most of them are from the experimental group). So far, I have checked Little's MCAR test and separate variance t-tests (missing value analysis in SPSS), and both results are non-significant (can I say my data is missing at random?).

Here are my questions.

1. Does it make sense if I only impute for the experimental group? (Because missing data are mostly from this group.)

2. Would it be more accurate if I just delete those who only participated once, keep those who participated at least two times, and then do expectation-maximization imputation?

3. Do you know where I can find more information about sensitivity analysis for SPSS users?

Thank you for reading the long message.

February 22, 2014 | Unregistered CommenterLin

Dear Jeremy,

I am quite new to SPSS, but I have a dataset in which I have to find some method to fill in the gaps of missing data, as I am making an additive index where I will calculate scores based on the added data. I have tried doing a multiple imputation, which gives me pooled means for the variables. But as you mentioned in a previous reply, the object of doing MI is not to get a full dataset. My question is: can I paste the pooled mean into the gaps of missing data in my dataset for each variable?

I also want to conduct a cluster analysis, but I have not been able to do the analysis with the pooled means included. What am I doing wrong?

Would deeply appreciate if you took the time to reply to my questions!

August 13, 2014 | Unregistered CommenterJohanna

Hi! I have successfully created the multiply imputed data but am not able to do the statistical analysis. The following error pops up on the screen: "unrecoverable application error in the statistic processor". May I know what the problem is and how it can be solved? Thank you.

August 13, 2014 | Unregistered Commenterharris

Hi,

I have created an imputed data set (5 imputations) and want to do multiple regression on this dataset. In the pooled results, however, I do not get standardized regression coefficients. How do I calculate them? Is it just the average of the standardized regression coefficients of the 5 imputed datasets? Or do I need to standardize some of the variables? And how should I do that then?

Do you know a source on this issue (literature which I can refer to)?

Many thanks for your advice!

October 8, 2014 | Unregistered CommenterX-el

Hi,

This has been very interesting to read!

I've just had a colleague show me how to do some imputation (so I'm very new to the topic); however, we seem to have hit a wall. SPSS doesn't support pooling the results for my analysis: a Hodges-Lehmann confidence interval for median differences.

I saw another post where you suggested calculating the pooled estimates by hand. The link there explained how to calculate the estimate itself, which is fine, but the confidence interval it described was based on assumptions of normality, which wouldn't be appropriate here.

Have you seen anybody impute data for a Median Difference CI?

Thank you for your help

K

November 13, 2014 | Unregistered CommenterKIpper

Hi,

I've been reading the previous posts with the hope of finding a solution to my problem. Though I have found some hints, it's still not clear. I'm working with data that has missing values. I have done MI, and now I wish to do a chi-square test on both the imputed data and the original data. How do I go about doing that?

January 20, 2015 | Unregistered CommenterPhinda

I have a dataset with some missing values. I used SPSS for multiple imputation to have a dataset with no missing values (for AMOS). How should I save and use the pooled outcome in AMOS?

February 25, 2015 | Unregistered CommenterReza

Good thread.

Here's a related question:
When SPSS "pools" estimates, apparently it's not just averaging them, so what exactly is the calculation being done in the "pooling"? For example, I am running correlations, t-tests, and regressions (all of which are multiple-imputation-supported analyses in SPSS).

Thanks in advance!

March 3, 2015 | Unregistered CommenterSean

Hi Jeremy,

I wonder whether you could help with a problem I have. I would like to use graphs to show the results of a multinomial logistic regression which relies on 5 imputed datasets. I would like to work with one dataset to create these graphs, because having 5 graphs for each result I am trying to illustrate graphically is a bit tedious in an article.
So I have calculated the average of the estimated probabilities, as I believe you suggested earlier in this blog. But if I want to show the differences in probabilities according to a variable X, and X has been imputed, how do I reduce the 5 values for X to a single one? Can I take the average, which I would round if dealing with categorical variables? Or simply report the mode (the most frequent value across the 5 imputations)? These graphs would have a descriptive purpose anyway. I couldn't find anything on this, so any help would be appreciated. I am using SPSS, by the way.

Many many thanks!

April 23, 2015 | Unregistered CommenterLaurie

Hi Jeremy,

I have imputed my data with 5 imputations plus the original data set in order to do a multivariate linear regression. I have some questions.
First, for descriptive analysis, some of my results from the imputed data sets are not valid. How should I deal with this? I have some labor-force members at age 2 or 3 years old, which appeared after imputation! When I am talking about laborers' mean age or range, how can I report these numbers?

My second question is about pooling the results. Can I run tests on the different data sets and manually choose the most robust one as my result?

And the last question: if I choose some results from data set 1 and some from data set 2, is this a reliable and correct way of reporting, or should I get all the results from one data set?

Thanks,
Sara

June 2, 2015 | Unregistered CommenterSara

Hi Jeremy,
I am working with a dataset in which the data was already imputed (100 times) and then put into one single SPSS dataset. I don't think the imputation was done in SPSS; maybe SAS, Mplus, or something else? So there is an imputation variable (ranging from 1-100) in the dataset. My question is: how can I use this dataset to run pooled analyses (correlations, regressions, etc.)? I am working with SPSS v23, if that matters.
Thanks so much!

September 19, 2015 | Unregistered CommenterKristen