This post shows how useful AMOS can be for testing MANY structural hypotheses at once, here focused on whether a post-intervention outcome is better in one group than in another, controlling for baseline differences (a Comparative Effectiveness line of questioning).
The simple idea is that you can conclude the two post-intervention means differ by comparing a two-group (intervention vs. comparison) model in which the two post-intervention means (technically the intercepts) are constrained to be equal against the otherwise identical model in which they are free to differ: if the 'equal intercepts' model fits significantly worse, the means cannot be deemed equal; quite basic.
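The comparison itself is just a chi-square (likelihood ratio) difference test between the two nested models. Here is a minimal sketch in Python of that arithmetic; the fit statistics are illustrative placeholders, not values from the paper.

```python
from scipy.stats import chi2

# Hypothetical fit statistics for the two nested 2-group models
# (placeholder values, not taken from the paper).
chisq_equal, df_equal = 34.2, 16   # intercepts constrained equal across groups
chisq_free,  df_free  = 28.7, 15   # intercepts free to differ

# Chi-square difference test: the constrained model can only fit as well
# or worse; the question is whether it fits significantly worse.
delta_chisq = chisq_equal - chisq_free
delta_df    = df_equal - df_free
p_value     = chi2.sf(delta_chisq, delta_df)

print(f"delta chi2 = {delta_chisq:.2f} on {delta_df} df, p = {p_value:.3f}")
# p < .05 -> the 'equal intercepts' model fits worse, so the
#            post-intervention means cannot be deemed equal.
```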
Now, there are about 15 'baseline' models one can start from when testing this 'equal intercepts' constraint, and hence 15 such potential conclusions; we should, however, use only one to conclude whether there is or is not a difference: the test of differences (labeled T) conducted with the best-fitting baseline model, while also considering the power of that model to actually detect the difference. The problem is that a WELL-fitting model may be under-powered to detect a specific effect, which is what happened here too.
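The power side of that statement can be sketched with the noncentral chi-square logic commonly used for nested-model power (Satorra–Saris style): under the alternative, the difference statistic follows a noncentral chi-square. The noncentrality value below is an assumed illustration, not a figure from the paper.

```python
from scipy.stats import chi2, ncx2

delta_df = 1          # one intercept constraint tested
alpha    = 0.05
ncp      = 4.0        # assumed noncentrality induced by the true mean difference

crit  = chi2.ppf(1 - alpha, delta_df)   # critical value under H0 (equal intercepts)
power = ncx2.sf(crit, delta_df, ncp)    # P(reject | means truly differ)

print(f"power = {power:.2f}")
# A model can fit well overall and still yield low power here if the
# noncentrality produced by the equality constraint is small.
```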
Take a look and try it for yourself, with just Excel and the [previously] free AMOS v.5 software; I can email you these files (ask at comanus@netscape.net ), and they are also posted here: http://bit.ly/2eGSxlq
AMOS does MODEL COMPARISONS for ALL possible nested models, assuming one is correct and testing the others nested within it... Pretty cool!!!
The little paper telling this story is available at:
http://digitalcommons.wayne.edu/jmasm/vol13/iss1/6/
Coman, E. N., Iordache, E., Dierker, L., Fifield, J., Schensul, J. J., Suggs, S., & Barbour, R. (2014). Statistical power of alternative structural models for comparative effectiveness research: advantages of modeling unreliability. Journal of Modern Applied Statistical Methods, 13(1).