Monday, February 15, 2021

Spatial mediation demonstration

This post is a ‘how & why’ to analyze spatial data (at the census tract, ZIP, town, or county level, e.g.) in the more flexible structural equation modeling (SEM) manner. The technical details will only be hinted at; this is a pretty specialized area, so much so that it has been branded with a solid name (spatial econometrics).

Here we will walk through: (1). How to get the data; (2). How to map and analyze the spatial data using spatial regression models; (3). How to create spatially lagged variables for all effects in the mediation model (M and Y, pretty much); (4). How to run spatial mediation models (SEM).

Notes on software: some choices of software are habit/utility, and you can switch them: (i). I use Stata for SEM/mediation here; I normally use Mplus, and R/lavaan would do too, or Onyx, or others. (ii). GeoDa is the only one I know of that can generate a spatially lagged variable for you, AND allow you to save it and then use it in another software for analyses; it also does mapping the quickest (Stata can do maps nicely too, even over time, see http://bit.ly/covidCT_video ). (iii). To show 2 variables (‘layers’) at a time in a map, Tableau seems the fastest and most flexible option (see an example here http://bit.ly/debtct_zip ); here, however, we have 3 such pairs of variables… and I don’t think visualizing spatial mediation overall has been done by anyone yet (visualizing plain mediation hasn’t been accomplished well either). So let’s proceed with the steps:

1. How to get the data

Download some free data to work with; I chose CDC's Social Vulnerability Index (SVI): free, and covering the whole US. You could analyze US county-level data, e.g., but with thousands of counties it’s less ‘visible’ what we are doing, and the map gets too busy: I will use Connecticut (CT) data at the Census Tract level for this (aggregating up to ZIP would be an option). To download it as such, choose from Data Documentation; get BOTH the shape file (Connecticut.zip in my case) and the CSV one.

From the many files you need to extract and use 3: *.shp & *.dbf & *.shx

NOTE: If you have shape files separate from your own data, you will need to MERGE them; GeoDa can easily do it, but you need to make sure you have the SAME spatial/region code in both (e.g., census tract, or ZIP code): a worked example is here.

2. How to map and analyze the spatial data using spatial regression models

* Get GeoDa (if you haven’t yet) and install it. Open the shape file; a map will pop up too. A GeoDa intro is here.




2.0. Target a specific theoretical model to investigate; here we want “racial/ethnic minority -> income -> uninsured”, or EP_MINRTY -> EP_PCI (inc_sqrt) -> EP_UNINSUR (the variables are nicely described in the documentation).

2.a. Visualize them first: as a trio, using Scatter plot matrix 



- this shows out-of-range values, and one can click on the dots in the scatterplots and bring up BOTH the table and the map, to see ‘who’ the offending cases are, and why.

- here one can also see regression coefficients, in both directions, for each pair: there is no ‘correlation’ here, and for a good reason: with spatial data a ‘bidirectional’ coefficient like a correlation is less meaningful, because one needs an ‘effect’ (outcome) and a ‘cause’ (predictor), so that one can then add the properly NEEDED spatially lagged variable for the effect: only then can one talk about an X -> Y spatial effect, after accounting for the lag of Y, as in: X + LagY -> Y.

2.b. There are some -999 values in the data that need to be made BLANK/deleted.

One way to handle them is to click in EACH cell and delete the -999 value: 3 found and deleted here.

+ At this point it is better to save the project (which carries the table behind it):


2.c. Also, it would be better for ease of estimation to rescale income (EP_PCI) to, e.g., US$10,000s; but I am not sure how to do that in here, so I will take the SQUARE ROOT instead: go to Table view, right click and choose Add Variable, type inc_sqrt (e.g.); then repeat and choose ‘Calculator’, and select as operator SQUARE ROOT. Btw, these were the variables:


EP_MINRTY: Percentage minority (all persons except white, non-Hispanic) estimate, 2014-2018 ACS. (Documentation note: this calculation resulted in some division-by-0 errors in cases where E_HH equals 0; those rows were revised with the estimated proportions set to 0 and their corresponding MOEs set to -999.)

EP_UNINSUR: Adjunct variable - Percentage uninsured in the total civilian noninstitutionalized population estimate, 2014-2018 ACS.

EP_PCI: Per capita income estimate, 2014-2018 ACS.

2.d. Now we need a ‘weight matrix’ that tells GeoDa what we think counts as a ‘neighbor’; there are several options here, and we’ll go with Queen contiguity: go to the Tools tab and choose Weights Manager; click Create; then select the ID variable and choose FIPS; leave Queen contiguity, order of contiguity 1; click Create: a window pops up to SAVE this new file (*.gal) under a name of your choice, say CTCensusTracts_queen; then Close.

You can close this window now.


2.e. Now one can examine these ‘weights’, or ‘who is neighboring whom’ (not shown here for now).

3. Create spatially lagged variables for all effects in the mediation model (M and Y pretty much)

3.a. Create the spatially lagged variables: lagincom for inc_sqrt, and lagunins for EP_UNINSUR; there is no need for a lag of the cause/predictor X, since no variable is pointing into it.

Right click in the Table: choose Calculator, then the Spatial Lag tab; the new weight matrix now shows up in the ‘Weight’ box. Click on Add Variable, enter say ‘lagincom’, then click on ‘Add’. NOW you can define what to compute in it: click in Variable on the target (here inc_sqrt); leave ‘Use row-standardized weights’ checked; then you are done: click Apply.




You now have 3 new variables in the data (inc_sqrt, lagincom, lagunins): so SAVE the project.


NOTE: These lagged variables are NOT needed by GeoDa itself, which computes them behind the scenes: they are needed so you can SAVE them and then analyze them in a SEM program.

3.b. Save the new data, and bring it into another software for the SEM analyses: for spatial mediation.

Before this, run EP_MINRTY -> EP_UNINSUR as a Classic regression; then click on the Weight box and run Spatial Lag, then Spatial Error: save them.

Tip: try running the ‘Classic’ GeoDa regression EP_MINRTY + lagunins -> EP_UNINSUR and then compare it to the ‘Spatial Lag’ GeoDa regression EP_MINRTY -> EP_UNINSUR: any guess what they will show??? (yes, identical results; in both GeoDa and Stata, of course).

4. Run spatial mediation models (SEM)

4.a. To save the file: Save As, click on the yellow ‘open’ icon, choose ‘Comma Separated Value’ (*.csv); click ‘OK’ and give it a name (you will see a ‘Saved Successfully’ pop-up!); it is then safer to open the CSV in Excel and save it as *.xls first, then import it into Stata with Import\Excel.

- Stata can now handle 2 lagged effects in the model, not only the one for the final effect/outcome, as in GeoDa (this is the power of SEM: handling simultaneous effects). So we fit 2 equations:

EP_MINRTY +  lagincom -> inc_sqrt

EP_MINRTY +  lagunins + inc_sqrt -> EP_UNINSUR

Quick results (a fuller results table is available) are:

The naïve effect NonWhite -> Uninsured is: for a 10 percentage-point increase in percent non-White, there is a 1.2 percentage-point increase in percent uninsured (SE=0.05, t=24.7).

The proper spatially lagged effect, however, is about half of that: a 0.55 percentage-point increase (SE=0.06, t=9.8).

In SEM, the total effect is a 0.47 percentage-point increase (SE=0.06, t=7.5; it differs because we also properly spatially lagged the mediator M = income!), of which:

* 10% is indirect: a 0.05 percentage-point increase (SE=0.02, t=2.9 per the nlcom output below),

* and the rest, 90%, is direct (residual): a 0.42 percentage-point increase (SE=0.07, t=6.0). I paste the Stata code and results:

* Y <- X M lag [with sem]:
sem EP_UNINSUR <- EP_MINRTY inc_sqrt lagunins, nocapslatent
(3 observations with missing values excluded)
Endogenous variables
Observed:  EP_UNINSUR
Exogenous variables
Observed:  EP_MINRTY inc_sqrt lagunins
Fitting target model:
Iteration 0:   log likelihood = -12439.076
Iteration 1:   log likelihood = -12439.076
Structural equation model                       Number of obs     =        827
Estimation method  = ml
Log likelihood     = -12439.076
-------------------------------------------------------
                 |                 OIM
                 |      Coef. SE      z    P>|z|       [95%C.I.]
-------------------------------------------------------
Structural       |
  EP_UNINSUR     |
       EP_MINRTY |   .043   .007     6.00   0.000     .029    .057
        inc_sqrt |  -.010   .003    -2.96   0.003    -.016   -.003
        lagunins |   .753   .040    19.04   0.000     .676    .831
           _cons |  1.892   .799     2.37   0.018     .325   3.459
-------------------------------------------------------
var(e.EP_UNINSUR)|   12.556   .618                   11.403   13.827
-------------------------------------------------------
LR test of model vs. saturated: chi2(0)   =      0.00, Prob > chi2 =      .
* Now the full SEM mediation model, with gsem:
gsem (inc_sqrt <- EP_MINRTY lagincom ) (EP_UNINSUR <- EP_MINRTY inc_sqrt lagunins) , nocapslatent
gsem, coeflegend
Generalized structural equation model           Number of obs     =        828
Response       : inc_sqrt                       Number of obs     =        828
Family         : Gaussian
Link           : identity
Response       : EP_UNINSUR                     Number of obs     =        827
Family         : Gaussian
Link           : identity
Log likelihood = -6130.5325
-------------------------------------------------------
                  |      Coef.  Legend
-------------------------------------------------------
inc_sqrt          |
        EP_MINRTY |  -.4799262  _b[inc_sqrt:EP_MINRTY]
         lagincom |   .8298569  _b[inc_sqrt:lagincom]
            _cons |   49.46046  _b[inc_sqrt:_cons]
--------------------------------------------------------
EP_UNINSUR        |
         inc_sqrt |  -.0096035  _b[EP_UNINSUR:inc_sqrt]
        EP_MINRTY |   .0425863  _b[EP_UNINSUR:EP_MINRTY]
         lagunins |   .7530868  _b[EP_UNINSUR:lagunins]
            _cons |   1.891827  _b[EP_UNINSUR:_cons]
-------------------------------------------------------
   var(e.inc_sqrt)|   741.4005  _b[/var(e.inc_sqrt)]
 var(e.EP_UNINSUR)|   12.55649  _b[/var(e.EP_UNINSUR)]
-------------------------------------------------------
*       IE = indirect effect
nlcom _b[inc_sqrt:EP_MINRTY]*_b[EP_UNINSUR:inc_sqrt]
       _nl_1:  _b[inc_sqrt:EP_MINRTY]*_b[EP_UNINSUR:inc_sqrt]
-------------------------------------------------------
             |      Coef. SE       z      P>|z|      [95%C.I.]
-------------------------------------------------------
       _nl_1 |    .005   .002     2.86   0.004     .001   .008
-------------------------------------------------------
 
*       TE = total effect
nlcom _b[EP_UNINSUR:EP_MINRTY] + _b[inc_sqrt:EP_MINRTY]*_b[EP_UNINSUR:inc_sqrt]
       _nl_1:  _b[EP_UNINSUR:EP_MINRTY] + _b[inc_sqrt:EP_MINRTY]*_b[EP_UNINSUR:inc_sqrt]
-------------------------------------------------------
             |      Coef. SE      z     P>|z|     [95%C.I.]
-------------------------------------------------------
       _nl_1 |   .047   .006     7.51   0.000     .035    .060
-------------------------------------------------------



Sunday, June 14, 2020

Changes in 3 groups

This post shows how to test for differences between groups (like race/ethnicity, RE; ‘health disparities’, HDs) in changes, here for two waves of data (‘pre’ and ‘post’). I am just pointing to the cool solution, and mention other software capacities. Before the latent change score (LCS) model, though, a note on group comparisons, period. Someone said somewhere (I can’t find it anymore) that he doesn’t ever need to run an ANOVA test again in his remaining professional life; that’s what I am aiming for too. So:

*!!! This is to run in Stata multiple group comparisons of means instead of anova
* Here YYY is your outcome; raceetn3 is the 3 group grouping variable
sem ( <- YYY), group(raceetn3) ginvariant(none) means(YYY ) var(YYY) method(mlmv) coeflegend nocapslatent
 * all 3 Wald test
test (_b[/mean(YYY)#1.raceetn3] = _b[/mean(YYY)#2.raceetn3] ) ///
 (_b[/mean(YYY)#2.raceetn3] = _b[/mean(YYY)#3.raceetn3] )
  * pairwise Wald test 1 vs 2
test (_b[/mean(YYY)#1.raceetn3] = _b[/mean(YYY)#2.raceetn3] )
  * pairwise Wald test 2 vs 3
test (_b[/mean(YYY)#2.raceetn3] = _b[/mean(YYY)#3.raceetn3] )
  * pairwise Wald test 1 vs 3
test (_b[/mean(YYY)#1.raceetn3] = _b[/mean(YYY)#3.raceetn3] )

I am showing 2 options for LCS:
1. This one allows one to quickly assess whether means are equal (across the 3 groups, or more) or 2 at a time; I am showing an example for comparing Males (M) only: 'if Gender=="M"'. The BIGGEST advantage here is that even when there is attrition, i.e. fewer cases at 'post', this model estimates changes 'for the full sample'!

* Here YYY is your outcome; raceetn3 is the 3 group grouping variable
sem (L21 -> YYY_t2@1) (YYY_t1@1 _cons@0 -> YYY_t2)  if Gender=="M" , ///
 group(raceetn3) ginvariant(none) means(L21 ) var(e.YYY_t2@0 L21 ) method(mlmv) latent(L21) coeflegend
 * all 3 Wald test
test (_b[/mean(L21)#1.raceetn3] = _b[/mean(L21)#2.raceetn3] ) ///
 (_b[/mean(L21)#2.raceetn3] = _b[/mean(L21)#3.raceetn3] )
  * pairwise Wald test 1 vs 2
test (_b[/mean(L21)#1.raceetn3] = _b[/mean(L21)#2.raceetn3] )
  * pairwise Wald test 2 vs 3
test (_b[/mean(L21)#2.raceetn3] = _b[/mean(L21)#3.raceetn3] )
  * pairwise Wald test 1 vs 3
test (_b[/mean(L21)#1.raceetn3] = _b[/mean(L21)#3.raceetn3] )
2. This allows one to also see the means set equal (all of them, or 2 at a time)
* One can also fit the M2 model (no equality constraints), then the M1 model (with equality constraints), and compare them using a likelihood ratio test; here the UCLA post is illuminating as usual.

*note one has now pre YYY_t1 and post YYY_t2
sem (L21 -> YYY_t2@1) (YYY_t1@1 _cons@0 -> YYY_t2)  if Gender=="M" , ///
 group(raceetn3) ginvariant(none) means(L21 ) var(e.YYY_t2@0 L21 ) method(mlmv) latent (L21)
estimates store diffLCSmeans
sem (L21 -> YYY_t2@1) (YYY_t1@1 _cons@0 -> YYY_t2)  if Gender=="M" , ///
 group(raceetn3)  means(1: L21@equallcs )  means(2: L21@equallcs )  means(3:L21@equallcs ) var(e.YYY_t2@0 L21 ) method(mlmv) latent (L21)
estimates store eqLCSmeans
lrtest diffLCSmeans eqLCSmeans

* can test 2 at a time, 1 vs 2: I use a shared label equallcs to set two means equal, and distinct labels for the means left free/not equal: black = group 1, white = group 2, hisp = group 3 (here group 3 is left free)
sem (L21 -> YYY_t2@1) (YYY_t1@1 _cons@0 -> YYY_t2)  if Gender=="M" , ///
 group(raceetn3)  means(1: L21@equallcs )  means(2: L21@equallcs )  means(3:L21@hisp ) var(e.YYY_t2@0 L21 ) method(mlmv) latent (L21)
estimates store eq1vs2
lrtest diffLCSmeans eq1vs2

* can test 2 at a time, 2 vs 3
sem (L21 -> YYY_t2@1) (YYY_t1@1 _cons@0 -> YYY_t2)  if Gender=="M" , ///
 group(raceetn3)  means(1: L21@black )  means(2: L21@equallcs )  means(3:L21@equallcs ) var(e.YYY_t2@0 L21 ) method(mlmv) latent (L21)
estimates store eq2vs3
lrtest diffLCSmeans eq2vs3

* can test 2 at a time, 1 vs 3
sem (L21 -> YYY_t2@1) (YYY_t1@1 _cons@0 -> YYY_t2)  if Gender=="M" , ///
group(raceetn3)  means(1: L21@equallcs )  means(2: L21@white )  means(3:L21@equallcs ) var(e.YYY_t2@0 L21 ) method(mlmv) latent (L21)
estimates store eq1vs3
lrtest diffLCSmeans eq1vs3

3. I tried the constraint option too, like:
constraint 1 _b[YYY:1.GROUPING] - _b[YYY:2.GROUPING] = 0
constraint 2 _b[YYY:2.GROUPING] - _b[YYY:3.GROUPING] = 0
which allows you to run the model first without them, then re-run it adding at the end
constraints(1 2)
but it doesn’t seem to do the trick for me.
More:
i. Mplus does this easily and flexibly, of course, but the Stata solution is of note, because Stata is a ‘general statistics’ software, not (what some think Mplus is) a SEM-dedicated software. A longer explained code is here. One can also go to this demonstration/deconstruction piece too.

ii. Other software can do it too, of course; the SAS LCS code is below, e.g., though not with group contrasting (for now).
* This simply introduces a NEW variable you don't have in the data, one that statistically forces the mathematical equality: y2 = y1 + L21;

proc calis data = one ;
path
y2 <--- L21 = 1, /* constrained = 1 */
y2 <--- y1  = 1; /* constrained = 1 */
/* fchange <--- y1 = a; POSSIBLE */
pvar
y2  = 0, /* constrained = 0 */
y1  = var1,
L21 = varlds;
mean
y2  = 0, /* constrained = 0 */
L21 = meanlds; /* otherwise it's not estimated: Mn=1.80488 SE=3.10569 t=0.5812 */
run;

Sunday, June 7, 2020

Latent growth mirrors linear mixed and multilevel models

[slides, data and more SAS details at https://bit.ly/LGM_Mplus_Stata ] This post deals with analyzing many-time-points data: intensive longitudinal data (ILD; some use ecological momentary assessment, EMA (Shiffman, Stone, & Hufford, 2008), or experience sampling method (Larson & Csikszentmihalyi, 1983)); an example is to the right, with 28 waves. Note: when analyzing 1 case only, these are labeled time series.

I extend the excellent crosswalk, posted by the UCLA folks, between 2 less intuitive statistical models of change for many time points of data: the Linear Mixed Model (LMM) and the MultiLevel Model (MLM). I add here a 3rd, which I didn’t see until trying to replicate the 2 they linked: the Latent Growth Model (LGM, from the structural equation modeling, SEM = latent variables, tradition). What I add here then is: SAS+Stata+Mplus code to run the LGM that replicates the LMM and the MLM analyses, and then: setting them side-by-side for direct matching that can allow for more informed interpretation. I also add some help with interpreting results, heavily drawn from ch. 4 of the intensive longitudinal analysis (ILA) ‘bible’ by (Bolger & Laurenceau, 2013).
Several points for ease of navigation: (1). Follow the subscripts! I use subscripts generously, and insist on keeping a dot in there when that variability dimension has been muted, either because we averaged across it, or because that variable or parameter does not vary across it; (2). In addition to using the original Greek letters in setting up the mathematical models, I then revert to their simpler labels, as the UCLA folks did too: G00, e.g., instead of γ00; (3). I use the same data, publicly available online (in *.dat format, ready for Mplus, and Stata too: this means it does not have any variable labels or codes in it, it’s raw numbers data); (4). I add the SAS LMM (MIXED) and LGM (CALIS) syntax, because SAS is often omitted in such head-to-head comparisons; e.g. in a ‘growth bible’ book (Grimm, Ram, & Estabrook, 2017) (with lots of online code) SAS (nlmixed) and R codes are used for the LMM models, then Mplus and R (but not SAS anymore) for the LGM ones.

Let’s dive in:
(A). The biggest barrier to easily analyzing data with many time points (>5, e.g.) with these models is the naming of things, and then the Greek letters, and then the (non)use of subscripts.

(B). To interpret consistently across models, and obtain the same results, one needs to use a time coding that is common in LGM, but not otherwise: the first time point is 0, the last one is 1, and each wave in between is set to the proportion of total time elapsed since 0; for 4 equally spaced waves this comes out as 0, .333, .667, 1 (see ch. 4 in (Bolger & Laurenceau, 2013)).
The biggest challenge in walking across the 3 models is the language/labeling/meanings. My insights are captured in the table below; also, Mplus/LGM/SEM talks about the ‘random coefficient for the time slope’ in terms of a ‘latent slope varying across cases’ instead, although equation-wise we end up in the same spot.
Symbol  | General meaning                     | LGM parameter meaning                                   | MLM parameter meaning           | LMM parameter meaning

LEVEL 1
S^2     | Y residual variance                 | σ2(εYit): Y0-YT residual variances (set @=)             | Residual Y variance             | Residual Y variance

LEVEL 1 & 2 (cross-level: ‘group by time’)
G01     | Ai. effect on Y intercept           | Ai. -> ‘LGM Y Intercept’ beta                           | L2 Ai. -> Y effect              | Ai. -> Y effect
G11     | Ai. effect on time slope            | Ai. -> ‘LGM Y Slope’ beta                               | L2 Ai. -> slopes effect         | Ai. * Time interaction effect

LEVEL 2
G00     | Intercept (Y level at time 0)       | αI.. intercept/mean of Y                                | L2 Y intercept                  | Y intercept (when all X’s are 0)
G10     | Time ‘effect’                       | αS.. intercept/mean of LGM slope                        | L2 intercept of Y slopes        | Time effect = time slope
Tau11   | Variance of time slope              | σ2(uSi.) residual variance of LGM Y Slope               | L2 slopes residual variance     | Time slopes variance
Tau00   | Variance of Y intercept             | σ2(uIi.) residual variance of LGM Y Intercept           | L2 intercepts residual variance | Y intercepts variance
Tau01   | Covariance time slope * Y intercept | σ(uIi., uSi.) covariance LGM Y Intercept * LGM Y Slope  | L2 slopes*intercepts covariance | Time slopes with Y intercepts covariance
G20     | Effect of X on Y                    | Contemporaneous Xit -> Yit effects (set equal across t) | Within-level Xit -> Yit effect  | Xit -> Yit effect
Formally:
1. LMM and MLM setup is:
Level-1 Model
     Y = B0 + B1*(TIME) + B2*(X) + E
Level-2 Model
     B0 = G00 + G01*(A) + U0
     B1 = G10 + G11*(A) + U1   
B2 = G20                       
These can be ‘combined’ into 1 equation by substituting the level-2 equations for B0, B1, and B2 into the level-1 equation (which is what makes the LMM harder to intuitively decipher!):
Y = G00 + G20*(X) + G01*A + G10*TIME + G11*A*TIME + [U0 + U1*TIME + e]
+ with some additional parameters that appear now at level-2:
Tau00 = sigma(U0,U0); Tau11 = sigma(U1,U1); Tau01 = sigma(U0,U1)
or with Greek and subscripts:
     yit = β0i. + β1i.∙Time.t + β2i.∙Xit + 1∙εit
Level-2 Model
     β0i. = γ00 + γ01*Ai. + u0i.
     β1i. = γ10 + γ11*Ai. + u1i.  
β2i. = γ20
τ00 = σ2(u0i.)i  ; τ11 = σ2(u1i.)i  ; τ01 = σ(u0i., u1i.)i
(I add a ‘redundant’ last subscript outside () to indicate variability dimension that gives rise to the co/variance)

2. The LGM setup is:
Level-1 Model
Y = 1*B0 + B1*(TIME.) + B2*(X) + E (the times here are fixed ‘loading’ numbers; they can also be freed)
or with Greek and subscripts:
yit = 1∙ηYIntercept,i. + ηYSlope,i.∙λSlope,.t + βYX,..∙Xit + 1∙εYit            [βYX,.. forced equal across time and persons!]
Level-2 Model
B0 = G00 + G01*(A) + U0
     B1 = G10 + G11*(A) + U1
B2 = G20
or with Greek and subscripts:
ηYIntercept,i. = αIntY,.. + βAInt,..∙Ai. + uYIi.                              ~ B0 = G00 + G01*(A) + U0
ηYSlope,i.     = αSloY,.. + βASlo,..∙Ai. + uYSi.                              ~ B1 = G10 + G11*(A) + U1
βYX,.. = βYX,..                                                               ~ B2 = G20
The major insights gained are (this comes through in the table of meanings above too):
(1). G00 or γ00 is LGM’s  αIntY,..
(2). G10 or γ10 is LGM’s αSloY,..
(3). G01 or γ01 is LGM’s βAInt,..
(4). G11 or γ11 is LGM’s βASlo,..
(5) G20 or γ20 is LGM’s βYX,..  
(6) Tau00 or τ00 is LGM’s σ2(uYIi.)i
(7) Tau11 or τ11 is LGM’s σ2(uYSi.)i
(8) Tau01 or τ01 is LGM’s σ(uYIi., uYSi.)i
- I use A as a time-fixed (non-varying) predictor; A.t is sometimes used for ‘treatment’ (T would not work here, because with t for time, T is the maximum number of waves…) and Xit for a time-varying predictor (I switched the UCLA notation, sorry). Note that the gamma, beta, lambda, eta Greek labels don’t make our job of understanding easier; I would want a simpler labeling, common across all traditions; for now I think the UCLA G00 etc. that comes out of the LMM (and MLM) is good enough to show the commonalities between models: this means I will drop the LGM-specific labels here, to make it clear what LGM estimates that matches the LMM/MLM approaches.
- Note that the ‘intercept/slope’ language quickly becomes a problem unless we complete the phrase: whose intercept/slope… much like a regression coefficient β, for that matter: βYX carries two more pointers, from what to what. E.g. βYTime,1i. is the time slope of Y, the outcome (Y and ‘Time’ can often be dropped), and because it becomes itself a variable at level-2, it will have its own intercept estimated, G10 or γ10, and its own slope (if a predictor of it is modeled)… G11 or γ11. Hence G10/γ10 is the ‘level-2 intercept, or conditional mean, of the Y time slope’, while G11/γ11 is the ‘level-2 slope of βYTime’, or ‘the effect of the time-invariant predictor A on the Y time slope’: yes, 2 mouthfuls!

(C). The SAS CALIS option is pretty flexible, see e.g. URL, and it has several flavors, of which we showcase 2: the lineqs option, which is equation-like coding (like y1 = 1*yintLGM + Lambda*ysloLGM + e1), and the path option (like y <--- yintLGM + Lambda*ysloLGM).

(D). The Mplus LGMs differ from the Stata and the SAS ones; among the reasons may be… “Mplus does not consider a growth model to be a two-level model as in multilevel modeling but a single-level model. With longitudinal data, the number of levels in Mplus is one less than the number of levels in conventional multilevel modeling.” (Mplus Guide, p. 113; needs further investigation). Not sure; even a simple unconditional LGM differs (see the detailed outputs at https://bit.ly/LMM_MLM_LGM ).



(E). SAS has a ‘proc’ that is underutilized: CALIS. It of course can run LGMs (as well as LCSs). E.g.: URL1; URL2; URL3.
Mplus has a unique feature for which other software needs a separate procedure: it can rename the variables you use in an analysis on the spot, without changing the data itself. Meaning, one can recycle the same analysis over and over, which comes in handy for long code (like LGM, or Latent Change Score, LCS, models).
Q&A
You might have some questions, I suspect (so I label them ‘predicted Q’s’):
pQ&A 1. What if we add auto-regressive (AR) paths in the LGM? Go ahead, go wild! You will then run an Autoregressive Latent Trajectory (ALT) model (Bollen & Curran, 2004), or LV-ALT (Bianconcini & Bollen, 2018).
pQ&A 2. Can we add nonlinear slopes in this LGM? Yes, of course, or we can even let the trajectory be ‘free’, i.e. follow the data (go up and down as the data do). Just free the lambda loadings of the latent slope (except the first, @0, and the last, @1), as sketched below.
pQ&A 3. Is this ‘new’? Certainly not; many have pointed to this possibility before, but I have not seen a ‘how to’ and side-by-side crosswalk like this anywhere. An excellent detailed appendix (with SAS, Mplus and Mx codes) accompanies (Mehta & Neale, 2005); also see the appendix of (McNeish & Matta, 2018).
pQ&A 4. Is it really ‘this simple’? Well… I was lucky to get the LGM setup right before figuring out the equations; there are several settings one can miss or mess up easily: I got the ‘all Y intercepts set @0’ insight from the online codes accompanying Kevin Grimm, Nilam Ram, and Ryne Estabrook’s growth modeling ‘bible’ (Grimm et al., 2017).
pQ&A 5. Can this be made even simpler? Likely yes, using the graphical instead of the equational display/tool: e.g. for the LGM:


pQ&A 6. Is there more to this? Certainly. Tihomir Asparouhov told me first that this is not such a surprising find (but his brain resonates at other wavelengths than normal ones), it’s just a long-to-wide change… and in fact the Mplus team has recently merged the classical ‘very many time points’ (econometric time series) tradition with the latent variable (SEM) one, under the comfortably large and colorful umbrella of DSEM. Much more is possible here to explore.
pQ&A 7. (I am asking this one, sorry) Isn’t this a good candidate for a ‘teacher’s corner’ paper for a journal like SEM? It sure looks like it to me (thanks for asking!); however, a bad experience that got me caught in some ideological disputes the editors seem to be part of (regarding the ‘existence’ of potential outcomes in SEM) made me promise that I won’t knock on that door ever again: so I prefer to lay things out in the open, both for critique and for free open use and enjoyment.

Some limitations/invitations for more:
(i). I tried for many hours to replicate the simple LMM models from Stata in SAS: there are many little quirks that made this pretty hard (I got the dreaded “WARNING: Stopped because of infinite likelihood” and another like it in response too; if someone knows the easy way to do this, email me; UCLA has a page, and URL2, and URL3, etc.); eventually Dan McNeish’s code did it for me!
(ii). The SAS MLM setup is also a next ‘must’: so that SAS users can see the link in their own preferred output format (the logic and interpretation should not change): if someone knows a quick way to do this, email me.
(iii). Can LCS replicate these, and then go wilder beyond? Most likely, that’s for another post.

References
Bianconcini, S., & Bollen, K. A. (2018). The Latent Variable-Autoregressive Latent Trajectory Model: A General Framework for Longitudinal Data Analysis. Structural Equation Modeling: A Multidisciplinary Journal, 25(5), 791-808.
Bolger, N., & Laurenceau, J. (2013). Intensive longitudinal methods: an introduction to diary and experience sampling research. New York: Guilford Press.
Bollen, K. A., & Curran, P. J. (2004). Autoregressive latent trajectory (ALT) models a synthesis of two traditions. Sociological Methods & Research, 32(3), 336-383.
Grimm, K. J., Ram, N., & Estabrook, R. (2017). Growth modeling: Structural equation and multilevel modeling approaches: Guilford Publications.
Larson, R., & Csikszentmihalyi, M. (1983). The experience sampling method. New Directions for Methodology of Social & Behavioral Science.
McNeish, D., & Matta, T. (2018). Differentiating between mixed-effects and latent-curve approaches to growth modeling. Behavior Research Methods, 50(4), 1398-1414.
Mehta, P. D., & Neale, M. C. (2005). People are variables too: Multilevel structural equations modeling. Psychological Methods, 10, 259-284.

Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological momentary assessment. Annu. Rev. Clin. Psychol., 4, 1-32.
Summary of outputs, to illustrate the identical results and how to read them:

LMM (Stata) results:
. xtmixed y x1 a time timeBYx1  || id: time, cov(un) var mle
           y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
G01       x1 |    .611028    .062607     9.76   0.000     .4883205    .7337355
G20        a |   .2978672   .0215842    13.80   0.000      .255563    .3401715
G10     time |   3.217683   .0972144    33.10   0.000     3.027147     3.40822
G11 timeBYx1 |   .8891406   .0974282     9.13   0.000     .6981847    1.080096
G00    _cons |   .6647655   .0624683    10.64   0.000     .5423298    .7872011
------------------------------------------------------------------------------
  Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
-----------------------------+------------------------------------------------
id: Unstructured             |
Tau11              var(time) |   3.724995   .3006731      3.179938    4.363477
Tau00             var(_cons) |   1.561595   .1239062      1.336685    1.824349
Tau01        cov(time,_cons) |   1.111405   .1396993      .8375996    1.385211
-----------------------------+------------------------------------------------
S^2            var(Residual) |   .5415599   .0242323      .4960884    .5911993
------------------------------------------------------------------------------
LR test vs. linear model: chi2(3) = 2209.97               Prob > chi2 = 0.0000            
MLM (Mplus) results:
                    Estimate       S.E.  Est./S.E.    P-Value
Within Level
 Y          ON
G20    A               0.298      0.022     13.352      0.000
 Residual Variances
S^2    Y               0.542      0.024     22.149      0.000
Between Level
 S          ON
G11    X1              0.889      0.101      8.836      0.000
 Y          ON
G01    X1              0.611      0.063      9.739      0.000
 Y        WITH
Tau01    S             1.112      0.145      7.685      0.000
 Intercepts
G00    Y               0.665      0.062     10.683      0.000
G10    S               3.218      0.097     33.025      0.000
 Residual Variances
Tau00    Y             1.562      0.118     13.202      0.000
Tau11    S             3.725      0.284     13.121      0.000                   

LGM (Mplus) results:
YINTLGM  ON
G01    X1              0.611      0.063      9.760      0.000
 YSLOLGM  ON
G11    X1              0.889      0.097      9.126      0.000
 YY0-3      ON
G20    AX0-3           0.298      0.022     13.772      0.000
 YSLOLGM  WITH
Tau01    YINTLGM       1.111      0.140      7.956      0.000
 Intercepts
        YY0-3          0.000      0.000    999.000    999.000
G00    YINTLGM         0.665      0.062     10.642      0.000
G10    YSLOLGM         3.218      0.097     33.099      0.000
 Residual Variances
S^2    YY0-3           0.542      0.024     22.349      0.000
Tau00    YINTLGM       1.562      0.124     12.603      0.000
Tau11    YSLOLGM       3.725      0.301     12.389      0.000


Summary of input/syntax/software codes for the replications (a ZIP with the full list of data, codes, and outputs is posted at https://bit.ly/LMM_MLM_LGM ):
1. SAS
1.1. SAS LMM
*     1. Read data first LONG/Vertical
* dbms=dta (SAS/ACCESS to PC Files) is needed to read a Stata .dta file;
proc import out= ex6long datafile = "P:\ex6.10L.dta" dbms=dta replace;
run;
proc contents data=ex6long;
run;


/* NOT completely sure about this MIXED model */
proc mixed data= ex6long covtest method=reml; *method=reml uses restricted ML;
model y = time | x1 / s ddfm=kr; *ddfm=kr uses the Kenward-Roger correction;
random int time / subject=id type=vc; *type=vc = uncorrelated random effects for the intercept and slope;
*note that Stata above used cov(un) = unstructured, so type=un may be needed to match exactly;
*adding the s option here would print MANY extra estimates: the per-subject random effects (EBLUPs);
repeated / type=vc subject=id; *homogeneous diagonal error structure;
run;

1.2. SAS LGM
*     1. Read data first WIDE/Horizontal
proc import out= ex6wide datafile = "P:\ex6.10W.dta" dbms=dta replace;
run;
proc contents data=ex6wide;
run;

title "Linear Latent Growth Model LGM Model replicating the Unconditional Linear Mixed Model LMM and the MultiLevel Model MLM";
proc calis  data= ex6wide method=ml pall;
lineqs
y1 = 0. * Intercept + 1*F_yintLGM + 0 * F_ysloLGM          + e1,
y2 = 0. * Intercept + 1*F_yintLGM + 0.3333333 * F_ysloLGM + e2,
y3 = 0. * Intercept + 1*F_yintLGM + 0.6666666 * F_ysloLGM + e3,
y4 = 0. * Intercept + 1*F_yintLGM + 1 * F_ysloLGM          + e4;
variance
F_yintLGM F_ysloLGM,
e1-e4;
mean
F_yintLGM F_ysloLGM;
cov
F_yintLGM F_ysloLGM;
run;

* OR run like: 

title " LGM replicating the LMM and the MLM";
proc calis  data= ex6wide method=ml PSHORT ;
path
y1 <--- F_yintLGM = 1, /* constrained = 1 */
y1 <--- F_ysloLGM = 0, /* constrained = 0 */
y2 <--- F_yintLGM = 1,
y2 <--- F_ysloLGM = 0.3333333,
y3 <--- F_yintLGM = 1,
y3 <--- F_ysloLGM = 0.6666666,
y4 <--- F_yintLGM = 1,
y4 <--- F_ysloLGM = 1;
pvar
y1-y4 = reserr, /* residual variances estimated but set equal */
F_yintLGM = varint,
F_ysloLGM = varSlo;
pcov
F_yintLGM F_ysloLGM ;
mean
y1 = 0, /* y intercepts constrained = 0 */
y2 = 0,
y3 = 0,
y4 = 0;
run;

title "Conditional x1 LGM replicating the LMM and the MLM with time varying predictor too a1-4";
proc calis  data= ex6wide method=ml PSHORT ;
path
y1 <--- F_yintLGM = 1, /* constrained = 1 */
y1 <--- F_ysloLGM = 0, /* constrained = 0 */
y1 <--- a1 = bx, /* time-varying predictor effect, set equal across time */
y2 <--- F_yintLGM = 1,
y2 <--- F_ysloLGM = 0.3333333,
y2 <--- a2 = bx,
y3 <--- F_yintLGM = 1,
y3 <--- F_ysloLGM = 0.6666666,
y3 <--- a3 = bx,
y4 <--- F_yintLGM = 1,
y4 <--- F_ysloLGM = 1,
y4 <--- a4 = bx,
F_yintLGM <--- x1,
F_ysloLGM <--- x1;
pvar
y1 = reserr, /* residual variances estimated but set equal */
y2 = reserr,
y3 = reserr,
y4 = reserr,
F_yintLGM = varint,
F_ysloLGM = varSlo;
pcov
F_yintLGM F_ysloLGM ;
mean
y1 = 0, /* y intercepts constrained = 0 */
y2 = 0,
y3 = 0,
y4 = 0,
F_yintLGM = mnInt, /* declare these so they are estimated */
F_ysloLGM = mnSlo;
run;
SELECTION of output to show identical results:

Absolute Index     Value
Fit Function       0.0724
Chi-Square         36.1498
Chi-Square DF      25
Pr > Chi-Square    0.0694

Path                         Param.   Est.    SE      t Value   p
y1-4      <=== a1-4          G20      0.298   0.022   13.787    <.0001
F_yintLGM <=== x1            G01      0.611   0.063    9.750    <.0001
F_ysloLGM <=== x1            G11      0.889   0.098    9.117    <.0001

Variance Parameters
Variance   Variable          Param.   Est.    SE      t Value   p
Error      y1-4              S^2      0.543   0.024   22.338    <.0001
           F_yintLGM         Tau00    1.565   0.124   12.591    <.0001
           F_ysloLGM         Tau11    3.733   0.301   12.387    <.0001

Means and Intercepts
           Variable          Param.   Est.    SE      t Value   p
           F_yintLGM         G00      0.665   0.063   10.620    <.0001
           F_ysloLGM         G10      3.218   0.097   33.033    <.0001

Covariances Among Errors
Error of   (with)            Param.   Est.    SE      t Value   p
F_ysloLGM  F_yintLGM         Tau01    1.114   0.140    7.9518   <.0001
2. Stata
2.1. Stata LMM
xtmixed y x1 a time timeBYx1  || id: time, cov(un) var mle
(yes, that’s all)

2.2. Stata LGM

* note: one would need to switch X and A to mirror the equations above! The original data had X as the time-fixed and A as the time-varying predictor: bad naming choices
* Unconditional Mplus eg ex6.10W
sem (y1 <- IntY@1 SloY@0              _cons@0) ///
(y2 <- IntY@1 SloY@.33333334  _cons@0) ///
(y3 <- IntY@1 SloY@.66666669  _cons@0) ///
(y4 <- IntY@1 SloY@1                  _cons@0) , latent(IntY SloY) ///
var(e.y1@var e.y2@var e.y3@var e.y4@var ) /// /*set = means (IntY SloY ) */
cov(IntY*SloY) means (IntY SloY ) method(mlmv)

* Time-fixed & time-varying predictor Mplus eg ex6.10W 
* Beware: if not declaring _cons in Intercept & Slope line, df's will be larger by 2, and estimates off
 sem (y1 <- a1@beta IntY@1 SloY@0            _cons@0) ///
(y2 <- a2@beta IntY@1 SloY@.3333333  _cons@0) ///
(y3 <- a3@beta IntY@1 SloY@.6666666  _cons@0) ///
(y4 <- a4@beta IntY@1 SloY@1                 _cons@0) ///
(IntY <- x1 _cons) (SloY <- x1 _cons), latent(IntY SloY) ///
var(e.y1@var e.y2@var e.y3@var e.y4@var ) /// /*set = means (IntY SloY ) */
cov(e.IntY*e.SloY) method(mlmv)
estat gof

3. Mplus
3.1. Mplus MLM (input not pasted here; the full input and output are in the ZIP at https://bit.ly/LMM_MLM_LGM )

3.2. Mplus LGM
! note: one would need to switch X and A to mirror the equations above! The original data had X as the time-fixed and A as the time-varying predictor: bad naming choices
! time-varying outcome part
        yintLGM BY y0-y3@1;
        yintLGM;
        [yintLGM];
        ysloLGM BY y0@0
                  y1@.3333333
                  y2@.6666666
                  y3@1;
        yintLGM on x1  ;
        ysloLGM on x1  ;
        [ysloLGM];
        yintLGM WITH ysloLGM; ! the intercept-slope (residual) covariance
        y0-y3 (v_u); ! residual variances set equal across time
        [y0-y3@0]; ! set all y intercepts to 0 as in Grimm
   ! make it like LMM&MLM:
  y0 on a0 (bx);
  y1 on a1 (bx);
  y2 on a2 (bx);
  y3 on a3 (bx);
    OUTPUT: SAMPSTAT standardized tech4 tech1;