Sunday, March 22, 2026

Causality-focused basic statistics

 

This is an outline of a presentation prepared for the SEM Working Group Meeting 2026, in Warsaw, Poland, 15–17 April 2026. It was developed with colleagues from the University of Medicine, Pharmacy, Sciences and Technology of Târgu Mureș: Ioan-Bogdan Bacos, Manuela Rozalia Gabor, Laura Barcutean & Petru-Alexandru Curta.

    The arguments are old [i] but less well known: statistical modeling is not statistical testing, and whereas modeling is done more intuitively in a graphical, structural way, statistical tests are just ‘hammers’ one uses on different nails… David Kenny showed [ii] nearly four decades ago that models can be easily expressed like [iii]

independent variable   ->   dependent variable

He defined a model as “a formal representation of a set of relationships between variables” (there is also Model Theory [iv]).

As Jim Jaccard and Jacob Jacoby have shown [v] (see pic in footnotes), many statistical tests really tackle the same model, commonly some xcont -> ycont relation (xcont means x is continuous; x01 instead is a binary x; for more causal-focused discussions, go to Tinyurl.com/ONCAUSALITY).

To make this ‘visible’, we show how to ‘run’ several statistical tests and that they necessarily reach the same conclusion in terms of the ‘p value’, i.e. the extent to which we decide (or not) that a relation is non-null. We share a link to the Copilot.AI chat that implements the technical parts of our illustration (one does not need to ‘know’ software coding in the age of AI…).

     We first generated data in the very flexible and intuitive graphical modeling software Onyx[vi] using a data generating model

ivcont -> xcont -> mcont -> ycont [& xcont ->  ycont]

which saves a csv file; we then dichotomized all variables in Excel, around their means, to create binary counterparts, and computed xbym as the product xcont*mcont: these data will then be read into R and used for the demonstration to follow.
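For readers without Onyx at hand, the same generate-dichotomize-multiply pipeline can be sketched in a few lines of Python (our own workflow used Onyx, Excel, and R; the path coefficients, seed, and sample size below are illustrative placeholders, not the ones in our Onyx model):

```python
import numpy as np

rng = np.random.default_rng(2026)
n = 100

# Data-generating chain: ivcont -> xcont -> mcont -> ycont [& xcont -> ycont]
# (the 0.5/0.3 coefficients and n = 100 are illustrative placeholders)
ivcont = rng.normal(size=n)
xcont = 0.5 * ivcont + rng.normal(size=n)
mcont = 0.5 * xcont + rng.normal(size=n)
ycont = 0.5 * mcont + 0.3 * xcont + rng.normal(size=n)

# Dichotomize around the means to create the binary counterparts
x01 = (xcont > xcont.mean()).astype(int)
m01 = (mcont > mcont.mean()).astype(int)
y01 = (ycont > ycont.mean()).astype(int)

# Product term for the interaction/'causal' mediation models
xbym = xcont * mcont
```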

We show the model equivalence of the following statistical tests:

STATISTICAL TEST           STRUCTURAL MODEL

(1) t-test                 x01 -> y01 (x01 -> ycont similar)

(2) F-test                 x01 -> y01 (x01 -> ycont similar)

(3) chi-squared test       x01 <-> y01 (cannot run x01 -> ycont)

(4) simple regression      x01 -> y01 (the correlation x01 <-> y01, & xcont <-> ycont, should reach a similar conclusion)

(5) a path model           x01 -> y01 (x01 -> ycont similar)

We then add a third variable and show that it can play several distinct roles:

(6) A mediator                             xcont -> mcont -> ycont [& xcont -> ycont]

(7) An instrumental variable (IV) model    ivcont -> xcont -> ycont [no ivcont -> ycont path]

(8) Pearl’s mediating IV model             xcont -> mcont -> ycont [no xcont -> ycont path]

Beyond this, adding an xcont*mcont interaction term opens up modeling options for ‘causal’ mediation too (an Mplus translation of Tyler VanderWeele’s SAS decomposition code is on SEMNET; AIs can do this right away now, and one for R exists already [vii]).

The results of the simulation and analyses are:

(1) t-test:                   t = 0.39753, df = 95.447, p-value = 0.6919

(2) F-test:                   F = 0.158, p-value = 0.692

(3) Chi-squared test:         X-squared = 0.16103, df = 1, p-value = 0.6882

(4) Simple regression:        t value = -0.398, Pr(>|t|) = 0.692

(Pearson correlation necessarily mirrors the regression findings: t = -0.39757, df = 98, p-value = 0.6918)

(5) Path model (with lavaan): z-value = -0.402, P(>|z|) = 0.688

Their p-values align [viii]: we would conclude the same thing.
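This alignment is no accident of our data: on any dataset, the pooled-variance t-test, the one-way ANOVA F-test, and the regression slope test for the same x01 -> y01 relation are algebraically linked (F = t²), while the 2x2 chi-squared statistic is N times the squared correlation of the two binaries. A Python sketch with made-up binary data (not our simulated file) makes the identities visible:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.integers(0, 2, size=n).astype(float)   # a made-up binary x01
y = rng.integers(0, 2, size=n).astype(float)   # a made-up binary y01
y0, y1 = y[x == 0], y[x == 1]
n0, n1 = len(y0), len(y1)

# (1) pooled two-sample t statistic for y across the two x groups
sp2 = ((n0 - 1) * y0.var(ddof=1) + (n1 - 1) * y1.var(ddof=1)) / (n - 2)
t = (y1.mean() - y0.mean()) / np.sqrt(sp2 * (1 / n0 + 1 / n1))

# (2) one-way ANOVA F statistic from sums of squares: exactly t**2
grand = y.mean()
ss_between = n0 * (y0.mean() - grand) ** 2 + n1 * (y1.mean() - grand) ** 2
ss_within = ((y0 - y0.mean()) ** 2).sum() + ((y1 - y1.mean()) ** 2).sum()
F = (ss_between / 1) / (ss_within / (n - 2))

# (4) the regression/correlation t for the slope of y on x: identical to (1)
r = np.corrcoef(x, y)[0, 1]
t_reg = r * np.sqrt((n - 2) / (1 - r ** 2))

# (3) Pearson chi-squared for the 2x2 table equals N * r^2: close to,
# but not exactly, t^2 -- which is why the chi-squared p-value in our
# results (0.6882) differs slightly from the others (0.692)
chi2 = n * r ** 2
```

(Our own t-test above used the Welch unequal-variance version, df = 95.447, so its p-value differs in the later decimals; with the pooled version the t, F, and regression p-values coincide exactly.)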

All of them however can be replaced by a ‘walk’ through the path model “x01 -> y01”, using as ‘raw’ data the variances of, and the covariance between, the variables. This ‘tracing rule visual estimation’ will replicate the regression and path analysis results in terms of the actual estimate; the tracing rule does not, however, provide statistical significance tests.

The effects estimated in R were: Regression: -0.040 & Path analysis: -0.03982

The tracing rule simply leads to the solution

Effect x01 -> y01 = Covariance(x01, y01)/Variance(x01)

which yields the same result: [Tracing rule] -0.04025
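This Cov/Var identity is easy to verify numerically; a Python sketch with arbitrary simulated data (not our file), cross-checked against an off-the-shelf least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.integers(0, 2, size=100).astype(float)   # a binary 'x01'
y = 0.2 * x + rng.normal(size=100)               # an arbitrary outcome

# Tracing rule for a single predictor: effect = Cov(x, y) / Var(x)
slope_tracing = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# The least-squares slope is the same number
slope_ols = np.polyfit(x, y, 1)[0]
```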

For (6)-(8), the codes are in the R appendix r_Poland.txt – all are easy to ‘grab’ with an AI assisting.

The file contains two more ‘free gifts’: dagitty and MIIVsem code to investigate which ‘statistical adjustments/controls’ have to be done, and NOT done, when focused on specific causal effects of interest.

Of course, each test is better suited to some continuous/categorical combination of variables: e.g. the t-test, the F-test, and the z-test in the regression model commonly use a continuous outcome (but they work with a binary one too).

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

PROMPT used in Copilot:

Using the notation ivcont, xcont, mcont, ycont, for 4 continuous variables, and x01, m01, y01 for 3 binary variables, generate R code to analyze some of them using the following tests:

(1) t-test for x01 -> y01

(2) F-test for x01 -> y01

(3) chi-squared test for x01 <-> y01

(4) simple regression for x01 -> y01

(5) a path model (with lavaan) x01 -> y01

(6) a mediation model (lavaan)  xcont -> mcont -> ycont [& xcont -> ycont]

(7) an instrumental variable (IV) model (lavaan)  ivcont -> xcont -> ycont [no ivcont -> ycont path]

(8) Pearl’s mediating IV model (lavaan) xcont -> mcont -> ycont [no xcont -> ycont]

[then asked for Pearson correlation for x01 <-> y01]


[i] Robin Beaumont has shown this in great detail in 2017: SEM equivalent to basic statistical procedures

[ii] Kenny, D. A. (1987). Statistics for the social and behavioral sciences. Boston: Little, Brown. Posted by the author at https://davidakenny.net/doc/statbook/kenny87.pdf

[iii] “Research in the behavioral and social sciences often involves testing statistical models.

What Is a Model?

A statistical model is a formal representation of a set of relationships between variables. Statistical models contain an outcome variable that is the focus of study. […]

A very simple model is one in which the dependent variable equals a constant plus the residual variable.

dependent variable = constant variable + residual variable

[…]  In simple equation form the model is

dependent variable = effect of the independent variable + residual variable

Instead of expressing the model as an equation, the model could be just as easily specified by a diagram; arrows could be drawn from cause to effect, as follows:

independent variable   ->   dependent variable    <-   residual variable

A representation of a model that uses arrows is called a path diagram.”  (Kenny, 1987), p. 184-5

[iv] Rizza, D. (2025). Model Theory: The Algebraic Basics. Springer.

[v] Jaccard, J., & Jacoby, J. (2009). Theory construction and model-building skills: A practical guide for social scientists. Guilford Press.

[vi] The Onyx steps are simple; Robin Beaumont has a series of trainings on YouTube, see WWW1

[vii] Software choice can of course be expanded at will, see e.g. Python and Stata

[viii] That the t, z, F, and chi-squared tests are special cases of each other and can be mathematically derived one from another, under special conditions, Gemini.AI confirmed for us (but you can verify too).

 

 




Friday, December 27, 2024

Run a Spatial lag regression in Excel

 

This is an explanatory note for the ‘How to’ training videos tinyurl.com/intrstats3 (or directly Youtube: youtube.com/watch?v=K_Emygh7axo ). The Excel worksheet is posted online at Tinyurl.com/SPATIALSSM  (or directly Dataverse).

The Research Question we start with is simply:

“Do states with more residents in poverty have longer/shorter life expectancies? By how much?”

First, the spoiler: naïve or a-spatial analyses will overestimate the effect, almost always (depending on the extent of spatial ‘excess similarity’ of values in both variables, between neighboring states)[i]. A visual 'proof' is below: neighboring states 'push up/down' their neighbors' values, one variable at a time.




* To run a spatial lag regression in Excel, one needs two distinct pieces of data: (1) the 2 variables for the US states; (2) the ‘shape file’ of the US states, i.e. the geographic information systems (GIS) set of files encoding the location of, and boundaries between, the states.

*** The steps involved in this are:

1. Obtain a 49x49 matrix data file marking which state neighbors which other state

* Find a ‘shape file’ for the US states online: e.g. Census – States level (cb_2018_us_state_20m.zip)

 

   * (To go from the 51-state file to the contiguous 49, use QGIS [ii].) Unzip it into a folder, and then open it in GeoDa (free); in Tools \ Weights Manager \ Create, select an ID variable (e.g. GEOID, or better the 2-letter state abbreviation one), and choose Contiguity Weight \ Queen Contiguity: what gets saved is a *.gal file, in essence a text file: open it with e.g. Notepad to see its structure; for CT e.g., it’s 2 lines: the state name and the total number of its ‘queen contiguity’ neighbors, then on the next line the names of the neighbors

CT 3
NY MA RI

* This is the data we’ll process in Excel to turn into a 49 x 49 matrix, filled with 0’s except the spots where the column state is a neighbor of the row state (plus, we scale the non-zero numbers in each row to add up to 1: so for CT’s 3 neighbors, each cell gets a .33): all this is shown in the Excel file Poverty_lifeexp_matrix_reg.xlsx in successive worksheets: queen_49Orig, HowTo, ProcessStandardize, STANDimport49b49. This last one will be used to turn an OLS regression into a spatial lag regression: that’s all!

2. Generate the spatial lag variables

* Multiply each variable, a column of 49 rows (a vertical vector), by the standardized weight matrix (in worksheet Reg_Lag): the formula is simply MMULT(B2:AX50,P2:P50); the result is the spatial lag derivative of the initial Life Expectancy variable found in P2:P50!
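Steps 1 and 2 together fit in a few lines outside Excel too; here is a Python sketch using a toy 4-state neighbor list (only CT’s neighbors are the real ones from the *.gal example above; the other lists and the life-expectancy values are made up for illustration):

```python
import numpy as np

# Toy neighbor lists in the *.gal spirit; CT's neighbors are the real ones
# from the example above, the rest (and the numbers below) are made up
states = ["CT", "NY", "MA", "RI"]
neighbors = {"CT": ["NY", "MA", "RI"],
             "NY": ["CT", "MA"],
             "MA": ["CT", "NY", "RI"],
             "RI": ["CT", "MA"]}

idx = {s: i for i, s in enumerate(states)}
W = np.zeros((4, 4))
for s, nbrs in neighbors.items():
    for nb in nbrs:
        W[idx[s], idx[nb]] = 1.0

# Row-standardize so each row adds up to 1 (CT's 3 neighbors each get 1/3)
W = W / W.sum(axis=1, keepdims=True)

# The spatial lag is the weight matrix times the variable (Excel's MMULT)
life_exp = np.array([80.7, 80.5, 80.4, 79.9])
lag = W @ life_exp   # lag[0] is the mean of CT's neighbors' values
```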

3. Run the spatial lag regression

* Use the mean-centered data in columns B, C, and D to run a multiple regression ‘by hand’ in Excel; it merely means implementing the formula for the beta/regression coefficients found in Greene, p. 23, eq. 3-10 (in steps, however: the formula entered in one chunk did not run!).

β[(p+1)×1] = ( X′[(p+1)×N] · X[N×(p+1)] )⁻¹ · X′[(p+1)×N] · y[N×1]

for p predictors (here p = 2), N = 49 states, X are the predictors, y is the outcome. The “+1” addition is because one would need a ‘vector of 1s’ added in the matrix of predictors, this is added in the Excel.
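As a cross-check of the formula, a minimal Python/numpy sketch (random illustrative data, not the states file) implementing Greene’s eq. 3-10, intercept column included:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 49, 2                      # 49 states, 2 predictors, as in the post
X = rng.normal(size=(N, p))       # random illustrative predictors
y = X @ np.array([-0.3, 0.1]) + rng.normal(size=N)

# Add the 'vector of 1s' for the intercept: X becomes N x (p+1)
X1 = np.column_stack([np.ones(N), X])

# Greene's eq. 3-10: beta = (X'X)^{-1} X'y
beta = np.linalg.inv(X1.T @ X1) @ X1.T @ y

# Cross-check with a standard least-squares routine
beta_lstsq = np.linalg.lstsq(X1, y, rcond=None)[0]
```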

* What we see is that the naïve β = -0.44, while the proper spatial β = -0.31.  

Conclusion:

States with 10% points more residents in poverty have a lower life expectancy at birth by 3.7 months; naïve analyses would yield instead an inflated (biased up) 5.3 months value.

* Now anyone can run a spatial regression without much fuss; working in Stata or R this can be done quite quickly, but what’s happening behind the scenes would be lost: we unpacked it here.

Some more details:

A. Keeping track of the matching by state is essential: many options exist for this; best in this instance is to use the 2-letter state abbreviation, and keep checking whether one messes up the order or not at each step: copy and paste alongside the columns to check. Alternatively, Excel can also do ‘matching’, see e.g. WWW.

          * For larger files, like the ~3,080 US counties, or the ~65,000 US census tracts, this ‘by-hand’ process becomes a little cumbersome (Excel could still do it…), so other, automated options are recommended: Stata’s sp module is a simple and instructive one: see Chuck Huber’s ‘how to’ blog posting. See also Di Liu’s post.

B. Checking the results can be done in GeoDa straight away: see Luc Anselin’s Guide (a PDF here)

* There are two ways to check this in GeoDa:  B.a. Run a Classic Regression, then a Spatial Lag (with Weight File defined); B.b. Create a spatial lag variable using the Calculator \ Spatial lag option.

  

C. Accounting for the spatial ‘auto’-correlation is much like accounting for prior time values – which is where the true meaning of ‘auto’ comes from: prior values of the same variable are the main ‘driver’ of current values; one can easily add a prior-time (= time lag) outcome as a co-predictor too, along with the spatial lag co-predictor.

*******Additional resources****************************

Some books to refer to when needing stats reviewing/reminding

* Kenny, D. A. (1987). Statistics for the social and behavioral sciences: Little, Brown Boston.

* Greene, W. H. (2002). Econometric Analysis.

*Reference cited**

Cameron, A., & Trivedi, P. (2009). Microeconometrics Using Stata. College Station, TX: Stata Press.

Footnotes:


[i] This is commonly called ‘nonindependence’ or, less intuitively, ‘auto’-correlation, even though the concept applies to 1 variable only: % poverty exhibits this, and separately life expectancy exhibits it too; its extent is given (commonly) by Moran’s I, which is ‘kind of’ a correlation, meaning theoretically ranging from -1 to +1. At least two features however make it pretty different: (1). Its ‘null’ (no non-independence…) value is not 0, but -1/(N-1); (2). The ‘what correlates with what’ is less visible; economists call it more properly “correlated observations”, Cov(yi, yj) ≠ 0, see (Cameron & Trivedi, 2009), p. 81.

[ii] Handling 'shape files’ to delete unwanted regions, and for ‘joining’ and other operations, can be best done in QGIS; this is another task, see e.g. WWW.

Sunday, December 22, 2024

Intro to Statistics only in Excel

 

This is an explanatory note for the ‘How to’ training videos tinyurl.com/intrstats1 (or directly Youtube1 ) & tinyurl.com/intrstats2  (or Youtube2 ).

I provide details to assist in answering some research questions (RQs), using simulated data, with several basic statistical tests: the chi-square test (then McNemar) and the t-test, for ‘independent’ and ‘dependent’ samples. The Excel worksheet is posted online at Tinyurl.com/101statsexcel (or Osf.Io). The RQs are motivated by a study on weight loss, whose data is also posted online at Dataverse (Coman, 2024), and center around body mass index (weight), Hemoglobin A1c (blood glucose), gender, and time. I asked: RQ.1: Are there more overweight males than females? (& RQ.1.b: Do males and females differ in body mass index?); RQ.2: Does the BMI level change?; RQ.3: Is the level of HgA1c predicted by BMI?

These RQs invite directly analyses best equipped to answer them. [i]

All these tests merely compare differences against some standard reference level of similarity (no difference):

1. Do cases (persons/patients) differ in their value on 1 variable only, say BMI? They may, or may not: if all had the same BMI, there would be nothing to explain. If ½ of the sample has somewhat similar high BMI values, and the other ½ somewhat similar low BMI values, the differences are mainly between the low and high ‘clusters’ (we may have 2 classes of folks, and within-class differences are rather small-ish compared to the between-class ones).

1.a. These questions beat around a causal bush, to be honest: differences in BMI are of interest mostly because of the obesity epidemic in several countries, so what we truly want to know is not just ‘what explains differences in male BMI’, but what determines John’s BMI and Jake’s BMI, so that we can tell John to exercise 30 min/day and tell Jake to exercise 45 min/day (whatever comes out of the analyses), if they want to drop their BMI by some 5 kg/m2 (the unit for BMI).

1.b. Eventually, this ‘what drives differences’ knowledge is needed for another practical (and causal) inquiry: How much average weight loss would prevent say half of those who (are not yet now, but) would become diabetic in a year, to actually become diabetic? 

2. Are ‘these folks’ different from ‘those folks’ (diabetic vs. ‘normal’) in terms of something else, like weight (BMI)?

This “2 variable” question can take on different ‘shapes’ depending on how we ‘carve out’ each variable: from a ‘both continuous’ first step, we can look at a graph like below, where each diamond is a person, and split it into 2 halves, either vertically, or horizontally, or into 4 quadrants, using some ‘arbitrary’ lines (in our case set at the sample means).


2.a. If we ignore where diamonds sit in each quadrant, and just compare the 4 ‘groups’ of folks, we fall back on a 2-categorical variables RQ setup: this is handled in the Excel we work through in the Youtube-Training-1 in the “Are there more overweight males than females?” section.[ii]

*** Note that a 2x1 table of counts (of the 2 combinations (0,.), and (1,.) of normal/over-weight) in which one instead enters the means of the other variable, HgA1c here, turns the data into a format ripe for a comparison of means line of questioning: a t-test of independent samples would fit here like a glove (the one-way Anova test will yield identical results)!

*** Also note that, if we add a 3rd variable in a 2x2 table (like normal/over-weight and normal/diabetic), say blood pressure, in the form of the mean of each cross-group, one ends up with a two-way Anova structure in which there are 2 ‘main effects’ on blood pressure [iii].

2.b. The 2 continuous variables shown in the scatter plot invite questions of ‘going hand-in-hand’: are most of the folks situated in the Low&Low (0,0) and High&High (1,1) quadrants, with only a few in the other 2? Then we have a positive relation; if we push this mental exercise to placing ALL the diamonds on a straight line (at a 45-degree angle), the 2 variables become identical [iv].

*** We show how to run a simple linear regression analysis using Excel’s ‘powers’, but also how to run a multiple regression, using Excel’s matrix multiplication (and Greene’s formulas, p. 23, eq. 3-10).

Some cold showers:

A. The statistical tests themselves are related, and each ‘falls back’ on another under some limiting constraints [v]; they also rest on specific assumptions, which can/cannot be relaxed handily (e.g. equality of variances in t-tests); a better way to ‘open up the black box’ of such mathematical straitjackets is to model all ‘parts’ flexibly, e.g. in multiple-group structural equation models (SEM), as in this article (Coman et al., 2014).

B. Using mathematical formulas to derive specific estimates (e.g. the standard error of the mean difference) can only take us so far: statistics is not as exact as arithmetic/algebra[vi].

*Additional resources**

Some books to refer to when needing stats reviewing/reminding

* Devore 2016 Probability and Statistics for Engineering and the Sciences

* Kenny, D. A. (1987). Statistics for the social and behavioral sciences: Little, Brown Boston.

* Hernán MA, Robins JM (2019). Causal Inference. Boca Raton: Chapman & Hall/CRC. (SAS , Stata R, Python)

* Greene, W. H. (2002). Econometric Analysis.

* Barreto, H., & Howland, F. (2005). Introductory Econometrics: Using Monte Carlo Simulation with Microsoft Excel

 *References cited**

Coman, E. (2024). Data and appendix for: “Restructuring basic statistical curricula: mixing older analytic methods with modern software tools in psychological research”. Retrieved from: https://doi.org/10.7910/DVN/QDXM7U

Coman, E. N., Iordache, E., Dierker, L., Fifield, J., Schensul, J. J., Suggs, S., & Barbour, R. (2014). Statistical power of alternative structural models for comparative effectiveness research: advantages of modeling unreliability https://pubmed.ncbi.nlm.nih.gov/26640421/. Journal of Modern Applied Statistical Methods, 13(1), 71-90.

Stevens, J. (2009). Applied multivariate statistics for the social sciences: Lawrence Erlbaum.

Footnotes:

[i] Note that one puts the cart before the horse when “dichotomizing a continuous variable then using statistical tests for a categorical variable”! One in fact either asks the question in a continuous framework (Does BMI differ between biological genders?) OR in a categorical framework (Are there more/fewer overweight persons among males vs. females?). It is the RQ that should trigger transforming a variable, not the search for a convenient analytic model. The ‘what does overweight mean?’ additional research question is buried when one gallantly splits a continuous variable around some convenient value, like the sample mean: for some specific variables, like HgA1c, this becomes essential: what HgA1c qualifies a patient as ‘diabetic’? (i.e. “When does diabetes ‘come into existence’?”)

[ii] Note that we used ‘biological gender’ here, where we could have used ‘diabetic vs. not’, just to give more weight to this ‘categorical’ variable meaning: biological gender itself however can be conceptualized as a continuous measure, and it has been, in cases where the gender assignment is questioned (like this tennis example, or the recent Olympics boxing controversy, see ‘unspecified gender eligibility tests’).

[iii] More generally, Anova is a special case of the log linear model where the cell frequencies are replaced by the cell means of a third variable (see (Stevens, 2009), ch.14 Categorical Data Analysis: The Log Linear Model).

[iv] This is the end point of the problem called multi-collinearity: we use two variables in statistical models but, unbeknownst to us, they are correlated 1.0, i.e. one is a linear combination of the other, so we don’t have 2 variables, but 1!

[v] * The t and z tests are equivalent for samples n > 30; t uses the sample variance, z needs the population variance (www);

* F = t²: if you square a t-statistic, you get an F-statistic with 1 degree of freedom in the numerator (www1 & www2).

* When the denominator degrees of freedom in an F-statistic become very large, the F-distribution approaches a chi-square distribution: chi-squared = (numerator degrees of freedom) * F (www).
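Assuming scipy is available, the last two relations are easy to verify numerically (the t value and degrees of freedom below are arbitrary examples):

```python
from scipy import stats

# F = t^2: the two-sided t p-value equals the F(1, df) p-value of t^2
t, df = 1.8634, 18                    # arbitrary example values
p_t = 2 * stats.t.sf(t, df)
p_f = stats.f.sf(t ** 2, 1, df)

# chi-squared as the large-denominator-df limit of F:
# df1 * F(df1, df2) -> chi2(df1) as df2 grows
f_crit = stats.f.ppf(0.95, 3, 10 ** 6)    # F critical value with huge df2
chi2_crit = stats.chi2.ppf(0.95, 3)       # chi2(3) critical value
```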

[vi] In math 1 ≠ 2, ever, while statistically, sometimes 1 = 2 can ‘happen’: if 1 and 2 represent the mean $cash that boys and girls in a classroom have on them, we may conclude they ‘have the same amounts of cash’, depending on the variability of the individual values (if the second mean is within 1.96 standard errors of the first). The t-test formula for 2 independent sample means is t = (mean1 - mean2) / sqrt((sd1^2/n1) + (sd2^2/n2)), where sd1 and sd2 are the 2 standard deviations; for say 10 boys and 10 girls, with sd1 = 1.2 and sd2 = 1.2, t = 1 / 0.536656315 = 1.863389981, which is smaller than the 1.96 value that corresponds to a very small chance (< .05) of observing such a difference between the sample means if the two population means were in fact equal (the ‘null’ hypothesis): we hence cannot reject the ‘null’, so 1 and 2 are statistically indistinguishable.
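The footnote’s arithmetic can be checked in a couple of lines (a Python transcription of the same formula and numbers):

```python
import math

# The footnote's numbers: means 2 vs 1 ($cash), sd = 1.2 each, 10 boys and 10 girls
mean1, mean2 = 2.0, 1.0
sd1 = sd2 = 1.2
n1 = n2 = 10

se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)   # standard error of the difference
t = (mean1 - mean2) / se
# t is about 1.8634 < 1.96, so the 'null' of equal means is not rejected
```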