I had many questions about the mechanics of structural modeling, particularly because, as a physicist, I am always trying to figure out first what is 'given' and what we need to obtain or estimate.

I recently read in Les Hayduk's 1996 book (pp. 15-16, e.g.) that Duncan proposed a very simple and elegant procedure for obtaining model-implied variances and covariances from the structural model parameters, one that is probably even easier to follow and apply than Wright's tracing rules (see a great modernized tutorial on Phillip Wood's web pages, covering both standardized and unstandardized parts).

That simple rule lets you, for instance, "estimate the beta coefficient" in a simple regression by hand, like this:

Start with y = b*x + e, then multiply both sides by x: x*y = b*x*x + x*e. Now take expectations (a simple operation in fact; Duncan explains it: roughly speaking, just sum across the entire sample and divide by the sample size):

E(xy) = b*E(xx) + E(xe)

If we assume the variables are centered and that the covariance between predictor and error is zero (a common assumption), E(xe) drops out and we arrive at the "quick estimate" of b: b = Cov(x,y) / Var(x), a well-known formula.
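The "take expectations" step really is just averaging across the sample, so the formula can be checked numerically. Here is a minimal sketch in plain Python; the true slope (0.7), the error scale, and the sample size are all made-up illustration values, not anything from Duncan or Hayduk:

```python
import random

# Simulate y = b*x + e with a known slope, then recover b
# via the moment formula b = Cov(x, y) / Var(x).
random.seed(42)
n = 100_000
b_true = 0.7  # illustrative value

x = [random.gauss(0.0, 1.0) for _ in range(n)]
e = [random.gauss(0.0, 0.5) for _ in range(n)]  # error, independent of x
y = [b_true * xi + ei for xi, ei in zip(x, e)]

# "Taking expectations" = summing across the sample and dividing by n.
def mean(v):
    return sum(v) / len(v)

mx, my = mean(x), mean(y)
cov_xy = mean([(xi - mx) * (yi - my) for xi, yi in zip(x, y)])
var_x = mean([(xi - mx) ** 2 for xi in x])

b_hat = cov_xy / var_x  # should land close to b_true
```

With a large sample, b_hat sits within sampling noise of 0.7, exactly as the expectation argument predicts.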

Anyway, Duncan showed that for a rather simple (and 'saturated') model like the one above, doing this multiply-then-take-expectations step repeatedly lets one obtain the structural coefficients from the variances and covariances (the model-implied ones; but one can also go the other way around, from parameters to implied moments). The figure above shows what-depends-on-what in this simple model; neat, isn't it?
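Going the other way around is even simpler: given the structural parameters, the same expectation algebra yields the model-implied moments in closed form. For y = b*x + e with Cov(x,e) = 0 and centered variables, multiplying the equation by x gives Cov(x,y) = b*Var(x), and multiplying it by y (i.e., by itself) gives Var(y) = b^2*Var(x) + Var(e). A minimal sketch, with illustrative parameter values of my own choosing:

```python
def implied_moments(b, var_x, var_e):
    """Model-implied variances/covariances for y = b*x + e,
    assuming centered variables and Cov(x, e) = 0."""
    cov_xy = b * var_x             # from E(xy) = b*E(xx) + E(xe)
    var_y = b**2 * var_x + var_e   # from E(yy) = b^2*E(xx) + E(ee)
    return {"Var(x)": var_x, "Cov(x,y)": cov_xy, "Var(y)": var_y}

moments = implied_moments(b=0.7, var_x=1.0, var_e=0.25)

# And back again: recover the structural coefficient from the
# implied moments, closing the loop Duncan describes.
b_recovered = moments["Cov(x,y)"] / moments["Var(x)"]
```

For richer models the same game is played equation by equation, which is exactly the repetition Duncan has in mind.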

References:

Duncan, O. D. (1975). Introduction to structural equation models. New York: Academic Press.

Hayduk, L. A. (1996). LISREL issues, debates, and strategies. Baltimore: Johns Hopkins University Press.