LISREL 10.3 now available on SSI Live™

SSI is pleased to announce the immediate availability of new versions of LISREL as a software subscription on SSI Live™, our brand-new licensing, delivery, and support system.

With your renewable SSI Live subscription, you are entitled to many more benefits than under previous licensing models.

After paying for a renewable Standard subscription, you are entitled to two (2) concurrent installs (previously offered single-user LISREL licenses allowed only one install), access to all upgrades and updates, technical support, and discounts on subscriptions for other VPG or SSI programs, as long as your subscription is active. Additional installs can be added easily at any time at substantially lower cost. Unlike perpetually installed and activated software, you may move licenses or activations from one machine to another from your SSI account, which may be particularly convenient given the increased remote-work demands placed on all of us. Educator (or workshop instructor) access to fully functioning student licenses is available at no additional cost if the instructor maintains an active LISREL Standard subscription. Even more affordable LISREL Basic subscriptions are also available.

Multiple group analyses using a single data file

In previous versions of LISREL, the user was required to create a separate data file for each group. Suppose that the groups to be analyzed consist of data collected in eight countries; eight datasets then had to be created in order to fit a multiple-group structural equation model. A new feature implemented in LISREL 10 allows researchers to use a single dataset that contains a group variable.

Models for grouped-time survival data

LISREL 10 implements a generalization of an ordinal random-effects regression model to handle correlated grouped-time survival data. This model accommodates multivariate normally distributed random effects and, additionally, allows a general form for the model covariates.

Assuming a proportional or partial-proportional hazards or odds model, a maximum marginal likelihood solution is implemented using multi-dimensional quadrature to numerically integrate over the distribution of the random effects. The reference guide “Survival Models for grouped data.pdf” contains examples and references and is accessible via the online Help menu.
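
To make the estimation approach concrete, the following minimal Python sketch evaluates a marginal log-likelihood of this general kind for a random-intercept, grouped-time proportional hazards model, integrating over a normal random effect with Gauss-Hermite quadrature. It is an illustration only, not LISREL code or its implementation; the complementary log-log link, the single random intercept, and the data layout are assumptions made for the example.

    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite nodes/weights

    def marginal_loglik(params, clusters, n_intervals, n_quad=15):
        """clusters: list of (X, interval, event) per higher-level unit, where X is the
        covariate matrix, interval the 1-based grouped failure/censoring interval, and
        event the 0/1 indicator for each subject in the cluster."""
        alpha = params[:n_intervals]            # interval-specific baseline parameters
        beta = params[n_intervals:-1]           # covariate effects
        sigma = np.exp(params[-1])              # SD of the random effect (log-parameterized)
        nodes, weights = hermegauss(n_quad)
        weights = weights / np.sqrt(2 * np.pi)  # weights for an expectation over N(0, 1)

        total = 0.0
        for X, interval, event in clusters:
            eta = X @ beta                                # fixed part of the linear predictor
            like_at_nodes = np.empty(n_quad)
            for k, z in enumerate(nodes):
                u = sigma * z                             # random-effect value at this node
                # discrete-time hazards for every subject and interval (clog-log link)
                haz = 1.0 - np.exp(-np.exp(alpha[None, :] + (eta + u)[:, None]))
                surv = np.cumprod(1.0 - haz, axis=1)      # survival through each interval
                idx = interval - 1
                rows = np.arange(len(idx))
                s_prev = np.ones(len(idx))                # survival up to the start of the interval
                s_prev[idx > 0] = surv[rows[idx > 0], idx[idx > 0] - 1]
                h_t = haz[rows, idx]
                contrib = np.where(event == 1, s_prev * h_t, s_prev * (1.0 - h_t))
                like_at_nodes[k] = np.prod(contrib)       # conditional likelihood of the cluster
            total += np.log(like_at_nodes @ weights)      # integrate over the random effect
        return total

Maximizing such a function with a general-purpose optimizer yields maximum marginal likelihood estimates; LISREL's multi-dimensional quadrature generalizes the one-dimensional rule used here.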

Models for ordinal outcomes and the proportional odds versus non-proportional odds assumption

In LISREL 10, it is possible to fit both proportional and non-proportional odds models and to test the proportional odds assumption using a chi-square difference test. The reference guide “Models for proportional and non-proportional odds.pdf” contains examples and references and is accessible via the online Help menu.
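
As a small illustration of such a difference test (in Python rather than LISREL syntax, with purely hypothetical deviance values), the likelihood-ratio statistic is the difference between the two deviances, referred to a chi-square distribution whose degrees of freedom equal the difference in the number of estimated parameters:

    from scipy.stats import chi2

    deviance_prop, k_prop = 1482.6, 8            # hypothetical proportional odds model
    deviance_nonprop, k_nonprop = 1471.9, 14     # hypothetical non-proportional odds model

    lr_stat = deviance_prop - deviance_nonprop   # chi-square difference statistic
    df = k_nonprop - k_prop                      # extra parameters in the larger model
    p_value = chi2.sf(lr_stat, df)
    print(f"LR chi-square = {lr_stat:.2f} on {df} df, p = {p_value:.4f}")
    # A small p-value indicates the proportional odds assumption is not supported.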

Combining LISREL and PRELIS functionality

With LISREL 10, if raw data is available in a LISREL data system file or in a text file, one can read the data into LISREL and formulate the model using either SIMPLIS syntax or LISREL syntax. It is no longer necessary to estimate an asymptotic covariance matrix with PRELIS and read this into LISREL. The estimation of the asymptotic covariance matrix and the model is now done in LISREL.

Stat/Transfer Version 15

The data import/export feature has been upgraded from Stat/Transfer Version 14 to the most recently released Version 15. Among others, Stat/Transfer supports importing data from the most current SAS, SPSS, STATA, MINITAB, MATLAB and R software.

Bug Fixes

All user-reported problems associated with previous versions of LISREL have been fixed.

Announcing the Release of AUXAL 4

A new version of the AUXAL program is immediately available for purchase.

AUXAL 4 uses the same empirical Bayes estimation procedure as AUXAL 3, but has new features and improvements that extend the functionality of the program and make it easier to use. The accuracy of the estimated population covariance matrix of the model parameters has been improved. Archival data from the Fels longitudinal study continue to be the source of the default prior distribution means and covariance matrices for the several models, but the more accurately estimated covariance matrices now provide the priors of AUXAL 4. When the user wishes to replace the priors in the current job with priors based on other longitudinal data, the steps involved have been simplified (see the MEAN and COVARIANCE commands below).

The evaluated heights of the structural average curve that appear at the end of the summary output listing now include standard errors of height at any given age. They facilitate statistical analysis of group comparisons of structural average curves. The standard errors are computed from the estimated population covariance matrix of the parameters and the derivatives of the structural average curve with respect to the parameters at any given age point (see Rao, C. R. (2002). Linear Statistical Inference and Its Applications, 2nd edition, paperback, New York: Wiley, pp. 386-389). If the cases in the current analysis are drawn from the same population as the assumed prior distribution, the default covariance matrix of the prior is used in these computations. If the size of the current sample is large enough to justify large-sample assumptions, the population covariance matrix estimated in the current job may be used in place of the default (see the TECHNICAL command below).
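
The computation described above is the familiar delta method. A brief Python sketch (with a placeholder curve and hypothetical parameter values, not the AUXAL growth models themselves) shows the standard error at a single age:

    import numpy as np

    def curve_height(theta, age):
        # placeholder growth curve; in AUXAL this would be the BTT, JPA2 or Jenss-Bayley model
        a, b, c = theta
        return a + b * age + c * np.log1p(age)

    def delta_method_se(theta, sigma, age, eps=1e-6):
        # numerical gradient of the curve with respect to the parameters at this age
        g = np.array([(curve_height(theta + eps * e, age) - curve_height(theta - eps * e, age))
                      / (2 * eps) for e in np.eye(len(theta))])
        return np.sqrt(g @ sigma @ g)          # sqrt(g' * Sigma * g)

    theta_hat = np.array([75.0, 5.5, 10.0])    # hypothetical parameter means
    sigma_hat = np.diag([4.0, 0.25, 1.0])      # hypothetical parameter covariance matrix
    print(delta_method_se(theta_hat, sigma_hat, age=10.0))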

Facilities have been added for cross-sectional analysis of mixed longitudinal data. They include standard errors for average height in given age intervals, and a provision for evaluating so-called plausible values of height at any given age that permit conventional multivariate analysis of group differences in mixed longitudinal data. The standard errors are exact when each case is represented by only one observed height in each interval. Otherwise they are conservative: additional observations within the interval add information, but it is difficult to evaluate the resulting reduction in the standard error because the observations are correlated.

Changed commands, options, and keywords

MEAN and COVARIANCE commands

     New option:  IMPORT

These commands allow the user to replace the prior mean and covariance matrix with others more relevant to the current analysis. If new versions of the population mean and covariance matrix have been estimated by the program and saved using the MEAN keyword of the SAVE command, the appearance of the IMPORT option in these commands automatically extracts the mean and/or covariance matrix from an existing file created by the SAVE command. In the absence of the IMPORT option, the mean or covariance matrix will be read from existing files containing the parameter values in the standard order described in the AUXAL manual.

ERROR command

Setting larger error standard deviations (SD) to prevent convergence failures of the maximum posterior (MAP) estimation procedure is no longer required globally. The larger SD is now applied only to those cases that do not converge in the iterative estimation of the model parameters for the case. When this occurs, the program automatically attempts re-estimation up to five times with increasing values of the error SD. If convergence is then obtained, the last value of the SD appears in the case processing list. These adjustments typically reduce the number of failed convergences. (Failures of convergence are also fewer when autocorrelation of the residuals across ages is neglected by invoking the TECHNICAL option UNCOR.)

The ERROR command is no longer needed, but is still operative.
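
Schematically, the retry strategy described above amounts to the following loop (a Python sketch with an assumed case-fitting routine, not AUXAL's actual code; the inflation factor is illustrative):

    def fit_case_with_retries(fit_case, case_data, base_sd, max_retries=5, inflate=1.5):
        """fit_case is an assumed callable returning an object with a .converged flag."""
        sd = base_sd
        for attempt in range(max_retries + 1):
            result = fit_case(case_data, error_sd=sd)   # estimate the parameters for this case
            if result.converged:
                return result, sd                       # this SD is what the case listing reports
            sd *= inflate                               # enlarge the error SD and try again
        return result, sd                               # last attempt, still not converged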

TECHNICAL command

    New option:   SAMPLE

If this option is present, the program will base the standard errors of the structural average curve at any given age on the population covariance matrix of the model parameters estimated in the current job. Otherwise, the default prior distribution is the source of the covariance matrix in the computations.

    New keyword:   PLSEVAL = t

If the EVALUATE keyword of the PROCEDURE command is present, and the evaluated heights of the cases at successive ages are saved using the EVAL keyword of the SAVE command, the evaluated heights are converted into plausible values on the assumption that the measurement error distribution at each age has mean zero and standard deviation equal to the square root of the error variance for the individual case. A random deviate from this distribution is added to each of the evaluated heights. Group differences of cross-sectional average growth can be analyzed by multivariate analysis of variance using these values as data. Their sampling variance includes the effects of sampling the cases as well as those of measurement error and equation error (see also the SAVE command).

The quantity  t  is the seed of the random number generator—any integer greater than 1 and less than 2147483647.
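
In outline, the construction of the plausible values looks like the following (a Python sketch; the array shapes, values, and seed are illustrative rather than AUXAL output):

    import numpy as np

    seed_t = 12345                                    # plays the role of PLSEVAL = t
    rng = np.random.default_rng(seed_t)

    evaluated_heights = np.array([[138.2, 144.0, 150.1],    # one row per case,
                                  [135.7, 141.3, 147.9]])   # one column per evaluation age
    error_sd = np.array([0.45, 0.62])                 # square root of each case's error variance

    # add a mean-zero measurement-error deviate to every evaluated height
    plausible = evaluated_heights + rng.normal(0.0, error_sd[:, None], evaluated_heights.shape)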

    New keyword:    ONLY = u

This keyword allows the program to compute the standard errors of a structural average curve directly from user-supplied values of the parameter means (possibly those from analyses by other investigators). The parameters must be input by use of the MEAN command in the standard order for the model in question. The quantity u is the number of cases in the sample from which the putative mean was obtained. The program must then be executed in a dummy job of at least a few cases. The plot of the curve follows as usual. The default prior covariance matrix is used in these calculations.

SAVE command

    Keyword: COVARIANCE

    This keyword was not implemented in AUXAL3.  It is now operative.

    Option:  HGTROW

If case heights or plausible value heights are evaluated, this option lists the output in rows of space-delimited values; otherwise, the values will be listed in a single column.

Adjusting priors for different populations

If measurements of a large sample of N cases from a suitable longitudinal growth study are available, the population mean and covariance matrix of the model parameters can be estimated from the MAP estimate and posterior covariance matrix for each case.

Because the population mean and covariance matrix are required in the prior distribution for estimating the parameter means and covariances, a “bootstrap” procedure is required in their use. Initially, one starts with the existing AUXAL priors for the BTT, JPA2 or Jenss-Bayley models. These priors are based on USA data. Provided the number of well-spaced data points per case exceeds, say, 20, a pass through the cases with this provisional prior will give a good approximation to the population quantities. The revised prior can be saved to an external file via the SAVE command

>SAVE means='priors.par';

The file priors.par contains the estimated population mean and population covariance matrix of the model parameters.

A second or third pass, each time substituting the resulting provisional prior, will yield a sufficiently accurate estimate of the population mean and covariance matrix for practical use in MAP estimation of the model parameters. The prior obtained from the previous run is read into the program via the commands

>MEAN MALE FILE = 'male.mea';

>COVARIANCE MALE FILE = 'male.cov';

See examples exampl10.axl and exampl11.axl.
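
As a hedged sketch of the underlying combination of case-level results (one common empirical Bayes estimator; AUXAL's exact computations may differ), the population quantities can be assembled from the MAP estimates and posterior covariance matrices as follows:

    import numpy as np

    def population_prior(map_estimates, post_covs):
        """map_estimates: N x p array of per-case MAP estimates;
        post_covs: N x p x p array of per-case posterior covariance matrices."""
        mu_hat = map_estimates.mean(axis=0)              # estimated population parameter mean
        between = np.cov(map_estimates, rowvar=False)    # spread of the MAP estimates
        within = post_covs.mean(axis=0)                  # average posterior uncertainty
        return mu_hat, between + within                  # estimated population covariance matrix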

Announcing the Release of BILOG-MG 4

A new version of the BILOG-MG program is immediately available.

New features

  1. Executables for running the Classical Statistics, Calibration and Scoring phases have been replaced with 32-bit and 64-bit dynamic link libraries. The GUI determines whether the computer uses a 64-bit or 32-bit processor.
  2. The number of examinees in a dataset has been increased from one million to ten million.
  3. The maximum length of all file names has been changed from 128 to 256 characters. This “feature” is only useful if the DLL can handle a syntax file that includes a directory path. Currently the UI breaks the full path into <drive>/<dir> and <filename>, changes the current working directory to <drive>/<dir>, and calls BLM.exe with <filename> (together with other parameters).
  4. The phases can be run in batch mode, as described below.

Any of the BILOG-MG modules (Phase 1 = Classical Statistics, Phase 2 = Calibration, or Phase 3 = Scoring) can be run in batch mode by using a .bat file with the following script.

  "c:\program files\BILOG-MG\x64\BLM64” <module> <syntax file> [CHAR=#] [NUM=#]

  where

  • module is one of 1, 2, or 3
  • syntax file denotes the name of the blm file (without the .blm extension).

Example:

 "c:\program files\BILOG-MG\x64\BLM64” 1 exampl06

Optionally, CHAR=# and/or NUM=# can be used to specify the character and numeric workspace respectively, e.g.

"c:\program files\BILOG-MG\x64\BLM64” 1 exampl06 CHAR=2000 NUM=8000

Bug fixes

When multiple forms and multiple groups were specified in a syntax file, no SCORE file was produced for the following command

>SCORE Method=2, NQPT=20, IDIST=0;

It was discovered that there was no computer code present to write the scores for this scenario. This problem was fixed in May 2020.

Announcing the release of SuperMix

A new version of the SuperMix program is immediately available.

Use of Stat/Transfer Version 15

The data import feature has been upgraded from Stat/Transfer Version 14 to the most recently released Version 15 to ensure import/export compatibility with software packages such as SPSS, SAS and STATA. Among others, Stat/Transfer supports importing data from the most current SAS, SPSS, STATA, MINITAB, MATLAB and R software.

Dynamic link libraries

Executables have been replaced with 32-bit and 64-bit dynamic link libraries. The GUI determines whether the computer uses a 64-bit or 32-bit processor.

Announcing the release of PARSCALE 5

A new version of the PARSCALE program is immediately available.

New features

  1. The number of examinees in a dataset has been increased from one million to ten million.
  2. The maximum length of all file names has been changed from 128 to 256 characters. This “feature” is only useful if the DLL can handle a syntax file that includes a directory path. Currently the UI breaks the full path into <drive>/<dir> and <filename>, changes the current working directory to <drive>/<dir>, and calls the executable with <filename> (together with other parameters).
  3. The phases can be run in batch mode.

Introducing New Features in HLM 8

The HLM 8 program has a number of new statistical features.

Estimating HLM from incomplete data

In HLM 8, the ability to estimate an HLM from incomplete data was added. This is a completely automated approach that generates and analyzes multiply imputed data sets from incomplete data. The model is fully multivariate and enables the analyst to strengthen imputation through auxiliary variables. The user specifies the HLM; the program automatically searches the data to discover which variables have missing values and then estimates a multivariate hierarchical linear model (the “imputation model”) in which all variables having missing values are regressed on all variables having complete data. The program then uses the resulting parameter estimates to generate M imputed data sets, each of which is then analyzed in turn. Results are combined using the “Rubin rules”.
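
The “Rubin rules” referred to above combine the M per-imputation results by averaging the point estimates and adding the between-imputation variance (inflated by 1 + 1/M) to the average within-imputation variance. A short Python sketch with hypothetical values:

    import numpy as np

    def rubin_combine(estimates, variances):
        """estimates, variances: length-M arrays for a single coefficient."""
        m = len(estimates)
        q_bar = np.mean(estimates)                 # combined point estimate
        u_bar = np.mean(variances)                 # within-imputation variance
        b = np.var(estimates, ddof=1)              # between-imputation variance
        t = u_bar + (1 + 1 / m) * b                # total variance
        return q_bar, np.sqrt(t)                   # estimate and combined standard error

    est = np.array([0.41, 0.38, 0.44, 0.40, 0.39])     # coefficient from each imputed analysis
    var = np.array([0.012, 0.011, 0.013, 0.012, 0.012])
    print(rubin_combine(est, var))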

Flexible Combinations of Fixed Intercepts and Random Coefficients

Another new feature of HLM 8 is that flexible combinations of Fixed Intercepts and Random Coefficients (FIRC) are now included in HLM2, HLM3, HLM4, HCM2 and HCM3.

A concern that can arise in multilevel causal studies is that random effects may be correlated with treatment assignment. For example, suppose that treatments are assigned non-randomly to students who are nested within schools. Estimating a two-level model with random school intercepts will generate bias if the random intercepts are correlated with treatment assignment. The conventional strategy is to specify a fixed-effects model for schools. However, this approach assumes homogeneous treatment effects, possibly leading to biased estimates of the average treatment effect, incorrect standard errors, and inappropriate interpretation. HLM 8 allows the analyst to combine fixed intercepts with random coefficients in models that address these problems and facilitates a richer summary, including an estimate of the variation of treatment effects and empirical Bayes estimates of unit-specific treatment effects. This approach was proposed in Bloom, Raudenbush, Weiss and Porter (2017).
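
As a hedged illustration of the FIRC specification (using Python's statsmodels rather than HLM 8 itself; the variable names y, treat and school and the data file are assumed for the example):

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("students.csv")      # hypothetical student-level file with y, treat, school

    firc = smf.mixedlm(
        "y ~ C(school) + treat",          # fixed school intercepts plus the average treatment effect
        data=df,
        groups="school",
        re_formula="0 + treat",           # random treatment-effect slope, no random intercept
    ).fit()
    print(firc.summary())                 # random-slope variance summarizes variation of treatment effects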