Inference
In Pumas, inference is performed after model fitting to obtain uncertainty estimates on the model's population-level parameters (fixed effects), which can then be used to derive standard errors and confidence intervals. This is accomplished by calling the infer function on the object returned by fit:
Pumas.infer — Function

infer(fpm::FittedPumasModel; level=0.95, rethrow_error::Bool=false, sandwich_estimator::Bool=true) -> FittedPumasModelInference

Compute the vcov matrix and return a struct used for inference based on the fitted model fpm. The confidence intervals are calculated as the (1-level)/2 and (1+level)/2 quantiles of the estimated parameters. sandwich_estimator is a boolean that switches the sandwich estimator on or off. If rethrow_error is false (the default value), no error will be thrown if the variance-covariance matrix estimator fails. If it is true, an error will be thrown if the estimator fails.
infer(fpm::FittedPumasModel, bts::Pumas.Bootstrap; level=0.95)

Perform bootstrapping by resampling the Subjects from the Population stored in fpm. The confidence intervals are calculated as the (1-level)/2 and (1+level)/2 quantiles of the estimated parameters. The number of samples used in the bootstrapping is bts.samples. bts.ensemblealg specifies the ensemblealg used here. If ensemblealg is EnsembleSerial(), a single thread will be used. If it is EnsembleThreads() (the default value), multiple threads will be used. See the documentation of Bootstrap for more details on constructing an instance of Bootstrap.
infer(fpm::FittedPumasModel, alg::MarginalMCMC; level=0.95)

Infer the uncertainty of the parameters of the fitted model fpm by sampling from the marginal likelihood with the MCMC algorithm alg. The confidence intervals are calculated as the (1-level)/2 and (1+level)/2 quantiles of the sampled parameters.
infer(fpm::FittedPumasModel, sir::SIR; level=0.95, ensemblealg = EnsembleThreads()) -> FittedPumasModelInference

Perform sampling importance re-sampling for the Subjects from the Population stored in fpm. The confidence intervals are calculated as the (1-level)/2 and (1+level)/2 quantiles of the estimated parameters. ensemblealg can be EnsembleThreads() (the default value) to use multi-threading or EnsembleSerial() to use a single thread.
The function can be specialized to behave differently by setting the second (optional) argument type to different values. Details of each specialization are given below.
The infer function accepts the following arguments:
- fpm: a fitted Pumas model, typically returned by fit.
- type: the second (optional) argument determines the inferential approach used. Details are given below.
- level: for confidence intervals, the confidence level to use. Typically 0.95, corresponding to 95% confidence intervals (see the example after this list).
- ...: further arguments, the meaning of which depends on the type of inference. Details are given below.
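For example, a 90% confidence interval can be requested by changing level. A minimal sketch, assuming fpm is a fitted model returned by fit:
my_infer_90 = infer(fpm; level = 0.90)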
The function returns an object of type FittedPumasModelInference. Further functions can be called on this object, such as stderror to obtain the standard errors.
We now describe in more detail the different inferential approaches that can be used by specifying the type argument. The default is to use the asymptotic variance-covariance matrix obtained from the "robust" (also called "sandwich") estimator [8]. Other options include Bootstrap, SIR, and MarginalMCMC, each described in its own section below.
Inference from asymptotic variance-covariance matrices
This inferential approach is the default when no type is specified in infer:
infer(fpm)
The additional optional arguments are:
- rethrow_error: whether to throw an error when it is not possible to compute the variance-covariance matrix (e.g., because the observed information matrix is not numerically positive definite). The default is to return a failed FittedPumasModelInference object but not throw an error.
- sandwich_estimator: whether to use the "robust" variance-covariance estimator [8]. The default is true. If false, the inverse of the observed Fisher information (i.e., the negative inverse Hessian of the log-likelihood) is used.
Note that the variance-covariance matrix can also be obtained by calling the vcov function directly on a fitted Pumas model (returned by fit).
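For example, assuming fpm is a fitted model returned by fit, the matrix itself (rather than the full inference summary) can be obtained with:
vcov(fpm)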
Here is an example of obtaining confidence intervals by setting sandwich_estimator to false in order to not use the robust estimator:
my_infer = infer(fpm; sandwich_estimator = false)
The FittedPumasModelInference returned by infer has a show method that prints the estimates, standard errors, and confidence intervals in the REPL. To extract this information into a DataFrame (e.g. for markdown reporting), the coeftable function can be used:
StatsAPI.coeftable — Function

coeftable(fpm::FittedPumasModel) -> DataFrame

Construct a DataFrame of parameter names and estimates from fpm.

coeftable(vfpm::Vector{<:FittedPumasModel}) -> DataFrame

Construct a DataFrame of parameter names, estimates, and their standard deviations from a vector of fitted single-subject models vfpm.

coeftable(pmi::FittedPumasModelInference; level = pmi.level) -> DataFrame

Construct a DataFrame of parameter names, estimates, standard errors, and confidence intervals with confidence level level. The default value of level is the one that was passed to infer.
The returned DataFrame also has relative standard errors (RSEs) included.
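As a minimal sketch, assuming my_infer is the inference object created above, the table can be extracted for reporting, optionally with a different confidence level:
df = coeftable(my_infer)
df90 = coeftable(my_infer; level = 0.90)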
Additionally, you can use the correlation_diagnostic function to print lists of parameter pairs with high, medium, or low correlations:
Pumas.correlation_diagnostic — Function

correlation_diagnostic(
    pmi::FittedPumasModelInference;
    high_cor_threshold = 0.7,
    medium_cor_threshold = high_cor_threshold / 2,
)

A function which accepts the output of the infer function as its argument and prints lists of parameter pairs with high, medium, or low correlations. Parameter pairs with a high correlation are those with a correlation higher than high_cor_threshold. Parameter pairs with a low correlation are those with a correlation lower than medium_cor_threshold. The remaining parameter pairs have a medium correlation. Assume fpm is the output of fit. The following is an example of how to use correlation_diagnostic:
fpmi = infer(fpm)
correlation_diagnostic(fpmi)
Inference using the robust variance-covariance estimator is consistent under heteroscedasticity of the residuals, but relies on asymptotic normality of the maximum likelihood estimator. While this is often a reasonable approximation for the parameters associated with the mean of the response variable, it is often a poor approximation for the variance parameters. Furthermore, confidence intervals derived using this approach do not necessarily respect bounds on the parameters, and it is not uncommon for a parameter that is by definition strictly positive (such as a clearance, volume, or variance) to have an associated confidence interval that contains negative values. For that reason, inference based on other approaches, such as Bootstrap and SIR, may be preferable in some situations.
Bootstrap-based inference
Bootstrap-based inference is obtained by calling the Bootstrap constructor and passing the returned object of type Bootstrap as the type argument of infer. The resulting function signature is:
infer(fpm, Bootstrap())
There are no additional keyword arguments to the infer function, but Bootstrap has a number of optional arguments that can be used to control the bootstrap procedure. It has the following signature:
Pumas.Bootstrap — Type

Bootstrap(; rng=Random.default_rng, samples=200, stratify_by=nothing, ensemblealg=EnsembleThreads())

Constructs a Bootstrap object that allows the user to use bootstrapping with the supplied settings in infer function calls. See ?infer and https://docs.pumas.ai/stable/analysis/inference/ for more information about obtaining inference results.

The keyword arguments are as follows:
- rng: used to select a pseudo-random number generator to be used for the random resamples of the datasets.
- samples: controls the number of resampled Populations.
- stratify_by: controls the stratification used in the re-sampling scheme. Should be set to a Symbol with the name of a covariate; it is possible to stratify by a covariate with a finite number of possible values (e.g. :study for a covariate named study).
- ensemblealg: controls the parallelization across fit calls. Each fit is fitted using the ensemblealg that was used in the original fit.
The type of bootstrap procedure used is nonparametric bootstrap, also called paired bootstrap [9]. Bootstrap datasets are obtained by resampling individuals from the original dataset with replacement (all records associated with the sampled individual are included), after stratifying by any covariates specified. The stratification ensures that the number of individuals in each stratum remains the same in each resampled dataset, which can help with the stability of the estimator. For example, if an analysis consists of two studies, a small study with rich sampling and a large study with sparse sampling, stratifying on the study will ensure that each resampled dataset has the same number of subjects with rich sampling. The fitting algorithm is applied to each resampled dataset to obtain $N$ estimates for each parameter, where $N$ is the number of replicates specified in samples. This collection is treated as a random draw from the sampling distribution of the estimator for the parameters, and a nonparametric 95% confidence interval is obtained from the 2.5th and 97.5th quantiles of this sample.
Inference based on the nonparametric bootstrap approach is often considered more reliable than using the asymptotic variance-covariance matrix, but is computationally intensive. Unlike the latter approach, bounds on the parameters are respected (e.g. the confidence interval for a strictly positive parameter cannot contain negative values).
Here's an example with all options specified:
using Random
my_bootstrap = infer(
    fpm,
    Bootstrap(;
        rng = MersenneTwister(1234),
        samples = 500,
        stratify_by = [:sex, :trt],
        ensemblealg = EnsembleThreads(),
    ),
)
If any of the bootstrap fits requires closer inspection or post-processing, it is possible to get a filtered vector of only the successful fits with the successful_fits function. The included fits are those that did not error. In this case, the appropriate function call would be successful_fits(my_bootstrap). Note that, for the inference based on the bootstrap procedure to be valid, most fits should have been successful in the sense that an error was not thrown. If several of the fits failed, it is important to try to get a better understanding of why that happened before using the results. The coeftable function will work on the returned object, and the confidence intervals, relative standard errors, and standard errors will be bootstrap-based.
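As an illustrative sketch, assuming my_bootstrap is the bootstrap inference object created above, the successful refits can be counted and inspected individually:
fits_ok = successful_fits(my_bootstrap)
length(fits_ok)   # number of bootstrap fits that completed without error
coef(fits_ok[1])  # population parameter estimates from the first successful refit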
The bootstrap requires a FittedPumasModel; FittedPumasEMModel support is coming in a future version of Pumas.
Sampling importance resampling (SIR)
As an alternative to the nonparametric bootstrap procedure (which can be computationally intensive or unfeasible with small sample sizes), [10] suggests the sampling importance resampling (SIR) procedure for estimating the uncertainty. This can be obtained by calling the SIR constructor and passing the returned object of type SIR as the type argument (second optional positional argument) of infer:
Pumas.SIR — Type

SIR(; rng=Random.default_rng, samples, resamples)

Constructs an SIR object that allows the user to use sampling importance re-sampling (SIR) with the supplied settings in infer function calls. See ?infer and https://docs.pumas.ai/stable/analysis/inference/ for more information about obtaining inference results.

The keyword arguments are as follows:
- rng: used to select a pseudo-random number generator to be used within the SIR procedure. Requires loading the Random package (using Random).
- samples: the number of model parameter samples simulated from a truncated multivariate normal distribution (proposal distribution) based on the maximum-likelihood estimates and the robust variance-covariance matrix. This constitutes the original sample (uncertainty) distribution.
- resamples: the number of model parameter samples from the original sample distribution to re-sample (by weighted re-sampling without replacement). The number of resamples must be less than that of the original sample. An original sample-to-resample ratio of at least 5:1 is suggested.

Both samples and resamples must be supplied, as there are no defaults.
Note that the proposal distribution for the SIR procedure is based on the robust variance-covariance matrix and will therefore fail whenever the computation of the robust variance-covariance matrix fails. Confidence intervals based on SIR respect bounds on the parameters (e.g. the confidence interval for a strictly positive parameter cannot contain negative values).
Here is an example of how to use SIR in Pumas:
my_sir = infer(my_fit, SIR(; samples = 200, resamples = 70))
The coeftable function will work on the returned object, and the confidence intervals, relative standard errors, and standard errors will be based on SIR.
Note that if the robust variance-covariance matrix has already been computed by a call to infer, it is possible to avoid recomputing it by calling infer again with the output of the first call to infer as the first argument and the SIR object as the second argument. For example:
my_inference = infer(fpm)
my_sir = infer(my_inference, SIR(; samples = 300, resamples = 100))
SIR requires a FittedPumasModel; FittedPumasEMModel support is coming in a future version of Pumas.
Inference based on samples from the marginal likelihood using MarginalMCMC
Instead of using the SIR sampling scheme, it is possible to use MarginalMCMC as a way to generate an inference output. It uses a marginal Hamiltonian Monte Carlo (HMC) based No-U-Turn Sampler (NUTS) algorithm where the subject-specific parameters are marginalized and only the population parameters are sampled. The generated chain is the same as would have been generated from
alg = MarginalMCMC()
(; model, data, kwargs) = fpm
param = coef(fpm)
result = fit(model, data, param, alg; kwargs.constantcoef)
but it is set up in a way where it can be used with the usual inference-oriented functions such as vcov and stderror.
Pumas.MarginalMCMC — Type

MarginalMCMC

An instance of the marginal Hamiltonian Monte Carlo (HMC) based No-U-Turn Sampler (NUTS) algorithm where the subject-specific parameters are marginalized and only the population parameters are sampled. All the options of the algorithm can be passed to the constructor of MarginalMCMC as keyword arguments. Example:

alg = MarginalMCMC(marginal_alg = LaplaceI(), nsamples = 2000, nadapts = 1000)

The main options that can be set are:
- marginal_alg: (default is LaplaceI()) the algorithm used to marginalize out the subject-specific parameters; can also be FO().
- target_accept: (default is 0.8) target acceptance ratio for the NUTS algorithm.
- nsamples: (default is 2000) number of Markov Chain Monte Carlo (MCMC) samples to generate, including adaptation samples.
- nadapts: (default is nsamples ÷ 2) number of adaptation steps in the NUTS algorithm.
- nchains: (default is 4) number of MCMC chains to sample.
- ess_per_chain: (default is infinity) target effective sample size (ESS) per chain; sampling terminates if the target is reached.
- check_every: (default is ess_per_chain ÷ 5) the number of samples after which the ESS per chain is checked.
- time_limit: (default is infinity) a time limit for sampling in seconds; sampling terminates if the time limit is reached.
- ensemblealg: (default is EnsembleThreads()) can be set to EnsembleSerial() for serial sampling, EnsembleThreads() for multi-threaded sampling, or EnsembleDistributed() for multi-processing (aka distributed parallelism) sampling.
- parallel_chains: (default is true if enough threads/workers exist) can be set to true or false; if set to true, the chains will be sampled in parallel using either multi-threading or multi-processing depending on the value of ensemblealg.
- parallel_subjects: (default is true if enough threads/workers exist) can be set to true or false; if set to true, the log probability computation will be parallelized over the subjects using either multi-threading or multi-processing depending on the value of ensemblealg.
- rng: the random number generator used.
- diffeq_options: (default is an empty named tuple (;), which uses the downstream defaults) a NamedTuple of all the differential equations solver's options.
- constantcoef: (default is an empty named tuple (;)) a NamedTuple of the parameters to be fixed during sampling. This can be used to sample from conditional posteriors, fixing some parameters to specific values, e.g. constantcoef = (σ = 0.1,) fixes the σ parameter to 0.1 and samples from the posterior of the remaining parameters conditional on σ.
Here is an example of how to use MarginalMCMC in Pumas:
my_marginalmcmc = infer(my_fit, MarginalMCMC())
The coeftable function will work on the returned object, and the confidence intervals, relative standard errors, and standard errors will be based on the MarginalMCMC samples, with a burn-in period applied.
MarginalMCMC requires a FittedPumasModel; FittedPumasEMModel support is coming in a future version of Pumas.
Calculating mean values and variance-covariance matrices from Bootstrap and SIR
With the output from an infer call using Bootstrap or SIR in hand, it is possible to get the mean values of the sampled parameters and a variance-covariance matrix based on the same samples. If the output of the infer call was named infer_output, these quantities are produced using the following code:
mean(infer_output)
vcov(infer_output)
Extracting standard errors (stderror)
The stderror function is used to extract standard errors from either a FittedPumasModel object returned by fit or a FittedPumasModelInference object returned by infer:
StatsAPI.stderror — Function

stderror(f::AbstractFittedPumasModel) -> NamedTuple

Compute the standard errors of the population parameters and return the result as a NamedTuple matching the NamedTuple of population parameters.
When calling stderror on the object returned by infer, the type of standard errors returned will depend on the options passed to infer. For instance, if nonparametric bootstrap inference was performed, then the standard errors will be bootstrap-based.
Calling stderror directly on a fitted Pumas model object will use the default arguments to infer, i.e. standard errors based on the robust variance-covariance matrix. This can be inefficient because it may result in the variance-covariance matrix being re-calculated unnecessarily. It is therefore recommended not to call stderror directly on a fitted Pumas model object, but to call infer first and store the resulting object, which can then be passed to stderror.
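A minimal sketch of the recommended workflow, assuming fpm is a fitted model returned by fit:
my_infer = infer(fpm)
se = stderror(my_infer)  # NamedTuple of standard errors keyed by parameter name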
Condition number
Many pharmacometricians use the condition number to gauge whether a model is stable or not. While we do not necessarily recommend the condition number as a good measure of stability, the number is available by calling the cond function with the inference output as input. This will calculate the condition number of the correlation matrix implied by the variance-covariance matrix. This can be changed to the condition number of the variance-covariance matrix itself by setting the keyword correlation to false.
LinearAlgebra.cond — Function

cond(pmi::FittedPumasModelInference; correlation = true)

Return the condition number of the correlation matrix (or of the variance-covariance matrix with correlation = false) stored in pmi. Throws an error if pmi is the result of a call to infer with Pumas.Bootstrap or if the variance-covariance calculation failed.
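For example, assuming my_infer is an inference object returned by infer(fpm) (and not bootstrap-based):
cond(my_infer)                       # condition number of the implied correlation matrix
cond(my_infer; correlation = false)  # condition number of the variance-covariance matrix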