Estimating Parameters using SAEM
PumasEMModels can optionally be fit using SAEM(). Here is an example PumasEMModel definition:
using Pumas, PumasUtilities, Random
covariate_saem_cov = @emmodel begin
    @random begin
        CL ~ 1 + logwt | LogNormal
        v ~ 1 | LogNormal
    end
    @covariance 2
    @covariates wt
    @pre begin
        Vc = wt * v
    end
    @dynamics Central1
    @post begin
        cp = Central / Vc
    end
    @error begin
        dv ~ ProportionalNormal(cp)
    end
end;
To fit this model, we'll simulate data using simobs.
sim_params_covariate_cov = (CL = (4.0, 0.75), v = 70.0, Ω = (Pumas.@SMatrix([0.1 0.05; 0.05 0.1]),), σ = ((0.2,),));
obstimes = 0.0:100.0;
dose = DosageRegimen(1000, addl=2, ii=24)
function choose_covariates()
    wt = (55 + 25rand())/70
    (wt = wt, logwt = log(wt))
end
pop = [Subject(id=i, events=dose, covariates = choose_covariates()) for i = 1:72];
sims = simobs(covariate_saem_cov, pop, sim_params_covariate_cov, obstimes = obstimes, ensemblealg = EnsembleSerial());
reread_df = DataFrame(sims);
pop_covariate = read_pumas(reread_df; observations=[:dv], covariates = [:logwt, :wt])
When specifying inits, it is not necessary to specify any variance parameters for the random effects or error models.
init_covariate = (CL = (2.0, 2/3), v = 50.0);
If unspecified, they will be initialized to 1.0, or to the identity matrix for covariances. SAEM's stochastic exploration phase is most effective when these variance parameters are much larger than the true variances. Thus, if the true variances are believed to be around 1.0 or larger, it is recommended to specify larger initial values manually.
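For illustration, a fully specified set of initial values might look like the sketch below. The name init_covariate_full is hypothetical, the parameter layout mirrors sim_params_covariate_cov above, and the inflated Ω and σ starting values are placeholders rather than recommendations:
init_covariate_full = (CL = (2.0, 2/3),
                       v = 50.0,
                       Ω = (Pumas.@SMatrix([4.0 0.0; 0.0 4.0]),),  # deliberately large covariance start
                       σ = ((1.0,),));                             # deliberately large proportional error start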
It is possible to pass a vector of random number generators as an argument to fit. If so, the fit will use one thread per RNG. By specifying the seeds of each RNG, SAEM() can be made fully reproducible:
julia> rngv = [MersenneTwister(1941964947i + 1) for i ∈ 1:Threads.nthreads()];
julia> @time fit_covariate_cov1 = fit(covariate_saem_cov, pop_covariate, init_covariate, Pumas.SAEM(), ensemblealg = EnsembleThreads(), rng=rngv)
14.411832 seconds (40.41 M allocations: 2.576 GiB, 3.87% gc time)
FittedPumasEMModel
-------------------
Estimate
-------------------
CL₁ 4.0393
CL₂ 0.53895
v 69.407
Ω₁,₁ 0.099891
Ω₂,₁ 0.053913
Ω₂,₂ 0.090414
σ 0.20073
-------------------
julia> rngv = [MersenneTwister(1941964947i + 1) for i ∈ 1:Threads.nthreads()];
julia> @time fit_covariate_cov2 = fit(covariate_saem_cov, pop_covariate, init_covariate, Pumas.SAEM(), ensemblealg = EnsembleThreads(), rng=rngv)
0.713260 seconds (2.03 M allocations: 146.718 MiB)
FittedPumasEMModel
-------------------
Estimate
-------------------
CL₁ 4.0393
CL₂ 0.53895
v 69.407
Ω₁,₁ 0.099891
Ω₂,₁ 0.053913
Ω₂,₂ 0.090414
σ 0.20073
-------------------
julia> coef(fit_covariate_cov1) == coef(fit_covariate_cov2) # true
true
One can also specify the number of iterations for each of the three phases (rapid exploration, convergence, smoothing):
julia> @time fit_covariate_cov = fit(covariate_saem_cov, pop_covariate, init_covariate, Pumas.SAEM(iters=(1000,500,500)))
12.610038 seconds (15.42 M allocations: 1.063 GiB, 1.90% gc time)
FittedPumasEMModel
-------------------
Estimate
-------------------
CL₁ 4.0392
CL₂ 0.53812
v 69.398
Ω₁,₁ 0.099749
Ω₂,₁ 0.053603
Ω₂,₂ 0.09004
σ 0.20076
-------------------
The results of a fit can be analyzed in the usual way:
julia> infer_cov = infer(fit_covariate_cov)
[ Info: Calculating: variance-covariance matrix.
Progress: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| Time: 0:01:02
[ Info: Done.
Asymptotic inference results using sandwich estimator
Fitted PumasEMModel
------------------------------------------------------------------
Estimate SE 95.0% C.I.
------------------------------------------------------------------
CL_base 4.0392 0.14757 [ 3.75 ; 4.3284 ]
CL_logwt 0.53812 0.36439 [-0.17606; 1.2523 ]
v_base 69.398 2.4757 [64.546 ; 74.25 ]
ω_1₁,₁ 0.31583 0.025094 [ 0.26665; 0.36501]
ω_1₂,₁ 0.16972 0.033839 [ 0.1034 ; 0.23605]
ω_1₂,₂ 0.24746 0.021764 [ 0.2048 ; 0.29011]
σ_0 0.20076 0.0016548 [ 0.19752; 0.20401]
------------------------------------------------------------------
julia> coeftable(infer_cov)
7×5 DataFrame
Row │ parameter estimate se ci_lower ci_upper
│ String Float64 Float64 Float64 Float64
─────┼────────────────────────────────────────────────────────
1 │ CL_base 4.03921 0.147567 3.74998 4.32843
2 │ CL_logwt 0.538125 0.364387 -0.17606 1.25231
3 │ v_base 69.3981 2.47567 64.5459 74.2503
4 │ ω_1₁,₁ 0.315831 0.0250943 0.266647 0.365015
5 │ ω_1₂,₁ 0.169721 0.0338395 0.103397 0.236045
6 │ ω_1₂,₂ 0.247456 0.0217635 0.2048 0.290111
7 │ σ_0 0.200764 0.00165484 0.197521 0.204008
julia> stderror(infer_cov)
(CL_base = 0.1475673618627648, CL_logwt = 0.36438662518030024, v_base = 2.475667050258552, ω_1 = [0.025094291231830657 0.0; 0.0338394747181779 0.021763520157798536], σ_0 = 0.0016548384801504602)
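Since coeftable returns an ordinary DataFrame, it can be post-processed with standard DataFrames operations. As a small sketch (the filter below, which keeps rows whose 95% confidence interval excludes zero, and the name nonzero_params are purely illustrative and not part of the Pumas API):
# keep parameters whose 95% CI does not contain zero
nonzero_params = filter(row -> row.ci_lower > 0 || row.ci_upper < 0, coeftable(infer_cov))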
julia> inspect_cov = inspect(fit_covariate_cov)
[ Info: Calculating predictions.
[ Info: Calculating weighted residuals.
[ Info: Calculating empirical bayes.
[ Info: Done.
FittedPumasModelInspection
Likehood approximations used for weighted residuals : Pumas.FOCE()
julia> inspect_cov_df = DataFrame(inspect_cov)
7488×21 DataFrame
Row │ id time evid dv amt cmt rate duration ss ii route logwt wt tad dv_pred dv_ipred dv_wres dv_iwres ⋯
│ String Float64 Int64? Float64? Float64? Int64? Float64? Float64? Int64? Float64? NCA.Route? Float64? Float64? Float64? Float64? Float64? Float64? Float64? ⋯
──────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
1 │ 1 0.0 0 27.3982 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 0.0 14.9102 24.2467 1.30945 0.64741 ⋯
2 │ 1 0.0 1 missing 1000.0 1 0.0 0.0 0 0.0 NullRoute -0.0341517 0.966425 0.0 14.9102 24.2467 1.30945 0.64741
3 │ 1 1.0 0 23.1543 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 1.0 14.0542 22.1085 0.800136 0.23563
4 │ 1 2.0 0 18.0066 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 2.0 13.2472 20.1588 -0.0567508 -0.53176
5 │ 1 3.0 0 15.285 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 3.0 12.4867 18.381 -0.445813 -0.83896 ⋯
6 │ 1 4.0 0 21.852 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 4.0 11.7697 16.76 1.83175 1.51329
7 │ 1 5.0 0 12.2457 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 5.0 11.094 15.282 -0.739151 -0.98964
8 │ 1 6.0 0 15.0068 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 6.0 10.457 13.9343 0.572195 0.38337
9 │ 1 7.0 0 12.1011 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 7.0 9.85664 12.7055 -0.103936 -0.23693 ⋯
10 │ 1 8.0 0 11.0134 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 8.0 9.29073 11.585 -0.163157 -0.24578
11 │ 1 9.0 0 10.4917 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 9.0 8.75731 10.5633 0.00350273 -0.03378
12 │ 1 10.0 0 10.8511 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 10.0 8.25451 9.63179 0.627155 0.63055
13 │ 1 11.0 0 10.3327 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 11.0 7.78058 8.78238 0.83949 0.87927 ⋯
14 │ 1 12.0 0 5.88849 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 12.0 7.33386 8.00788 -1.3905 -1.31828
15 │ 1 13.0 0 7.01808 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 13.0 6.91279 7.30169 -0.294469 -0.19346
16 │ 1 14.0 0 9.50649 missing missing missing missing 0 missing NullRoute -0.0341517 0.966425 14.0 6.51589 6.65777 2.00481 2.13125
⋮ │ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱
7474 │ 72 86.0 0 1.68917 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 38.0 2.06606 1.10113 2.51913 2.66003 ⋯
7475 │ 72 87.0 0 1.06284 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 39.0 1.94939 1.02231 0.0620061 0.19747
7476 │ 72 88.0 0 0.928543 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 40.0 1.8393 0.949143 -0.238215 -0.10810
7477 │ 72 89.0 0 0.750478 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 41.0 1.73543 0.881209 -0.8638 -0.73894
7478 │ 72 90.0 0 0.937663 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 42.0 1.63742 0.818137 0.607976 0.72769 ⋯
7479 │ 72 91.0 0 0.859094 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 43.0 1.54495 0.759579 0.537858 0.65257
7480 │ 72 92.0 0 0.763293 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 44.0 1.4577 0.705213 0.300373 0.41022
7481 │ 72 93.0 0 0.710599 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 45.0 1.37538 0.654738 0.319837 0.42497
7482 │ 72 94.0 0 0.632001 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 46.0 1.29771 0.607875 0.0971175 0.19768 ⋯
7483 │ 72 95.0 0 0.441978 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 47.0 1.22443 0.564367 -1.17634 -1.08017
7484 │ 72 96.0 0 0.447987 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 48.0 1.15528 0.523973 -0.814265 -0.72233
7485 │ 72 97.0 0 0.522996 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 49.0 1.09004 0.48647 0.28614 0.37399
7486 │ 72 98.0 0 0.396368 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 50.0 1.02848 0.451651 -0.693618 -0.60968 ⋯
7487 │ 72 99.0 0 0.378225 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 51.0 0.970397 0.419325 -0.568386 -0.48820
7488 │ 72 100.0 0 0.463738 missing missing missing missing 0 missing NullRoute 0.00272001 1.00272 52.0 0.915596 0.389312 0.87564 0.95223
4 columns and 7457 rows omitted
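The inspection DataFrame can likewise be summarized with ordinary DataFrames tooling. The following is a rough sketch, assuming only the columns printed above; the names obs_df and iwres_summary and the chosen statistics are illustrative:
using DataFrames, Statistics

# keep observation rows (dv is missing on dosing rows)
obs_df = dropmissing(inspect_cov_df, :dv)

# per-subject mean and spread of the individual weighted residuals
iwres_summary = combine(groupby(obs_df, :id),
                        :dv_iwres => (x -> mean(skipmissing(x))) => :iwres_mean,
                        :dv_iwres => (x -> std(skipmissing(x))) => :iwres_std)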