Interactive Web Applications

The PumasUtilities package provides several interactive web applications to aid in your analysis. These include exploration of parameter estimates for models, fitted model comparison, and model listings for your workspace.

Note

The apps discussed in this section only support continuous data models. Support for discrete data models will be made available soon.

Exploration of Estimates

The explore_estimates function can be used to visually aid in selecting the initial estimates to be used when fitting a model. For a given Pumas model, population, and set of initial parameters, we can start an app that provides control over all the parameter values alongside an updating plot of the simulated observations.

ee_app = explore_estimates(model, population, parameters)

Once the app has finished loading, it will display an address at which you can view and interact with the application.

When the address is opened in the browser, you will be presented with an application made up of several sections.

The first section contains step-by-step guidance on using the application. It is collapsed by default and can be expanded by clicking on it.

explore_estimates_usage

Below the usage details is a summary table of the current parameter names and values. Whenever you adjust a parameter, its value will be updated automatically in the table. Like the usage section, this one is also collapsed by default and can be revealed by clicking on it.

explore_estimates_table

After these two sections you will find the parameter controls and a plot of the simulated observations based on the current parameter values. The controls consist of a dropdown for selecting the parameter to adjust (named Parameter), a coarse-grained input box that increases or decreases the parameter value in steps of 5, and, to its right, an Adjustment slider that can be used to make fine-grained changes to the parameter value. Whenever a parameter value changes, the plot is updated.

If a model includes multiple observations, the observation to plot can be selected using the Observation dropdown. Random effects, which are turned off by default, can be switched on using the Random Effects toggle.

explore_estimates_pop_controls

Once a suitable set of values has been discovered, it can be assigned to a variable in the REPL using the coef function.

my_values = coef(ee_app)

The simulated population can be inspected with

simpop = DataFrame(ee_app)

Comparison of Parameter Estimates

In addition to exploring the estimates of a population, we can also perform an interactive comparison of different parameter sets for a single model and subject.

ee_subject_app = explore_estimates(model, subject, parameters)

This application is very similar to the population-based explorer discussed above, aside from the addition of two more controls: one selecting the number of parameter sets to explore (Total Parameter Sets) and one selecting the parameter set currently being adjusted (Parameter Set).

explore_estimates_subject_controls

The parameter table, which can be expanded by clicking on it, contains a column for each parameter set rather than just a single one as in the population-based explorer.

explore_estimates_subject_table

PumasApps.explore_estimates — Function
explore_estimates(model, subject, parameters; seed)

Create a new app to explore parameter estimates for a given model and subject along with the given initial parameters. coef(app) can be used to return the vector of estimated parameters from the app.

Examples

julia> app_1 = explore_estimates(model, population[1], parameters);

julia> est_params = coef(app_1);

parameters can be either a single NamedTuple of the initial parameters or a vector of NamedTuples. When a vector is provided, the field names of each NamedTuple must match.
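For illustration, two such parameter sets might look like the following sketch. The parameter names tvcl and tvv are placeholders, not part of the API; use the names defined in your own model.

```julia
# Two hypothetical parameter sets. Both NamedTuples share the
# exact same field names (tvcl, tvv), as required when passing
# a vector of parameter sets.
params_1 = (; tvcl = 1.0, tvv = 10.0)
params_2 = (; tvcl = 1.5, tvv = 8.0)

app = explore_estimates(model, population[2], [params_1, params_2])
```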

julia> app_2 = explore_estimates(model, population[2], [params_1, params_2]);

Manual Seeding

When creating a new app, the initial seed for the RNG can be set by passing the seed keyword, a positive integer. This also applies to the population-level explore_estimates app.

julia> app_3 = explore_estimates(model, population[1], parameters; seed = 100);

This allows for recreating previous app states should the user want to begin from the same starting point again.

explore_estimates(model, population, params; seed)

Create a new app to explore parameter estimates for a given model and population along with the given initial parameters. coef(app) can be used to return the estimated parameters from the app. DataFrame(app) can be used to retrieve the simobs data.

Examples

julia> app = explore_estimates(model, population, parameters);

julia> est_param = coef(app);

julia> sim_data = DataFrame(app);

When a Population is provided to the app instead of a single Subject, only a single NamedTuple may be provided for the parameters.

Stopping Applications

If you are done with an application you may want to close it. This is not essential unless you are running hundreds of separate apps during a single session. To manually close an application that you have assigned to a variable name in the REPL, use:

close(ee_app)

where ee_app is the name of our example application from the previous section.

Tip

Applications will also stop running automatically if they have not been assigned to a variable and all references to them have been lost.

Listing Models

Within a single Julia session you may have defined and evaluated multiple different models. To gain an overview of the models that are available, as well as some summary data associated with each, you can use the list_models function.

lm_app = list_models()

When opening this app in your browser you will be presented with a table of all the fitted models currently available within the Main module of your Julia session, i.e. those defined in your REPL or evaluated into the REPL from a VSCode editor window. In addition to listing models from the Main module, you can also pass in a different Module, or a Dict, Vector, or NamedTuple containing any number of fitted models.

list_models

By clicking on the checkbox next to a particular model within the Models table the table in the Data section will be populated with the model's population data.

list_models_data

The Metrics section allows for comparison of the model metrics of different models side-by-side. Use the left-most column of checkboxes in the Models table to select two different models, which will then populate the Metrics table.

list_models_compare

Once you have several models selected, they can also be gathered for further analysis in the REPL using the selected_models function.

selected_models(lm_app)

which will return a NamedTuple of the currently selected models from your lm_app application. The result can be passed directly into the application that we will be discussing in the next section.

PumasApps.list_models — Function
list_models(mod = Main)
list_models(container)

Create a new app to explore the currently defined Pumas models in the given module mod, or in a container, which can be a Vector, Dict, or NamedTuple of fitted models.

julia> app_1 = list_models();

julia> app_2 = list_models(MyModule);

julia> app_3 = list_models([fit_1, fit_2]);

julia> app_4 = list_models((; fit_1, fit_2));

This launches a new app that collects and summarises all the results from fit, inspect, and infer within the REPL's Main module by default, or the given Module object. inspect and infer results are linked back to their respective fit results within the app to highlight their relationships.

Fitted Model Comparisons

As discussed in the Plotting section of the manual, there are a large number of plots that can help visualize fitted models. To aid with this, the evaluate_diagnostics function gathers all the relevant plots and tables into a single interactive application that can be used to perform side-by-side comparisons of plots and tables for different fitted models.

Let us use the result of selected_models(lm_app) from the previous section, which introduced the list_models app. We can then call

evaluate_diagnostics(selected_models(lm_app))

to generate an application that displays all the diagnostics related to the given fitted models. It provides a short usage guide at the top left, four dropdowns at the top right for selecting particular diagnostics to display, and a display area at the bottom for showing the chosen diagnostics.

evaluate_diagnostics_latex

Tip

The "latexified" source is generated by the latexify function. Please see that section of the manual for a more detailed explanation of this function.

evaluate_diagnostics_tables

Below each table a download link is presented, which can be used to download a CSV file containing the generated table's data.

evaluate_diagnostics_plots

When hovering your mouse over a particular plot, several icons will appear at the top right. These can be used to perform a number of actions, such as saving the plot to a file or switching the hover mode used for mouse-overs. The legends displayed next to plots can be used to show or hide specific lines should you want to focus on a particular element of the plot. Click-and-drag can be used to zoom in on sections of a plot, and double-clicking returns to the original zoom level.

Supported Input Syntax

evaluate_diagnostics can be used to compare any number of fitted models and associated results with each other. For the simplest case of a single model, we can call

fitted_model = fit(...)
eval_diag_app = evaluate_diagnostics(fitted_model)

which will create a single column application for exploration and evaluation of the diagnostics derived from the fitted_model.

Multiple models

For model comparisons we must pass several fitted models to evaluate_diagnostics.

fitted_model_1 = fit(...)
fitted_model_2 = fit(...)
eval_diag_app = evaluate_diagnostics([fitted_model_1, fitted_model_2])

which uses [] syntax to specify each model we would like to include in our comparison.

Including inspect and infer results

If we have associated results such as the output of inspect and infer for each of the fitted models then we can pass those in as well by using () tuple syntax to group the values:

fitted_model_1 = fit(...)
inspect_1 = inspect(fitted_model_1)
infer_1 = infer(fitted_model_1)

fitted_model_2 = fit(...)
inspect_2 = inspect(fitted_model_2)
infer_2 = infer(fitted_model_2)

eval_diag_app = evaluate_diagnostics(
    [
        (fitted_model_1, inspect_1, infer_1),
        (fitted_model_2, inspect_2, infer_2),
    ]
)

Info

If no inspect results are passed to evaluate_diagnostics, they will be computed automatically unless the keyword inspect = false is passed. To compute infer results automatically as well, you must set the keyword infer = true manually; infer results are never computed automatically.
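As a sketch of how these keywords combine, the call below skips the automatic inspect computation while requesting automatic infer results (fitted_model_1 and fitted_model_2 stand in for the fits from the example above):

```julia
eval_diag_app = evaluate_diagnostics(
    [fitted_model_1, fitted_model_2];
    inspect = false,  # do not compute inspect results automatically
    infer = true,     # do compute infer results automatically
)
```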

Include vpc results

Any VPC results can also be passed to evaluate_diagnostics by adding them to the () groupings.

fitted_model_1 = fit(...)
inspect_1 = inspect(fitted_model_1)
vpc_1_a = vpc(fitted_model_1)

fitted_model_2 = fit(...)
inspect_2 = inspect(fitted_model_2)
vpc_2_a = vpc(fitted_model_2)
vpc_2_b = vpc(fitted_model_2, stratify_by=[:DOSE])

eval_diag_app = evaluate_diagnostics(
    [
        (fitted_model_1, inspect_1, vpc_1_a),
        (fitted_model_2, inspect_2, vpc_2_a, vpc_2_b),
    ]
)

As shown above, you do not need to provide the same number of VPCs for each fitted model, or indeed any at all. VPC results are never computed automatically and must always be provided manually by the user.

In the prior examples we have only included two fitted models (and associated results), but you may pass any number of models within the [] syntax. Each model will appear in the dropdown menus in the first section of the application for you to select from for comparisons.

Naming models

So far the generated applications have numbered your fitted models from 1 to N, where N is the total number of models. To provide more memorable names that ease the interpretation of the diagnostics, we can use Julia's NamedTuple syntax, e.g. (; a = ..., b = ..., ...). The last example can be adapted as follows to provide better names for the models:

fitted_model_1 = fit(...)
inspect_1 = inspect(fitted_model_1)
vpc_1_a = vpc(fitted_model_1)

fitted_model_2 = fit(...)
inspect_2 = inspect(fitted_model_2)
vpc_2_a = vpc(fitted_model_2)
vpc_2_b = vpc(fitted_model_2, stratify_by=[:DOSE])

eval_diag_app = evaluate_diagnostics(
    (;
        fit_1 = (fitted_model_1, inspect_1, vpc_1_a),
        fit_2 = (fitted_model_2, inspect_2, vpc_2_a, vpc_2_b),
    )
)

If you are only passing a single fit for each of your models, and not including additional results, then the shorthand

fitted_model_1 = fit(...)
fitted_model_2 = fit(...)

eval_diag_app = evaluate_diagnostics(
    (; fitted_model_1, fitted_model_2)
)

can be used to simplify the call to evaluate_diagnostics somewhat.

PumasApps.evaluate_diagnostics — Function
evaluate_diagnostics(objects; categorical, inspect, infer)

Create a new app to evaluate the diagnostics associated with objects.

Keywords

Supported keyword arguments for evaluate_diagnostics are:

  • categorical, a Vector{Symbol} that gets passed to plotting functions to determine whether to plot covariates as categorical or continuous.

  • inspect, defaults to true and sets whether to run automatic Pumas.inspect on fitted models that do not already provide a corresponding inspection result.

  • infer, defaults to false and sets whether to run automatic Pumas.infer on fitted models that do not already provide a corresponding inference result.

Examples

Basic Apps

A single fitted model app:

julia> f1 = fit(model, pop, params, Pumas.FOCEI());

julia> app = evaluate_diagnostics(f1);

To open the app in your web browser, click on the link that gets printed when you show app in the REPL; for example, the localhost URL below points to the app instance.

julia> app
App{PumasApps.EvaluateDiagnostics.EvaluateDiagnosticsType}(url=http://0.0.0.0:44795)

Comparison Apps

Comparison app for several fitted models:

julia> f2 = fit(model_2, pop_2, params_2, Pumas.FOCEI());

julia> app_2 = evaluate_diagnostics([f1, f2]);

VPCs

VPCs can be included in the model comparison by passing them in alongside the fit results as follows:

julia> f2 = fit(model_2, pop_2, params_2, Pumas.FOCEI());

julia> app_2_with_vpcs = evaluate_diagnostics([(f1, vpc_1, vpc_2), (f2, vpc_3)]);

As shown above, any number of VPCs may be passed in with each model using Julia's tuple syntax, i.e. (...,). Note that they cannot be passed in by themselves; they must always appear alongside a fit, inspect, or infer result.

Named Models

The above app will automatically name the models 1 and 2. To give them descriptive names pass a NamedTuple of fitted models instead:

julia> first_fit = fit(model, pop, param, Pumas.FOCEI());

julia> second_fit = fit(model_2, pop_2, param_2, Pumas.FOCEI());

julia> app_3 = evaluate_diagnostics((; first_fit, second_fit));

Categorical Data

Categorical covariates can be provided via the categorical keyword, which takes a vector of Symbol column names that should be treated as categorical data during evaluation.

julia> app_4 = evaluate_diagnostics((; first_fit, second_fit); categorical = [:race, :gender]);

Stopping Apps

Apps will automatically stop once the variable they are assigned to in the REPL is reassigned. The simplest way to force an app to be closed is to assign app = nothing. When Julia runs its garbage collection, the app will be closed and "finished" will be printed in the REPL.
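As a minimal sketch (assuming app holds a running application), dropping the last reference and manually triggering a garbage-collection pass closes the app without waiting for Julia's next automatic collection:

```julia
app = nothing  # drop the last reference to the running app
GC.gc()        # force a collection; the app is then closed
```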