Tools Reference
PumasAide provides a collection of tools that AI assistants use to perform pharmacometric analyses. You don't need to memorize these tools or call them by name—just describe what you want to accomplish, and the AI selects the appropriate tools automatically. This reference helps you understand what's possible and formulate effective requests.
For example, saying "load my PK data and create a one-compartment model with first-order absorption" triggers the AI to use multiple tools behind the scenes to accomplish your goal. Every tool call generates executable Julia code saved to the programs/ folder, so you always have a reproducible record of what was done.
Tool Categories
Tools are organized by function:
- Workflow Management: Guided analysis workflows
- Learning and Help: Tutorials and documentation search
- Data Management: Loading and transforming datasets
- Non-Compartmental Analysis: NCA parameter calculation and related analyses
- Pharmacometric Modeling: Building, fitting, and validating PK/PD models
- Simulation: Creating dosing regimens and running simulations
- Visualization & Tables: Creating publication-quality plots and summary tables
- Utilities: Code execution for debugging and advanced use
Workflow Management
analysis_workflows
Triggers guided workflows for common analyses. Rather than calling individual tools, you can request a complete workflow like "perform an NCA analysis" or "run a population PK modeling workflow." The AI follows a structured sequence of steps including data preparation, analysis execution, quality control, and visualization. Available workflows cover NCA, bioequivalence, population PK modeling, simulation, data exploration, ADAM dataset preparation, and report generation.
Learning and Help
interactive_tutorial
Launches a guided tutorial to learn PumasAide. The tool sets up sample datasets and walks you through a complete workflow of your choice—NCA analysis, bioequivalence, population PK modeling, ADAM preparation, or data exploration. Along the way, it explains how tools work, how to review and approve tool calls, and how to interpret results.
pumas_docs_search
Searches the built-in documentation for pharmacometric concepts and guidance. You can search for topics ("bioequivalence acceptance criteria") or read specific documents by ID. The documentation covers typical parameter values, study designs, regulatory guidance, model diagnostics, and troubleshooting. You can also add custom documentation to .pumas/docs/ folders in your project.
Data Management
load_dataset
Loads your data files into the workspace. Supported formats include CSV, Arrow, SAS (.xpt, .sas7bdat), Stata, and SPSS. The tool automatically detects common missing value markers like "NA", "BLQ", or blank cells. If it can't find your file, it searches nearby directories and suggests alternatives. Just say "load the concentration-time data from pkdata.csv" and the AI handles the details.
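As a sketch of the kind of loading code this produces (the file path and the exact missing-value markers below are illustrative):

```julia
using CSV, DataFrames

# Read the CSV, treating common BLQ/NA markers and blanks as missing values.
pkdata = CSV.read("data/pkdata.csv", DataFrame;
                  missingstring = ["", "NA", "BLQ"])
first(pkdata, 5)   # quick preview of the loaded table
```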
data_wrangler
Transforms your data using operations like filtering rows, selecting columns, creating derived variables, and aggregating by groups. You might ask it to "filter out negative time points and calculate dose-normalized concentrations" or "compute mean concentration by time point for each dose group." The AI translates your request into the appropriate data manipulation operations.
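For example, the two requests above might translate into DataFrames.jl operations roughly like this (the `time`, `conc`, and `dose` column names are assumptions about your dataset):

```julia
using DataFrames, Statistics

# Drop negative time points, then add a dose-normalized concentration.
df = filter(:time => t -> t >= 0, pkdata)
df.conc_dn = df.conc ./ df.dose

# Mean concentration per time point within each dose group.
profile = combine(groupby(df, [:dose, :time]),
                  :conc => mean => :mean_conc)
```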
Non-Compartmental Analysis
build_nca_population
Prepares your concentration-time data for NCA calculations. The AI figures out how to connect your data columns—subject ID, time, concentrations, dose amounts—to the NCA engine. It handles both plasma and urine data, and can incorporate BLQ flags and grouping variables like dose level or study period.
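Under the hood this maps onto Pumas' `read_nca`. A hand-written equivalent might look like the sketch below (the column names are assumptions, and keyword details can vary between Pumas versions):

```julia
using Pumas   # NCA functionality lives in the NCA submodule

ncapop = read_nca(df;
                  id           = :ID,
                  time         = :TIME,
                  observations = :CONC,    # plasma concentrations
                  amt          = :AMT,
                  route        = :ROUTE,   # "ev" or "iv" per dose
                  group        = [:DOSE])  # stratify results by dose level
```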
run_nca_analysis
Calculates NCA parameters from your data. Common parameters include exposure metrics (AUC, Cmax, Tmax), elimination metrics (half-life, clearance, volume of distribution), and dose-normalized values. Always request quality metrics like adjusted R² for the terminal slope and percent AUC extrapolated—these help identify subjects with unreliable estimates. You can also calculate partial AUCs between specific timepoints.
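A sketch of what the generated analysis code can look like (function names vary slightly between Pumas releases, so treat this as indicative rather than exact):

```julia
report = run_nca(ncapop)   # standard exposure and elimination parameters

# Individual parameters and partial AUCs are also available, e.g.:
NCA.cmax(ncapop)
NCA.auc(ncapop; interval = (0, 12))   # partial AUC from 0 to 12 h
```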
run_dose_linearity
Assesses whether exposure increases proportionally with dose. The tool uses a power model where a slope of 1 indicates perfect proportionality, plus pairwise comparisons between dose groups. Ask it to "assess dose proportionality for Cmax and AUC across my three dose levels" and it runs both analyses.
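The power model underlying this analysis is AUC = α · dose^β, which is linear on the log scale: log(AUC) = log α + β · log(dose). A minimal hand-rolled version of the slope check with GLM.jl might look like this (the `auc` and `dose` columns in `nca_results` are illustrative):

```julia
using GLM, DataFrames

# Power model on the log scale: log(AUC) = a + β·log(dose).
# Proportionality is supported when the CI for the slope β contains 1.
m = lm(@formula(log(auc) ~ log(dose)), nca_results)
confint(m)   # inspect the slope's confidence interval
```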
run_bioequivalence
Compares test and reference formulations for bioequivalence. Standard criteria require the 90% confidence interval of the geometric mean ratio to fall within 80-125%. The tool also supports reference-scaled BE for highly variable drugs and narrower limits for narrow therapeutic index drugs. Results include the geometric mean ratio, confidence intervals, within-subject variability, and pass/fail determination.
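The core calculation can be sketched as a paired analysis on the log scale. The real tool fits a proper linear model for the study design; here `logtest` and `logref` are illustrative per-subject log(AUC) or log(Cmax) vectors from a 2×2 crossover:

```julia
using Statistics, Distributions

d   = logtest .- logref              # within-subject log differences
n   = length(d)
se  = std(d) / sqrt(n)
t   = quantile(TDist(n - 1), 0.95)   # for a 90% two-sided CI
gmr = exp(mean(d))                   # geometric mean ratio
ci  = (exp(mean(d) - t * se), exp(mean(d) + t * se))
pass = 0.80 <= ci[1] && ci[2] <= 1.25   # standard 80-125% criterion
```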
Pharmacometric Modeling
build_pumas_model
Creates PK and PD models using pre-defined compartmental structures. For PK, you can build anything from simple one-compartment IV models to complex multi-compartment structures with oral absorption, transit compartments, or dual absorption pathways. The tool supports Michaelis-Menten elimination for saturable kinetics and TMDD models for biologics. For PD, options include direct effect, effect compartment, and indirect response models.
Covariate effects can be specified using power relationships (like allometric scaling), exponential, linear, or categorical forms. Just describe what you want: "build a two-compartment model with first-order absorption and body weight on clearance using allometric scaling."
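The quoted request would produce a Pumas `@model` block. A simplified sketch of what that generated code can look like is below; the parameter names and the additive error model are illustrative choices, not necessarily what the tool emits:

```julia
using Pumas

model = @model begin
    @param begin
        tvcl ∈ RealDomain(lower = 0)   # typical clearance
        tvvc ∈ RealDomain(lower = 0)   # central volume
        tvq  ∈ RealDomain(lower = 0)   # inter-compartmental clearance
        tvvp ∈ RealDomain(lower = 0)   # peripheral volume
        tvka ∈ RealDomain(lower = 0)   # absorption rate constant
        Ω    ∈ PDiagDomain(2)
        σ    ∈ RealDomain(lower = 0)
    end
    @random begin
        η ~ MvNormal(Ω)
    end
    @covariates WT
    @pre begin
        CL = tvcl * (WT / 70)^0.75 * exp(η[1])  # allometric scaling on CL
        Vc = tvvc * exp(η[2])
        Q  = tvq
        Vp = tvvp
        Ka = tvka
    end
    @dynamics Depots1Central1Periph1   # 2-cmt with first-order absorption
    @derived begin
        conc ~ @. Normal(Central / Vc, σ)
    end
end
```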
build_pumas_population
Creates a population object from your data for model fitting. If your data follows NMTRAN conventions (subject ID, time, observations, dosing events), the AI connects everything automatically. Covariates like weight and sex get mapped to model parameters. Just mention which covariates you want to include and the AI configures the rest.
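This corresponds to Pumas' `read_pumas`. For NMTRAN-style columns, the generated call might resemble the following (column names assumed):

```julia
using Pumas

pop = read_pumas(df;
                 id           = :ID,
                 time         = :TIME,
                 observations = [:DV],
                 amt          = :AMT,
                 evid         = :EVID,
                 cmt          = :CMT,
                 covariates   = [:WT, :SEX])
```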
pumas_model_parameters
Sets initial parameter values for model fitting or simulation. You can specify exact values ("set CL to 5 L/h and Vc to 50 L with 30% variability") or let the AI choose reasonable starting values ("set typical initial parameters for a one-compartment oral model"). Parameter types include typical values, between-subject variability, residual error, and covariate effects.
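In the generated code, initial values are plain named tuples. A sketch for a hypothetical one-compartment oral model (the parameter names `tvcl`, `tvvc`, and `tvka` are illustrative):

```julia
using LinearAlgebra   # for Diagonal

params = (tvcl = 5.0,                     # typical clearance, L/h
          tvvc = 50.0,                    # typical central volume, L
          tvka = 1.0,                     # absorption rate constant, 1/h
          Ω    = Diagonal([0.09, 0.09]),  # ω = 0.3, i.e. ~30% variability
          σ    = 0.5)                     # additive residual error
```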
fit_pumas_model
Fits your model to population data using maximum likelihood estimation. The main methods are FO (fast but approximate), FOCE (the gold standard for most models), and LaplaceI (better for categorical or count data). A common strategy is to start with FO for quick exploration, then switch to FOCE for final estimates. Bayesian estimation via MCMC is also available when you need full posterior distributions.
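In the generated script this is a single `fit` call; here `model`, `pop`, and `params` stand for a model, population, and initial parameter set built in earlier steps:

```julia
fpm = fit(model, pop, params, FOCE())   # or FO() / LaplaceI()

coef(fpm)    # final parameter estimates
infer(fpm)   # standard errors and confidence intervals
```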
run_vpc
Generates Visual Predictive Checks to validate your fitted model. The tool simulates many replicates of your study and compares observed data against prediction intervals. You can stratify by dose or demographics to assess model performance across subgroups, or use prediction correction to focus on structural model adequacy. Request "generate a VPC with 500 simulations stratified by dose group" to see how well your model captures the data.
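The quoted request maps onto Pumas' `vpc` roughly as below; the keyword names (`samples`, `stratify_by`) differ between Pumas versions, so treat them as assumptions and check the generated script:

```julia
vpc_res = vpc(fpm; samples = 500, stratify_by = [:DOSE])
vpc_plot(vpc_res)   # plotting helper, e.g. from PumasUtilities
```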
Simulation
create_dosage_regimen
Defines dosing schedules for simulations. You can create single doses, multiple dose regimens with specified intervals, IV infusions with controlled rates, or steady-state dosing. Describe what you need naturally: "100 mg oral dose every 12 hours for 7 days" or "30-minute IV infusion of 500 mg."
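Both examples map directly onto Pumas' `DosageRegimen` constructor; amounts here are in mg and times in hours:

```julia
using Pumas

# 100 mg orally every 12 h for 7 days (first dose plus 13 additional doses)
oral = DosageRegimen(100; ii = 12, addl = 13)

# 500 mg IV infusion over 30 min: rate = amt / duration = 500 / 0.5
infusion = DosageRegimen(500; rate = 1000)
```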
create_simulation_population
Generates virtual subjects for simulations with realistic covariate distributions. You can specify the number of subjects, assign dosing regimens, set observation times, and define how covariates are distributed (fixed values, normal, log-normal, uniform, or categorical). Create multiple subpopulations for dose-ranging studies or special population analyses.
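A hand-written sketch of 50 virtual subjects on a twice-daily regimen with log-normally distributed body weights (the covariate name and distribution parameters are illustrative):

```julia
using Pumas, Distributions

dr  = DosageRegimen(100; ii = 12, addl = 13)
pop = map(1:50) do i
    Subject(id = i, events = dr,
            covariates = (WT = rand(LogNormal(log(70), 0.2)),))
end
```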
run_simulation
Executes simulations to predict concentration-time profiles or other outcomes. You can simulate with or without residual error, specify observation times, and control reproducibility with random seeds. The results come back as DataFrames ready for plotting or further analysis.
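A typical generated call, assuming a model, population, and parameter set from earlier steps:

```julia
using Random, DataFrames

sims = simobs(model, pop, params;
              obstimes = 0:0.5:168,          # hours
              rng = MersenneTwister(1234))   # reproducible draws
df_sim = DataFrame(sims)                     # ready for plotting
```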
Visualization & Tables
build_plot
Creates publication-quality plots using a layered approach. Basic plot types include scatter plots, line plots, bands for confidence regions, box plots, violin plots, histograms, and density curves. You can add reference lines, apply statistical transformations like linear regression or LOESS smoothing, and overlay mean profiles with error bands.
Plots can be faceted into panels by dose, treatment, or other grouping variables. Axis scales can be linear, logarithmic, or square root. Colors, markers, line styles, and transparency are all customizable. Describe what you want visually: "plot concentration versus time with individual profiles in gray and the population mean in red with 90% prediction bands."
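The layered approach described above corresponds to composable plotting code. One way to express the quoted request by hand uses AlgebraOfGraphics.jl; the column names, and the choice of plotting library itself, are assumptions rather than what build_plot necessarily emits:

```julia
using AlgebraOfGraphics, CairoMakie

# Individual profiles in gray, population mean overlaid in red.
indiv = data(df) * mapping(:time, :conc; group = :id) *
        visual(Lines; color = (:gray, 0.4))
means = data(df_mean) * mapping(:time, :mean_conc) *
        visual(Lines; color = :red)
draw(indiv + means; axis = (yscale = log10,))
```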
table_summary
Generates summary tables from your data or fitted models. You can create overview tables for data quality assessment, clinical "Table 1" demographics stratified by treatment arm, concentration-time listings by subject, or model comparison tables showing AIC, BIC, and likelihood statistics. Request something like "create a demographic summary grouped by dose level" and the AI produces a formatted table.
Utilities
eval_julia_code
Runs Julia code for quick inspection or debugging. This is useful when you want to check intermediate values, test expressions, or examine data structures. The code isn't saved—it's purely for exploration during your session.
include_julia_file
Executes Julia code from files in your workspace. After editing a generated script, use this to apply your changes to the live session. You can run entire files or specific line ranges.
Generated Code
Every tool call generates executable Julia code saved to the programs/ folder, organized by analysis type. These scripts are fully reproducible—you can run them outside PumasAide, version control them with git, and review exactly what the AI did. This transparency ensures your analyses are documented and auditable.