Analysis Workflows

PumasAide provides guided workflows that help AI assistants conduct pharmacometric analyses systematically. When you request an analysis, the AI assistant reads the relevant workflow documentation and follows a structured sequence of steps covering data preparation, analysis execution, quality control, and visualization.

How Workflows Work

Workflows are documentation pages that the AI reads via the pumas_knowledge tool. Each workflow specifies:

  1. Step-by-step guidance for the analysis, including which API docs to consult
  2. Code patterns showing how to write the Julia code for each step
  3. Best practices for data preparation and quality control
  4. Common pitfalls to avoid
  5. Visualization recommendations for results

Rather than memorizing code patterns, you simply describe your analysis goal, and the AI assistant reads the appropriate workflow to guide the process.

Using Workflows

Workflows are triggered automatically when you describe an analysis:

Perform an NCA analysis on my concentration-time data

The AI reads the nca_analysis workflow via pumas_knowledge, then follows its steps: writing Julia scripts to programs/, executing them, and presenting results along the way.

You can also explicitly request a workflow:

Follow the population PK analysis workflow for my dataset

Built-in Workflows

PumasAide includes ten built-in workflows:

nca_analysis

Calculates pharmacokinetic parameters from concentration-time data using non-compartmental analysis methods.

Common use cases:

  • Single and multiple dose PK studies
  • Bioavailability and food effect studies
  • First-in-human PK characterization

Workflow steps:

  1. Load and validate concentration-time data
  2. Explore data with summary tables and concentration-time plots
  3. Build NCA population with column mappings and BLQ handling
  4. Calculate NCA parameters with quality metrics
  5. Generate summary tables and visualizations
  6. Optional: dose linearity and bioequivalence assessment
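
For orientation, here is a minimal sketch of what the generated Julia code for steps 3 and 4 might look like. The DataFrame df, the column mappings, and the llq value are assumptions for illustration; the workflow adapts these to your data:

  using Pumas

  # Step 3: build an NCA population from a long-format DataFrame `df`
  pop = read_nca(df;
      id           = :ID,
      time         = :TIME,
      observations = :CONC,
      amt          = :AMT,
      route        = :ROUTE,   # "ev" or "iv" per dose record
      llq          = 0.1)      # concentrations below this are treated as BLQ

  # Step 4: compute the standard NCA parameter set with quality metrics
  report = run_nca(pop; sigdigits = 3)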

poppk_analysis

Population pharmacokinetic model development from data preparation through final model validation.

Common use cases:

  • Developing population PK models for regulatory submissions
  • Characterizing drug disposition in target populations
  • Identifying clinically significant covariate effects

Workflow steps:

  1. Load and assess population PK data
  2. Exploratory data analysis and visualization
  3. Build Pumas population with column mappings
  4. Systematic base model development (structural → statistical → estimation)
  5. Covariate model development
  6. Final model diagnostics and validation (GOF plots, VPC)

Model building strategy:

  • Start simple (one-compartment) and add complexity based on data support
  • Use progressive estimation (FO for initial exploration, FOCE for final estimates)
  • Validate with VPC and comprehensive diagnostics
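
As a concrete illustration of steps 3 and 4 under this strategy, a one-compartment base model with a FOCE fit might be sketched as follows. The parameter values, column names, and the DataFrame df are assumptions, not a prescribed starting point:

  using Pumas, LinearAlgebra

  # One-compartment base model with between-subject variability on CL and Vc
  # and a proportional residual error model
  model = @model begin
      @param begin
          tvcl ∈ RealDomain(lower = 0.0)   # typical clearance
          tvvc ∈ RealDomain(lower = 0.0)   # typical central volume
          Ω    ∈ PDiagDomain(2)            # between-subject variability
          σ    ∈ RealDomain(lower = 0.0)   # proportional error magnitude
      end
      @random η ~ MvNormal(Ω)
      @pre begin
          CL = tvcl * exp(η[1])
          Vc = tvvc * exp(η[2])
      end
      @dynamics Central1                   # closed-form one-compartment model
      @derived begin
          cp   := @. Central / Vc
          conc ~ @. Normal(cp, abs(cp) * σ)
      end
  end

  # Step 3: build the population (column names are assumptions)
  pop = read_pumas(df; id = :ID, time = :TIME, amt = :AMT,
                   evid = :EVID, cmt = :CMT, observations = [:conc])

  # Step 4: progressive estimation, exploratory FO then FOCE for final estimates
  init = (tvcl = 1.0, tvvc = 10.0, Ω = Diagonal([0.09, 0.09]), σ = 0.1)
  fit_fo   = fit(model, pop, init, FO())
  fit_foce = fit(model, pop, coef(fit_fo), FOCE())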

simulation

Model-based simulation for dose selection, trial design, and exposure predictions.

Common use cases:

  • Dose selection for clinical trials
  • Trial design simulations
  • Exposure predictions for special populations
  • What-if scenario analyses

Workflow steps:

  1. Build or select a Pumas model
  2. Define model parameters (from fitting results or literature)
  3. Create dosing regimens
  4. Generate virtual population with covariates
  5. Run simulations
  6. Visualize concentration-time profiles and exposure metrics
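
A compact sketch of steps 3 through 5, reusing the model from the population PK sketch above; the regimen, population size, and parameter values are illustrative:

  using Pumas, LinearAlgebra

  # Step 3: hypothetical regimen, 100 dose units every 24 h for 7 total doses
  regimen = DosageRegimen(100; ii = 24, addl = 6)

  # Step 4: virtual population of 50 subjects on that regimen
  vpop = [Subject(id = i, events = regimen) for i in 1:50]

  # Step 5: simulate with fixed parameters (from a fit or the literature)
  params = (tvcl = 1.0, tvvc = 10.0, Ω = Diagonal([0.09, 0.09]), σ = 0.1)
  sims = simobs(model, vpop, params)

The simulated output can then be converted to a DataFrame for exposure summaries or passed to plotting for step 6.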

data_explore

Explores datasets through visualizations and summary tables before formal analysis. Helps identify data quality issues, patterns, and distributions.

Common use cases:

  • Initial data review before analysis
  • Quality control checks for imported data
  • Exploratory visualization to understand data structure
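
For example, an exploration script might begin with structural checks and a spaghetti plot. This is a sketch; the DataFrame df and its column names are assumptions:

  using DataFrames, CairoMakie

  # Structural checks: per-column summaries, record and subject counts
  describe(df)
  nrow(df), length(unique(df.ID))

  # Spaghetti plot of individual concentration-time profiles
  fig = Figure()
  ax = Axis(fig[1, 1]; xlabel = "Time (h)", ylabel = "Concentration")
  for sub in groupby(df, :ID)
      lines!(ax, sub.TIME, sub.CONC; color = (:steelblue, 0.4))
  end
  fig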

generate_report

Creates comprehensive analysis reports as Quarto (.qmd) files. Reports include embedded Julia code chunks that recreate workspace objects by including scripts from programs/, ensuring full reproducibility.

Common use cases:

  • Creating reports after completing NCA, PopPK, or simulation workflows
  • Documenting methods and results for regulatory submissions
  • Generating reproducible documentation for publications
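
The reproducibility pattern is that each report chunk rebuilds its objects by including the saved analysis script rather than duplicating code. A sketch, with a hypothetical script name:

  # Inside a {julia} chunk of the .qmd report
  include("programs/01_nca_analysis.jl")   # recreates the NCA population, report, and plots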

adam_preparation

Prepares CDISC-compliant ADaM ADPPK datasets from SDTM source domains for population PK modeling.

Common use cases:

  • Converting SDTM datasets (DM, EX, PC, PP) to ADaM format
  • Creating ADPPK datasets for population PK analysis
  • Deriving analysis variables from source data
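
As a rough illustration, much of the derivation step reduces to joins and renames across SDTM domains. The mapping below is a simplified assumption, not a complete ADPPK specification; pc and dm are assumed to hold the PC and DM domains as DataFrames:

  using DataFrames

  # Join standardized PC concentration results to DM demographics by subject
  adppk = leftjoin(
      select(pc, :USUBJID, :PCSTRESN => :AVAL, :PCTPT => :ATPT),
      select(dm, :USUBJID, :AGE, :SEX, :RACE);
      on = :USUBJID)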

consolidate_scripts

Combines multiple analysis scripts from programs/ into a single, reproducible Julia file. Useful for regulatory submissions, sharing, and archiving.

Common use cases:

  • Archiving completed analyses in a single file
  • Preparing scripts for regulatory submission
  • Creating portable, self-contained analysis scripts

tutorial

Interactive onboarding tutorial for new users. Walks you through a complete pharmacometric analysis using sample data while teaching how PumasAide works.

Common use cases:

  • First-time PumasAide users learning the workflow
  • Understanding how the AI writes and executes Julia code
  • Exploring PumasAide capabilities with sample data

Customizing Behavior

Configures organization-specific pharmacometric preferences as skill files that coding agents discover automatically. These rules override default knowledge base guidance with your organization's thresholds, methodology choices, and conventions. See Organization Rules for a complete guide.

Common use cases:

  • Setting acceptance criteria for NCA quality metrics
  • Defining preferred estimation methods and convergence criteria
  • Establishing naming conventions and file organization standards

create_workflow

Interactively creates custom analysis workflows tailored to your specific needs. The AI assistant guides you through defining requirements and generates workflow documentation.

Common use cases:

  • Specialized analyses not covered by built-in workflows
  • Organization-specific analysis standards
  • Study-specific workflows for recurring analyses

Best Practices

Workflow Selection

  • Start simple: Use data_explore before formal analyses
  • Follow progressions: NCA → PopPK → Simulation as model complexity increases
  • Use domain expertise: Workflows encode best practices, but scientific judgment remains essential

Project Continuity

Analyses often span multiple sessions. The AI assistant maintains context through project notes in the notes/ directory, enabling iterative development over days or weeks.

Version Control

Generated scripts in programs/ are part of your analysis code:

  • Commit scripts to git with your analysis
  • Document analysis decisions in commit messages
  • Share analysis workflows across projects via git repositories