Introduction
Making complexity simple: differentiable learning over millions of autonomous agents.
Large Population Models (LPMs) are grounded in state-of-the-art AI research, a summary of which can be found here.
AgentTorch LPMs have four key features:
- Scalability: AgentTorch models can simulate country-size populations in seconds on commodity hardware.
- Differentiability: AgentTorch models can differentiate through simulations with stochastic dynamics and conditional interventions, enabling gradient-based learning (see the sketch below).
- Composition: AgentTorch models can compose with deep neural networks (e.g. LLMs), mechanistic simulators (e.g. Mitsuba), or other LPMs. This helps describe agent behavior, calibrate simulation parameters, and specify expressive interaction rules.
- Generalization: AgentTorch helps simulate diverse ecosystems - humans in geospatial worlds, cells in anatomical worlds, autonomous avatars in digital worlds.
AgentTorch is building the future of decision engines - inside the body, around us and beyond!
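To make the differentiability point concrete, here is a minimal, framework-free sketch in plain PyTorch (not AgentTorch's internal implementation): a learnable parameter drives a discrete, stochastic per-agent outcome via the Gumbel-Softmax relaxation, so gradients flow back through the sampling step. The variable names are illustrative assumptions.
# Minimal sketch in plain PyTorch (illustrative, not the AgentTorch API):
# differentiate through a discrete, stochastic per-agent outcome using the
# Gumbel-Softmax relaxation with a straight-through estimator.
import torch
import torch.nn.functional as F

infection_logit = torch.tensor(0.0, requires_grad=True)  # learnable parameter

# 1000 agents each sample a hard, one-hot outcome (infected / healthy),
# but gradients still flow back to infection_logit.
logits = torch.stack([infection_logit, -infection_logit]).expand(1000, 2)
outcomes = F.gumbel_softmax(logits, tau=0.5, hard=True)

simulated_infections = outcomes[:, 0].sum()
loss = (simulated_infections - 300.0) ** 2  # match an observed target
loss.backward()
print(infection_logit.grad)  # non-zero gradient through stochastic dynamics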
Installation
Install the framework using pip, like so:
> pip install git+https://github.com/agenttorch/agenttorch
Some models require extra dependencies that have to be installed separately. For more information on this, as well as the hardware the project has been run on, please see install.md.
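Once installed, a quick import check confirms the package is on your path (the module name is taken from the examples below):
# Sanity check that the package is importable after installation.
import AgentTorch
print("AgentTorch imported successfully")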
Getting Started
This section shows how to use existing models and population data to run simulations on your machine, and also serves as a showcase of the AgentTorch API.
A Jupyter notebook containing the examples below can be found here.
Executing a Simulation
# re-use existing models and population data easily
from AgentTorch.models import disease
from AgentTorch.populations import new_zealand
# use the executor to plug-n-play
from AgentTorch.execute import Executor
simulation = Executor(disease, new_zealand)
simulation.execute()
Using Gradient-Based Learning
# agent_torch works seamlessly with the PyTorch API
from torch.optim import SGD
# create the simulation
# ...
# create an optimizer for the learnable parameters
# in the simulation
optimizer = SGD(simulation.parameters(), lr=0.01)
# learn from each "episode" and run the next one
# with optimized parameters
for i in range(episodes):
    optimizer.zero_grad()
    simulation.execute()
    optimizer.step()
    simulation.reset()
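As written, the loop assumes execute() handles the objective internally. If you need an explicit loss, a hypothetical variant is sketched below; simulation.output and observed_data are placeholder names, not documented AgentTorch API.
# Hypothetical expanded loop with an explicit loss; `simulation.output`
# and `observed_data` are placeholders, not documented AgentTorch API.
import torch.nn.functional as F
from torch.optim import SGD

optimizer = SGD(simulation.parameters(), lr=0.01)

for episode in range(episodes):
    optimizer.zero_grad()
    simulation.execute()

    # compare a simulated quantity against observed data and
    # back-propagate through the differentiable simulation
    loss = F.mse_loss(simulation.output, observed_data)
    loss.backward()

    optimizer.step()
    simulation.reset()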
Talking to the Simulation
from AgentTorch.LLM.qa import SimulationAnalysisAgent, load_state_trace
# create the simulation
# ...
state_trace = load_state_trace(simulation)
analyzer = SimulationAnalysisAgent(simulation, state_trace)
# ask questions regarding the simulation
analyzer.query("How are stimulus payments affecting disease?")
analyzer.query("Which age group has the lowest median income, and how much is it?")
Guides and Tutorials
Understanding the Framework
A detailed explanation of the architecture of the AgentTorch framework can be found here.
Creating a Model
A tutorial on how to create a simple predator-prey model can be found in the tutorials/ folder.
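For a flavor of what that tutorial builds, here is a deliberately framework-free sketch of a single predator-prey update in plain PyTorch; the rates and update rule are illustrative assumptions, and the real tutorial implements this within AgentTorch's model abstractions.
# Framework-free predator-prey step (illustrative only; the tutorial
# implements the same dynamics inside AgentTorch's model abstractions).
import torch

def step(prey, predators, growth=0.05, predation=0.001, death=0.03):
    """One discrete Lotka-Volterra-style update of population counts."""
    eaten = predation * prey * predators
    new_prey = prey + growth * prey - eaten
    new_predators = predators - death * predators + 0.5 * eaten
    return new_prey.clamp(min=0), new_predators.clamp(min=0)

prey, predators = torch.tensor(1000.0), torch.tensor(20.0)
for _ in range(10):
    prey, predators = step(prey, predators)
print(prey.item(), predators.item())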
Contributing to AgentTorch
Thank you for your interest in contributing! You can contribute by reporting and fixing bugs in the framework or models, working on new features for the framework, creating new models, or writing documentation for the project.
Take a look at the contributing guide for instructions on how to set up your environment, make changes to the codebase, and contribute them back to the project.
Citation
If you use this project or code in your work, please cite it using the following BibTeX entry, which can also be found in citation.bib.
@inproceedings{chopra2024framework,
  title        = {A Framework for Learning in Agent-Based Models},
  author       = {Chopra, Ayush and Subramanian, Jayakumar and Krishnamurthy, Balaji and Raskar, Ramesh},
  booktitle    = {Proceedings of the 23rd International Conference on Autonomous Agents and Multi-agent Systems},
  year         = {2024},
  organization = {International Foundation for Autonomous Agents and Multiagent Systems},
}