Modal
Deploy Python code to serverless GPU containers with automatic scaling
Run Python code in the cloud with serverless containers, GPUs, and autoscaling. Use when deploying ML models, running batch processing jobs, scheduling compute-intensive tasks, or serving APIs that require GPU acceleration or dynamic scaling.
Example
User prompt: "I have a BERT model that I need to deploy for text classification with GPU acceleration. Can you help me create a Modal deployment that can handle variable traffic?"
Result: a production-ready deployment with GPU-accelerated inference, automatic scaling from zero to hundreds of containers, and HTTPS endpoints for API access.
Quick Start (3 Steps)
Get up and running in minutes
1. Install: `claude-code skill install modal`
2. Configure: `modal token new` (see Authentication and Setup below)
3. First Trigger: `@modal help`

Commands
| Command | Description | Required Args |
|---|---|---|
| @modal deploy-ml-model-for-production | Deploy a machine learning model with GPU acceleration and automatic scaling for real-time inference | None |
| @modal batch-process-large-datasets | Process thousands of files or data samples in parallel using automatic container scaling | None |
| @modal schedule-gpu-training-jobs | Set up scheduled model training or data processing jobs that run automatically with GPU resources | None |
Overview
Modal is a serverless platform for running Python code in the cloud with minimal configuration. Execute functions on powerful GPUs, scale automatically to thousands of containers, and pay only for compute used.
Modal is particularly suited for AI/ML workloads, high-performance batch processing, scheduled jobs, GPU inference, and serverless APIs. Sign up for free at https://modal.com and receive $30/month in credits.
When to Use This Skill
Use Modal for:
- Deploying and serving ML models (LLMs, image generation, embedding models)
- Running GPU-accelerated computation (training, inference, rendering)
- Batch processing large datasets in parallel
- Scheduling compute-intensive jobs (daily data processing, model training)
- Building serverless APIs that need automatic scaling
- Scientific computing requiring distributed compute or specialized hardware
Authentication and Setup
Modal requires authentication via API token.
Initial Setup
```bash
# Install Modal
uv pip install modal

# Authenticate (opens browser for login)
modal token new
```
This creates a token stored in ~/.modal.toml. The token authenticates all Modal operations.
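For headless environments (e.g. CI) where the browser flow is not available, Modal also reads token credentials from environment variables. A minimal sketch; the values are placeholders:

```bash
# Skip `modal token new` by exporting the token directly;
# Modal picks these up automatically
export MODAL_TOKEN_ID="ak-..."
export MODAL_TOKEN_SECRET="as-..."
```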
Verify Setup
```python
import modal

app = modal.App("test-app")

@app.function()
def hello():
    print("Modal is working!")
```
Run with: modal run script.py
Core Capabilities
Modal provides serverless Python execution through Functions that run in containers. Define compute requirements, dependencies, and scaling behavior declaratively.
1. Define Container Images
Specify dependencies and environment for functions using Modal Images.
```python
import modal

# Basic image with Python packages
image = (
    modal.Image.debian_slim(python_version="3.12")
    .uv_pip_install("torch", "transformers", "numpy")
)

app = modal.App("ml-app", image=image)
```
Common patterns:
- Install Python packages: `.uv_pip_install("pandas", "scikit-learn")`
- Install system packages: `.apt_install("ffmpeg", "git")`
- Use existing Docker images: `modal.Image.from_registry("nvidia/cuda:12.1.0-base")`
- Add local code: `.add_local_python_source("my_module")`
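These methods chain, so one image can combine several patterns. A minimal sketch; package versions, the base image tag, and the module name are illustrative:

```python
import modal

# Start from a CUDA base image, add system and Python deps, then local code
image = (
    modal.Image.from_registry("nvidia/cuda:12.1.0-base-ubuntu22.04", add_python="3.12")
    .apt_install("ffmpeg", "git")
    .uv_pip_install("torch==2.4.0", "transformers==4.44.0")
    .add_local_python_source("my_module")
)
```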
See references/images.md for comprehensive image building documentation.
2. Create Functions
Define functions that run in the cloud with the @app.function() decorator.
```python
@app.function()
def process_data(file_path: str):
    import pandas as pd
    df = pd.read_csv(file_path)
    return df.describe()
```
Call functions:
```python
# From local entrypoint
@app.local_entrypoint()
def main():
    result = process_data.remote("data.csv")
    print(result)
```
Run with: modal run script.py
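Beyond `.remote()`, Modal functions expose `.local()` (run in the current process) and `.spawn()` (submit without blocking). A short sketch reusing `process_data` from above:

```python
@app.local_entrypoint()
def other_call_patterns():
    # Run the function body locally, without a container
    local_result = process_data.local("data.csv")

    # Submit asynchronously; collect the result later via the handle
    call = process_data.spawn("data.csv")
    remote_result = call.get()
```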
See references/functions.md for function patterns, deployment, and parameter handling.
3. Request GPUs
Attach GPUs to functions for accelerated computation.
1@app.function(gpu="H100")
2def train_model():
3 import torch
4 assert torch.cuda.is_available()
5 # GPU-accelerated code here
Available GPU types:
- `T4`, `L4` - Cost-effective inference
- `A10`, `A100`, `A100-80GB` - Standard training/inference
- `L40S` - Excellent cost/performance balance (48GB)
- `H100`, `H200` - High-performance training
- `B200` - Flagship performance (most powerful)
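When any of several GPU types will do, Modal also accepts a list of acceptable types, tried in order of preference. A sketch:

```python
@app.function(gpu=["H100", "A100-80GB", "L40S"])  # First available type is used
def flexible_inference():
    import torch
    print(torch.cuda.get_device_name(0))
```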
Request multiple GPUs:
1@app.function(gpu="H100:8") # 8x H100 GPUs
2def train_large_model():
3 pass
See references/gpu.md for GPU selection guidance, CUDA setup, and multi-GPU configuration.
4. Configure Resources
Request CPU cores, memory, and disk for functions.
```python
@app.function(
    cpu=8.0,               # 8 physical cores
    memory=32768,          # 32 GiB RAM
    ephemeral_disk=10240,  # 10 GiB disk
)
def memory_intensive_task():
    pass
```
Default allocation: 0.125 CPU cores, 128 MiB memory. Billing is based on the reservation or actual usage, whichever is higher.
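Because billing tracks the higher of reservation and usage, it can help to reserve modestly and cap with a hard limit. Modal accepts `(request, limit)` tuples for CPU and memory; a sketch with illustrative numbers:

```python
@app.function(
    cpu=(2.0, 8.0),        # Reserve 2 cores, allow bursting up to 8
    memory=(4096, 16384),  # Reserve 4 GiB, hard-limit at 16 GiB
)
def bursty_task():
    pass
```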
See references/resources.md for resource limits and billing details.
5. Scale Automatically
Modal autoscales functions from zero to thousands of containers based on demand.
Process inputs in parallel:
```python
@app.function()
def analyze_sample(sample_id: int):
    # Process a single sample (placeholder computation)
    return sample_id ** 2

@app.local_entrypoint()
def main():
    sample_ids = range(1000)
    # Automatically parallelized across containers
    results = list(analyze_sample.map(sample_ids))
```
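For multi-argument functions, `.starmap()` unpacks tuples of arguments, and `return_exceptions=True` keeps one failed input from aborting the whole batch. A sketch:

```python
@app.function()
def add(a: int, b: int):
    return a + b

@app.local_entrypoint()
def main():
    pairs = [(1, 2), (3, 4), (5, 6)]
    # Each tuple is unpacked into (a, b); failures come back as exception objects
    for out in add.starmap(pairs, return_exceptions=True):
        print(out)
```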
Configure autoscaling:
```python
@app.function(
    max_containers=100,   # Upper limit
    min_containers=2,     # Keep warm
    buffer_containers=5,  # Idle buffer for bursts
)
def inference():
    pass
```
See references/scaling.md for autoscaling configuration, concurrency, and scaling limits.
6. Store Data Persistently
Use Volumes for persistent storage across function invocations.
```python
volume = modal.Volume.from_name("my-data", create_if_missing=True)

@app.function(volumes={"/data": volume})
def save_results(data):
    with open("/data/results.txt", "w") as f:
        f.write(data)
    volume.commit()  # Persist changes
```
Volumes persist data between runs: use them to store model weights, cache datasets, and share data between functions.
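Containers that only read the volume should call `reload()` to pick up changes committed elsewhere. A brief sketch:

```python
@app.function(volumes={"/data": volume})
def read_results():
    volume.reload()  # Fetch the latest committed state
    with open("/data/results.txt") as f:
        return f.read()
```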
See references/volumes.md for volume management, commits, and caching patterns.
7. Manage Secrets
Store API keys and credentials securely using Modal Secrets.
```python
@app.function(secrets=[modal.Secret.from_name("huggingface")])
def download_model():
    import os
    token = os.environ["HF_TOKEN"]
    # Use token for authentication
```
Create secrets in Modal dashboard or via CLI:
```bash
modal secret create my-secret KEY=value API_TOKEN=xyz
```
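Secrets can also be constructed inline from a dict, which is handy for forwarding local environment variables during development. A sketch; `Secret.from_dict` is part of the Modal API, and the variable name is illustrative:

```python
import os
import modal

# Forward a locally set variable into the container environment
local_secret = modal.Secret.from_dict({"HF_TOKEN": os.environ["HF_TOKEN"]})

@app.function(secrets=[local_secret])
def use_token():
    import os
    print("Token present:", "HF_TOKEN" in os.environ)
```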
See references/secrets.md for secret management and authentication patterns.
8. Deploy Web Endpoints
Serve HTTP endpoints, APIs, and webhooks with @modal.web_endpoint().
```python
@app.function()
@modal.web_endpoint(method="POST")
def predict(data: dict):
    # Process the request; `model` stands in for a model loaded at
    # container start (see the class-based pattern under Common Workflows)
    result = model.predict(data["input"])
    return {"prediction": result}
```
Deploy with:
```bash
modal deploy script.py
```
Modal provides an HTTPS URL for the endpoint.
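The deployed endpoint can then be called like any HTTPS API. The URL shape below is illustrative; Modal prints the actual URL on deploy:

```bash
curl -X POST https://your-workspace--predict.modal.run \
  -H "Content-Type: application/json" \
  -d '{"input": "example text"}'
```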
See references/web-endpoints.md for FastAPI integration, streaming, authentication, and WebSocket support.
9. Schedule Jobs
Run functions on a schedule with cron expressions.
```python
@app.function(schedule=modal.Cron("0 2 * * *"))  # Daily at 2 AM
def daily_backup():
    # Backup data
    pass

@app.function(schedule=modal.Period(hours=4))  # Every 4 hours
def refresh_cache():
    # Update cache
    pass
```
Once the app is deployed with `modal deploy`, scheduled functions run automatically without manual invocation.
See references/scheduled-jobs.md for cron syntax, timezone configuration, and monitoring.
Common Workflows
Deploy ML Model for Inference
```python
import modal

# Define dependencies
image = modal.Image.debian_slim().uv_pip_install("torch", "transformers")
app = modal.App("llm-inference", image=image)

# Helper to download model weights (see the build-time sketch below)
@app.function()
def download_model():
    from transformers import AutoModel
    AutoModel.from_pretrained("bert-base-uncased")

# Serve model
@app.cls(gpu="L40S")
class Model:
    @modal.enter()
    def load_model(self):
        from transformers import pipeline
        self.pipe = pipeline("text-classification", device="cuda")

    @modal.method()
    def predict(self, text: str):
        return self.pipe(text)

@app.local_entrypoint()
def main():
    model = Model()
    result = model.predict.remote("Modal is great!")
    print(result)
```
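To bake the weights in at build time (so cold starts skip the download), the downloader can run as an image build step via `run_function`, which is part of Modal's image API. A sketch:

```python
def download_weights():
    from transformers import AutoModel
    AutoModel.from_pretrained("bert-base-uncased")

image = (
    modal.Image.debian_slim()
    .uv_pip_install("torch", "transformers")
    .run_function(download_weights)  # Executed once while building the image
)
```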
Batch Process Large Dataset
```python
@app.function(cpu=2.0, memory=4096)
def process_file(file_path: str):
    import pandas as pd
    df = pd.read_csv(file_path)
    # Process data
    return df.shape[0]

@app.local_entrypoint()
def main():
    files = ["file1.csv", "file2.csv"]  # ...thousands of files
    # Automatically parallelized across containers
    for count in process_file.map(files):
        print(f"Processed {count} rows")
```
Train Model on GPU
```python
@app.function(
    gpu="A100:2",  # 2x A100 GPUs
    timeout=3600,  # 1 hour timeout
)
def train_model(config: dict):
    import torch
    # Multi-GPU training code; create_model/train are your own helpers
    model = create_model(config)
    metrics = train(model)
    return metrics
```
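Long jobs often pair the timeout with automatic retries for transient failures. A sketch; `retries` is a standard function parameter, and the count here is arbitrary:

```python
@app.function(gpu="A100:2", timeout=3600, retries=3)
def resilient_training(config: dict):
    # Re-run up to 3 times if the container fails mid-training
    ...
```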
Reference Documentation
Detailed documentation for specific features:
- references/getting-started.md - Authentication, setup, basic concepts
- references/images.md - Image building, dependencies, Dockerfiles
- references/functions.md - Function patterns, deployment, parameters
- references/gpu.md - GPU types, CUDA, multi-GPU configuration
- references/resources.md - CPU, memory, disk management
- references/scaling.md - Autoscaling, parallel execution, concurrency
- references/volumes.md - Persistent storage, data management
- references/secrets.md - Environment variables, authentication
- references/web-endpoints.md - APIs, webhooks, endpoints
- references/scheduled-jobs.md - Cron jobs, periodic tasks
- references/examples.md - Common patterns for scientific computing
Best Practices
- Pin dependencies in `.uv_pip_install()` for reproducible builds
- Use appropriate GPU types - L40S for inference, H100/A100 for training
- Leverage caching - use Volumes for model weights and datasets
- Configure autoscaling - set `max_containers` and `min_containers` based on workload
- Import packages in the function body if not available locally
- Use `.map()` for parallel processing instead of sequential loops
- Store secrets securely - never hardcode API keys
- Monitor costs - check the Modal dashboard for usage and billing
Troubleshooting
"Module not found" errors:
- Add packages to the image with `.uv_pip_install("package-name")`
- Import packages inside the function body if not available locally

GPU not detected:
- Verify the GPU specification: `@app.function(gpu="A100")`
- Check CUDA availability: `torch.cuda.is_available()`

Function timeout:
- Increase the timeout: `@app.function(timeout=3600)`
- Default timeout is 5 minutes

Volume changes not persisting:
- Call `volume.commit()` after writing files
- Verify the volume is mounted correctly in the function decorator
For additional help, see the Modal documentation at https://modal.com/docs or join the Modal Slack community.
Information
- Author: davila7
- Updated: 2026-01-30
- Category: scripting