CLI Reference
Commands accept either DAG names (from the YAML name field) or file paths.
- Both formats: start, stop, status, retry
- File path only: dry, enqueue
- DAG name only: restart
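For reference, a minimal DAG file might look like this (file name, DAG name, and step are illustrative):

```yaml
# hello.yaml - minimal DAG definition (illustrative names)
name: hello            # the DAG name that name-accepting commands use
steps:
  - name: greet
    command: echo hello
```

With this file in the DAGs directory, dagu start hello (DAG name) and dagu start hello.yaml (file path) refer to the same workflow.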
Global Options
dagu [global options] command [command options] [arguments...]

Options:
- --config, -c - Config file (default: ~/.config/dagu/config.yaml)
- --dagu-home - Override DAGU_HOME for this command invocation
- --quiet, -q - Suppress output
- --cpu-profile - Enable CPU profiling
- --help, -h - Show help
- --version, -v - Print version
Commands
exec
Run a command without writing a YAML file.
dagu exec [options] -- <command> [args...]

Options:
- --name, -N - DAG name (default: exec-<command>)
- --run-id, -r - Custom run ID
- --env KEY=VALUE - Set environment variable (repeatable)
- --dotenv <path> - Load dotenv file (repeatable)
- --workdir <path> - Working directory
- --shell <path> - Shell binary
- --base <file> - Custom base config file (default: ~/.config/dagu/base.yaml)
- --singleton - Allow only one active run
- --queue <name> - Enqueue for distributed execution
- --no-queue - Force local execution
- --worker-label key=value - Target specific workers (repeatable)
# Basic usage
dagu exec -- python script.py
# With environment variables
dagu exec --env DB_HOST=localhost -- python etl.py
# Singleton execution
dagu exec --singleton --name backup -- rsync -av /src/ /dst/
# Distributed execution
dagu exec --queue gpu-jobs --worker-label gpu=true -- python train.py

See the exec guide for detailed documentation.
start
Run a DAG workflow.
dagu start [options] DAG_NAME_OR_FILE [-- PARAMS...]

Interactive Mode:
- If no DAG file is specified, opens an interactive selector
- Only available in terminal (TTY) environments
- Shows enhanced progress display during execution
Options:
- --params, -p - Parameters as JSON
- --name, -N - Override the DAG name (default: name from DAG definition or filename)
- --run-id, -r - Custom run ID
- --from-run-id - Re-run using the DAG snapshot and parameters captured from a historic run
- --no-queue, -n - Execute immediately
Note:
--from-run-id cannot be combined with --params, --parent, or --root. Provide exactly one DAG name or file so the command can look up the historic run.
# Basic run
dagu start my-workflow.yaml
# Interactive mode (no file specified)
dagu start
# With parameters (note the -- separator)
dagu start etl.yaml -- DATE=2024-01-01 ENV=prod
# Custom run ID
dagu start --run-id batch-001 etl.yaml
# Override DAG name
dagu start --name my_custom_name my-workflow.yaml
# Clone parameters from a historic run
dagu start --from-run-id 20241031_235959 example-dag.yaml

stop
Stop a running DAG.
dagu stop [options] DAG_NAME_OR_FILE

Options:
- --run-id, -r - Specific run ID (optional)
dagu stop my-workflow # Stop current run
dagu stop --run-id=20240101_120000 etl # Stop specific run

restart
Restart a DAG run with a new ID.
dagu restart [options] DAG_NAME

Options:
- --run-id, -r - Run to restart (optional)
dagu restart my-workflow # Restart latest
dagu restart --run-id=20240101_120000 etl # Restart specific

retry
Retry a failed DAG execution.
dagu retry [options] DAG_NAME_OR_FILE

Options:
- --run-id, -r - Run to retry (required)
dagu retry --run-id=20240101_120000 my-workflow

status
Display current status of a DAG.
dagu status [options] DAG_NAME_OR_FILE

Options:
- --run-id, -r - Check specific run (optional)
dagu status my-workflow # Latest run status

Output:
Status: running
Started: 2024-01-01 12:00:00
Steps:
✓ download [completed]
⟳ process [running]
○ upload [pending]

server
Start the web UI server.
dagu server [options]

Options:
- --host, -s - Host (default: localhost)
- --port, -p - Port (default: 8080)
- --dags, -d - DAGs directory
dagu server # Default settings
dagu server --host=0.0.0.0 --port=9000 # Custom host/port

scheduler
Start the DAG scheduler daemon.
dagu scheduler [options]

Options:
- --dags, -d - DAGs directory
dagu scheduler # Default settings
dagu scheduler --dags=/opt/dags # Custom directory

start-all
Start scheduler, web UI, and optionally coordinator service.
dagu start-all [options]

Options:
- --host, -s - Host (default: localhost)
- --port, -p - Port (default: 8080)
- --dags, -d - DAGs directory
- --coordinator.host - Coordinator bind address (default: 127.0.0.1)
- --coordinator.advertise - Address to advertise in service registry
- --coordinator.port - Coordinator gRPC port (default: 50055)
# Single instance mode (coordinator disabled)
dagu start-all
# Distributed mode with coordinator enabled
dagu start-all --coordinator.host=0.0.0.0 --coordinator.port=50055
# Production mode
dagu start-all --host=0.0.0.0 --port=9000 --coordinator.host=0.0.0.0

Note: The coordinator service is only started when --coordinator.host is set to a non-localhost address (not 127.0.0.1 or localhost). By default, start-all runs in single instance mode without the coordinator.
validate
Validate a DAG specification for structural correctness.
dagu validate [options] DAG_FILE

Checks structural correctness and references (e.g., step dependencies) without evaluating variables or executing the DAG. Returns validation errors in a human-readable format.
dagu validate my-workflow.yaml

Output when valid:
DAG spec is valid: my-workflow.yaml (name: my-workflow)

Output when invalid:
Validation failed for my-workflow.yaml
- Step 'process' depends on non-existent step 'missing_step'
- Invalid cron expression in schedule: '* * * *'

dry
Perform a dry run of a DAG without executing its steps.
dagu dry [options] DAG_FILE [-- PARAMS...]

Options:
- --params, -p - Parameters as JSON
- --name, -N - Override the DAG name (default: name from DAG definition or filename)
dagu dry my-workflow.yaml
dagu dry etl.yaml -- DATE=2024-01-01 # With parameters
dagu dry --name my_custom_name my-workflow.yaml # Override DAG name

enqueue
Add a DAG to the execution queue.
dagu enqueue [options] DAG_FILE [-- PARAMS...]

Options:
- --run-id, -r - Custom run ID
- --params, -p - Parameters as JSON
- --name, -N - Override the DAG name (default: name from DAG definition or filename)
- --queue, -u - Override DAG-level queue name for this enqueue
dagu enqueue my-workflow.yaml
dagu enqueue --run-id=batch-001 etl.yaml -- TYPE=daily
# Enqueue to a specific queue (override)
dagu enqueue --queue=high-priority my-workflow.yaml
# Override DAG name
dagu enqueue --name my_custom_name my-workflow.yaml

dequeue
Remove a DAG from the execution queue.
dagu dequeue <queue-name> --dag-run=<dag-name>:<run-id> # remove specific run
dagu dequeue <queue-name> # pop the oldest item

Example:
dagu dequeue default --dag-run=my-workflow:batch-001
dagu dequeue default

version
Display version information.
dagu version

migrate
Migrate legacy data to new format.
dagu migrate history # Migrate v1.16 -> v1.17+ format

coordinator
Start the coordinator gRPC server for distributed task execution.
dagu coordinator [options]

Options:
- --coordinator.host - Host address to bind (default: 127.0.0.1)
- --coordinator.advertise - Address to advertise in service registry (default: auto-detected hostname)
- --coordinator.port - Port number (default: 50055)
- --peer.cert-file - Path to TLS certificate file for peer connections
- --peer.key-file - Path to TLS key file for peer connections
- --peer.client-ca-file - Path to CA certificate file for client verification (mTLS)
- --peer.insecure - Use insecure connection (h2c) instead of TLS (default: true)
- --peer.skip-tls-verify - Skip TLS certificate verification (insecure)
# Basic usage
dagu coordinator --coordinator.host=0.0.0.0 --coordinator.port=50055
# Bind to all interfaces and advertise service name (for containers/K8s)
dagu coordinator \
--coordinator.host=0.0.0.0 \
--coordinator.advertise=dagu-server \
--coordinator.port=50055
# With TLS
dagu coordinator \
--peer.insecure=false \
--peer.cert-file=server.pem \
--peer.key-file=server-key.pem
# With mutual TLS
dagu coordinator \
--peer.insecure=false \
--peer.cert-file=server.pem \
--peer.key-file=server-key.pem \
--peer.client-ca-file=ca.pem

The coordinator service enables distributed task execution by:
- Automatically registering in the service registry system
- Accepting task polling requests from workers
- Matching tasks to workers based on labels
- Tracking worker health via heartbeats (every 10 seconds)
- Providing task distribution API with automatic failover
- Managing worker lifecycle through file-based registry
worker
Start a worker that polls the coordinator for tasks.
dagu worker [options]

Options:
- --worker.id - Worker instance ID (default: hostname@PID)
- --worker.max-active-runs - Maximum number of active runs (default: 100)
- --worker.labels, -l - Worker labels for capability matching (format: key1=value1,key2=value2)
- --peer.insecure - Use insecure connection (h2c) instead of TLS (default: true)
- --peer.cert-file - Path to TLS certificate file for peer connections
- --peer.key-file - Path to TLS key file for peer connections
- --peer.client-ca-file - Path to CA certificate file for server verification
- --peer.skip-tls-verify - Skip TLS certificate verification (insecure)
# Basic usage
dagu worker
# With custom configuration
dagu worker \
--worker.id=worker-1 \
--worker.max-active-runs=50
# With labels for capability matching
dagu worker --worker.labels gpu=true,memory=64G,region=us-east-1
dagu worker --worker.labels cpu-arch=amd64,instance-type=m5.xlarge
# With TLS connection
dagu worker \
--peer.insecure=false
# With mutual TLS
dagu worker \
--peer.insecure=false \
--peer.cert-file=client.pem \
--peer.key-file=client-key.pem \
--peer.client-ca-file=ca.pem
# With self-signed certificates
dagu worker \
--peer.insecure=false \
--peer.skip-tls-verify

Workers automatically register in the service registry system, send regular heartbeats, and poll the coordinator for tasks matching their labels to execute them locally.
Configuration
Priority: CLI flags > Environment variables > Config file
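As a quick illustration of the precedence chain (port values here are illustrative, not defaults):

```shell
# A value in config.yaml sits at the lowest priority level.
# An environment variable overrides the config file:
export DAGU_PORT=9000
# A CLI flag would override both, e.g.:
#   dagu server --port=9100
echo "DAGU_PORT=$DAGU_PORT"
```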
Using Custom Home Directory
The --dagu-home flag allows you to override the application home directory for a specific command invocation. This is useful for:
- Testing with different configurations
- Running multiple Dagu instances with isolated data
- CI/CD scenarios requiring custom directories
# Use a custom home directory for this command
dagu --dagu-home=/tmp/dagu-test start my-workflow.yaml
# Start server with isolated data
dagu --dagu-home=/opt/dagu-prod start-all
# Run scheduler with specific configuration
dagu --dagu-home=/var/lib/dagu scheduler

When --dagu-home is set, it overrides the DAGU_HOME environment variable and uses a unified directory structure:
$DAGU_HOME/
├── dags/ # DAG definitions
├── logs/ # All log files
├── data/ # Application data
├── suspend/ # DAG suspend flags
├── config.yaml # Main configuration
└── base.yaml # Shared DAG defaults

Key Environment Variables
- DAGU_HOME - Set all directories to this path
- DAGU_HOST - Server host (default: 127.0.0.1)
- DAGU_PORT - Server port (default: 8080)
- DAGU_DAGS_DIR - DAGs directory
- DAGU_LOG_DIR - Log directory
- DAGU_DATA_DIR - Data directory
- DAGU_AUTH_BASIC_USERNAME - Basic auth username
- DAGU_AUTH_BASIC_PASSWORD - Basic auth password
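These variables compose naturally for isolated instances. A sketch using only the documented variables (paths and port are illustrative, and the dagu invocation is shown commented out):

```shell
# Point an isolated Dagu instance at a sandbox directory.
export DAGU_HOME=/tmp/dagu-sandbox   # all directories resolve under this path
export DAGU_PORT=8090                # web UI port for this instance
mkdir -p "$DAGU_HOME/dags"           # DAG definitions live here
# dagu start-all                     # would use the sandbox and port above
echo "$DAGU_HOME/dags"
```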
