Administration
Infrastructure components and INFRA cluster administration SOP: create, destroy, scale out, scale in, certificates, repositories…
This section covers daily administration and operations for Pigsty deployments.
Create INFRA Module
Use the infra.yml playbook to install the INFRA module on the infra group:
./infra.yml # Install INFRA module on infra group
Uninstall INFRA Module
Use the dedicated infra-rm.yml playbook to remove the INFRA module from the infra group:
./infra-rm.yml # Remove INFRA module from infra group
Manage Local Repository
Pigsty includes a local yum/apt repo for software packages. Manage the repo with the following variables and tasks:
Repo Variables
| Variable | Description |
|---|---|
| repo_enabled | Enable local repo on node |
| repo_upstream | Upstream repos to include |
| repo_remove | Remove upstream repos if true |
| repo_url_pkg | Extra packages to download |
| repo_clean | Clean repo cache (makecache) |
| repo_pkg | Packages to include |
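An illustrative inventory snippet tuning some of these variables (values shown are examples, not necessarily the defaults):

```yaml
all:
  vars:
    repo_enabled: true     # build and enable the local repo on this node
    repo_remove: true      # remove pre-existing upstream repo files
    repo_clean: true       # clean repo cache (makecache) when building
```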
Repo Tasks
./infra.yml -t repo # Create or update repo
Repo location: /www/pigsty served by Nginx.
More: Configuration: INFRA - REPO
1 - Ansible
Using Ansible to run administration commands
Ansible is installed by default on all INFRA nodes and can be used to manage the entire deployment.
Pigsty implements automation based on Ansible, following the Infrastructure-as-Code philosophy.
Ansible knowledge is useful for managing databases and infrastructure, but not required. You only need to know how to execute Playbooks - YAML files that define a series of automated tasks.
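For illustration, a playbook is just a YAML file describing plays: a set of target hosts plus an ordered list of tasks. This toy example is hypothetical and not part of Pigsty:

```yaml
# toy-playbook.yml — a hypothetical standalone example, not shipped with Pigsty
- name: Example play
  hosts: all
  tasks:
    - name: Ensure a marker file exists
      ansible.builtin.file:
        path: /tmp/pigsty-example
        state: touch
```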
Installation
Pigsty automatically installs ansible and its dependencies during the bootstrap process.
For manual installation, use the following commands:
# Debian / Ubuntu
sudo apt install -y ansible python3-jmespath
# EL 10
sudo dnf install -y ansible python-jmespath
# EL 8/9
sudo dnf install -y ansible python3.12-jmespath
# EL 7
sudo yum install -y ansible python-jmespath
macOS
macOS users can install using Homebrew:
brew install ansible
pip3 install jmespath
Basic Usage
To run a playbook, simply execute ./path/to/playbook.yml. Here are the most commonly used Ansible command-line parameters:
| Purpose | Parameter | Description |
|---|---|---|
| Where | -l / --limit <pattern> | Limit target hosts/groups/patterns |
| What | -t / --tags <tags> | Only run tasks with specified tags |
| How | -e / --extra-vars <vars> | Pass extra command-line variables |
| Config | -i / --inventory <path> | Specify inventory file path |
Limiting Hosts
Use -l|--limit <pattern> to limit execution to specific groups, hosts, or patterns:
./node.yml # Execute on all nodes
./pgsql.yml -l pg-test # Only execute on pg-test cluster
./pgsql.yml -l pg-* # Execute on all clusters starting with pg-
./pgsql.yml -l 10.10.10.10 # Only execute on specific IP host
Running playbooks without host limits can be very dangerous! By default, most playbooks execute on all hosts. Use with caution!
Limiting Tasks
Use -t|--tags <tags> to only execute task subsets with specified tags:
./infra.yml -t repo # Only execute tasks to create local repo
./infra.yml -t repo_upstream # Only execute tasks to add upstream repos
./node.yml -t node_pkg # Only execute tasks to install node packages
./pgsql.yml -t pg_hba # Only execute tasks to render pg_hba.conf
Passing Variables
Use -e|--extra-vars <key=value> to override variables at runtime:
./pgsql.yml -e pg_clean=true # Force clean existing PG instances
./pgsql-rm.yml -e pg_rm_pkg=false # Keep packages when uninstalling
./node.yml -e '{"node_tune":"tiny"}' # Pass variables in JSON format
./pgsql.yml -e @/path/to/config.yml # Load variables from YAML file
Specifying Inventory
By default, Ansible uses pigsty.yml in the current directory as the inventory.
Use -i|--inventory <path> to specify a different config file:
./pgsql.yml -i files/pigsty/full.yml -l pg-test
[!NOTE]
To permanently change the default config file path, modify the inventory parameter in ansible.cfg.
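For example, the relevant stanza in ansible.cfg might look like this (the alternative path below is illustrative):

```ini
[defaults]
; point Ansible at a different default inventory file
inventory = files/pigsty/full.yml
```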
2 - Playbooks
Built-in Ansible playbooks in Pigsty
Pigsty uses idempotent Ansible playbooks for management and control. Running playbooks requires ansible-playbook in the system PATH, so install Ansible first.
Available Playbooks
| Module | Playbook | Purpose |
|---|---|---|
| INFRA | install.yml | One-click Pigsty installation |
| INFRA | infra.yml | Initialize Pigsty infrastructure on infra nodes |
| INFRA | infra-rm.yml | Remove infrastructure components from infra nodes |
| INFRA | cache.yml | Create offline installation packages from target nodes |
| INFRA | cert.yml | Issue certificates using Pigsty self-signed CA |
| NODE | node.yml | Initialize nodes, configure to desired state |
| NODE | node-rm.yml | Remove nodes from Pigsty |
| PGSQL | pgsql.yml | Initialize HA PostgreSQL cluster, or add new replica |
| PGSQL | pgsql-rm.yml | Remove PostgreSQL cluster, or remove replica |
| PGSQL | pgsql-db.yml | Add new business database to existing cluster |
| PGSQL | pgsql-user.yml | Add new business user to existing cluster |
| PGSQL | pgsql-pitr.yml | Perform point-in-time recovery (PITR) on cluster |
| PGSQL | pgsql-monitor.yml | Monitor remote PostgreSQL using local exporters |
| PGSQL | pgsql-migration.yml | Generate migration manual and scripts for PostgreSQL |
| PGSQL | slim.yml | Install Pigsty with minimal components |
| REDIS | redis.yml | Initialize Redis cluster/node/instance |
| REDIS | redis-rm.yml | Remove Redis cluster/node/instance |
| ETCD | etcd.yml | Initialize ETCD cluster, or add new member |
| ETCD | etcd-rm.yml | Remove ETCD cluster, or remove existing member |
| MINIO | minio.yml | Initialize MinIO cluster |
| MINIO | minio-rm.yml | Remove MinIO cluster |
| DOCKER | docker.yml | Install Docker on nodes |
| DOCKER | app.yml | Install applications using Docker Compose |
| FERRET | mongo.yml | Install Mongo/FerretDB on nodes |
Deployment Strategy
The install.yml playbook orchestrates specialized playbooks in the following group order for complete deployment:
- infra: infra.yml (-l infra)
- nodes: node.yml
- etcd: etcd.yml (-l etcd)
- minio: minio.yml (-l minio)
- pgsql: pgsql.yml
Circular Dependency Note: There is a weak circular dependency between NODE and INFRA: registering a NODE requires INFRA to already exist, while the INFRA module itself runs on a managed NODE.
The solution is to initialize the infra nodes first, then add the other nodes. To complete the entire deployment in one pass, use install.yml.
Safety Notes
Most playbooks are idempotent, but when protection options are not enabled, re-running deployment playbooks may wipe existing databases and create new ones.
Use extra caution with the pgsql, minio, and infra playbooks. Read the documentation carefully and proceed with caution.
Best Practices
- Read playbook documentation carefully before execution
- Press Ctrl-C immediately to stop when anomalies occur
- Test in non-production environments first
- Use the -l parameter to limit target hosts, avoiding unintended hosts
- Use the -t parameter to specify tags, executing only specific tasks
Dry-Run Mode
Use --check --diff options to preview changes without actually executing:
# Preview changes without execution
./pgsql.yml -l pg-test --check --diff
# Check specific tasks with tags
./pgsql.yml -l pg-test -t pg_config --check --diff
3 - Nginx Management
Nginx management, web portal configuration, web server, upstream services
Pigsty installs Nginx on INFRA nodes as the entry point for all web services, listening on standard ports 80/443.
In Pigsty, you can configure Nginx through the inventory to provide various services:
- Expose web interfaces for monitoring components like Grafana, VictoriaMetrics (VMUI), Alertmanager, and VictoriaLogs
- Serve static files (software repos, documentation sites, websites, etc.)
- Proxy custom application services (internal apps, database management UIs, Docker application interfaces, etc.)
- Automatically issue self-signed HTTPS certificates, or use Certbot to obtain free Let’s Encrypt certificates
- Expose services through a single port using different subdomains for unified access
Basic Configuration
Customize Nginx behavior via infra_portal parameter:
infra_portal:
home: { domain: i.pigsty }
grafana : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
prometheus : { domain: p.pigsty ,endpoint: "${admin_ip}:8428" }
alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9059" }
blackbox : { endpoint: "${admin_ip}:9115" }
vmalert : { endpoint: "${admin_ip}:8880" }
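To proxy a custom application through the portal, add another record; the name, domain, and endpoint below are hypothetical:

```yaml
infra_portal:
  # ...existing records...
  myapp: { domain: app.pigsty, endpoint: "10.10.10.10:8080", websocket: true }
```

After changing infra_portal, re-render and reload Nginx with ./infra.yml -t nginx_config,nginx_reload.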
Server Parameters
4 - Software Repository
Managing local APT/YUM software repositories
Pigsty supports creating and managing local APT/YUM software repositories for offline deployment or accelerated package installation.
Quick Start
To add packages to the local repository:
- Add packages to repo_packages (default packages)
- Add packages to repo_extra_packages (extra packages)
- Run the build commands:
./infra.yml -t repo_build # Build local repo from upstream
./node.yml -t node_repo # Refresh node repository cache
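For example, extras can be declared in the inventory before rebuilding; the package names below are placeholders for whatever you actually need:

```yaml
all:
  vars:
    repo_extra_packages: [ some-extension, some-tool ]  # hypothetical package names
```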
Package Aliases
Pigsty predefines common package combinations for batch installation:
EL Systems (RHEL/CentOS/Rocky)
| Alias | Description |
|---|---|
| node-bootstrap | Ansible, Python3 tools, SSH related |
| infra-package | Nginx, etcd, HAProxy, monitoring exporters, MinIO |
| pgsql-utility | Patroni, pgBouncer, pgBackRest, PG tools |
| pgsql | Full PostgreSQL (server, client, extensions) |
| pgsql-mini | Minimal PostgreSQL installation |
Debian/Ubuntu Systems
| Alias | Description |
|---|---|
| node-bootstrap | Ansible, development tools |
| infra-package | Infrastructure components (Debian naming) |
| pgsql-client | PostgreSQL client |
| pgsql-server | PostgreSQL server and related packages |
Playbook Tasks
Main Tasks
| Task | Description |
|---|---|
| repo | Create local repo from internet or offline packages |
| repo_build | Build from upstream if not exists |
| repo_upstream | Add upstream repository files |
| repo_pkg | Download packages and dependencies |
| repo_create | Create/update YUM or APT repository |
| repo_nginx | Start Nginx file server |
Complete Task List
./infra.yml -t repo_dir # Create local repository directory
./infra.yml -t repo_check # Check if local repo exists
./infra.yml -t repo_prepare # Use existing repo directly
./infra.yml -t repo_build # Build repo from upstream
./infra.yml -t repo_upstream # Add upstream repositories
./infra.yml -t repo_remove # Delete existing repo files
./infra.yml -t repo_add # Add repo to system directory
./infra.yml -t repo_url_pkg # Download packages from internet
./infra.yml -t repo_cache # Create metadata cache
./infra.yml -t repo_boot_pkg # Install bootstrap packages
./infra.yml -t repo_pkg # Download packages and dependencies
./infra.yml -t repo_create # Create local repository
./infra.yml -t repo_use # Add new repo to system
./infra.yml -t repo_nginx # Start Nginx file server
Common Operations
Add New Packages
# 1. Configure upstream repositories
./infra.yml -t repo_upstream
# 2. Download packages and dependencies
./infra.yml -t repo_pkg
# 3. Build local repository metadata
./infra.yml -t repo_create
Refresh Node Repositories
./node.yml -t node_repo # Refresh repository cache on all nodes
Complete Repository Rebuild
./infra.yml -t repo # Create repo from internet or offline packages
5 - Domain Management
Configure local or public domain names to access Pigsty services.
Use domain names instead of IP addresses to access Pigsty’s various web services.
Quick Start
Add the following static resolution records to /etc/hosts:
10.10.10.10 i.pigsty g.pigsty a.pigsty
Replace the IP address with your actual Pigsty node's IP.
Why Use Domain Names
- Easier to remember than IP addresses
- Flexible pointing to different IPs
- Unified service management through Nginx
- Support for HTTPS encryption
- Prevent ISP hijacking in some regions
- Allow access to internally bound services via proxy
DNS Mechanism
Default Domains
6 - Module Management
INFRA module management SOP: define, create, destroy, scale out, scale in
This document covers daily management operations for the INFRA module, including installation, uninstallation, scaling, and component maintenance.
Install INFRA Module
Use the infra.yml playbook to install the INFRA module on the infra group:
./infra.yml # Install INFRA module on infra group
Uninstall INFRA Module
Use the infra-rm.yml playbook to uninstall the INFRA module from the infra group:
./infra-rm.yml # Uninstall INFRA module from infra group
Scale Out INFRA Module
Assign infra_seq to new nodes and add them to the infra group in the inventory:
all:
children:
infra:
hosts:
10.10.10.10: { infra_seq: 1 } # Existing node
10.10.10.11: { infra_seq: 2 } # New node
Use the -l limit option to execute the playbook on the new node only:
./infra.yml -l 10.10.10.11 # Install INFRA module on new node
Manage Local Repository
Local repository management tasks:
./infra.yml -t repo # Create repo from internet or offline packages
./infra.yml -t repo_upstream # Add upstream repositories
./infra.yml -t repo_pkg # Download packages and dependencies
./infra.yml -t repo_create # Create local yum/apt repository
Complete subtask list:
./infra.yml -t repo_dir # Create local repository directory
./infra.yml -t repo_check # Check if local repo exists
./infra.yml -t repo_prepare # Use existing repo directly
./infra.yml -t repo_build # Build repo from upstream
./infra.yml -t repo_upstream # Add upstream repositories
./infra.yml -t repo_remove # Delete existing repo files
./infra.yml -t repo_add # Add repo to system directory
./infra.yml -t repo_url_pkg # Download packages from internet
./infra.yml -t repo_cache # Create metadata cache
./infra.yml -t repo_boot_pkg # Install bootstrap packages
./infra.yml -t repo_pkg # Download packages and dependencies
./infra.yml -t repo_create # Create local repository
./infra.yml -t repo_use # Add new repo to system
./infra.yml -t repo_nginx # Start Nginx file server
Manage Nginx
Nginx management tasks:
./infra.yml -t nginx # Reset Nginx component
./infra.yml -t nginx_index # Re-render homepage
./infra.yml -t nginx_config,nginx_reload # Re-render config and reload
Request HTTPS certificate:
./infra.yml -t nginx_certbot,nginx_reload -e certbot_sign=true
Manage Infrastructure Components
Management commands for various infrastructure components:
./infra.yml -t infra # Configure infrastructure
./infra.yml -t infra_env # Configure environment variables
./infra.yml -t infra_pkg # Install packages
./infra.yml -t infra_user # Set up OS user
./infra.yml -t infra_cert # Issue certificates
./infra.yml -t dns # Configure DNSMasq
./infra.yml -t nginx # Configure Nginx
./infra.yml -t victoria # Configure VictoriaMetrics/Logs/Traces
./infra.yml -t alertmanager # Configure AlertManager
./infra.yml -t blackbox # Configure Blackbox Exporter
./infra.yml -t grafana # Configure Grafana
./infra.yml -t infra_register # Register to VictoriaMetrics/Grafana
Common maintenance commands:
./infra.yml -t nginx_index # Re-render homepage
./infra.yml -t nginx_config,nginx_reload # Reconfigure and reload
./infra.yml -t vmetrics_config,vmetrics_launch # Regenerate VictoriaMetrics config and restart
./infra.yml -t vlogs_config,vlogs_launch # Update VictoriaLogs config
./infra.yml -t grafana_plugin # Download Grafana plugins
7 - CA and Certificates
Using self-signed CA or real HTTPS certificates
Pigsty uses a self-signed Certificate Authority (CA) by default for internal SSL/TLS encryption. This document covers the self-signed CA, using an external CA, backing up CA files, and issuing additional certificates.
Self-Signed CA
Pigsty automatically creates a self-signed CA during infrastructure initialization (infra.yml). The CA signs certificates for:
- PostgreSQL server/client SSL
- Patroni REST API
- etcd cluster communication
- MinIO cluster communication
- Nginx HTTPS (fallback)
- Infrastructure services
PKI Directory Structure
files/pki/
├── ca/
│ ├── ca.key # CA private key (keep secure!)
│ └── ca.crt # CA certificate
├── csr/ # Certificate signing requests
│ ├── misc/ # Miscellaneous certificates (cert.yml output)
│ ├── etcd/ # ETCD certificates
│ ├── pgsql/ # PostgreSQL certificates
│ ├── minio/ # MinIO certificates
│ ├── nginx/ # Nginx certificates
│ └── mongo/ # FerretDB certificates
└── infra/ # Infrastructure certificates
CA Variables
| Variable | Default | Description |
|---|---|---|
| ca_create | true | Create CA if not exists, or abort |
| ca_cn | pigsty-ca | CA certificate common name |
| cert_validity | 7300d | Default validity for issued certificates |
Certificate Validity

| Certificate | Validity | Source |
|---|---|---|
| CA Certificate | 100 years | Hardcoded (36500 days) |
| Server/Client | 20 years | cert_validity (7300d) |
| Nginx HTTPS | ~1 year | nginx_cert_validity (397d) |

> Note: Browser vendors limit trusted certificates to 398 days, so Nginx uses a shorter validity for browser compatibility.
Using External CA
To use your own enterprise CA instead of the auto-generated one:
1. Set ca_create: false in your configuration.
2. Place your CA files before running the playbook:
mkdir -p files/pki/ca
cp /path/to/your/ca.key files/pki/ca/ca.key
cp /path/to/your/ca.crt files/pki/ca/ca.crt
chmod 600 files/pki/ca/ca.key
chmod 644 files/pki/ca/ca.crt
3. Run ./infra.yml
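Before running the playbook, you may want to sanity-check that your key and certificate actually match. The sketch below generates a throwaway CA in a temp directory purely to demonstrate the check; against a real deployment, point the paths at files/pki/ca/ca.key and files/pki/ca/ca.crt instead:

```shell
# Demo: verify that a CA private key and certificate form a matching pair
# by comparing their public keys (throwaway CA generated for illustration).
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -subj "/CN=pigsty-ca" 2>/dev/null
key_pub=$(openssl pkey -in "$dir/ca.key" -pubout 2>/dev/null)
crt_pub=$(openssl x509 -in "$dir/ca.crt" -pubkey -noout)
[ "$key_pub" = "$crt_pub" ] && echo "CA key and certificate match"
```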
Backup CA Files
The CA private key is critical. Back it up securely:
# Backup with timestamp
tar -czvf pigsty-ca-$(date +%Y%m%d).tar.gz files/pki/ca/
Warning: If you lose the CA private key, all certificates signed by it become unverifiable. You’ll need to regenerate everything.
Issue Certificates
Use cert.yml to issue additional certificates signed by Pigsty CA.
Basic Usage
# Issue certificate for database user (client cert)
./cert.yml -e cn=dbuser_dba
# Issue certificate for monitor user
./cert.yml -e cn=dbuser_monitor
Certificates are generated in files/pki/misc/<cn>.{key,crt} by default.
Parameters
| Parameter | Default | Description |
|---|---|---|
| cn | pigsty | Common Name (required) |
| san | [DNS:localhost, IP:127.0.0.1] | Subject Alternative Names |
| org | pigsty | Organization name |
| unit | pigsty | Organizational unit name |
| expire | 7300d | Certificate validity (20 years) |
| key | files/pki/misc/<cn>.key | Private key output path |
| crt | files/pki/misc/<cn>.crt | Certificate output path |
Advanced Examples
# Issue certificate with custom SAN (DNS and IP)
./cert.yml -e cn=myservice -e san=DNS:myservice,IP:10.2.82.163
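To confirm an issued certificate actually chains back to the CA, you can use openssl verify. The sketch below builds a throwaway CA and leaf certificate in a temp directory for demonstration; with a real deployment you would verify files/pki/misc/<cn>.crt against files/pki/ca/ca.crt:

```shell
# Demo: sign a leaf certificate with a throwaway CA, then verify the chain.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -subj "/CN=pigsty-ca" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout "$dir/leaf.key" \
  -out "$dir/leaf.csr" -subj "/CN=myservice" 2>/dev/null
openssl x509 -req -in "$dir/leaf.csr" -CA "$dir/ca.crt" -CAkey "$dir/ca.key" \
  -CAcreateserial -days 1 -out "$dir/leaf.crt" 2>/dev/null
openssl verify -CAfile "$dir/ca.crt" "$dir/leaf.crt"   # expect: <path>/leaf.crt: OK
```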