
Administration

Infrastructure components and INFRA cluster administration SOP: create, destroy, scale out, scale in, certificates, repositories…

This section covers daily administration and operations for Pigsty deployments.


Create INFRA Module

Use the infra.yml playbook to install the INFRA module on the infra group:

./infra.yml     # Install INFRA module on infra group

Uninstall INFRA Module

Use the dedicated infra-rm.yml playbook to remove the INFRA module from the infra group:

./infra-rm.yml  # Remove INFRA module from infra group

Manage Local Repository

Pigsty includes a local yum/apt repo for software packages. Manage the repo configuration with the following variables and tasks:

Repo Variables

| Variable | Description |
|----------|-------------|
| repo_enabled | Enable local repo on node |
| repo_upstream | Upstream repos to include |
| repo_remove | Remove upstream repos if true |
| repo_url_pkg | Extra packages to download |
| repo_clean | Clean repo cache (makecache) |
| repo_pkg | Packages to include |

Repo Tasks

./infra.yml -t repo              # Create or update repo

Repo location: /www/pigsty, served by Nginx.

More: Configuration: INFRA - REPO

1 - Ansible

Using Ansible to run administration commands

Ansible is installed by default on all INFRA nodes and can be used to manage the entire deployment.

Pigsty implements automation based on Ansible, following the Infrastructure-as-Code philosophy.

Ansible knowledge is useful for managing databases and infrastructure, but not required. You only need to know how to execute Playbooks - YAML files that define a series of automated tasks.


Installation

Pigsty automatically installs ansible and its dependencies during the bootstrap process. For manual installation, use the following commands:

# Debian / Ubuntu
sudo apt install -y ansible python3-jmespath

# EL 10
sudo dnf install -y ansible python-jmespath

# EL 8/9
sudo dnf install -y ansible python3.12-jmespath

# EL 7
sudo yum install -y ansible python-jmespath

macOS

macOS users can install using Homebrew:

brew install ansible
pip3 install jmespath

Basic Usage

To run a playbook, simply execute ./path/to/playbook.yml. Here are the most commonly used Ansible command-line parameters:

| Purpose | Parameter | Description |
|---------|-----------|-------------|
| Where | -l / --limit <pattern> | Limit target hosts/groups/patterns |
| What | -t / --tags <tags> | Only run tasks with specified tags |
| How | -e / --extra-vars <vars> | Pass extra command-line variables |
| Config | -i / --inventory <path> | Specify inventory file path |

Limiting Hosts

Use -l|--limit <pattern> to limit execution to specific groups, hosts, or patterns:

./node.yml                      # Execute on all nodes
./pgsql.yml -l pg-test          # Only execute on pg-test cluster
./pgsql.yml -l pg-*             # Execute on all clusters starting with pg-
./pgsql.yml -l 10.10.10.10      # Only execute on specific IP host

Running playbooks without host limits can be very dangerous! By default, most playbooks execute on all hosts. Use with caution!


Limiting Tasks

Use -t|--tags <tags> to only execute task subsets with specified tags:

./infra.yml -t repo           # Only execute tasks to create local repo
./infra.yml -t repo_upstream  # Only execute tasks to add upstream repos
./node.yml -t node_pkg        # Only execute tasks to install node packages
./pgsql.yml -t pg_hba         # Only execute tasks to render pg_hba.conf

Passing Variables

Use -e|--extra-vars <key=value> to override variables at runtime:

./pgsql.yml -e pg_clean=true         # Force clean existing PG instances
./pgsql-rm.yml -e pg_rm_pkg=false    # Keep packages when uninstalling
./node.yml -e '{"node_tune":"tiny"}' # Pass variables in JSON format
./pgsql.yml -e @/path/to/config.yml  # Load variables from YAML file

Specifying Inventory

By default, Ansible uses pigsty.yml in the current directory as the inventory. Use -i|--inventory <path> to specify a different config file:

./pgsql.yml -i files/pigsty/full.yml -l pg-test

[!NOTE]

To permanently change the default config file path, modify the inventory parameter in ansible.cfg.
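For example, to make files/pigsty/full.yml the default inventory (the path here is illustrative), the relevant section of ansible.cfg would look like:

```ini
[defaults]
inventory = files/pigsty/full.yml
```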

2 - Playbooks

Built-in Ansible playbooks in Pigsty

Pigsty uses idempotent Ansible playbooks for management and control. Running playbooks requires ansible-playbook to be in the system PATH; users must first install Ansible before executing playbooks.

Available Playbooks

| Module | Playbook | Purpose |
|--------|----------|---------|
| INFRA | install.yml | One-click Pigsty installation |
| INFRA | infra.yml | Initialize Pigsty infrastructure on infra nodes |
| INFRA | infra-rm.yml | Remove infrastructure components from infra nodes |
| INFRA | cache.yml | Create offline installation packages from target nodes |
| INFRA | cert.yml | Issue certificates using Pigsty self-signed CA |
| NODE | node.yml | Initialize nodes, configure to desired state |
| NODE | node-rm.yml | Remove nodes from Pigsty |
| PGSQL | pgsql.yml | Initialize HA PostgreSQL cluster, or add new replica |
| PGSQL | pgsql-rm.yml | Remove PostgreSQL cluster, or remove replica |
| PGSQL | pgsql-db.yml | Add new business database to existing cluster |
| PGSQL | pgsql-user.yml | Add new business user to existing cluster |
| PGSQL | pgsql-pitr.yml | Perform point-in-time recovery (PITR) on cluster |
| PGSQL | pgsql-monitor.yml | Monitor remote PostgreSQL using local exporters |
| PGSQL | pgsql-migration.yml | Generate migration manual and scripts for PostgreSQL |
| PGSQL | slim.yml | Install Pigsty with minimal components |
| REDIS | redis.yml | Initialize Redis cluster/node/instance |
| REDIS | redis-rm.yml | Remove Redis cluster/node/instance |
| ETCD | etcd.yml | Initialize ETCD cluster, or add new member |
| ETCD | etcd-rm.yml | Remove ETCD cluster, or remove existing member |
| MINIO | minio.yml | Initialize MinIO cluster |
| MINIO | minio-rm.yml | Remove MinIO cluster |
| DOCKER | docker.yml | Install Docker on nodes |
| DOCKER | app.yml | Install applications using Docker Compose |
| FERRET | mongo.yml | Install Mongo/FerretDB on nodes |

Deployment Strategy

The install.yml playbook orchestrates specialized playbooks in the following group order for complete deployment:

  • infra: infra.yml (-l infra)
  • nodes: node.yml
  • etcd: etcd.yml (-l etcd)
  • minio: minio.yml (-l minio)
  • pgsql: pgsql.yml

Circular Dependency Note: There is a weak circular dependency between NODE and INFRA: registering a NODE to INFRA requires INFRA to already exist, while the INFRA module depends on NODE to function. The solution is to initialize infra nodes first, then add the other nodes. To complete the entire deployment at once, use install.yml.


Safety Notes

Most playbooks are idempotent. However, some deployment playbooks may wipe existing databases and create new ones when protection options are not enabled. Use extra caution with the pgsql, minio, and infra playbooks. Read the documentation carefully and proceed with care.

Best Practices

  1. Read playbook documentation carefully before execution
  2. Press Ctrl-C immediately to stop when anomalies occur
  3. Test in non-production environments first
  4. Use -l parameter to limit target hosts, avoiding unintended hosts
  5. Use -t parameter to specify tags, executing only specific tasks

Dry-Run Mode

Use --check --diff options to preview changes without actually executing:

# Preview changes without execution
./pgsql.yml -l pg-test --check --diff

# Check specific tasks with tags
./pgsql.yml -l pg-test -t pg_config --check --diff

3 - Nginx Management

Nginx management, web portal configuration, web server, upstream services

Pigsty installs Nginx on INFRA nodes as the entry point for all web services, listening on standard ports 80/443.

In Pigsty, you can configure Nginx to provide various services through the inventory:

  • Expose web interfaces for monitoring components like Grafana, VictoriaMetrics (VMUI), Alertmanager, and VictoriaLogs
  • Serve static files (software repos, documentation sites, websites, etc.)
  • Proxy custom application services (internal apps, database management UIs, Docker application interfaces, etc.)
  • Automatically issue self-signed HTTPS certificates, or use Certbot to obtain free Let’s Encrypt certificates
  • Expose services through a single port using different subdomains for unified access

Basic Configuration

Customize Nginx behavior via infra_portal parameter:

infra_portal:
  home: { domain: i.pigsty }
  grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
  prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:8428" }
  alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9059" }
  blackbox     : { endpoint: "${admin_ip}:9115" }
  vmalert      : { endpoint: "${admin_ip}:8880" }

Server Parameters

| Parameter | Description |
|-----------|-------------|

4 - Software Repository

Managing local APT/YUM software repositories

Pigsty supports creating and managing local APT/YUM software repositories for offline deployment or accelerated package installation.


Quick Start

To add packages to the local repository:

  1. Add packages to repo_packages (default packages)
  2. Add packages to repo_extra_packages (extra packages)
  3. Run the build command:
./infra.yml -t repo_build   # Build local repo from upstream
./node.yml -t node_repo     # Refresh node repository cache
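As a sketch, the inventory change for steps 1-2 might look like this, using aliases listed under Package Aliases (the exact entries are illustrative; use the package names for your platform):

```yaml
repo_packages: [ node-bootstrap, infra-package, pgsql ]  # default packages (illustrative)
repo_extra_packages: [ pgsql-utility ]                   # extra packages (illustrative)
```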

Package Aliases

Pigsty predefines common package combinations for batch installation:

EL Systems (RHEL/CentOS/Rocky)

| Alias | Description |
|-------|-------------|
| node-bootstrap | Ansible, Python3 tools, SSH related |
| infra-package | Nginx, etcd, HAProxy, monitoring exporters, MinIO |
| pgsql-utility | Patroni, pgBouncer, pgBackRest, PG tools |
| pgsql | Full PostgreSQL (server, client, extensions) |
| pgsql-mini | Minimal PostgreSQL installation |

Debian/Ubuntu Systems

| Alias | Description |
|-------|-------------|
| node-bootstrap | Ansible, development tools |
| infra-package | Infrastructure components (Debian naming) |
| pgsql-client | PostgreSQL client |
| pgsql-server | PostgreSQL server and related packages |

Playbook Tasks

Main Tasks

| Task | Description |
|------|-------------|
| repo | Create local repo from internet or offline packages |
| repo_build | Build from upstream if not exists |
| repo_upstream | Add upstream repository files |
| repo_pkg | Download packages and dependencies |
| repo_create | Create/update YUM or APT repository |
| repo_nginx | Start Nginx file server |

Complete Task List

./infra.yml -t repo_dir          # Create local repository directory
./infra.yml -t repo_check        # Check if local repo exists
./infra.yml -t repo_prepare      # Use existing repo directly
./infra.yml -t repo_build        # Build repo from upstream
./infra.yml -t repo_upstream     # Add upstream repositories
./infra.yml -t repo_remove       # Delete existing repo files
./infra.yml -t repo_add          # Add repo to system directory
./infra.yml -t repo_url_pkg      # Download packages from internet
./infra.yml -t repo_cache        # Create metadata cache
./infra.yml -t repo_boot_pkg     # Install bootstrap packages
./infra.yml -t repo_pkg          # Download packages and dependencies
./infra.yml -t repo_create       # Create local repository
./infra.yml -t repo_use          # Add new repo to system
./infra.yml -t repo_nginx        # Start Nginx file server

Common Operations

Add New Packages

# 1. Configure upstream repositories
./infra.yml -t repo_upstream

# 2. Download packages and dependencies
./infra.yml -t repo_pkg

# 3. Build local repository metadata
./infra.yml -t repo_create

Refresh Node Repositories

./node.yml -t node_repo    # Refresh repository cache on all nodes

Complete Repository Rebuild

./infra.yml -t repo        # Create repo from internet or offline packages

5 - Domain Management

Configure local or public domain names to access Pigsty services.

Use domain names instead of IP addresses to access Pigsty’s various web services.

Quick Start

Add the following static resolution records to /etc/hosts:

10.10.10.10 i.pigsty g.pigsty a.pigsty

Replace the IP address with your actual Pigsty node’s IP.


Why Use Domain Names

  • Easier to remember than IP addresses
  • Flexible pointing to different IPs
  • Unified service management through Nginx
  • Support for HTTPS encryption
  • Prevent ISP hijacking in some regions
  • Allow access to internally bound services via proxy

DNS Mechanism

  • DNS Protocol: Resolves domain names to IP addresses. Multiple domains can point to the same IP.

  • HTTP Protocol: Uses the Host header to route requests to different sites on the same port (80/443).


Default Domains

6 - Module Management

INFRA module management SOP: define, create, destroy, scale out, scale in

This document covers daily management operations for the INFRA module, including installation, uninstallation, scaling, and component maintenance.


Install INFRA Module

Use the infra.yml playbook to install the INFRA module on the infra group:

./infra.yml     # Install INFRA module on infra group

Uninstall INFRA Module

Use the infra-rm.yml playbook to uninstall the INFRA module from the infra group:

./infra-rm.yml  # Uninstall INFRA module from infra group

Scale Out INFRA Module

Assign infra_seq to new nodes and add them to the infra group in the inventory:

all:
  children:
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }  # Existing node
        10.10.10.11: { infra_seq: 2 }  # New node

Use the -l limit option to execute the playbook on the new node only:

./infra.yml -l 10.10.10.11    # Install INFRA module on new node

Manage Local Repository

Local repository management tasks:

./infra.yml -t repo              # Create repo from internet or offline packages
./infra.yml -t repo_upstream     # Add upstream repositories
./infra.yml -t repo_pkg          # Download packages and dependencies
./infra.yml -t repo_create       # Create local yum/apt repository

Complete subtask list:

./infra.yml -t repo_dir          # Create local repository directory
./infra.yml -t repo_check        # Check if local repo exists
./infra.yml -t repo_prepare      # Use existing repo directly
./infra.yml -t repo_build        # Build repo from upstream
./infra.yml -t repo_upstream     # Add upstream repositories
./infra.yml -t repo_remove       # Delete existing repo files
./infra.yml -t repo_add          # Add repo to system directory
./infra.yml -t repo_url_pkg      # Download packages from internet
./infra.yml -t repo_cache        # Create metadata cache
./infra.yml -t repo_boot_pkg     # Install bootstrap packages
./infra.yml -t repo_pkg          # Download packages and dependencies
./infra.yml -t repo_create       # Create local repository
./infra.yml -t repo_use          # Add new repo to system
./infra.yml -t repo_nginx        # Start Nginx file server

Manage Nginx

Nginx management tasks:

./infra.yml -t nginx                       # Reset Nginx component
./infra.yml -t nginx_index                 # Re-render homepage
./infra.yml -t nginx_config,nginx_reload   # Re-render config and reload

Request HTTPS certificate:

./infra.yml -t nginx_certbot,nginx_reload -e certbot_sign=true

Manage Infrastructure Components

Management commands for various infrastructure components:

./infra.yml -t infra           # Configure infrastructure
./infra.yml -t infra_env       # Configure environment variables
./infra.yml -t infra_pkg       # Install packages
./infra.yml -t infra_user      # Set up OS user
./infra.yml -t infra_cert      # Issue certificates
./infra.yml -t dns             # Configure DNSMasq
./infra.yml -t nginx           # Configure Nginx
./infra.yml -t victoria        # Configure VictoriaMetrics/Logs/Traces
./infra.yml -t alertmanager    # Configure AlertManager
./infra.yml -t blackbox        # Configure Blackbox Exporter
./infra.yml -t grafana         # Configure Grafana
./infra.yml -t infra_register  # Register to VictoriaMetrics/Grafana

Common maintenance commands:

./infra.yml -t nginx_index                        # Re-render homepage
./infra.yml -t nginx_config,nginx_reload          # Reconfigure and reload
./infra.yml -t vmetrics_config,vmetrics_launch    # Regenerate VictoriaMetrics config and restart
./infra.yml -t vlogs_config,vlogs_launch          # Update VictoriaLogs config
./infra.yml -t grafana_plugin                     # Download Grafana plugins

7 - CA and Certificates

Using self-signed CA or real HTTPS certificates

Pigsty uses a self-signed Certificate Authority (CA) by default for internal SSL/TLS encryption. This document covers the self-signed CA, using an external CA, backing up CA files, and issuing certificates.


Self-Signed CA

Pigsty automatically creates a self-signed CA during infrastructure initialization (infra.yml). The CA signs certificates for:

  • PostgreSQL server/client SSL
  • Patroni REST API
  • etcd cluster communication
  • MinIO cluster communication
  • Nginx HTTPS (fallback)
  • Infrastructure services

PKI Directory Structure

files/pki/
├── ca/
│   ├── ca.key                # CA private key (keep secure!)
│   └── ca.crt                # CA certificate
├── csr/                      # Certificate signing requests
│   ├── misc/                     # Miscellaneous certificates (cert.yml output)
│   ├── etcd/                     # ETCD certificates
│   ├── pgsql/                    # PostgreSQL certificates
│   ├── minio/                    # MinIO certificates
│   ├── nginx/                    # Nginx certificates
│   └── mongo/                    # FerretDB certificates
└── infra/                    # Infrastructure certificates

CA Variables

| Variable | Default | Description |
|----------|---------|-------------|
| ca_create | true | Create CA if not exists, or abort |
| ca_cn | pigsty-ca | CA certificate common name |
| cert_validity | 7300d | Default validity for issued certificates |

Certificate validity periods:

| Certificate | Validity | Source |
|-------------|----------|--------|
| CA Certificate | 100 years | Hardcoded (36500 days) |
| Server/Client | 20 years | cert_validity (7300d) |
| Nginx HTTPS | ~1 year | nginx_cert_validity (397d) |

> Note: Browser vendors limit trust to certificates valid for at most 398 days, so Nginx uses a shorter validity for browser compatibility.

Using External CA

To use your own enterprise CA instead of the auto-generated one:

1. Set ca_create: false in your configuration.

2. Place your CA files before running the playbook:

mkdir -p files/pki/ca
cp /path/to/your/ca.key files/pki/ca/ca.key
cp /path/to/your/ca.crt files/pki/ca/ca.crt
chmod 600 files/pki/ca/ca.key
chmod 644 files/pki/ca/ca.crt

3. Run ./infra.yml
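
Before step 3, it may be worth sanity-checking the CA files with openssl. The following self-contained sketch generates a throwaway CA in a scratch directory and runs the same checks you would run against your real files/pki/ca/ca.{key,crt}:

```shell
cd "$(mktemp -d)" && mkdir -p files/pki/ca      # scratch directory for the demo
# stand-in for your enterprise CA (replace with your real key/cert files)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=demo-ca" \
  -keyout files/pki/ca/ca.key -out files/pki/ca/ca.crt
openssl x509 -in files/pki/ca/ca.crt -noout -subject -enddate   # inspect subject and expiry
openssl rsa  -in files/pki/ca/ca.key -noout -check              # validate the private key
```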


Backup CA Files

The CA private key is critical. Back it up securely:

# Backup with timestamp
tar -czvf pigsty-ca-$(date +%Y%m%d).tar.gz files/pki/ca/
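
To restore such a backup, extract it in the Pigsty home directory; the archive preserves the files/pki/ca/ path. A self-contained round-trip sketch, using stand-in files in a scratch directory:

```shell
cd "$(mktemp -d)" && mkdir -p files/pki/ca            # scratch demo directory
touch files/pki/ca/ca.key files/pki/ca/ca.crt         # stand-ins for real CA files
tar -czf "pigsty-ca-$(date +%Y%m%d).tar.gz" files/pki/ca/   # backup
rm -rf files                                          # simulate losing the CA dir
tar -xzf "pigsty-ca-$(date +%Y%m%d).tar.gz"           # restore files/pki/ca/
ls files/pki/ca/                                      # ca.crt and ca.key are back
```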

Warning: If you lose the CA private key, all certificates signed by it become unverifiable, and you’ll need to regenerate everything.


Issue Certificates

Use cert.yml to issue additional certificates signed by Pigsty CA.

Basic Usage

# Issue certificate for database user (client cert)
./cert.yml -e cn=dbuser_dba

# Issue certificate for monitor user
./cert.yml -e cn=dbuser_monitor

Certificates are generated in files/pki/misc/<cn>.{key,crt} by default.

Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| cn | pigsty | Common Name (required) |
| san | [DNS:localhost, IP:127.0.0.1] | Subject Alternative Names |
| org | pigsty | Organization name |
| unit | pigsty | Organizational unit name |
| expire | 7300d | Certificate validity (20 years) |
| key | files/pki/misc/<cn>.key | Private key output path |
| crt | files/pki/misc/<cn>.crt | Certificate output path |

Advanced Examples

# Issue certificate with custom SAN (DNS and IP)
./cert.yml -e cn=myservice -e san=DNS:myservice,IP:10.2.82.163
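
To confirm the SAN landed in the issued certificate, inspect it with openssl. This self-contained sketch creates a comparable certificate in a scratch directory; in practice you would point at files/pki/misc/myservice.crt:

```shell
cd "$(mktemp -d)"
# stand-in certificate carrying the same SAN as the example above
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=myservice" \
  -addext "subjectAltName=DNS:myservice,IP:10.2.82.163" \
  -keyout myservice.key -out myservice.crt
openssl x509 -in myservice.crt -noout -ext subjectAltName   # show the SAN entries
```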
