Validator Operations
Run a Talero validator node
This guide shows a production-oriented, public-safe Talero validator workflow using the repository's
containerized deployment path. It covers host preparation, validator-specific configuration, signer setup,
startup, verification, and routine operations without exposing secret material or unsafe admin surfaces.
Validator · Mainnet-oriented · Exec signer · Private RPC · No secrets published
Recommended mainnet path: use an external signer backend. The repository's primary path is exec plus validator_exec_signer_openssl, backed by an OpenSSL provider or PKCS#11.
Do not do this: do not publish validator RPC to the internet. Keep RPC local to the host or behind a tightly controlled private edge.
This page is not a key-management manual. Talero does not generate a production validator private key for you.
The final signing key should stay inside your HSM, PKCS#11 provider, or equivalent controlled signer infrastructure.
Scope and operating model
A Talero validator is a node running with TALERO_ROLE=validator and a valid PoS/BFT signer backend.
On mainnet, the code requires an explicit validator signer backend. The intended steady-state model is an external signer path,
not a simple locally embedded validator secret.
| Mode | When to use it | Public-safe guidance |
| --- | --- | --- |
| Exec signer | Default production-oriented path | Recommended. Keeps the node-side signer contract stable while delegating signing to a controlled backend. |
| Remote HTTP signer | Internal signer service with strong transport controls | Allowed, but non-loopback mainnet endpoints must use HTTPS and mTLS. See appendix. |
| Local signer | Emergency rehearsal only | Do not use as a normal mainnet production model. The code treats this as an exceptional path. |
Repository files used by this guide: Dockerfile, docker-compose.validator.exec-pkcs11.yml, .env.validator.exec-pkcs11.example, and the validator signer runbooks under docs/.
Requirements
Before you touch the validator configuration, make sure the host and deployment model are ready for a persistent service.
- A 64-bit Linux host with persistent storage for /data.
- A public network path for Talero P2P traffic, normally TCP and UDP on port 30303.
- Docker Engine plus the Docker Compose plugin.
- The Talero node repository or release bundle on the host.
- The canonical mainnet chain identity material supplied by your release process: TALERO_EXPECTED_CHAIN_ID, TALERO_EXPECTED_GENESIS_HASH, TALERO_GENESIS_TIMESTAMP_SECS, and the current bootnode list.
- An external validator signing backend, typically PKCS#11 or another OpenSSL-provider-compatible key reference.
Do not take the testnet template docs/env.validator.example and reuse it for mainnet as-is.
It is a rehearsal-oriented baseline, not the mainnet deployment profile described here.
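Before touching the validator configuration, a quick preflight can confirm the host tooling exists. The helper below is a hypothetical sketch, not part of the Talero repository; the tool names in the usage comment are simply the ones this guide relies on later.

```shell
# Hypothetical preflight helper (not shipped with Talero): report which of the
# given commands are missing from PATH before you start host preparation.
check_tools() {
  missing=""
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
  done
  if [ -z "$missing" ]; then
    echo "OK"
  else
    echo "MISSING:$missing"
  fi
}

# On a validator host you would check the tools this guide uses:
#   check_tools docker curl jq tar
```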
Install Docker on Ubuntu
The commands below follow Docker's official Ubuntu installation flow and are appropriate when you are preparing a fresh Ubuntu host.
If your environment already has a managed container runtime, adapt this step to your platform standard instead.
For distro-specific updates, check the
official Docker Engine install guide for Ubuntu.
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Optional convenience if you want to run Docker without prefixing every command with sudo:
sudo usermod -aG docker "$USER"
Log out and back in after changing group membership. If your organization already manages Docker with a hardened baseline,
follow that standard instead of reconfiguring the host by hand.
Prepare the deployment files
Put the Talero node repository or release bundle on the validator host. The commands below assume the deployment lives at
/opt/talero/node. If you use another path, adjust the commands consistently.
sudo mkdir -p /opt/talero
sudo chown "$USER":"$USER" /opt/talero
cd /opt/talero
# Place the Talero node release contents here
# Required files:
# Dockerfile
# docker-compose.validator.exec-pkcs11.yml
# .env.validator.exec-pkcs11.example
# docs/
# tools/
Build the runtime image
cd /opt/talero/node
docker compose -f docker-compose.validator.exec-pkcs11.yml build
The runtime image includes talero-node and the shipped validator signer wrappers,
including validator_exec_signer_openssl.
Mainnet chain identity
A fresh mainnet validator must start with the correct chain identity. The node checks the expected genesis hash and related values
before it inserts genesis into a new data directory.
What you need before first start
- TALERO_EXPECTED_CHAIN_ID
- TALERO_EXPECTED_GENESIS_HASH
- TALERO_GENESIS_TIMESTAMP_SECS
- The current canonical bootnode list for mainnet
These values are not secrets, but they are chain-critical. They should come from the approved mainnet release or operator coordination process,
not from guesswork and not from a stale testnet example.
The code also rejects many consensus-critical environment overrides on mainnet. Do not add ad hoc values such as
TALERO_EVM_BLOCK_GAS_LIMIT, PQ_REQUIRED_FROM_HEIGHT, or similar overrides unless you are intentionally running a controlled emergency rehearsal.
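A pre-start guard can make that rule mechanical. The sketch below is a hypothetical helper, not node behavior; the variable names it checks are just the two examples named above, and you would extend the list to match your own change-control policy.

```shell
# Hypothetical pre-start guard (not shipped with Talero): refuse to proceed if
# emergency-only consensus overrides named in this guide are set in the environment.
check_no_overrides() {
  bad=""
  for var in TALERO_EVM_BLOCK_GAS_LIMIT PQ_REQUIRED_FROM_HEIGHT; do
    eval "val=\${$var:-}"
    [ -n "$val" ] && bad="$bad $var"
  done
  if [ -z "$bad" ]; then
    echo "clean"
  else
    echo "overrides set:$bad"
  fi
}
```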
Publish RPC privately on the host
The shipped validator compose file publishes the node ports. For a validator, the safe pattern is:
bind RPC on all container interfaces so Docker can reach it, but publish it only on
host loopback so the internet cannot reach it directly.
Create a local override compose file
cd /opt/talero/node
cat > docker-compose.validator.exec-pkcs11.private-rpc.yml <<'EOF'
services:
  talero-validator:
    ports:
      - "127.0.0.1:8547:8547"
      - "30303:30303"
      - "30303:30303/udp"
EOF
With this override in place, the host can reach the validator on 127.0.0.1:8547, while external clients cannot connect to the validator RPC directly.
Container-side listener for this model: TALERO_RPC_LISTEN=0.0.0.0:8547.
Host-side exposure remains private because the published port is loopback-only.
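If you audit compose files elsewhere in your fleet, the private-versus-public distinction can be expressed as a small classifier. This is a hypothetical helper for review tooling, keyed on the host-address prefix of a compose ports entry:

```shell
# Hypothetical review helper: classify a compose "ports" entry as private or
# public based on the host address it publishes on.
is_private_publish() {
  case "$1" in
    127.0.0.1:*) echo "private" ;;
    "[::1]":*)   echo "private" ;;
    *)           echo "public" ;;
  esac
}

# Example: the override above publishes RPC privately and P2P publicly:
#   is_private_publish "127.0.0.1:8547:8547"   -> private
#   is_private_publish "30303:30303"           -> public
```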
Create the validator environment file
Start from the mainnet-oriented PKCS#11 example and then add the validator-specific network and identity values your release process approved.
cd /opt/talero/node
cp .env.validator.exec-pkcs11.example .env.validator.exec-pkcs11.prod
Populate the env file
At minimum, the production file should have this shape. Replace placeholders with your real, approved values.
TALERO_NETWORK=mainnet
TALERO_ROLE=validator
TALERO_ROLE_STRICT=1
# Chain identity supplied by the approved mainnet release process
TALERO_EXPECTED_CHAIN_ID=REPLACE_WITH_CHAIN_ID
TALERO_EXPECTED_GENESIS_HASH=0xREPLACE_WITH_CANONICAL_GENESIS_HASH
TALERO_GENESIS_TIMESTAMP_SECS=REPLACE_WITH_CANONICAL_GENESIS_TIMESTAMP
# Keep validator RPC private on the host
TALERO_RPC_LISTEN=0.0.0.0:8547
TALERO_RPC_PUBLIC_MODE=0
TALERO_RPC_READONLY=0
# P2P
TALERO_P2P_LISTEN=0.0.0.0:30303
TALERO_P2P_SELF_ADDR=YOUR_PUBLIC_IP:30303
TALERO_BOOTNODES=BOOTNODE_A:30303,BOOTNODE_B:30303
# Validator signer backend
TALERO_VALIDATOR_SIGNER_BACKEND=exec
TALERO_VALIDATOR_EXEC_SIGNER_CMD=/usr/local/bin/validator_exec_signer_openssl
TALERO_VALIDATOR_EXEC_SIGNER_TIMEOUT_MS=10000
TALERO_VALIDATOR_EXEC_SIGNER_PUBKEY=0xREPLACE_WITH_VALIDATOR_PUBKEY
# OpenSSL provider or PKCS#11-backed signing path
TALERO_VALIDATOR_OPENSSL_BIN=openssl
TALERO_VALIDATOR_OPENSSL_SIGNER_PKEY=pkcs11:object=validator-key;type=private
TALERO_VALIDATOR_OPENSSL_SIGN_ARGS_JSON=["-provider","default","-provider","pkcs11","-provider-path","/usr/lib/x86_64-linux-gnu/ossl-modules"]
TALERO_VALIDATOR_OPENSSL_PUBKEY_ARGS_JSON=["-provider","default","-provider","pkcs11","-provider-path","/usr/lib/x86_64-linux-gnu/ossl-modules"]
TALERO_VALIDATOR_OPENSSL_TIMEOUT_MS=10000
# Hardening and logging
TALERO_PRIVACY_ENABLED=1
TALERO_SAFE_MODE_ENABLED=1
TALERO_SAFE_MODE_MANUAL_OVERRIDE=auto
TALERO_LOG=info
TALERO_LOG_FORMAT=json
Why this file is shaped this way
- TALERO_ROLE=validator is explicit because mainnet refuses an implicit role.
- TALERO_RPC_LISTEN=0.0.0.0:8547 is paired with the loopback-only host publish override from the previous step.
- TALERO_BOOTNODES is required for a mainnet validator unless you deliberately allow an isolated startup.
- TALERO_VALIDATOR_SIGNER_BACKEND=exec selects the production-oriented signer bridge.
Do not commit .env.validator.exec-pkcs11.prod to a public repository. It contains non-secret operational values,
but it also describes your infrastructure shape and signer wiring.
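A small lint pass over the finished file can catch omissions before the node does. The helper below is a hypothetical sketch; the key list comes from this page's minimum shape, not from the node's own validation code, so adjust it to your release checklist.

```shell
# Hypothetical sketch: confirm an env file defines the keys this guide treats
# as mandatory for a mainnet validator. The key list mirrors this page, not the
# node's own startup validation.
check_env_file() {
  file="$1"; missing=""
  for key in TALERO_NETWORK TALERO_ROLE TALERO_EXPECTED_CHAIN_ID \
             TALERO_EXPECTED_GENESIS_HASH TALERO_GENESIS_TIMESTAMP_SECS \
             TALERO_BOOTNODES TALERO_VALIDATOR_SIGNER_BACKEND; do
    grep -q "^${key}=" "$file" || missing="$missing $key"
  done
  if [ -z "$missing" ]; then
    echo "env OK"
  else
    echo "missing:$missing"
  fi
}

# Usage: check_env_file .env.validator.exec-pkcs11.prod
```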
Configure the exec signer path
The repository's primary validator flow is exec plus validator_exec_signer_openssl.
The node calls the wrapper, the wrapper delegates to OpenSSL, and OpenSSL talks to the real key backend.
Step 1: provision the validator key outside Talero
Use your HSM, PKCS#11 provider, or equivalent signer system to create the real validator key.
Talero should see a key reference and be able to derive the public key and sign through that backend,
but the private key itself should stay outside the node process.
Step 2: derive the pinned public key
Once the provider reference is correct in .env.validator.exec-pkcs11.prod, derive the validator public key:
cd /opt/talero/node
TALERO_VALIDATOR_ENV_FILE=.env.validator.exec-pkcs11.prod \
docker compose \
-f docker-compose.validator.exec-pkcs11.yml \
-f docker-compose.validator.exec-pkcs11.private-rpc.yml \
run --rm talero-validator \
validator_exec_signer_openssl --print-pubkey
Copy the returned 0x... value into TALERO_VALIDATOR_EXEC_SIGNER_PUBKEY.
This pins the node to the expected validator identity.
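To make the pinning step auditable, you can compare the wrapper's printed value against the env file entry before starting. This is a hypothetical helper, not part of the repository; it only normalizes hex case and compares the two strings.

```shell
# Hypothetical check: compare the pubkey printed by the signer wrapper with the
# pinned TALERO_VALIDATOR_EXEC_SIGNER_PUBKEY value, ignoring hex case.
pubkeys_match() {
  a=$(printf '%s' "$1" | tr 'A-F' 'a-f')
  b=$(printf '%s' "$2" | tr 'A-F' 'a-f')
  if [ -n "$a" ] && [ "$a" = "$b" ]; then
    echo "match"
  else
    echo "MISMATCH: unexpected validator identity"
  fi
}
```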
Step 3: verify the final env before start
- The signer backend is exec.
- TALERO_VALIDATOR_EXEC_SIGNER_CMD points to /usr/local/bin/validator_exec_signer_openssl.
- TALERO_VALIDATOR_EXEC_SIGNER_PUBKEY matches the key you just derived.
- The provider reference and provider args point to the real backend, not a rehearsal backend.
- No emergency-only or insecure toggles are left enabled.
If you are still qualifying the path, the repository also contains rehearsal and gate scripts such as
./tools/e2e_validator_exec_signer_pkcs11_gate.sh. Those are useful for qualification,
but the core production wiring remains the same.
Start the validator
Once the environment file and the private-RPC compose override are ready, start the validator as a background service:
cd /opt/talero/node
TALERO_VALIDATOR_ENV_FILE=.env.validator.exec-pkcs11.prod \
docker compose \
-f docker-compose.validator.exec-pkcs11.yml \
-f docker-compose.validator.exec-pkcs11.private-rpc.yml \
up -d
Watch the initial startup
cd /opt/talero/node
TALERO_VALIDATOR_ENV_FILE=.env.validator.exec-pkcs11.prod \
docker compose \
-f docker-compose.validator.exec-pkcs11.yml \
-f docker-compose.validator.exec-pkcs11.private-rpc.yml \
logs -f talero-validator
On first start with a fresh data directory, the validator will initialize the database, check chain identity, and then begin joining peers.
The signer startup guard will fail early if the signer backend or mainnet constraints are wrong.
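Rather than watching logs by hand, an automation script can poll until the node answers. The retry loop below is a hypothetical sketch; the usage comment assumes the loopback-only RPC publish configured earlier on this page.

```shell
# Hypothetical startup wait loop: retry a probe command until it succeeds or
# the attempt budget runs out, sleeping one second between attempts.
wait_until_ok() {
  attempts="$1"; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timeout"
  return 1
}

# On the validator host, pointing at the loopback-only RPC publish:
#   wait_until_ok 30 curl -fsS http://127.0.0.1:8547/health
```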
Verify validator health
Do not treat the validator as healthy just because the container is running. Verify the node's self-reported role,
chain state, P2P reachability, and signer status.
Basic health check
curl -sS http://127.0.0.1:8547/health | jq '{
ok,
network,
role,
head,
finalized,
sync,
validatorSigner,
p2p
}'
What to look for
- role should be "validator".
- validatorSigner.configured should be true.
- validatorSigner.startupGuard.ok should be true.
- validatorSigner.backend should match the configured backend, typically "exec".
- p2p.peersTotal should climb above zero after bootstrapping.
- sync.hasSeenPeer should become true once the validator reaches the network.
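For monitoring hosts without jq, the first two checks above can be approximated with plain grep. This is a hypothetical, deliberately minimal sketch; the field names come from this page's description of the health payload, and a real monitor should parse JSON properly.

```shell
# Hypothetical no-jq health check: assert the two most important fields from
# the checklist above against a saved /health payload.
health_ok() {
  json="$1"
  printf '%s' "$json" | grep -q '"role" *: *"validator"' \
    || { echo "bad role"; return 1; }
  printf '%s' "$json" | grep -q '"configured" *: *true' \
    || { echo "signer not configured"; return 1; }
  echo "health checks passed"
}

# Usage: health_ok "$(curl -sS http://127.0.0.1:8547/health)"
```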
Inspect network information
curl -sS -X POST http://127.0.0.1:8547/rpc \
-H 'content-type: application/json' \
-d '{"jsonrpc":"2.0","id":1,"method":"talero_getNetworkInfo","params":{}}' | jq '.result'
Inspect safe mode status
curl -sS -X POST http://127.0.0.1:8547/rpc \
-H 'content-type: application/json' \
-d '{"jsonrpc":"2.0","id":2,"method":"talero_safeModeStatus","params":{}}' | jq '.result'
The health payload includes a dedicated validatorSigner section, which is the fastest way to confirm that the signer startup guard accepted your configuration.
Exposure and firewall rules
Validators should be easy for peers to reach on P2P, but hard for the public internet to reach on RPC and observability endpoints.
| Surface | Recommended exposure | Reason |
| --- | --- | --- |
| P2P TCP 30303 | Public | Required for normal peer connectivity. |
| P2P UDP 30303 | Public | Required when your deployment uses the UDP path. |
| RPC 8547 | Host loopback only | Validator RPC should stay private. |
| /metrics | Private only | Operational telemetry should not be internet-facing by default. |
- Open the P2P port in your host firewall and cloud security group.
- Do not open 8547/tcp to the public internet.
- If you must proxy validator RPC to another trusted system, keep it behind explicit ACLs or a private network.
- Do not add admin/debug RPC publication rules for validator hosts.
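As one concrete shape of these rules, here is a hedged example assuming the host firewall is managed with ufw and a default-deny inbound policy; adapt it to your own firewall tooling and cloud security groups.

```shell
# Hedged example, assuming ufw manages the host firewall with default-deny inbound.
sudo ufw allow 30303/tcp comment 'Talero P2P'
sudo ufw allow 30303/udp comment 'Talero P2P'
# No allow rule for 8547/tcp: the loopback-only publish keeps validator RPC private.
sudo ufw status verbose
```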
Upgrade and restart flow
A normal validator upgrade is a controlled stop, image refresh, and restart against the same persistent data directory.
The signer backend should remain pinned to the same validator public key unless you are intentionally rotating keys.
cd /opt/talero/node
# Optional: archive /data before a planned upgrade
tar -C ./data -czf validator-backup-$(date -u +%Y%m%d-%H%M%S).tgz validator-exec-pkcs11
# Rebuild or pull the updated image
docker compose -f docker-compose.validator.exec-pkcs11.yml build
# Restart with the same env and override files
TALERO_VALIDATOR_ENV_FILE=.env.validator.exec-pkcs11.prod \
docker compose \
-f docker-compose.validator.exec-pkcs11.yml \
-f docker-compose.validator.exec-pkcs11.private-rpc.yml \
up -d
After restart, repeat the health verification steps from this page. If you are rotating the signer key, derive the new public key first,
update TALERO_VALIDATOR_EXEC_SIGNER_PUBKEY, and only then restart into the new signer configuration.
Troubleshooting
These are the most common validator startup failures implied by the current codebase and what they usually mean in practice.
| Error shape | Likely cause | Fix |
| --- | --- | --- |
| TALERO_NETWORK=mainnet requires explicit TALERO_ROLE | The role was omitted. | Set TALERO_ROLE=validator. |
| mainnet validator mode requires an explicit validator signer backend | No signer backend was configured. | Set TALERO_VALIDATOR_SIGNER_BACKEND and the matching signer variables. |
| mainnet validator mode requires a remote signer backend | The node fell back to a local signer path. | Use exec or remote-http; do not rely on a local signer for normal production. |
| requires TALERO_BOOTNODES | No bootnode list was configured. | Add the canonical mainnet bootnode list. |
| requires TALERO_EXPECTED_GENESIS_HASH | Fresh mainnet data directory without chain identity material. | Set the canonical genesis hash and timestamp before first start. |
| remote signer URL must use https:// | You configured a non-loopback remote HTTP signer with plain HTTP. | Use HTTPS for remote signer endpoints on mainnet. |
| requires TALERO_VALIDATOR_REMOTE_SIGNER_CLIENT_IDENTITY_PEM | mTLS client identity is missing for a non-loopback remote signer. | Install the client identity PEM and reference it in the env file. |
Logs are still the first place to look
cd /opt/talero/node
TALERO_VALIDATOR_ENV_FILE=.env.validator.exec-pkcs11.prod \
docker compose \
-f docker-compose.validator.exec-pkcs11.yml \
-f docker-compose.validator.exec-pkcs11.private-rpc.yml \
logs --tail=200 talero-validator
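Captured logs can also be scanned mechanically for the error shapes listed in the troubleshooting table. The helper below is a hypothetical sketch; its patterns are taken from that table and may need adjusting to match exact log wording in your release.

```shell
# Hypothetical helper: scan a captured log file for the startup error shapes
# listed in the troubleshooting table on this page.
scan_startup_errors() {
  grep -E 'requires explicit TALERO_ROLE|validator signer backend|requires TALERO_BOOTNODES|requires TALERO_EXPECTED_GENESIS_HASH|remote signer URL must use' "$1" \
    || echo "no known startup error shapes found"
}

# Usage after capturing logs:
#   docker compose ... logs --tail=500 talero-validator > /tmp/validator.log
#   scan_startup_errors /tmp/validator.log
```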
Appendix: remote HTTP signer backend
The alternative validator path is TALERO_VALIDATOR_SIGNER_BACKEND=remote-http. This is suitable when the signer is an internal service
rather than an OpenSSL-provider-compatible local backend.
TALERO_VALIDATOR_SIGNER_BACKEND=remote-http
TALERO_VALIDATOR_REMOTE_SIGNER_URL=https://signer.example/sign
TALERO_VALIDATOR_REMOTE_SIGNER_PUBKEY=0xREPLACE_WITH_VALIDATOR_PUBKEY
TALERO_VALIDATOR_REMOTE_SIGNER_CA_PEM=/etc/talero/ca.pem
TALERO_VALIDATOR_REMOTE_SIGNER_CLIENT_IDENTITY_PEM=/etc/talero/client-identity.pem
TALERO_VALIDATOR_REMOTE_SIGNER_TIMEOUT_MS=10000
TALERO_VALIDATOR_REMOTE_SIGNER_RETRIES=2
TALERO_VALIDATOR_REMOTE_SIGNER_RETRY_BACKOFF_MS=250
- Non-loopback mainnet endpoints must use https://.
- TLS certificate verification must stay enabled.
- For non-loopback endpoints, the node expects both a trusted CA PEM and a client identity PEM.
- Bearer tokens can exist as defense in depth, but they do not replace mTLS for non-loopback mainnet validators.
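The transport rule above can also be applied locally before start, as a sanity check on the configured URL. This is a hypothetical pre-check, not node behavior; the node enforces its own, stricter validation.

```shell
# Hypothetical pre-check of TALERO_VALIDATOR_REMOTE_SIGNER_URL against the
# transport rule described above: HTTPS everywhere, plain HTTP loopback only.
remote_signer_url_ok() {
  case "$1" in
    https://*)                           echo "ok" ;;
    http://127.0.0.1*|http://localhost*) echo "ok (loopback only)" ;;
    *) echo "reject: non-loopback mainnet signer endpoints must use https" ;;
  esac
}
```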
Appendix: SoftHSM rehearsal path
The repository ships a local rehearsal path for the validator signer stack. This is for qualification and CI-style testing,
not for a public production validator.
cd /opt/talero/node
./tools/provision_validator_softhsm.sh
./tools/e2e_validator_exec_signer_softhsm_stack.sh
For a shorter production transition checklist, the repository also provides:
MODE=rehearsal ./tools/validator_pkcs11_transition_checklist.sh
MODE=prod ./tools/validator_pkcs11_transition_checklist.sh
The point of rehearsal is to prove your signer path and operational workflow before you cut over to the real HSM or PKCS#11 backend.
It is not a substitute for production key provisioning.