
Optimize PostgreSQL for Zabbix 7.0 with TimescaleDB compression: a guide to enabling the extension, tuning performance, implementing retention policies, and scaling time-series storage effectively.
Monitoring large-scale IT environments with Zabbix 7.0 demands a database backend capable of efficient time-series data handling. By pairing PostgreSQL with TimescaleDB (a time-series extension for PostgreSQL), you gain native partitioning, hypertables, and compression, drastically reducing storage and improving query performance in heavy-use environments. In this article we walk through how to optimize PostgreSQL for Zabbix 7.0 using TimescaleDB compression: prerequisites and planning, configuration, performance tuning, ongoing maintenance, and pitfalls to avoid.
Zabbix collects and stores massive volumes of time-series data (history, trends, events). Traditional relational designs struggle as data volume grows. TimescaleDB addresses this by:
- Partitioning tables into time-based chunks (hypertables), so inserts and time-range queries touch only the relevant partitions
- Natively compressing older chunks, drastically shrinking the on-disk footprint
- Making housekeeping cheap: dropping an entire expired chunk is far faster than row-by-row DELETEs
💡NOTE: For Zabbix 7.0 specifically, the documentation lists supported TimescaleDB versions (e.g., 2.15 and later) and notes that compression is only available with the Community edition of TimescaleDB, not the Apache-licensed variant.
Before enabling compression, ensure compatibility and stability of all components:
| Component | Recommended / Supported Versions | Notes |
|---|---|---|
| Zabbix Server | 7.0.x latest patch | Use latest minor to get bugfixes |
| PostgreSQL | Version supported by Zabbix 7.0 (e.g., 14-17) | See Zabbix doc for exact list |
| TimescaleDB | Community Edition 2.x (e.g., 2.15, 2.16) | The Community edition includes compression; the Apache edition doesn’t. |
| Storage & I/O | High throughput (e.g., NVMe/SSD), sufficient memory & CPU | Performance-sensitive |
- Back up the database before making any changes
- Install the TimescaleDB packages matching your PostgreSQL version
- Add timescaledb to shared_preload_libraries in postgresql.conf and restart PostgreSQL
- Stop the Zabbix server during the migration so no writes occur mid-conversion
🟧 Enable extension
sudo -u postgres psql -d zabbix -c "CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;"
If your database uses a non-default schema (e.g., zbx_schema):
sudo -u postgres psql -d zabbix -c "CREATE EXTENSION IF NOT EXISTS timescaledb SCHEMA zbx_schema CASCADE;"
🟧 Run Zabbix schema script
For Zabbix 7.0, after enabling the extension run:
cat /usr/share/zabbix/sql-scripts/postgresql/timescaledb/schema.sql | sudo -u postgres psql -d zabbix
This script converts eligible tables into hypertables and sets default housekeeping/compression parameters.
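To sanity-check the result, you can confirm the extension version and that hypertables were actually created (assuming the default zabbix database name used above):
sudo -u postgres psql -d zabbix -c "SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';"
sudo -u postgres psql -d zabbix -c "SELECT count(*) FROM timescaledb_information.hypertables;"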
🟧 Configure Housekeeping
In the Zabbix UI: Administration → Housekeeping.
Ensure:
- Override item history period is enabled
- Override item trend period is enabled
- Enable compression is checked
- Compress records older than is set to 7d or more
💡Important: The minimum allowed compression age is 7 days; you cannot compress more recent data.
Tuning the database layer is critical for performance. Below are recommended settings to adjust in postgresql.conf, or via ALTER SYSTEM as sketched after the list:
shared_buffers = 25% of RAM
effective_cache_size = 50%–75% of RAM
maintenance_work_mem = 512MB–1GB
checkpoint_timeout = 15min
max_wal_size = 2GB
wal_buffers = 16MB
work_mem = 64MB
default_statistics_target = 100
Then specific to TimescaleDB:
timescaledb.max_background_workers = 8
timescaledb.telemetry_level = off
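As a sketch, these can be applied with ALTER SYSTEM instead of editing postgresql.conf directly; the absolute values below assume a dedicated host with 32 GB RAM and should be scaled to yours:
ALTER SYSTEM SET shared_buffers = '8GB';                  -- ~25% of RAM; requires a restart to take effect
ALTER SYSTEM SET effective_cache_size = '20GB';
ALTER SYSTEM SET maintenance_work_mem = '1GB';
ALTER SYSTEM SET timescaledb.max_background_workers = 8;
SELECT pg_reload_conf();                                  -- reloads all settings except restart-only ones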
You may also run timescaledb-tune (provided by TimescaleDB) to automate best-practice suggestions.
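For example (the config path is illustrative; adjust it to your distribution's layout):
timescaledb-tune --conf-path /etc/postgresql/16/main/postgresql.conf --yes
sudo systemctl restart postgresql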
🟧 Storage and I/O considerations
- Prefer NVMe/SSD storage; history inserts and compression jobs are I/O-intensive
- Consider placing WAL on a separate fast volume to reduce write contention
- Watch I/O latency under load; storage saturation surfaces as rising insert and query times
Compression in TimescaleDB converts older chunks into a more compact storage format: read-only, smaller on disk, and faster to scan for certain queries. For Zabbix:
- The Zabbix housekeeper compresses history and trend chunks once they pass the configured compression age
- Compressed chunks become read-only, so late-arriving values for those periods cannot be written
- Queries that scan compressed chunks can be faster, since far less data is read from disk
- Disk savings are substantial in typical monitoring workloads (see the stats query sketched below)
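To see what compression is actually saving, TimescaleDB ships a stats function; a minimal sketch against the history hypertable:
-- Before/after sizes of compressed history chunks
SELECT chunk_name,
       pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes)  AS after
FROM chunk_compression_stats('history')
WHERE compression_status = 'Compressed';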
🟧 Considerations Table
| Feature | Benefit | Limitation |
|---|---|---|
| Native compression | Lower disk usage, faster scans | Compressed chunks become immutable |
| Hypertables + partitioning | Better query performance for time-series | Requires downtime for conversion in large DBs |
| Chunk size controls | Optimal performance via smaller chunks | Needs monitoring/tuning for your data volume |
| Housekeeping override + compression | Simplifies retention policy | Must ensure Housekeeping is correctly configured |
🟧 Check hypertables and compression status
-- List hypertables and whether compression is enabled
SELECT hypertable_name, num_chunks, compression_enabled
FROM timescaledb_information.hypertables
WHERE hypertable_schema = 'public';
-- Check per-chunk compression status
SELECT hypertable_name, chunk_name, is_compressed
FROM timescaledb_information.chunks
WHERE hypertable_schema = 'public';
🟧 Example: Re-compress a chunk manually
-- Compression settings are defined on the hypertable, not on a chunk
ALTER TABLE history SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'itemid'
);
-- Compress a specific chunk (take the real name from timescaledb_information.chunks;
-- the one below is illustrative)
SELECT compress_chunk('_timescaledb_internal._hyper_1_100_chunk');
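Should you ever need to modify data in an already-compressed period, decompress first and re-compress afterwards (chunk name illustrative, as above):
SELECT decompress_chunk('_timescaledb_internal._hyper_1_100_chunk');
-- ... run the UPDATE/DELETE ...
SELECT compress_chunk('_timescaledb_internal._hyper_1_100_chunk');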
🟧 Monitor disk space usage
-- Show table sizes
SELECT
schemaname || '.' || tablename as table,
pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) AS size
FROM
pg_tables
WHERE
schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname || '.' || tablename) DESC
LIMIT 10;
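Note that pg_tables misses chunk storage, which lives in the _timescaledb_internal schema; TimescaleDB's size functions account for it:
-- Total size of a hypertable including all of its chunks
SELECT pg_size_pretty(hypertable_size('history'));
-- Table/index/toast breakdown
SELECT * FROM hypertable_detailed_size('trends');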
🟧 Verify autovacuum settings
SHOW autovacuum;
SELECT relname, last_autovacuum, last_vacuum, last_analyze
FROM pg_stat_user_tables
WHERE schemaname='public'
ORDER BY last_autovacuum NULLS FIRST
LIMIT 10;
Even with compression enabled, you still need clear policies for how long to retain raw and compressed data.
🟧 Suggested retention model for Zabbix 7.0
- Raw history: for example, 30-90 days, depending on how far back you need per-value drill-down
- Trends (hourly aggregates): for example, 1-2 years, for capacity planning and long-term graphs
- Compression: compress anything older than 7-30 days; expired chunks are dropped by housekeeping (a manual equivalent is sketched below)
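Retention itself is enforced by the Zabbix housekeeper, but for reference, a manual chunk drop looks like this (Zabbix's clock columns are integer epochs, hence the cast):
-- Drop history chunks older than 90 days (normally housekeeping does this)
SELECT drop_chunks('history', older_than => (extract(epoch FROM now() - interval '90 days'))::integer);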
🟧 Housekeeping UI settings
In Zabbix frontend: Administration → Housekeeping
- Set the history and trends storage periods to match your retention model
- Keep both override options enabled so the periods apply globally
- Leave Enable compression checked
💡Important: If you disable the override options while compressed chunks exist, housekeeping may not be able to remove data properly, leading to warnings in the Housekeeping and System Information sections.
🟡 Monitor insertion rates and system load
High ingestion rates (e.g., 50k+ new values per second, NVPS) require tuned I/O, high parallelism, and sufficient WAL capacity. TimescaleDB handles this better than vanilla PostgreSQL.
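One rough way to gauge ingestion is to sum tuple-insert counters across a hypertable's chunks and sample the query twice over a known interval; a sketch, assuming the standard Zabbix table names:
-- Cumulative rows inserted per hypertable since the last stats reset
SELECT c.hypertable_name, sum(s.n_tup_ins) AS rows_inserted
FROM timescaledb_information.chunks c
JOIN pg_stat_user_tables s
  ON s.schemaname = c.chunk_schema AND s.relname = c.chunk_name
WHERE c.hypertable_name IN ('history', 'history_uint', 'trends')
GROUP BY c.hypertable_name;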
🟡 Chunk time interval tuning
By default, the Zabbix schema script sets the trends chunk interval to 30 days. For many smaller items, you might lower it to 7-10 days to keep chunk sizes manageable (and improve compression efficiency).
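A sketch of adjusting it (this affects newly created chunks only; trends uses an integer epoch clock column, so the interval is given in seconds):
SELECT set_chunk_time_interval('trends', 604800);  -- 7 days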
🟡 Indexing
Ensure indexes on common query fields (timestamp, itemid, value). After compression, queries on compressed chunks are efficient but still require good indexing for non-time filters.
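You can review what is already in place with pg_indexes (chunk-level indexes are created automatically from the hypertable's):
SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public' AND tablename = 'history';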
🟡 Vacuum/Analyze
Compressed chunks are read-only, but un-compressed chunks still accumulate. Ensure autovacuum is operating and statistics (pg_stat_*) show healthy values.
🟡 Backup & Restore strategy
When using TimescaleDB and compression, backups must handle hypertables and extension state. Use pg_dump (custom format) and verify restores in a test environment. Note that some users report restore issues with large Zabbix + TimescaleDB combinations.
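A minimal dump/restore sketch, assuming a custom-format dump, an illustrative backup path, and a zabbix_restore target database with the extension pre-installed; TimescaleDB's pre/post-restore hooks must wrap pg_restore:
sudo -u postgres pg_dump -Fc -d zabbix -f /backup/zabbix.dump
sudo -u postgres psql -d zabbix_restore -c "SELECT timescaledb_pre_restore();"
sudo -u postgres pg_restore -Fc -d zabbix_restore /backup/zabbix.dump
sudo -u postgres psql -d zabbix_restore -c "SELECT timescaledb_post_restore();"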
🟡 Migration downtime
Converting existing non-TimescaleDB Zabbix databases may require downtime. The initial conversion of large history/trend tables can take hours or more. Plan maintenance windows accordingly.
Optimizing PostgreSQL for Zabbix 7.0 with TimescaleDB compression gives you a powerful architecture for managing large volumes of time-series monitoring data. The key takeaways are:
- Use a supported stack: Zabbix 7.0.x, PostgreSQL 14-17, and TimescaleDB Community Edition 2.x
- Enable the extension and run the Zabbix TimescaleDB schema script to convert history and trend tables into hypertables
- Configure Housekeeping with the override options and compression enabled (7-day minimum age)
- Tune postgresql.conf and the TimescaleDB worker settings for your hardware
- Monitor hypertables, chunk compression status, and disk usage regularly
- Define clear retention periods and test your backup and restore procedure
By following these steps, you’ll benefit from significantly reduced disk usage, improved query responsiveness, and a scalable backend as your monitored infrastructure grows.
Did you find this article helpful? Your feedback is invaluable to us! Feel free to share this post with those who may benefit, and let us know your thoughts in the comments section below.
