Optimizing PostgreSQL for Zabbix 7.0 with TimescaleDB Compression

Optimize PostgreSQL for Zabbix 7.0 with TimescaleDB compression: guide to enable extensions, tune performance, implement retention policies, and scale time-series storage effectively.

🔈Introduction

Monitoring large-scale IT environments with Zabbix 7.0 demands a database backend capable of efficient time-series data handling. By pairing PostgreSQL with TimescaleDB (a time-series extension for PostgreSQL) you gain native partitioning, hypertables, and compression—drastically reducing storage and improving query performance in heavy-use environments. In this article we will walk through how to optimize PostgreSQL for Zabbix 7.0 using TimescaleDB compression: planning prerequisites, configuration, performance tuning, ongoing maintenance—and pitfalls to avoid.


✅ Why TimescaleDB + PostgreSQL for Zabbix

Zabbix collects and stores massive volumes of time-series data (history, trends, events). Traditional relational designs struggle as data volume grows. TimescaleDB addresses this by:

  • Transforming heavy tables into hypertables, automatically partitioned by time (and optionally by space).
  • Providing native compression for older partitions, reducing disk space and I/O load.
  • Seamlessly integrating with PostgreSQL infrastructure (indexes, tools, replication).
💡NOTE: For Zabbix 7.0 specifically, the documentation lists supported TimescaleDB versions (e.g., 2.15 and later) and notes that compression is available only with the Community edition of TimescaleDB; the Apache-licensed variant does not include it.

🔄 Prerequisites & Version Compatibility

Before enabling compression, ensure compatibility and stability of all components:

| Component | Recommended / Supported Versions | Notes |
| --- | --- | --- |
| Zabbix Server | 7.0.x latest patch | Use the latest minor release to get bugfixes |
| PostgreSQL | Version supported by Zabbix 7.0 (e.g., 14-17) | See the Zabbix documentation for the exact list |
| TimescaleDB | Community Edition 2.x (2.15+, 2.16+, etc.) | The Community edition includes compression; the Apache edition doesn't |
| Storage & I/O | High throughput (e.g., NVMe/SSD), sufficient memory & CPU | Performance-sensitive |

Checklist

  • Install PostgreSQL from official community repo (avoid vendor-modified PG versions).
  • Install TimescaleDB extension, matching PG version (e.g., TimescaleDB 2.17 with PG 17)
  • Ensure Zabbix is configured to use PostgreSQL (set DBHost, DBName, DBUser, DBPassword etc).
  • Backup your database before enabling hypertables or compression.
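Before making changes, it is worth confirming what is actually installed. A quick sanity check (this assumes the database is named zabbix, as elsewhere in this guide):

    # Check the PostgreSQL server version
    sudo -u postgres psql -d zabbix -c "SELECT version();"
    # Check which TimescaleDB version is available and whether it is installed
    sudo -u postgres psql -d zabbix -c "SELECT default_version, installed_version FROM pg_available_extensions WHERE name = 'timescaledb';"

If installed_version is empty, the extension package is present but CREATE EXTENSION has not been run yet.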

▶️ Enabling TimescaleDB for Zabbix 7.0

🟧 Enable extension

    sudo -u postgres psql -d zabbix -c "CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;"

If your database uses a non-default schema (e.g., zbx_schema):

    sudo -u postgres psql -d zabbix -c "CREATE EXTENSION IF NOT EXISTS timescaledb SCHEMA zbx_schema CASCADE;"

🟧 Run Zabbix schema script

For Zabbix 7.0, after enabling the extension run:

    cat /usr/share/zabbix/sql-scripts/postgresql/timescaledb/schema.sql | sudo -u postgres psql -d zabbix

This script converts eligible tables into hypertables and sets default housekeeping/compression parameters.
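To confirm the conversion succeeded, you can list the hypertables the script created (the table names you should see are the standard Zabbix history/trends tables):

    sudo -u postgres psql -d zabbix -c "SELECT hypertable_name, num_chunks FROM timescaledb_information.hypertables;"

Expect entries for history, history_uint, trends, trends_uint, and related tables.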

🟧 Configure Housekeeping

In the Zabbix UI: Administration → Housekeeping.
Ensure:

  • ✅ "Override item history period" is checked
  • ✅ "Override item trend period" is checked
  • ✅ Compression is enabled (if the UI shows it)
  • ✅ "Compress records older than" is set (default is 7d)
💡Important: The minimum allowed for compression age is 7 days — you cannot compress more recent data.
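Once the Housekeeper has run past that threshold, you can verify at the chunk level that compression is actually happening. Zabbix partitions its hypertables on the integer clock column, so the chunk boundaries are Unix timestamps (range_start_integer/range_end_integer):

    sudo -u postgres psql -d zabbix -c "SELECT hypertable_name, chunk_name, is_compressed, range_start_integer, range_end_integer FROM timescaledb_information.chunks WHERE hypertable_name = 'history' ORDER BY range_start_integer;"

Chunks older than the configured age should show is_compressed = t.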

▶️ Optimizing PostgreSQL / TimescaleDB Parameters

Tuning the database layer is critical for performance. Here are recommended settings to adjust in postgresql.conf (or via ALTER SYSTEM). The absolute values below assume a dedicated host with 32 GB of RAM; scale them to your hardware:

    shared_buffers = 8GB                # ~25% of RAM
    effective_cache_size = 24GB         # 50%-75% of RAM
    maintenance_work_mem = 1GB          # 512MB-1GB
    checkpoint_timeout = 15min
    max_wal_size = 2GB
    wal_buffers = 16MB
    work_mem = 64MB
    default_statistics_target = 100

Then, specific to TimescaleDB:

    timescaledb.max_background_workers = 8
    timescaledb.telemetry_level = off

You may also run timescaledb-tune (provided by TimescaleDB) to automate best-practice suggestions.
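A typical non-interactive timescaledb-tune run looks like this (the config path is an example; adjust it to your distribution and PostgreSQL version):

    # Preview suggested changes without writing anything
    timescaledb-tune --conf-path /etc/postgresql/17/main/postgresql.conf --dry-run

    # Apply all suggestions, then restart PostgreSQL to pick them up
    sudo timescaledb-tune --conf-path /etc/postgresql/17/main/postgresql.conf --yes
    sudo systemctl restart postgresql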

🟧 Storage and I/O considerations

  • Use SSD/NVMe for WAL and data directories, separate from OS.
  • Enable autovacuum and ensure it keeps up; chunks and hypertables still rely on VACUUM.
  • Consider enabling fillfactor reduction (e.g., 70–80%) for heavy insert tables, though compressed hypertables reduce update activity.

▶️ TimescaleDB Compression: What It Means for Zabbix

Compression in TimescaleDB converts older chunks into a more compact storage format: read-only, smaller on disk, and faster to scan for many queries. For Zabbix:

  • ✅ History/trend tables become hypertables, partitioned by time.
  • ✅ Older chunks (based on the threshold in the Zabbix Housekeeping UI, default 7 days) are compressed automatically by the Housekeeper process.
  • ✅ After compression, inserts, updates, and deletes into those chunks are not allowed. This aligns with Zabbix's usage (historical data is rarely modified) but should be noted.
  • ✅ Significant disk savings: case studies report up to ~90% reduction in size.

🟧 Considerations Table

| Feature | Benefit | Limitation |
| --- | --- | --- |
| Native compression | Lower disk usage, faster scans | Compressed chunks become immutable |
| Hypertables + partitioning | Better query performance for time-series | Requires downtime for conversion in large DBs |
| Chunk size controls | Optimal performance via smaller chunks | Needs monitoring/tuning for your data volume |
| Housekeeping override + compression | Simplifies retention policy | Must ensure Housekeeping is correctly configured |

🖥️ CLI Examples: Monitoring & Maintenance

🟧 Check hypertables and compression status

    -- List hypertables (TimescaleDB 2.x uses hypertable_schema, not table_schema)
    SELECT * FROM timescaledb_information.hypertables
      WHERE hypertable_schema = 'public';

    -- Check compression status per hypertable
    SELECT hypertable_name, compression_enabled
      FROM timescaledb_information.hypertables
      WHERE hypertable_schema = 'public';

🟧 Example: Compress a chunk manually

    -- Compression options are set on the hypertable, not on individual chunks.
    -- (The Zabbix schema script already configures these; shown for illustration.)
    ALTER TABLE history SET (
      timescaledb.compress,
      timescaledb.compress_segmentby = 'itemid'
    );
    -- Compress all chunks older than 7 days. Zabbix partitions on the integer
    -- clock column, so the threshold is a Unix timestamp.
    SELECT compress_chunk(c, if_not_compressed => true)
      FROM show_chunks('history',
        older_than => extract(epoch FROM now() - INTERVAL '7 days')::integer) AS c;

🟧 Monitor disk space usage

    -- Show the ten largest tables in the public schema
    SELECT
      schemaname || '.' || tablename AS table,
      pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) AS size
    FROM pg_tables
    WHERE schemaname = 'public'
    ORDER BY pg_total_relation_size(schemaname || '.' || tablename) DESC
    LIMIT 10;
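Note that a pg_tables query against public only sees the nearly empty parent tables; the actual data lives in chunks under the _timescaledb_internal schema. TimescaleDB's size functions account for chunks and compression:

    # Total size per hypertable, chunks included
    sudo -u postgres psql -d zabbix -c "SELECT hypertable_name, pg_size_pretty(hypertable_size(format('%I.%I', hypertable_schema, hypertable_name)::regclass)) FROM timescaledb_information.hypertables;"

    # Before/after compression statistics for a single hypertable
    sudo -u postgres psql -d zabbix -c "SELECT * FROM hypertable_compression_stats('history');"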

🟧 Verify autovacuum settings

    SHOW autovacuum;

    SELECT relname, last_autovacuum, last_vacuum, last_analyze
    FROM pg_stat_user_tables
    WHERE schemaname = 'public'
    ORDER BY last_autovacuum NULLS FIRST
    LIMIT 10;

▶️ Retention Policies & Housekeeping Strategy

Even with compression enabled, you still need clear policies for how long to retain raw and compressed data.

🟧 Suggested retention model for Zabbix 7.0

  • Raw history data: keep 7 days minimum, preferably 14-30 days depending on scale
  • Trends data: compress older than 7 days, keep 90-365 days of trends
  • Events/logs: archive externally after 30-90 days

🟧 Housekeeping UI settings

In Zabbix frontend: Administration → Housekeeping

  • ✅ Set “Override item history period” to shortest acceptable (e.g., 14d)
  • ✅ Set “Override item trend period” accordingly
  • Enable compression: check “Enable compression”, set “Compress records older than” to 7d or more.
💡Important: If you disable the override options while compressed chunks exist, the housekeeper may no longer be able to remove data properly, leading to warnings in the Housekeeping and System Information sections.

👉 Performance Tuning & Best Practices

🟡 Monitor insertion rates and system load

High ingestion rates (e.g., 50k+ new values per second, NVPS) require tuned I/O, high parallelism, and enough WAL capacity. TimescaleDB handles this better than vanilla PostgreSQL.

🟡 Chunk time interval tuning

By default, the Zabbix schema script sets the chunk interval for trends to 30 days. For deployments with many smaller items you might lower that to 7-10 days to keep chunk sizes manageable (and improve compression efficiency).
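If you do decide to shrink the interval, set_chunk_time_interval() changes it for newly created chunks only; existing chunks keep their original span. Because Zabbix partitions on the integer clock column, the interval is given in seconds:

    # New trends chunks will span 7 days (604800 seconds)
    sudo -u postgres psql -d zabbix -c "SELECT set_chunk_time_interval('trends', 604800);"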

🟡 Indexing

Ensure indexes on common query fields (timestamp, itemid, value). After compression, queries on compressed chunks are efficient but still require good indexing for non-time filters.

🟡 Vacuum/Analyze

Compressed chunks are read-only, but un-compressed chunks still accumulate. Ensure autovacuum is operating and statistics (pg_stat_*) show healthy values.

🟡 Backup & Restore strategy

When using TimescaleDB and compression, backups must handle hypertables and extension state. Use pg_dump (custom format) and verify restores in a test environment. Note that some users report restore issues with large Zabbix + TimescaleDB combinations.
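A sketch of a dump/restore cycle that preserves TimescaleDB state (paths and database names are examples; the pre/post-restore helpers pause and resume TimescaleDB's background workers around the restore):

    # Custom-format dump of the live database
    sudo -u postgres pg_dump -Fc -f /var/backups/zabbix.dump zabbix

    # Restore into a fresh database in a test environment
    sudo -u postgres psql -c "CREATE DATABASE zabbix_restore;"
    sudo -u postgres psql -d zabbix_restore -c "CREATE EXTENSION IF NOT EXISTS timescaledb;"
    sudo -u postgres psql -d zabbix_restore -c "SELECT timescaledb_pre_restore();"
    sudo -u postgres pg_restore -d zabbix_restore /var/backups/zabbix.dump
    sudo -u postgres psql -d zabbix_restore -c "SELECT timescaledb_post_restore();"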

🟡 Migration downtime

Converting existing non-TimescaleDB Zabbix databases may require downtime. The initial conversion of large history/trend tables can take hours or more. Plan maintenance windows accordingly.


🧰 Troubleshooting & Common Pitfalls

  • Compression not enabled: Ensure you are using the TimescaleDB Community edition (which includes compression) and that the Zabbix UI shows the compression option. If not, the server log will show a warning.
  • Large DB size even after compression: Check that “Override item history/trend period” is enabled; ensure autovacuum cleaned up old chunks.
  • Slow query performance after migration to TimescaleDB: Might be missing indexes, large chunk sizes, or I/O bottlenecks. One Reddit user reported slower map loading after migration.
  • Inserts rejected into compressed chunks: Remember compressed chunks are immutable. If you have very late arriving data (for example via proxies with large buffer backlogs) you may lose data if it lands beyond the compression threshold.
  • Restoration problems: Very large dumps and restores may fail if chunk/extension metadata is not handled correctly. Test restores in non-production first.
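For the late-data case specifically, an affected chunk can be decompressed, back-filled, and recompressed by hand (the chunk name below is a placeholder; look up real names in timescaledb_information.chunks):

    -- Temporarily make the chunk writable again
    SELECT decompress_chunk('_timescaledb_internal._hyper_3_120_chunk');
    -- ... insert the backlogged rows here ...
    -- Then recompress it
    SELECT compress_chunk('_timescaledb_internal._hyper_3_120_chunk');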

📌 Summary

Optimizing PostgreSQL for Zabbix 7.0 with TimescaleDB compression gives you a powerful architecture for managing large volumes of time-series monitoring data. The key takeaways are:

  • Use versions of PostgreSQL and TimescaleDB that are officially supported by Zabbix.
  • Enable TimescaleDB and convert Zabbix tables to hypertables.
  • Enable compression, set a sensible retention/housekeeping strategy, and enforce it via the frontend.
  • Tune PostgreSQL and TimescaleDB settings for I/O, memory, and query workload.
  • Monitor performance, storage usage, and vacuum/autovacuum health.
  • Plan for migration downtime, backups/restores, and test thoroughly in a lower-risk environment.

By following these steps, you’ll benefit from significantly reduced disk usage, improved query responsiveness, and a scalable backend as your monitored infrastructure grows.

Did you find this article helpful? Your feedback is invaluable to us! Feel free to share this post with those who may benefit, and let us know your thoughts in the comments section below.

