

Postgres Monitoring

Postgres needs an exporter so that Prometheus can scrape its metrics. The following command creates and starts the postgres-exporter container:

docker run -d --name postgres-exporter --net=host -e DATA_SOURCE_NAME="postgresql://postgres:Un190re@769163@18.216.167.150:5432/aumina?sslmode=disable" wrouesnel/postgres_exporter

Replace the connection string in the above command with the user, password, host, port, and database name of your PostgreSQL instance.
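The DATA_SOURCE_NAME value is a standard PostgreSQL connection URI. As a sketch, its general shape is as follows (all values are placeholders for your own environment):

DATA_SOURCE_NAME="postgresql://<user>:<password>@<host>:<port>/<database>?sslmode=disable"

If the password contains special characters such as @, it must be percent-encoded in the URI (for example, @ becomes %40).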

The default port for the postgres exporter is 9187, so once the postgres-exporter container is up, you can retrieve the metrics with:

curl http://127.0.0.1:9187/metrics
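As a quick health check, postgres_exporter exposes a pg_up metric that reports 1 when the exporter can reach the database and 0 otherwise. A minimal check, assuming the exporter runs on the same host:

curl -s http://127.0.0.1:9187/metrics | grep '^pg_up'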

Once the metrics are available, add the following job under scrape_configs in prometheus.yml (/etc/prometheus/prometheus.yml), then restart the Prometheus Docker container:

  - job_name: 'postgres'
    metrics_path: '/metrics'
    scrape_interval: 5s
    static_configs:
      - targets: ['172.31.36.120:9187']
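If Prometheus runs in a container named prometheus (the name is an assumption; substitute your actual container name), it can be restarted with:

docker restart prometheus

After the restart, the new postgres target should appear as UP on the Prometheus Targets page (http://<prometheus-host>:9090/targets).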

Log in to Grafana and import the custom dashboard (PostgreSQL Database.json) for Postgres.

image120.PNG
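As an alternative to the UI, the dashboard JSON can also be pushed through the Grafana HTTP API. The sketch below assumes Grafana is reachable on localhost:3000 with admin/admin credentials and that PostgreSQL Database.json is in the current directory; depending on how the dashboard was exported, the UI import may still be the simpler route:

curl -s -u admin:admin -H "Content-Type: application/json" \
  -d "{\"dashboard\": $(cat 'PostgreSQL Database.json'), \"overwrite\": true}" \
  http://localhost:3000/api/dashboards/db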

The following metrics are visualized in the Grafana dashboard; a few illustrative PromQL expressions for these panels are sketched after the list:

  1. Process information

    1. Version - Postgres Version

    2. Running - Running Status True/False

    3. Uptime - Relative time since when the service is up

    4. Start time - Service start date and time

    image121.PNG
  2. General Counters, CPU, Memory and File descriptor statistics

    1. Current fetch data - Total rows fetched by queries so far

    2. Current insert data - Total rows inserted so far

    3. Current update data - Total rows updated so far

    4. Max connections - Maximum number of concurrent connections

    5. Average CPU Usage - Average user and system CPU time spent in seconds

    6. Average Memory Usage - Virtual and resident memory size in bytes, averaged over a 5-minute interval

    7. Open File Descriptors - Number of open file descriptors

    image122.PNG
  3. Settings

    1. Shared buffers - Determines how much memory is dedicated to the server for caching data.

    2. Effective Cache - Estimates how much memory is available for disk caching by the operating system and within the database itself.

    3. Maintenance work memory - Maximum amount of memory to be used by maintenance operations such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.

    4. Work memory - Amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files.

    image123.PNG
  4. Database statistics

    1. Active Session - Number of connections in the active state

    2. Transactions - Rate of number of transactions in this database that are committed and rolled back

    3. Fetch Data - Number of rows fetched by queries in this database

    4. Update Data - Number of rows updated by queries in this database

    5. Return Data - Number of rows returned by queries in this database

    6. Insert Data - Number of rows inserted by queries in this database

    7. Delete Data - Number of rows deleted by queries in this database

    8. Idle Sessions - Number of connections in the idle state

    9. Cache Hit Rate - Postgres tracks the access patterns of your data and keeps frequently accessed data in cache. Generally, the cache hit rate should be above 99%

    10. Buffers(bgwriter) - Number of buffers written directly by a backend

    11. Temp File(Bytes) - Rate of temp bytes written to the temp files

    12. Lock tables - PostgreSQL provides various lock modes to control concurrent access to data in tables; this panel shows the locks of each type currently held against tables

    13. Conflicts/Deadlocks - Database-wide statistics about queries canceled due to conflicts with recovery on standby servers, and deadlocks detected in the database

    14. Checkpoint stats - Total amount of time that is spent in the portion of checkpoint processing where files are written to disk, in milliseconds

    image124.png
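For reference, the panels above are typically built from PromQL expressions over the postgres_exporter metrics. A few illustrative examples are sketched below; the metric names are those exposed by wrouesnel/postgres_exporter and may differ between exporter versions, and datname="aumina" assumes the database from the connection string above:

Active Session: pg_stat_activity_count{state="active"}
Transactions: rate(pg_stat_database_xact_commit{datname="aumina"}[5m]) + rate(pg_stat_database_xact_rollback{datname="aumina"}[5m])
Fetch Data: rate(pg_stat_database_tup_fetched{datname="aumina"}[5m])
Cache Hit Rate: pg_stat_database_blks_hit{datname="aumina"} / (pg_stat_database_blks_hit{datname="aumina"} + pg_stat_database_blks_read{datname="aumina"})
Max connections: pg_settings_max_connections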