Monitoring and evaluating queries
The Query Monitor panel provides a real-time view of every session connected to the WarehousePG (WHPG) cluster. By combining live monitoring with an integrated SQL workbench and AI-assisted diagnostic tools, it allows you to maintain peak cluster performance.
Note
Access to the Query Monitor and its administrative functions is governed by user roles, ensuring that session management capabilities are restricted to authorized personnel.
- Admin/Operator: Full visibility, plus the ability to Cancel Query or Terminate Session.
- Viewer: Read-only access to all monitoring tabs; cannot stop or modify sessions.
Refer to the Role permissions matrix for details.
Monitoring active cluster workloads
Use the Active Queries tab to track execution progress and identify resource-heavy statements that may be impacting system stability.
- Check connection distribution: Observe the status bar to see how connections are distributed. Focus on sessions that are Idle in transaction or Waiting, as these typically indicate application-level leaks or locking contention that requires intervention.
- Identify memory spill activity: Review the Spill Activity section to identify queries forced to use disk space for processing. High Spill Size or Temp Files counts indicate that specific queries are exceeding their memory limits and need optimization.
- Search and filter queries: Use the Advanced Search to isolate queries based on duration, specific users, or databases. This is the fastest way to find queries that have been executing for an unexpectedly long time.
- Manage runaway queries: If you have Admin or Operator privileges, use the Cancel query tool to gracefully stop a statement that is consuming excessive resources. Use the Export feature to save a snapshot of current activity for later performance audits.
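If you prefer the command line, the panel's data closely mirrors what PostgreSQL-compatible clusters expose in the `pg_stat_activity` system view. A minimal sketch, assuming standard PostgreSQL catalog columns (names may vary slightly by WHPG version):

```sql
-- Find statements running longer than 5 minutes, longest first.
SELECT pid, usename, datname, state,
       now() - query_start AS runtime,
       left(query, 80) AS query_snippet
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;

-- Gracefully cancel a runaway statement by pid (requires sufficient privileges).
SELECT pg_cancel_backend(12345);  -- 12345 is a placeholder pid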
Understanding query statuses
The Active Queries tab features a status bar that provides a real-time count for each connection state. These metrics offer an immediate snapshot of cluster health and help you quickly identify if the system is under heavy load or if connections are piling up in a specific state:
- Active: Queries currently being processed by the CPU. Track the execution time to ensure it aligns with expected performance baselines.
- Idle: Established connections waiting for the next command. No immediate action is required; this is standard behavior for established connections.
- Idle in Transaction: Open transactions waiting for input; these must be monitored as they can prevent vacuuming and cause table bloat.
- Transaction (abort): Transactions that have encountered an error and are currently in an aborted state. The session needs to be rolled back or terminated to release system locks and resources.
- Fastpath: Sessions executing internal fast-path function calls.
- Disabled: Connections that have been administratively disabled or are currently restricted from executing new database operations.
- Waiting: Queries blocked while waiting for locks or system resources.
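The status-bar counts correspond closely to the `state` column of the standard PostgreSQL `pg_stat_activity` view, so you can reproduce them in SQL. A sketch, assuming standard catalog columns:

```sql
-- Count sessions per connection state (mirrors the status bar counts).
-- Standard PostgreSQL states include: active, idle, idle in transaction,
-- idle in transaction (aborted), fastpath function call, and disabled.
SELECT state, count(*) AS sessions
FROM pg_stat_activity
GROUP BY state
ORDER BY sessions DESC;
```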
Auditing database sessions
Use the Sessions tab to identify connection leaks and manage dormant processes that are consuming system slots.
- Pinpoint dormant connections: Sort the list by Idle time to find connections that have been open for long periods without activity. These dormant sessions can prevent system maintenance tasks like vacuuming.
- Identify client application sources: Review the Application column to see which tools (such as psql, pgAdmin, or ETL drivers) are initiating connections. This helps you identify which specific service might be responsible for a connection spike.
- Terminate problematic sessions: If a session is unresponsive or holding critical locks, use the Terminate action to forcefully close the connection and release all associated system resources.
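The same audit can be performed in SQL against the standard PostgreSQL `pg_stat_activity` view. A sketch, assuming standard catalog columns; the one-hour threshold is illustrative:

```sql
-- Find connections that have been idle for more than an hour, longest first.
SELECT pid, usename, application_name,
       now() - state_change AS idle_time
FROM pg_stat_activity
WHERE state = 'idle'
  AND now() - state_change > interval '1 hour'
ORDER BY idle_time DESC;

-- Forcefully close a session and release its resources (admin privilege required).
SELECT pg_terminate_backend(12345);  -- 12345 is a placeholder pid
```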
Testing and optimizing queries
Use the Query Editor tab as an interactive workbench to safely explore data and analyze execution plans in a read-only environment.
- Generate execution plans: Use the EXPLAIN and ANALYZE buttons to visualize how the engine intends to process a query. Review the motion analysis to see how data is redistributed, broadcast, or gathered across segments.
- Identify performance bottlenecks: Check the warnings provided in the execution plan. Focus on "sequential scans" on large tables or "missing statistics," which are common causes of slow performance.
- Format and refine: Use the Format SQL button to clean up raw SQL for better readability, and use the example library to quickly pull templates for common administrative queries.
- Export CSV: Once a query successfully executes, you can download the entire result set as a CSV file. This allows for easy data portability into spreadsheets or external reporting tools.
Important
To ensure system safety, all queries executed through the Query Editor run in read-only mode. Use dedicated database tools for write operations if needed.
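The EXPLAIN and ANALYZE buttons correspond to the standard SQL commands of the same names. A minimal sketch of what the editor runs on your behalf, using a hypothetical `sales` table:

```sql
-- EXPLAIN shows the planned execution strategy without running the query.
EXPLAIN
SELECT region, sum(amount) FROM sales GROUP BY region;

-- EXPLAIN ANALYZE executes the query and reports actual timings; on an MPP
-- cluster the plan also includes motion nodes showing where rows are
-- redistributed, broadcast, or gathered across segments.
EXPLAIN ANALYZE
SELECT region, sum(amount) FROM sales GROUP BY region;
```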
Leveraging the AI Assistant
Use the integrated AI Assistant to accelerate SQL authoring and simplify the debugging of complex performance issues.
Note
This is an optional feature. An administrator must configure an ANTHROPIC_API_KEY for the assistant to be active. See Configuring WEM and Configuring WEM settings post-installation for details.
- Generate queries from natural language: Press Ctrl + K to ask the assistant to write a query for you using plain English. Because the assistant is schema-aware, it will reference your actual table and column names accurately.
- Optimize slow-running statements: Paste a slow query into the assistant and ask for optimization suggestions. The assistant will analyze join efficiency, index usage, and cluster resource utilization to recommend a more efficient version of your code.
- Debug database errors: When a query fails, provide the PostgreSQL error message to the assistant. It will explain the failure in plain language and suggest the specific SQL corrections needed to resolve the error.
Note
While the AI Assistant can suggest any SQL command (including DDL/DML), execution is strictly governed by your role. For example, Viewer roles are restricted to executing SELECT statements only.
Reviewing execution history
Use the Results subtab within the Query Editor tab to manage your recent activity and retrieve data without re-executing heavy statements.
- Retrieve recent data: View results from queries run in the last 24 hours to re-examine data grids and success indicators without placing a new load on the cluster.
- Refine past queries: Use the Open in Editor action to reload a previous statement for further tuning. This is ideal for iterative development of complex analytical queries.