Auditing system logs

The Logs panel serves as a centralized diagnostic hub. By consolidating internal database log records with external log aggregation, it allows you to correlate system-level events with specific query failures.

Understanding log levels

WarehousePG Enterprise Manager (WEM) displays severity levels generated directly by the underlying WarehousePG (WHPG) engine. These levels categorize every log entry based on its impact on database operations:

  • DEBUG: Contains granular technical details used primarily for deep-dive troubleshooting and development analysis.
  • INFO: Provides standard informational messages regarding routine system operations.
  • LOG: Reports standard engine-level events and process completions.
  • WARNING: Highlights events that are not fatal but could indicate potential configuration issues or approaching resource limits.
  • ERROR: Reports a problem that prevented a specific command or query from completing successfully.
  • FATAL: Indicates an error that caused a specific session to be terminated, though the rest of the database remains operational.
  • PANIC: Indicates a critical error that caused all database sessions to be disconnected; the system will usually attempt a restart after a PANIC.
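The levels above form an escalating scale of impact, which makes threshold-based filtering natural. As a rough illustration, a client-side filter that keeps only entries at or above a chosen severity might look like the following sketch (the dict-based entry format is a made-up example, not a WEM export format):

```python
# Severity levels ordered by increasing impact, mirroring the list above.
SEVERITY_ORDER = ["DEBUG", "INFO", "LOG", "WARNING", "ERROR", "FATAL", "PANIC"]

def at_least(entries, threshold):
    """Keep entries whose severity meets or exceeds the given threshold."""
    rank = {level: i for i, level in enumerate(SEVERITY_ORDER)}
    cutoff = rank[threshold]
    return [e for e in entries if rank[e["severity"]] >= cutoff]

# Hypothetical entries for illustration only.
entries = [
    {"severity": "INFO", "message": "checkpoint complete"},
    {"severity": "ERROR", "message": "division by zero"},
    {"severity": "FATAL", "message": "terminating connection"},
]

critical = at_least(entries, "ERROR")  # keeps only the ERROR and FATAL rows
```

Filtering at ERROR and above is the usual starting point for incident triage, since everything below that level is routine operational noise.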

Performing structured database analysis

Use the WHPG Log Tables tab to query internal database records for specific historical events. This is the primary method for investigating SQL errors and session-level failures.

  • Filter by severity impact: Narrow your search to critical event levels like ERROR, FATAL, or PANIC to bypass routine system noise. Use these logs to identify commands that failed or sessions that were terminated prematurely.
  • Isolate specific actors and environments: Filter results by user, database, or session ID. This allows you to determine if a performance issue is widespread or isolated to a single application service or developer account.
  • Investigate technical error context: Select the Details button in the actions column to view the full technical trace. Use the session ID, PID, and the specific source code file/line reference to pinpoint exactly where a query failed.
  • Debug prepared statements: Review the detail field in the log details modal to see bound parameters. This is essential for reproducing errors that only occur with specific input data.
  • Facilitate technical support and archiving: Select the Export CSV button to download your filtered results. This file is the primary resource to provide to technical support for deeper investigation. It is also ideal for long-term compliance archiving or performing bulk analysis in external tools.
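Once you have an exported CSV, bulk analysis in an external tool is straightforward. The sketch below counts ERROR and FATAL events per user with Python's standard library; the column names (loguser, logseverity, and so on) are illustrative stand-ins, not WEM's actual CSV schema, so adjust them to match your export:

```python
import csv
import io
from collections import Counter

# Sample rows standing in for an exported file. The header names here
# are assumptions for illustration, not the real WEM column layout.
sample = """logtime,loguser,logseverity,logmessage
2024-05-01 10:00:00,app_svc,ERROR,relation does not exist
2024-05-01 10:02:13,app_svc,ERROR,relation does not exist
2024-05-01 10:05:44,etl_job,FATAL,terminating connection
"""

with io.StringIO(sample) as f:
    rows = list(csv.DictReader(f))

# Count failures per user -- a typical first pass when deciding whether
# a problem is widespread or isolated to a single account.
failures = Counter(
    r["loguser"] for r in rows if r["logseverity"] in ("ERROR", "FATAL")
)
```

A skew in these counts toward one user or service account is a quick signal that the issue is isolated rather than cluster-wide.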

Tracking live system logs

Use the Loki Logs tab for high-speed, full-text searching across the entire cluster infrastructure in real time.

  • Stream live system events: Watch logs as they are generated to observe the immediate impact of configuration changes or application deployments.
  • Navigate to specific incidents: Use the visual time-picker to jump to a specific moment in time when a system alert was triggered. This helps you see exactly what was happening across the cluster during a hardware spike or network interruption.
  • Search across the infrastructure: Use Loki’s optimized search engine to perform broad keyword searches (like "timeout" or "refused") across all nodes simultaneously, rather than checking individual host tables.
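The same keyword searches can be scripted against Loki's HTTP API for automation. The sketch below builds a query_range request URL with a LogQL line filter; the base URL and the cluster label are assumptions for illustration, so substitute whatever labels your Loki deployment actually applies to WHPG streams:

```python
from urllib.parse import urlencode

# Hypothetical Loki endpoint -- replace with your deployment's address.
base = "http://loki.example.com:3100/loki/api/v1/query_range"

# LogQL: select streams by label, then line-filter for a keyword.
# The {cluster="whpg"} selector is an assumed label, not a WEM default.
logql = '{cluster="whpg"} |= "timeout"'

params = urlencode({
    "query": logql,
    "limit": 100,
    "start": "2024-05-01T10:00:00Z",
    "end": "2024-05-01T11:00:00Z",
})
url = f"{base}?{params}"
```

Issuing a GET request against this URL returns matching log lines from every node in the selected time window, mirroring what the Loki Logs tab does interactively.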
