Monitor execution, analyze logs, and debug automation issues
Qontinui Runner provides comprehensive logging and monitoring capabilities to help you understand what your automation is doing, diagnose issues, and optimize performance.
Three Types of Logs: Runner maintains separate logs for general execution (info, warnings, errors), image recognition results, and action execution details.
In development mode, logs are stored in the .dev-logs/ directory at the project root. In production, logs are stored in your system's application data directory.
Development log files (.dev-logs/):

.dev-logs/runner-frontend.log - Vite/React dev server output, HMR updates, and frontend console logs
.dev-logs/runner-backend.log - Rust/Cargo build output, tracing logs, and Python stderr
.dev-logs/runner-rust-logs/ - Detailed Rust tracing logs (junction to AppData)
.dev-logs/qontinui-lib.log - Python library logs: web extraction, vision, and state detection
.dev-logs/ai-output.jsonl - AI chat logs in JSONL format (prompts and responses)

Production log locations:

Windows: %LOCALAPPDATA%\qontinui-runner\logs\
macOS: ~/Library/Application Support/qontinui-runner/logs/
Linux: ~/.local/share/qontinui-runner/logs/
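If a script or support tool needs to locate these logs programmatically, a minimal sketch along these lines resolves the right directory for the current platform (the paths come from the list above; the helper name is our own):

```python
# Minimal sketch: resolve the Qontinui Runner log directory for this platform.
# Production paths are taken from the documentation above; development mode
# uses .dev-logs/ at the project root instead.
import os
import platform
from pathlib import Path

def production_log_dir() -> Path:
    system = platform.system()
    if system == "Windows":
        return Path(os.environ["LOCALAPPDATA"]) / "qontinui-runner" / "logs"
    if system == "Darwin":  # macOS
        return Path.home() / "Library" / "Application Support" / "qontinui-runner" / "logs"
    return Path.home() / ".local" / "share" / "qontinui-runner" / "logs"  # Linux

if __name__ == "__main__":
    print(production_log_dir())
```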
Logs are categorized by severity level. You can filter logs by level in the Runner UI or when viewing log files.

DEBUG: Detailed diagnostic information for debugging
INFO: Normal execution information and progress updates
WARN: Warning conditions that don't prevent execution
ERROR: Error conditions that cause failures
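When working with the raw files rather than the Runner UI, a small script can do the level filtering for you. The sketch below assumes the level name appears in uppercase somewhere on each line, which is typical of Rust tracing output; adjust the pattern to your actual log format:

```python
# Minimal sketch: print only log lines at or above a minimum severity level.
# Assumes the level name (DEBUG/INFO/WARN/ERROR) appears verbatim on each line.
import re
import sys

LEVELS = {"DEBUG": 10, "INFO": 20, "WARN": 30, "ERROR": 40}
LEVEL_RE = re.compile(r"\b(DEBUG|INFO|WARN|ERROR)\b")

def filter_log(path: str, min_level: str = "WARN") -> None:
    threshold = LEVELS[min_level]
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            match = LEVEL_RE.search(line)
            if match and LEVELS[match.group(1)] >= threshold:
                print(line.rstrip())

if __name__ == "__main__":
    # e.g. python filter_log.py .dev-logs/runner-backend.log ERROR
    filter_log(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else "WARN")
```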
Adjusting Log Levels: Set the RUST_LOG environment variable to control log verbosity.
RUST_LOG=qontinui_runner=debug,tauri=info
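For example, to enable debug logging for a single development session without touching your shell profile, you can set the variable only for the launched process. The `npm run tauri dev` command in this sketch is an assumption; substitute whatever command you use to start Runner in development mode:

```python
# Sketch: launch the Runner dev environment with RUST_LOG set for this process only.
# The dev command below is hypothetical; replace it with your project's actual command.
import os
import subprocess

env = dict(os.environ, RUST_LOG="qontinui_runner=debug,tauri=info")
subprocess.run(["npm", "run", "tauri", "dev"], env=env, check=True)
```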
Monitor automation execution in real-time through the Runner UI's Logs tab, which provides three specialized log views:

General: View all execution events, errors, and system messages in real-time
Image Recognition: Track image matching results with similarity scores and coordinates
Actions: Hierarchical view of workflow execution with action details
Runner includes an automatic health monitoring system that tracks the Python executor's responsiveness.
Ping/pong checks: Every 5 seconds, Runner sends a ping to the Python executor and expects a pong in response
Process monitoring: Runner watches the Python subprocess for unexpected termination
Latency tracking: Runner measures response latency for each health check
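For intuition, the pattern looks roughly like the sketch below. This is not Runner's actual implementation (which lives in the Rust backend); it is an illustrative Python version of a ping/pong loop over a subprocess's stdin/stdout, and the message shapes are hypothetical:

```python
# Illustrative sketch of a ping/pong health check over a child process.
# "executor" must be a subprocess.Popen started with stdin=PIPE and stdout=PIPE.
import json
import subprocess
import time

def health_check_loop(executor: subprocess.Popen, interval: float = 5.0, max_missed: int = 3) -> None:
    missed = 0
    while executor.poll() is None:  # also detects unexpected termination
        sent_at = time.monotonic()
        executor.stdin.write((json.dumps({"type": "ping"}) + "\n").encode())
        executor.stdin.flush()
        reply = executor.stdout.readline()  # a real implementation would read with a timeout
        latency_ms = (time.monotonic() - sent_at) * 1000
        if not reply or json.loads(reply).get("type") != "pong":
            missed += 1
            if missed >= max_missed:
                print("Pong timeout: executor unresponsive")
                break
        else:
            missed = 0
            print(f"pong received, latency {latency_ms:.1f} ms")
        time.sleep(interval)
```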
Use the built-in Logs tab for real-time monitoring with filtering and search:

1. Click the "Logs" tab in Runner
2. Select log type: General, Image Recognition, or Actions
3. Use filters to narrow down by level or keyword
4. Click actions for detailed information

Tail log files directly for development and debugging:

Windows PowerShell: Get-Content .dev-logs\runner-backend.log -Tail 100 -Wait
macOS/Linux: tail -f .dev-logs/runner-backend.log
Search for errors: Select-String -Path .dev-logs\runner-backend.log -Pattern "error" -CaseSensitive:$false

Open log files in your preferred text editor for analysis:

1. Navigate to the .dev-logs/ directory
2. Open runner-backend.log or runner-frontend.log
3. Search for timestamps or error patterns
4. Use syntax highlighting for JSONL files

The most recent logs contain crucial context about what went wrong. Look for ERROR or WARN level messages immediately before the failure.
Start with ERROR level to find failures, then switch to WARN or INFO to see context. Use DEBUG only when you need detailed execution traces.
Look for keywords like "failed", "timeout", "not found", "exception", or action names. This quickly narrows down the problem area.
If an automation works sometimes and fails other times, compare logs from both scenarios to identify what's different.
Low similarity scores (< 0.8) in Image Logs indicate images aren't matching well. This often means the UI has changed or the image needs to be recaptured.
Consecutive ping timeout warnings indicate the Python executor is overloaded or stuck. This can cause overall automation slowness.
The ai-output.jsonl file contains all AI interactions. This is useful for understanding what the AI agent was trying to do during AI_PROMPT actions.
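A quick way to skim that file is to parse it line by line. The field names used below (timestamp, prompt) are assumptions; check one line of your ai-output.jsonl to confirm the actual keys:

```python
# Minimal sketch: summarize AI interactions from the JSONL chat log.
# The "timestamp" and "prompt" keys are assumptions about the record shape.
import json

with open(".dev-logs/ai-output.jsonl", encoding="utf-8") as f:
    for line in f:
        if not line.strip():
            continue
        entry = json.loads(line)
        prompt = str(entry.get("prompt", ""))[:80]
        print(entry.get("timestamp", "?"), prompt)
```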
Use logs to identify performance bottlenecks and optimize automation execution speed.
Action duration: Action logs include execution timestamps.
How to use: Compare start and end times to find slow actions. Common culprits: WAIT actions, slow API calls, complex image searches.

Image search time: Image recognition logs show search duration.
How to use: Multi-scale searches take longer. Use search regions to limit the search area and speed up matching.

Executor latency: Ping-pong latency indicates executor responsiveness.
How to use: Rising latency over time suggests memory leaks or resource exhaustion. Restart the executor if latency exceeds 500ms.

Screenshot capture time: Logged whenever a screenshot is captured.
How to use: Slow capture (>100ms) can indicate display driver issues or high system load.
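One simple way to spot slow steps in a log file is to measure the gap between consecutive timestamped lines. This sketch assumes each line begins with an ISO-8601 timestamp, which is common for Rust tracing output; adjust the regex if your action logs use a different format:

```python
# Minimal sketch: report large gaps between consecutive timestamped log lines.
# Assumes lines start with an ISO-8601 timestamp; tune threshold_s to taste.
import re
from datetime import datetime

TS_RE = re.compile(r"^(\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:\.\d+)?)")

def slow_steps(path: str, threshold_s: float = 2.0) -> None:
    prev = None
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            match = TS_RE.match(line)
            if not match:
                continue
            ts = datetime.fromisoformat(match.group(1).replace(" ", "T"))
            if prev is not None:
                gap = (ts - prev).total_seconds()
                if gap > threshold_s:
                    print(f"{gap:6.2f}s gap before: {line.strip()[:100]}")
            prev = ts

if __name__ == "__main__":
    slow_steps(".dev-logs/runner-backend.log")
```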
Logs can be cleared from the Runner UI or by deleting log files
Production logs rotate daily to prevent excessive disk usage
Share logs for troubleshooting or bug reports
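When sharing logs, it is easiest to attach the whole directory as a single archive. A minimal sketch for the development-mode layout (.dev-logs/ at the project root):

```python
# Minimal sketch: zip the development log directory into a timestamped archive.
import shutil
from datetime import datetime

archive = shutil.make_archive(
    f"qontinui-logs-{datetime.now():%Y%m%d-%H%M%S}",  # output file name, without extension
    "zip",
    root_dir=".dev-logs",
)
print(f"Wrote {archive}")
```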
"Image not found on screen"
Meaning: The image recognition failed to locate the target image
"Pong timeout: no response for X seconds"
Meaning: Python executor is not responding to health checks
"Failed to parse executor message"
Meaning: Rust couldn't parse JSON from Python stdout
"Python process stdout closed unexpectedly"
Meaning: Python executor terminated without clean shutdown
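To triage these quickly, you can pull the known messages with surrounding context straight out of the backend log. The error strings below are taken from the list above; the file path and context size are defaults to adjust:

```python
# Minimal sketch: print known error messages from the backend log with leading context.
import collections

KNOWN_ERRORS = [
    "Image not found on screen",
    "Pong timeout",
    "Failed to parse executor message",
    "Python process stdout closed unexpectedly",
]

def find_errors(path: str, context: int = 5) -> None:
    recent = collections.deque(maxlen=context)  # last few lines before a hit
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if any(err in line for err in KNOWN_ERRORS):
                print("".join(recent) + line + "-" * 40)
            recent.append(line)

if __name__ == "__main__":
    find_errors(".dev-logs/runner-backend.log")
```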
Always have the Logs tab visible when testing new workflows. This lets you see errors immediately and understand execution flow.
DEBUG level generates excessive output and can slow execution. INFO level provides sufficient detail for most scenarios.
Even successful runs may have warnings that indicate potential issues. Check for retry attempts or low similarity scores.
Before troubleshooting, copy the entire .dev-logs/ directory. This preserves the exact state for analysis.
Don't scroll through thousands of log lines. Use search functionality to find relevant entries quickly.
When debugging, review screenshots alongside logs. Screenshots show what the automation saw, logs show what it decided to do.