Create structured descriptions that enable AI to intelligently select and execute your automation workflows
When AI assistants like Claude control Qontinui Runner through the MCP (Model Context Protocol) server, they need to understand what each workflow does, when it should be run, what it requires, and how to tell whether it succeeded.
Structured workflow descriptions provide this context in a format that both humans and AI can easily understand and parse.
You make code changes to the web extraction feature and ask Claude: “Verify the extraction feature works end-to-end.”
With good workflow descriptions, Claude can identify the relevant workflows, run them in the correct order, verify the results against their stated success and failure indicators, and report or fix what it finds.
Use the existing description field in your workflow JSON. No schema changes or additional fields are required. Structure your description using this natural language format:
[One-line summary of what this workflow does]

Use when: [Conditions that indicate this workflow should be run]
Verifies: [What features/functionality this workflow tests]
Prerequisites: [What must be true before running]
Produces: [What state changes or outputs result from running]
Depends on: [Other workflows that must run first, if any]
Success indicators: [How to know the workflow succeeded]
Failure indicators: [Signs that something went wrong]
| Field | Required | Purpose |
|---|---|---|
| Summary | Yes | First line, clear action-oriented description of what the workflow does |
| Use when | Yes | Conditions or situations when AI should choose this workflow |
| Verifies | Recommended | What features or functionality this workflow tests or validates |
| Prerequisites | Recommended | Required state before running (services running, apps open, login state, etc.) |
| Produces | Optional | Side effects or outputs (new data created, state changes, files written) |
| Depends on | Optional | Other workflow names that must run first (use exact names, case-sensitive) |
| Success indicators | Optional | Observable indicators that the workflow succeeded (visible UI elements, log messages, data created) |
| Failure indicators | Optional | Signs that something went wrong (error messages, missing elements, API failures) |
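Because the format is plain text, it is easy to generate or lint programmatically. The sketch below is a minimal, hypothetical Python helper (the function name, field ordering, and example values are illustrative, not part of Qontinui) that assembles a description in this shape from a summary and a dictionary of fields:

```python
# Hypothetical helper: compose a structured workflow description string.
# Field names mirror the table above; any field you leave out is simply omitted.

FIELD_ORDER = [
    "Use when",
    "Verifies",
    "Prerequisites",
    "Produces",
    "Depends on",
    "Success indicators",
    "Failure indicators",
]

def build_description(summary: str, fields: dict[str, str]) -> str:
    """Return the one-line summary followed by one 'Field: value' line per provided field."""
    lines = [summary, ""]  # blank line separates the summary from the structured fields
    for name in FIELD_ORDER:
        value = fields.get(name)
        if value:
            lines.append(f"{name}: {value}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_description(
        "Clicks the Submit button and verifies the form submits",
        {
            "Use when": "Need to test form submission after changes to form validation code",
            "Prerequisites": "User logged in, form page is open",
            "Success indicators": "Success message appears, API returns 200 status",
        },
    ))
```

Joining the lines with \n also matches how the description is stored in the workflow JSON, as shown in the storage example later in this guide.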
A simple workflow that navigates to a page and verifies it loads correctly.
Clicks Build > State Machine in the website navigation menu to open the State Machine Builder page.
Use when: Need to test or verify the State Machine Builder feature, or after making changes to state machine related code (state-machine-canvas, state nodes, transitions).
Verifies: Navigation menu works, Build dropdown opens, State Machine Builder page loads, canvas renders correctly, no console errors.
Prerequisites: qontinui-web frontend running on localhost:3001, user logged in to the application, a project is selected.
Success indicators: State Machine canvas is visible, toolbar appears, no errors in browser console, URL shows /build/state-machine, page title shows "State Machine".
Failure indicators: 404 error, blank page, canvas doesn't render, console errors about missing components, navigation menu doesn't respond to clicks.
A workflow that creates new data which other workflows may depend on.
Opens the runner's extraction panel and performs a new web extraction on the currently visible application.
Use when: Need to create new extraction data for testing, or to verify the extraction feature works after code changes to extraction, element detection, or screenshot capture.
Verifies: Runner extraction panel opens, screenshot capture works, element detection runs, accessibility tree is parsed, states are identified and classified.
Prerequisites: qontinui-runner is running, target application is visible on screen and fully loaded, a project is loaded in the runner with valid configuration.
Produces: New extraction data (states, screenshots, element annotations, state metadata) in the current project configuration. Data is immediately available for web display.
Success indicators: Extraction completes without errors, at least one state is detected, screenshots are captured successfully, elements are annotated with bounding boxes, state names are generated.
Failure indicators: Extraction hangs or times out, no states detected ("0 items found" in logs), screenshot capture fails with permission errors, accessibility tree parse errors, Python subprocess crashes.

A workflow that depends on data from another workflow.
Navigates to the Web Extraction page in the website and verifies that extraction data is displayed correctly in the UI.
Use when: After creating new extraction data, need to verify it appears correctly in the web interface. Use when testing web display logic, image rendering, or state list components.
Verifies: Web Extraction page loads, extraction data is fetched from API, images render correctly without broken image icons, state list is populated with correct count, element annotations are visible on hover.
Prerequisites: qontinui-web frontend running on localhost:3001, user logged in to the application, extraction data exists in the current project (must have run extraction first).
Depends on: "Start New Web Extraction" (if no extraction data exists yet for this project)
Success indicators: Extraction data visible in the UI, images load successfully, state count matches expected (e.g., 3 states detected), element bounding boxes render on hover, state metadata (timestamps, confidence scores) displays correctly, no API errors in network tab.
Failure indicators: Empty state list, broken image icons, "No extractions found" message appears, API returns 404 or 500 errors, network tab shows failed /api/extractions requests, images fail to load with CORS errors, state count is 0 when data should exist.
For complex verification tasks that require multiple workflows, the Depends on field enables AI to understand ordering requirements and execute workflows in the correct sequence (see the sketch after the steps below).
When you ask: “Verify web extraction works end-to-end”
Load and analyze: AI loads the workflow config and reads all descriptions
Identify relevant workflows: Finds “Start New Web Extraction” and “Navigate to Web Extraction Page” based on “Use when” and “Verifies” fields
Determine order: Sees that page verification “Depends on” extraction workflow (which “Produces” data)
Execute in sequence: Runs extraction first (produces data), then page verification (consumes data)
Verify results: Checks success/failure indicators in logs, screenshots, and API responses
Report or fix: Reports findings and can autonomously fix issues discovered during verification
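As an illustration of the ordering step, here is a minimal Python sketch of the dependency-resolution idea. The workflow records, the parse_depends_on helper, and the hard-coded descriptions are hypothetical and not part of the Qontinui API; the point is that quoted names on the "Depends on" line are enough to derive a run order:

```python
import re

# Hypothetical workflow records: name -> structured description text.
workflows = {
    "Start New Web Extraction": (
        "Opens the runner's extraction panel and performs a new web extraction.\n"
        "Produces: New extraction data (states, screenshots, element annotations)"
    ),
    "Navigate to Web Extraction Page": (
        "Navigates to the Web Extraction page and verifies the data is displayed.\n"
        'Depends on: "Start New Web Extraction" (if no extraction data exists yet)'
    ),
}

def parse_depends_on(description: str) -> list[str]:
    """Return the workflow names quoted on the 'Depends on:' line, if any."""
    for line in description.splitlines():
        if line.startswith("Depends on:"):
            return re.findall(r'"([^"]+)"', line)
    return []

def execution_order(requested: list[str]) -> list[str]:
    """Depth-first ordering so each workflow runs after the workflows it depends on.

    Assumes there are no cyclic dependencies.
    """
    ordered: list[str] = []

    def visit(name: str) -> None:
        if name in ordered or name not in workflows:
            return  # already scheduled, or dependency name not found in the config
        for dependency in parse_depends_on(workflows[name]):
            visit(dependency)
        ordered.append(name)

    for name in requested:
        visit(name)
    return ordered

print(execution_order(["Navigate to Web Extraction Page"]))
# -> ['Start New Web Extraction', 'Navigate to Web Extraction Page']
```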
Begin your summary with an active verb that describes what the workflow does.
Good
“Clicks the Submit button and verifies the form submits”
Bad
“Form submission workflow”
Clearly describe what state the system should be in before and after the workflow.
Good
Prerequisites: User logged in with admin role, database contains test data
Produces: New project record in database with status “active”
Bad
Prerequisites: Logged in
Produces: New project
Success and failure indicators should be things AI can verify in logs, screenshots, or API responses (see the sketch after these examples).
Good
Success: “Success message appears”, API returns 200 status, log shows “Project created”
Failure: 404 error, blank screen, console error “Cannot read property”
Bad
Success: It works
Failure: Something broke
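As a concrete illustration, an observable indicator is one that a script could check. The sketch below is hypothetical (the URL, log path, and indicator strings are placeholders for your own setup) and shows the kind of checks an AI or test harness can run against an API response and a log file:

```python
from pathlib import Path
import urllib.request

# Hypothetical targets: adjust the URL and log path to your setup.
API_URL = "http://localhost:3001/api/projects"
LOG_FILE = Path("runner.log")

def api_returns_200(url: str) -> bool:
    """Success indicator: the API responds with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except OSError:
        return False

def log_contains(path: Path, needle: str) -> bool:
    """Success indicator: a specific message appears in the log."""
    return path.exists() and needle in path.read_text(errors="ignore")

if __name__ == "__main__":
    ok = api_returns_200(API_URL) and log_contains(LOG_FILE, "Project created")
    print("success indicators met" if ok else "failure: indicators not observed")
```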
Use “Depends on” to create workflow chains. Reference the exact workflow name as it appears in the config.
Good
Depends on: “Create Test User” (if no test user exists), “Start Backend Server”
Bad
Depends on: The user workflow
The description is stored as a single string field in the workflow JSON. Use \n for newlines:
{
"id": "workflow-navigate-state-machine",
"name": "Navigate to State Machine Builder",
"description": "Clicks Build > State Machine in the website navigation menu to open the State Machine Builder page.\n\nUse when: Need to test or verify the State Machine Builder feature, or after making changes to state machine related code.\nVerifies: Navigation menu works, State Machine Builder page loads, canvas renders correctly.\nPrerequisites: Website running on localhost:3001, user logged in.\nSuccess indicators: Canvas visible, no console errors, URL shows /build/state-machine.\nFailure indicators: 404 error, blank page, console errors.",
"category": "Main",
"format": "graph",
"version": "1.0.0",
"actions": [
// ... workflow actions
],
"connections": {
// ... action connections
}
}

Organize workflows into categories to help AI understand their purpose and whether they can be executed directly (see the sketch after the table):
| Category | Purpose | Executable |
|---|---|---|
| Main | Primary workflows for execution | Yes |
| Testing | Test verification workflows | Yes |
| UI Automation | UI interaction workflows | Yes |
| Utilities | Helper workflows | Yes |
| Transitions | State machine transitions | Via state machine only |
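As a small, hypothetical illustration (the config layout and the helper below are assumptions, not part of the Qontinui API), a tool could use the category to filter a loaded config down to the workflows that are safe to run directly:

```python
import json

# Categories that can be executed directly; Transitions run only via the state machine.
DIRECTLY_EXECUTABLE = {"Main", "Testing", "UI Automation", "Utilities"}

def executable_workflows(config_path: str) -> list[str]:
    """Return names of workflows whose category allows direct execution."""
    with open(config_path, encoding="utf-8") as f:
        config = json.load(f)
    # Assumes the config stores workflow objects under a "workflows" key;
    # adjust to match your actual config layout.
    return [
        wf["name"]
        for wf in config.get("workflows", [])
        if wf.get("category") in DIRECTLY_EXECUTABLE
    ]

# Example: executable_workflows("project-config.json")
```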
Learn more about AI-powered automation with Qontinui Runner: