Practical Examples
Three examples of increasing complexity: from a minimal RAG flow to a complete insurance agent with form intake, categorization, routing, flow suspension, and differentiated reports.
Example 1 — RAG Chat (Simple)
Objective: Answer user questions by drawing on company documents and the Knowledge Base. The simplest and most common flow.
Flow:
Begin → Retrieval → LLM → Message

Complete Configuration
Begin Node
| Parameter | Value |
|---|---|
| mode | conversational |
| inputFields | (empty) |
Retrieval Node (retrieval_1)
| Parameter | Value |
|---|---|
| query | {{sys.query}} |
| topK | 10 |
| scoreThreshold | 0.5 |
| useReranking | true |
| companyDocs | true |
| knowledgeBase | true |
| legalSources | false |
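The three numeric parameters interact: the threshold filters, reranking reorders, and topK truncates. A minimal sketch of that pipeline (names and logic are illustrative, not the Queria internals; the stand-in "reranker" just re-sorts by score):

```python
# Illustrative sketch of how the Retrieval node's parameters combine.
def retrieve(results, top_k=10, score_threshold=0.5, use_reranking=True):
    """results: list of (doc, score) pairs from the vector search."""
    # 1. Drop chunks below the relevance threshold.
    kept = [(doc, score) for doc, score in results if score >= score_threshold]
    # 2. Optionally rerank (stand-in: re-sort by score, descending).
    if use_reranking:
        kept.sort(key=lambda pair: pair[1], reverse=True)
    # 3. Keep only the topK chunks for the LLM context.
    return kept[:top_k]

hits = [("doc-a", 0.91), ("doc-b", 0.48), ("doc-c", 0.62)]
print(retrieve(hits, top_k=2))  # doc-b is dropped by the 0.5 threshold
```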
LLM Node (llm_1)
| Parameter | Value |
|---|---|
| model | writer |
| temperature | 0.3 |
| maxTokens | 2048 |
System Prompt:
You are a precise and professional corporate assistant. Always respond in English.
Base your answers EXCLUSIVELY on the documents provided in context.
If the requested information is not in the documents, say so explicitly: do not invent.
Always cite sources with numbered references [N].

User Prompt:
Context documents:
{{retrieval_1.formalized_content}}
User question:
{{sys.query}}

Message Node (message_1)
| Parameter | Value |
|---|---|
| content | {{llm_1.content}} |
| format | markdown |
| showCitations | true |
What You Learn
- How to use {{sys.query}} as a retrieval query.
- How to pass {{retrieval_1.formalized_content}} to the LLM prompt.
- The fundamental pattern Begin → Retrieval → LLM → Message.
- How the Message node handles citations and formatting.
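The `{{node.output}}` placeholders used throughout these prompts can be sketched as a simple substitution pass (an assumption for illustration; the engine's actual template resolution is not shown in this document):

```python
import re

# Minimal sketch of {{placeholder}} substitution in prompts and Message nodes.
def render(template, context):
    def sub(match):
        # Unknown placeholders are left untouched rather than erased.
        return str(context.get(match.group(1).strip(), match.group(0)))
    return re.sub(r"\{\{(.*?)\}\}", sub, template)

context = {
    "sys.query": "What is the warranty period?",
    "retrieval_1.formalized_content": "[1] Warranty: 24 months ...",
}
prompt = render("Context documents:\n{{retrieval_1.formalized_content}}\n"
                "User question:\n{{sys.query}}", context)
print(prompt)
```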
Example 2 — Smart Router (Medium)
Objective: Handle different types of questions with different LLMs. Technical questions use the writer model for in-depth responses; simple questions use the planner model for speed; out-of-scope questions receive a static message.
Flow:
Begin → Categorize → [Technical] → LLM writer → Message technical response
→ [Simple] → LLM planner → Message quick response
→ [Out of scope] → Static Message

Complete Configuration
Begin Node
| Parameter | Value |
|---|---|
| mode | conversational |
Categorize Node (categorize_1)
| Parameter | Value |
|---|---|
| input | {{sys.query}} |
| minConfidence | 0.65 |
Categories:
| Name | Description | Goto |
|---|---|---|
| Technical | Questions about products, technical specifications, procedures, manuals, installations, configurations, technical faults | retrieval_technical |
| Simple | General questions, greetings, short clarification requests, questions about basic services | llm_quick |
| OutOfScope | Questions not relevant to company activities, personal topics, prohibited content | message_out_of_scope |
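The routing implied by this table, with `minConfidence` as a gate, can be sketched as follows. The fallback behavior (a below-threshold classification routing to the out-of-scope branch) is an assumption for illustration, not documented engine behavior:

```python
# Sketch of Categorize routing with a confidence gate (assumed fallback).
GOTO = {
    "Technical": "retrieval_technical",
    "Simple": "llm_quick",
    "OutOfScope": "message_out_of_scope",
}

def route(category, confidence, min_confidence=0.65):
    if confidence < min_confidence:
        # Not confident enough to commit to a branch: fail safe.
        return GOTO["OutOfScope"]
    return GOTO[category]

print(route("Technical", 0.82))  # retrieval_technical
print(route("Simple", 0.40))     # message_out_of_scope
```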
Technical Branch
Retrieval Node (retrieval_technical):
| Parameter | Value |
|---|---|
| query | {{sys.query}} |
| topK | 15 |
| useReranking | true |
| companyDocs | true |
| knowledgeBase | true |
LLM writer Node (llm_technical):
| Parameter | Value |
|---|---|
| model | writer |
| temperature | 0.2 |
| maxTokens | 3000 |
System Prompt:
You are a corporate technical expert. Provide detailed, precise, and structured answers.
Use bullet points and headings where useful. Cite sources with [N].
The question was classified as TECHNICAL (confidence: {{categorize_1.confidence}}).

User Prompt:
Available technical documentation:
{{retrieval_technical.formalized_content}}
Technical question:
{{sys.query}}

Message Node (message_technical):
| Parameter | Value |
|---|---|
| content | {{llm_technical.content}} |
| format | markdown |
| showCitations | true |
Simple Branch
LLM planner Node (llm_quick):
| Parameter | Value |
|---|---|
| model | planner |
| temperature | 0.5 |
| maxTokens | 512 |
System Prompt:
You are a friendly corporate assistant. Respond concisely and directly.

User Prompt:
{{sys.query}}

Message Node (message_quick):
| Parameter | Value |
|---|---|
| content | {{llm_quick.content}} |
| format | markdown |
| showCitations | false |
Out of Scope Branch
Message Node (message_out_of_scope):
| Parameter | Value |
|---|---|
| content | This question falls outside the scope of the assistant. You can ask me about products, technical procedures, or company services. |
| format | plain |
| showCitations | false |
What You Learn
- How to use Categorize for semantic routing without rigid conditions.
- How to create parallel branches with different LLMs for efficiency and quality.
- How to use {{categorize_1.confidence}} in the prompt for transparency.
- How to terminate branches with static or dynamic messages.
Example 3 — Insurance Claim Analysis (Complex)
Objective: Manage the entire evaluation pipeline for an insurance claim. The flow collects initial data, categorizes the claim type, retrieves applicable policies and regulations, analyzes coverage, and — if coverage is not confirmed — requests supplementary documentation from the user before generating the final report.
Complete flow:
Begin (form)
→ Categorize (claim type)
→ VariableAssigner (prepare query)
→ Retrieval (policies + regulations)
→ LLM Analysis (coverage assessment)
→ Switch (COVERAGE CONFIRMED?)
→ [YES] LLM Positive Report → Message Positive Outcome
→ [NO] UserFillUp (supplementary documents)
→ Supplementary Retrieval
→ LLM Negative Report → Message Negative Outcome

Complete Configuration
Begin Node
| Parameter | Value |
|---|---|
| mode | task |
Input Fields:
| Name | Label | Type | Required |
|---|---|---|---|
| claimType | Claim Type | select [Auto, Life, Home, Liability, Other] | Yes |
| claimDate | Date of Incident | date | Yes |
| description | Description of Event | text | Yes |
| policyNumber | Policy Number | string | Yes |
| estimatedAmount | Estimated Damage Amount (€) | number | No |
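Before the flow starts, the four required fields must be present and `claimType` must be one of the listed options. A sketch of that precondition (the validation code itself is assumed; only the field list comes from the table above):

```python
# Illustrative check of the Begin form's required fields and select options.
REQUIRED = ["claimType", "claimDate", "description", "policyNumber"]
CLAIM_TYPES = ["Auto", "Life", "Home", "Liability", "Other"]

def validate(inputs):
    errors = [f"missing: {f}" for f in REQUIRED if not inputs.get(f)]
    if inputs.get("claimType") and inputs["claimType"] not in CLAIM_TYPES:
        errors.append("claimType must be one of " + ", ".join(CLAIM_TYPES))
    return errors

print(validate({"claimType": "Auto", "claimDate": "2024-05-01",
                "description": "Rear-end collision", "policyNumber": "PL-123"}))
# → []  (estimatedAmount is optional and may be omitted)
```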
Categorize Node (categorize_1)
| Parameter | Value |
|---|---|
| input | {{sys.inputs.claimType}}: {{sys.inputs.description}} |
| minConfidence | 0.70 |
Categories:
| Name | Description | Goto |
|---|---|---|
| Auto | Road accidents, vehicle damage, car theft, auto liability | varassign_query |
| Life | Death, permanent disability, serious illness, injuries | varassign_query |
| Home | Property damage, fire, flood, home theft, home liability | varassign_query |
| Liability | Civil liability toward third parties, damage to persons or property caused by the insured | varassign_query |
| Other | Claims not classifiable in the above categories | varassign_query |
(All categories converge on the next node; the value {{categorize_1.category}} is used in the query)
VariableAssigner Node (varassign_query)
assignments:
- name: searchQuery
type: concat
value: "Claim {{categorize_1.category}} policy {{sys.inputs.policyNumber}} coverage conditions exclusions {{sys.inputs.description}}"
- name: claimSummary
type: concat
value: |
CLAIM TYPE: {{sys.inputs.claimType}} (category: {{categorize_1.category}})
DATE OF INCIDENT: {{sys.inputs.claimDate}}
POLICY NUMBER: {{sys.inputs.policyNumber}}
ESTIMATED AMOUNT: {{sys.inputs.estimatedAmount}} EUR
DESCRIPTION: {{sys.inputs.description}}

Main Retrieval Node (retrieval_policies)
| Parameter | Value |
|---|---|
| query | {{varassign_query.searchQuery}} |
| topK | 20 |
| scoreThreshold | 0.45 |
| useReranking | true |
| companyDocs | true |
| knowledgeBase | true |
| legalSources | true |
Coverage Analysis LLM Node (llm_analysis)
| Parameter | Value |
|---|---|
| model | writer |
| temperature | 0.1 |
| maxTokens | 3000 |
| jsonMode | false |
System Prompt:
You are a senior insurance assessor with 20 years of experience.
Analyze the provided documentation and determine whether the claim is covered by the policy.
CRITICAL RULES:
1. Base the assessment EXCLUSIVELY on the provided documents. Do not assume undocumented coverage.
2. Always cite contractual and regulatory references with [N].
3. ALWAYS conclude your analysis with one of these exact phrases:
- "VERDICT: COVERAGE CONFIRMED" if the claim falls within the policy coverage
- "VERDICT: COVERAGE NOT CONFIRMED" if there is insufficient evidence of coverage or applicable exclusions apply
4. After the verdict, list any additional documentation required (if any).

User Prompt:
CLAIM DATA:
{{varassign_query.claimSummary}}
POLICY AND REGULATORY DOCUMENTATION:
{{retrieval_policies.formalized_content}}
Perform the complete assessment following the operational rules.

Switch Node (switch_coverage)
| Parameter | Value |
|---|---|
| elseGoto | userfillup_supplementary |
Conditions:
| Variable | Operator | Value | Goto |
|---|---|---|---|
| {{llm_analysis.content}} | contains | COVERAGE CONFIRMED | llm_report_positive |
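This is why rule 3 of the analysis prompt forces an exact verdict phrase: the Switch does a plain substring match. Note that "VERDICT: COVERAGE NOT CONFIRMED" does not contain the substring "COVERAGE CONFIRMED" (the word NOT sits between them), so the two verdicts route cleanly. A sketch of the condition (illustrative, not engine code):

```python
# Sketch of the Switch node's "contains" condition with elseGoto fallback.
def switch_coverage(analysis_text):
    if "COVERAGE CONFIRMED" in analysis_text:
        return "llm_report_positive"
    return "userfillup_supplementary"  # elseGoto

assert switch_coverage("... VERDICT: COVERAGE CONFIRMED") == "llm_report_positive"
assert switch_coverage("... VERDICT: COVERAGE NOT CONFIRMED") == "userfillup_supplementary"
```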
Positive Branch — LLM Report (llm_report_positive)
| Parameter | Value |
|---|---|
| model | writer |
| temperature | 0.2 |
| maxTokens | 2500 |
System Prompt:
You are an insurance communication expert. Draft professional and clear reports.User Prompt:
Based on the following assessment, draft an official claim acceptance report
in formal English, structured with sections: Summary, Documentation Reviewed,
Coverage Assessment, Next Steps.
CLAIM DATA:
{{varassign_query.claimSummary}}
ASSESSMENT:
{{llm_analysis.content}}

Positive Outcome Message Node (message_positive)
| Parameter | Value |
|---|---|
| content | {{llm_report_positive.content}} |
| format | markdown |
| showCitations | true |
Negative Branch — UserFillUp (userfillup_supplementary)
| Parameter | Value |
|---|---|
| message | The initial analysis did not identify certain coverage. To proceed with the assessment, we ask you to provide the supplementary documentation indicated below. |
Fields:
| Name | Label | Type | Required |
|---|---|---|---|
| damagePictures | Photographs of the damage | file | Yes |
| appraisal | Technical appraisal or damage estimate | file | No |
| ownershipDocument | Proof of ownership of the insured property | file | No |
| additionalNotes | Additional notes or clarifications | text | No |
Tips: Accepted formats: PDF, JPG, PNG, DOCX. Maximum size per file: 20MB.
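The format and size constraints above can be sketched as a simple upload gate (the checking code is an assumption for illustration; only the allowed extensions and the 20 MB limit come from the document):

```python
# Illustrative gate for the UserFillUp upload constraints.
ALLOWED = {".pdf", ".jpg", ".png", ".docx"}
MAX_BYTES = 20 * 1024 * 1024  # 20 MB per file

def accept_upload(filename, size_bytes):
    # Extract the extension, case-insensitively.
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in ALLOWED and size_bytes <= MAX_BYTES

print(accept_upload("damage.JPG", 3_000_000))  # True
print(accept_upload("notes.txt", 1_000))       # False (format not accepted)
```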
Supplementary Retrieval (retrieval_supplementary)
| Parameter | Value |
|---|---|
| query | {{varassign_query.searchQuery}} exclusions limitations deductible maximum limits |
| topK | 15 |
| companyDocs | true |
| knowledgeBase | true |
| legalSources | true |
Negative Report LLM (llm_report_negative)
| Parameter | Value |
|---|---|
| model | writer |
| temperature | 0.1 |
| maxTokens | 3000 |
System Prompt:
You are an insurance communication expert. Draft professional, transparent, and precise reports.
The report must be clear about the reasons for denial or the supplementary request,
and accurately indicate the documentation required to proceed.

User Prompt:
Draft an official supplementary documentation request report for the following claim.
Structure the report with sections: Claim Summary, Preliminary Analysis,
Reason for Supplementary Request, Documentation Received, Next Steps.
CLAIM DATA:
{{varassign_query.claimSummary}}
INITIAL ASSESSMENT:
{{llm_analysis.content}}
SUPPLEMENTARY DOCUMENTATION RECEIVED:
{{userfillup_supplementary.inputs.additionalNotes}}
ADDITIONAL REGULATIONS AND CONDITIONS:
{{retrieval_supplementary.formalized_content}}

Negative Outcome Message Node (message_negative)
| Parameter | Value |
|---|---|
| content | {{llm_report_negative.content}} |
| format | markdown |
| showCitations | true |
What You Learn
- How to build a structured form with Begin in task mode.
- How to use Categorize for semantic routing with multiple categories.
- How VariableAssigner prepares composite data for subsequent nodes.
- How LLM with a rigorous System Prompt produces predictable output for Switch.
- How Switch routes based on the textual content of the LLM response.
- How UserFillUp suspends the flow and collects additional documents.
- How a second Retrieval enriches context after user input.
- How to differentiate the tone and structure of reports for positive and negative outcomes.
Queria v3.1.2 -- Canvas Agent Builder