Ground every answer
in what you know.
Connect documents from anywhere, choose a retrieval blueprint, and let every bot response cite real sources. From Hybrid Search to Agentic RAG — test, tune, and deploy with confidence.
One source of truth
Connect 16 data source types into a single knowledge engine. SaaS apps, databases, cloud storage, and file uploads — all indexed, searchable, and synced automatically.
Blueprint-powered retrieval
Choose from 13 retrieval strategies, from Hybrid Search to Agentic RAG. Each blueprint ships with benchmarks for latency, relevance, and cost so you can pick what fits.
Citation-backed answers
Every bot response that uses KB context includes inline citation chips. Users see exactly which document and page the answer came from — building trust and accountability.
How it works
Connect. Ingest. Retrieve. Ground.
Four stages that turn scattered documents into citation-backed bot answers — with live testing and auto-tune at every step.
The knowledge pipeline
connect → ingest → retrieve → ground
Connect
16 source types, one-click sync
Ingest
Parse, chunk, embed, and index
Retrieve
13 blueprints from Hybrid Search to Agentic RAG
Ground
Citation-backed answers in every response
Connect
16 source types, one-click sync
Connect documents from anywhere — file uploads, cloud storage, SaaS apps, databases, APIs, and web crawls. Each source supports configurable sync schedules from real-time to manual, with OAuth flows for apps like Notion, Google Drive, and Slack.
How it works
- 16 source types across 5 categories: Upload, Cloud Storage, SaaS, Databases, API & Web
- OAuth, token, and connection-string authentication per source type
- Sync schedules: real-time, hourly, daily, weekly, or manual
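A connection pairs a source type with an auth method and one of the five sync schedules above. As a minimal sketch, assuming hypothetical names (`SourceConnection` and its fields are illustrative, not Rylvo's actual API):

```python
from dataclasses import dataclass

# The five sync schedules the platform supports.
SYNC_SCHEDULES = {"real-time", "hourly", "daily", "weekly", "manual"}

@dataclass
class SourceConnection:
    name: str
    source_type: str   # e.g. "google-drive", "s3", "postgres"
    auth_method: str   # "oauth", "token", or "connection-string"
    sync_schedule: str

    def __post_init__(self) -> None:
        # Reject schedules outside the supported set at creation time.
        if self.sync_schedule not in SYNC_SCHEDULES:
            raise ValueError(f"unknown sync schedule: {self.sync_schedule}")

drive = SourceConnection("Team Drive", "google-drive", "oauth", "hourly")
```

OAuth-backed sources like Notion, Google Drive, and Slack would carry tokens from their consent flow rather than a static key; the schedule field is the same either way.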
Ingest
Parse, chunk, embed, and index
Documents flow through a full pipeline: parsing PDFs, DOCX, spreadsheets, and code files into text; chunking with configurable size and overlap; embedding with your chosen model; and indexing into Qdrant with multi-tenant isolation.
How it works
- Supports 12+ file formats including PDF, DOCX, CSV, JSON, XML, and code files
- Configurable chunk size, overlap, and embedding model per connection
- BYOK embeddings: OpenAI, Voyage, and Cohere
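Chunking with size and overlap is the step most worth understanding: each chunk repeats the tail of the previous one so sentences that straddle a boundary still land in at least one chunk. A minimal character-based sketch (real pipelines often split on tokens or sentences instead):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into chunks of roughly `size` characters,
    each sharing `overlap` characters with its predecessor."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap  # how far the window advances each time
    return [text[i:i + size] for i in range(0, len(text), step)]

chunks = chunk_text("a" * 500, size=200, overlap=50)
# 500 chars with a 150-char step -> 4 chunks
```

Each chunk is then embedded and written to the vector index (Qdrant here) with a tenant identifier, so queries from one workspace never touch another's vectors.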
Retrieve
13 blueprints from Hybrid Search to Agentic RAG
Choose the retrieval strategy that fits your use case. From simple Hybrid Search to advanced Agentic RAG with multi-hop reasoning, each blueprint defines its own pipeline of chunking, embedding, searching, reranking, and validation.
How it works
- 8 blueprints available today, 5 advanced strategies coming soon
- Each blueprint has benchmarks for latency, relevance, and cost per query
- Auto-tune suggests better blueprints based on live performance metrics
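Because every blueprint carries latency, relevance, and cost benchmarks, picking one is a constrained optimization: maximize relevance inside your latency and cost budgets. A sketch with made-up benchmark numbers (the figures and blueprint scores below are illustrative, not Rylvo's published benchmarks):

```python
# Hypothetical benchmark numbers, for illustration only.
BLUEPRINTS = {
    "hybrid-search": {"latency_ms": 120, "relevance": 0.78, "cost": 1.0},
    "parent-child":  {"latency_ms": 180, "relevance": 0.83, "cost": 1.4},
    "agentic-rag":   {"latency_ms": 900, "relevance": 0.91, "cost": 6.0},
}

def best_blueprint(max_latency_ms: float, max_cost: float) -> str:
    """Pick the highest-relevance blueprint within both budgets."""
    candidates = {
        name: b for name, b in BLUEPRINTS.items()
        if b["latency_ms"] <= max_latency_ms and b["cost"] <= max_cost
    }
    return max(candidates, key=lambda n: candidates[n]["relevance"])

best_blueprint(max_latency_ms=300, max_cost=2.0)
```

Auto-tune effectively runs this comparison continuously, using live metrics from your traffic instead of static benchmarks.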
Ground
Citation-backed answers in every response
Link a knowledge base to any bot and Rylvo automatically patches its prompt with grounding rules. The LLM learns to cite sources, admit when information is missing, and stay within the bounds of your documents — every time.
How it works
- Auto-injected KB usage blocks with citation formatting rules
- Inline source chips rendered in bot responses with document name and page
- Idempotent, versioned prompt patching with full audit trail
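Idempotent patching means linking the same knowledge base twice never duplicates the grounding rules. One way to sketch it, with hypothetical marker strings and rule text (not Rylvo's actual KB block):

```python
KB_BLOCK_START = "<!-- kb-usage v1 -->"
KB_BLOCK_END = "<!-- /kb-usage -->"

KB_USAGE_BLOCK = f"""{KB_BLOCK_START}
When answering from the knowledge base:
- Cite every claim with [doc-name p.N] markers.
- If the documents do not contain the answer, say so plainly.
- Stay within what the retrieved context supports.
{KB_BLOCK_END}"""

def patch_prompt(prompt: str) -> str:
    """Append the KB usage block unless it is already present."""
    if KB_BLOCK_START in prompt:
        return prompt  # already patched: no-op, hence idempotent
    return prompt.rstrip() + "\n\n" + KB_USAGE_BLOCK

once = patch_prompt("You are a support bot.")
twice = patch_prompt(once)  # identical to `once`
```

The version tag in the marker is what lets a later rules revision find and replace the old block cleanly, and each patch event can be recorded for the audit trail.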
Data source types
16 sources
Upload
Cloud Storage
SaaS
Databases
API & Web
Retrieval blueprints
13 strategies
Hybrid Search
Beginner
Dense vector + keyword fused with Reciprocal Rank Fusion
Classic RAG
Beginner
Dense-only similarity search with threshold fallback
Parent-Child
Intermediate
Small child chunks for matching, retrieve parent for context
HyDE
Intermediate
Generates hypothetical answer, embeds that instead of raw query
Contextual Retrieval
Intermediate
Prepends document context to each chunk before embedding
Self-RAG
Advanced
Post-retrieval judge grades each chunk and drops bad ones
CRAG
Advanced
Confidence judge with fallback validation architecture
Agentic RAG
Advanced
LLM agent with kb_search tool, up to 3 hops
Coming soon
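The Hybrid Search blueprint's fusion step, Reciprocal Rank Fusion, is simple enough to sketch in full: each document scores 1/(k + rank) in every list that contains it, and the sums decide the final order. The document IDs below are illustrative:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists with Reciprocal Rank Fusion:
    score(d) = sum over lists of 1 / (k + rank(d))."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense   = ["doc-a", "doc-b", "doc-c"]  # vector similarity order
keyword = ["doc-b", "doc-d", "doc-a"]  # keyword match order
fused = rrf_fuse([dense, keyword])
```

doc-b wins here because it ranks near the top of both lists, even though neither list puts it first; that resilience to any single ranker's blind spots is why RRF is a common default for hybrid retrieval.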
Live playground
Test retrieval before deploying to bots.
Refund policies are governed by Section 4.2 of the Terms of Service. Customers may request a full refund within 30 days of purchase.
The standard processing time for refunds is 5-7 business days. Expedited processing is available for Premium accounts.
For disputes, contact support@company.com with your order number and reason for the refund request.
Gift cards and subscription renewals are non-refundable as per the updated policy effective March 2024.
Below threshold — would be dropped at runtime
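The threshold behavior the playground visualizes can be sketched as a plain filter over scored chunks; the scores and passages below echo the example above and are illustrative:

```python
def filter_by_threshold(
    hits: list[tuple[str, float]], threshold: float = 0.7
) -> list[tuple[str, float]]:
    """Keep only chunks whose similarity score meets the threshold,
    mirroring what the playground shows before a bot sees them."""
    return [(text, score) for text, score in hits if score >= threshold]

hits = [
    ("Refund policies are governed by Section 4.2...", 0.91),
    ("Standard processing time is 5-7 business days...", 0.84),
    ("For disputes, contact support with your order number...", 0.73),
    ("Gift cards and renewals are non-refundable...", 0.61),  # dropped
]
kept = filter_by_threshold(hits, threshold=0.7)
```

Anything under the threshold never reaches the prompt, which is exactly what the playground lets you verify before wiring a knowledge base to a live bot.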
Bot linking & citations
Auto-wire knowledge into every response.
Linked bots
Citation in response
You can request a full refund within 30 days of purchase. Terms of Service §4.2
Processing takes 5-7 business days. Refund Guide p.12
Answers
What teams usually ask
What data sources can I connect to my knowledge base?
Rylvo supports 16 source types: file upload, Amazon S3, Google Cloud Storage, Azure Blob, Notion, Confluence, Google Drive, SharePoint, Zendesk, Intercom, Slack, PostgreSQL, MySQL, MongoDB, API, and web crawl.
What is a retrieval blueprint?
A blueprint is a retrieval strategy that defines how documents are chunked, embedded, searched, and ranked. Rylvo includes 13 blueprints ranging from simple Hybrid Search to advanced Agentic RAG, each with its own pipeline, benchmarks, and recommended use cases.
How do citations work in bot responses?
When a bot uses knowledge base context, Rylvo injects source markers into the prompt. The bot's response includes citation references that are rendered as clickable inline chips showing the source document and page number.
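If the markers follow a bracketed `[doc-name p.N]` convention (an assumed format here, not Rylvo's documented one), extracting them for chip rendering is a short regex pass:

```python
import re

# Matches markers like "[Refund Guide p.12]".
CITATION_RE = re.compile(r"\[(?P<doc>[^\[\]]+?) p\.(?P<page>\d+)\]")

def extract_citations(response: str) -> list[tuple[str, int]]:
    """Pull (document, page) pairs out of a bot response so the UI
    can render them as inline chips."""
    return [(m["doc"], int(m["page"])) for m in CITATION_RE.finditer(response)]

text = "Processing takes 5-7 business days. [Refund Guide p.12]"
cites = extract_citations(text)  # [("Refund Guide", 12)]
```

The UI would then swap each matched marker for a clickable chip that deep-links to the cited document and page.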
Can I test retrieval before deploying it to a bot?
Yes. The Playground lets you run live queries against any knowledge base connection and see exactly which chunks were retrieved, their similarity scores, and whether they pass the configured threshold.
What happens when I link a knowledge base to a bot?
Rylvo automatically patches the bot's prompt with a KB usage block that teaches the LLM how to ground its answers, cite sources, and handle missing information. The patch is idempotent, versioned, and fully audit-trailed.
Does Rylvo support custom embedding models?
Yes. Rylvo supports Bring Your Own Key embeddings from OpenAI (text-embedding-3-small), Voyage, and Cohere. You can add your provider key in Settings.
Turn documents into answers
Your knowledge, finally usable.
Connect your first data source in minutes. Choose a blueprint, test in the playground, and watch your bots answer with confidence and citations.
