Artificial Intelligence
AI, but useful.
AI is a revolution that's here to stay. We help you integrate it into your existing solution or design new AI-based products.
Two ways to approach it
Build a product designed around AI from day one, or integrate AI into what already exists.
Our client successes
We build our own products. That's why we know how to build yours.
AI has moved past the demo phase and into product foundations. What we build for you falls into one of two categories: (1) a product where AI is the core source of value (an agent, an assistant, an automation), or (2) an AI building block added to an existing product: semantic search, classification, data extraction. If you're looking for a POC for a steering committee or a PowerPoint, we're not the right fit. If you want something that ships to production with real users and a controlled inference cost, we'll work well together.
Typical use cases
Vertical business agent
Sales, legal or HR agent with RAG over internal documents, session memory and human fallback.
Structured extraction
OCR + LLM pipeline that turns PDFs, scans and invoices into structured, usable data (quotes, contracts, IDs).
AI product support
Chatbot embedded in the product with access to user context, human escalation and observability of response quality.
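The retrieval step behind a RAG agent like the ones above can be sketched in a few lines. This is a minimal illustration, not our production stack: bag-of-words cosine similarity stands in for a real embedding model, and the document chunks and source ids are invented examples.

```python
import re
from collections import Counter
from math import sqrt

# Toy corpus standing in for chunked internal documents (invented examples).
CHUNKS = [
    {"source": "hr-handbook.pdf#p12",
     "text": "Employees accrue 25 vacation days per year."},
    {"source": "sales-playbook.pdf#p3",
     "text": "Discounts above 15 percent require director approval."},
]

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[dict]:
    q = embed(query)
    ranked = sorted(CHUNKS, key=lambda c: cosine(q, embed(c["text"])), reverse=True)
    return ranked[:k]

# The top chunk and its source id go into the LLM prompt, so the agent's
# answer can cite "hr-handbook.pdf#p12" back to the user.
best = retrieve("how many vacation days do employees get")
```

In production the same shape holds, with a vector database (pgvector, Qdrant) replacing the in-memory list and a real embedding model replacing the word counts.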
Our method in brief
Four phases: (1) Use-case audit: we identify where AI actually creates value versus where it's a gimmick. (2) Stack selection: proprietary model (Claude, GPT, Mistral) or self-hosted open-weight (Llama, Qwen), depending on GDPR, cost and latency constraints. (3) Iterative build: we start with one use case, measure quality, cost and latency, then expand. (4) Production: observability (LangSmith, Langfuse), guardrails, fallbacks, drift monitoring. No endless experimentation: one product milestone every 2 weeks.
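The production phase above (fallbacks plus per-request measurement) can be sketched as a wrapper around the model call. Both model functions here are invented placeholders; the primary deliberately fails so the fallback path is visible.

```python
import time

def call_primary(prompt: str) -> str:
    # Placeholder for the primary model API call; it always fails here
    # so the example exercises the fallback path.
    raise TimeoutError("primary model timed out")

def call_fallback(prompt: str) -> str:
    # Placeholder for a cheaper or self-hosted fallback model.
    return f"[fallback] answer to: {prompt}"

def answer(prompt: str, retries: int = 1) -> dict:
    start = time.monotonic()
    for _attempt in range(retries + 1):
        try:
            text = call_primary(prompt)
            model = "primary"
            break
        except Exception:
            continue
    else:
        # Every retry failed: degrade gracefully instead of erroring out.
        text = call_fallback(prompt)
        model = "fallback"
    # Model used and latency are returned per request, ready to feed
    # an observability tool (LangSmith, Langfuse, Helicone).
    return {"text": text, "model": model, "latency_s": time.monotonic() - start}

result = answer("summarise the contract")
```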
Stack & technologies
LLMs: Claude 4 Opus / Sonnet / Haiku, GPT-4o / o1, Gemini 2.0, Mistral Large, Llama 3.3 and Qwen 2.5 self-hosted. Frameworks: LangChain, LangGraph, Vercel AI SDK, Mastra. Vector DBs: pgvector, Qdrant, Pinecone. Observability: LangSmith, Langfuse, Helicone. Hosting: Vercel, AWS Bedrock, Azure OpenAI depending on regulatory requirements.
// 6 AI products shipped to production in 2024-2025 (including Moriarty, The Patch, CS Consulting)
Frequently asked questions
How much does a production AI product cost?
Development ranges from €30k for a focused integration to €150k+ for a full vertical agent. Inference cost matters more: €200 to €5,000/month depending on volume and model choice. We size this line item at scoping to avoid surprises.
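To make that inference line item concrete, here is the back-of-the-envelope sizing we mean. The request volume, token counts and per-million-token prices below are illustrative assumptions, not real rates for any provider.

```python
def monthly_inference_cost(
    requests_per_day: int,
    input_tokens: int,          # avg prompt size per request
    output_tokens: int,         # avg completion size per request
    price_in_per_mtok: float,   # assumed price per 1M input tokens
    price_out_per_mtok: float,  # assumed price per 1M output tokens
    days: int = 30,
) -> float:
    per_request = (input_tokens * price_in_per_mtok
                   + output_tokens * price_out_per_mtok) / 1_000_000
    return per_request * requests_per_day * days

# Illustrative scenario: 2,000 requests/day, 1,500 input + 400 output
# tokens each, at assumed prices of 3 and 15 per million tokens.
cost = monthly_inference_cost(2000, 1500, 400, 3.0, 15.0)
```

With these assumed numbers the scenario lands at 630 per month, inside the 200-to-5,000 range above; doubling the volume or switching to a larger model moves it fast, which is why we size it at scoping.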
Open-source or proprietary API?
For 80% of cases, proprietary APIs (Claude, GPT, Mistral) win on quality, latency and time-to-market. We go self-hosted (Llama, Qwen) when GDPR/sovereignty requires it, or when volume justifies the infra investment. The choice is made at scoping, not by dogma.
How do you handle hallucinations?
Three levers: (1) strict RAG with source citations visible to the user, (2) structured validation (JSON schema, function calling) rather than free text whenever possible, (3) application-level guardrails and human-in-the-loop on high-impact decisions. We measure the hallucination rate continuously via automated evaluations.
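Lever (2), structured validation instead of free text, can be sketched like this. The invoice schema and the simulated model outputs are invented examples; in practice the JSON would come from a function-calling or JSON-mode response.

```python
import json

# Required fields and their types for an invoice-extraction response
# (invented schema for illustration).
SCHEMA = {"vendor": str, "total": float, "currency": str}

def validate(raw: str) -> dict:
    """Parse model output as JSON and enforce the schema, instead of
    trusting free text to be correct."""
    data = json.loads(raw)
    for field, ftype in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise TypeError(f"{field}: expected {ftype.__name__}")
    return data

# A well-formed response passes through as structured data.
ok = validate('{"vendor": "ACME GmbH", "total": 1240.5, "currency": "EUR"}')

try:
    # A hallucinated free-text amount fails fast, before reaching the user,
    # and can trigger a retry or human-in-the-loop review.
    validate('{"vendor": "ACME GmbH", "total": "around one thousand"}')
    rejected = False
except (ValueError, TypeError):
    rejected = True
```

Rejections like the second call are exactly what the automated evaluations count when we track the hallucination rate.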
What are typical timelines?
A working POC in 4-6 weeks, an AI MVP in production in 8-12 weeks, a mature AI product in 4-6 months. We ship in 2-week cycles with demos, so you see progress from week three.
Got a project?
Nothing beats a conversation to shape the right solution together.





