MANN+HUMMEL Challenge · Water & Membrane Solutions · Call 28

The expertise doesn't leave
when the engineer does

KORE is an intelligent memory layer that captures institutional knowledge, compounds it with every interaction, and delivers expert-quality recommendations at scale — automatically.

Product overview — 3 minutes

The problem

MANN+HUMMEL doesn't have a speed problem.
It has a knowledge transfer problem.

Every senior engineer who leaves takes institutional knowledge with them. Every new inquiry starts from scratch. Every recommendation depends on who happens to be available. The PDQ process isn't broken — it's a symptom of an organisation that has never had a way to make expertise portable.

01

Incomplete customer inputs

PDQs arrive with missing parameters. Technical teams spend days following up instead of solving. Every delay erodes customer confidence.

02

Knowledge lives in people, not systems

When Alex is unavailable, the organisation stalls. There is no codified decision logic to fall back on. Every case that requires judgment creates a bottleneck.

03

Manual specification translation

"Full capacity in summer" means something specific in membrane engineering. Translating customer language into technical thresholds requires experience that can't be looked up.

04

Competitor analysis is manual and slow

Engineers research incomplete datasheets and rely on memory to identify MH alternatives. Inconsistent, slow, and impossible to scale as competitor product ranges grow.

"The solution isn't faster search. It's a system that learns to think like their best engineer — and gets better every time it's used."

Three personas. Three different needs.
One system that serves all three.

Ben — Sales Engineer
Alex — Senior Expert
The Customer

Ben

Applications / Sales Engineer

Ben's anxiety isn't speed — it's confidence under uncertainty. When he sends a recommendation, he's putting his professional judgment on the line. A system that gives him a shortlist he can't defend won't get used.

What Ben needs is to walk into every customer conversation already prepared. KORE delivers this automatically, before he even opens the inquiry.

What disappears: manual parameter extraction, catalogue searching, cross-referencing past cases, formatting outputs, remembering compliance requirements.

Ben's journey with KORE

Today: Receives PDQ via email. 2 hours to extract parameters, search catalogue, chase Alex for edge cases. Output inconsistent.

With KORE: Opens inquiry. Parameters already extracted and confidence-scored. Shortlist generated. Historical cases surfaced. Compliance checked. Customer summary drafted.

Ben's job: Read in 90 seconds. Apply judgment. Send. The system did the hours. Ben did the judgment.

Alex

Senior Technical Expert

Alex is simultaneously the most valuable person in the process and its biggest bottleneck. He's been asked to document his knowledge before. It never worked — because it required him to stop working and start writing.

KORE captures Alex's knowledge through his normal workflow. Every override, every case he touches, every one-sentence rationale he gives is absorbed into the memory layer automatically.

Result: His accumulated knowledge handles 80% of cases. Only the 20% that genuinely need him reach him — as structured decisions, not open-ended reviews.

Alex's journey with KORE

Today: Reviews all complex cases. Bottleneck for every edge case. Institutional knowledge trapped in his head, not in the system.

With KORE: Only genuinely novel cases reach him. Presented as structured one-sentence decisions with full context. Override rationale captured automatically.

Over time: His past decisions are in the system. His impact multiplies while his cognitive load drops.

The Customer

Industrial Engineer / Procurement

The customer knows their operational problem. They don't know membrane science. Asking them to complete a 40-field PDQ presupposes a translation capability they don't have — which is why inputs are incomplete and follow-up cycles are long.

KORE replaces the form with a conversation. The customer describes their challenge in plain language. The system builds the technical parameter picture invisibly, one clarifying question at a time.

Output: A recommendation document clear enough to share internally with procurement — without needing MH on the call to explain it.

Customer journey with KORE

Today: Receives 40-field PDQ. Half the fields are unclear. Guesses at answers. Waits days for follow-up from Ben.

With KORE: Describes challenge in plain language. System asks one clarifying question at a time. Parameters built invisibly.

Result: Receives a clean recommendation they can take to procurement without needing MH present to translate.

The product

Five agents. Zero manual steps
between inquiry and answer.

Intake Agent — Extract + score
Gap Detector — Flag + follow up
Matching Engine — Rank + explain
Compliance Agent — Filter by regs
Competitor Agent — Equivalency map

Intake Agent

Processes incoming inquiries in any format — PDQ, email, PDF, free text. Runs a two-pass extraction: first pulling every parameter mentioned explicitly or implied by context, then scoring each by confidence. A customer who writes "our plant operates at full capacity during summer months" has communicated a duty cycle. The agent knows this. Missing critical parameters trigger targeted follow-up questions — specific questions whose answers resolve the highest-uncertainty parameters first.

KORE — Ben's Workspace

Inbox
Pharma MX — UF System · High confidence
Municipal SG — RO · Medium confidence
Food & Bev JP · Low confidence

Pharma MX — UF System
Overall match confidence: 78%

Parameters extracted
Feed pH: 6.8
Temp range: 20–40 °C
Flow rate: 120 m³/h
SDI: ⚠ missing — follow-up sent
Chlorine tolerance: ⚠ missing — follow-up sent

Shortlist
01 MemPulse UF-400X — 94%
02 AquaCore MF-200 — 81%
03 PharmaFilter NF-50 — 73%

Memory context — 3 analogous cases
① DE Pharma 2024 — UF-400X → ✓
② SG Municipal 2023 — similar pH
③ JP Food & Bev 2022 — different outcome

Compliance
EU GMP Annex 1 · FDA 21 CFR · NSF/ANSI 61

Three domains. One event-driven backbone.
Everything compounds.

The architecture is event-driven because a PDQ arriving is not a single transaction — it triggers five parallel operations. Request-response would serialise what KORE runs concurrently.

INTERACTION DOMAIN
Ben's Workspace — sales engineer
Alex's Console — expert override
Customer Portal — conversational intake

APACHE KAFKA — EVENT BUS

INTELLIGENCE DOMAIN
Intake Agent — extract + score (Claude API)
Gap Detector — flag + follow up (LangGraph)
Matching Engine — rank + explain (constraint + LLM)
Compliance Agent — filter by regs (rule engine)
Competitor Agent — equivalency map (continuous ingestion)

MEMORY WRITE BUS — ASYNC

DATA DOMAIN
Product Catalogue — PostgreSQL
Decision Memory — Qdrant + MongoDB
Knowledge Graph — Neo4j Community
Competitor Graph — Neo4j
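The fan-out shape can be sketched in a few lines of asyncio. Agent behaviour is stubbed here; only the concurrency structure is the point: all five agents consume the same event, so none waits on another.

```python
import asyncio

AGENTS = ["intake", "gap_detector", "matching", "compliance", "competitor"]

async def run_agent(name: str, event: dict) -> str:
    await asyncio.sleep(0.01)          # stand-in for model calls and DB lookups
    return f"{name}:{event['inquiry_id']}"

async def on_pdq_arrival(event: dict) -> list[str]:
    # Request-response would await each agent in turn; the event model
    # lets all five proceed as soon as the event is published.
    return await asyncio.gather(*(run_agent(a, event) for a in AGENTS))

results = asyncio.run(on_pdq_arrival({"inquiry_id": "pharma-mx-uf"}))
print(results)
```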
Technology

Every choice made against
a simpler alternative.

No tool chosen fashionably. Every component earns its place by enabling a specific capability the simpler alternative couldn't provide. All open-source. All self-hostable. No proprietary data leaves controlled infrastructure.

Backend — Python + FastAPI (vs Node.js / Django / Flask)
Python because the AI ecosystem is Python-native. FastAPI because it is async-native with Pydantic schema validation — essential when complex nested parameter sets flow between five agents without blocking.
Event Bus — Apache Kafka (vs RabbitMQ / Redis Streams)
Durable and replayable. Events are written to disk, so if the memory layer ever needs rebuilding, the entire history can be replayed. RabbitMQ deletes consumed messages — for a memory system, losing events is architecturally unacceptable.
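The replay property can be sketched with a plain list standing in for a Kafka topic. Event fields and the derived state (a per-customer case count) are illustrative; the point is that state is always reconstructible from the log.

```python
event_log: list[dict] = []   # Kafka topic stand-in: append-only, never deleted

def publish(event: dict) -> None:
    event_log.append(event)

def rebuild_state(log: list[dict]) -> dict[str, int]:
    """Replay the full history to reconstruct derived state from offset zero."""
    state: dict[str, int] = {}
    for event in log:
        state[event["customer"]] = state.get(event["customer"], 0) + 1
    return state

publish({"customer": "pharma_mx", "type": "pdq_received"})
publish({"customer": "municipal_sg", "type": "pdq_received"})
publish({"customer": "pharma_mx", "type": "expert_override"})

state = rebuild_state(event_log)
print(state)
```

Because replay is deterministic, rebuilding twice yields the same state, which is what makes "rebuild the memory layer from history" a safe operation rather than a recovery gamble.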
Vector Store — Qdrant (vs Pinecone / pgvector / Weaviate)
Payload filtering — semantic search combined with structured constraints. Pinecone means data leaves controlled infrastructure; pgvector degrades past ~100k vectors. Qdrant is open-source, self-hosted, Rust-native, and sub-100ms at tens of millions of vectors.
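A toy sketch of payload filtering, with made-up two-dimensional vectors and payloads in place of real embeddings and the Qdrant client: hard constraints filter first, similarity ranks the survivors.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

points = [
    {"id": "UF-400X", "vector": [0.9, 0.1], "payload": {"segment": "pharma"}},
    {"id": "MF-200",  "vector": [0.8, 0.3], "payload": {"segment": "municipal"}},
    {"id": "NF-50",   "vector": [0.7, 0.2], "payload": {"segment": "pharma"}},
]

def search(query: list[float], must: dict) -> list[str]:
    # Structured constraints are hard filters, not soft similarity signals.
    hits = [p for p in points
            if all(p["payload"].get(k) == v for k, v in must.items())]
    hits.sort(key=lambda p: cosine(query, p["vector"]), reverse=True)
    return [p["id"] for p in hits]

print(search([1.0, 0.0], must={"segment": "pharma"}))
```

The design point: a semantically similar product in the wrong segment never appears, no matter how close its vector is.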
Knowledge Graph — Neo4j Community (vs PostgreSQL join tables)
Equivalency is multi-hop and many-to-many. "Competitor X matches MH Product A for municipal but Product B for pharmaceutical" is a graph problem. Relational tables work at small scale and become unmaintainable as the graph grows.
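The multi-hop question can be sketched as a breadth-first search over context-labelled edges. Product names and edges here are invented for illustration; the real system would run this kind of traversal in Neo4j.

```python
from collections import deque

# Edges carry the application context they were validated in:
# node -> [(context, equivalent node), ...]
edges = {
    "CompX-U100": [("municipal", "MH-A"), ("pharmaceutical", "MH-B")],
    "MH-A":       [("municipal", "CompY-R20")],
    "CompY-R20":  [("municipal", "MH-C")],
}

def reachable_mh(start: str, context: str) -> set[str]:
    """Collect MH products reachable through context-matching equivalences."""
    seen, queue, found = {start}, deque([start]), set()
    while queue:
        node = queue.popleft()
        for ctx, nxt in edges.get(node, []):
            if ctx == context and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                if nxt.startswith("MH-"):
                    found.add(nxt)
    return found

# Two hops: MH-A directly, then MH-C via CompY-R20 — the multi-hop case
# that join tables make painful.
print(reachable_mh("CompX-U100", "municipal"))
```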
Agent Orchestration — LangGraph (vs LangChain / AutoGen)
Agent workflows branch, loop, and execute conditionally. LangChain's sequential chain model breaks here. LangGraph models workflows as directed graphs with typed state — the correct primitive for a multi-stage pipeline where the path depends on intermediate results.
LLM — Claude API + Ollama fallback (vs GPT-4 only)
Claude for reasoning-heavy PDQ extraction; Ollama (Mistral / Llama 3) as a local fallback for classification with no data egress. An LLM abstraction layer means provider switching is a config change, not a rewrite. Zero proprietary data sent externally during local processing.
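The abstraction-layer claim can be sketched with a provider protocol and a registry keyed by config. The provider classes here are stand-ins, not real API clients; a production version would wrap the Claude API and a local Ollama model behind the same interface.

```python
from typing import Protocol

class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class ClaudeProvider:
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"     # placeholder for a remote API call

class OllamaProvider:
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"      # placeholder for a local model call

PROVIDERS: dict[str, LLMProvider] = {
    "claude": ClaudeProvider(),
    "ollama": OllamaProvider(),
}

def get_llm(config: dict) -> LLMProvider:
    return PROVIDERS[config["llm_provider"]]

# Switching providers is literally editing one config value.
llm = get_llm({"llm_provider": "ollama"})
print(llm.complete("classify inquiry"))
```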

Not better features.
Better thinking.

01

It compounds. Others repeat.

Every validated recommendation, every expert override, every post-deployment outcome is absorbed into a persistent memory layer. The system at month 12 is a different class of product from the one at month 1. Competing tools start from the same baseline every time. KORE builds a moat that widens with every use.

02

It reasons, not just retrieves.

Episodic memory for precedent. Knowledge graph for relational reasoning. Constraint-satisfaction for technical correctness. No single mechanism achieves all three simultaneously. Combining them produces recommendations that are precedent-grounded, relationally aware, and technically correct.

03

Expertise captured without asking.

Every knowledge management initiative fails at the same point: it requires experts to stop working and start writing. KORE captures Alex's knowledge through his normal workflow — every override, every one-sentence rationale — without asking him to do anything differently. The knowledge base fills itself.

04

IP boundary is architectural, not a setting.

Competitor tools rely on permission filters — a misconfiguration exposes proprietary data under time pressure. KORE's internal/external boundary is enforced at the data layer. Proprietary specifications are structurally inaccessible to external outputs, not hidden behind a toggle that can be forgotten.

Built before. In harder domains.

GridTrace — Nous Research Hermes Hackathon

Institutional memory agent for US power grid operators. Fed 600 real incident records spanning 2019–2026. When a new fault arrives, GridTrace doesn't say "this happened before" — it says "here is the specific fix recommended after this incident that was never implemented, which may be why it's happening again." Session 50 is measurably faster and more accurate than session 1. This compounding memory logic is exactly what KORE applies to membrane product selection.

▶ Watch demo

Cassandra — DevOps Institutional Memory

A DevOps-native agent giving every codebase a permanent, queryable institutional memory. Intercepts events across the software development lifecycle, captures reasoning at the moment of decision, surfaces it precisely when a developer needs it. The problem Cassandra solves — decisions made, context lost, next person starts blind — is structurally identical to the problem KORE solves for MANN+HUMMEL.

▶ Watch demo
Risk & honesty

Stress-tested,
not just designed.

Risk: Ben doesn't trust the recommendations
Impact if realised: High — adoption failure. This is the most predictable failure mode for AI in expert workflows: Ben uses the system for extraction only, ignores the recommendations, and returns to manual selection. The system becomes an expensive form parser.
Mitigation in place: Full reasoning trace on every recommendation — which parameters drove the decision, which cases were analogous, what would change the result. Transparency is the adoption mechanism, not a UI feature.

Risk: LLM extraction errors on complex PDQs
Impact if realised: High — trust damage. Real PDQs are messy — ambiguous language, contradictory parameters, regional shorthand. A wrong extraction produces a wrong recommendation, customer trust is damaged, and Ben stops using system output.
Mitigation in place: The system never silently produces a wrong answer. Confidence scoring surfaces uncertainty explicitly, and Ben reviews extracted parameters before the matching engine runs. Draft, not fact — always.

Risk: Customer data leaking across tenants
Impact if realised: Catastrophic. Customer A's process parameters appearing in Customer B's recommendation would mean legal liability, contract termination, reputational damage to MANN+HUMMEL, and regulatory exposure.
Mitigation in place: Partitioned at the storage layer — separate Qdrant collections and PostgreSQL row-level security. The memory layer learns from anonymised patterns only. Cross-tenant access is structurally impossible, not permission-blocked.
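A sketch of the structural boundary: each tenant's query path holds only its own store object, so a cross-tenant read has no code path at all. Class and field names are illustrative; in the real system each store maps to a per-tenant Qdrant collection.

```python
class TenantStore:
    def __init__(self, tenant_id: str):
        self.tenant_id = tenant_id
        self._cases: list[dict] = []     # stands in for a per-tenant collection

    def add_case(self, case: dict) -> None:
        self._cases.append(case)

    def search(self) -> list[dict]:
        return list(self._cases)         # can only ever see this tenant's data

stores = {t: TenantStore(t) for t in ("customer_a", "customer_b")}
stores["customer_a"].add_case({"param": "feed_ph", "value": 6.8})

# Customer B's query path holds only Customer B's store; A's data is
# unreachable by construction, not hidden behind a filter that could be
# misconfigured under time pressure.
print(stores["customer_b"].search())
```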

The deliberate trade-off

No fine-tuning at launch. The system uses foundation models with structured prompting rather than domain-specific fine-tuned models. Fine-tuning would improve extraction quality significantly — but it requires substantial labelled training data that doesn't exist yet. After 12 months of operation, the accumulated validated cases become the fine-tuning dataset. Fine-tuning is the next evolution, not the starting point: starting there would mean committing to an approach before the data exists to justify it.

KORE is ready to build.
Not after planning. Now.

Selection unlocks prototype development through the June–August evaluation window. The architecture is defined. The technology is chosen and justified. The risks are mapped with specific mitigations. The prior builds — GridTrace and Cassandra — demonstrate that the core capability works in production, in harder domains, under real conditions. What's needed: access to MANN+HUMMEL's product catalogue data and six months to demonstrate what a compounding memory layer looks like when pointed at the right problem.

"Every other submission will propose a faster way to do what MANN+HUMMEL already does. KORE proposes something different: a system that makes institutional expertise permanent, portable, and compounding. Not selecting this is not a conservative choice. It's leaving the most valuable capability on the table."