- How It Works, Examples & Query Translation Logic
- 1 Purpose
- 2 Philosophy: Ask → Anticipate → Act
- 3 High-Level Architecture
- 4 Query Translation Process
- 5 Prompt Template Example
- 6 Example Queries & Translations
- 7 Security and Guardrails
- 8 Visualization Options
- 9 Conversational Context
- 10 Multi-Modal Results
- 11 KPIs for NLQ Effectiveness
- 12 Why It Matters
- 13 Takeaway
How It Works, Examples & Query Translation Logic #
1 Purpose #
The NLQ layer is what turns EA 2.0 from a repository into an intelligent colleague.
It lets architects, analysts, or executives simply ask questions in English and get graph-driven answers — no Cypher, SQL, or modeling knowledge required.
2 Philosophy: Ask → Anticipate → Act #
- Ask — The user expresses intent: “Show apps supporting Finance that are out of support.”
- Anticipate — The reasoning engine interprets, checks context, and expands the query intelligently.
- Act — Results return as visual graphs, metrics, or recommended actions.
The goal: fluency between human intent and enterprise data.
3 High-Level Architecture #
[User Query (Text)]
↓
Language Parser (NLP + Prompt Templates)
↓
Reasoning API / LLM Translator
↓
Graph Query Generator (Cypher / Gremlin)
↓
Graph DB → Result Set → Formatter
↓
Visualization (UI + Dashboard)
Each stage is stateless, audited, and explainable.
The translation chain is logged — so you can see why an answer was produced.
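The stage chain above can be sketched as a pipeline of pure functions, each logging its hop so the translation chain is reconstructable. This is a minimal illustration only: the translator and graph database are stubbed, and all function names are assumptions, not the product's actual API.

```python
import time

def parse(text: str) -> dict:
    # Language Parser stage: normalize the raw question.
    return {"text": text.strip().lower()}

def translate(parsed: dict) -> str:
    # Reasoning API / LLM Translator stage (stubbed): pick a pre-approved template.
    if "finance" in parsed["text"]:
        return ("MATCH (c:Capability {name:'Finance'})-[:USES]->(a:Application) "
                "RETURN a.name")
    return "MATCH (a:Application) RETURN a.name LIMIT 25"

def execute(cypher: str) -> list:
    # Graph DB stage (stubbed): a canned result set stands in for the database.
    return [{"a.name": "SAP"}, {"a.name": "Workday"}]

def run_nlq(text: str) -> dict:
    trace = []  # the logged translation chain: why this answer was produced
    parsed = parse(text)
    trace.append(("parse", parsed))
    cypher = translate(parsed)
    trace.append(("translate", cypher))
    rows = execute(cypher)
    trace.append(("execute", len(rows)))
    return {"rows": rows, "trace": trace, "ts": time.time()}

result = run_nlq("Show apps supporting Finance")
```

Because every stage is a stateless function of its input, any answer can be replayed from the trace alone.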
4 Query Translation Process #
Step 1 – Intent Detection
The LLM identifies question type (list, metric, trend, relation).
Step 2 – Entity Extraction
Keywords are mapped to ontology classes (Application, Capability, Risk, Control…).
Step 3 – Template Selection
Matches a pre-approved query pattern, e.g.
pattern: “apps supporting {capability} with {condition}”
Step 4 – Safe Expansion
Adds filters for tenant, sensitivity, or timeframe.
Step 5 – Execution
The query runs against read-allowed nodes only.
Step 6 – Result Formatting
Returns JSON for table, chart, or network view.
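Steps 2–4 can be illustrated in a few lines. In this sketch the ontology map, template keys, and tenant parameter are assumptions for the example; the key point is that the query is assembled from a pre-approved pattern with bound parameters, never from free-form string interpolation.

```python
# Step 2 – Entity Extraction: map question keywords to ontology classes.
ONTOLOGY = {"application": "Application", "capability": "Capability",
            "risk": "Risk", "control": "Control"}

# Step 3 – Template Selection: only pre-approved patterns are eligible.
TEMPLATES = {
    "apps_supporting": ("MATCH (c:Capability {name: $capability})-[:USES]->"
                        "(a:Application) WHERE a.tenant = $tenant RETURN a.name"),
}

def translate(question: str, tenant: str) -> tuple:
    classes = [cls for kw, cls in ONTOLOGY.items() if kw in question.lower()]
    if "Capability" in classes and "Application" in classes:
        template = TEMPLATES["apps_supporting"]
    else:
        raise ValueError("no approved template matches this question")
    # Step 4 – Safe Expansion: values are bound as parameters, so the tenant
    # filter cannot be escaped by text inside the question. (The capability
    # value here is a toy extraction, hard-coded for illustration.)
    params = {"capability": "Finance", "tenant": tenant}
    return template, params

query, params = translate("Which applications support the Finance capability?", "acme")
```

A question that matches no approved template is rejected outright rather than translated speculatively.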
5 Prompt Template Example #
PROMPT = """
You are an Enterprise Architecture assistant.
Translate the user's question into a Cypher query over the ontology:
(Capability)-[:USES]->(Application)-[:STORES]->(Data)-[:HAS_RISK]->(Risk)
Return only read-safe properties.
Question: {user_query}
"""
This template constrains the LLM, reducing the risk of hallucinated or unsafe queries.
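In practice the rendered prompt is only half the guardrail; the model's output is also checked before execution. The sketch below repeats the template for self-containment and adds an illustrative read-only validator (the keyword list and function names are assumptions, not the product's policy engine).

```python
PROMPT = """\
You are an Enterprise Architecture assistant.
Translate the user's question into a Cypher query over the ontology:
(Capability)-[:USES]->(Application)-[:STORES]->(Data)-[:HAS_RISK]->(Risk)
Return only read-safe properties.
Question: {user_query}
"""

# Cypher clauses that mutate the graph; a read-only query may contain none of them.
FORBIDDEN = ("CREATE", "MERGE", "DELETE", "SET", "DROP", "REMOVE")

def render(user_query: str) -> str:
    # Fill the single slot in the constrained template.
    return PROMPT.format(user_query=user_query)

def validate(cypher: str) -> bool:
    # Crude substring screen for the sketch; a real deployment would
    # parse the query and apply a policy engine instead.
    upper = cypher.upper().lstrip()
    return upper.startswith("MATCH") and not any(kw in upper for kw in FORBIDDEN)

prompt = render("List all applications used by the Finance capability.")
```

Only queries that pass validation reach the graph database.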
6 Example Queries & Translations #
| Natural Language Query | Generated Cypher |
|---|---|
| “List all applications used by the Finance capability.” | MATCH (c:Capability{name:'Finance'})-[:USES]->(a:Application) RETURN a.name; |
| “Show capabilities impacted if CRM is decommissioned.” | MATCH (a:Application{name:'CRM'})<-[:USES]-(c:Capability) RETURN c.name; |
| “Which data sources contain PII and are linked to high risk?” | MATCH (d:Data)-[:HAS_RISK]->(r:Risk{level:'High'}) WHERE d.sensitivity='PII' RETURN d.name,r.level; |
| “Total cost of applications without owners.” | MATCH (a:Application) WHERE NOT (a)-[:OWNED_BY]->(:Person) RETURN sum(a.cost); |
7 Security and Guardrails #
- Read-Only Scope: The LLM has no write/delete permissions.
- Query Limiter: Row limit = 5000, time limit = 10 s.
- Audit Trail: Original prompt, query, execution time logged.
- PII Shield: Sensitive fields are automatically masked in results.
- Policy Checker: Each generated query validated by Open Policy Agent before execution.
Together, these controls mitigate prompt-injection and privilege-escalation attacks.
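The query limiter from the list above can be sketched as a thin wrapper around the executor, enforcing the stated 5 000-row cap and 10 s budget. This is an illustration under assumptions: the executor is a stub, and a real deployment would cancel the query server-side rather than check the clock afterwards.

```python
import time

MAX_ROWS, MAX_SECONDS = 5000, 10

def limited_run(executor, cypher: str) -> list:
    # Append a hard row cap and measure wall-clock time around execution.
    start = time.monotonic()
    rows = executor(f"{cypher} LIMIT {MAX_ROWS}")
    if time.monotonic() - start > MAX_SECONDS:
        raise TimeoutError("query exceeded the 10 s budget")
    return rows[:MAX_ROWS]  # belt-and-braces: never return more than the cap

seen = []  # audit trail stand-in: record what was actually executed
def fake_executor(query):
    seen.append(query)
    return [{"n": i} for i in range(3)]

rows = limited_run(fake_executor, "MATCH (a:Application) RETURN a.name")
```

The recorded query strings double as the audit-trail entries described above.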
8 Visualization Options #
- Tabular: Quick summaries, CSV export.
- Graph: Force-directed network (nodes + edges).
- Metric Card: KPI aggregates (“Average Decision Latency”).
- Timeline: Change events over time.
- Heatmap: Risk or cost density.
The React-based UI selects a visualization automatically from the result metadata.
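The selection rule can be as simple as a metadata dispatch. The field names and thresholds below are assumptions used to illustrate the idea, not the UI's actual contract.

```python
def pick_view(meta: dict) -> str:
    # Map result-set metadata to one of the view types listed above.
    if meta.get("has_edges"):
        return "graph"           # nodes + edges -> force-directed network
    if meta.get("row_count") == 1 and meta.get("numeric"):
        return "metric_card"     # single aggregate -> KPI card
    if meta.get("time_series"):
        return "timeline"        # change events over time
    if meta.get("density_matrix"):
        return "heatmap"         # risk or cost density
    return "tabular"             # default: quick summary, CSV export
```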
9 Conversational Context #
EA 2.0 maintains session memory for the conversation:
Q1: “Show Finance applications.”
Q2: “Now filter to cloud-based only.”
The engine remembers entities from previous queries, creating a natural dialogue.
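A minimal sketch of that session memory, assuming entity extraction happens upstream: each turn's entities are merged into the session, so a follow-up inherits the Finance capability from the first question. Class and method names are illustrative.

```python
class Session:
    """Carries entities extracted in earlier turns of the conversation."""

    def __init__(self):
        self.entities = {}

    def resolve(self, extracted: dict) -> dict:
        # Merge this turn's entities over the remembered ones, so a
        # follow-up adds filters without restating the subject.
        self.entities.update(extracted)
        return dict(self.entities)

s = Session()
turn1 = s.resolve({"capability": "Finance"})   # "Show Finance applications."
turn2 = s.resolve({"hosting": "cloud"})        # "Now filter to cloud-based only."
```

After the second turn the engine queries for Finance applications that are cloud-hosted, without the user repeating "Finance".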
10 Multi-Modal Results #
Besides text and charts, the NLQ engine can generate:
- Links to deeper dashboards (Power BI).
- Smart cards summarizing each node.
- Exportable JSON for automation.
Every output carries metadata: query ID, timestamp, confidence.
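That envelope might look like the following; the exact field names are assumptions for illustration.

```python
import json
import uuid
import datetime

def wrap(rows: list, confidence: float) -> str:
    # Attach the metadata every output carries: query ID, timestamp, confidence.
    envelope = {
        "query_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "confidence": confidence,
        "rows": rows,
    }
    return json.dumps(envelope)

payload = json.loads(wrap([{"a.name": "CRM"}], confidence=0.92))
```

Because the payload is plain JSON, the same envelope serves dashboards, smart cards, and downstream automation alike.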
11 KPIs for NLQ Effectiveness #
| Metric | Target | Interpretation |
|---|---|---|
| Translation Accuracy | ≥ 90 % | Correct mapping of intent → query |
| Avg Response Time | ≤ 3 s | Optimized graph performance |
| User Satisfaction | ≥ 4.5 / 5 | Ease of use survey |
| Query Re-use Rate | ≥ 50 % | Popular patterns shared across teams |
| Error Rejection Rate | < 1 % | Share of prompts rejected as invalid or unsafe |
12 Why It Matters #
- Accessibility: Decision-makers without EA tools can ask questions directly.
- Speed: Insights in seconds instead of manual reports.
- Learning: Every query trains the system on user intent.
- Governance: All queries audited for policy compliance.
13 Takeaway #
NLQ is the user interface of intelligence.
When architecture speaks human, the enterprise finally listens.