Technical Support: AI Use Cases

Two end-to-end AI patterns from the sample app: a "similar cases" recommender that respects ACLs, and a permission-aware support chat with citations. Both are written as custom endpoints so they can be called from the front-end or from a chat tool with the same shape.
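Because both patterns are plain JSON-over-POST endpoints, a caller needs nothing beyond a URL and a body. The sketch below is hypothetical — the base URL, and the exact casing of the request fields, are assumptions for illustration, not part of the sample app:

```typescript
// Hypothetical helper that shapes a call to a custom endpoint.
// The base URL is an assumption; both endpoints take a JSON POST body.
type EndpointCall = { url: string; method: "POST"; headers: Record<string, string>; body: string };

function buildEndpointCall(path: string, payload: unknown): EndpointCall {
  return {
    url: `https://example.org/api/endpoints/${path}`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  };
}

// Identical shape whether the caller is the front-end or a chat tool:
const similar = buildEndpointCall("similar-cases", { caseId: "CS-0142" });
const chat = buildEndpointCall("support-chat", { question: "Screen flickers after sleep", deviceName: "MacBook Pro" });
```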

Pattern 1 — "suggest similar items" with constraints

Pure semantic similarity is rarely what users want — they want similarity inside their world (this manufacturer, this device family, things they're allowed to see). The endpoint below does all three.

The endpoint

// Endpoint path: similar-cases
// Mode: Sync
// Read Only: true
// Authorization: Restricted

var caseId    = Body.FromJson<SimilarRequest>().CaseId;
var caseNode  = Q().StartAt(nameof(Nodes.SupportCase), caseId).Single();
var deviceUID = caseNode.Out(Edges.ForDevice).AsUIDEnumerable().FirstOrDefault();

if (deviceUID == default) return Ok(new { results = Array.Empty<object>() });

// Sibling cases on the same device, excluding the current one.
var siblings = Q().StartAt(deviceUID)
                  .In(Edges.ForDevice)
                  .Where(c => c.Key != caseId)
                  .AsUIDEnumerable()
                  .ToArray();

var request = SearchRequest.For(caseNode["Content"].AsString());
request.BeforeTypesFacet = new HashSet<string> { nameof(Nodes.SupportCase) };
request.TargetUIDs       = siblings;
request.HybridSearch     = true;

var query = await Graph.CreateSearchAsUserAsync(request, CurrentUser);
return Ok(query.Take(5).EmitWithScores());

What the result looks like for case CS-0142 ("Screen flickers..."):

{
  "results": [
    { "uid": "...", "key": "CS-0210", "summary": "Display artifacts after sleep",      "score": 0.81 },
    { "uid": "...", "key": "CS-0174", "summary": "External monitor flickers post-wake", "score": 0.78 },
    { "uid": "...", "key": "CS-0098", "summary": "Sleep/wake regression on update",     "score": 0.69 }
  ]
}
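A front-end consuming this response can turn the raw score into a user-facing label. A minimal sketch (the clamping to 0..1 is an assumption about the score range):

```typescript
// Hypothetical front-end helper: render a hybrid-search score as an
// "NN% match" label. Scores are assumed to fall in the 0..1 range.
function matchLabel(score: number): string {
  const clamped = Math.min(Math.max(score, 0), 1); // defensive clamp
  return `${Math.round(clamped * 100)}% match`;
}
```

For example, the top hit above (score 0.81) would render as "81% match".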

Why the layering matters

  1. Graph constraint first — TargetUIDs = siblings on this device rules out 99% of the corpus before semantic ranking runs. Cheap and predictable.
  2. Semantic ranking second — within that set, we rank by hybrid score so wording variation doesn't tank recall.
  3. ACL last — CreateSearchAsUserAsync filters anything the caller can't see. The endpoint never had to know which cases were sensitive.

Returning scores in the response lets the front-end show "82% match" and explain ordering.
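The three layers can be sketched as a toy pipeline. Everything here is hypothetical stand-in data — in the real endpoint the graph supplies the siblings, the hybrid search supplies the scores, and CreateSearchAsUserAsync applies the ACL:

```typescript
// Toy sketch of the three layers, applied in order.
type CandidateCase = { key: string; deviceId: string; visibleToCaller: boolean; score: number };

function rankSimilar(cases: CandidateCase[], deviceId: string, excludeKey: string, top = 5): CandidateCase[] {
  return cases
    .filter(c => c.deviceId === deviceId && c.key !== excludeKey) // 1. graph constraint
    .sort((a, b) => b.score - a.score)                            // 2. semantic ranking
    .filter(c => c.visibleToCaller)                               // 3. ACL filter
    .slice(0, top);
}
```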

Pattern 2 — RAG support chat with citations

A chat endpoint that retrieves relevant cases, asks the LLM to answer using only those, and returns answer + citations + an audit record.

The endpoint

// Endpoint path: support-chat
// Mode: Sync (or Polling for long answers)
// Read Only: false   // we write an audit node

public record ChatRequest(string Question, string? DeviceName);
public record Citation  (string CaseId, string Summary, double Score);
public record ChatReply (string Answer, Citation[] Citations);

var req = Body.FromJson<ChatRequest>();
await RelayStatusAsync("Retrieving relevant cases...");

// 1. Retrieve — scoped to the device if given, ACL-filtered for the caller.
var search = SearchRequest.For(req.Question);
search.BeforeTypesFacet = new HashSet<string> { nameof(Nodes.SupportCase) };
search.HybridSearch     = true;
if (req.DeviceName is not null)
{
    search.TargetUIDs = Q().StartAt(nameof(Nodes.Device), req.DeviceName)
                            .In(Edges.ForDevice)
                            .AsUIDEnumerable()
                            .ToArray();
}

var hits = (await Graph.CreateSearchAsUserAsync(search, CurrentUser))
           .Take(5)
           .EmitWithScores()
           .ToList();

if (hits.Count == 0)
    return Ok(new ChatReply("I couldn't find any cases on file that match your question.", Array.Empty<Citation>()));

// 2. Build the prompt — quote case ID and summary so the model can cite them.
var sb = new StringBuilder();
sb.AppendLine("Answer the user's question using only the cases below. Cite cases as [CS-####].");
sb.AppendLine();
foreach (var (node, _) in hits)
{
    sb.AppendLine($"[{node["Id"].AsString()}] {node["Summary"].AsString()}");
    sb.AppendLine(node["Content"].AsString());
    sb.AppendLine("---");
}
sb.AppendLine();
sb.AppendLine($"Question: {req.Question}");

// 3. Generate
await RelayStatusAsync("Asking the assistant...");
var answer = await ChatAI.CompleteAsync(sb.ToString(), CancellationToken);

// 4. Audit — store the prompt, the answer, and the cited UIDs.
var audit = Graph.AddOrUpdate(new ChatAuditEntry
{
    Id        = Guid.NewGuid().ToString(),
    UserUID   = CurrentUser,
    Question  = req.Question,
    Answer    = answer,
    Timestamp = DateTimeOffset.UtcNow,
});
foreach (var (node, _) in hits)
    Graph.Link(audit, node, "Cited", "CitedBy");
await Graph.CommitPendingAsync();

return Ok(new ChatReply(
    Answer:    answer,
    Citations: hits.Select(h => new Citation(
                   h.Node["Id"].AsString(),
                   h.Node["Summary"].AsString(),
                   h.Score)).ToArray()));

Expected response for "My MacBook screen is flickering after sleep, what should I try?":

{
  "answer": "Several past cases describe this symptom. Try the steps from [CS-0142]: ...\nIf it persists, [CS-0174] suggests reseating the display cable.",
  "citations": [
    { "case_id": "CS-0142", "summary": "Screen flickers after waking from sleep", "score": 0.84 },
    { "case_id": "CS-0174", "summary": "External monitor flickers post-wake",     "score": 0.77 }
  ]
}

Why this shape works

  • Permission-aware retrieval. CreateSearchAsUserAsync ensures the LLM only ever sees cases the caller is entitled to. You can't leak private cases by clever prompting.
  • Closed-set generation. The prompt explicitly restricts the model to the retrieved cases, so the answer is grounded.
  • Verifiable citations. Citations are real graph nodes — the UI can deep-link to them and an auditor can replay the exact context.
  • Audit trail. Every chat call writes a ChatAuditEntry linked to the cited cases. Useful for compliance and for evals (was the right case actually cited?).
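Because the citation format [CS-####] is fixed by the prompt, grounding is mechanically checkable: every case the answer cites must be one of the retrieved cases. A hypothetical check along those lines:

```typescript
// Hypothetical grounding check for the audit trail: extract every [CS-####]
// citation from the answer and verify it was actually among the retrieved cases.
function citedCaseIds(answer: string): string[] {
  return [...answer.matchAll(/\[(CS-\d{4})\]/g)].map(m => m[1]);
}

function citationsGrounded(answer: string, retrievedKeys: string[]): boolean {
  const allowed = new Set(retrievedKeys);
  return citedCaseIds(answer).every(id => allowed.has(id));
}
```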

Evals against the sample dataset

A handful of fixed prompts make it easy to compare prompt changes:

  Prompt                                    Expected top citation   Acceptable substitutes
  "MacBook screen flickers after sleep"     CS-0142                 CS-0174 (related), CS-0098
  "Dell laptop battery drains overnight"    CS-0091                 any Dell battery case
  "Fans loud and laptop hot under load"     CS-0188                 CS-0214 (thermal)

Run this in the Evaluation framework — a regression there will catch retrieval drift before prompt changes ship.
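The table translates directly into data plus a pass/fail rule. This is a sketch, not the Evaluation framework's API: the scoring rule (top citation must be the expected case or a listed substitute) is an assumption, and open-ended rows like "any Dell battery case" need a richer predicate than a fixed list:

```typescript
// Hypothetical eval harness data derived from the table above.
type EvalCase = { prompt: string; expected: string; acceptable: string[] };

const evalCases: EvalCase[] = [
  { prompt: "MacBook screen flickers after sleep",  expected: "CS-0142", acceptable: ["CS-0174", "CS-0098"] },
  { prompt: "Dell laptop battery drains overnight", expected: "CS-0091", acceptable: [] }, // "any Dell battery case" needs a predicate
  { prompt: "Fans loud and laptop hot under load",  expected: "CS-0188", acceptable: ["CS-0214"] },
];

// Pass if the top citation is the expected case or an acceptable substitute.
function topCitationOk(topCitation: string, c: EvalCase): boolean {
  return topCitation === c.expected || c.acceptable.includes(topCitation);
}
```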

© 2026 Curiosity. All rights reserved.
Powered by Neko