AI Tools

AI Tools are C# classes the chat LLM can invoke. They are how you let the assistant search, fetch, and act on data inside Curiosity Workspace — under the user's permissions, with citations, and with a full audit trail.

A tool is just a class with one or more methods decorated with [Tool] and parameters annotated with [Parameter]. The workspace's chat orchestrator discovers your tools, advertises them to the LLM, and routes tool calls back to your code.

[Image: Curiosity Workspace AI Tools]

The ToolScope contract

Every tool method receives a ToolScope as its first parameter:

  • scope.Graph: the graph engine, with the same surface as in a custom endpoint.
  • scope.CurrentUser: the user whose chat invoked this tool; use it for ReBAC checks.
  • scope.CancellationToken: propagated cancellation; honor it for long-running work.
  • scope.ChatAI.GetTextFromNode(uid, limit): pulls indexed text for grounding (text + OCR + STT).
  • scope.AddSnippet(uid, text): registers a citation and returns the integer ID the LLM uses in [1]-style references.
  • scope.SetToolCallDisplayName(text): sets a human-readable label shown in the chat trace.
  • scope.Logger: a structured logger.

The orchestrator passes the LLM only the tool descriptions and signatures — never your code. The LLM decides when to call a tool based on the [Tool] description, which makes the description the most important piece of design work.

Hello-world tool

public class GreeterTool
{
    [Tool("Greet the user by name. Use this when the user says 'hi' or asks for an introduction.")]
    public static string Greet(ToolScope scope,
        [Parameter("The name of the person to greet", required: true)] string name)
    {
        scope.SetToolCallDisplayName($"Greeted {name}");
        return $"Hello, {name}! How can I help you today?";
    }
}
return new GreeterTool();

The return new GreeterTool(); at the end is the convention: the workspace expects the tool class instance as the script's return value.

Permission-aware search tool

The most common shape: search the graph as the calling user and register snippets so the LLM can cite results.

public class TicketTools
{
    [Tool("Search the support-ticket knowledge base for tickets matching the user's question. " +
          "Use this whenever the user describes a symptom, asks 'have we seen this before?', " +
          "or asks about historical issues. Cite results with the bracketed snippet id, e.g. [1].")]
    public static async Task<string> SearchTickets(ToolScope scope,
        [Parameter("The user's question or symptom, in their own words", required: true)] string query,
        [Parameter("Optional product SKU to scope the search", required: false)] string productSku,
        [Parameter("Maximum number of results to return", required: false)] int limit)
    {
        var search = SearchRequest.For(query);
        search.BeforeTypesFacet = new(new[] { "Ticket" });

        if (!string.IsNullOrWhiteSpace(productSku))
            search.TargetUIDs = scope.Graph.Q()
                .StartAt("Product", productSku)
                .In("ForProduct")
                .AsUIDEnumerable()
                .ToArray();

        var q = await scope.Graph.CreateSearchAsUserAsync(
            search, scope.CurrentUser, scope.CancellationToken);

        var results = q.Take(limit > 0 ? limit : 10).AsEnumerable().Select(n => {
            var text = scope.ChatAI.GetTextFromNode(n.UID, limit: 4_000);
            var id   = scope.AddSnippet(uid: n.UID, text: text);
            return new {
                snippetId = id,
                ticketId  = n.GetString("Id"),
                subject   = n.GetString("Subject"),
                body      = text,
            };
        }).ToArray();

        scope.SetToolCallDisplayName($"Looked for tickets like '{query}'");
        return results.ToJson();
    }
}
return new TicketTools();

When this tool returns, the chat UI gets:

  • a typed JSON payload the LLM can quote;
  • a citation map indexed by snippetId that the UI renders as clickable source cards.

Instruct the LLM to use the [snippetId] bracket convention in your prompt (or in the tool description) so citations stay clickable.

Action tool with guardrails

Action tools mutate state. They should validate inputs, check permissions explicitly, and never bypass scope.CurrentUser:

public class TicketActions
{
    [Tool("Assign a support ticket to a teammate. Use this only when the user explicitly asks. " +
          "Always confirm the change in your response.")]
    public static async Task<string> AssignTicket(ToolScope scope,
        [Parameter("Ticket ID (e.g., T-9182)", required: true)] string ticketId,
        [Parameter("Assignee user ID or email", required: true)] string assigneeId)
    {
        if (scope.CurrentUser is null)
            return "{\"error\":\"Sign-in required\"}";

        var ticket = Node.FromKey("Ticket", ticketId);
        if (!await scope.Graph.CanUserSeeAsync(ticket, scope.CurrentUser))
            return "{\"error\":\"Ticket not found\"}";

        var assignee = Node.FromKey("_User", assigneeId);
        scope.Graph.Link(ticket, assignee, "AssignedTo");
        await scope.Graph.CommitPendingAsync();

        scope.Logger.LogInformation("ticket {Ticket} assigned to {Assignee} by {User}",
            ticketId, assigneeId, scope.CurrentUser.UID);
        scope.SetToolCallDisplayName($"Assigned {ticketId} → {assigneeId}");
        return $"{{\"ok\":true,\"ticketId\":\"{ticketId}\",\"assignee\":\"{assigneeId}\"}}";
    }
}
return new TicketActions();

For destructive actions (delete, archive, reassign-many), use a two-step propose → confirm pattern: the tool returns a "plan" object first; a separate ConfirmAction tool executes it after the user confirms in chat.
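
A minimal sketch of that pattern, assuming a hypothetical archive action; the ArchiveActions class, the in-memory plan store, and the token format are illustrative and not part of the workspace API:

public class ArchiveActions
{
    // Simplified in-memory plan store; a real tool might persist plans as graph nodes
    // so they survive restarts and stay auditable.
    private static readonly System.Collections.Concurrent.ConcurrentDictionary<string, string[]> PendingPlans = new();

    [Tool("Propose archiving every ticket for a product. This does NOT archive anything; " +
          "it returns a plan that the user must confirm first.")]
    public static string ProposeArchive(ToolScope scope,
        [Parameter("Product SKU whose tickets should be archived", required: true)] string productSku)
    {
        var uids = scope.Graph.Q()
            .StartAt("Product", productSku)
            .In("ForProduct")
            .AsUIDEnumerable()
            .Take(500)                 // hard bound on plan size
            .Select(uid => uid.ToString())
            .ToArray();

        var token = Guid.NewGuid().ToString("N");
        PendingPlans[token] = uids;

        scope.SetToolCallDisplayName($"Proposed archiving {uids.Length} tickets for {productSku}");
        return $"{{\"confirmationToken\":\"{token}\",\"ticketCount\":{uids.Length}}}";
    }

    [Tool("Execute a previously proposed archive plan. Call this only after the user has " +
          "explicitly confirmed the plan in chat.")]
    public static string ConfirmArchive(ToolScope scope,
        [Parameter("The confirmationToken returned by the propose step", required: true)] string confirmationToken)
    {
        if (!PendingPlans.TryRemove(confirmationToken, out var uids))
            return "{\"error\":\"Unknown or expired confirmation token\"}";

        // The actual mutation (Link/CommitPendingAsync) is elided; it would follow the
        // same pattern as AssignTicket above, including permission checks per ticket.
        scope.Logger.LogInformation("Archived {Count} tickets, confirmed by {User}",
            uids.Length, scope.CurrentUser.UID);
        scope.SetToolCallDisplayName($"Archived {uids.Length} tickets");
        return $"{{\"ok\":true,\"archived\":{uids.Length}}}";
    }
}
return new ArchiveActions();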

Writing good tool descriptions

The [Tool] description is the LLM's only signal about when to call this tool. Treat it like a UX writing exercise:

  • Be specific about the intent: "Search the support-ticket knowledge base" beats "look up tickets".
  • Describe the trigger: "Use this when the user describes a symptom" beats "search for tickets".
  • Constrain the scope: "Only for the customer support knowledge base, not for product docs" tells the LLM where the tool doesn't apply.
  • Hint at the output convention: "Cite results with [snippet id]" makes citations clickable.

Same for [Parameter] descriptions:

  • Tell the LLM the form of the input ("in the user's own words", "as a SKU like PRO-14").
  • Mark optional parameters clearly so the LLM doesn't guess at them.
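
Put together, a signature that follows these guidelines might look like this (a hypothetical knowledge-base search; names and wording are illustrative only):

public class DocsTools
{
    [Tool("Search the customer support knowledge base (not product docs) for articles that answer " +
          "the user's question. Use this when the user asks how to fix, configure, or troubleshoot " +
          "something. Cite results with the bracketed snippet id, e.g. [1].")]
    public static string SearchSupportArticles(ToolScope scope,
        [Parameter("The user's question, in their own words", required: true)] string question,
        [Parameter("Optional product SKU to narrow results, as a SKU like PRO-14", required: false)] string productSku)
    {
        // Body elided; it would follow the same search-and-snippet pattern as SearchTickets above.
        return "[]";
    }
}
return new DocsTools();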

Tool selection at runtime

The orchestrator narrows the tool set per chat turn:

  • Admin-only tools never show up for non-admin users.
  • Tools may be tagged for specific chat surfaces (support chat vs ops chat).
  • The LLM picks at most a handful of tools per turn — keep your tool catalog focused.

A common mistake is putting many overlapping tools in the catalog. If two tools cover the same intent, the LLM picks unreliably; consolidate them or make their descriptions sharply distinct.

Testing tools

  • Inside the AI Tools editor, the workspace exposes an inline "test as user" console.
  • The chat trace shows every tool call: which tool ran, with which args, how long it took, and what it returned.
  • For automated tests, build the same tool logic into a custom endpoint (or share a private helper class) and call it from CI.
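
One way to set that up is to keep the query-building logic in a plain helper class that both the tool and a custom endpoint call; CI can then exercise the helper (or the endpoint) without going through chat. A sketch with hypothetical names, reusing the SearchTickets pieces from above:

// Plain class with no ToolScope dependency, so an endpoint or a unit test can call it directly.
public static class TicketSearchCore
{
    public static SearchRequest BuildRequest(string query)
    {
        var search = SearchRequest.For(query);
        search.BeforeTypesFacet = new(new[] { "Ticket" });
        return search;
    }
}

public class TicketTools
{
    [Tool("Search the support-ticket knowledge base for tickets matching the user's question.")]
    public static async Task<string> SearchTickets(ToolScope scope,
        [Parameter("The user's question or symptom", required: true)] string query)
    {
        var search = TicketSearchCore.BuildRequest(query);
        var q = await scope.Graph.CreateSearchAsUserAsync(search, scope.CurrentUser, scope.CancellationToken);
        // ... snippet registration as in the full SearchTickets example above ...
        return "[]";
    }
}
return new TicketTools();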

Operational metrics

GET /api/chatai/tools/metrics exposes per-tool counts, average latency, p95, and error rate (admin-scoped). Wire this to your monitoring stack — slow or flaky tools degrade the whole chat experience. See Monitoring.
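
For example, a small monitoring job can poll the endpoint with an admin token and push the JSON into your pipeline. A sketch; the host name and token handling are assumptions, only the path comes from this page:

// Hypothetical stand-alone polling job (C# top-level statements).
using var http = new HttpClient { BaseAddress = new Uri("https://workspace.example.com") };
http.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue(
    "Bearer", Environment.GetEnvironmentVariable("CURIOSITY_ADMIN_TOKEN"));

// Per-tool counts, average latency, p95, and error rate, as described above.
var json = await http.GetStringAsync("/api/chatai/tools/metrics");
Console.WriteLine(json);   // forward to your monitoring stack instead of printing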

Security checklist

  • Use scope.CurrentUser for all retrieval; never call system-context graph or search.
  • Validate inputs the LLM passes — assume they may be hostile or malformed (see the sketch after this list).
  • For mutating tools, log the caller and target IDs explicitly.
  • Bound expensive operations with a hard Take(...) and respect the cancellation token.
  • Don't return secrets, raw credentials, or arbitrary file paths.
  • Prefer propose → confirm for destructive actions; don't trust the LLM to confirm intent.
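
For the validation and bounding points in particular, a short guard block at the top of a tool method is usually enough. A sketch, assuming the T-#### ticket-ID format from the examples above:

// Reject anything that doesn't look like a ticket ID before touching the graph.
if (string.IsNullOrWhiteSpace(ticketId) ||
    !System.Text.RegularExpressions.Regex.IsMatch(ticketId, @"^T-\d{1,8}$"))
    return "{\"error\":\"Invalid ticket id\"}";

// Clamp LLM-supplied limits instead of trusting them, and bail out early on cancellation.
var take = Math.Clamp(limit, 1, 50);
scope.CancellationToken.ThrowIfCancellationRequested();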

See Security → AI tool security.
