
Conversation

@bfollington (Contributor) commented Nov 4, 2025

  • Implement generateText built-in
  • Usage example in note.tsx
  • Add mise.toml to specify deno version

Summary by cubic

Adds a generateText built-in for simple prompt-to-text LLM calls with streaming partial results. Wires it through the API and runner, updates the Note pattern, and addresses Linear CT-947.

  • New Features

    • Added generateText built-in with prompt/system, model, maxTokens; returns pending, result, partial, requestHash.
    • Exposed generateText in @commontools/api, builder/factory, runner builtins, and generated typings.
    • Updated note.tsx example to use generateText for translation.
  • Refactors and Fixes

    • Shared cell initialization across llm, generateObject, and generateText to reduce duplication.
    • Cleared partial/result when inputs are missing and reset request hash on errors to prevent stale output and allow retries.
    • Pinned Deno to 2.5.2 via mise.toml.
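The streamed shape named above (pending, result, partial, requestHash) can be sketched with a toy driver. Every name below is an assumption inferred from this summary, not the actual @commontools/api surface.

```typescript
// Toy model of the generateText state described in the summary.
// The field names (pending/partial/result/requestHash) are assumptions
// taken from the bullet list above, not the real @commontools/api types.
interface GenerateTextState {
  pending: boolean;      // true while the LLM call is in flight
  partial?: string;      // streamed text accumulated so far
  result?: string;       // final text once the call completes
  requestHash?: string;  // identifies the request, used to skip duplicate runs
}

// Streams chunks into `partial`, then finalizes `result` and clears `pending`.
async function runGenerateText(
  chunks: string[],
  requestHash: string,
  onUpdate: (state: GenerateTextState) => void,
): Promise<GenerateTextState> {
  let state: GenerateTextState = { pending: true, requestHash };
  let text = "";
  for (const chunk of chunks) {
    text += chunk;
    state = { ...state, partial: text };
    onUpdate(state); // consumers see partial output while pending is true
  }
  state = { ...state, pending: false, result: text };
  onUpdate(state);
  return state;
}
```

A pattern like note.tsx would subscribe to these updates to render a translation as it streams in.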

Written for commit 5a6694a. Summary will update automatically on new commits.

linear bot commented Nov 4, 2025

@cubic-dev-ai (bot) left a comment

1 issue found across 9 files

Prompt for AI agents (1 issue)

Understand the root cause of the following issue and fix it.


<file name="packages/runner/src/builtins/llm.ts">

<violation number="1" location="packages/runner/src/builtins/llm.ts:331">
If generateText hits this catch path, previousCallHash still equals the request hash. The next run sees the guard at the top and bails, so the same prompt can never retry after a failure until the input changes. Please reset previousCallHash (or otherwise clear the guard) when handling errors.</violation>
</file>

React with 👍 or 👎 to teach cubic. Mention @cubic-dev-ai to give feedback, ask questions, or re-run the review.

```ts
.catch(async (error) => {
  if (thisRun !== currentRun) return;

  console.error("Error generating text", error);

  await runtime.idle();
```
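The suggested fix can be modeled in isolation: keep the request-hash guard at the top, but clear it in the catch handler so the same prompt can retry after a failure. The class and names below are illustrative assumptions, not the runner's actual code in llm.ts.

```typescript
// Sketch of the guard-reset fix the review asks for. This is a stand-in
// model, not the real packages/runner/src/builtins/llm.ts implementation.
class TextGenerator {
  private previousCallHash: string | undefined;
  calls = 0; // how many times the underlying LLM call was attempted

  async run(
    requestHash: string,
    call: () => Promise<string>,
  ): Promise<string | undefined> {
    // Guard at the top: skip if this exact request already ran.
    if (this.previousCallHash === requestHash) return undefined;
    this.previousCallHash = requestHash;
    this.calls++;
    try {
      return await call();
    } catch (error) {
      console.error("Error generating text", error);
      // The fix: clear the guard so an identical prompt can retry.
      this.previousCallHash = undefined;
      return undefined;
    }
  }
}
```

Without the reset in the catch block, the second `run` with the same hash would bail at the guard and the failure would be permanent until the inputs change.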

@cubic-dev-ai (bot) left a comment

1 issue found across 9 files

Prompt for AI agents (1 issue)

Understand the root cause of the following issue and fix it.


<file name="packages/runner/src/builtins/llm.ts">

<violation number="1" location="packages/runner/src/builtins/llm.ts:273">
If `prompt` is empty we return before clearing `result`/`partial`, so the previous LLM output sticks around after the prompt is cleared. Please reset those cells before this early return to avoid stale data.</violation>
</file>


```ts
inputsCell.getAsQueryResult([], tx) ?? {};

// If no prompt is provided, don't make a request
if (!prompt) {
  pendingWithLog.set(false);
  return;
```
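A minimal model of this fix, assuming simple settable cells: reset the output cells before the early return, so clearing the prompt also clears any stale output. The `Cell` shape and names here are placeholders, not the runner's real types.

```typescript
// Minimal cell model; `Cell` is a stand-in for the runner's real cells.
interface Cell<T> {
  value: T | undefined;
  set(v: T | undefined): void;
}

function makeCell<T>(): Cell<T> {
  return {
    value: undefined,
    set(v) {
      this.value = v;
    },
  };
}

// Models the start of a generateText run. The fix: when the prompt is
// missing, clear partial/result before bailing so no stale output remains.
function startRequest(
  prompt: string | undefined,
  pending: Cell<boolean>,
  partial: Cell<string>,
  result: Cell<string>,
): boolean {
  if (!prompt) {
    pending.set(false);
    partial.set(undefined); // clear stale streamed text
    result.set(undefined);  // clear stale final text
    return false;           // no request made
  }
  pending.set(true);
  return true;
}
```

Clearing on the early-return path is what keeps a pattern like the Note translation from showing the previous translation after its prompt is emptied.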

@bfollington force-pushed the ben/ct-947-generatetext-built-in branch from e40a4b6 to e8d3671 on November 4, 2025 at 22:11
@bfollington merged commit bcadd07 into main on Nov 4, 2025
8 checks passed
@bfollington deleted the ben/ct-947-generatetext-built-in branch on November 4, 2025 at 22:39