Conversation

@bfollington (Contributor) commented Jun 24, 2024

  • Integrate llm-client with lookslike-highlevel
  • Replace findSuggestions logic with LLM call
  • Rework the llm-client API for consumer convenience (simplify appending messages; see the sketch below)
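
For illustration, a minimal sketch of what the simplified message-appending API might look like from the consumer's side. This is an assumption for readability, not the actual llm-client interface; LlmClient, sendMessage, and callModel are hypothetical names.

// Hypothetical sketch of a consumer-friendly append-and-send API.
type Message = { role: "system" | "user" | "assistant"; content: string };

// Assumed stand-in for the HTTP call to the model backend.
declare function callModel(messages: Message[]): Promise<string>;

class LlmClient {
  private messages: Message[] = [];

  constructor(system: string) {
    this.messages.push({ role: "system", content: system });
  }

  // Appending a message and sending the conversation is one call,
  // instead of the consumer rebuilding the message array by hand.
  async sendMessage(content: string): Promise<string> {
    this.messages.push({ role: "user", content });
    const reply = await callModel(this.messages);
    this.messages.push({ role: "assistant", content: reply });
    return reply;
  }
}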

bfollington requested review from jsantell and seefeldb June 24, 2024 22:11
bfollington marked this pull request as ready for review June 24, 2024 22:11
Comment on lines +36 to +47
@bfollington (Contributor, Author) commented Jun 24, 2024
It seems like I have to do this to perform an async computation from a signal?
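
For context, a minimal sketch of the pattern in question, assuming generic signal/effect primitives (the names and the fetchSuggestions helper are hypothetical, not the actual API in this codebase). Effects run synchronously, so the async work is launched inside one and its result is written back into a second signal:

// Minimal sketch with assumed signal/effect primitives.
declare function signal<T>(initial: T): { get(): T; set(v: T): void };
declare function effect(fn: () => void): void;

// Hypothetical async call standing in for the LLM-backed lookup.
declare function fetchSuggestions(query: string): Promise<string[]>;

const input = signal("");
const suggestions = signal<string[]>([]);

// The effect re-runs when `input` changes; it kicks off the promise
// and writes the result into `suggestions` when it resolves.
effect(() => {
  const query = input.get();
  fetchSuggestions(query).then((result) => suggestions.set(result));
});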

bfollington force-pushed the feat/2024-06-24-llm-suggestions branch from 1796152 to 17e276c on June 24, 2024 22:13
Comment on lines +45 to +56
// Identical request seen before: serve the stored response instead
// of calling the LLM again. The log trims and flattens the key so a
// long prompt prints as a single readable line.
if (cache[cacheKey]) {
  console.log(
    "Cache hit!",
    (cacheKey.slice(0, 20) + "..." + cacheKey.slice(-20)).replaceAll(
      "\n",
      "",
    ),
  );
  return new Response(JSON.stringify(cache[cacheKey]), {
    headers: { "Content-Type": "application/json" },
  });
}
@bfollington (Contributor, Author)
A simple in-memory cache to reuse responses for identical requests. Useful for saving on token spend in dev, since we send the same requests on every hot reload.
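
To round out the fragment above, a sketch of how the full handler could fit together. The cache map, the key derivation, and the write-back path are assumptions inferred from the snippet, not the exact code in this PR:

// Sketch of the surrounding handler; details beyond the snippet are assumed.
const cache: Record<string, unknown> = {};

// Hypothetical stand-in for the actual LLM request.
declare function callLlm(body: string): Promise<unknown>;

async function handleRequest(req: Request): Promise<Response> {
  const body = await req.text();
  const cacheKey = body; // assumed: the raw request body is the key

  // Identical request seen before: reuse the stored response.
  if (cache[cacheKey]) {
    return new Response(JSON.stringify(cache[cacheKey]), {
      headers: { "Content-Type": "application/json" },
    });
  }

  // Cache miss: call the model and remember the result so the next
  // hot-reload with the same payload is served from memory.
  const result = await callLlm(body);
  cache[cacheKey] = result;
  return new Response(JSON.stringify(result), {
    headers: { "Content-Type": "application/json" },
  });
}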

bfollington merged commit d8419d7 into main on Jun 24, 2024