LLM Powered Recipe Suggestions #98
Conversation
It seems like I have to do this to perform an async computation from a signal?
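For context, a minimal sketch of the pattern in question, assuming a Preact-signals-style API (signal/effect); the actual signal library and names in this repo may differ, and fetchSuggestions here is a hypothetical stand-in for the real LLM request:

```ts
import { signal, effect } from "@preact/signals-core";

// Input signal the async computation depends on (hypothetical name).
const ingredients = signal<string[]>([]);
// Output signal the async result is written into (hypothetical name).
const suggestions = signal<string[] | null>(null);

// An effect can't await, so the usual workaround is to read the input
// synchronously (so it is tracked), kick off the async work, and write the
// result back into another signal when it resolves.
effect(() => {
  const current = ingredients.value; // tracked read
  fetchSuggestions(current).then((result) => {
    suggestions.value = result;
  });
});

// Hypothetical async call standing in for the real LLM request.
async function fetchSuggestions(items: string[]): Promise<string[]> {
  const res = await fetch("/api/suggestions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ items }),
  });
  return res.json();
}
```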
Force-pushed 1796152 to 17e276c
if (cache[cacheKey]) {
  console.log(
    "Cache hit!",
    (cacheKey.slice(0, 20) + "..." + cacheKey.slice(-20)).replaceAll(
      "\n",
      "",
    ),
  );
  return new Response(JSON.stringify(cache[cacheKey]), {
    headers: { "Content-Type": "application/json" },
  });
}
Simple in-memory cache to reuse responses for the same request. Useful to save on token spend in dev, since we send the same requests on every hot reload.
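A minimal sketch of how such a cache could be wired around the handler, assuming the key is just the serialized request body; cache, handleSuggestions, and callLLM are illustrative names, not the PR's actual identifiers:

```ts
// Module-level map: survives across requests within one dev-server process,
// but is intentionally not persisted anywhere.
const cache: Record<string, unknown> = {};

async function handleSuggestions(req: Request): Promise<Response> {
  const body = await req.text();
  const cacheKey = body; // key on the exact request payload

  if (cache[cacheKey]) {
    // ...cache-hit branch as in the diff above...
    return new Response(JSON.stringify(cache[cacheKey]), {
      headers: { "Content-Type": "application/json" },
    });
  }

  const result = await callLLM(body); // hypothetical LLM call
  cache[cacheKey] = result;           // populate for the next identical request
  return new Response(JSON.stringify(result), {
    headers: { "Content-Type": "application/json" },
  });
}

// Placeholder for the real LLM client call.
async function callLLM(payload: string): Promise<unknown> {
  return { suggestions: [] };
}
```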
So as to avoid collision with good old CPU threads
What llm-client looks like:
- high-level findSuggestions logic with the LLM call
- llm-client API for convenience of the consumer (simplify appending messages)
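As a rough illustration of the "simplify appending messages" convenience, something along these lines could be what the API ends up looking like; every name here (LlmClient, append, complete, /api/llm) is hypothetical:

```ts
// Hypothetical shape of a chat message.
type Message = { role: "system" | "user" | "assistant"; content: string };

// Thin wrapper so the consumer appends messages and asks for a completion
// without managing the messages array themselves.
class LlmClient {
  private messages: Message[] = [];

  append(role: Message["role"], content: string): this {
    this.messages.push({ role, content });
    return this;
  }

  async complete(): Promise<string> {
    // Placeholder for the real provider call; assumed endpoint.
    const res = await fetch("/api/llm", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: this.messages }),
    });
    const { content } = await res.json();
    this.messages.push({ role: "assistant", content });
    return content;
  }
}
```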