Only re-render CharmList when connection status actually changes #1038
Merged
Conversation
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
Comments suppressed due to low confidence (1)
jumble/src/contexts/SyncStatusContext.tsx:50
- Consider using a functional state update for 'setHasConnected' to prevent potential stale closure issues in asynchronous sync events. For example, use setHasConnected(prev => prev || true) to ensure the update is based on the current state.
Flagged line: if (!hasConnected) {
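The suppressed suggestion describes the standard functional-update pattern. Below is a minimal sketch of how it might look in a provider shaped like SyncStatusContext; the provider structure and the "connected" event source are assumptions for illustration, not the repository's actual code.

```tsx
import React, {
  createContext,
  useContext,
  useEffect,
  useMemo,
  useState,
} from "react";

// Assumed context shape; the real SyncStatusContext in jumble may differ.
const SyncStatusContext = createContext<{ hasConnected: boolean }>({
  hasConnected: false,
});

export function SyncStatusProvider(
  { children }: { children: React.ReactNode },
) {
  const [hasConnected, setHasConnected] = useState(false);

  useEffect(() => {
    // Hypothetical "connected" event standing in for the real sync client.
    const onConnected = () => {
      // Functional update: the updater reads the latest state, so a stale
      // closure over `hasConnected` cannot produce an out-of-date value.
      setHasConnected((prev) => prev || true);
    };
    window.addEventListener("sync-connected", onConnected);
    return () => window.removeEventListener("sync-connected", onConnected);
  }, []);

  // Memoize the context value so consumers only re-render when the flag flips.
  const value = useMemo(() => ({ hasConnected }), [hasConnected]);

  return (
    <SyncStatusContext.Provider value={value}>
      {children}
    </SyncStatusContext.Provider>
  );
}

export const useSyncStatus = () => useContext(SyncStatusContext);
```

Because the updater receives the latest state, a listener created before the state changed cannot base the new value on a stale `hasConnected`.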
anotherjesse pushed a commit that referenced this pull request on Apr 15, 2025:
* Only re-render CharmList when connection status actually changes
* Update types
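The commit title names the technique; a minimal sketch of that pattern follows, assuming a React hook that subscribes to a noisy sync-status source. The hook name and the subscribe callback are illustrative, not the repository's API.

```tsx
import { useEffect, useState } from "react";

// Illustrative hook: subscribe to a chatty sync-status source, but only
// commit a state update (and thus re-render consumers such as CharmList)
// when the boolean connection status actually flips.
export function useConnectionStatus(
  subscribe: (cb: (connected: boolean) => void) => () => void,
): boolean {
  const [connected, setConnected] = useState(false);

  useEffect(() => {
    const unsubscribe = subscribe((next) => {
      // No-op when the value is unchanged, so repeated "still connected"
      // events do not schedule renders.
      setConnected((prev) => (prev === next ? prev : next));
    });
    return unsubscribe;
  }, [subscribe]);

  return connected;
}
```

React's useState already bails out when a setter receives a value identical to the current state, so the explicit comparison mostly documents the intent; the important part is collapsing frequent sync events down to a single boolean before it reaches the component tree.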
ubik2 added a commit that referenced this pull request on Apr 20, 2025:
* chore: commit wip with working serverside traverse overhaul
* chore: finished updating tests
* chore: commit wip while debugging lack of schema
* fix: wrong charm title (#1023)
* add gemini-flash-lite (#1035): support flash-lite
* chore: Do not attempt to serve static directories (#1037)
* Only re-render CharmList when connection status actually changes (#1038)
  - Only re-render CharmList when connection status actually changes
  - Update types
* track loads differently based on their schema
  - import as FactModule instead of Fact to avoid collision
  - expose the schema associated with the facts from QueryView
  - make schemaContext optional in SchemaPathSelector, since I don't want to traverse commit logs
  - Change default storage to cached
* Llm feedback (#1036)
  - User Feedback UI: Implemented FeedbackActions and FeedbackDialog components in Jumble to allow users to submit positive or negative feedback on generated content. Added interactive thumbs-up/down buttons with a feedback form submission workflow.
  - Trace Span ID Management: Added functions (setLastTraceSpanID, getLastTraceSpanID) to manage trace IDs in the builder environment. Updated builder exports to expose the new trace ID functions.
  - Integration of Trace ID in LLM Client: Modified LLMClient to capture x-ct-llm-trace-id from response headers and store it via setLastTraceSpanID.
  - Backend Feedback Handling: Extended the toolshed API (/api/ai/llm/feedback) to accept user feedback and relay it to the Phoenix observability backend.
  - Instrumentation and Span Filtering: Enhanced OpenTelemetry instrumentation to include the specific spans relevant to LLM inference requests for better observability.
* fix schema query with commit+json
* chore: commit wip
* chore: env variable to toggle schema sub
* changed to have subscribing queries not delete themselves with the task/return; changed so the server sends each response to each subscriber, even though there may be duplicates; a schema subscription on the client needs to know its updated list of entities; added lots of debugging to try to track down this stall issue
* we should be able to inspect both the graph/query and the subscribe for the combo
* reverted incomplete change to docIsSyncing that caused me to stall on load
* fix blunder with pulling value from fact; cleaned up logging and comments
* change todo comment for linter
* remove incorrect comment in test
* fix the other half of the value blunder
* Remove unused import
* Added start-ci task for running integration tests that makes BroadcastChannel available.
* Change cache to only use the BroadcastChannel if it's available.
* better import of isObj
* just inline the isObj test
* Remove the BroadcastChannel, since we're not really using it.

Co-authored-by: Irakli Gozalishvili <contact@gozala.io>
Co-authored-by: Jesse Andrews <anotherjesse@gmail.com>
Co-authored-by: Jordan Santell <jsantell@users.noreply.github.com>
Co-authored-by: Ben Follington <5009316+bfollington@users.noreply.github.com>
Co-authored-by: Jake Dahn <jake@common.tools>
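The trace-ID items in the Llm feedback entry above describe a small client-side flow. A hedged sketch of that flow follows; the inference endpoint, payload shape, and function signatures are assumptions, since the commit message only names the x-ct-llm-trace-id header, the /api/ai/llm/feedback route, and the setLastTraceSpanID/getLastTraceSpanID functions.

```ts
// Illustrative module: not the actual LLMClient from the repository.
let lastTraceSpanID: string | undefined;

export function setLastTraceSpanID(id: string): void {
  lastTraceSpanID = id;
}

export function getLastTraceSpanID(): string | undefined {
  return lastTraceSpanID;
}

// Assumed inference endpoint and request shape, for illustration only.
export async function generate(prompt: string): Promise<string> {
  const response = await fetch("/api/ai/llm", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  // Capture the trace id the backend attaches to this inference request,
  // so later feedback can be correlated with the same span.
  const traceID = response.headers.get("x-ct-llm-trace-id");
  if (traceID) setLastTraceSpanID(traceID);
  return await response.text();
}

// Relay user feedback to the backend, tagged with the last seen span id.
export async function sendFeedback(
  score: "positive" | "negative",
  comment?: string,
): Promise<void> {
  await fetch("/api/ai/llm/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ spanID: getLastTraceSpanID(), score, comment }),
  });
}
```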
ubik2 added a commit that referenced this pull request on Apr 23, 2025:
* chore: commit wip with working serverside traverse overhaul
* chore: finished updating tests
* chore: commit wip while debugging lack of schema
* fix: wrong charm title (#1023)
* add gemini-flash-lite (#1035): support flash-lite
* chore: Do not attempt to serve static directories (#1037)
* Only re-render CharmList when connection status actually changes (#1038)
  - Only re-render CharmList when connection status actually changes
  - Update types
* track loads differently based on their schema
  - import as FactModule instead of Fact to avoid collision
  - expose the schema associated with the facts from QueryView
  - make schemaContext optional in SchemaPathSelector, since I don't want to traverse commit logs
  - Change default storage to cached
* Llm feedback (#1036)
  - User Feedback UI: Implemented FeedbackActions and FeedbackDialog components in Jumble to allow users to submit positive or negative feedback on generated content. Added interactive thumbs-up/down buttons with a feedback form submission workflow.
  - Trace Span ID Management: Added functions (setLastTraceSpanID, getLastTraceSpanID) to manage trace IDs in the builder environment. Updated builder exports to expose the new trace ID functions.
  - Integration of Trace ID in LLM Client: Modified LLMClient to capture x-ct-llm-trace-id from response headers and store it via setLastTraceSpanID.
  - Backend Feedback Handling: Extended the toolshed API (/api/ai/llm/feedback) to accept user feedback and relay it to the Phoenix observability backend.
  - Instrumentation and Span Filtering: Enhanced OpenTelemetry instrumentation to include the specific spans relevant to LLM inference requests for better observability.
* fix schema query with commit+json
* chore: commit wip
* chore: env variable to toggle schema sub
* changed to have subscribing queries not delete themselves with the task/return; changed so the server sends each response to each subscriber, even though there may be duplicates; a schema subscription on the client needs to know its updated list of entities; added lots of debugging to try to track down this stall issue
* we should be able to inspect both the graph/query and the subscribe for the combo
* reverted incomplete change to docIsSyncing that caused me to stall on load
* fix blunder with pulling value from fact; cleaned up logging and comments
* change todo comment for linter
* remove incorrect comment in test
* fix the other half of the value blunder
* Remove unused import
* Added start-ci task for running integration tests that makes BroadcastChannel available.
* Change cache to only use the BroadcastChannel if it's available.
* better import of isObj
* just inline the isObj test
* Remove the BroadcastChannel, since we're not really using it.
* Remove remote provider
  - Needs more work, since we fail tests which try to use indexdb in deno.
* correct merge error

Co-authored-by: Irakli Gozalishvili <contact@gozala.io>
Co-authored-by: Jesse Andrews <anotherjesse@gmail.com>
Co-authored-by: Jordan Santell <jsantell@users.noreply.github.com>
Co-authored-by: Ben Follington <5009316+bfollington@users.noreply.github.com>
Co-authored-by: Jake Dahn <jake@common.tools>
No description provided.