feat: rag based variable provider #13977
Closed
sdirix wants to merge 58 commits into eclipse-theia:master from eclipsesource:feat/ai-chat-rag-provider
Conversation
- add ui
- add openai integration
- introduce ChatResponseParts
- LanguageModelProvider can be used in both backend and frontend
- frontend access is generically implemented, independent of the actual LanguageModelProvider implementation
- split code into four packages:
  - ai-agent: contains the AgentDispatcher. At the moment it just delegates to the LanguageModelProvider. Can run in both frontend and backend
  - ai-chat: only contains the UI part of the chat
  - ai-model-provider: contains the infrastructure of the LanguageModelProvider and its frontend bridge
  - ai-openai: only contains the Open AI LanguageModelProvider
Implements the LanguageModelProviderRegistry, which can handle an arbitrary number of LanguageModelProviders. Refactors the LanguageModelProvider to only return simple text or a stream of text; it is now the agent's responsibility to convert this into response parts, so the interfaces are moved to the agent package as well. The frontend implementation of the LanguageModelProviderRegistry handles all LanguageModelProviders registered in the frontend as well as in the backend. Fixes the StreamNode in the tree-widget to update itself correctly when new tokens arrive.
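A registry of this kind can be sketched roughly as follows. The interfaces below are simplified, hypothetical stand-ins for illustration, not the actual Theia API:

```typescript
// Illustrative sketch of a registry able to hold an arbitrary number of
// language model providers. Names are hypothetical, not the real Theia types.
interface LanguageModelProvider {
    readonly id: string;
    // Providers only return plain text (or a stream of text); converting the
    // text into structured response parts is the agent's responsibility.
    request(prompt: string): Promise<string>;
}

class LanguageModelProviderRegistry {
    private readonly providers = new Map<string, LanguageModelProvider>();

    register(provider: LanguageModelProvider): void {
        this.providers.set(provider.id, provider);
    }

    getProvider(id: string): LanguageModelProvider | undefined {
        return this.providers.get(id);
    }

    getProviders(): LanguageModelProvider[] {
        return Array.from(this.providers.values());
    }
}
```

In a frontend implementation, `getProviders()` would merge the locally registered providers with those contributed by the backend; the sketch only shows the local half.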
Introduces ChatModel, including the nested ChatRequestModel and ChatResponseModel, to represent chat sessions. The chat models allow inspecting and tracking requests and their responses. Also introduces the ChatService, which can be used to manage chat sessions and to send requests. The architecture is inspired by the VS Code implementation but aims to be more generic.
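The nesting of requests and responses inside a session could look roughly like the following minimal sketch; the shapes are assumptions for illustration and the real interfaces in the PR may differ:

```typescript
// Hypothetical, simplified shape of the chat session model described above.
interface ChatRequestModel {
    readonly id: string;
    readonly text: string;
}

interface ChatResponseModel {
    readonly requestId: string;
    isComplete: boolean;
    content: string;
}

class ChatModel {
    private readonly requests: ChatRequestModel[] = [];
    private readonly responses = new Map<string, ChatResponseModel>();

    // Adding a request creates an (initially empty) response that can be
    // tracked while tokens stream in.
    addRequest(request: ChatRequestModel): ChatResponseModel {
        this.requests.push(request);
        const response: ChatResponseModel = {
            requestId: request.id,
            isComplete: false,
            content: ''
        };
        this.responses.set(request.id, response);
        return response;
    }

    getRequests(): readonly ChatRequestModel[] {
        return this.requests;
    }

    getResponse(requestId: string): ChatResponseModel | undefined {
        return this.responses.get(requestId);
    }
}
```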
eclipsesource/osweek-2024#47
Logs LanguageModel requests and their results to a separate output channel per LanguageModel.
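Per-model logging could be organized along these lines; in Theia the channels would come from the output channel service, so the string-array channels and all names below are purely illustrative:

```typescript
// Sketch of per-model request/result logging. A plain string array stands in
// for an output channel here; the real code would use Theia output channels.
class ModelRequestLogger {
    private readonly channels = new Map<string, string[]>();

    // Lazily create one channel per language model.
    private channel(modelId: string): string[] {
        let lines = this.channels.get(modelId);
        if (!lines) {
            lines = [];
            this.channels.set(modelId, lines);
        }
        return lines;
    }

    logRequest(modelId: string, prompt: string): void {
        this.channel(modelId).push(`> ${prompt}`);
    }

    logResult(modelId: string, result: string): void {
        this.channel(modelId).push(`< ${result}`);
    }

    getLog(modelId: string): readonly string[] {
        return this.channel(modelId);
    }
}
```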
- rename the open button to 'Select Folder'
- set the default folder name to 'prompt-templates'
- check if a template with a given id was overridden
- adapt calls to return the overridden template if so
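The override lookup described above amounts to a simple two-map fallback; this is a minimal sketch with hypothetical names, not the actual PromptService API:

```typescript
// Sketch of override-aware template lookup: an override, if present,
// shadows the default template registered under the same id.
class PromptTemplateStore {
    private readonly defaults = new Map<string, string>();
    private readonly overrides = new Map<string, string>();

    registerDefault(id: string, template: string): void {
        this.defaults.set(id, template);
    }

    override(id: string, template: string): void {
        this.overrides.set(id, template);
    }

    // Returns the overridden template if one exists, otherwise the default.
    getTemplate(id: string): string | undefined {
        return this.overrides.get(id) ?? this.defaults.get(id);
    }
}
```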
- add temporary test command
- implement initial customization service reading the templates on preference changes
- no file watching yet
Review and adapt prompt templates
Fixes an issue with circular injections when using the PromptService in an agent.
Fixes a circular dependency by removing the prompt collection. Instead, the PromptService is filled programmatically on start.
Co-authored-by: Alexandra Buzila <[email protected]>
Co-authored-by: Olaf Lessenich <[email protected]>
Adds a new ai-code-completion Theia extension which provides the CodeCompletionAgent. The agent is integrated into Monaco via a CompletionItemProvider for all files. The extension offers two preferences to enable/disable the feature and to control its behavior. Co-authored-by: Stefan Dirix <[email protected]>
The language model selection value should be initialized with the value from the settings, if it exists.
Implements:
- Copy
- Insert at cursor
- Monaco Editor
- Navigating to the location of the file (if provided)

Co-authored-by: Lucas Koehler <[email protected]>
- Ensure we always create a variable part even for undefined variables
  - Prompt text will then default to user text (including '#')
- Allow adopters to register resolvers with priority
  - Given a particular variable name, argument and context
- Automatically resolve all variable parts in a chat request
  - Ensure parts always provide a matching prompt text
- Make sure variable service is part of core
  - Generic variable handling for all agents and UI layers
  - Chat-specific variable handling only in the chat layer
  - Provide example of 'today' variable

Fixes eclipsesource/osweek-2024#46
Co-authored-by: Christian W. Damus <[email protected]>
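Priority-based resolution with a fallback to the raw user text could be sketched as follows; the names and shapes here are assumptions, not the actual variable service API:

```typescript
// Sketch of priority-based variable resolution. Resolvers are consulted in
// descending priority order; if none matches, the raw user text (including
// the leading '#') is used, so a variable part is always produced.
interface VariableResolver {
    priority: number;
    resolve(name: string, arg?: string): string | undefined;
}

class VariableService {
    private readonly resolvers: VariableResolver[] = [];

    registerResolver(resolver: VariableResolver): void {
        this.resolvers.push(resolver);
        // Higher priority first.
        this.resolvers.sort((a, b) => b.priority - a.priority);
    }

    resolve(name: string, arg?: string): string {
        for (const resolver of this.resolvers) {
            const value = resolver.resolve(name, arg);
            if (value !== undefined) {
                return value;
            }
        }
        // Undefined variables fall back to the user text.
        return `#${name}${arg ? ':' + arg : ''}`;
    }
}
```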
Added a view for displaying all the configured llamafiles. Configured llamafiles can be started and killed. One llamafile can be set as active; it is then used in the chat. The chat integration is currently hardcoded to use the active llamafile language model. This should be changed as soon as the chat integration has a dropdown to select the language model (#42). A follow-up will be created to describe the next steps.
What it does
Adds a RAG-based variable provider specialized for the GLSP documentation. It uses langchain for performing RAG.
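The flow behind such a variable is: retrieve the documentation snippets most relevant to the user's question and splice them into the prompt. The PR uses langchain for the actual retrieval; the naive keyword scorer below is only a self-contained stand-in for that step, and all names are illustrative:

```typescript
// Sketch of the RAG flow behind a #glsp-style variable. A trivial keyword
// scorer stands in for the langchain-based retrieval used in the PR.
interface DocChunk {
    source: string;
    text: string;
}

// Return the k chunks that share the most terms with the question.
function retrieve(question: string, chunks: DocChunk[], k = 2): DocChunk[] {
    const terms = question.toLowerCase().split(/\W+/).filter(t => t.length > 2);
    const score = (c: DocChunk) =>
        terms.filter(t => c.text.toLowerCase().includes(t)).length;
    return chunks
        .map(c => [score(c), c] as const)
        .filter(([s]) => s > 0)
        .sort((a, b) => b[0] - a[0])
        .slice(0, k)
        .map(([, c]) => c);
}

// Resolve the variable to the retrieved snippets, tagged with their source,
// so the agent can splice them into the prompt.
function resolveGlspVariable(question: string, chunks: DocChunk[]): string {
    return retrieve(question, chunks)
        .map(c => `[${c.source}]\n${c.text}`)
        .join('\n\n');
}
```

A real implementation would replace `retrieve` with embedding-based similarity search over an indexed copy of the documentation.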
How to test
- Clone the [email protected]:eclipse-glsp/glsp-website-source.git repository
- Copy https://github.com/eclipse-glsp/glsp-server-node/blob/main/CHANGELOG.md to the content/documentation root directory
- Either modify the GLSPVariableContribution to default to your clone of the source repo or hand it over as a parameter in the chat
- In the Chat View, ask GLSP questions with and without #glsp and compare the difference. See the log for the RAG output.

Follow-ups
Review checklist
Reminder for reviewers