AWS re:Invent 2025 - Grounding GenAI on Enterprise Data with AWS AgentCore + Coveo (MAM221)
Grounding GenAI on Enterprise Data with AWS AgentCore + Coveo
Importance of Grounding Large Language Models (LLMs)
LLMs need to be grounded on enterprise data to be factually accurate and contextually relevant
Grounding provides traceability and trust, allowing users to see where information is coming from
Grounding enables dynamic knowledge updates, allowing LLMs to use the latest information
Grounding is crucial to reduce hallucination and ensure LLMs provide reliable, truthful answers
Key Requirements for an Effective Retriever
Depth of Knowledge: The retriever needs to have access to a large, comprehensive set of enterprise data to ground the LLM.
Contextual Awareness: The retriever should be able to personalize the information returned based on the user's identity, access, and history.
Relevance Quality: The retriever must provide highly relevant and accurate information to the LLM, as this directly impacts the quality of the LLM's responses.
Execution Speed: The retrieval process needs to be fast to enable seamless integration with the LLM and provide a responsive user experience.
Format Flexibility: The retriever should support various output formats, such as passages, full documents, and links, to accommodate different use cases and LLM requirements.
Conciseness: The retriever should return the most concise and precise information to make it easier for the LLM to consume and process.
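The requirements above can be read as a contract a retriever must satisfy. The following is a minimal sketch of that contract in Python, with a toy in-memory implementation; all names (`RetrievedPassage`, `InMemoryRetriever`, the `kb://` URLs) are hypothetical and do not reflect Coveo's actual API.

```python
from dataclasses import dataclass
from typing import Literal


@dataclass
class RetrievedPassage:
    text: str        # concise, precise content for the LLM to consume
    source_url: str  # supports traceability and source attribution


class InMemoryRetriever:
    """Toy retriever illustrating the contract. A real retriever would
    query a large enterprise index with permissions and personalization
    applied, and would need to respond quickly."""

    def __init__(self, corpus: dict[str, str]):
        self.corpus = corpus  # url -> document text

    def retrieve(
        self,
        query: str,
        user_id: str,  # contextual awareness: identity drives filtering
        output: Literal["passages", "documents", "links"] = "passages",
        max_results: int = 5,
    ) -> list[RetrievedPassage]:
        # Naive substring match standing in for real relevance ranking.
        hits = [
            RetrievedPassage(text=text, source_url=url)
            for url, text in self.corpus.items()
            if query.lower() in text.lower()
        ]
        return hits[:max_results]


retriever = InMemoryRetriever({"kb://vpn": "How to reset your VPN token."})
results = retriever.retrieve("vpn", user_id="alice")
```

The `output` parameter illustrates format flexibility (passages vs. full documents vs. links), and `max_results` illustrates conciseness: the caller bounds how much material the LLM has to process.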
Coveo's Approach to Grounding LLMs
Coveo offers a set of retrieval tools, including passage retrieval, answer generation, search, and full document retrieval, through its MCP (Model Context Protocol) server.
Coveo's architecture integrates the retrieval tools with AWS AgentCore, allowing the agent to leverage Coveo's capabilities to ground the LLM.
The agent can access Coveo's tools through a gateway provided by AWS AgentCore, ensuring secure and authenticated access to the enterprise data.
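The gateway pattern described above can be sketched as a dispatcher that authenticates the agent's request before routing it to a named retrieval tool. This is a simplified illustration, not the AgentCore API: the tool bodies, the token check, and all function names are stand-ins.

```python
from typing import Callable

# Hypothetical stand-ins for Coveo's retrieval tools behind the gateway.
def passage_retrieval(query: str) -> str:
    return f"passages for: {query}"

def full_document_retrieval(query: str) -> str:
    return f"full document for: {query}"

TOOLS: dict[str, Callable[[str], str]] = {
    "passage_retrieval": passage_retrieval,
    "full_document_retrieval": full_document_retrieval,
}

def gateway_call(tool: str, query: str, token: str) -> str:
    """Mimics the gateway's role: authenticate the agent first, then
    route the request to the named tool over the enterprise data."""
    if token != "valid-token":  # stand-in for real credential validation
        raise PermissionError("unauthenticated agent request")
    if tool not in TOOLS:
        raise KeyError(f"unknown tool: {tool}")
    return TOOLS[tool](query)
```

The key design point is that the agent never talks to the data source directly; every tool call passes through the authenticated gateway.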
The "Secret Sauce": Prompting and MCP Description
Prompting:
Crafting a comprehensive prompt is crucial for the agent to effectively utilize the retrieval tools.
The prompt should include clear directives, define the agent's role and responsibilities, and provide guidance on when to use different retrieval tools.
It should also differentiate between memory and fresh information from the retriever, and specify requirements for source attribution.
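As an illustration of those prompting guidelines, here is a hypothetical system prompt that defines the agent's role, directs it to the right tool, separates memory from retrieved information, and requires source attribution. The wording is an example, not the prompt used in the session.

```python
# Illustrative system prompt implementing the guidelines above.
SYSTEM_PROMPT = """\
You are an enterprise support assistant.

Directives:
- Use passage_retrieval for specific factual questions; use
  full_document_retrieval when the user needs an entire policy or guide.
- Prefer fresh information from the retriever over your own memory;
  treat memorized knowledge as potentially stale.
- Cite the source URL of every retrieved passage you rely on.
- If the retriever returns nothing relevant, say you don't know.
"""

def build_messages(user_question: str) -> list[dict[str, str]]:
    """Assemble a chat payload with the grounding directives up front."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("How do I reset my VPN token?")
```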
MCP Description:
The MCP (Model Context Protocol) server provided by Coveo contains the retrieval tools that the agent can leverage.
It is important to define clear and concise descriptions for these tools, following best practices such as using snake_case for naming and frontloading key information.
The tool descriptions are used by the LLM to determine when and how to utilize the retrieval capabilities, so they need to be precise and informative.
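A tool descriptor following that guidance might look like the sketch below: a snake_case name, and a description whose first sentence frontloads what the tool returns and when to use it. The field names and schema are illustrative, not Coveo's actual tool definition.

```python
# Hypothetical tool descriptor: snake_case name, key information
# frontloaded so the LLM can decide quickly whether to call it.
PASSAGE_RETRIEVAL_TOOL = {
    "name": "passage_retrieval",
    "description": (
        "Retrieve concise, relevant passages from the enterprise "
        "knowledge base to ground an answer. Use for specific factual "
        "questions; returns short text snippets with source URLs."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search query"},
            "max_results": {"type": "integer", "default": 5},
        },
        "required": ["query"],
    },
}
```

Because the LLM selects tools from these descriptions alone, a vague or buried description directly degrades tool-selection accuracy.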
Conclusion and Next Steps
Coveo offers a comprehensive solution for grounding LLMs on enterprise data, leveraging its retrieval tools and integration with AWS AgentCore.
By providing a robust retriever and a well-designed prompting and MCP description approach, Coveo enables enterprises to build reliable and trustworthy LLM-powered applications.
Attendees are encouraged to visit the Coveo booth (1529) and explore the AI Masterclass webinar series for further information and guidance.