Amazon Bedrock: Analyzing Jira tickets using LLMs (DEV211)
Summarizing the Video Transcription
Understanding the Jira Ticket Use Case
The speaker supports a national lab, and a new team member asked how the team could better support customers and identify their needs.
The lab has generated over 4,000 Jira tickets over the last 5 years, which contain a treasure trove of data that could be leveraged.
However, manually reading through 4,000 Jira tickets is not a practical solution.
The goal is to find an automated process to extract insights from the Jira ticket data, create better FAQs for customers, and identify emerging trends.
Diving into Amazon Bedrock and Large Language Models
The speaker initially tried Amazon Comprehend for entity recognition, but ran into issues with data cleanliness.
Amazon Bedrock was then introduced, which provides access to large language models from providers such as Meta, Anthropic, and Amazon.
Setting up the communication pathway with Bedrock using the Boto3 SDK was a crucial step.
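As a minimal sketch of what that communication pathway might look like: the snippet below builds a request body for an Anthropic Claude model on Bedrock and invokes it through the Boto3 `bedrock-runtime` client. The model ID and prompt wording are illustrative assumptions, not taken from the talk, and the call requires AWS credentials with Bedrock access.

```python
import json


# Hypothetical model ID; substitute one enabled in your AWS account.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"


def build_claude_request(prompt: str, max_tokens: int = 512) -> str:
    """Build the JSON request body expected by Claude models on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


def summarize_ticket(ticket_text: str) -> str:
    """Send one Jira ticket to Bedrock and return the model's summary."""
    # Imported lazily so the payload helper above works without the AWS SDK.
    import boto3

    client = boto3.client("bedrock-runtime")  # needs AWS credentials configured
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=build_claude_request(f"Summarize this Jira ticket:\n\n{ticket_text}"),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

Separating payload construction from the network call keeps the request format easy to inspect and adapt when switching to a different model family.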
Creating a knowledge base, backed by an Amazon OpenSearch Serverless vector store, was another important step.
The speaker highlighted the importance of determining the right chunking strategy for the data, depending on the file types (e.g., small Jira tickets vs. large PDFs or Word documents).
Selecting appropriate large language models, both for prompting and for generating the embeddings used to synchronize the vector store, was another lesson learned.
The Way Forward
The team has experimented with around 50 Jira tickets so far and is working to extend the analysis to all 4,000.
The future goals include exploring prompt engineering, using different large language models, and analyzing the data to identify recurring themes, popular topics, and problem areas.
The team also plans to create a more automated pipeline and take a DevOps approach to the development process.
The ultimate aim is to use the insights from the Jira ticket data to create better FAQs and identify emerging trends that can help improve customer support.
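As a starting point for that prompt-engineering work, a theme-extraction prompt might be assembled like this. The template wording and the helper are assumptions for illustration, not the team's actual prompts; in a pipeline, batches of tickets would be formatted this way and sent to Bedrock.

```python
# Illustrative prompt template for surfacing recurring themes from tickets.
THEME_PROMPT = """You are analyzing Jira support tickets for a national lab.

Tickets:
{tickets}

List the recurring themes, most common topics, and problem areas,
and suggest FAQ entries that would address them."""


def build_theme_prompt(tickets: list[str]) -> str:
    """Format a batch of ticket summaries into the analysis prompt."""
    numbered = "\n".join(f"{i}. {t}" for i, t in enumerate(tickets, 1))
    return THEME_PROMPT.format(tickets=numbered)
```

Keeping the template separate from the formatting logic makes it easy to iterate on wording and compare results across different models.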
Key Takeaways
Understanding the data and its characteristics is crucial, even if the goal is to avoid manual processing.
Properly setting up the communication with Bedrock and selecting the right large language models is essential.
Determining the right chunking strategy for the data is important, especially when dealing with diverse file types.
Leveraging the knowledge base and vector store database can significantly improve the performance and efficiency of the large language model-based analysis.
Adopting a DevOps approach and creating automated pipelines can help scale the solution and make it more robust.