Presented by:
Yuan Garcia
No materials for the event yet, sorry!
In this lightning talk, we will discuss how we are using Snap!'s ~400-page documentation, together with RAG (Retrieval-Augmented Generation), to build an LLM-based assistant that pulls from the documentation to help answer common questions. We first have the Large Language Model process Snap!'s documentation and build an index, breaking the 400 pages down into smaller, more searchable chunks. When prompted, the model then retrieves and uses the most relevant chunks of text and images. This is useful because it reduces learning friction and gives beginners access to tools they might not be aware of, while also improving accessibility; because answers reference the existing documentation, they stay accurate and contextually relevant.
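As a rough illustration of the pipeline described above (a sketch, not the implementation shown in the talk), the snippet below chunks a plain-text export of the manual, indexes the chunks, and retrieves the most relevant ones for a question. The file name "snap_manual.txt", the chunk size, and the use of TF-IDF in place of a real embedding model are all assumptions for illustration.

```python
# Minimal RAG-style retrieval sketch (illustrative only; not the talk's code).
# Assumes the Snap! manual has been exported to plain text as "snap_manual.txt"
# and uses TF-IDF similarity as a stand-in for an embedding model.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def chunk_text(text, chunk_size=800, overlap=100):
    """Split the manual into overlapping character chunks for indexing."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


def build_index(chunks):
    """Index all chunks so a question can be matched against them."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(chunks)
    return vectorizer, matrix


def retrieve(question, chunks, vectorizer, matrix, top_k=3):
    """Return the chunks most similar to the question."""
    query_vec = vectorizer.transform([question])
    scores = cosine_similarity(query_vec, matrix)[0]
    best = scores.argsort()[::-1][:top_k]
    return [chunks[i] for i in best]


if __name__ == "__main__":
    with open("snap_manual.txt", encoding="utf-8") as f:
        manual = f.read()

    chunks = chunk_text(manual)
    vectorizer, matrix = build_index(chunks)
    context = retrieve("How do I make a custom block?", chunks, vectorizer, matrix)

    # The retrieved chunks would then be passed to the LLM as context
    # alongside the user's question.
    prompt = "Answer using only this documentation:\n\n" + "\n---\n".join(context)
    print(prompt[:500])
```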
- Date: 2024 July 31, 14:40 CET
- Duration: 7 min
- Room: Online Room 1
- Conference: Snap!shot 2024
- Type: Lightning Talk