More Than Just Fast: A Holistic Guide to High-Performance AI Agents
At SnapLogic, while building and refining an AI Agent for a large customer in the healthcare industry, we embarked on a journey of holistic performance optimization. We didn't just want to make it faster; we wanted to make it better across the board. This journey taught us that significant gains are found by looking at the entire system, from the back-end data sources to the pixels on the user's screen. Here’s our playbook for building a truly high-performing AI agent, backed by real-world metrics.

The Foundation: Data and Architecture

Before you can tune an engine, you have to build it on a solid chassis. For an AI Agent, that chassis is its core architecture and its relationship with data.

Choose the Right Brain for the Job: Not all LLMs are created equal. The "best" model depends entirely on the nature of the tasks your agent needs to perform. A simple agent with one or two tools has very different requirements from a complex agent that needs to reason, plan, and execute dynamic operations. Matching the model to the task complexity is key to balancing cost, speed, and capability.

- Simple, single-tool tasks — fast and cost-effective models. Goal: executing a well-defined task with a limited toolset (e.g., simple data lookups, classification). These models are fast and cheap, perfect for high-volume, low-complexity actions.
- Multi-tool orchestration — balanced models. Goal: reliably choosing the correct tool from several options and handling moderately complex user requests. These models offer a great blend of speed, cost, and improved instruction-following for a good user experience.
- Complex reasoning and dynamic tasks — high-performance, sophisticated models. Goal: handling ambiguous requests that require multi-step reasoning, planning, and advanced tool use such as dynamic SQL query generation. These are the most powerful (and most expensive) models, essential for tasks where deep understanding and accuracy are critical.

Deconstruct Complexity with a Multi-Agent Approach: A single, monolithic agent designed to do everything can become slow and unwieldy. A more advanced approach is to break down a highly complex agent into a team of smaller, specialized agents. This strategy offers two powerful benefits:

- It enables the use of faster, cheaper models. Each specialized agent has a narrower, more defined task, which often means you can use a less powerful (and faster) LLM for that specific job, reserving your most sophisticated model for the "manager" agent that orchestrates the others.
- It dramatically increases reusability. These smaller, function-specific agents and their underlying tools are modular. They can be easily repurposed and reused in the next AI Agent you build, accelerating future development cycles.

Set the Stage for Success with Data: An AI Agent is only as good as the data it can access. We learned that optimizing data access is a critical first step. This involved:

Implementing Dynamic Text-to-SQL: Instead of relying on rigid, pre-defined queries, we empowered the agent to build its own SQL queries dynamically from natural language. This flexibility required a deep initial investment in analyzing and understanding the critical columns and data formats our agent would need from sources like Snowflake.

Generating Dedicated Database Views: To support the agent, we generated dedicated views on top of our source tables. This strategy serves two key purposes: it dramatically reduces query times by pre-joining and simplifying complex data, and it allows us to remove sensitive or unnecessary data from the source, ensuring the agent only has access to what it needs.
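To make the idea concrete, here is a minimal sketch of how a dynamic text-to-SQL tool can be wired up outside of SnapLogic. The view name, column descriptions, and model are illustrative assumptions rather than the customer's actual schema, and the OpenAI client is used only as a stand-in for whichever LLM provider the agent calls.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any LLM provider works the same way

# Pre-analyzed description of a dedicated, pre-joined view (illustrative names).
SCHEMA_HINT = """
View SALES_ORDERS_V (pre-joined, sensitive columns removed):
  ORDER_ID        NUMBER
  CUSTOMER_NAME   VARCHAR
  ORDER_DATE      DATE
  TOTAL_AMOUNT    NUMBER(12,2)
"""

def question_to_sql(question: str) -> str:
    """Ask the model to translate a natural-language question into SQL for the view."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; match the model tier to the task complexity
        messages=[
            {"role": "system",
             "content": "You translate questions into a single Snowflake SQL query. "
                        "Only use the view and columns described below.\n" + SCHEMA_HINT},
            {"role": "user", "content": question},
        ],
        temperature=0,  # keep generation deterministic for repeatable queries
    )
    return response.choices[0].message.content

print(question_to_sql("What was the total order amount per customer last quarter?"))
```

In the actual agent, this generation step is exposed as a tool the LLM can call, and the resulting SQL is executed against the dedicated view rather than the raw source tables.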
Pre-loading the Schema for Agility: Making the database schema available to the agent is critical for accurate dynamic SQL generation. To optimize this, we pre-load the relevant schemas at startup. This simple step saves precious time on every single query the agent generates, contributing significantly to the overall responsiveness.

The Engine: Tuning the Agent’s Logic and Retrieval

Our Diagnostic Toolkit: Using AI to Analyze AI

Before we could optimize the engine, we needed to know exactly where the friction was. Our diagnostic process followed a two-step approach:

High-Level Analysis: We started in the SnapLogic Monitor, which provides a high-level, tabular view of all pipeline executions. This dashboard is the starting point for any performance investigation. It gives a list of all runs, their status, and their total duration. By clicking the Download table button, you can export this summary data as a CSV. This allows for a quick, high-level analysis to spot outliers and trends without immediately diving into verbose log files.

AI-Powered Deep Dive: Once we identified a bottleneck from the dashboard—a pipeline that was taking longer than expected—we downloaded the detailed, verbose log files for those specific pipeline runs. We then fed these complex logs into an AI tool of our choice. This "AI analyzing AI" approach helped us instantly pinpoint key issues that would have taken hours to find manually. For example, this process uncovered an unnecessary error loop caused by duplicate JDBC driver versions, which significantly extended the execution time of our Snowflake Snaps. Fixing this single issue was a key factor in the 68% performance improvement we saw when querying our technical knowledge base.

With a precise diagnosis in hand, we turned our attention to the agent's "thinking" process. This is where we saw some of our most dramatic performance gains. Here is how we achieved it:

Crafting the Perfect Instructions (System Prompts): We transitioned from generic prompts to highly customized system prompts, optimized for both the specific task and the chosen LLM. A simpler model gets a simpler, more direct prompt, while a sophisticated model can be instructed to "think step-by-step" to improve its reasoning.

A Simple Switch for Production Speed: One of the most impactful, low-effort optimizations came from how we use a key development tool: the Record Replay Snap. During the creation and testing of our agent's pipelines, this Snap is invaluable for capturing and replaying data, but it adds about 2.5 seconds of overhead to each execution. For a simple agent run involving a driver, a worker, and one tool, this adds up to 7.5 seconds of unnecessary latency in a production environment. Once our pipelines were successfully tested, we switched these Snaps to "Replay Only" mode. This simple change instantly removed the recording overhead, providing a significant speed boost across all agent interactions.

Smarter, Faster Data Retrieval (RAG Optimization): For our Retrieval-Augmented Generation (RAG) tools, we focused on a few key levers (a short retrieval sketch follows this list):

- Finding the Sweet Spot (k value): We tuned the k value—the number of documents retrieved for context. For our product information retrieval use case, adjusting this value was the key to our 63% speed improvement. It’s the art of getting just enough context for an accurate answer without creating unnecessary work for the LLM.
- Surgical Precision with Metadata: Instead of always performing a broad vector search, we enabled the agent to use metadata. If it knows a document's unique_ID, it can fetch that exact document. This is the difference between browsing a library and using a call number. It's swift and precise.
- Ensuring Consistency: We set the temperature to a low value during the data extraction and indexing process. This ensures that the data chunks are created consistently, leading to more reliable and repeatable search results.
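To make the two retrieval levers concrete, here is a minimal, self-contained sketch. It uses a small in-memory list of pre-embedded chunks as a stand-in for the real vector database; the embedding values, document IDs, and texts are illustrative assumptions, not anything from the production agent.

```python
import numpy as np

# Toy stand-in for a vector store: (unique_ID, embedding, original text).
# Real embeddings come from an embedder model; these tiny vectors are made up.
DOCS = [
    ("doc-001", np.array([0.9, 0.1, 0.0]), "Product A spec sheet"),
    ("doc-002", np.array([0.8, 0.2, 0.1]), "Product A warranty terms"),
    ("doc-003", np.array([0.1, 0.9, 0.3]), "Returns policy"),
]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_context(query_vector, k=2):
    """Lever 1: broad vector search, where k controls how much context the LLM sees."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vector, d[1]), reverse=True)
    return [text for _, _, text in ranked[:k]]

def fetch_by_id(unique_id):
    """Lever 2: exact lookup by metadata when the agent already knows the unique_ID."""
    return next(text for doc_id, _, text in DOCS if doc_id == unique_id)

print(retrieve_context(np.array([0.85, 0.15, 0.05]), k=2))  # tune k for speed vs. context
print(fetch_by_id("doc-002"))                                # no vector search needed
```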
The Results: A Data-Driven Transformation

Our optimization efforts led to significant, measurable improvements across several key use cases for the AI Agent.

| Use Case | Before Optimization | After Optimization | Speed Improvement |
| Querying Technical Knowledge Base | 92 seconds | 29 seconds | ~68% Faster |
| Processing Sales Order Data | 32 seconds | 10.7 seconds | ~66% Faster |
| RAG Retrieval | 5.8 seconds | 2.1 seconds | ~63% Faster |
| Production Optimization (Replay Only) | 20 seconds | 17.5 seconds | ~12% Faster* |

(*This improvement came from switching development Snaps to a production-ready "Replay Only" mode, removing the latency inherent to the testing phase.)

The Experience: Focusing on the User

Ultimately, all the back-end optimization in the world is irrelevant if the user experience is poor. The final layer of our strategy was to focus on the front-end application.

- Engage, Don't Just Wait: A simple "running..." message can cause user anxiety and make any wait feel longer. Our next iteration will provide a real-time status of the agent's thinking process (e.g., "Querying product database...", "Synthesizing answer..."). This transparency keeps the user engaged and builds trust.
- Guide the User to Success: We learned that a blank text box can be intimidating. By providing predefined example prompts and clearly explaining the agent's capabilities, we guide the user toward successful interactions.
- Deliver a Clear Result: The final output must be easy to consume. We format our results cleanly, using tables, lists, and clear language to ensure the user can understand and act on the information instantly.

By taking this holistic approach, we optimized the foundation, the engine, and the user experience to build an AI Agent that doesn't just feel fast. It feels intelligent, reliable, and genuinely helpful.
A Comparison of Assistant and Non-Assistant Tool Calling Pipelines

Introduction

At a high level, the logic behind assistant tool calling and non-assistant tool calling is fundamentally the same: the model instructs the caller to invoke specific function(s) in order to answer the user's query. The caller then executes the function and returns the result to the model, which uses it to generate an answer. This process is identical for both. However, since the assistant specifies the function definitions and access to tools as part of the Assistant configuration within the OpenAI or Azure OpenAI dashboard rather than within your pipelines, there will be major differences in the pipeline configuration. Additionally, submitting tool responses to an Assistant comes with significant changes and challenges, since the Assistant owns the conversational history rather than the pipeline. This article focuses on contrasting these differences. For a detailed understanding of assistant and non-assistant pipelines, please refer to the following articles:

- Non-assistant pipelines: Introducing Tool Calling Snaps and LLM Agent Pipelines
- Assistant pipelines: Introducing Assistant Tool Calling Pipelines

Part 1: Which System to Use: Non-Assistant or Assistant?

When to Use Non-Assistant Tool Calling Pipelines:

Non-Assistant Tool Calling Pipelines offer greater flexibility and control over the tool calling process, making them suitable for the following specific scenarios.

When preferring a “run-time“ approach: Non-Assistant pipelines exhibit greater flexibility in function definition, offering a more "runtime" approach. You can dynamically adjust the available functions by simply adding or removing Function Generator snaps within the pipeline. In contrast, Assistant Tool Calling Pipelines necessitate a "design-time" approach. All available functions must be pre-defined within the Assistant configuration, requiring modifications to the Assistant definition in the OpenAI/Azure OpenAI dashboard.

When wanting detailed chat history: Non-Assistant pipelines provide a comprehensive history of the interaction between the model and the tools in the output message list. The message list within the Non-Assistant pipeline preserves every model response and the results of each function execution. This detailed logging allows for thorough debugging, analysis, and auditing of the tool calling process. In contrast, Assistant pipelines maintain a more concise message history, focusing on key steps and omitting some intermediate details. While this can simplify the overall view of the message list, it can also make it more difficult to trace the exact sequence of events or diagnose issues that may arise during tool execution in child pipelines.

When needing easier debugging and iterative development: Non-Assistant pipelines facilitate more granular debugging and iterative development. You can easily simulate individual steps of the agent by making calls to the model with specific function call histories. This allows for more precise control and experimentation during development, enabling you to isolate and address issues more effectively. For example, by providing three messages, we can "force" the model to call the second tool, allowing us to inspect the tool calling process and its result against our expectations (see the sketch below). In contrast, debugging and iterating with Assistant pipelines can be more cumbersome. Since Assistants manage the conversation history internally, to simulate a specific step, you often need to replay the entire interaction from the beginning, potentially requiring multiple iterations to reach the desired state. This internal management of history makes it less straightforward to isolate and debug specific parts of the interaction. To simulate calling the third tool, we need to start a new thread from scratch and then call tool1 and tool2, repeating the preceding process. The current thread cannot be reused.
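Outside SnapLogic, the same "forcing" trick looks roughly like the sketch below against the OpenAI Chat Completions API. The two tool definitions, IDs, and arguments are invented for illustration; the point is that by replaying a user message, the model's first tool call, and that tool's result, the next request leaves the model positioned to call the second tool.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Two hypothetical tools; in a SnapLogic pipeline these come from Function Generator snaps.
tools = [
    {"type": "function", "function": {
        "name": "get_order", "description": "Look up an order by ID",
        "parameters": {"type": "object", "properties": {"order_id": {"type": "string"}},
                       "required": ["order_id"]}}},
    {"type": "function", "function": {
        "name": "get_shipping_status", "description": "Get shipping status for an order",
        "parameters": {"type": "object", "properties": {"order_id": {"type": "string"}},
                       "required": ["order_id"]}}},
]

# Three replayed messages: the user prompt, the model's first tool call, and that tool's result.
messages = [
    {"role": "user", "content": "Where is order A100 right now?"},
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "get_order", "arguments": json.dumps({"order_id": "A100"})}}]},
    {"role": "tool", "tool_call_id": "call_1",
     "content": json.dumps({"order_id": "A100", "status": "shipped"})},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
# If the simulation works as expected, the model now asks for the second tool.
print(response.choices[0].message.tool_calls)
```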
When to Use Assistant Tool Calling Pipelines:

Assistant Tool Calling Pipelines also offer a streamlined approach to integrating LLMs with external tools, prioritizing ease of use and built-in functionalities. Consider using Assistant pipelines in the following situations:

For simplified pipeline design: Assistant pipelines reduce pipeline complexity by eliminating the need for Tool Generator snaps. In Non-Assistant pipelines, these snaps are essential for dynamically generating tool definitions within the pipeline itself. With Assistant pipelines, tool definitions are configured beforehand within the Assistant settings in the OpenAI/Azure OpenAI dashboard. This pre-configuration results in shorter, more manageable pipelines, simplifying development and maintenance.

When leveraging built-in tools is required: If your use case requires functionalities like searching external files or executing code, Assistant pipelines offer these capabilities out of the box through their built-in File Search and Code Interpreter tools (see Part 5 for more details). These tools provide a convenient and efficient way to extend the LLM's capabilities without requiring custom implementation within the pipeline.

Part 2: A Brief Introduction to the Two Pipelines

Non-assistant tool calling pipelines — key points:
- Functions are defined in the worker.
- The worker pipeline's Tool Calling snap manages all model interactions.
- Function results are collected and sent to the model in the next iteration via the Tool Calling snap.

Assistant tool calling pipelines — key points:
- No need to define functions in any pipeline; functions are pre-defined in the assistant.
- Two snaps interact with the model: Create and Run Thread, and Submit Tool Outputs.
- Function results are collected and sent to the model immediately during the current iteration.

Part 3: Comparison Between the Two Pipelines

Here are the two primary reasons why the assistant and non-assistant pipelines differ, listed in decreasing order of importance:

Distinct methods of submitting tool results: For non-assistant pipelines, tool results are appended to the message history list and subsequently forwarded to the model during the next iteration. Non-assistant pipelines exhibit a "while-loop" behavior: the worker interacts with the model at the beginning of the iteration, and while any tools need to be called, the worker executes those tool(s). In contrast, for assistants, tool results are sent to a dedicated endpoint designed to handle tool call results within the current iteration. Assistant pipelines operate more like a "do-while-loop": the driver initiates the interaction by sending the prompt to the model, then the worker executes the tool(s) first and interacts with the model at the end of the iteration to deliver the tool results.

Predefined and stored tool definitions for assistants: Unlike non-assistant pipelines, assistants have the capability to predefine and store function definitions. This eliminates the need for the three Function Generator snaps to repeatedly transmit tool definitions to the model with each request. Consequently, the worker pipeline for assistants appears shorter.

Due to the aforementioned differences, non-assistant pipelines have only one interaction point with the model, located in the worker. In contrast, assistant pipelines involve two interaction points: the driver sends the initial prompt to the model, while the worker sends tool results back to the model.
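For comparison with the Chat Completions sketch earlier, here is roughly what the assistant-side "do-while" flow looks like against the raw OpenAI Assistants API (a beta API at the time of writing, so method names may change). The assistant ID and the execute_tool helper are placeholders; in SnapLogic, this loop is what the Create and Run Thread and Submit Tool Outputs snaps implement inside the PipeLoop.

```python
import json
import time
from openai import OpenAI

client = OpenAI()

def execute_tool(name: str, arguments: dict) -> dict:
    """Placeholder for the child pipeline that actually runs the tool."""
    return {"tool": name, "result": "stub"}

# Driver: send the prompt and start the run (tools are already defined on the Assistant).
run = client.beta.threads.create_and_run(
    assistant_id="asst_XXXX",  # placeholder Assistant ID
    thread={"messages": [{"role": "user", "content": "Where is order A100 right now?"}]},
)

# Worker loop: execute tools first, then report results back within the same iteration.
while True:
    run = client.beta.threads.runs.retrieve(thread_id=run.thread_id, run_id=run.id)
    if run.status == "requires_action":  # i.e. $run.required_action is not null
        outputs = []
        for call in run.required_action.submit_tool_outputs.tool_calls:
            result = execute_tool(call.function.name, json.loads(call.function.arguments))
            outputs.append({"tool_call_id": call.id, "output": json.dumps(result)})
        run = client.beta.threads.runs.submit_tool_outputs(
            thread_id=run.thread_id, run_id=run.id, tool_outputs=outputs)
    elif run.status in ("completed", "failed", "cancelled", "expired"):
        break
    else:
        time.sleep(1)  # still queued or in progress

print(client.beta.threads.messages.list(thread_id=run.thread_id).data[0])
```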
Part 4: Differences in Snap Settings

Stop condition of the PipeLoop: a key difference in snap settings lies in the stop condition of the PipeLoop.

- Assistant pipeline's stop condition: $run.required_action == null
- Non-assistant pipeline's stop condition: $finish_reason != "tool_calls"

Assistant's output:
- Example when tool calls are required:
- Example when tool calls are NOT required:

Non-assistant's output:
- Example when tool calls are required:
- Example when tool calls are NOT required:

Part 5: The Assistant’s Two Built-in Tools

The assistant not only supports all functions that can be defined in non-assistant pipelines but also provides two special built-in tools, File Search and Code Interpreter, for user convenience. If the model determines that either of these tools is required, it will automatically call and execute the tool within the assistant without requiring manual user intervention. You don't need a tool calling pipeline to experiment with File Search and Code Interpreter; a simple Create and Run Thread snap is sufficient.

File Search

File Search augments the Assistant with knowledge from outside its model, such as proprietary product information or documents provided by your users. OpenAI automatically parses and chunks your documents, creates and stores the embeddings, and uses both vector and keyword search to retrieve relevant content to answer user queries.

Example prompt: What is the number of federal fires between 2018 and 2022?

The assistant's response is as below:

The assistant's response is correct, as the answer to the prompt is in the first row of a table on the first page of wildfire_stats.pdf, a document accessible to the assistant via a vector store.

Answer to the prompt:

The file is stored in a vector store used by the assistant:

Code Interpreter

Code Interpreter allows Assistants to write and run Python code in a sandboxed execution environment. This tool can process files with diverse data and formatting, and generate files with data and images of graphs. Code Interpreter allows your Assistant to run code iteratively to solve challenging code and math problems. When your Assistant writes code that fails to run, it can iterate on this code by attempting to run different code until the code execution succeeds.

Example prompt: Find the number of federal fires between 2018 and 2022 and use Matplotlib to draw a line chart. (Matplotlib is a Python library for creating plots.)

The assistant's response is as below:

From the response, we can see that the assistant indicated it used File Search to find the five years of data and then generated an image file. This file can be downloaded from the assistant's dashboard under storage-files. Simply add a file extension like .png to see the image.

Image file generated by assistant:
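For reference, the kind of code Code Interpreter writes for this prompt looks roughly like the sketch below. The fire counts here are placeholders only — the real values come from wildfire_stats.pdf via File Search — and the chart is simply saved to a PNG, mirroring the image file the assistant produced.

```python
import matplotlib.pyplot as plt

years = [2018, 2019, 2020, 2021, 2022]
federal_fires = [0, 0, 0, 0, 0]  # placeholder values; substitute the counts retrieved via File Search

plt.plot(years, federal_fires, marker="o")
plt.title("Number of Federal Fires, 2018-2022")
plt.xlabel("Year")
plt.ylabel("Number of federal fires")
plt.xticks(years)
plt.savefig("federal_fires.png")  # comparable to the image file the assistant generated
```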
Part 6: Key Differences Summarized

| Feature | Non-Assistant Tool Calling Pipelines | Assistant Tool Calling Pipelines |
| Function Definition | Defined within the worker pipeline using Function Generator snaps. | Pre-defined and stored within the Assistant configuration in the OpenAI/Azure OpenAI dashboard. |
| Tool Result Submission | Appended to the message history and sent to the model in the next iteration. | Sent to a dedicated endpoint within the current iteration. |
| Model Interaction Points | One (in the worker pipeline). | Two (driver sends initial prompt, worker sends tool results). |
| Built-in Tools | None. | File Search and Code Interpreter. |
| Pipeline Complexity | More complex pipeline structure due to function definition within the pipeline. | Simpler pipeline structure as functions are defined externally. |
LLM response logging for analytics

Why do we need LLM observability?

GenAI applications are great—they answer the way a human does. But how do you know whether GPT isn't being "too creative" with you when a result from the LLM says "Company finances are facing issues due to insufficient sun coverage"? As the scope of GenAI apps broadens, the vulnerability expands, and since LLM outputs are non-deterministic, a setup that once worked isn't guaranteed to always work. Here's an example comparing the reasons why an LLM prompt fails versus why a RAG application fails.

What could go wrong in the configuration?

LLM prompts:
- Suboptimal model parameters: temperature too high / token limit too small
- Uninformative system prompts

RAG:
- Indexing: the data wasn't chunked with the right size, so information is sparse yet the window is small; the wrong distance was used (e.g., Euclidean distance instead of cosine); the dimension was too small or too large.
- Retrieval: Top K too big, so too much irrelevant context is fetched; Top K too small, so there is not enough relevant context to generate a result; filters misused; and everything that can go wrong in LLM prompts.

Although observability does not magically solve all problems, it gives us a good chance to figure out what might have gone wrong. LLM observability provides methodologies to help developers better understand LLM applications, model performance, and biases, and can help resolve issues before they reach the end users.

What are common issues and how does observability help?

Observability helps understanding in many ways, from performance bottlenecks to error detection, security, and debugging. Here's a list of common questions we might ask ourselves and how observability may come in handy.

- How long does it take to generate an answer? Monitoring LLM response times and database query times helps identify potential bottlenecks of the application.
- Is the context retrieved from the vector database relevant? Logging database queries and retrieved results helps identify better-performing queries, and can assist with chunk size configuration based on retrieved results.
- How many tokens are used in a call? Monitoring token usage can help determine the cost of each LLM call.
- How much better/worse is my new configuration setup doing? Parameter monitoring and response logging help compare the performance of different models and model configurations.
- How is the GenAI application performing overall? Tracing the stages of the application and evaluation help identify the performance of the application.
- What are users asking? Logging and analyzing user prompts helps understand user needs and can help evaluate whether optimizations can be introduced to reduce costs. It also helps identify security vulnerabilities by monitoring malicious attempts, enabling proactive responses to mitigate threats.

What should be tracked?

GenAI applications involve components chained together. Depending on the use case, there are events and input/output parameters that we want to capture and analyze. A list of components to consider:

- Vector database metadata:
  - Vector dimension: the vector dimension used in the vector database
  - Distance function: the way two vectors are compared in the vector database
- Vector indexing parameters:
  - Chunk configuration: how a chunk is configured, including the size of the chunk, the unit of chunks, etc. This affects information density in a chunk.
- Vector query parameters:
  - Query: the query used to retrieve context from the vector database
  - Top K: the maximum number of vectors to retrieve from the vector database
- Prompt templates:
  - System prompt: the prompt to be used throughout the application
  - Prompt template: the template used to construct a prompt. Prompts work differently across models and LLM providers.
- LLM request metadata:
  - Prompt: the input sent to the LLM model for each end user, combined with the template
  - Model name: the LLM model used for generation, which affects the capability of the application
  - Tokens: the token limit for a single request
  - Temperature: the parameter controlling the creativity and randomness of the model
  - Top P: the size of the pool of words the model samples from; the smaller the value, the narrower the word selection.
- LLM response metadata:
  - Tokens: the number of tokens used in input and output generation, which affects costs
  - Request details: may include information such as guardrails, the ID of the request, etc.
- Execution metrics:
  - Execution time: time taken to process individual requests

Pipeline examples

Logging a chat completions pipeline: we're using MongoDB to store model parameters and LLM responses as JSON documents for easy processing.

Logging a RAG pipeline: in this case, we're storing the parameters to the RAG system (Agent Retrieve in this case) and the model. We're using JSON Generator snaps to parameterize all input parameters to the RAG system and the LLM models. We then concatenate the response from the vector database, the LLM model, and the parameters we provided for the requests.
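Outside of SnapLogic, the same idea — capture the request parameters, token usage, and latency alongside the response, and store the whole record as a JSON document — can be sketched as below. The database and collection names, and the use of the OpenAI client, are illustrative assumptions.

```python
import time
from openai import OpenAI
from pymongo import MongoClient

llm = OpenAI()                                    # assumes OPENAI_API_KEY is set
mongo = MongoClient("mongodb://localhost:27017")  # placeholder connection string
traces = mongo["llm_observability"]["traces"]     # illustrative database/collection names

def logged_completion(prompt: str, model: str = "gpt-4o", temperature: float = 0.2) -> str:
    start = time.perf_counter()
    response = llm.chat.completions.create(
        model=model,
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    elapsed = time.perf_counter() - start
    answer = response.choices[0].message.content

    # One JSON document per request: parameters, usage, timing, and the response itself.
    traces.insert_one({
        "prompt": prompt,
        "model": model,
        "temperature": temperature,
        "prompt_tokens": response.usage.prompt_tokens,
        "completion_tokens": response.usage.completion_tokens,
        "total_tokens": response.usage.total_tokens,
        "execution_time_s": elapsed,
        "response": answer,
    })
    return answer

print(logged_completion("Summarize last quarter's support tickets in one sentence."))
```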
What is Retrieval-Augmented Generation (RAG)?

Retrieval-Augmented Generation (RAG) is the process of enhancing the reference data used by language models (LLMs) by integrating them with traditional information retrieval systems. This hybrid approach allows LLMs to access and utilize external knowledge bases, databases, and other authoritative sources of information, thereby improving the accuracy, relevance, and currency of the generated responses without requiring extensive retraining. Without RAG, LLMs generate responses based only on the information they were trained on. With RAG, the response generation process is enriched by integrating external information into the generation.

How does Retrieval-Augmented Generation work?

Retrieval-Augmented Generation works by bringing together multiple systems or services to generate the prompt sent to the LLM. This means some setup is required to support the different systems and services that feed the appropriate data into a RAG workflow. It involves several key steps:

1. External Data Source Creation: External data refers to information outside the original training data of the LLM. This data can come from a variety of sources such as APIs, databases, document repositories, and web pages. The data is pre-processed and converted into numerical representations (embeddings) using embedding models, and then stored in a searchable vector database along with a reference to the data that was used to generate the embedding. This forms a knowledge library that can be used to augment a prompt when calling the LLM to generate a response to a given input.

2. Retrieval of Relevant Information: When a user inputs a query, it is embedded into a vector representation and matched against the entries in the vector database. The vector database retrieves the most relevant documents or data based on semantic similarity. For example, a query about company leave policies would retrieve both the general leave policy document and the specific role leave policies.

3. Augmentation of the LLM Prompt: The retrieved information is then integrated into the prompt using prompt engineering techniques. This fully formed prompt is sent to the LLM, providing additional context and relevant data that enables the model to generate more accurate and contextually appropriate responses.

4. Generation of the Response: The LLM processes the augmented prompt and generates a response that is coherent, contextually appropriate, and enriched with accurate, up-to-date information.

The following diagram illustrates the flow of data when using RAG with LLMs, and the sketch below shows the same flow in code.
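The four steps above can be compressed into a small, self-contained sketch. The knowledge library here is just an in-memory list with toy documents, the embedding and chat model names are assumptions, and a real implementation would store the embeddings in a vector database.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def embed(text: str) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(response.data[0].embedding)

# Step 1 — knowledge library (toy documents; normally stored in a vector database).
documents = ["General leave policy: employees receive 25 days of paid leave per year.",
             "Role-specific policy: field engineers receive 5 additional travel rest days."]
library = [(doc, embed(doc)) for doc in documents]

def answer(question: str, k: int = 2) -> str:
    # Step 2 — retrieve the k most similar documents by cosine similarity.
    q = embed(question)
    scored = sorted(library,
                    key=lambda item: float(np.dot(q, item[1]) /
                                           (np.linalg.norm(q) * np.linalg.norm(item[1]))),
                    reverse=True)
    context = "\n".join(doc for doc, _ in scored[:k])

    # Step 3 — augment the prompt with the retrieved context.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"

    # Step 4 — generate the response.
    result = client.chat.completions.create(model="gpt-4o",
                                            messages=[{"role": "user", "content": prompt}])
    return result.choices[0].message.content

print(answer("How many leave days do field engineers get?"))
```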
Why use Retrieval-Augmented Generation?

RAG addresses several inherent challenges of using LLMs by leveraging external data sources:

1. Enhanced Accuracy and Relevance: By accessing up-to-date and authoritative information, RAG ensures that the generated responses are accurate, specific, and relevant to the user's query. This is particularly important for applications requiring precise and current information, such as specific company details, release dates and release items, new features available for a product, individual product details, and so on.

2. Cost-Effective Implementation: RAG enables organizations to enhance the performance of LLMs without the need for expensive and time-consuming fine-tuning or custom model training. By incorporating external knowledge libraries, RAG provides a more efficient way to update and expand the model's knowledge base.

3. Improved User Trust: With RAG, responses can include citations or references to the original sources of information, increasing transparency and trust. Users can verify the source of the information, which enhances the credibility and trustworthiness of an AI system.

4. Greater Developer Control: Developers can easily update and manage the external knowledge sources used by the LLM, allowing for flexible adaptation to changing requirements or specific domain needs. This control includes the ability to restrict sensitive information retrieval and ensure the correctness of generated responses. Doing this in conjunction with an evaluation framework can help roll out newer content more rapidly to downstream consumers.

SnapLogic GenAI App Builder: Building RAG with Ease

SnapLogic GenAI App Builder empowers business users to create large language model (LLM) powered solutions without requiring any coding skills. This tool provides the fastest path to developing generative enterprise applications by leveraging services from industry leaders such as OpenAI, Azure OpenAI, Amazon Bedrock, Anthropic Claude on AWS, and Google Gemini. Users can effortlessly create LLM applications and workflows using this robust platform. With SnapLogic GenAI App Builder, you can construct both an indexing pipeline and a Retrieval-Augmented Generation (RAG) pipeline with minimal effort.

Indexing Pipeline

This pipeline is designed to store the contents of a PDF file into a knowledge library, making the content readily accessible for future use (a rough code analogue appears at the end of this section). Snaps used: File Reader, PDF Parser, Chunker, Amazon Titan Embedder, Mapper, OpenSearch Upsert. After running this pipeline, we are able to view these vectors in OpenSearch.

RAG Pipeline

This pipeline enables the creation of a chatbot capable of answering questions based on the information stored in the knowledge library. Snaps used: HTTP Router, Amazon Titan Embedder, Mapper, OpenSearch Query, Amazon Bedrock Prompt Generator, Anthropic Claude on AWS Messages.

To implement these pipelines, the solution utilizes the Amazon Bedrock Snap Pack and the OpenSearch Snap Pack. However, users have the flexibility to employ other LLM and vector database Snaps to achieve similar functionality.
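As a rough code-level analogue of the indexing pipeline above, the sketch below chunks a text, embeds each chunk, and shapes the records for upsert into a vector store. The chunk size, model name, and record layout are assumptions; in the SnapLogic pipeline, these steps correspond to the Chunker, Embedder, and Upsert snaps.

```python
from openai import OpenAI

client = OpenAI()  # stand-in for the embedder of your choice

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Naive fixed-size character chunking (the Chunker snap offers richer options)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def index_document(doc_id: str, text: str) -> list[dict]:
    records = []
    for n, piece in enumerate(chunk(text)):
        embedding = client.embeddings.create(model="text-embedding-ada-002",
                                             input=piece).data[0].embedding
        records.append({
            "id": f"{doc_id}-{n}",
            "values": embedding,             # the vector itself
            "metadata": {"content": piece},  # keep the original text for retrieval
        })
    return records  # ready to be upserted into OpenSearch, Pinecone, etc.

records = index_document("leave-policy", "General leave policy: employees receive ...")
print(len(records), "chunks ready for upsert")
```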
Embeddings and Vector Databases

What are embeddings?

Embeddings are numerical representations of real-world objects, like text, images, or audio. They are generated by machine learning models as vectors—arrays of numbers—where the distance between vectors can be seen as the degree of similarity between objects. While an embedding model may assign its own meaning to each dimension, there is no guarantee that different embedding models assign the same meaning to a given dimension. For example, the words "cat", "dog", and "apple" might be embedded into the following vectors:

cat -> (1, -1, 2)
dog -> (1.5, -1.5, 1.8)
apple -> (-1, 2, 0)

These vectors are made up for a simpler example; real vectors are much larger (see the Dimension section for details). Visualizing these vectors as points in a 3D space, we can see that "cat" and "dog" are closer, while "apple" is positioned further away.

Figure 1. Vectors as points in a 3D space

By embedding words and contexts into vectors, we enable systems to assess how related two embedded items are to each other via vector comparison.

Dimension of embeddings

The dimension of embeddings refers to the length of the vector representing the object. In the previous example, we embedded each word into a 3-dimensional vector. However, a 3-dimensional embedding inevitably leads to a massive loss of information. In reality, word embeddings typically require hundreds or thousands of dimensions to capture the nuances of language. For example:

- OpenAI's text-embedding-ada-002 model outputs a 1536-dimensional vector
- Google Gemini's text-embedding-004 model outputs a 768-dimensional vector
- Amazon Titan's amazon.titan-embed-text-v2:0 model outputs a default 1024-dimensional vector

Figure 2. Using text-embedding-ada-002 to embed the sentence “I have a calico cat.”

In short, an embedding is a vector that represents a real-world object, and the distance between these vectors indicates the similarity between the objects.

Limitations of embedding models

Embedding models are subject to a crucial limitation: the token limit, where a token can be a word, punctuation mark, or subword part. This constraint defines the maximum amount of text a model can process in a single input. For instance, the Amazon Titan Text Embeddings models can handle up to 8,192 tokens. When input text exceeds the limit, the model typically truncates it, discarding the remaining information. This can lead to a loss of context and diminished embedding quality, as crucial details might be omitted. Several strategies can help mitigate this impact:

- Text summarization or chunking: long texts can be summarized or divided into smaller, manageable chunks before embedding.
- Model selection: different embedding models have varying token limits; choosing a model with a higher limit can accommodate longer inputs.

What is a vector database?

Vector databases are optimized for storing embeddings, enabling fast retrieval and similarity search. By calculating the similarity between the query vector and the other vectors in the database, the system returns the vectors with the highest similarity, indicating the most relevant content. The following diagram illustrates a vector database search: a query vector 'favorite sport' is compared to a set of stored vectors, each representing a text phrase. The nearest neighbor, 'I like football', is returned as the top result.

Figure 3. Vector Query Example
Figure 4. Store Vectors into Database
Figure 5. Retrieve Vectors from Database
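Using the made-up three-dimensional vectors from the cat/dog/apple example above, the "closeness" idea can be checked directly with a few lines of code (a toy illustration only — real embeddings have hundreds or thousands of dimensions):

```python
import numpy as np

vectors = {
    "cat": np.array([1.0, -1.0, 2.0]),
    "dog": np.array([1.5, -1.5, 1.8]),
    "apple": np.array([-1.0, 2.0, 0.0]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["cat"], vectors["dog"]))    # close to 1: very similar
print(cosine_similarity(vectors["cat"], vectors["apple"]))  # negative: dissimilar
```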
When working with vector databases, two key parameters come into play: Top K and the similarity measure (or distance function).

Top K

When querying a vector database, the goal is often to retrieve the most similar items to a given query vector. This is where the Top K concept comes into play. Top K refers to retrieving the top K most similar items based on a similarity metric. For instance, if you're building a product recommendation system, you might want to find the top 10 products similar to the one a user is currently viewing. In this case, K would be 10. The vector database would return the 10 product vectors closest to the query product's vector.

Similarity Measures

To determine the similarity between vectors, various distance metrics are employed, including:

- Cosine similarity: measures the cosine of the angle between two vectors. It is often used for text-based applications as it captures semantic similarity well. A value closer to 1 indicates higher similarity.
- Euclidean distance: calculates the straight-line distance between two points in Euclidean space. It is sensitive to magnitude differences between vectors.
- Manhattan distance: also known as L1 distance, it calculates the sum of the absolute differences between corresponding elements of two vectors. It is less sensitive to outliers than Euclidean distance.

Figure 6. Similarity Measures

There are many other similarity measures not listed here. The choice of distance metric depends on the specific application and the nature of the data. It is recommended to experiment with various similarity metrics to see which one produces better results.

What embedders are supported in SnapLogic

As of October 2024, SnapLogic supports embedders for the major models and continues to expand its support. Supported embedders include:

- Amazon Titan Embedder
- OpenAI Embedder
- Azure OpenAI Embedder
- Google Gemini Embedder

What vector databases are supported in SnapLogic

- Pinecone
- OpenSearch
- MongoDB
- Snowflake
- Postgres
- AlloyDB

Pipeline examples

Embed a text file

1. Read the file using the File Reader snap.
2. Convert the binary input to a document format using the Binary to Document snap, as all embedders require document input.
3. Embed the document using your chosen embedder snap.

Figure 7. Embed a File
Figure 8. Output of the Embedder Snap

Store a Vector

1. Utilize the JSON Generator snap to simulate a document as input, containing the original text to be stored in the vector database.
2. Vectorize the original text using the embedder snap.
3. Employ a Mapper snap to format the structure into the format required by Pinecone: the vector field is named "values", and the original text and other relevant data are placed in the "metadata" field.
4. Store the data in the vector database using the vector database's upsert/insert snap.

Figure 9. Store a Vector into Database
Figure 10. A Vector in the Pinecone Database

Retrieve Vectors

1. Utilize the JSON Generator snap to simulate the text to be queried.
2. Vectorize the original text using the embedder snap.
3. Employ a Mapper snap to format the structure into the format required by Pinecone, naming the query vector "vector".
4. Retrieve the top 1 vector, which is the nearest neighbor.

Figure 11. Retrieve Vectors from a Database

[ { "content" : "favorite sport" } ]

Figure 12. Query Text
Figure 13. All Vectors in the Database

{
  "matches": [
    {
      "id": "db873b4d-81d9-421c-9718-5a2c2bd9e720",
      "score": 0.547461033,
      "values": [],
      "metadata": {
        "content": "I like football."
      }
    }
  ]
}

Figure 14. Pipeline Output: the Closest Neighbor to the Query
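Outside of SnapLogic, the store-and-retrieve example above maps fairly directly onto Pinecone's Python client. The index name, embedding model, and API keys below are illustrative assumptions, and the index is assumed to already exist with the matching dimension.

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                        # assumes OPENAI_API_KEY is set
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")  # placeholder key
index = pc.Index("demo-index")                  # assumed to exist, dimension 1536

def embed(text: str) -> list[float]:
    return openai_client.embeddings.create(model="text-embedding-ada-002",
                                           input=text).data[0].embedding

# Store: equivalent of the embedder + Mapper + Pinecone Upsert snaps.
index.upsert(vectors=[{
    "id": "example-1",
    "values": embed("I like football."),
    "metadata": {"content": "I like football."},
}])

# Retrieve: equivalent of the embedder + Mapper + Pinecone Query snaps (top 1 neighbor).
result = index.query(vector=embed("favorite sport"), top_k=1, include_metadata=True)
print(result["matches"][0]["metadata"]["content"])
```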
Embedders and vector databases are widely used in applications such as Retrieval-Augmented Generation (RAG) and building chat assistants.

Multimodal Embeddings

While the focus thus far has been on text embeddings, the concept extends beyond words and sentences. Multimodal embeddings represent a powerful advancement, enabling the representation of various data types, such as images, audio, and video, within a unified vector space. By projecting different modalities into a shared semantic space, complex relationships and interactions between these data types can be explored. For instance, an image of a cat and the word "cat" might be positioned closely together in a multimodal embedding space, reflecting their semantic similarity. This capability opens up a vast array of possibilities, including image search with text queries, video content understanding, and advanced recommendation systems that consider multiple data modalities.