Introduction
At a high level, the logic behind assistant tool calling and non-assistant tool calling is fundamentally the same: the model instructs the user to call specific function(s) in order to an...
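The generic loop the passage describes can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: the model only *names* a function and supplies arguments, and the calling code is responsible for executing it and feeding the result back. The `get_weather` function, the `TOOLS` registry, and the message shapes are all hypothetical.

```python
import json

# Hypothetical tool the caller exposes to the model. The model never runs
# this itself; it only asks for it by name with JSON-encoded arguments.
def get_weather(city):
    # Stubbed result standing in for a real weather lookup.
    return {"city": city, "forecast": "sunny"}

TOOLS = {"get_weather": get_weather}

def handle_model_turn(model_message):
    """If the model requested a tool call, execute it and return a tool
    message to append to the conversation; otherwise pass the answer through."""
    if "tool_call" in model_message:
        call = model_message["tool_call"]
        result = TOOLS[call["name"]](**json.loads(call["arguments"]))
        return {"role": "tool", "content": json.dumps(result)}
    return {"role": "assistant", "content": model_message["content"]}

# A simulated model reply that requests a function call:
reply = {"tool_call": {"name": "get_weather", "arguments": '{"city": "Tokyo"}'}}
print(handle_model_turn(reply))
```

In a real pipeline the tool message would be appended to the conversation and sent back to the model for a final, natural-language answer; that round trip is the part the "assistant" and "non-assistant" flavors handle differently.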
OpenAI and Azure OpenAI assistants can invoke models and use tools to accomplish tasks. This article focuses primarily on constructing pipelines that leverage the tool-calling capabilit...
Multimodal processing in Generative AI represents a transformative leap in how AI systems extract and synthesize information from multiple data types—such as text, images, audio, and video—simultaneou...
In this article, we introduce the following.
Part 1: Four new classes of Snaps for LLM function calling: Function Generator, Tool Calling, Function Result Generator, and Messag...
We all love the Pipeline Execute Snap: it greatly simplifies a complex pipeline by extracting sections into a sub-pipeline. But sometimes we'd really like the ability to run a pipeline multip...
What is Retrieval-Augmented Generation (RAG)?
Retrieval-Augmented Generation (RAG) is the process of enhancing the reference data available to large language models (LLMs) by integrating them with tradition...
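The core RAG flow, retrieve relevant reference text, then prepend it to the prompt, can be sketched as follows. This is a toy illustration: the `DOCS` corpus is invented, and a simple word-overlap scorer stands in for the embedding-based vector search a production system would use.

```python
# Toy corpus standing in for an indexed knowledge base.
DOCS = [
    "SnapLogic pipelines move data between endpoints.",
    "RAG retrieves reference text and adds it to the prompt.",
    "Embeddings map text to vectors for similarity search.",
]

def retrieve(query, k=1):
    """Score each document by word overlap with the query; return the top k.
    A real retriever would rank by embedding similarity instead."""
    q = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query):
    """Augment the user question with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What does RAG add to the prompt?"))
```

The point of the pattern is that the model answers from the retrieved context rather than from its training data alone, which is what makes the reference data easy to update without retraining.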
Why do we need LLM Observability?
GenAI applications are impressive: they answer the way a human would. But how do you know that GPT isn't being “too creative” when the results from the LLM show “Company...
What are embeddings?
Embeddings are numerical representations of real-world objects, such as text, images, or audio. They are generated by machine learning models as vectors (arrays of numbers), where the...
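The key property of embeddings is that semantically similar objects end up close together in vector space, which is usually measured with cosine similarity. A minimal sketch, using invented 3-dimensional vectors (real models output hundreds or thousands of dimensions):

```python
import math

# Toy 3-dimensional "embeddings"; the values are illustrative only.
EMBEDDINGS = {
    "cat":     [0.90, 0.10, 0.00],
    "kitten":  [0.85, 0.15, 0.05],
    "invoice": [0.00, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means the
    vectors point the same way, near 0.0 means they are unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

sim_close = cosine_similarity(EMBEDDINGS["cat"], EMBEDDINGS["kitten"])
sim_far = cosine_similarity(EMBEDDINGS["cat"], EMBEDDINGS["invoice"])
# Semantically related words score higher than unrelated ones.
assert sim_close > sim_far
```

This closeness property is what vector databases exploit: a query is embedded with the same model, and the nearest stored vectors are returned as the most relevant matches.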
In the rapidly evolving field of Generative AI (GenAI), foundational knowledge can take you far, but it's the mastery of advanced patterns that truly empowers you to build sophisticated, scalable, and...
GenAI is a powerful toolset designed to help you develop and optimize large language models (LLMs) such as OpenAI, Claude, Google Gemini, and more, within your own data pipelines on the SnapLogic plat...