Gartner - 10 Best Practices for Scaling Generative AI
I recently came back from Gartner's Data and Analytics Summit in Orlando, Florida. As expected, GenAI was a big area of focus and interest. One of the sessions I attended was "10 Best Practices for Scaling Generative AI." The session highlighted the rapid adoption of generative AI, with 45% of organizations piloting and 10% already in production as of September 2023. While benefits like workforce productivity, multi-domain applications, and competitive differentiation are evident, there are also significant risks around data loss, hallucinations, black-box behavior, copyright issues, and potential misuse. Through 2025, Gartner predicts at least 30% of generative AI projects will be abandoned after proof of concept due to issues like poor data quality, inadequate risk controls, escalating costs, or unclear business value. To successfully scale generative AI, the session outlined 10 best practices:

1. Continuously prioritize use cases aligned to the organization's AI ambition and measure business value.
2. Create a decision framework for build vs. buy, evaluating model training, security, integration, and pricing.
3. Pilot use cases with an eye toward future scalability needs around data, privacy, security, etc.
4. Design a composable platform architecture to improve flexibility and avoid vendor lock-in.
5. Put responsible AI principles at the forefront across fairness, ethics, privacy, and compliance, and evaluate risk mitigation tools.
6. Invest in data and AI literacy programs across functions and leadership.
7. Instill robust data engineering practices like knowledge graphs and vector embeddings.
8. Enable seamless human-AI collaboration with human-in-the-loop processes and communities of practice.
9. Apply FinOps practices to monitor, audit, and optimize generative AI costs.
10. Adopt an agile, product-centric approach with continuous updates based on user feedback.
The session stressed balancing individual and organizational needs while making responsible AI the cornerstone for scaling generative AI capabilities. Hope you found these useful. What are your thoughts on best practices for scaling GenAI?

GenAI App Builder Getting Started Series: Part 1 - HR Q&A example
👋 Welcome! Hello everyone, and welcome to our technical guide to getting started with GenAI App Builder on SnapLogic! At the time of publishing, GenAI App Builder is available for testing and will be generally available in our February release. For existing customers and partners, you can request access for testing GenAI App Builder by speaking to your Customer Success Manager or another member of your account team. If you're not yet a customer, you can speak to your Sales team about testing GenAI App Builder. 🤔 What is GenAI App Builder? Before we begin, let's take a moment to understand what GenAI App Builder is and talk at a high level about its components. GenAI App Builder is the latest offering in SnapLogic's AI portfolio, focused on helping modern enterprises create applications with Generative AI faster, using a low-/no-code interface. That feels like a mouthful of buzzwords, so let me paint a picture (skip this if you're familiar with GenAI, or watch our video, "Enabling employee and customer self-service"). Imagine yourself as a member of an HR team responsible for recruiting year-round. Every new employee has an enrollment period just before or after their start date, and every existing employee has open enrollment once per year. During this time, employees need to choose between different medical insurance offerings, which usually involves comparing deductibles, networks, out-of-pocket maximums, and other related features and limits. As you're thinking about all of this material, sorting out how to explain it all to your employees, you're interrupted by your Slack or Teams DM noise. Bing bong! Questions start flooding in: Hi, I'm a new employee and I'm wondering, when do I get paid? What happens if payday is on a weekend or holiday? Speaking of holidays, what are the company-recognized holidays this year? Hi, my financial advisor said I should change my insurance plan to one with an HSA.
Can you help me figure out which plan(s) include an HSA and confirm the maximum contribution limits for a family this year? Hi, how does vacation accrual work? When does vacation roll over? Is unused vacation paid out or lost? All these questions and many others are answered in documents the HR team manages, including the employee handbook, insurance comparison charts, disability insurance sheets, life insurance sheets, other data sheets, etc. What if, instead of you having to answer all these questions, you could leverage a human-sounding large language model (LLM) to field them for you, ensuring it references only the source documents you provide so you don't have to worry about hallucinations?! Enter GenAI App Builder! 📝 Building an HR Q&A example Once you have access to test GenAI App Builder, you can use the following steps to start building out an HR Q&A example that will answer questions using only the employee handbook or whichever document you provide. In this guide we will cover the two pipelines used: one that loads data and one that we will use to answer questions. We will not get into Snap customization or Snap details in this guide - it is just meant to show a quick use case. We do assume that you are familiar enough with SnapLogic to create a new Pipeline or import an existing one, search for Snaps, connect Snaps, and a few other simple steps. We will walk you through anything that is new to SnapLogic or that needs some additional context. We also assume you have some familiarity with Generative AI in this guide. We will also make a video with similar content in the near future, so I'll update or reply to this post once that content is available. Prerequisites In order to complete this guide, you will need the items below regardless of whether or not you use the Community-supported chatbot UI from SnapLogic.
Access to a Pinecone instance (sign up for a free account at https://www.pinecone.io) with an existing index
Access to Azure OpenAI or OpenAI
A file to load, such as your company's employee handbook

Loading data

Our first step is to load data into the vector database using a Pipeline similar to the one below, which we will call the "Indexer" Pipeline since it helps populate the Pinecone Index. If you cannot find the pattern in the Pattern Library, you can find it attached below as "Indexer_Feb2024.slp". The steps below assume you have already imported the Pipeline or are building it as we go. To add more color here, loading data into the vector database only needs to be done when the files are updated. In the HR scenario, this might be once a year for open enrollment documents and maybe a few times a year for the employee handbook. We will explore some other use cases in the future where document updates would be much more frequent.

1. Click on the "File Reader" Snap to open its settings
2. Click on the icon at the far right of the "File" field as shown in the screenshot below
3. Click the "Upload" button in the upper-right corner of the window that pops up
4. Select the PDF file from your local system that you want to index (we are using an employee handbook and you're welcome to do the same) to upload it, then make sure it is selected
5. Save and close the "File Reader" Snap once your file is selected
6. Leave the "PDF Parser" Snap with default settings
7. Click on the "Chunker" Snap to open it, then mirror the settings in the screenshot below
8. Now open the "Azure OpenAI Embedder" or "OpenAI Embedder" Snap (you may need to replace the embedder that came in the Pattern or import with the appropriate one for the account you have)
9. Go to the "Account" tab and create a new account for the embedder you're using.
You need to replace the variable {YOUR_ACCOUNT_LABEL} with a label for the account that makes sense for you, then replace {YOUR_ENDPOINT} with the appropriate snippet from your Azure OpenAI endpoint. Validate the account if you can to make sure it works. After you save your new account, you can go back to the main "Settings" tab on the Snap.

10. If the account setup was successful, you should now be able to click the chat bubble icon at the far right of the "Deployment ID" field to suggest a "Deployment ID" - in our environment shown in the screenshot below, you can see we have one named "Jump-emb-ada-002" which I can now select
11. Finally, make sure the "Text to embed" field is set as shown below, then save and close this Snap
12. Now open the "Mapper" Snap so we can map the output of the embedder Snap to the "Pinecone Upsert" Snap as shown in the screenshot below

If it is difficult to see the mappings in the screenshot above, here is a zoomed-in version: For a little more context here, we're mapping the $embedding object coming out of the embedder Snap to the $values object in Pinecone, which is required. If that were all you mapped, though, your Q&A example would always reply with something like "I don't know" because there would be no supporting text. To fix that, we make use of the very flexible "metadata" object in Pinecone by mapping $original.chunk to $metadata.chunk. We also statically set $metadata.source to "Employee Handbook.pdf", which allows the retriever Pipeline to return the source file used in answering a question (in a real-world scenario, you would probably determine the source dynamically/programmatically, such as using the filename, so this pipeline could load other files too).
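Outside of SnapLogic, the same mapping can be sketched in plain Python. This is an illustration only: the record id and the sample values are made up, and the "values"/"metadata" field names are the record shape Pinecone expects as described above.

```python
import uuid

def to_pinecone_record(embedder_output: dict, source: str) -> dict:
    """Mirror the Mapper: $embedding -> values, $original.chunk -> metadata.chunk,
    plus a statically set metadata.source so the retriever can cite its source file."""
    return {
        "id": str(uuid.uuid4()),                 # Pinecone requires a unique id per vector
        "values": embedder_output["embedding"],  # the embedding itself
        "metadata": {
            "chunk": embedder_output["original"]["chunk"],  # the text the vector represents
            "source": source,                               # reported back with answers
        },
    }

# Sample embedder output with a truncated 3-number vector for readability
doc = {"embedding": [0.12, -0.03, 0.98],
       "original": {"chunk": "Payday is the 15th and last day of the month."}}
record = to_pinecone_record(doc, "Employee Handbook.pdf")
```

Without the metadata portion of this record, a query would still match vectors, but there would be no text to hand the LLM, which is exactly why the Mapper above does more than map $embedding to $values.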
13. Save and close the "Mapper" Snap
14. Finally, open the "Pinecone Upsert" Snap, then click the "Account" tab and create a new account with your Pinecone API Key; validate it to make sure it works before saving
15. Back on the main "Settings" tab of the "Pinecone Upsert" Snap, you can now click on the chat bubble icon to suggest existing indexes in Pinecone. For example, in our screenshot below you can see we have four which have been obscured and one named "se-demo." Indexes cannot be created on the fly, so you will have to make sure the index is created in the Pinecone web interface.

The last setting we'll talk about for the Indexer pipeline is the "Namespace" field in the "Pinecone Upsert" Snap. Setting a namespace is optional. Namespaces in Pinecone create a logical separation between vectors within an index and can be created on the fly during Pipeline execution. For example, you could create a namespace like "2024_enrollment" for all documents published in 2024 for open enrollment and another called "2024_employeehandbook" to separate those documents into separate namespaces. Although these can be used just for internal organization, you can also direct a chatbot to use only one namespace to answer questions. We'll talk about this more in the "Answering Questions" section below, which covers the Retriever Pipeline.

16. Save and close the "Pinecone Upsert" Snap

You should now be able to validate the entire Pipeline to see what the data looks like as it flows through the Snaps, and when you're ready to commit the data to Pinecone, you can Execute the Pipeline.

Answering Questions

To answer questions using the data we just loaded into Pinecone, we're going to recreate or import the Retriever Pipeline (attached as "Retriever_Feb2024.slp"). If you import the Pipeline you may need to add additional "Mapper" Snaps as shown below. We will walk through that in the steps below; just know this is what we'll end up with at the end of our first article.
The screenshot above shows what the pattern will look like when you import it. Since this first part of the series will only take us up to the point of testing in SnapLogic, our first few steps will involve some changes with that in mind.

1. Right-click on the "HTTP Router" Snap, then click "Disable Snap"
2. Click the circle between the "HTTP Router" and embedder Snaps to disconnect them
3. Drag the "HTTP Router" Snap somewhere out of the way on the canvas (you can also delete it if you're comfortable replacing it later); your Pipeline should now look like this:
4. In the asset palette on the left, search for the "JSON Generator" (it should appear before you finish typing that all out):
5. Drag a "JSON Generator" onto the canvas, connecting it to the "Azure OpenAI Embedder" or "OpenAI Embedder" Snap
6. Click on the "JSON Generator" to open it, then click on the "Edit JSON" button in the main Settings tab
7. Highlight all the text from the template and delete it so we have a clean slate to work with
8. Paste in this text, replacing "Your question here." with an actual question you want to ask that can be answered from the document you loaded with your Indexer Pipeline. For example, I loaded an employee handbook and I will ask the question, "When do I get paid?"

[ { "prompt" : "Your question here."
} ]

Your "JSON Generator" should now look something like this, but with your question:

9. Click "OK" in the lower-right corner to save the prompt
10. Click on the "Azure OpenAI Embedder" or "OpenAI Embedder" Snap to view its settings
11. Click on the Account tab, then use the drop-down box to select the account you created in the section above ("Loading Data", steps 8-9)
12. Click on the chat bubble icon to suggest "Deployment IDs" and choose the same one you chose in "Loading Data", step 10
13. Set the "Text to embed" field to $prompt as shown in the screenshot below:
14. Save and close the "Azure OpenAI Embedder" or "OpenAI Embedder" Snap
15. Click on the Mapper immediately after the embedder Snap
16. Create a mapping for $embedding that maps to $vector
17. Check the "Pass through" box; this Mapper Snap should now look like this:
18. Save and close this "Mapper"
19. Open the "Pinecone Query" Snap
20. Click the Account tab, then use the drop-down to select the Pinecone account you created in "Loading Data", step 14
21. Use the chat bubble on the right side of the "Index name" field to select your existing Index
22. [OPTIONAL] Use the chat bubble on the right side of the "Namespace" field to select your existing Namespace, if you created one; the "Pinecone Query" Snap should now look like this:
23. Save and close the "Pinecone Query" Snap
24. Click on the "Mapper" Snap after the "Pinecone Query" Snap. In this "Mapper" we need to map the three items listed below, which are also shown in the following screenshot. If you're not familiar with the $original JSON key, it appears when an upstream Snap has implicit pass through or when, like the "Mapper" in step 17, we explicitly enable pass through, allowing us to access the original JSON document that went into the upstream Snap. (NOTE: If you're validating your pipeline along the way or making use of our Dynamic Validation, you may notice that no Target Schema shows up in this Mapper until after you complete steps 27-30.)
25. Map $original.original.prompt to $prompt, map jsonPath($, "$matches[*].metadata.chunk") to jsonPath($, "$context[*].data"), and map jsonPath($, "$matches[*].metadata.source") to jsonPath($, "$context[*].source")
26. Save and close that "Mapper"
27. Click on the "Azure OpenAI Prompt Generator" or "OpenAI Prompt Generator" so we can set our prompt
28. Click on the "Edit prompt" button and make sure your default prompt looks like the screenshot below. On lines 4-6 you can see we are using mustache templating like {{#context}} {{source}} {{/context}}, which corresponds to the jsonPath($, "$context[*].source") mapping from the "Mapper" in step 25 above. We'll talk about this more in future articles - for now, just know this will be a way for you to customize the prompt and the data included in the future.
29. Click "OK" in the lower-right corner
30. Save and close the prompt generator Snap
31. Click on the "Azure OpenAI Chat Completions" or "OpenAI Chat Completions" Snap
32. Click the "Account" tab, then use the drop-down box to select the account you created earlier
33. Click the chat bubble icon to the far right of the "Deployment ID" field to suggest a deployment; this ID may be different than the ones you've chosen in previous "Azure OpenAI" or "OpenAI" Snaps since we're selecting an LLM this time instead of an embedding model
34. Set the "Prompt" field to $prompt; your Snap should look something like this:
35. Save and close the chat completions Snap

Testing our example

Now it's time to validate our pipeline and take a look at the output! Once validated, the Pipeline should look something like this: If you click the preview data output on the last Snap, the chat completions Snap, you should see output that looks like this: The answer to our prompt is under $choices[0].message.content. For the test above, I asked the question "When do I get paid?" against an employee handbook and the answer was this: Employees are paid on a semi-monthly basis (24 pay periods per year), with payday on the 15th and the last day of the month.
If a regular payday falls on a Company-recognized holiday or on a weekend, paychecks will be distributed the preceding business day. The related context is retrieved from the following sources: [Employee Handbook.pdf]

Wrapping up

Stay tuned for further articles in the "GenAI App Builder Getting Started Series" for more use cases, closer looks at individual Snaps and their settings, and even how to connect a chat interface! Most if not all of these articles will also have an associated video, if you learn better that way! If you have issues with the setup or find a missing step or detail, please reply to this thread to let us know!

Embeddings and Vector Databases
What are embeddings? Embeddings are numerical representations of real-world objects, like text, images, or audio. They are generated by machine learning models as vectors (arrays of numbers), where the distance between vectors can be seen as the degree of similarity between objects. While an embedding model may have its own meaning for each of the dimensions, there's no guarantee that different embedding models assign the same meaning to each dimension. For example, the words "cat", "dog" and "apple" might be embedded into the following vectors:

cat -> (1, -1, 2)
dog -> (1.5, -1.5, 1.8)
apple -> (-1, 2, 0)

These vectors are made up for a simpler example; real vectors are much larger (see the Dimension section for details). Visualizing these vectors as points in a 3D space, we can see that "cat" and "dog" are closer together, while "apple" is positioned further away. Figure 1. Vectors as points in a 3D space. By embedding words and contexts into vectors, we enable systems to assess how related two embedded items are via vector comparison. Dimension of embeddings The dimension of an embedding refers to the length of the vector representing the object. In the previous example, we embedded each word into a 3-dimensional vector. However, a 3-dimensional embedding inevitably leads to a massive loss of information. In reality, word embeddings typically require hundreds or thousands of dimensions to capture the nuances of language. For example:

- OpenAI's text-embedding-ada-002 model outputs a 1536-dimensional vector
- Google Gemini's text-embedding-004 model outputs a 768-dimensional vector
- Amazon Titan's amazon.titan-embed-text-v2:0 model outputs a default 1024-dimensional vector

Figure 2. Using text-embedding-ada-002 to embed the sentence "I have a calico cat." In short, an embedding is a vector that represents a real-world object. The distance between these vectors indicates the similarity between the objects.
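To make vector comparison concrete, here is a small Python sketch that computes cosine similarity (one common measure, covered in the Similarity Measures section below) over the made-up cat/dog/apple vectors above:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: values near 1.0 mean same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

cat, dog, apple = (1, -1, 2), (1.5, -1.5, 1.8), (-1, 2, 0)
print(cosine_similarity(cat, dog))    # ~0.97: "cat" and "dog" are close
print(cosine_similarity(cat, apple))  # ~-0.55: "apple" points away
```

The two related words score near 1, while the unrelated one scores below zero, which is exactly the property a similarity search exploits.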
Limitation of embedding models Embedding models are subject to a crucial limitation: the token limit, where a token can be a word, punctuation mark, or subword part. This constraint defines the maximum amount of text a model can process in a single input. For instance, the Amazon Titan Text Embeddings models can handle up to 8,192 tokens. When input text exceeds the limit, the model typically truncates it, discarding the remaining information. This can lead to a loss of context and diminished embedding quality, as crucial details might be omitted. Several strategies can help mitigate this limitation:

- Text Summarization or Chunking: Long texts can be summarized or divided into smaller, manageable chunks before embedding.
- Model Selection: Different embedding models have varying token limits. Choosing a model with a higher limit can accommodate longer inputs.

What is a Vector Database? Vector databases are optimized for storing embeddings, enabling fast retrieval and similarity search. By calculating the similarity between the query vector and the other vectors in the database, the system returns the vectors with the highest similarity, indicating the most relevant content. The following diagram illustrates a vector database search: a query vector 'favorite sport' is compared to a set of stored vectors, each representing a text phrase. The nearest neighbor, 'I like football', is returned as the top result. Figure 3. Vector Query Example Figure 4. Store Vectors into Database Figure 5. Retrieve Vectors from Database When working with vector databases, two key parameters come into play: Top K and the similarity measure (or distance function). Top K When querying a vector database, the goal is often to retrieve the most similar items to a given query vector. This is where the Top K concept comes into play: Top K refers to retrieving the top K most similar items based on a similarity metric.
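A brute-force sketch of Top K retrieval in Python follows. The stored phrases and their 3-number vectors are invented for illustration; a real vector database uses approximate-nearest-neighbor indexes rather than scanning every stored vector like this.

```python
import math

def top_k(query, stored, k):
    """Return the k stored texts whose vectors are most similar to the query (cosine)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
    # Rank every stored item by similarity to the query, highest first
    ranked = sorted(stored.items(), key=lambda item: cos(query, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

stored = {
    "I like football.":        [0.90, 0.10, 0.20],
    "The invoice is overdue.": [-0.20, 0.80, 0.50],
    "Basketball is fun.":      [0.80, 0.20, 0.30],
}
query = [0.88, 0.12, 0.22]  # a pretend embedding of "favorite sport"
print(top_k(query, stored, k=2))  # the two sports sentences rank highest
```

With K = 2, the two sport-related phrases come back and the unrelated invoice phrase is left out, which is the behavior the product-recommendation example below relies on.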
For instance, if you're building a product recommendation system, you might want to find the top 10 products similar to the one a user is currently viewing. In this case, K would be 10, and the vector database would return the 10 product vectors closest to the query product's vector. Similarity Measures To determine the similarity between vectors, various distance metrics are employed, including:

- Cosine Similarity: Measures the cosine of the angle between two vectors. It is often used for text-based applications as it captures semantic similarity well. A value closer to 1 indicates higher similarity.
- Euclidean Distance: Calculates the straight-line distance between two points in Euclidean space. It is sensitive to magnitude differences between vectors.
- Manhattan Distance: Also known as L1 distance, it calculates the sum of the absolute differences between corresponding elements of two vectors. It is less sensitive to outliers than Euclidean distance.

Figure 6. Similarity Measures There are many other similarity measures not listed here. The choice of distance metric depends on the specific application and the nature of the data. It is recommended to experiment with various similarity metrics to see which one produces better results. What embedders are supported in SnapLogic? As of October 2024, SnapLogic supports embedders for major models and continues to expand its support. Supported embedders include:

- Amazon Titan Embedder
- OpenAI Embedder
- Azure OpenAI Embedder
- Google Gemini Embedder

What vector databases are supported in SnapLogic?

- Pinecone
- OpenSearch
- MongoDB
- Snowflake
- Postgres
- AlloyDB

Pipeline examples Embed a text file

1. Read the file using the File Reader snap.
2. Convert the binary input to a document format using the Binary to Document snap, as all embedders require document input.
3. Embed the document using your chosen embedder snap.

Figure 7. Embed a File Figure 8.
Output of the Embedder Snap Store a Vector

1. Utilize the JSON Generator snap to simulate a document as input, containing the original text to be stored in the vector database.
2. Vectorize the original text using the embedder snap.
3. Employ a mapper snap to format the structure into the format required by Pinecone - the vector field is named "values", and the original text and other relevant data are placed in the "metadata" field.
4. Store the data in the vector database using the vector database's upsert/insert snap.

Figure 9. Store a Vector into Database Figure 10. A Vector in the Pinecone Database Retrieve Vectors

1. Utilize the JSON Generator snap to simulate the text to be queried.
2. Vectorize the original text using the embedder snap.
3. Employ a mapper snap to format the structure into the format required by Pinecone, naming the query vector "vector".
4. Retrieve the top 1 vector, which is the nearest neighbor.

Figure 11. Retrieve Vectors from a Database

[ { "content" : "favorite sport" } ]

Figure 12. Query Text Figure 13. All Vectors in the Database

{
  "matches": [
    {
      "id": "db873b4d-81d9-421c-9718-5a2c2bd9e720",
      "score": 0.547461033,
      "values": [],
      "metadata": { "content": "I like football." }
    }
  ]
}

Figure 14. Pipeline Output: the Closest Neighbor to the Query Embedders and vector databases are widely used in applications such as Retrieval Augmented Generation (RAG) and building chat assistants. Multimodal Embeddings While the focus thus far has been on text embeddings, the concept extends beyond words and sentences. Multimodal embeddings represent a powerful advancement, enabling the representation of various data types, such as images, audio, and video, within a unified vector space. By projecting different modalities into a shared semantic space, complex relationships and interactions between these data types can be explored.
For instance, an image of a cat and the word "cat" might be positioned closely together in a multimodal embedding space, reflecting their semantic similarity. This capability opens up a vast array of possibilities, including image search with text queries, video content understanding, and advanced recommendation systems that consider multiple data modalities.

Recipes for Success with SnapLogic's GenAI App Builder: From Integration to Automation
For this episode of the Enterprise Alchemists podcast, Guy and Dominic invited Aaron Kesler and Roger Sramkoski to join them to discuss why SnapLogic's GenAI App Builder is the key to success with AI projects. Aaron is the Senior Product Manager for all things AI at SnapLogic, and Roger is a Senior Technical Product Marketing Manager focused on AI. We kept things concrete, discussing real-world results that early adopters have already been able to deliver by using SnapLogic's integration capabilities to power their new AI-driven experiences.

Data At Scale For AI At Scale: How To Think About Data Readiness
This week's episode of the Enterprise Alchemists is another live recording from Integreat 2024 in London! This week we have Maks Shah of Syngenta; we had a fascinating conversation during the after-event cocktail party, which is why this episode is a bit shorter than normal. Maks's key takeaway was that "there was a common theme throughout all the presentations this afternoon, and that was that your data has to be fit for it". There is no #AI success without the data to feed it. Establishing a solid data foundation is step zero on your journey to #GenAI.

GenAI App Builder Getting Started Series: Part 2 - Purchase Order Processing
👋 Welcome! Hello everyone and welcome to our second guide in the GenAI App Builder Getting Started Series! First things first: GenAI App Builder is now generally available for all customers to purchase or test in SnapLabs. If you are a customer or partner who wants access to SnapLabs, please reach out to your Customer Success Manager and they can grant you access. If you are not yet a customer, you can check out our GenAI App Builder videos, then, when you're ready to take the next step, request a demo with our sales team! 🤔 What is GenAI App Builder? If you're coming here from Part 1, you may notice that GenAI Builder is now GenAI App Builder. Thank you to our customers who shared feedback on how we could improve the name to better align with the purpose. The original name had led to some confusion that its purpose was to train LLMs. 📝 Purchase Order Processing Example In this example we will demonstrate how to use GenAI in a SnapLogic Pipeline to act like a function written in natural language to extract information from a PDF. The slide below shows an example of how we use natural language to extract the required fields in JSON format, which would allow us to make this small pattern part of a larger app or data integration workflow. ✅ Prerequisites You will need the items below to complete this guide: Access to GenAI App Builder (in your company's organization or in SnapLabs) Your own API account with access to Azure OpenAI, OpenAI, or Anthropic Claude on Amazon Bedrock ⬇️ Import the pipeline At the bottom of this post you will find several files if you want to use a pattern to see this in action in your own environment and explore it further. If you are familiar with SnapLogic and want to build the Pipeline on your own, you can do that as well - just download the example PDF, or try your own!
PurchaseOrderExample.pdf
InvoiceProcessing_CommunityArticlePipeline_2024_06_28.slp (zipped)

Once you are signed in to SnapLogic or SnapLabs, you can start with the steps below to import the Pipeline:

1. In Designer, click the icon shown in the screenshot below to import the Pipeline
2. Select the file in the File Browser window that pops up
3. In the Add New Pipeline panel that opens, change the name and project location if desired
4. Press the Save button in the lower-right corner

🔧 Parsing the file

If you imported the pipeline using the steps above, your pipeline should look like the one below. The steps below assume you imported the pipeline. If you are familiar enough with SnapLogic to build this on your own, you can drag the Snaps shown below to create the Pipeline and then follow along with us.

📝 NOTE: The instructions here are completed with the Amazon Bedrock Prompt Generator and the Anthropic Claude on AWS Chat Completions Snaps for the last two Snaps in the Pipeline. You can swap these out for Azure OpenAI or OpenAI Snaps if you prefer to use those LLMs.

1. Click the File Reader Snap to open its settings
2. Click the icon at the far right of the File field as shown in the screenshot below
3. Click the Upload File button in the upper-right corner of the window that pops up
4. Select the PDF file from your file browser (download the file at the bottom of this post if you have not already)
5. Save and close the File Reader Snap once your file is selected
6. No edits are needed for the PDF Parser Snap, so we'll skip over that one
7. Click the Mapper Snap
8. Add $text in the Expression field and $context in the Target path field as shown below
9. Save and close the Mapper Snap
10. Click on the fourth Snap, the Prompt Generator Snap (we will demonstrate here with the Amazon Bedrock Prompt Generator Snap - you do not have to use Amazon Bedrock though; you can use any of the other LLM Prompt Generators we have, like Azure OpenAI, OpenAI, etc.)
11. Click the Edit Prompt button as shown in the screenshot below so we can modify the prompt used for the LLM. You should see a pre-generated prompt like the one below:
12. Copy the prompt below and replace the default prompt:

Instruction: Your task is to pull out the company name, the date created, date shipped, invoice number, P.O. number, vendor from vendor details, recipient name from recipient details, subtotal, 'Shipping & handling', tax rate, sales tax, and total from the context below. Give the results back in JSON.

Context:
{{context}}

The Prompt Generator text should now look like the screenshot below:

13. Click the Ok button in the lower-right corner to save our prompt changes
14. Click on the last Snap, the Chat Completions Snap (we will demonstrate here with the Anthropic Claude on AWS Chat Completions Snap - you do not have to use Anthropic Claude on AWS though; you can use any of the other LLM Chat Completions Snaps we have, like Azure OpenAI, OpenAI, etc.)
15. Click the Account tab
16. Click Add Account; if you have an existing LLM account to use, you can select that here and skip to step 22 below
17. Select the type of account you want, then press Continue - available options will depend on which LLM Chat Completions Snap you chose
18. Enter the required credentials for the LLM account you chose; here is an example of the Amazon Bedrock Account
19. Press the Apply button when done entering the credentials
20. Verify your account is now selected in the Account tab
21. Click on the Settings tab
22. Click on the Suggest icon to the right of the Model name field as shown in the screenshot below and select the model you want to use
23. Type $prompt in the Prompt field as shown in the screenshot below:
24. Expand the Model Parameters section by clicking on it (if you are using OpenAI or Azure OpenAI, you can leave Maximum Tokens blank; for Anthropic Claude on AWS you will need to increase Maximum Tokens from 200 to something higher; you can see where we set 50,000 below)
25. Save and close the Chat Completions Snap

💬
Testing our example

At this point we are ready to test our Pipeline and observe the results! The screenshot below shows where you can click to Validate the Pipeline, which should make every Snap turn green with preview output as shown below. If you have any errors or questions, please reply to share them with us! Here is the JSON output after the Anthropic Claude on AWS Chat Completions Snap (note that other LLMs will have different API output structures):

Extras! Want to play with this further? Try adding a Copy Snap after the Mapper and sending the file to multiple LLMs at once, then review the results. Try changing {{context}} in the Prompt Generator to something else so you can drop the Mapper from the pipeline.

🎉 Wrapping up

Congratulations, you have now completed at least one GenAI App Builder integration in SnapLogic! Stay tuned to the SnapLabs channel here in the Integration Nation for more content on GenAI App Builder in the future! Please share any thoughts, comments, concerns, or feedback in a reply or DM RogerSramkoski!

Unlock the Future of AI: Discover Project SnapChain and Build Your Own RAG Chatbot
To say we've journeyed through a realm of groundbreaking advancements since the release of SnapGPT in August (has it already been 4 months?!) is just scratching the surface. At AWS re:Invent 2023, not only did we showcase SnapGPT, but we also unveiled our revolutionary generative AI capability, Project SnapChain. Our customers have been thrilled with how SnapGPT has transformed their pipeline creation and documentation processes. But the excitement doesn't stop there: they're eager to delve into building their own generative AI applications using their unique data and documents. We're inviting you to a special event this Wednesday, December 6th, at 11 AM ET (8 AM PT) for an exclusive behind-the-scenes look at Project SnapChain in action. In this interactive webinar, we're not just sharing insights; we're guiding you on how to construct a RAG-based chatbot using nothing but Snaps, along with your data and documents. What's more, you'll have the chance to put this knowledge into practice in our SnapLabs environment! Join us to be part of this innovative journey and unlock the power to create. Reserve your spot now and be at the forefront of AI innovation. We can't wait to see you there! Sign up here: https://www.snaplogic.com/resources/webcasts/snaplabs-corner-december-2023

Is GenAI improving your productivity?
SnapLogic recently published a survey on the use of Generative AI in organizations around the world. 67% of respondents said they are already saving 1-5 hours every week through the use of GenAI tools. What about you? Are you using GenAI tools? Are they helping you improve your productivity? Please add a comment!

Gen AI for Integration: Addressing Security and Privacy Concerns With SnapGPT
Have questions about data security and privacy with Generative AI-driven integration? Read this blog by mrai: Gen AI for Integration: Addressing Security and Privacy Concerns With SnapGPT. What thoughts or questions do you have?