SnapGPT Pipeline Refinement - The Fastest Way to Modify and Improve Your Data Pipelines
Transform pipeline management from a time-consuming task to an instant action. SnapGPT's refinement capabilities accelerate your workflow by using single prompts to improve Snap labels and add new functions to existing pipelines. Reduce development time, improve clarity, and directly increase the ROI of your integration projects.

Basics of SnapLogic
Introduction

SnapLogic is a cloud-based integration Platform-as-a-Service (iPaaS) that provides tools for connecting various applications, data sources, and APIs. It enables businesses to automate and streamline their data integration processes by offering pre-built connectors and a visual interface for designing integration workflows.

The SnapLogic platform uses a SnapLogic pipeline, a series of connected "Snaps" (pre-built components) that define the flow and transformation of data between various systems and applications. In a SnapLogic pipeline, data flows from one Snap to another, with each Snap performing a specific function, such as data extraction, transformation, or loading (ETL).

SnapLogic Designer

The SnapLogic Designer is the user interface you use to develop pipelines; an example page is shown below. If the "Asset Palette" feature is enabled, the side panel looks slightly different, but the features are the same as in the Side Panel view. The Designer page consists of three main parts:

Canvas - The area for visualizing and editing the pipeline.
Side Panel / Asset Palette - The panel containing the menu list (the left picture shows the Side Panel view; the right picture shows the Asset Palette enabled):
- Snaps Catalog - lists all available Snaps. https://docs-snaplogic.atlassian.net/wiki/x/ePIV
- Pipelines Catalog - lists all pipelines that you can access. https://docs-snaplogic.atlassian.net/wiki/x/w-IV
- Patterns Catalog - lists all patterns that you can access. https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/3022160260/Patterns+Catalog
Toolbar - The tools available for the pipeline:
- Execute Pipeline - Execute the pipeline.
- Validate Pipeline - Validate the pipeline. Any unsaved changes are saved before validation. Clicking the button while a validation is in progress cancels that validation. Shift-clicking the button clears the cache before validating.
- Edit Pipeline Properties - You specify properties when creating a pipeline. Click this button to modify them.
- Check Pipeline Statistics - As a pipeline executes, the statistics are updated periodically so that you can monitor its progress.
- Create Task - Create a Task for the current pipeline.
- Save Pipeline - Save the current pipeline.
- Export Pipeline - Export the current pipeline.
- Copy Pipeline - Copy the pipeline from one project to another.
- Move Pipeline - Move the pipeline from one project to another.
- Delete Pipeline - Delete the current pipeline.
- Pipeline Versions - Create versions of the pipeline.
- Compare Pipeline - Compare the current pipeline with the target pipeline.
- Notes - Add a note or delete an existing note. Notes are saved with the pipeline.
- Print - Print the pipeline.

Snaps

Snaps are the building blocks of a pipeline. Each Snap performs a single function, such as reading, parsing, transforming, or writing data. You can view the Snaps available to you (or your account) in the Snaps Catalog on the left-hand side of the SnapLogic Designer, and you can drag a Snap from the Snaps Catalog onto the Canvas to use it in a pipeline.

Snap Types

SnapLogic includes the following basic types of Snaps, each with a distinct icon:

Read - Specifies data sources in the pipeline. Examples: File Reader, CSV Generator, Birst Query.
Parse - Takes an input of unstructured data and generates an output of structured data. Examples: XML Parser, Sequence Parser, JSON Parser.
Transform - Modifies data significantly. Examples: Mapper, Aggregate, Join.
Flow - Changes the output or direction of data in a pipeline. Examples: Router, Gate, Union.
Format - Changes the data format. Examples: CSV Formatter, JSON Formatter, Excel Formatter.
Write - Specifies data destinations in a pipeline. Examples: File Writer, REST Post, Email Delete.

Connecting Snaps

The key to creating a pipeline in SnapLogic is connecting Snaps. There are a few things to consider when placing Snaps in a pipeline.

Connection Shapes
Like puzzle pieces, only Snaps with matching connection shapes (circles or diamonds) can be connected, output to input. When you drag a Snap and place it next to or in front of another Snap, the two Snaps connect automatically and the connection changes color to indicate that it connected successfully. If the color does not change, check that both connection shapes are the same and connect them again.

Disconnect Linked Snaps
Unlinked Snaps can be moved apart or placed next to each other. A Blue circle or diamond connector indicates that the Snaps are linked. To disconnect linked Snaps, click the Blue connector; this clears the color and allows you to rearrange the Snaps.

Remote-Connect Link Snaps
You can connect two Snaps that are not next to each other using a remote-connect link. For example, click and hold the Mapper Snap connector until it turns Yellow, then drag it to the Copy Snap connector. When both connections turn Blue, release the mouse button. A number is placed in both connectors to let you know they are connected. Note: the number is only temporary; when the pipeline is saved, a new, permanent number may be assigned. You can also click and hold one connection, and both Snaps connected by that link will darken. This is helpful in large pipelines, where it can be hard to see the connections at a glance.

Data Model

SnapLogic passes data between Snaps using two models:

Document data - Represented by a circle shape. This data type uses JSON as the container for the data. The supported data types match the JSON standard: string, boolean, number, array, object, and null.
Binary data - Represented by a diamond shape. This data type wraps binary data in SnapLogic's model. It is typically the input to File Writer and Parser Snaps and the output of File Reader and Formatter Snaps.

Configuring Snaps

You have two options to open a Snap's configuration dialog. First, left-click the Snap you want to configure; the dialog opens immediately. Alternatively, right-click the Snap to display a dropdown menu of options available in all Snaps, then click "Edit." Each Snap has its own configuration settings. You can learn more about a Snap's configuration by clicking the question mark icon at the top right of the dialog.

Expression

The SnapLogic expression language is a utility available to Snaps. You can use expressions (JavaScript syntax) to access functions and properties to set field values dynamically, and to manipulate data. Example:

$text == "NFL" ? "foo" : "bar"
$counter > 1 ? ($counter < 3 ? 50 : 100) : -1

Expressions are available across multiple Snaps.
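A few more illustrative expressions of the kind you might write against document fields (a sketch only; the field names here are hypothetical, not tied to any particular Snap):

$firstName + " " + $lastName              // concatenate two string fields
$email.toLowerCase()                      // call a string method on a field value
$items.length > 0 ? "has items" : "empty" // branch on the size of an array field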
If a Snap exposes expression support for a property, the expression icon appears in front of that property's text box. You can toggle expression support on or off by clicking the icon. When the toggle is on, a down arrow appears within the field; click it to see the list of available functions and properties.

Operations
A list of supported and unsupported operations is available in the documentation: https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/1438042/Understand+Expressions+in+the+SnapLogic+Platform

Accessing Pipeline Parameters

Parameters allow a pipeline to be reused in multiple situations. For example, a File Writer Snap can be configured to write to a file path specified by a parameter, which allows the same pipeline to write to different files. The parameters for a pipeline are defined in the Edit Pipeline Properties dialog. The name of each parameter must contain only alpha-numeric characters, and the value is converted to a string. The value of a parameter defined in the Pipeline Properties dialog is treated as the default when running the pipeline in Designer. Parameters can also be passed to the Pipeline Execute Snap. Any parameters not passed down from the Task or Snap use the defaults specified in the Properties dialog.

To access a pipeline parameter from the expression language, prefix the parameter name with an underscore. For example, given the following parameters:

firstName: Bob
numValue: 12
path: $.age

The "firstName" parameter can then be accessed using _firstName, as in:

"Hello, " + _firstName // result: Hello, Bob

Since the value of a parameter is always a string, you need to convert it to a numeric value before operating on it. For example, simply adding two to the "numValue" parameter appends the character "2" to "12" and yields "122":

_numValue + 2 // result: "122"

Instead, use the parseInt/parseFloat functions to parse the string into a number and then add two to it:

parseInt(_numValue) + 2 // result: 14

If you need to parameterize your pipeline with an expression, you can use the eval() function to evaluate an expression stored in a string. For example, to read the document field specified by the "path" parameter:

eval(_path) // result: <the value of the "age" field in the current document>

Accessing Input View Variables as Part of Expressions

An input view schema attribute can be used as part of an expression with the dollar sign ($) prefix. For example, the REST Put Snap has a URL property. The URL can be toggled into an expression and built dynamically by substituting variables from the input view, such as:

'http://someplace:someport/somepart/' + $inputvar + '/somemoreparts'
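Pipeline parameters and input-view variables can also be combined in a single expression-enabled field. As a small sketch building on the REST Put example above (where _baseUrl is a hypothetical pipeline parameter holding the host and port):

_baseUrl + '/somepart/' + $inputvar + '/somemoreparts'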
Accessing Secret Values from the Secrets Manager

Any expression-enabled authentication field in a Snap or Account can be used with Secrets Management. You can enter an expression that retrieves a secret stored in your secrets manager, such as an access token, a username, or a password. To use values from the secrets manager, first create the secrets (for example, myaccesskey and mysecretkey) in your secrets manager vault. Then create or modify the Account and enter an expression in the required fields. Learn more: Configure Accounts to use secrets.

Account

An account is an object that encompasses the details needed to connect to an endpoint, and accounts play a crucial role in integrating applications. Any Snap that communicates with an external endpoint needs an authenticated account to access the resources on the endpoint. For example, a MySQL Snap requires authenticated access to a MySQL database. In SnapLogic, you create an Account to store credentials and any other information necessary to connect, such as a URL, hostname, and port number.

You can create an account from Designer or Manager. In Designer, when working on pipelines, every Snap that needs an account prompts you to create a new account or use an existing one. To use an existing account, click the dropdown icon to show all the accounts available for the Snap. To create a new account, click the "Add Account" button below the property field and follow the steps. The account is created in the location you select in the first step, and you can manage it on the Manager page in that location. Note: You can learn more about each account type and its properties by clicking the question mark icon in the top right corner.

Validate & Execute Pipeline

Sometimes we want to test a pipeline by dry-running it without running the write Snaps. You can use the Validate function on the toolbar menu. The difference between validate and execute is determined per Snap: before each Snap runs, the platform checks the Snap's "Snap execution" property. There are three options for how a Snap is triggered:

Validate & Execute - the Snap runs during both validation and execution.
Execute only - the Snap runs only during execution. Write-type Snaps use this as the default value.
Disabled - the Snap does not run.

Note: By default, validation is triggered every time you change the configuration of a Snap in the pipeline.

Preview Data

After the pipeline has been executed or validated, a preview icon appears at each connection joint. Click it to open the preview dialog, which shows the Snap's output data. For example, clicking the preview icon in the pipeline above shows the output data from the JSON Generator Snap. The preview dialog offers three views: JSON, Table, and Raw. Select the one you want from the Preview Type dropdown.

Create Your First Pipeline

This section shows how to create a pipeline, from the requirement through checking the result and running the final pipeline. In this example scenario, we want to process an employee list to determine who needs to be assigned marketing training. The list of employees looks like the data below:

[
  { "Name": "Albert Maro", "Location": "Field", "Extension": 4357, "Email": "amaro@company.com", "Title": "Director, Eastern US", "Department": "Sales", "Dept ID": 1100 },
  { "Name": "Anthony Dunn", "Location": "HQ", "Extension": 4387, "Email": "adunn@company.com", "Title": "Social Media Director", "Department": "Marketing", "Dept ID": 1200 },
  { "Name": "Rich Harris", "Location": "CO", "Extension": 4368, "Email": "rharris@company.com", "Title": "Principal Developer", "Department": "Engineering", "Dept ID": 1300 }
  // more data
]

An employee needs training if they are in the Marketing department and work at "HQ." We want a list of employees with Firstname, Lastname, Email, Title, and Training fields. The result should look like the output below.
[
  { "Firstname": "Albert", "Lastname": "Maro", "Email": "amaro@company.com", "Title": "Director, Eastern US", "Training": false },
  { "Firstname": "Anthony", "Lastname": "Dunn", "Email": "adunn@company.com", "Title": "Social Media Director", "Training": true },
  { "Firstname": "Rich", "Lastname": "Harris", "Email": "rharris@company.com", "Title": "Principal Developer", "Training": false }
  // more data
]

Steps

1. Open the Designer page.
2. Click to create a new pipeline.
3. Change the label to "Employees training" and click Save.
4. You now have a new, empty pipeline. Find the "JSON Generator" Snap in the side panel and drag it onto the canvas. This Snap generates a JSON document for the next Snap in the pipeline; we will use it as the input source.
5. Click the JSON Generator Snap to open the configuration dialog and click "Edit JSON." Then replace all of the JSON with the value below:

[
  { "Name": "Albert Maro", "Location": "Field", "Extension": 4357, "Email": "amaro@company.com", "Title": "Director, Eastern US", "Department": "Sales", "Dept ID": 1100 },
  { "Name": "Anthony Dunn", "Location": "HQ", "Extension": 4387, "Email": "adunn@company.com", "Title": "Social Media Director", "Department": "Marketing", "Dept ID": 1200 },
  { "Name": "Rich Harris", "Location": "CO", "Extension": 4368, "Email": "rharris@company.com", "Title": "Principal Developer", "Department": "Engineering", "Dept ID": 1300 }
  // more data
]

Click "OK" and the save button before closing the dialog.
6. Wait for the validation to finish. If validation does not run automatically, click the Validate button to validate the pipeline manually.
7. Find the "Mapper" Snap and drag it onto the canvas after the JSON Generator. The Mapper Snap transforms incoming data using the specified mappings and produces new output data.
8. Click the Mapper Snap to open the configuration dialog. We will focus on the five areas at the bottom of the dialog:

Input Schema - shows the schema of the input data.
Mapping table - the configuration that maps input data to new output data.
Target Schema - shows the schema of the output data. Because this Snap has not been validated yet, it shows nothing.
Input Preview - shows the current input data.
Output Preview - shows the current output data.

Next, set the mapping table with the information below (each expression is worked through against the sample data just after these steps). To add more mappings, click the + icon in the top right corner of the mapping table.

Expression: $Name.split(' ')[0] -> Target path: $Firstname
Expression: $Name.split(' ')[1] -> Target path: $Lastname
Expression: $Email -> Target path: $Email
Expression: $Title -> Target path: $Title
Expression: $Location == "HQ" && $Department == "Marketing" -> Target path: $Training

The finished configuration will look like this. Click Save and close the dialog.
9. Click the preview button after the Mapper Snap. The output should look like this.
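To make the mappings concrete, here is how the expressions from step 8 evaluate against the sample records (results shown as comments; these follow directly from the input data above):

$Name.split(' ')[0]  // "Albert" for the first record, "Anthony" for the second, "Rich" for the third
$Name.split(' ')[1]  // "Maro", then "Dunn", then "Harris"
$Location == "HQ" && $Department == "Marketing"  // false for Albert Maro ("Field"/"Sales"), true for Anthony Dunn ("HQ"/"Marketing"), false for Rich Harris ("CO"/"Engineering")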
SnapGPT

SnapGPT is an interactive tool inside SnapLogic Designer. It uses the power of LLMs to democratize integration by helping users create and manage integrations with natural language prompts. SnapGPT provides six main functions in SnapLogic:

Generate pipelines
Describe pipelines
Analyze pipelines
Ask anything about the SnapLogic Intelligent Integration Platform (IIP)
Generate SnapLogic expressions
Create SQL queries

Using SnapGPT

Open the SnapGPT panel by clicking the SnapGPT logo in the header bar; the panel is displayed with a welcome message. The sections below show how to use each feature of SnapGPT on the SnapLogic platform.

Generate pipelines
Prompt SnapGPT directly. Example prompts:
Extract opportunity object records from Salesforce and add them to Snowflake
Create a Pipeline using Salesforce Read to fetch my Opportunities, Filter out any opportunities outside of the last fiscal quarter, then write them to Snowflake.
Extract opportunity object records from Salesforce closed before "2022-10-01" and add them to Snowflake.
Create a pipeline that fetches my SnapLogic Activity Logs from the SnapLogic API.

Describe pipelines
Open the pipeline you want to describe, then go to the SnapGPT panel and ask, "Describe the pipeline." Example prompt:
Describe the pipeline

Analyze pipelines
Open the pipeline you want to analyze, then go to the SnapGPT panel and ask, "Analyze the pipeline." Example prompt:
Analyze the pipeline
Result: SnapGPT identifies issues with the pipeline and offers suggestions for improving both the pipeline and the Snaps it contains.

Ask anything about the SnapLogic Intelligent Integration Platform (IIP)
Example prompts:
How do I build a pipeline?
When and how should I use the Salesforce SOQL snap?
How can one pipeline call another pipeline? Can pipelines use recursion?
How is an Ultra pipeline different from a regular pipeline?

Generate SnapLogic expressions
To begin, open a Snap and select the expression icon for a field. This activates the expression generation feature. SnapGPT can create expressions for you either in the chat or inside the expression-enabled field itself: type the prompt and then click the SnapGPT icon. Example prompts:
Generate an expression to filter my closed lost opportunities.
Generate an expression to grab the current date and time.
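As a rough illustration only - not guaranteed SnapGPT output - the two expression prompts above might yield something along these lines (assuming standard Salesforce Opportunity fields):

$StageName == 'Closed Lost'  // filter my closed lost opportunities
Date.now()                   // the current date and time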
Create SQL queries
Open a Snap that supports SQL or SOQL queries and open SnapGPT. For example, if you open the Salesforce SOQL Snap, the suggestion "Create SQL query" appears above the SnapGPT prompt. SnapGPT generates the query and displays it in the SQL Preview panel, so you can review the generated SQL before applying it to the Snap. Example prompt:
Generate a SQL query to get the total amount of opportunities closed within the last quarter grouped by the account's country and deal status.

Access to SnapLabs

In order to provide the best experience for members of our SnapLabs program, we add a small group of users every week. This allows our team to provide dedicated time to support you through the program. Those interested in participating in SnapLabs can sign up for the waitlist at https://www.snaplogic.com/snaplabs. Please don't hesitate to post questions and examples here in the SnapLabs corner of the SnapLogic Community.

Unlock the Future of AI: Discover Project SnapChain and Build Your Own RAG Chatbot
To say we've journeyed through a realm of groundbreaking advancements since the release of SnapGPT in August (has it already been 4 months?!) is just scratching the surface. At AWS re:Invent 2023, we not only showcased SnapGPT but also unveiled our revolutionary generative AI capability - Project SnapChain. Our customers have been thrilled with how SnapGPT has transformed their pipeline creation and documentation processes. But the excitement doesn't stop there - they're eager to delve into building their own generative AI applications using their unique data and documents.

We're inviting you to a special event this Wednesday, December 6th, at 11 AM ET (8 AM PT) for an exclusive behind-the-scenes look at Project SnapChain in action. In this interactive webinar, we're not just sharing insights; we're guiding you on how to construct a RAG-based chatbot using nothing but Snaps, along with your data and documents. What's more, you'll have the chance to put this knowledge into practice in our SnapLabs environment! Join us to be part of this innovative journey and unlock the power to create. Reserve your spot now and be at the forefront of AI innovation. We can't wait to see you there! Sign up here: https://www.snaplogic.com/resources/webcasts/snaplabs-corner-december-2023

Is GenAI improving your productivity?
SnapLogic recently published a survey on the use of Generative AI in organizations around the world. 67% of respondents said they are already saving 1-5 hours every week through the use of GenAI tools. What about you? Are you using GenAI tools? Are they helping you improve your productivity? Please add a comment!

Gen AI for Integration: Addressing Security and Privacy Concerns With SnapGPT
Have questions about data security and privacy with Generative AI-driven integration? Read this blog by mrai: Gen AI for Integration: Addressing Security and Privacy Concerns With SnapGPT. What thoughts or questions do you have?

SnapGPT Is Now Generally Available
SnapGPT, now part of the SnapLogic Intelligent Integration Platform, is currently available free of charge to new and existing customers. Contact your CSM for enablement. For a deeper understanding of how it works, we invite you to:

Visit our website.
Read the SnapGPT press release, "SnapLogic First to Market with the World's Only Generative Integration Solution."
Read the blog "The Dawn of Generative Integration: SnapGPT Is Now Generally Available."
Request a SnapGPT demo and see how generative integration can make a difference in your workflows and your business.

SnapGPT Beginner's Guide
What is SnapGPT?

SnapGPT is a generative AI solution in early release and currently available only to users who have been invited to SnapLabs. Built right into the SnapLogic web interface (screenshot below), SnapGPT can be prompted for a wide variety of help: creating Pipelines, configuring Snaps, suggesting which Snap to use, and much more.

How can I get started with SnapGPT?

In this section we cover a few examples that should be repeatable as a way to send your first few prompts to SnapGPT and observe the outcome. After that you can explore our SnapGPT Prompt Catalog, which contains even more prompts to copy/paste into SnapGPT as you explore. One caveat: as a generative AI solution that is always learning, it is possible that outcomes will change over time. When SnapGPT creates a Pipeline for you, it is a bit like importing a Pipeline, in the sense that a wizard helps you select accounts and finalize the Pipeline.

1. Log in at https://snapgpt.labs.snaplogic.com
2. If SnapGPT is not shown by default, press the SnapGPT button in the upper-right corner of the SnapLogic web interface to make it visible. To make it always visible, click your name in the upper-right corner > User Settings > Opt-in Features > check the box for "Open SnapGPT by Default."
3. A new box appears on the right-hand side of the SnapLogic web interface where you can start typing to SnapGPT.

Examples: See SnapGPT in Action

Now let's talk about getting your feet wet, hands dirty, or whatever saying floats your boat. Below are several examples you can use to start exploring SnapGPT; they should be precise enough to yield consistent results.

Example 1: Create a pipeline that pulls Salesforce Opportunities

Our first example generates a short but complete Pipeline. With any generative AI, SnapGPT included, it is important to remember that the more specific you are with the prompt, the more accurate a response you will receive - or, in this example, the more accurate a Pipeline we will receive.

Prompt: "Create a Pipeline using Salesforce Read to fetch my Opportunities, Filter out any opportunities outside of the last fiscal quarter, then write them to Snowflake."

Here is a screenshot of the short Pipeline created by SnapGPT that closely resembles the prompt we provided. Inside the Filter Snap we can see that SnapGPT created an expression for us to filter on the $CloseDate field.
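The exact expression SnapGPT generates varies from run to run, but a filter on $CloseDate for the last fiscal quarter typically looks something like this sketch (the date boundaries are placeholders, not values SnapGPT is guaranteed to produce):

Date.parse($CloseDate) >= Date.parse("2023-07-01") && Date.parse($CloseDate) < Date.parse("2023-10-01")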
Example 2: Ask for help identifying which Snap to use

At some point we were all new to using SnapLogic, and we learned it from CSM-led training, trial and error, reviewing existing pipelines, and so on. What we did not have was an always-on AI assistant ready to answer our questions (we still love you Iris and wouldn't be here without you!). This example shows how SnapGPT can be prompted with natural language to tell us exactly which Snap we need.

Prompts:
"What snap can I use to remove records from my pipeline based on a given condition?"
"Which snap acts like a case statement or switch to allow me to move records down different pathways based on a condition?"

Example 3: Ask for help to learn when to use one Snap over a different Snap

Another example of using SnapGPT for educational purposes or documentation skimming is to ask it when you might want to use one Snap instead of another.

Prompt: "When would I need to use the Salesforce SOQL snap instead of the Salesforce Read snap?"

Example 4: Generate sample data

We can also use SnapGPT to generate sample data, for those times when we need to get started on a business process and show some results but don't yet have access to the source system.

Prompt: "Create a single-snap pipeline with a JSON Generator that has 10 example Salesforce Lead records"

Example 5: Fetch exchange data from a third-party API

It is also possible to use SnapGPT to pull data from a third-party site, such as exchange rate data.

Prompt: "Fetch exchange rate data from the European Central Bank and save it to a JSON file"

What should I be aware of when using SnapGPT?

As with any early access release of software, especially generative AI that is always learning, there are some key points to keep in mind as you explore SnapGPT and share feedback with the SnapLogic team (including any previously mentioned and/or typical disclaimers about using ChatGPT or SnapGPT):

SnapGPT may generate Pipelines with unnecessary Snaps (like kids overpacking to visit grandma's house!).
SnapGPT depends on ChatGPT availability, so there are times when you might see a response like this:

What if I have questions?

Our goal is to provide several ways to interact with our team, which we've broken out below.

Community: Use the SnapLogic Community's locked SnapLabs Category, which is the same category you should be reading this content from (please do not post on the public forums yet since this is a limited release at this time).
Office hours: Roger Sramkoski, one of our Sr. Technical Marketing Managers, will be setting up office hours once or twice a week. These will be purely optional, with minimal agendas, so we can focus on open conversation.
Email: You can also contact Roger Sramkoski directly at rsramkoski@snaplogic.com

Prompt Deep Dive: Exporting audit logs
Overview

Many SnapLogic customers are required by various industry regulations to retain audit logs for long periods of time. If you are a SnapLogic org administrator, you have either already built a pipeline to export your SnapLogic Activity Logs or are looking to build one. In one of our recent Office Hours sessions, a customer asked if SnapGPT could help create a pipeline to address this, so we are going to take a few minutes to go through this example in a way that produces a valid pipeline.

Walkthrough

Here is our SnapGPT prompt (screenshot below): "Create a pipeline that fetches my SnapLogic Activity Log and writes it to S3"

SnapGPT comes back with a pipeline preview, which looks like a good starting point. After pressing "Import on new tab" we're able to start the pipeline import process, which includes a chance to rename the pipeline and choose where you want to save it.

Now we'll open the REST Get snap to add our authentication and verify the URL. NOTE: If your REST Get snap does not include the URL, you can ask SnapGPT for the URL or copy it from here: https://{pod_path}/api/1/rest/public/activities/{org} . The placeholder {pod_path} is the beginning of the URL in your address bar for SnapLogic: snapgpt.labs.snaplogic.com for SnapGPT in SnapLabs, or elastic.snaplogic.com for other environments. You may need to use the Elastic pod and a different org than SnapLabs if you want to validate and/or run this pipeline. I have used 'elastic.snaplogic.com' as my pod_path and 'ConnectFasterInc' for my {org}, as seen in the screenshot below. If you do intend to run this from SnapLabs, you will also want to check the "Trust all certificates" box. I've also set a query parameter 'limit' to a value of '500'.

SnapGPT may add some additional expressions in the Mapper, so what you see below is a minimal change we can make to load raw entries and drop header and status information from the audit log file.

Wrap up

Your final step here is to configure the S3 File Writer; or, if you need to send the audit log to a different location, you can reconfigure the Mapper and send the files wherever they need to go.

Video coming soon! Sometimes a video is worth a bunch of words and screenshots, so once we finalize the video we'll post it here!