Re: HTTP Client response formatted different from REST Get

Yes, the response type is set to JSON. I also tried using the Extract Entity as Document option, but it just returned the string by itself:

Re: HTTP Client response formatted different from REST Get

In both the HTTP Client and the REST Get snap, I have my headers set like this, based on the source system documentation:

HTTP Client response formatted different from REST Get

I'm trying to replace the REST Get snaps in my pipelines with the HTTP Client snap, but the response I'm getting is formatted differently. With the REST Get snap, the JSON response is formatted as a list, so I can subset it using $entity.data notation:

However, when I use the HTTP Client snap, the 'entity' element is returned as a string:

I tried to configure both snaps as close to identically as possible, but I'm assuming there's something different about the HTTP Client snap that I need to change. Can someone explain what I need to do to get the HTTP Client snap to return the JSON in the same format as the REST Get snap?

API without parameters returns empty JSON

I have a triggered task that pulls data from a database and formats the JSON to be returned by an HTTP request. I added a parameter called 'dataset_name' so users making an API call can select just one record from the database: I pass the value '_dataset_name' to a Filter snap, and the API returns the record. This works as expected: when a user queries the endpoint and specifies the dataset_name parameter, they get a JSON with one record.

    https://elastic.snaplogic.com/api/1/rest/slsched/feed/.../my_pipeline%20Task?dataset_name=<my_dataset>

However, when a user doesn't specify a parameter, I'd like the API to return all the records in the table. Right now, if you don't specify a "dataset_name" parameter, the HTTP request returns an empty JSON. I'm assuming the issue is with how I've implemented the filtering. Can someone explain what I did wrong?

Re: Backing up two database tables to an S3 bucket as one archive [Solved]

That's great. I didn't realize you could write to S3 with the ZipFile Write snap. As a bonus, is it possible to make the file name today's date? I tried:

    's3:///my-data-warehouse-backup@s3.us-west-1.amazonaws.com/' + Date.now().toString().replaceAll(":","_") + '.gz'

but that didn't work; I got an 'Unsupported protocol or URL syntax' error.

Backing up two database tables to an S3 bucket as one archive

I'm trying to develop a pipeline that queries two (or more) database tables, converts them to CSV, compresses them into a single archive, and writes the archive to an S3 bucket. I have a working pipeline for a single table:

But what I'd like to do is have multiple select queries that pull in different tables and compress them into a single archive. Is this possible? I looked at "ZipFile Write" but there didn't seem to be a way to write to S3.

Conditional delete from a database using a value from the pipeline [Solved]

I'm trying to delete rows from my database when their IDs aren't in the response document returned from an API query. Currently, my pipeline looks like this:

If there is an ID value in the SQL Server table that is not in the REST response document, I want to delete that row from the table. Here's my current non-working SQL query:

But this doesn't work; it doesn't delete anything from the target table. How can I delete rows from the table when they aren't present in the API response?
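A pattern that might work for this conditional delete (a sketch, not a tested pipeline): left-join the SQL Server rows against the REST response on the ID field, keep only the rows that found no match, and feed those to a SQL Server - Delete snap. With the API's ID carried in under a hypothetical field name $api_id, the Filter expression would be:

    $api_id == null

Rows where $api_id is null have no counterpart in the API response, so the Delete snap's condition (something like id = $id) removes exactly those rows.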
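On the HTTP Client thread above: if the entity keeps arriving as a string even with the response type set to JSON, one workaround (a sketch, assuming the string is valid JSON) is a Mapper that parses it explicitly:

    JSON.parse($entity)

mapped back to a target path of $entity, after which $entity.data should subset the same way it does downstream of REST Get. It is also worth confirming that the server actually returns a Content-Type of application/json, since clients often fall back to plain text when it doesn't.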
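On the empty-JSON question: an unspecified query parameter leaves _dataset_name at its pipeline-parameter default, so a Filter expression like $dataset_name == _dataset_name matches no records. A guard that passes everything when the parameter is blank might look like this (a sketch, assuming the parameter's default value is an empty string):

    _dataset_name == "" || $dataset_name == _dataset_name

If the default is something other than an empty string, the first clause should test for that value instead.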
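On the dated S3 file name: Date.now().toString() can contain spaces and other characters that trip the URL parser even after the colons are replaced. Formatting the date explicitly may help; for example (a sketch; the toLocaleDateTimeString format option appears in SnapLogic's date documentation, but the exact output is worth verifying):

    's3:///my-data-warehouse-backup@s3.us-west-1.amazonaws.com/' + Date.now().toLocaleDateTimeString('{"format":"yyyy-MM-dd"}') + '.gz'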
Reshape document to denormalized table

I have a document like this:

    [
      { "first": ["1", "2"], "second": "a" },
      { "first": ["3", "4"], "second": "b" }
    ]

And I'm trying to convert it to a table like this:

    First  Second
    1      a
    2      a
    3      b
    4      b

I'm thinking I need to use the "map" or "mapValue" function, but I can't figure out how to do it. Any ideas?

Add column to document and repeat value in each row [Solved]

I have one document structured like this:

    FirstName  LastName  Title
    Ann        Smith     Ms
    June       Jones     Mrs
    Sam        Johnson   Mrs

And a second document like this:

    Label    Value
    MyEvent  8/21/23

I'm trying to merge them together to get a final product like this:

    FirstName  LastName  Title  Value
    Ann        Smith     Ms     8/21/23
    June       Jones     Mrs    8/21/23
    Sam        Johnson   Mrs    8/21/23

I'm trying to append the Value column and repeat the value for every row in the first table. I've tried Union and Join but without success. What would be the correct snap for this use case?

How to ignore duplicate rows when inserting record into table [Solved]

I'm developing a pipeline to request data from an API and load it into a MS SQL Server table. The first run of the pipeline loaded 50 records to the table, each with a unique 'user_name' value. The table constraints require the 'user_name' field to be unique. A new user has been added to the source platform, so I want to run the pipeline again, ignore the duplicate records, and update the destination table with the additional (51st) record. Right now, I just get an error from the first record telling me I have a duplicate key error. I've tried using the 'Update' snap with a condition like "user_name != $user_name" but with no success. How can I run this pipeline so it will skip duplicate key values and add any new records?
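One way to skip the duplicates without changing the table (a sketch, assuming the existing user_name values can be read back cheaply): add a SQL Server - Select that pulls the current user_name values, left-join the API records against them on user_name, and insert only the rows that found no match. After the join, a Filter expression along the lines of:

    $existing_user_name == null

(where $existing_user_name is a hypothetical field carried in from the database branch) keeps only genuinely new records for the SQL Server - Insert snap, so the 51st record loads and the 50 duplicates never reach the database.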
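For the reshape question above: a Mapper can expand each document's first array into a list of row objects, and a JSON Splitter can then break the list into individual documents. A sketch:

    $first.map(x => { "First": x, "Second": $second })

mapped to a target path such as $rows (a name made up here), followed by a JSON Splitter on $rows, should produce one document per First/Second pair: 1/a, 2/a, 3/b, 4/b.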
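For the append-a-column merge above: Join needs a shared key, and these two documents don't have one, so a common trick is to manufacture a constant key on both branches and join on that. A sketch (the field name joinKey is made up here):

    Mapper, both branches:  expression 1  ->  target path $joinKey
    Join:                   join type Inner, left path joinKey, right path joinKey

A final Mapper can then drop $joinKey. Because the second document has a single row, every row of the first document picks up its Value, which is exactly the repeat-for-every-row behavior described.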