Update on the topic. After a long investigation, checking every snap, the documentation, and Ultra Task support, I found that there was nothing wrong with my pipeline, but that the API portal can't handle certain requests even when they're not that big. I tested all my API GET, PUT, and POST methods in the browser with the request URL, and they all worked with an instant response.
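For reference, the browser tests amounted to something like the sketch below: hit the task URL once per method and time the response. This is a hypothetical illustration only; the URL is a placeholder, and the `opener` parameter is my own addition so the loop can be exercised without a live endpoint.

```python
import json
import time
import urllib.request


def probe(url, methods=("GET", "PUT", "POST"),
          opener=urllib.request.urlopen, timeout=10):
    """Issue one request per HTTP method and record (status, latency)."""
    results = {}
    for method in methods:
        # GET carries no body; PUT/POST send a tiny JSON payload
        body = None if method == "GET" else json.dumps({"ping": True}).encode()
        req = urllib.request.Request(url, data=body, method=method)
        req.add_header("Content-Type", "application/json")
        start = time.monotonic()
        try:
            with opener(req, timeout=timeout) as resp:
                results[method] = (resp.status, round(time.monotonic() - start, 3))
        except Exception as exc:
            # e.g. a portal-side failure such as "Unable to handle echo message"
            results[method] = (None, str(exc))
    return results

# Usage (placeholder URL, not the real task endpoint):
# probe("https://example.elastic.snaplogic.com/api/1/rest/feed/my-task")
```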
Conclusion: I think the API portal has some issues with API calls that return lots of data or receive multiple requests. I will close my topic tomorrow.
Also, if anyone has experience with the API, the API portal, or Ultra Tasks in SnapLogic, I'm all ears to learn more.
Next I tried to find something online about it, but all I could find was that the node might not be able to allocate enough memory, or that CPU usage was too high. I checked the dashboard: it only used about 1 percent CPU and less than 10 MB of memory. I did, however, set the SQL Select snap's staging mode to "In memory".
Then, after researching all that, I wondered why I hadn't tried the request URL in the browser.
Somehow all my API call requests then worked, and I got an instant response in the format I built with the pipeline and Pipeline Execute.
I hope this explains it a bit more. If you like, we can also have a call in the future so I can explain step by step, in more detail, what I did.
But maybe there is another problem in my pipelines that causes the portal to give that response; I don't know yet. My apologies if my thinking or answer is wrong.
Note: I'm not very experienced in building efficient pipelines for APIs; still learning. Open to suggestions in the future.
Thanks, Jens. The image of your pipeline is too low-res for me to read the snap names. But I think I'm hearing that the crux of the problem is that your SQL Select returns multiple records, and you're trying to deal with that in your Ultra pipeline.
The key to doing that is to use a child pipeline and move the SQL Select to that pipeline, along with whatever snaps are needed to aggregate those multiple documents and return a single document as the result of the child pipeline. Then in the parent, do any additional processing needed on that child output document to create a single response document for this request. This works because a new instance of the child pipeline is created for every request, so you can use aggregating snaps there that you can’t use in the root Ultra pipeline. Make sense?
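To make the parent/child split concrete, here is a conceptual sketch (plain Python, not SnapLogic configuration) of what the child pipeline's aggregation step does: collapse the many documents coming out of the SQL Select into the single document the Ultra parent must return per request. The field names are illustrative, not anything SnapLogic prescribes.

```python
def aggregate(rows):
    """Fold many result documents into one response document,
    roughly what a grouping/aggregating snap does in the child pipeline."""
    return {"count": len(rows), "items": list(rows)}


# Example: three SQL rows become one document for the parent to shape
# into the final Ultra response.
single_doc = aggregate([{"id": 1}, {"id": 2}, {"id": 3}])
```

The point of the sketch is the shape of the data flow: many-in, one-out happens in the child, so the root Ultra pipeline only ever sees one document per request.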
Yeah, that was the problem before, and I implemented the child pipeline. After that the pipeline wasn't erroring in the dashboard, but when I made multiple requests it always gave the response "Unable to handle echo message". The next day it worked again, until after a certain amount of time.
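Since the failure only shows up under repeated requests, a small repro loop can make the pattern measurable: fire the same request N times and tally how often the portal answers versus errors. This is a hypothetical sketch; `fetch` is an injected callable (in practice it would wrap `urllib`/`requests` against the real portal URL) so the loop itself can be tested offline.

```python
def hammer(fetch, url, n=20):
    """Call fetch(url) n times; count successes and failures."""
    ok = failed = 0
    for _ in range(n):
        try:
            fetch(url)
            ok += 1
        except Exception:
            # e.g. the intermittent "Unable to handle echo message" response
            failed += 1
    return {"ok": ok, "failed": failed}
```

If the failure ratio climbs only while the pipeline is being edited, that would support the theory that the Ultra task is mid-update when those requests arrive.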
Then I tried the request URL in the browser; that works, and I get an instant response.
For now it's working, and we haven't had the error all day. Could it be that constantly editing the pipeline causes it to show that error message because of all the changes? I know it always takes some time for the Ultra task to pick up pipeline updates.