Contributions

Re: Send a 5 request per Second

I had the same kind of problem. One thing you can do is have it send out only 5 records at a time, and have an earlier process (that could be done using a script object) release only five records every second. When I had to do it, I was sending them out to a queue, and there was no real concern about when the receiving process actually got the records, so I had SnapLogic send them out as fast as possible, but only release a given number of records every so many seconds. Since you want to send them out so slowly, you can also send them with one thread and a batch size of 1, so it is easier to debug, etc.
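For what it's worth, here is a minimal sketch of that kind of throttle in plain Python. The send_record callback and the record source are hypothetical stand-ins; in a real pipeline this logic would live in the script object mentioned above.

```python
import time

RATE = 5      # records per window
WINDOW = 1.0  # window length in seconds

def throttled_send(records, send_record):
    """Release at most RATE records per WINDOW seconds.

    `send_record` is a hypothetical callback that forwards one
    record downstream (e.g. writes it to the queue).
    """
    sent_in_window = 0
    window_start = time.monotonic()
    for record in records:
        if sent_in_window >= RATE:
            # Sleep out whatever is left of the current window.
            remaining = WINDOW - (time.monotonic() - window_start)
            if remaining > 0:
                time.sleep(remaining)
            window_start = time.monotonic()
            sent_in_window = 0
        send_record(record)
        sent_in_window += 1
```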
Re: Insert Into Teradata

I have only done it with a competing product, and various languages and utilities. I was actually on a contract where the SAME table had its fields defined about 5 different ways. Luckily, I got the DBA to change the format to be consistent. But NONE would accept data unless it was null or an appropriate value, so I had the same type of problem. Also, now that I think about it, at one point they upgraded that product (the one that has the schema definition in the source/target objects), and the company changed the timestamp format, and I had to change the format in my code to suit. The format was otherwise IDENTICAL, but one didn't have microseconds and the other did. If all of your errors are overflows, THAT might be your problem! Is the last part SS, SS.sss, or SS.ssssss?

Re: Insert Into Teradata

What is the PRECISE definition in Teradata, and in SnapLogic? There are at least 2 types of timestamps in Teradata, and they expect relevant data for every byte. Make one mistake, and it will reject the data.

TIMESTAMP(0), stored as CHAR(19): YYYY-MM-DDbHH:MI:SS
TIMESTAMP(6), stored as CHAR(26): YYYY-MM-DDbHH:MI:SS.ssssss (six fractional digits, i.e. microseconds, extra)

https://docs.teradata.com/reader/WurHmDcDf31smikPbo9Mcw/VgEeisUpvM6NNgAXNLdJzQ

You could also try explicitly casting:

sel cast('2008-10-23' as date format 'yyyy-mm-dd') (TIMESTAMP, FORMAT 'YYYY-MM-DD-HH:MI:SS.S(6)')

to see what you should do.

Re: Filter snap throwing error

BTW, maybe this is variable, or can be avoided in some way, but it is a good idea to determine where the files are wanted, and what space you can use. For example, at least with Reltio's implementation, you are limited to something like 100K. I might be wrong, and it may be 100M, but these days you are likely to exceed EITHER! I ended up using MY ORG'S shared folder for files, and that was only for SMALL files that were a good deal smaller than even 100K. The actual sources varied, but some were FTP files that I moved to S3 and read from there. If I had to write out "flat" files, INCLUDING REPORTS, I wrote THOSE to S3 also. Some of those files are over 100GB.

ALSO, watch out on shared folders. It looks like you only have two, so as long as the pipeline is in a folder underneath the projects folder, and you still only have two levels like that, for local files you can put the file under …/shared/filename, with "filename" being what you want to call the file. THEN, your file will always be in the shared folder right under projects. REMEMBER though: make sure the admin and user are OK with that, and that you don't exceed a quota.

ANOTHER thing, just so you know: white space, like that on the reader in your picture, indicates the snap isn't connected. So if you aren't getting any data, that is why.

One neat feature SnapLogic just added, in the last release that you should already have, is an option to disable a snap. It does the SAME thing as disconnecting it, but:

- It disables the snap.
- It puts a red circle with a slash through it, to be more obvious.
- It is FAR easier to set and reset.

It is under the execute options for the snap. Instead of an enable-preview option, it has a list: "RUN", "RUN&PREVIEW", "DISABLE"…

Re: Pipeline output is un-predictable

I had a similar problem to what you described. Are you creating the document you put the data in? If you are merely adding to the data coming into the snap, you don't have to do anything; you can merely add the value. But if you are writing a different document out, it can produce seemingly random data if you don't first create a place to put it. I wrote this in Python, and I ended up writing a line like:

new = java.util.HashMap()

new was the new document I was writing out. The document coming in was created by SnapLogic and can simply be updated, so the need isn't so obvious, and it isn't a problem there. Anyway, I did that, and the problem went away.
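To make that concrete, here is a minimal sketch of the pattern as it might look in a Jython-style Script snap. The process function and its arguments are illustrative assumptions rather than the exact SnapLogic scripting API; the point is only that the outgoing document is created explicitly before anything is put into it.

```python
# Jython-style sketch; runs on Jython, where java.util is importable.
from java.util import HashMap

# Hypothetical per-document handler: `in_doc` stands for the incoming
# document, `write_output` for whatever hook the snap uses to emit one.
def process(in_doc, write_output):
    # Create the outgoing document explicitly instead of assuming an
    # empty one exists. Skipping this step is what produced the
    # seemingly random output described above.
    new = HashMap()
    new.put("id", in_doc.get("id"))
    new.put("status", "processed")
    write_output(new)
```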
Re: Trigger Task Option

The problem with that is that people generally don't care if one is running; they just don't want two running at the same time. So the ONLY place the logic can go is in the acceptance of the API command. The logic should be: OK, got a trigger; NO OTHER TRIGGER SHOULD BE ACCEPTED UNTIL COMPLETION OR FAILURE. And if you do have a queue-up feature, it should do the same thing in the same place, except that the trigger should be remembered for after the current one finishes. There probably should be some consideration for a failure in there.

I always figured this addresses the problem ALL schedulers have: nobody knows when something will finish. Like the time a DBA kept incorrectly recreating a table I needed, making it take almost 150 times as long! My task that generally took 30 minutes took about 3 DAYS after he recreated the table. Had two processes started at once, it could have taken substantially longer and/or potentially corrupted the table's data. Someone also could have been looking at the table, which could have delayed it substantially.

I was even at a place where they thought their network ran at 45 times the speed it actually did. It wasn't until we NEEDED that speed that the question was raised and, evidence in hand, we confronted the one guy who could tell us the truth. The BUILDING had the throughput we were told; we were allocated only a small part of THAT.

One place I was at had two complementary systems, for backup. They figured that if A impersonated B, and B did the same with A, there would be no problem. If they had had a complete failure of one system, maybe it might have worked. Instead, A took over for B, which took over for A, which took over for B, etc. Everything came to a screeching halt. We never found out what started it, but once it started, those systems were just trying to boot until a person came in and STOPPED it!

One place even bought a certain product; I don't know if they ever fixed the bug or whatever it was. The manager had to crash the system every morning and reboot. It was so slow that he couldn't do it properly, and the garbage collection in that software obviously had problems.

So yeah, sometimes it is a good idea to have some sort of safety device.
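For what it's worth, here is a rough sketch of that kind of safety device as a lock file, so a second trigger is rejected while a run is in flight. Everything here (the lock path, the job callback) is a hypothetical stand-in; in SnapLogic this logic would have to live wherever the trigger is accepted.

```python
import os

LOCK_PATH = "/tmp/my_pipeline.lock"  # hypothetical location

def run_exclusively(job):
    """Run `job` only if no other run currently holds the lock."""
    try:
        # O_EXCL makes creation atomic: exactly one caller wins.
        fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        print("Another run is in progress; trigger rejected.")
        return False
    try:
        os.write(fd, str(os.getpid()).encode())
        job()
        return True
    finally:
        # Release the lock on completion OR failure, so one crash
        # does not block every later trigger.
        os.close(fd)
        os.remove(LOCK_PATH)
```

A queue-up variant would, instead of returning False, remember the trigger and fire it once the lock is released.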
Re: Connection to legacy Sybase DBMS sometimes hangs

Yeah, I think Sybase made it the default on the idea that people would always have a write-once mentality, where changes would get a reversing entry, or things would only be read. It is a nice concept, but I HATE that they made it the default. It gave me a lot of grief, because the DBA insisted on doing ALL changes and just wouldn't listen. So that computer probably did over 2 months of work for NOTHING. Luckily that was generally on the weekends, so it wasn't as bad as it could have been.

Outside of transactions, locks, and deadlocks, the database should be relatively fast, meaning that you wouldn't notice a significant pause. And I am ASSUMING that that is what you mean. The page size at least WAS small, so it would do more page splits than, say, MS SQL Server TODAY, or Oracle. So there might be more gaps where it seems to almost hesitate, but we are talking barely perceptible hesitations of maybe tens or hundreds of milliseconds.

If you can, it might be a good idea to bring it up with Sybase. It COULD be a JDBC problem, or even some oddity in Sybase. If you are doing enough processing, it could even be garbage collection. It could be a Windows problem. Windows NEVER handled virtual memory well, and at least earlier versions of Windows didn't generally handle memory over a certain amount properly; they always wanted people to pay more to be able to do that. And what else is happening on that Windows system?

Re: Connection to legacy Sybase DBMS sometimes hangs

Well, clustered indexes are only created on tables with primary keys, so it sounds like that might not be a problem here. What I mean by key order is that keys can be anything. Say the key was numeric, and you had records 1-1000000 out there, with their number as the primary key. MOST databases store data OUT OF ORDER, and have a LINKED set of pages that hold the keys and pointers into their tables. The overall hit on a read isn't that noticeable, and a write can be VERY fast.

A CLUSTERED index on Sybase, and on the older versions of MS SQL Server, is different. They store the data IN ORDER, and kind of use THAT as the index. Reads are a bit faster, and a write can be very slow if written out of order. In my example, with a non-clustered index, writes and reads would be roughly the same wherever they occur. As I said though, that index, and the random order, will slow down reads a bit. If you wrote record 1000000 or later, the write would actually be a bit faster than with the non-clustered index. If you wrote record 1, it could be MUCH slower, because it has to make room for that record. (A tiny Python demo at the bottom of this page illustrates that cost.)

It has been a long while since I have used Sybase, or the older versions of MS SQL. I believe you can just use sp_help mytablename to check the definition.

Re: Connection to legacy Sybase DBMS sometimes hangs

OH MAN! This brings back memories! You might have fallen into a trap I consider a design flaw! The default primary key for this variant of Sybase, IIRC, was CLUSTERED! Is that what it is on YOUR system?

http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc32300.1570/html/sqlug/X50317.htm

If so, you must do things IN KEY ORDER! If you do them OUT of order, it will take LONGER! How much longer? Well, I was on a contract where I often had to run things at the end of the week, just before I flew out. NO PROBLEM! It was a reliable process, and generally took 30 MINUTES to run! Well, sometimes I would get to the airport and be struck with HORROR! I would suddenly remember that the DBA might have made a change that week.

Whenever he "made a change", he would drop and recreate the table WITH NO SPECIFICATION! So the primary key would be clustered, and the routine I was running would take about THREE DAYS!!! During this period, the connection will APPEAR to hang! It is really just waiting for the operation to complete.

CLUSTERED on SYBASE, at least with the first variant they had, and on MS SQL Server up to 6.5 (or was it 7?), since they shared the same code, was GREAT for READING, since it was faster and took less space. It was great for writing IN SEQUENCE! It was HORRIBLE for writing OUT of sequence. The delay on a particular record is based on where the update or delete occurs. If it is towards the end, it might not be very noticeable. If it is at the beginning, it could take a LONG time!

Of course, it is ALSO possible that you simply have a lock, or a deadlock. If it is a lock, it will likely eventually go away. If it is a deadlock, it will stay locked for a while and eventually kill, IIRC, the longest-running transaction. Of course, the transaction on the sacrificed job will roll back as well.

Re: Update Multiple Line Items using the NetSuiteUpdate

It is just a guess, but it looks like you were trying to delete a line, and NetSuite has a second area saying that a transaction occurred. Apparently either both lines are created at the same time, or there was some transaction with the item, and you have to delete the transaction line (stating it was shipped, paid for, etc.) before you delete the detail for that line (stating WHAT was ordered).
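Coming back to the clustered-index discussion above, here is the promised demo: a purely illustrative Python sketch of why in-order storage punishes out-of-order writes. A Python list stands in for the in-order pages, so inserting at the front shifts every element while appending at the end moves nothing; it is an analogy, not Sybase's actual page mechanics.

```python
import timeit

N = 100_000
table = list(range(N))  # rows stored IN ORDER, like a clustered index

# Writing past the end (in key order): nothing has to move.
in_order = timeit.timeit(lambda: table.append(N), number=1000)

# Writing at the beginning (out of key order): every row shifts over.
out_of_order = timeit.timeit(lambda: table.insert(0, -1), number=1000)

print("in-order writes:     %.4fs" % in_order)
print("out-of-order writes: %.4fs" % out_of_order)
```

On a typical machine the out-of-order inserts come out far slower, which is the same shape of slowdown as the 30-minutes-to-3-days story above, if much smaller in scale.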