Day 9 started the fun of "the logic is easy, but since we need continual state, it's going to take a long time" Advent of Code solutions. My sample pipeline solution took 1 minute to run, and that was for under 100 total "head" movements. Then I saw the input file had 2,000 lines and realized this was going to take over an hour to complete. My first iteration processed 1.7 documents per second (where each document is a 1-unit step in the puzzle); my input file had 11,240 steps, so that run took roughly 2 hours (8 minutes short of 2 hours, to be exact). I'm currently running with the setup you see in this post. I assume it will work properly, and I'm now processing 2.4–2.5 documents per second, so based on the new speed it SHOULD complete in about 80 minutes. That's still a long time, but I'm impressed this is even possible in SOMEWHAT of a normal timeframe.
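For anyone curious where those runtime numbers come from, the arithmetic is just step count divided by throughput (figures taken from this post):

```python
# Estimate total runtime from step count and measured throughput.
steps = 11_240  # unit steps in my input file

# First iteration: ~1.7 documents (steps) per second.
first_run_minutes = steps / 1.7 / 60
print(f"first run: ~{first_run_minutes:.0f} minutes")    # ~110 minutes, just under 2 hours

# In-place version: ~2.4-2.5 documents per second.
improved_minutes = steps / 2.45 / 60
print(f"improved run: ~{improved_minutes:.0f} minutes")  # ~76 minutes
```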
The speed improvement comes from having the tail-movement pipeline update the data in-place, with a pipeline parameter defining the index of the output array to process, rather than reading from and writing to a file. Previously, the index and filename were passed in as parameters, the file was read, all the processing was done, and the result was written back out. That disk I/O really does impact processing time, so these are the scenarios where it'd be nice to have some built-in way to keep state. I'd say today's puzzle is more of an exercise in patience, and in really testing every step of any sub-pipeline you need. While testing is hard, you can see that I've just leveraged a JSON Generator (which I disable and disconnect for the final run) to supply the internal values, and I also disable the pipeline calls while getting the initial state configured.
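For anyone implementing the same idea outside SnapLogic, here's a minimal Python sketch of the two approaches: round-tripping the state through a file on every step versus editing a shared in-memory structure, with `index` playing the role of the pipeline parameter. The function and variable names are hypothetical (not taken from my pipelines), and `move_tail` is just the puzzle's tail-follow rule as I understand it:

```python
import json
from pathlib import Path

def move_tail(head, tail):
    """Day 9 tail rule: if the tail is more than one square away
    from the head, step it one unit toward the head on each axis."""
    dx, dy = head[0] - tail[0], head[1] - tail[1]
    if max(abs(dx), abs(dy)) > 1:
        sign = lambda d: (d > 0) - (d < 0)
        return (tail[0] + sign(dx), tail[1] + sign(dy))
    return tail

# Slow approach: read and write the state file on every single step.
def process_step_via_file(path, index, head):
    state = json.loads(Path(path).read_text())            # disk read
    state[index] = list(move_tail(head, tuple(state[index])))
    Path(path).write_text(json.dumps(state))              # disk write

# Faster approach: keep the array in memory and update it in place.
def process_step_in_place(state, index, head):
    state[index] = move_tail(head, state[index])
```

The per-step work is identical; the only difference is eliminating the read/parse/serialize/write cycle, which is where the throughput gain comes from.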
Another interesting problem, and unfortunately a very long runtime to get the solution. Below are the screenshots from the pipeline runtimes, then the pipelines themselves and the SLP files (I'll edit/update to add the second runtime screenshot).
With sub-sub-pipeline reading/writing file (1 hour, 51 minutes):
With sub-sub-pipeline just editing the structure itself (1 hour, 18 minutes):
Screenshots:
SLP Files:
day09_2022_12_09.slp (32.4 KB)
day09_move_2022_12_09.slp (38.3 KB)
day09_tail_move_2022_12_09.slp (10.7 KB)