My solution parses each line only once to find the leftmost and rightmost digits for both parts of the puzzle.
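A rough sketch of such a single pass (shown here in Python purely for illustration, not the actual solver code), with part 2 additionally recognizing spelled-out digits:

```python
# Illustrative sketch: scan each line once, tracking the first and last digit
# seen for part 1 (numeric digits only) and for part 2 (spelled-out digits too).
WORDS = {w: i + 1 for i, w in enumerate(
    ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine"])}

def calibration_values(line):
    first1 = last1 = first2 = last2 = None
    for i, ch in enumerate(line):
        if ch.isdigit():
            d = int(ch)
            first1, last1 = (first1 if first1 is not None else d), d
            first2, last2 = (first2 if first2 is not None else d), d
        else:
            for word, d in WORDS.items():
                if line.startswith(word, i):
                    first2, last2 = (first2 if first2 is not None else d), d
                    break
    part1 = first1 * 10 + last1 if first1 is not None else 0
    part2 = first2 * 10 + last2 if first2 is not None else 0
    return part1, part2
```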
## Day 2: Cube Conundrum
This one seemed pretty straightforward. For each line, the solution determines the maximum count drawn of each color and immediately sums up the games that stay within the given limits.
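Both parts then fall out of the per-color maxima; a minimal Python sketch (illustrative only; the 12/13/14 limits are the ones given in the puzzle):

```python
import re

LIMITS = {"red": 12, "green": 13, "blue": 14}   # bag contents from the puzzle statement

def solve_day2(lines):
    id_sum = 0      # part 1: sum of IDs of possible games
    power_sum = 0   # part 2: sum of per-game "powers"
    for line in lines:
        game_id = int(re.match(r"Game (\d+)", line).group(1))
        maxima = {"red": 0, "green": 0, "blue": 0}
        for count, color in re.findall(r"(\d+) (red|green|blue)", line):
            maxima[color] = max(maxima[color], int(count))
        if all(maxima[c] <= LIMITS[c] for c in LIMITS):
            id_sum += game_id
        power_sum += maxima["red"] * maxima["green"] * maxima["blue"]
    return id_sum, power_sum
```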
## Day 3: Gear Ratios
For this I modified the solver class to pass in three lines at a time, shifting down by one line in each iteration, processing the numbers in the middle line and looking for adjacent symbols in the lines before and after. The tricky part was correctly tracking the data needed to process each line and discarding it in time, without resorting to reading in all the data before processing.
I introduced the test framework for this puzzle while stumbling over quite a few bugs.
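For part 1, the three-line window might look roughly like this in Python (an illustrative sketch, assuming a "symbol" is any character that is neither a digit nor a dot):

```python
import re

def is_symbol(ch):
    return ch != "." and not ch.isdigit()

def part_number_sum(lines):
    total = 0
    padded = [""] + list(lines) + [""]                    # empty neighbours for the edges
    for prev, cur, nxt in zip(padded, padded[1:], padded[2:]):
        for m in re.finditer(r"\d+", cur):
            lo, hi = max(m.start() - 1, 0), m.end() + 1   # window including diagonals
            around = prev[lo:hi] + cur[lo:hi] + nxt[lo:hi]
            if any(is_symbol(ch) for ch in around):
                total += int(m.group())
    return total
```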
## Day 4: Scratchcards
For part 1, the algorithm simply matches the winning numbers against the numbers we have, and doubles the current line's score for every match after the first.
For part 2 there is a list of copy counts for the upcoming cards, where the list index is always relative to the current line. This works because the copies are always applied contiguously to the upcoming cards. Once a card has been processed, its copy count is removed from the beginning of the list (index 0).
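Sketched in Python (illustrative only), both parts fit into one pass over the lines, with a small deque holding the pending copy counts:

```python
from collections import deque

def solve_day4(lines):
    part1 = part2 = 0
    copies = deque()                      # extra copies owed to the upcoming cards
    for line in lines:
        winning, have = (set(p.split()) for p in line.split(":")[1].split("|"))
        matches = len(winning & have)
        part1 += (1 << (matches - 1)) if matches else 0   # 1 point, doubled per extra match
        count = 1 + (copies.popleft() if copies else 0)   # the original card plus its copies
        part2 += count
        for i in range(matches):                          # each match copies the next cards
            if i < len(copies):
                copies[i] += count
            else:
                copies.append(count)
    return part1, part2
```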
## Day 5: If You Give A Seed A Fertilizer
Originally, I had implemented this by reading in all the data first and constructing a list of the seven mappings, each containing a list of mapping ranges. I rewrote this when I realized that the conversion can be done line by line by maintaining separate lists of "unconverted" and "converted" values. Each mapping range is applied to all unconverted values; if a value matches, it is converted and moved into the list of converted values. At the end of a map, all converted values are moved back into the unconverted list, and values that no range matched simply remain unchanged for the next map.
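A Python sketch of that bookkeeping, assuming each map is given as (destination start, source start, length) triples like in the input:

```python
def apply_map(values, map_ranges):
    """Run one almanac map over a list of values."""
    unconverted = list(values)
    converted = []
    for dest, src, length in map_ranges:
        remaining = []
        for v in unconverted:
            if src <= v < src + length:
                converted.append(v - src + dest)   # matched: convert and set aside
            else:
                remaining.append(v)
        unconverted = remaining
    # values no range matched map to themselves
    return converted + unconverted
```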
For part 2, it is not necessary (nor feasible) to expand the input ranges into individual values and run them through the existing algorithm. Instead I modified the algorithm to operate on ranges directly. This means that a successful conversion can split a range into up to three parts, where one part is moved into the "converted" pile while the others remain unconverted.
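The range-based variant might look like this (still an illustrative sketch), with ranges as (start, length) pairs; only the piece overlapping the map's source range is converted, and at most two leftover pieces stay unconverted:

```python
def apply_map_to_ranges(ranges, map_ranges):
    unconverted = list(ranges)
    converted = []
    for dest, src, length in map_ranges:
        src_end = src + length
        remaining = []
        for start, size in unconverted:
            end = start + size
            lo, hi = max(start, src), min(end, src_end)
            if lo < hi:
                converted.append((lo - src + dest, hi - lo))   # overlapping middle piece
                if start < lo:
                    remaining.append((start, lo - start))      # piece before the source range
                if hi < end:
                    remaining.append((hi, end - hi))           # piece after the source range
            else:
                remaining.append((start, size))
        unconverted = remaining
    return converted + unconverted
```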
## Day 6: Wait For It
This one I solved by calculating the roots of the function *f(x) = -x^2 + time·x - distance* (holding the button for *x* time units travels *x · (time - x)* distance units, which must beat the record) and determining the distance between them. Part 2 was the first puzzle that required 64-bit integers for the calculations.
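A minimal sketch of that root counting, in Python (which sidesteps the 64-bit concern, since its integers are arbitrary precision); the adjustment loops handle exact ties with the record, which do not count as wins:

```python
import math

def ways_to_win(time, record):
    """Count hold times x with x * (time - x) > record, i.e. the integers
    strictly between the roots of x^2 - time*x + record = 0."""
    disc = time * time - 4 * record
    if disc <= 0:
        return 0
    sqrt_disc = math.isqrt(disc)            # integer sqrt avoids float imprecision
    first = (time - sqrt_disc) // 2
    last = (time + sqrt_disc) // 2 + 1
    # nudge inwards past exact ties and rounding slack
    while first <= last and first * (time - first) <= record:
        first += 1
    while last >= first and last * (time - last) <= record:
        last -= 1
    return max(0, last - first + 1)
```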
## Day 7: Camel Cards
The first puzzle that I could not solve line by line (day 6 doesn't count). For this one I store all the card hands and assign each a "type", e.g. "four of a kind", while processing them, by counting the occurrences of the different card values in a hand. The rest of the work is done in a custom compare function. Once all data is processed, I use the compare function to sort all card hands and then multiply the resulting ranks by the bids.
For part 2, each card hand gets a "joker type" analogous to the "type", for which the number of joker cards is added to the highest count among the other card values.
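Roughly, the type assignment can be done by counting card labels. The sketch below (Python; the function names are made up, and it uses sort keys rather than the compare function described above) exploits that tuples of descending counts already compare in hand-strength order, e.g. a full house (3, 2) beats three of a kind (3, 1, 1):

```python
from collections import Counter

CARD_ORDER = "23456789TJQKA"          # part 1 strength order; part 2 would put J lowest

def hand_type(hand):
    """Descending card counts, e.g. "AAQQ2" -> (2, 2, 1)."""
    return tuple(sorted(Counter(hand).values(), reverse=True))

def joker_type(hand):
    """Like hand_type, but jokers join the most frequent other card."""
    jokers = hand.count("J")
    counts = sorted(Counter(c for c in hand if c != "J").values(), reverse=True) or [0]
    counts[0] += jokers                # a hand of five jokers becomes (5,)
    return tuple(counts)

def total_winnings(hands_and_bids):
    """hands_and_bids: list of (hand, bid) pairs; rank 1 is the weakest hand."""
    key = lambda hb: (hand_type(hb[0]), tuple(CARD_ORDER.index(c) for c in hb[0]))
    ranked = sorted(hands_and_bids, key=key)
    return sum(rank * bid for rank, (_, bid) in enumerate(ranked, start=1))
```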
## Day 8: Haunted Wasteland
Again a puzzle where I had to read in all of the data before starting the algorithm. It proved difficult to verify parts of the algorithm by hand, but part 1 was still pretty straightforward.
Part 2 was a bit sneaky. This is the first puzzle where the result is outside the 32-bit unsigned integer range. And it is solvable only because each starting node leads into a loop containing one of the target nodes, where the length of the loop is a multiple of the length of the instruction sequence. With this knowledge, one can stop traversing the network once each target node has been reached and calculate the result directly as the least common multiple of the individual loop lengths.
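Roughly, in Python (an illustrative sketch; `instructions` and `network` are names chosen here, with `network` mapping a node to its (left, right) pair):

```python
from math import lcm

def ghost_steps(instructions, network):
    """instructions: string of 'L'/'R'; network: node -> (left, right)."""
    def steps_to_target(node):
        steps = 0
        while not node.endswith("Z"):
            turn = instructions[steps % len(instructions)]
            node = network[node][0 if turn == "L" else 1]
            steps += 1
        return steps
    # relies on each ..A start falling into a loop whose length is a multiple
    # of the instruction length, as described above
    return lcm(*(steps_to_target(n) for n in network if n.endswith("A")))
```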
## Day 9: Mirage Maintenance
This one I enjoyed the most so far. The process described in the puzzle, constructing a series of differences from the previous series and then reversing the process to extend the series, is equivalent to finding a polynomial of degree at most *n - 1*, where the original series consists of *n* equidistant values of that polynomial.
So instead of using the outlined "brute force" method, I used Lagrange polynomials with *x1 = 0, x2 = 1, ..., xn = n - 1*, evaluated at *x = n* (for part 1) and *x = -1* (for part 2), to find the function values at the extrapolated "points". Conveniently, the Lagrange basis polynomials can be precalculated (with some tricks to stay within the Int64 limits), which makes computing the extrapolated values quite easy.
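A compact illustration of that evaluation in Python (the function name is made up, and exact rationals stand in for the Int64 tricks mentioned above):

```python
from fractions import Fraction

def extrapolate(ys, x):
    """Evaluate the unique polynomial of degree <= n-1 through
    (0, ys[0]), (1, ys[1]), ..., (n-1, ys[n-1]) at position x."""
    n = len(ys)
    total = Fraction(0)
    for i, y in enumerate(ys):
        basis = Fraction(1)
        for j in range(n):
            if j != i:
                basis *= Fraction(x - j, i - j)   # Lagrange basis polynomial l_i(x)
        total += y * basis
    return total                                  # an integer for integer input series

# part 1: extrapolate(ys, len(ys)); part 2: extrapolate(ys, -1)
# e.g. extrapolate([0, 3, 6, 9], 4) == 12 and extrapolate([0, 3, 6, 9], -1) == -3
```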
A nice explanation of the Lagrange method can be found here <http://bueler.github.io/M310F11/polybasics.pdf>.
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.