Terminating Knight Tours on Infinite Boards

Numberphile did a video on potentially infinite knight tours that I found quite interesting. I wanted to test out literate programming using Quarto to solve it, and I did (result) 🙂

The RStudio integration with Quarto is really good, and for a problem like this it really helps to keep your thoughts and code in the same place. Plots like this one, which shows the complete tour, are displayed inline, and the experience is comparable to a Jupyter notebook, with the advantage that it lives not in the browser but in a polished IDE.

AoC 3rd Advent Sunday Wrap Up

Be warned: spoilers ahead.

Days 5 to 11 posed a bit more of a challenge than the first four and gave me the opportunity to explore various parts of R.

Day 5

The actual logic of the puzzle was quite easy:

do_move <- function(stacks, count, from, to, move_fun = identity) {
  # Take the top `count` crates from stack `from` and put them on stack `to`,
  # optionally transforming them (e.g. reversing) with `move_fun`.
  stacks[[to]] <- c(stacks[[to]], move_fun(tail(stacks[[from]], count)))
  stacks[[from]] <- stacks[[from]][seq_len(length(stacks[[from]]) - count)]
  stacks
}
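
For example, on toy stacks (not the real puzzle input): if I remember the puzzle right, moving crates one at a time reverses their order (rev), while moving them as a single block keeps it (identity).

stacks <- list(c("Z", "N"), c("M", "C", "D"), "P")
do_move(stacks, count = 2, from = 2, to = 1, move_fun = rev)   # top crates arrive reversed
do_move(stacks, count = 2, from = 2, to = 1)                   # identity: order preserved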

Here move_fun was either identity() or rev(). Getting the data into shape was more interesting, and the native pipe could be put to good use, as well as a new experimental feature:

stacks <- gsub("    ", " [_]", parts[[1L]]) |>
  strsplit(split = "\n") |>
  unlist() |>
  stacks => head(stacks, length(stacks) - 1L) |>
  strsplit(split = " ") |>
  data.table::transpose() |>
  lapply(rev) |>
  filter_empty()

The pipebind operator => in the middle of this pipeline binds the current value of the pipeline to a name. This allows using that value anywhere in the next call without resorting to ad hoc anonymous functions. Since it’s an experimental feature, it must be activated; this can be done by putting Sys.setenv("_R_USE_PIPEBIND_" = TRUE) in your .Rprofile. This also works in RStudio.
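
A minimal standalone illustration (assuming _R_USE_PIPEBIND_ was set before the code is parsed, e.g. via .Rprofile): the bound name can then be used in any argument position of the next call.

mtcars |>
  subset(cyl == 4) |>
  d => lm(mpg ~ disp, data = d)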

Day 6

Day 6 was the easiest puzzle so far, but I learned one small trick: the base function match() has an optional argument nomatch which specifies the return value if no match is found. In this case an if statement can be avoided by setting nomatch = 0. The code below appends the new character and then keeps everything after the matched index; when there is no match the slice starts at 1, so the whole buffer with the one character added is kept:

index <- match(char, buffer, nomatch = 0L)     # 0 when char is not in the buffer yet
buffer <- c(buffer, char)                      # append the new character
buffer <- buffer[(index + 1L):length(buffer)]  # drop everything up to the match (nothing if index is 0)

Day 7

This puzzle gave a good reason to start the Dictionary class in recollections! Unlike the built-in list type, the recollections dictionary is used by reference, which allowed finding a directory and modifying it directly without having to copy it back into the directory tree. On top of that, the C++ code underlying the Dictionary class is more efficient than that of list. With the tree of dictionaries in place, the logic to find the sizes of all leaves in the tree is a standard use of recursion.
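
To illustrate the reference semantics, here is a sketch using base R environments rather than the recollections API: a directory is a mutable container holding file sizes and sub-directories, so a lookup returns a handle that can be modified in place, and the leaf sizes are summed with straightforward recursion.

new_dir <- function() new.env(parent = emptyenv())

root <- new_dir()
root$b.txt <- 8504156                   # a file is just a named size
root$a <- new_dir()                     # a sub-directory
root$a$f.txt <- 14848514

cwd <- root$a                           # a handle into the tree...
cwd$g.txt <- 2557                       # ...modified in place, no copying back

dir_size <- function(dir) {
  sum(vapply(ls(dir), function(nm) {
    entry <- get(nm, envir = dir)
    if (is.environment(entry)) dir_size(entry) else entry
  }, numeric(1)))
}
dir_size(root)                          # total size of all files in the tree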

Day 8

I’m not completely happy with my solution to this puzzle. After tinkering and looking at profvis output I managed to create a solution that runs in less than 10 seconds on this old machine, but if the forest gets much bigger this code will struggle. Putting the puzzle input into a data.table might not be the most natural thing to do, but in the end it allowed for a quite clear solution, so maybe it’s not all bad.

Day 9

In this puzzle we were asked to implement some weird version of snake. Again, this seems to be best solved using recollections::Dictionary to keep track of which cells have been visited. My initial solution had quite a bit of logic to determine the moves of the tail, but this Reddit comment simplified that logic considerably. Despite being similar solutions, it’s interesting to see that the Python and R versions use quite different language features.
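
The gist of that simplification, as a rough sketch rather than the original code: a knot only moves when the knot in front of it is more than one step away, and then it moves one step towards it along each axis using sign().

follow <- function(head_pos, tail_pos) {
  delta <- head_pos - tail_pos
  if (any(abs(delta) > 1)) tail_pos + sign(delta) else tail_pos
}
follow(c(3, 1), c(1, 1))   # tail catches up: c(2, 1)
follow(c(2, 2), c(1, 1))   # still touching diagonally: stays at c(1, 1)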

Day 10

Now we are asked to emulate a simple instruction set. The instruction set is so simple that execution and a history of all states can be handled using just data.table (with a big help from shift() and nafill()).
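
A minimal sketch of what that can look like, using a made-up three-instruction program rather than the original solution: expand each instruction to one row per cycle, apply the addx value on its last cycle, and derive the register value with cumsum(), shift() and nafill().

library(data.table)
prog <- data.table(instr = c("noop", "addx", "addx"), value = c(NA, 3L, -5L))
prog[, cycles := fifelse(instr == "addx", 2L, 1L)]        # addx takes two cycles
prog[, id := .I]                                          # instruction id
states <- prog[rep(1:nrow(prog), prog$cycles)]            # one row per cycle
states[, delta := fifelse(!duplicated(id, fromLast = TRUE), value, NA_integer_)]
states[, delta := nafill(delta, fill = 0L)]               # waiting cycles and noops add 0
states[, X := 1L + shift(cumsum(delta), fill = 0L)]       # register value during each cycle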

Day 11

In this exercise it seemed logical to put all properties of the individual monkeys in some kind of class, so this was a good moment to play around with S4 classes. This worked out quite well, but I did notice there is a bit of overhead when interacting with the slots of a class. By batching slot manipulations, a speedup of roughly 50% was achieved. This performance improvement is unlikely to be relevant for most uses of R, though, as a lot of R code won’t have such tight loops.
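
A minimal sketch of the batching idea (a toy example, not the actual puzzle code): read a slot once into a local vector, work on it in the tight loop, and write it back in a single assignment instead of touching the slot on every iteration.

setClass("Monkey", slots = c(items = "numeric", inspections = "numeric"))

inspect_all <- function(monkey, op = function(x) x * 19) {
  items <- monkey@items                 # read the slot once
  for (i in seq_along(items)) {
    items[i] <- op(items[i]) %/% 3      # instead of monkey@items[i] <- ... each time
  }
  monkey@items <- items                 # write the slot back once
  monkey@inspections <- monkey@inspections + length(items)
  monkey
}

m <- new("Monkey", items = c(79, 98), inspections = 0)
m <- inspect_all(m)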

Scrypt package back on CRAN

The scrypt package is back on CRAN and I have become its maintainer. The package allows password hashing and verification using Colin Percival’s scrypt scheme. The advantage of the scrypt hashing scheme over plain cryptographic hash functions such as SHA is that computing the hash takes much more time and memory, and a random salt is always used. This makes it much more expensive and time-consuming for attackers to recover passwords from hashes obtained through database hacks.
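
Basic usage looks roughly like the snippet below; the function names hashPassword() and verifyPassword() are quoted from memory, so check the package documentation.

library(scrypt)
hash <- hashPassword("correct horse battery staple")   # salted and deliberately slow
verifyPassword(hash, "correct horse battery staple")   # TRUE
verifyPassword(hash, "wrong password")                 # FALSE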

Thanks to RStudio, and Andy Kipp in particular, for doing all the heavy lifting of creating and writing the package and for allowing me to take over maintainership of the CRAN package and the GitHub repo. Issues and patches welcome!

CSV benchmarking 1/n

In this series of posts we will be looking at a number of ways to store data using R in as little space as possible and also consider the portability of the different solutions. As an example, the New York City Flights data set for 2013 is used (available through CRAN). The first rows and columns are shown below:

# A tibble: 336,776 x 19
    year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
   <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
 1  2013     1     1      517            515         2      830            819
 2  2013     1     1      533            529         4      850            830
 3  2013     1     1      542            540         2      923            850
 4  2013     1     1      544            545        -1     1004           1022
 5  2013     1     1      554            600        -6      812            837
 6  2013     1     1      554            558        -4      740            728
 7  2013     1     1      555            600        -5      913            854
 8  2013     1     1      557            600        -3      709            723
 9  2013     1     1      557            600        -3      838            846
10  2013     1     1      558            600        -2      753            745
# … with 336,766 more rows, and 11 more variables: arr_delay <dbl>,
#   carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#   air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>

For portability across programming languages, CSVs, optionally compressed, are a great option, as any platform for doing anything with data should be able to read CSV. Alternative formats such as fst, parquet and ORC have a number of advantages, such as smaller sizes, better fidelity and built-in integrity checking; these will be examined in a later post. For now, gzipped CSV is used as the reference. Written with the R data.table package, it takes two cores a bit less than one second to produce the file containing the 2013 flights. The file size is 6.6 MB.

library(dplyr)
library(data.table)
# We drop the column with timestamps for reasons explained below.
flights <- nycflights13::flights %>% select(-time_hour)
flightsDt <- as.data.table(flights)
system.time(data.table::fwrite(flightsDt, 'flights.csv.gz'))
    user  system elapsed 
   1.886   0.027   0.984 

If you want to read or write a large amount of data to CSV using R and you want to do it rather quickly, there are two good options at the moment: data.table and vroom.

data.table and vroom in the current development version support writing gzip-compressed csv’s as well. Roughly, zip-like compression algorithms work by building a mapping from longer sequences of bits to shorter ones, in such a way that the mapping plus the remapped input takes up less space than the original. Compression algorithms use clever techniques to create and maintain this mapping, but for reasons of speed and memory use the mapping can’t grow without bounds. We can help the algorithm a bit by first transposing the data.
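
A tiny illustration of the effect (not part of the benchmark itself), using base R’s memCompress(): the same characters compress noticeably better when identical values sit next to each other.

x <- sample(rep(c("2013", "1", "31"), times = 1e4))
grouped  <- charToRaw(paste(sort(x), collapse = ","))   # long runs of repeats
shuffled <- charToRaw(paste(x, collapse = ","))         # same content, interleaved
length(memCompress(grouped, type = "gzip"))
length(memCompress(shuffled, type = "gzip"))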

Transpose for a free lunch

There is a lot of repetition in the first three columns year, month and day.

> flightsDt[,    
  lapply(.SD, function(.) length(unique(.))),    
  .SDcols = c('year', 'month', 'day') 
]
    year month day
 1:    1    12  31

If all these are put close together, it helps the compression algorithm a bit:

TimeAndSize <- function(FUN, fileName) {
  filePath <- file.path(tempdir(), fileName)
  on.exit(unlink(filePath))
  timing <- unclass(system.time(FUN(filePath)))
  fileSize <- file.info(filePath)$size
  c(timing, FileSizeInMB = round(fileSize / 1024L / 1024L, 1))
}
library(vroom)
dt_csv_fun <- function(fileName) fwrite(flightsDt, fileName)
dt_gz_fun <- dt_csv_fun
vroom_csv_fun <- function(.) vroom_write(flights, .)
vroom_gz_fun <- function(.) vroom_write(flights, pipe(paste('gzip >', .)))
vroom_mgz_fun <- function(.) vroom_write(flights, pipe(paste('pigz >', .)))
vroom_zstd_fun <- function(.) vroom_write(flights, pipe(paste('zstd >', .)))
dt_csv_fun_transposed <- function(.) fwrite(transpose(flightsDt), .)
dt_gz_fun_transposed <- function(.) fwrite(transpose(flightsDt), .)
experiments <- list(
  dt_csv_fun = 'flights.csv', dt_gz_fun = 'flights.csv.gz',
  vroom_csv_fun = 'flights.csv', vroom_gz_fun = 'flights.csv.gz',
  vroom_mgz_fun = 'flights.csv.gz', vroom_zstd_fun = 'flights.csv.zstd',
  dt_csv_fun_transposed = 'flights.tcsv',
  dt_gz_fun_transposed = 'flights.tcsv.gz'
)
PerformExperiments <- function(experiments) {
  t(sapply(names(experiments), function(.) {
    TimeAndSize(match.fun(.), experiments[[.]])
  }))
}
PerformExperiments(experiments)
                  user.self sys.self elapsed user.child sys.child FileSizeInMB
dt_csv_fun            0.403    0.012   0.215      0.000     0.000         22.8
dt_gz_fun             1.891    0.000   0.961      0.000     0.000          7.5
vroom_csv_fun         1.327    0.019   0.360      0.000     0.000         22.9
vroom_gz_fun          1.359    0.030   1.517      1.474     0.024          7.5
vroom_mgz_fun         1.341    0.046   0.943      1.560     0.039          7.5
vroom_zstd_fun        1.364    0.037   0.600      0.181     0.030          6.6
dt_csv_fun_transposed 2.318    0.047   1.964      0.000     0.000         22.8
dt_gz_fun_transposed  3.004    0.003   1.893      0.000     0.000          5.5

Comparing the results of data.table and vroom, we see that timings and file sizes are almost equal: both are equally good at writing CSV files. However, if the table is first transposed and then written, the total process takes a bit longer but the file size is reduced by around 25%!

Warts

The column time_hour needs to be dropped because data.table's transpose() unpacks this timestamp column, which means more data gets written, and transposing it back doesn't reproduce the original values in any case. Transposing is also slow, even with data.table, and handling very wide tables can be problematic for some software packages.

Conclusion

Compressed csv’s are a portable data format and require significantly less space than their uncompressed counterparts. The size of compressed csv’s can be reduced further by transposing the data before compressing. Whether this extra compression is worth it depends on the use case.