I think what I enjoy the most about functional programming is the peace of mind that comes with it. With functional programming, there’s a lot of stuff you don’t need to think about. You can write functions that are general enough so that they solve a variety of problems. For example, imagine for a second that R does not have the sum() function anymore. If you want to compute the sum of, say, the first 100 integers, you could write a loop that would do that for you:
numbers <- 0

for (i in 1:100){
  numbers <- numbers + i
}

print(numbers)
[1] 5050
The problem with this approach is that you cannot reuse any of the code, even if you put it inside a function. For instance, what if you want to merge 4 datasets together? You would need something like this:
library(dplyr)
Attaching package: 'dplyr'
The following objects are masked from 'package:stats':
filter, lag
The following objects are masked from 'package:base':
intersect, setdiff, setequal, union
data(mtcars)

mtcars1 <- mtcars %>%
  mutate(id = "1")

mtcars2 <- mtcars %>%
  mutate(id = "2")

mtcars3 <- mtcars %>%
  mutate(id = "3")

mtcars4 <- mtcars %>%
  mutate(id = "4")

datasets <- list(mtcars1, mtcars2, mtcars3, mtcars4)

temp <- datasets[[1]]

for(i in 1:3){
  temp <- full_join(temp, datasets[[i+1]])
}
Joining with `by = join_by(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear,
carb, id)`
Joining with `by = join_by(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear,
carb, id)`
Joining with `by = join_by(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear,
carb, id)`
glimpse(temp)
Rows: 128
Columns: 12
$ mpg <dbl> 21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2, 17.8,…
$ cyl <dbl> 6, 6, 4, 6, 8, 6, 8, 4, 4, 6, 6, 8, 8, 8, 8, 8, 8, 4, 4, 4, 4, 8,…
$ disp <dbl> 160.0, 160.0, 108.0, 258.0, 360.0, 225.0, 360.0, 146.7, 140.8, 16…
$ hp <dbl> 110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, 180, 180…
$ drat <dbl> 3.90, 3.90, 3.85, 3.08, 3.15, 2.76, 3.21, 3.69, 3.92, 3.92, 3.92,…
$ wt <dbl> 2.620, 2.875, 2.320, 3.215, 3.440, 3.460, 3.570, 3.190, 3.150, 3.…
$ qsec <dbl> 16.46, 17.02, 18.61, 19.44, 17.02, 20.22, 15.84, 20.00, 22.90, 18…
$ vs <dbl> 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0,…
$ am <dbl> 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0,…
$ gear <dbl> 4, 4, 4, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 4, 4, 4, 3, 3,…
$ carb <dbl> 4, 4, 1, 1, 2, 1, 4, 2, 2, 4, 4, 3, 3, 3, 4, 4, 4, 1, 2, 1, 1, 2,…
$ id <chr> "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", …
Of course, the logic is very similar to before, but you need to think carefully about the structure holding your elements (which can be numbers, datasets, characters, etc.), as well as be careful to index correctly… and depending on the type of objects you are working on, you might need to tweak the code further.
How would a functional programming approach make this easier? Of course, you could use purrr::reduce() to solve these problems. However, since I assumed that sum() does not exist, I will also assume that purrr::reduce() does not exist either and write my own, clumsy implementation. Here’s the code:
my_reduce <- function(a_list, a_func, init = NULL, ...){

  if(is.null(init)){
    init <- `[[`(a_list, 1)
    a_list <- tail(a_list, -1)
  }

  car <- `[[`(a_list, 1)
  cdr <- tail(a_list, -1)
  init <- a_func(init, car, ...)

  if(length(cdr) != 0){
    my_reduce(cdr, a_func, init, ...)
  } else {
    init
  }
}
This may look much more complicated than before, but the idea is quite simple if you know about recursive functions (functions that call themselves). I won’t explain how the function works, because it is not the main point of the article (but if you’re curious, I encourage you to play around with it). The point is that now, I can do the following:
my_reduce(list(1,2,3,4,5), `+`)
[1] 15
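To make the recursion concrete, here is a hand trace of that call, step by step (my own annotation, not part of the original function):

```r
# Trace of my_reduce(list(1, 2, 3, 4, 5), `+`):
# call 1: init <- 1, then combine with 2 -> init = 3,  cdr = list(3, 4, 5)
# call 2: combine 3 with 3               -> init = 6,  cdr = list(4, 5)
# call 3: combine 6 with 4               -> init = 10, cdr = list(5)
# call 4: combine 10 with 5              -> init = 15, cdr = list(), so return init
```

Each recursive call shortens the list by one element and accumulates the result in init, which is why the function eventually terminates and returns the fold of the whole list.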
my_reduce(datasets, full_join) %>% glimpse
Joining with `by = join_by(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear,
carb, id)`
Joining with `by = join_by(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear,
carb, id)`
Joining with `by = join_by(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear,
carb, id)`
Rows: 128
Columns: 12
$ mpg <dbl> 21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2, 17.8,…
$ cyl <dbl> 6, 6, 4, 6, 8, 6, 8, 4, 4, 6, 6, 8, 8, 8, 8, 8, 8, 4, 4, 4, 4, 8,…
$ disp <dbl> 160.0, 160.0, 108.0, 258.0, 360.0, 225.0, 360.0, 146.7, 140.8, 16…
$ hp <dbl> 110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, 180, 180…
$ drat <dbl> 3.90, 3.90, 3.85, 3.08, 3.15, 2.76, 3.21, 3.69, 3.92, 3.92, 3.92,…
$ wt <dbl> 2.620, 2.875, 2.320, 3.215, 3.440, 3.460, 3.570, 3.190, 3.150, 3.…
$ qsec <dbl> 16.46, 17.02, 18.61, 19.44, 17.02, 20.22, 15.84, 20.00, 22.90, 18…
$ vs <dbl> 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0,…
$ am <dbl> 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0,…
$ gear <dbl> 4, 4, 4, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 4, 4, 4, 3, 3,…
$ carb <dbl> 4, 4, 1, 1, 2, 1, 4, 2, 2, 4, 4, 3, 3, 3, 4, 4, 4, 1, 2, 1, 1, 2,…
$ id <chr> "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", …
But since my_reduce() is very general, I can even do this:
my_reduce(list(1, 2, 3, 4, "5"), paste)
[1] "1 2 3 4 5"
Of course, paste() is vectorized, so you could just as well do paste(1, 2, 3, 4, 5), but again, I want to insist on the fact that writing functions, even if they look a bit complicated, can save you a huge amount of time in the long run.
Because I know that my function is quite general, I can be confident that it will work in a lot of different situations; as long as the a_func argument is a binary operator that combines the elements inside a_list, it’s going to work. And I don’t need to think about indexing, about having temporary variables, or about the structure that will hold my results.