Fortunately, a times-per-second benchmark of execution speed can be easily computed using the following snippet:
tps <- function(f, time) {
    gc()                        # collect garbage first so it does not distort the timing
    i <- 0
    start <- proc.time()[3]     # elapsed time in seconds
    repeat {
        i <- i + 1
        f(i)
        stop <- proc.time()[3]
        if (stop - start > time) {
            return(i / (stop - start))
        }
    }
}

This function takes two parameters: a function to be benchmarked (f) and the amount of time to be used for the evaluation (time). It returns an estimate of how many times per second function f can be executed. Note that f is passed the current iteration number, which can be used, for example, to vary the plot title on each call.
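For example, assuming the definition above, a quick sanity check might look like this (the function being timed is just an illustrative placeholder):

# how many times per second can R draw and average 10000 normal deviates?
tps(function(i) mean(rnorm(10000)), 1)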
As a simple application of the tps function, consider comparing the relative speed of standard, lattice, and ggplot2 graphics. The following function compares them by plotting histograms:
library(ggplot2)
library(lattice)

test <- function(n, time) {
    x <- runif(n)
    # lattice and ggplot2 objects must be print()-ed to actually render
    b <- c(tps(function(i) {
               hist(x, 10, main = i)
           }, time),
           tps(function(i) {
               print(histogram(x, nint = 10, main = format(i)))
           }, time),
           tps(function(i) {
               print(qplot(x, binwidth = 0.1, main = i))
           }, time))
    names(b) <- c("hist", "histogram", "qplot")
    return(b)
}
The function takes two arguments: the first is the number of points to sample for the histogram and the second is the time passed to the tps function. On my computer the test gave the following result for a sample of size 10000 and 5 seconds for each function:
> test(10000, 5)
      hist  histogram      qplot
192.614770  14.285714   5.544933
We can see that the standard hist is over 10 times faster than histogram from lattice and about 35 times faster than qplot from ggplot2.
It might be interesting to do a comparison between the output of this manual benchmarking and the use of Rprof/summaryRprof.
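A minimal sketch of such a comparison, assuming an illustrative output file name and repetition count, could look like this:

# profile repeated calls to hist and summarize where the time is spent
x <- runif(10000)
Rprof("hist-profile.out")
for (i in 1:100) hist(x, 10, main = i)
Rprof(NULL)                      # stop profiling
summaryRprof("hist-profile.out") # per-function timing breakdown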