When we optimize or refactor, we want to measure the effect of the changes we make. There are many ways to do this, ranging from timing execution with a stopwatch to using sophisticated profiling tools.
The approach I describe here sits somewhere in the middle: it uses automation to collect the data, but comparing and judging the results remains manual work.
While working my way through the book C++ Design Patterns and Derivatives Pricing, I needed to establish a performance baseline against which to compare the different implementations. A nice abstraction should cost nothing, or as close to nothing as possible.
To do this we must establish a test scenario.
The example used throughout the book is a Monte Carlo simulation. One of its main parameters is the number of iterations, which we will call n. Experimenting a little, we see that accuracy goes up with the number of iterations, and that the average time per iteration levels off as the number of iterations increases.
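To make the scenario concrete, here is a minimal sketch of such a simulation: a European call priced by Monte Carlo under Black-Scholes dynamics. This is my own illustration, not the book's code; the function name and parameters are made up for the example.

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Minimal Monte Carlo pricer for a European call under Black-Scholes
// dynamics. Illustrative only; the book's routine has a similar shape.
double monte_carlo_call(double spot, double strike, double r,
                        double vol, double expiry, unsigned long n)
{
    std::mt19937_64 gen(42);  // fixed seed so runs are repeatable
    std::normal_distribution<double> gauss(0.0, 1.0);

    const double drift = (r - 0.5 * vol * vol) * expiry;
    const double diffusion = vol * std::sqrt(expiry);

    double sum = 0.0;
    for (unsigned long i = 0; i < n; ++i) {
        double terminal = spot * std::exp(drift + diffusion * gauss(gen));
        sum += std::max(terminal - strike, 0.0);  // call payoff
    }
    return std::exp(-r * expiry) * sum / n;       // discounted average
}
```

The cost of a run is dominated by the loop body, so the total time should scale with n, which is exactly what we set out to measure.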
We fix the number of iterations at powers of 2 and make the x-axis logarithmic. We take two different measures: the absolute time, and the time per iteration.
First, the absolute time, with the y-axis on a logarithmic scale. We expect time to grow linearly with the number of iterations, i.e. time is O(n). The graph below shows this nicely.
Then the time per iteration, also with the y-axis on a logarithmic scale. We expect it to approach a constant, i.e. time per iteration is O(1), which we also see in the graph below.
Finally, we graph the price output at the end of each simulation. This gives us an idea of how fast the price converges.
This establishes our performance baseline.