While working on Project Euler, I came across a problem involving dice probabilities. The forum shows a variety of solutions: some involve entirely too much typing (several people literally iterated through every possible combination of dice faces), while others are more considered.

Reading the comments, it reminded me of something my Russian mathematical modeling professor used to say back in school:

time + computation + thinking = constant

One solution, in C#, took 56 seconds; after some review and evaluation, its author got it down to 126 ms. Another, in Java, took “about a minute.” Several of the solutions, however, even those written in Python without Psyco, took the barest fraction of that time. My solution took 64 ms in Python, and another person’s took less than a sixth of the time mine did on the same hardware.

What made such a radical difference? Thinking about the problem a little differently led to a different algorithm, memoization, precomputation, and things along those lines. There was no magic involved: some people spent more time considering the algorithm and implementation, while others just wrote the easiest code possible and let the computer do the work.
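As a sketch of what memoization buys here (this is my own illustration, not tied to any specific Euler problem or forum solution), counting dice outcomes by subproblem avoids enumerating every roll:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ways(n, total):
    """Ordered ways for n six-sided dice to sum to `total`.

    Memoizing on (n, total) computes each subproblem once instead of
    enumerating all 6**n possible rolls.
    """
    if n == 0:
        return 1 if total == 0 else 0
    return sum(ways(n - 1, total - face) for face in range(1, 7))

# A probability then falls out of a ratio of counts, e.g. the chance
# that two dice sum to 7:
p = ways(2, 7) / 6 ** 2  # 6/36
```

The same idea works for any recurrence whose subproblems repeat; the decorator just trades a little memory for the repeated work.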

In a Numerical Methods class we did projects on numeric solutions to PDEs. My group did an elliptic PDE, and we approached it with three methods: two numerical, and one analytic for comparison. I did the analytic solution, and it took around ten pages of typeset equations to illustrate every step. The solution, however, was robust, accurate, and extremely fast once implemented in software. It was easy to test, easy to vary the edge conditions or the problem domain, and it took only a few seconds to execute even for relatively complex problems. There were a few limitations, but most were not crippling and only affected accuracy in edge cases.

The purely numeric solutions, on the other hand, were much more difficult to work with. The simplest one had a variety of restrictions on the structure of the problem and took over two orders of magnitude longer to run. That said, the numeric solutions were (for the most part) easier to describe and to implement. It seemed that the more complex the numeric solution, and the more thinking and work it took before implementation, the faster it ran once finally implemented.
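To give a feel for the kind of "simplest numeric solution" described, here is a generic sketch (the class problem itself isn't specified in the post): Jacobi relaxation for Laplace's equation, the textbook elliptic PDE, on a square grid.

```python
def jacobi_laplace(u, tol=1e-5, max_iter=10_000):
    """Repeatedly replace each interior point with the average of its
    four neighbors until the grid stops changing; boundary values are
    held fixed (Dirichlet conditions)."""
    n = len(u)
    for _ in range(max_iter):
        new = [row[:] for row in u]
        delta = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                    + u[i][j-1] + u[i][j+1])
                delta = max(delta, abs(new[i][j] - u[i][j]))
        u = new
        if delta < tol:
            break
    return u

# Example: top edge held at 1.0, the other boundaries at 0.0.
grid = [[0.0] * 20 for _ in range(20)]
grid[0] = [1.0] * 20
solution = jacobi_laplace(grid)
```

It is short and easy to state, but convergence is slow and the grid structure constrains the problems it can handle, which matches the trade-off above.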

At a previous job, a coworker implemented a solution whose algorithm ran in $\Theta(n^3)$ time. Given the nature of the application this was entirely too slow, so we put it up on the whiteboard to examine. With some thinking, we got it down to $\Theta(n)$, which for a couple of thousand items is hugely significant.
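The post doesn't say what that algorithm computed, but as a generic illustration of the same kind of jump, the classic maximum-subarray problem has an obvious $\Theta(n^3)$ solution and a $\Theta(n)$ one (Kadane's algorithm):

```python
def max_subarray_cubic(xs):
    """Theta(n^3): try every (start, end) pair and re-sum the slice."""
    best = xs[0]
    for i in range(len(xs)):
        for j in range(i, len(xs)):
            best = max(best, sum(xs[i:j + 1]))
    return best

def max_subarray_linear(xs):
    """Theta(n) (Kadane's algorithm): at each element, either extend
    the running subarray or restart it."""
    best = current = xs[0]
    for x in xs[1:]:
        current = max(x, current + x)
        best = max(best, current)
    return best
```

For a couple of thousand items the cubic version does billions of operations where the linear one does thousands, which is the scale of difference we saw at the whiteboard.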

This is a fundamental concept in software engineering, because in optimization the first thing to check is the algorithm. Small efficiencies, which are generally relatively easy, rarely help significantly and can hurt long-term maintenance. A change to the algorithm, or applying techniques such as caching, can improve the execution time by several orders of magnitude *and* help later code maintenance.
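As a small example of the caching point: in Python, a single decorator can turn an exponential-time recursion into a linear one without touching any call sites. (Fibonacci here is just a stand-in for an expensive repeated computation.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # without this line, fib(35) takes seconds
def fib(n):
    """Naive recursion recomputes the same subcalls exponentially often;
    with the cache, each n is computed once, so a call is Theta(n)."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Because the cache lives on the function itself, callers and their maintenance burden are unchanged, which is exactly why this kind of change tends to help rather than hurt later maintenance.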