2015-11-09

It all started with a straightforward implementation. It read a tree data structure by depth-first search from an SQL database (objects linked by parent-id). This turned out to be too slow because of the many small SQL queries. Therefore it was reworked into a version that read the whole table into a large array and did the depth-first search on that. To save time, already visited elements were removed; this made the array shrink and thus sped up the tree-building. The program then ran fast enough. Later, the code was run on a huge number of records, exhausting the memory. But even with a lot of memory, it became really slow. Hence, the code was reworked again ... and surprise, surprise: eventually something like the original code was reached. But for whatever reason, it now ran fast enough ... even much faster than every other version, and with much less memory.

So, what is to be learned from this story? First: Optimizations are temporary (or at least should be). Second: The simplest version was the best. Third: Understand what is going on; don't just fiddle with the code!

http://marmaro.de/lue/

markus schnalke
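
The two data-access patterns in the story can be made concrete with a minimal sketch. Python and SQLite are assumptions here (the original language and database are not stated), as is the hypothetical table layout nodes(id, parent_id, name):

    import sqlite3

    def build_tree_per_node(conn, root_id):
        # Slow pattern: one SQL query per node, so a tree of N nodes
        # costs roughly N round trips to the database.
        children = conn.execute(
            "SELECT id, name FROM nodes WHERE parent_id = ?",
            (root_id,)).fetchall()
        return {nid: (name, build_tree_per_node(conn, nid))
                for nid, name in children}

    def build_tree_bulk(conn, root_id):
        # Simple pattern: read the whole table once, then link the
        # nodes in memory, grouped by parent-id.
        by_parent = {}
        for nid, pid, name in conn.execute(
                "SELECT id, parent_id, name FROM nodes"):
            by_parent.setdefault(pid, []).append((nid, name))

        def attach(pid):
            return {nid: (name, attach(nid))
                    for nid, name in by_parent.get(pid, [])}

        return attach(root_id)

The bulk variant issues a single query and does all the linking in memory, which is roughly the simple shape the story says the code eventually settled back into.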