libflame and Elements
I listened to a talk by Robert van de Geijn today at MIT and learned a few handy things about linear-algebraic computation. The FLAME project contains several sub-projects, each of independent interest:
- The Elements project is all about ‘programmability’ of linear-algebraic and other numeric algorithms. It includes a language for expressing numeric algorithms while avoiding the cumbersome, error-prone, multiply nested loops one usually encounters. This language then compiles down to machine code, and the compiler optimizes the code intelligently for different targets: clusters, multi-core CPUs, GPUs, even Texas Instruments’ DSP cards (which purportedly can now compute the SVD of systems with nearly 100,000 variables).
- The Elements project comes with re-implementations of BLAS and LAPACK, and since it can compile for a cluster target, it is marketed as an improvement over ScaLAPACK. Apparently it beats ScaLAPACK not only in speed but also in offering a more human-friendly language for implementing algorithms.
- It seems that the bare-bones FLAME framework redefines the interfaces of BLAS and LAPACK and is actually easy to compile and use. It is thus probably preferable to the latter two.
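To make the ‘programmability’ point concrete, here is a small sketch (my own illustration, not FLAME’s actual API) of Cholesky factorization written the way the FLAME notation encourages: as a sequence of partitioned updates whose names (alpha11, a21, A22) mirror the math, rather than as anonymous index-juggling inside triple loops.

```python
import math

def cholesky_flame_style(A):
    """Right-looking Cholesky expressed as FLAME-style partitioned updates.

    At step k the trailing part of the matrix is viewed as
        [ alpha11  *   ]
        [ a21      A22 ]
    and each update below corresponds to one line of the math.
    (Illustrative sketch only; the names follow FLAME's partitioning
    notation, not any real FLAME/Elements interface.)
    """
    n = len(A)
    A = [row[:] for row in A]                  # work on a copy
    for k in range(n):
        A[k][k] = math.sqrt(A[k][k])           # alpha11 := sqrt(alpha11)
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                 # a21 := a21 / alpha11
        for i in range(k + 1, n):              # A22 := A22 - tril(a21 * a21^T)
            for j in range(k + 1, i + 1):
                A[i][j] -= A[i][k] * A[j][k]
    # return the lower-triangular factor L
    return [[A[i][j] if j <= i else 0.0 for j in range(n)] for i in range(n)]
```

For example, factoring the symmetric positive-definite matrix [[4, 2, 0], [2, 10, 3], [0, 3, 2]] recovers the factor [[2, 0, 0], [1, 3, 0], [0, 1, 1]]. In Python the loops are still visible, of course; the point is that the update steps read off directly from the partitioned derivation, which is what the FLAME notation (and the Elements language) systematizes.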
The team’s current research trajectory is to figure out how to abstract away the expert’s knowledge of how to optimize code for a specific target hardware (isolating it from the algorithm logic), thereby making it possible to plug different optimization drivers into Elements for new hardware or cluster targets.
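The shape of that idea can be sketched as a strategy pattern: the hardware expert’s tuning knowledge lives in a swappable “driver” object, while the algorithm logic never mentions the hardware. All names here (Target, block_size, plan_blocked_sweep) are hypothetical, chosen only to illustrate the separation of concerns; they are not from the FLAME or Elements codebases.

```python
class Target:
    """Hypothetical optimization 'driver': encapsulates the hardware
    expert's knowledge (e.g. a good blocking size) behind one interface."""
    name = "generic"
    block_size = 1

class MulticoreTarget(Target):
    name = "multicore"
    block_size = 128      # assumed cache-friendly block size

class GPUTarget(Target):
    name = "gpu"
    block_size = 1024     # assumed size that keeps the device busy

def plan_blocked_sweep(n, target):
    """Split an n-sized problem into contiguous blocks whose size is
    chosen by the target driver; the algorithm logic stays target-agnostic."""
    b = target.block_size
    return [(i, min(i + b, n)) for i in range(0, n, b)]
```

Supporting a new piece of hardware then means writing one new Target subclass, not rewriting every algorithm, which is roughly the pluggability the talk described.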