- Published on Friday, 01 September 2006 07:23
- Written by Douglas Eadline
There is an interesting discussion about parallel languages over at the IEEE Technical Committee on Scalable Computing. The discussion was initiated by Greg Pfister of IBM (and author of In Search of Clusters). The excellent question he posed is below; your thoughts are welcome here as well.
The question that leads to this discussion follows:
With just an hour or so of web surfing, I amassed a list of 80-100 different parallel language efforts, and those are mostly current, active efforts. (I wondered for a while why I wasn't getting many web references earlier than about 1993. Duh.) There are probably at least another 100 pre-web.
So why, as a first approximation, are none used? Sure, some are used somewhat, in some cases. But go to IPDPS or the like, and all you hear about is (a) MPI - mostly; (b) OpenMP - much less so. Not even much on auto-parallelization, recently.
My tentative theory: It boils down to $$$$, via portability and longevity.
A good compiler, parallel language or not, is expensive to develop. The customers aren't satisfied with a mediocre one.
Similarly, important application codes are expensive to develop, and are expected to last a long time, well past the hardware fad or acquisition of the moment. (Come to think of it, compilers themselves try to last a long time, too.)
So, nobody puts the application code investment into anything that's not extremely likely to be portable over machines and over time. That boils down to a very standard language (Fortran, C) with a subroutine package (MPI, OpenMP). New languages may be nice, but ensuring they're on many machines over time is at least not simple and at worst very expensive.
There are undoubtedly other reasons, like education and familiarity. But I think they pale compared with the economics of portability.
Any comments on this?