Runtime STEM (?) [message #742851]
Thu, 20 October 2011 18:44
Stefan Edlund
Registered: July 2009
We've been bouncing around an idea internally that we think is worth a discussion in the general STEM community.
From the very beginning, there has been no separation between the design-time and runtime components in STEM. The EMF objects used to compose scenarios (models, graphs, nodes, edges, labels, etc.) are the same objects used at runtime when running simulations.
These objects are generally large and contain lots of additional data not needed when running simulations. In addition to the overhead of EMF itself, there are Dublin Core objects, lengthy URI identifiers, and so on.
The idea is to separate the design time from the runtime in STEM and add a "compilation" component that translates the (large) design-time EMF data structures into compact, efficient data structures suitable for high-performance computing. Generalized further, users could even have the option of compiling STEM scenarios to any runtime target, e.g. for efficient execution on GPU-accelerated systems, supercomputers (Blue Gene), or even MATLAB or R.
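To make the idea concrete, here is a minimal Java sketch of what such a compilation pass might look like. The class and field names (DesignNode, CompiledGraph, etc.) are hypothetical stand-ins, not actual STEM APIs: a metadata-heavy design-time object is "compiled" into flat primitive arrays that a simulation loop can iterate over cheaply.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: DesignNode stands in for the heavyweight EMF
// objects STEM uses today; CompiledGraph is a possible compact runtime form.
public class CompileSketch {

    // Design-time object: carries metadata (URI, Dublin Core-style fields)
    // that a running simulation never reads.
    static class DesignNode {
        String uri;    // lengthy identifier
        String title;  // Dublin Core-style metadata
        double susceptible;
        double infectious;
        DesignNode(String uri, String title, double s, double i) {
            this.uri = uri; this.title = title;
            this.susceptible = s; this.infectious = i;
        }
    }

    // Runtime form: structure-of-arrays layout, no metadata, cache-friendly.
    static class CompiledGraph {
        final double[] susceptible;
        final double[] infectious;
        CompiledGraph(double[] s, double[] i) {
            susceptible = s; infectious = i;
        }
        // One toy SI (susceptible-infectious) update step over every node.
        void step(double beta) {
            for (int k = 0; k < susceptible.length; k++) {
                double newInfections = beta * susceptible[k] * infectious[k];
                susceptible[k] -= newInfections;
                infectious[k] += newInfections;
            }
        }
    }

    // The "compilation" pass: strip metadata, pack state into arrays.
    static CompiledGraph compile(List<DesignNode> nodes) {
        double[] s = new double[nodes.size()];
        double[] i = new double[nodes.size()];
        for (int k = 0; k < nodes.size(); k++) {
            s[k] = nodes.get(k).susceptible;
            i[k] = nodes.get(k).infectious;
        }
        return new CompiledGraph(s, i);
    }

    public static void main(String[] args) {
        List<DesignNode> design = new ArrayList<>();
        design.add(new DesignNode("stem://node/US-NY", "New York", 1000.0, 1.0));
        design.add(new DesignNode("stem://node/US-NJ", "New Jersey", 800.0, 0.0));
        CompiledGraph g = compile(design);
        g.step(0.001);
        System.out.println(g.susceptible[0] + " " + g.infectious[0]);
    }
}
```

The same compile step could in principle emit other targets instead (CUDA kernels, MATLAB scripts, etc.); the arrays here just illustrate the simplest case of discarding design-time overhead.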
The other advantage is that users who only want to run scenarios (and not design new ones) would only need to install a small STEM runtime engine.
Potentially, this could make STEM a lot more powerful than it is today. Granted, there are lots of technical details that would need to be worked out.
[Updated on: Thu, 20 October 2011 18:46]