Hi,
 
(1)
 
Verdict: no, it does not :(
 
(2) 
 
I have fiddled around a bit with measuring the memory consumption of Rete 
on the queries of the CPS example.
Note that only the queries in the cps.xform.m2m namespace are part of the 
benchmark proper.
The memory consumption is measured separately for each query and both the 
absolute and relative memory footprints are recorded (in the first two columns). 
“Relative” is measured against a baseline formed by all dependencies of a 
pattern, so that the “local” costs stand out (see the 
RelativeQueryPerformanceTest class).
All memory measurements are 
approximate, so some noise is to be expected. As a noise-reducing effort, I 
took the median of 3 (in some cases 5) repeated runs (see separate columns); and 
in one case (triggerPair), I had to repeat the measurement manually due to freak 
outlier results. 
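A minimal sketch of the measurement scheme described above (the function names are hypothetical; the actual logic lives in the RelativeQueryPerformanceTest class):

```python
import statistics

def relative_footprint(total_with_query, baseline_deps_only):
    """'Local' cost of a query: its total memory footprint minus the
    baseline formed by all of its dependency patterns."""
    return total_with_query - baseline_deps_only

def denoised(measurements):
    """Reduce noise in approximate memory measurements by taking the
    median of repeated runs (3, or 5 in noisier cases)."""
    return statistics.median(measurements)

# e.g. three repeated runs of the same footprint measurement, in bytes:
runs = [10_240, 9_984, 10_112]
print(denoised(runs))                      # 10112
print(relative_footprint(15_000, 10_000))  # 5000
```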
Tab #1 is the most important to look at; it evaluates a list of queries 
(ordered by relative query footprint, descending) on the transformed CPS model 
generated with size factor 32, using a more or less current snapshot of Viatra 
Query from early January. 
It has been compared to (a) an older version from 
late November, and (b) the same version running on the model generated with 
size factor 1.
The first comparison, requested by István Ráth, is there essentially to 
determine whether the improved functional dependency analysis has had any 
impact; see column “ΔRMEM”.  
For most queries, the patch appears to make no 
difference; among the top 20 queries by footprint (there is too much noise 
further down), there is one significant improvement 
(triggerPairWithoutReachability) and one significant regression 
(multipleTransitionsWithSameAction) between the two versions.
I have verified using the Rete visualizer that the former is indeed due to 
the better query plan allowed by functional dependency information.
The latter seems to be a random side effect (nothing really to do with 
functional dependencies); the query planner essentially makes a blind decision 
when evaluating this query, which seems to have gone one way with the earlier 
version and another in the newer one. 
 
The second comparison tries to find out whether queries on models that are 
32 times bigger really take up 32 times more space. 
Actually, the base-32 
logarithm of the ratio of memory footprints is computed; this should be around 
1.0 for linear scalability, around 2.0 for quadratic growth, around 0.0 for the 
constant parts that are not grown by the generator, etc. See column 
“scaling”.
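For reference, the scaling exponent described above can be computed as follows (a minimal sketch; the function name and example figures are made up for illustration):

```python
import math

def scaling_exponent(mem_large, mem_small, size_ratio=32):
    """Estimate the polynomial scaling exponent from two footprints:
    log_{size_ratio}(mem_large / mem_small).
    ~1.0 for linear growth, ~2.0 for quadratic, ~0.0 for constant parts."""
    return math.log(mem_large / mem_small, size_ratio)

# A query whose footprint grows 32x when the model grows 32x scales linearly:
print(round(scaling_exponent(32.0, 1.0), 2))    # 1.0
# A 1024x (= 32^2) blow-up would indicate quadratic growth:
print(round(scaling_exponent(1024.0, 1.0), 2))  # 2.0
```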
It is quite apparent that for a model this large, the costliest queries are 
exactly those where this scaling factor is high.
Cheers,
Gábor