Those tests came into existence when I introduced the p2 query
    evaluator. Their main purpose was as a proof of concept: to show
    that using an evaluator could bring better performance than most
    hand-coded queries had in the past. In that respect they have
    played their role, but I agree with John that they might still
    come in handy for reference. They were never intended to be proper
    performance tests. 
     
    Regards, 
    Thomas Hallgren 
     
    On 2011-09-01 16:03, John Arthorne wrote:
    Some of these tests are just comparing two different
      ways of doing things against each other (the
      testAVersusBPerformance tests). Those wouldn't make sense as
      proper performance tests. They could probably be removed,
      although they might sometimes come in handy for reference. Some
      of them look like micro-benchmarks that wouldn't be helpful as
      proper performance tests, but some could probably be turned into
      proper performance tests with baselines, etc. In particular,
      benchmarks of slicer, resolver, and reconciler performance would
      be really useful to guard against regressions. 
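
      As a sketch of what such a "proper performance test with a
      baseline" might look like: time the workload over several
      iterations and fail when it regresses past a recorded baseline,
      rather than comparing two implementations against each other.
      All names and numbers below are hypothetical, and the workload is
      a stand-in; an actual p2 test would exercise the slicer, resolver,
      or reconciler, likely via the Eclipse test.performance framework
      rather than a hand-rolled harness like this.

```java
// Minimal, self-contained sketch of a baseline-driven performance test.
// All names and thresholds here are hypothetical illustrations.
public class PerformanceBaselineSketch {

    // Hypothetical baseline, e.g. recorded from a previous release's run.
    static final long BASELINE_NANOS = 50_000_000L; // 50 ms
    static final double TOLERANCE = 1.5;            // 50% slack for noise

    // Best-of-N elapsed time; taking the minimum reduces the influence
    // of GC pauses and JIT warm-up on the measurement.
    static long measure(Runnable workload, int iterations) {
        long best = Long.MAX_VALUE;
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            workload.run();
            best = Math.min(best, System.nanoTime() - start);
        }
        return best;
    }

    public static void main(String[] args) {
        // Stand-in workload; a real test would run the slicer/resolver here.
        long elapsed = measure(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += i;
            }
            if (sum < 0) throw new IllegalStateException(); // keep loop live
        }, 5);
        // Fail the test on regression past the baseline, instead of
        // merely reporting which of two implementations was faster.
        if (elapsed > BASELINE_NANOS * TOLERANCE) {
            throw new AssertionError("Regression: " + elapsed + " ns over baseline");
        }
        System.out.println("within baseline");
    }
}
```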
       
      John 
       
      On Thu, Sep 1, 2011 at 8:38 AM, Thomas Watson <tjwatson@xxxxxxxxxx> wrote:
            While monitoring a recent p2 test failure in the Indigo
              SR1 (3.7.1) build, I noticed a failing test with
              "Performance" in its name. I found the following tests
              that also have "Performance" in their names: 
               
              testParserPerformanc 
              testMatchQueryVersusExpressionPerformance 
              testMatchQueryVersusIndexedExpressionPerformance 
              testMatchQueryVersusIndexedExpressionPerformance2 
              testMatchQueryVersusMatchIteratorPerformance 
              testCapabilityQueryPerformance 
              testIUPropertyQueryPerformance 
              testSlicerPerformance 
              testPermissiveSlicerPerformance 
               
              So that made me wonder: are these really performance
              tests? Should they be run in the performance test bucket
              so that their stats can be tracked against previous
              releases, letting us see whether we have improved or
              regressed their performance? 
               
               
              Tom 
          _______________________________________________ 
          p2-dev mailing list 
          p2-dev@xxxxxxxxxxx 
          https://dev.eclipse.org/mailman/listinfo/p2-dev 
           
         
       
       
      
 