Re: [cross-project-issues-dev] tycho-gmp.gmf.tooling builds consumes enormous amount of memory
On 2/23/12 10:29 AM, Denis Roy wrote:
The problem is, I can't say conclusively whether the memory consumption I saw is transient or permanent. That impression is based on a single memory dump. I need to analyze more dumps; I have four so far.
I just looked at the job artifacts of tycho-gmp.gmf.tooling. I see a large file (145 MB):
hudsonbuild@hudson:/opt/public/jobs/tycho-gmp.gmf.tooling/builds/135> ls -la maven*.xml
-rw-r--r-- 1 hudsonbuild callisto-dev 145185863 2012-02-21 13:49 maven-build-23fc8cf8-9fe3-4a66-8e73-2612c606debf.xml
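A quick way to check whether other jobs have similarly oversized build records would be something like the following sketch (hypothetical; the jobs root path and the 50 MB threshold are assumptions based on the listing above):

```shell
# Sketch: list build-record XML files over 50 MB under the Hudson jobs root.
# JOBS_ROOT is an assumed path taken from the listing above; adjust as needed.
JOBS_ROOT=${JOBS_ROOT:-/opt/public/jobs}
find "$JOBS_ROOT" -path '*/builds/*' -name 'maven-build-*.xml' -size +50M \
    -exec ls -lh {} \; 2>/dev/null
```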
I suspect it could be because the maven3 job configuration is set to use a "private maven repository". I'll uncheck that, run a build, and see if that helps.
If it's the jenkins-maven3 plugin that is the culprit, then it needs to go too. ASAP.
It is not the Jenkins maven3 plugin. It is our own, built by Sonatype. We have this set up at our installation at Oracle and we have no problems there. I don't think it is a maven 3 plugin problem either.
Understood. Bear with me; I'm spending my entire week exclusively on analyzing this performance issue. I need some more time to study it.
As per my findings, Hudson (and, for that matter, Jenkins) has a fundamental architectural problem: it holds all builds of all jobs in memory. This is fine for a small installation, but not for an enterprise-level installation like Eclipse's. Hudson quickly became popular and is now used in enterprise environments, but it never evolved into an enterprise-level tool. Now that Hudson is a top-level Eclipse project, we at the Eclipse Foundation need to take pride in evolving Hudson into an enterprise tool. As lead of the Hudson project, I'm taking every step to do that, but I need time to work on it.
The correct solution is to fix Hudson and make it scale well in an enterprise environment. Unfortunately, that cannot be done overnight. It may take weeks, even months, because Hudson is a huge tool. For the past several months, my whole time has been spent cleaning up years of accumulated crud and making the code base IP-clean so that it can be released from Eclipse. Once that is done, I can attack the architectural problem.
The next best thing we can do for the time being is to keep the footprint of each build smaller, so that Hudson does not run out of memory. I'm working to identify potential areas of improvement among the builds.
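One way to find those areas is to rank jobs by the total on-disk size of their build records, since what Hudson loads into memory roughly tracks what is stored on disk. A hypothetical sketch (the jobs root path is an assumption, and this only counts the maven-build XML records seen above):

```shell
# Sketch: total size of maven-build-*.xml records per job, largest first.
# JOBS_ROOT is an assumed path; point it at the installation's jobs directory.
JOBS_ROOT=${JOBS_ROOT:-/opt/public/jobs}
for job in "$JOBS_ROOT"/*/; do
    # Sum the byte sizes of all maven-build records under this job.
    total=$(find "$job" -name 'maven-build-*.xml' -printf '%s\n' 2>/dev/null |
            awk '{s += $1} END {print s + 0}')
    printf '%14d  %s\n' "$total" "$job"
done | sort -rn
```

Jobs at the top of that list would be the first candidates for trimming old build records or reducing what each build archives.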
No problem. At Oracle we have a larger installation of Hudson 2.1.2. I talked to them and they don't see any issues. I'm comparing the configurations of the Eclipse and Oracle installations to see what the actual differences are.