Here are the average and max memory / CPU core numbers:
avg memory limit   max memory    avg CPU limit   max CPU
================   ==========    =============   =======
61.58 Gi           378.00 Gi     12.1 vCPU       74.7 vCPU
There are some CPU/memory limits set in the Jenkinsfile
(https://github.com/eclipse-ee4j/jakartaee-tck/blob/master/Jenkinsfile#L147).
Each memory limit specifies the container/VM memory size (since we
didn't specify an initial memory request setting), so the calculation
is roughly:

memory usage = 10 Gi per VM * number of test groups
CPU cores    = 2 * number of test groups
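
As a rough sanity check, here is a small Python sketch of that math
(the 10 Gi / 2 vCPU per-group figures come from the Jenkinsfile limits
above; the constant names and the ~37-group estimate are my own
assumptions, inferred from the observed maxima):

# Rough model of the per-test-group limits described above.
GI_PER_GROUP = 10    # assumed memory limit per test-group container/VM (Gi)
CPU_PER_GROUP = 2    # assumed vCPU limit per test-group container/VM

def expected_usage(num_test_groups: int) -> tuple[int, int]:
    """Return (memory in Gi, vCPUs) for a given number of concurrent groups."""
    return num_test_groups * GI_PER_GROUP, num_test_groups * CPU_PER_GROUP

# The observed maxima (378 Gi, 74.7 vCPU) are consistent with roughly
# 37 concurrent test groups: 37 * 10 = 370 Gi and 37 * 2 = 74 vCPUs.
mem_gi, vcpus = expected_usage(37)
print(f"37 groups -> {mem_gi} Gi, {vcpus} vCPUs")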
The data capture does give us a high-level view of what the
container-level memory/CPU core usage has been. Quoting from a previous
TCK mailing-list conversation (from David Blevins, subject "Resource
Pack Allocations & Maximizing Use"):
"
Over all of EE4J we have 105 resource packs paid for that give us a
total of 210 cpu cores and 840 GB RAM. These resource packs are
dedicated, not elastic. The actual allocation of 105 resource packs is
by project. The biggest allocation is 50 resource packs to
ee4j.jakartaee-tck (this project), the second biggest is 15 resource
packs to ee4j.glassfish.
The most critical takeaway from the above is we have 50 resource packs
dedicated to this project giving us a total of 100 cores and 400GB ram
at our disposal 24x7. These 50 are bought and paid for -- we do not
save money if we don't use them.
"
So the Platform TCK is budgeted to use 100 cores and 400 GB of RAM;
however, we haven't used more than 75 CPU cores and 378 Gi of memory
(per the max memory/CPU numbers pasted above).
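
For concreteness, a quick headroom calculation against those numbers (a
sketch only; note the observed maxima are point-in-time peaks, and the
original figures mix Gi and GB, which I treat as comparable here):

# Budget (50 resource packs = 100 cores / 400 GB, per the quote above)
# vs. observed peak usage from the table at the top.
BUDGET_CORES, BUDGET_GB = 100, 400
PEAK_CORES, PEAK_GB = 74.7, 378.0   # treating 378 Gi as ~378 GB for this rough check

print(f"CPU headroom: {BUDGET_CORES - PEAK_CORES:.1f} cores "
      f"({PEAK_CORES / BUDGET_CORES:.0%} of budget at peak)")
print(f"Memory headroom: {BUDGET_GB - PEAK_GB:.1f} GB "
      f"({PEAK_GB / BUDGET_GB:.0%} of budget at peak)")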
I think the fundamental question is: can we manage this resource usage,
and hence the cost, based on this data?
In my opinion, there is memory/CPU tuning we could do, if there is time
to experiment before answers are needed on current usage versus what
usage could be.