|Re: [cross-project-issues-dev] Update site time outs|
I just did some debugging on this with Wireshark, and it appears that just about all the projects that I regularly use are contributing to this.
Generalizing hideously, it would appear that:
'New' projects such as Papyrus have not yet learned to configure P2 repositories correctly: no p2.index, and 'old' contributions deleted but not removed from the repository contents.
'Old' projects such as OCL or Xtext have restructured at least once and have failed to ensure that stale entries for the previous structures are cleaned out. Thus OCL contributes 10 404s; EMF, Xtext, ... contribute too many to count.
Each 404 seems to add about one second to the 'getting contents' time.
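For reference, the missing p2.index is just a small properties file at the repository root that tells P2 exactly which metadata files to fetch, so it never has to probe (and 404) for the alternatives. For a simple repository it looks roughly like this; the trailing `!` means "stop looking":

```properties
version = 1
metadata.repository.factory.order = content.xml,!
artifact.repository.factory.order = artifacts.xml,!
```

A composite repository would list compositeContent.xml and compositeArtifacts.xml instead.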
I'll endeavor to do my bit by cleaning up OCL.
It looks like we need some kind of 'SimRel report' that gathers the transitive update sites and checks for badly structured or stale content.
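As a rough sketch of what such a report might do (hypothetical names, standard library only): walk a compositeContent.xml, collect the child repository locations, and probe each expected metadata file, flagging anything that 404s.

```python
# Sketch of a 'SimRel report' pass: extract child repository locations from a
# composite repository's compositeContent.xml and probe URLs for stale entries.
# Function names and structure are illustrative, not an existing tool.
import urllib.request
import urllib.error
import xml.etree.ElementTree as ET


def child_locations(composite_xml: str) -> list[str]:
    """Return the child repository locations listed in a compositeContent.xml body."""
    root = ET.fromstring(composite_xml)
    return [child.attrib["location"] for child in root.iter("child")]


def probe(url: str, timeout: float = 10.0) -> int:
    """Return the HTTP status for url via a HEAD request (e.g. 404 for a stale entry).

    Note: some servers reject HEAD; a fallback GET may be needed in practice.
    """
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code
```

A full report would recurse through composites transitively and also check for a p2.index at each level, accumulating the 404s per contributing project.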
The policy of making everything equally available also seems suspect. Is it really helpful to go through the composite repos to discover 20 releases of some component? Shouldn't P2, or its index scanning, look only at the top-level/most recent aggregates, leaving plumbing the depths till actually interesting?
On 04/03/2015 15:33, LETAVERNIER Camille wrote: