bots, bad data, builds fetching it, etc.
The same concerns could be raised about update hits, so there's no need to build another indicator if it will face the same challenges.
To be clear: the increase in usage slowed down.
"The increase in usage slowed down"? I don't really get it. As I read it, it means you agree that usage of the Eclipse IDE is increasing. So we've been in agreement all along?
Which is why the ever-increasing download numbers are very hard to trust.
Download numbers have been incorrect in the past, and the problem was detected, fixed, and verified relatively quickly, so bugs don't remain unnoticed. They're trustworthy enough.
Download numbers have never been a good indicator of usage. Numbers reflecting actual usage, such as update hits from running instances of your product, are good numbers for that.
As mentioned above, I don't think those numbers would be any more or less trustworthy.
Anyway, I'm just saying that if you want to use numbers to prove a point, don't use download numbers as the primary one, because as a data source they are too easy to dispute.
I think the download numbers prove the Snyk report is flawed. I don't take download numbers as the only source of truth for measuring usage, but the huge misalignment between the download-number trend and the Snyk report trend drove me to verify things, and ultimately to challenge the Snyk survey. That misalignment is fully explained by the non-representative nature of the panel (Twitter, conferences, Europe).
And as Lars says, the best way to get users to use Eclipse is to make Eclipse better.
Guess what many of us have been doing over the last few years? Maybe the Eclipse IDE has gotten better, and gets more downloads than before thanks to that?