
Re: [polarsys-iwg] Maturity Assessment review and feedback

Hiho,

Some more comments inline. Just my opinion, though.

On 28/08/2014 14:32, FAUDOU raphael wrote:
Usage and downloads are different things, and correlation is hard to establish. For instance, one organization may download each new release (including beta releases) only to evaluate new features, while another organization downloads one release a year and distributes it internally through an enterprise update site. In the end, the first organization might have downloaded a component 20 times a year while the second downloaded it only once a year. From the Topcased experience I have seen that kind of situation many times with industrial companies.
You should be able to calculate this metric from the statistics of the EF infrastructure (Apache logs on the update site, Marketplace metrics, downloads of bundles when the project is already bundled, etc.). Be careful to remove from the log stats any Hudson/Jenkins instances which consume the update site: they don't reflect a real number of users.
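
For illustration only, here is a rough sketch in Python of what such a filter could look like. It assumes the Apache combined log format and a couple of typical CI user-agent strings; none of these names reflect the actual EF infrastructure.

import re
from collections import Counter

# User agents of CI tools that consume the update site; the patterns are guesses.
CI_AGENTS = re.compile(r"jenkins|hudson|apache-httpclient", re.IGNORECASE)

# Apache "combined" format: host ident user [time] "request" status size "referer" "agent"
LOG_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|HEAD) (\S+) [^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"')

def count_downloads(log_path, artifact=".jar"):
    """Count successful artifact downloads per client IP, excluding CI agents."""
    per_ip = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            m = LOG_RE.match(line)
            if not m:
                continue
            ip, path, status, agent = m.groups()
            if status != "200" or artifact not in path or CI_AGENTS.search(agent):
                continue
            per_ip[ip] += 1
    return per_ip

# The number of distinct non-CI client IPs gives a (very rough) lower bound on
# real users; internal update sites will still hide most of them.
# downloads = count_downloads("updatesite_access.log")
# print(len(downloads), "distinct non-CI clients")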
Yes, that is the first point to take care of: the real number of users. Because of internal update sites, I do not see how we could get public figures about the number of users… except by asking companies explicitly.

It would also be interesting to measure the diversity of the users. For example, if a project is used only by academia or by a single large company, can we say it is mature?
Woh… that is a BIG question. Here again the Topcased experience showed that a company's size is not a representative factor of maturity.
What would be useful to know is whether a component/tool is used in "operations" or only in the "research" industrial phase. For instance, at Airbus, Topcased is used both in operations (A380, A400M and A350 programs) and in Research and Technology (upcoming programs), while at CNES Topcased was only used in research programs (from what I know).

The second really important point to measure is the number of concurrent users of a given component/tool. If there is only one user on a given project, he/she can probably adapt to the tool and find workarounds for major bugs, so in the end the usage is considered OK. When there are several concurrent end users, you see bugs and issues far quicker and complaints come up quicker. So a project where 5 concurrent users use a tool with success has greater value (in my opinion) than 5 usages by a single person.
That's really hard to measure.

I'm quite concerned by metrics which may not be easily or consistently retrieved. These pose significant threats to the measurement theory [1] and to the consistency of the quality model, and I would prefer not to measure something rather than measure it wrongly or unreliably. This is not to say that the above-mentioned metrics should not be used, but we have to make sure that they indeed measure what we intend to measure, and improve their reliability.

[1] https://polarsys.org/wiki/EclipseQualityModel#Requirements_for_building_a_quality_model

My idea is to reuse the existing Eclipse Marketplace to host this feedback. We can reuse ideas from other marketplaces like the Play Store or the App Store, where users can write a review but also give a rating.
Well, here I'm not sure I agree. What you suggest is OK for Eclipse technologies, but I see PolarSys as driven by industry and not by end users. For PolarSys I would expect industrial companies to give their feedback, but not all end users, as that can turn into a jungle.
On an industrial project, there are quality rules for specification, design, coding, testing… and the whole team must comply with those rules. If we interviewed each member of the team we might get very different feedback and suggestions about process and practices. That is why it is important to gather a limited number of voices and, if possible, "quality"-representative industrial voices.
I don't think industry should be the only voice we listen to; this would be a clear violation of the Eclipse quality requirements regarding the three communities.
Listening to a larger audience IMHO really leads to better feedback and advice. I would tend to mix both.

But perhaps I'm wrong in my vision of what PolarSys should be. Just let me know…
Well, a great point of our architecture is that we can easily customise the JSON files describing the model, concepts and metrics. So it is really simple to have a PolarSys quality model and a more generic Eclipse quality model, with different quality requirements and measures.
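
As a purely hypothetical illustration (the field names below are invented and do not reflect the actual schema of the definition files), deriving a PolarSys-specific variant from a generic model could be as simple as overriding a few weights:

import json

# Generic model fragment; attribute names and weights are invented for illustration.
generic_qm = {
    "name": "Eclipse Quality Model",
    "attributes": [
        {"mnemo": "QM_ACTIVITY", "weight": 1},
        {"mnemo": "QM_DIVERSITY", "weight": 1},
    ],
}

def customise(base, name, weights):
    """Return a copy of a base quality model with overridden attribute weights."""
    custom = json.loads(json.dumps(base))  # cheap deep copy
    custom["name"] = name
    for attr in custom["attributes"]:
        attr["weight"] = weights.get(attr["mnemo"], attr["weight"])
    return custom

# A PolarSys variant could, for instance, weight user diversity more heavily.
polarsys_qm = customise(generic_qm, "PolarSys Quality Model", {"QM_DIVERSITY": 2})
print(json.dumps(polarsys_qm, indent=2))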

I still believe that the PolarSys QM should also rely on the other communities (especially regarding feedback) and should not be too isolated, although some specific customisations are definitely needed on quality attributes, concepts and metrics.


I'll let some time pass and then update the quality model with the propositions that gathered positive feedback. I will also try to summarise the metrics needed for the computations and check their availability, to help discuss this specific subject.


Have a wonderful day,



--
boris


Best
raphaël



Etienne JULIOT
Vice President, Obeo
On 22/08/2014 16:57, Boris Baldassari wrote:
Hiho dear colleagues,

A lot of work has been done recently around the maturity assessment initiative, and we thought it would be good to let you know about it and gather some feedback.

* The PolarSys quality model has been improved and formalised. It is thoroughly presented in the PolarSys wiki [1a], with the metrics [1b] and measurement concepts [1c] used. The architecture of the prototype [1d] has also been updated, following discussions with Gaël Blondelle and Jesus Gonzalez-Barahona from Bitergia.

* A nice visualisation of the quality model has been developed [2] using d3js, which summarises the most important ideas and concepts. The description of metrics and measurement concepts still has to be enhanced, but the quality model itself is almost complete. Please feel free to comment and contribute.

* A GitHub repo has been created [3], holding all the definition files for the quality model itself, the metrics and the measurement concepts. It also includes a set of scripts used to check and manipulate the definition files, and to visualise some specific parts of the system.

* We are setting up the necessary information and framework for the rule-checking tools: PMD and FindBugs for now, others may follow. Rules are classified according to the quality attributes they impact, which is of great importance to provide sound advice regarding the good and bad practices observed in the project.
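
To give an idea, here is a hypothetical sketch of such a classification in Python. The PMD/FindBugs rule names are real identifiers, but the attribute names and the mapping itself are invented for illustration:

# The mapping below is invented for illustration; only the rule names are
# actual PMD and FindBugs identifiers.
RULE_TO_ATTRIBUTE = {
    # PMD rules
    "EmptyCatchBlock": "Reliability",
    "CyclomaticComplexity": "Analysability",
    "UnusedLocalVariable": "Changeability",
    # FindBugs bug patterns
    "NP_NULL_ON_SOME_PATH": "Reliability",
    "DM_GC": "Performance",
}

def summarise_violations(violations):
    """Aggregate raw rule violation counts into counts per quality attribute."""
    per_attribute = {}
    for rule, count in violations.items():
        attribute = RULE_TO_ATTRIBUTE.get(rule, "Unclassified")
        per_attribute[attribute] = per_attribute.get(attribute, 0) + count
    return per_attribute

# summarise_violations({"EmptyCatchBlock": 12, "DM_GC": 3})
#   -> {'Reliability': 12, 'Performance': 3}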


Help us help you! If you would like to participate and see what this ongoing work can bring to your project, please feel free to contact me. This is also an opportunity to better understand how projects work and how we can do better together, realistically.

Sincerely yours,


--
Boris



[1a] https://polarsys.org/wiki/EclipseQualityModel
[1b] https://polarsys.org/wiki/EclipseMetrics
[1c] https://polarsys.org/wiki/EclipseMeasurementConcepts
[1d] https://polarsys.org/wiki/MaturityAssessmentToolsArchitecture
[2] http://borisbaldassari.github.io/PolarsysMaturity/qm/polarsys_qm_full.html
[3] https://github.com/borisbaldassari/PolarsysMaturity





_______________________________________________
polarsys-iwg mailing list
polarsys-iwg@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/polarsys-iwg

