You aren't bothering us; many developers are on vacation, as the school year starts soon. I'm sure you feel the crunch too.
I have filled out your survey, and I find it very interesting. Some work needs to be done to better classify the review weights, though.
Here is a simple example: this code has one functional error, no Javadoc, and 10 style errors.
So the chart implies that I spent 10+ times more effort on style than on functionality. In reality, a functional error takes longer to find and longer to report, so I would not be surprised if a different weight were associated with it.
Now how much should a functional or structural error weigh versus a style issue? I don't know. I also suspect that, as student developers finish their internships, fewer stylistic errors appear in reviews. The community as a whole is represented by these charts, and I don't suggest categorizing by person, but I would suggest a year-over-year comparison on longer-lived projects.
June 2018 vs June 2019. (second month of internships)
-- or --
May 2018 vs May 2019 (Near feature freeze)
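The weighting idea above could be sketched as a simple severity-weighted tally. The category weights below are purely hypothetical, made up for illustration; the real ones would have to be calibrated from actual review effort:

```python
# Hypothetical severity weights per review-comment category.
# These numbers are illustrative only, not calibrated values.
WEIGHTS = {"functional": 10.0, "structural": 5.0, "javadoc": 2.0, "style": 1.0}

def weighted_effort(comment_counts):
    """Estimate review effort from per-category comment counts."""
    return sum(WEIGHTS[category] * count
               for category, count in comment_counts.items())

# The example above: 1 functional error, 10 style errors.
raw = {"functional": 1, "style": 10}
print(weighted_effort(raw))  # 20.0
```

With a 10:1 weight, the single functional error accounts for as much effort as all ten style comments combined, whereas a raw count would suggest style dominates the review tenfold.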
Finally, if you could share the results of this work, I would love to see what's up. I personally love the whole meta aspect of development.
From: tracecompass-dev-bounces@xxxxxxxxxxx <tracecompass-dev-bounces@xxxxxxxxxxx> on behalf of Enrico Fregnan <fregnan@xxxxxxxxxx>
Sent: Thursday, August 22, 2019 1:10:44 PM
To: tracecompass developer discussions <tracecompass-dev@xxxxxxxxxxx>
Subject: Re: [tracecompass-dev] Trace compass Gerrit code review report
Thank you for your reply and your help.
Sorry for bothering you, I was not sure how active the mailing list generally is.
I hope you will find the report interesting. This is just a first step, but I would like to keep working on these code review analytics to make them more understandable and actionable.
Thank you very much again for your help.
Thank you for your interest in Trace Compass. I will reply now; please understand that our developers may be in different time zones, so an immediate response is not always possible.
Sorry to bother you again. Unfortunately, I have to start processing the feedback we received soon.
Therefore, it would be great if you could give me your opinion too. This would help me make the report more useful and improve the quality of the information contained in it.
If you like the idea of this kind of code review analytics, please consider helping me. Your opinion is fundamental for understanding what is already fine and which points should be improved (or whether it makes sense to improve them at all).
Here is the link to the report:
Thank you very much again for your help.
I hope you had the chance to look at our report and found it interesting.
Our goal is to create a Code Review Analytics tool to give the developers and project managers a better idea of what is really going on inside the code review process.
This could help further improve the process, optimising the effort that developers spend on it. For instance, reducing the number of documentation changes might allow a reviewer to allocate more time to finding functional defects.
Another possible application might be to help optimise the allocation of developers to a particular review.
Here is a link to the report:
I would like to keep improving our tool by adding more actionable information. This is just the first step, so I am interested in hearing your feedback.
Please consider helping me by answering the survey attached to the report, letting us know whether its information is useful now, or might become useful with the next steps we would like to take in improving the tool.
Thank you very much.
Dear Trace Compass developers,
I am Enrico, a researcher at the University of Zurich, and I am studying how to evaluate the code review process to understand how it could be improved.
I created a tool that analyses the changes that happen during a review on Gerrit, to measure the effects of code review on code and understand which kinds of changes reviewers pay attention to.
For example, are you aware that the majority of the changes that happened during code review in your project in the last year involve documentation issues?
My goal is to give developers insight into what really happens during the code review process, to check whether this matches their expectations or whether the process can be improved (e.g. improving the way in which reviewers are chosen, revising the code review policies, and so on).
You will find more data about your project at the following link:
I would like to better understand what already works and what still needs to be improved. I am planning to keep improving the tool by adding more features, but I would need your help to understand which information should be added, removed, etc.
For this reason, the link contains the report and some questions we would like to ask you about it. The report is designed so that you don't have to answer the questions (if you just want to look at the report), but it would be great if we could know what you think about our tool/report. Answering will take you 5-10 minutes.
If you have any question about the report or our research, please don't hesitate to contact me.
Thank you very much for your time and sorry for bothering you.
tracecompass-dev mailing list
To change your delivery options, retrieve your password, or unsubscribe from this list, visit