Guys,
I think we should take all statistics with a grain of salt, given that there
are lies, damned lies, and statistics. I personally find it a bit
incredible that a 90% approval rating is seen as something in need of
improvement. It seems to me that 1 in 10 people will always have cause
for complaint no matter what. As was pointed out already, the whole
Goldilocks principle is at play: what's just right for most will be too
hard or too soft, or too hot or too cold, for some.
Even this measure of turnout for a talk seems like a good thing,
but I'd advise caution here as well. For example, something about Java is
likely to generate a much larger turnout than something about C++, but
does that mean we should not bother to have C++ sessions, on the
premise that the low turnout reflects a lack of quality? Eclipse
serves many communities, some large and some small, so I'd be concerned
to see us serve primarily the larger communities,
ensuring they get larger, while serving the smaller communities poorly,
ensuring they stay small. That becomes a somewhat self-fulfilling and
ultimately self-defeating practice. That being said, of course it
makes sense to offer content in proportion to the levels of interest...
Donald Smith wrote:
Just to play devil's advocate, I would propose that a "yes/no" percentage is not
necessarily an indicator of quality. Perhaps it's an indicator of
the speaker's agreeability.
For the business track, I will be looking more at how many votes were cast
overall as a measure of interest, and will only look to correct if the
response was overwhelmingly negative. In other words, I think I would recruit someone who was
75-25 last year over someone who was 9-1. The first talk was obviously more
popular and elicited more response.
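One way to make this concrete (an illustration, not something proposed in the thread) is to rank talks by the lower bound of a Wilson score interval, which discounts small samples: the 75-25 talk then scores above the 9-1 talk despite its lower raw percentage. A minimal sketch in Python, where only the vote counts come from Don's example:

```python
import math

def wilson_lower_bound(yes: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval for a yes/no proportion.

    Discounts small samples, so a talk with many votes and a slightly
    lower approval rate can rank above one with only a handful of votes.
    """
    if total == 0:
        return 0.0
    p = yes / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return (centre - margin) / (1 + z * z / total)

# Don's two hypothetical talks: 75 yes / 25 no versus 9 yes / 1 no.
print(wilson_lower_bound(75, 100))  # ~0.66
print(wilson_lower_bound(9, 10))    # ~0.60 -- fewer votes, weaker evidence
```

By this measure the 75-25 talk bounds at roughly 0.66 against 0.60 for the 9-1 talk, matching the intuition that a hundred ballots tell you more than ten.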
- Don
-----Original Message-----
From: eclipse.org-eclipsecon-program-committee-bounces@xxxxxxxxxxx [mailto:eclipse.org-eclipsecon-program-committee-bounces@xxxxxxxxxxx] On Behalf Of Peter Kriens
Sent: August 28, 2008 2:03 AM
To: Eclipsecon Program Committee list
Subject: Re: [eclipse.org-eclipsecon-program-committee] Yes / No Polling Analysis
I am not sure the difference between Tutorials at 89% and Short Talks at 96%
is statistically significant. I also think that anything over 90% is
very good and hardly possible to improve on. When you look at more
detailed reviews, you always find some people who thought a session was too
technical and some who thought it was not technical enough, so 5% on
each side of the curve seems quite normal.
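Whether an 89% versus 96% gap is significant depends entirely on the per-category ballot counts, which this thread does not break out. A standard check is a two-proportion z-test; here is a minimal Python sketch in which the ballot counts (800 and 1,200) are made-up placeholders and only the approval percentages come from the report:

```python
import math

def two_proportion_z(yes_a: int, n_a: int, yes_b: int, n_b: int) -> float:
    """z statistic for the difference between two yes/no approval rates."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 89% of 800 tutorial ballots vs. 96% of 1,200
# short-talk ballots. The real per-category totals are in the attached
# PDF, not in this thread.
print(two_proportion_z(712, 800, 1152, 1200))  # about -6.1
```

With counts in the hundreds the gap would be clearly significant (|z| well above 1.96); with only a few dozen ballots per category it would not be, which is exactly the uncertainty being raised here.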
I agree about the tutorial quality and I am ready to do more reviews.
As a suggestion for next year, I'd like to have orange buckets to indicate
"neutral". Currently neutral and good are bundled together, and to find
out what the good stuff was, it would be nice to know what was really good.
With the avalanche of iPhones and people with portables, would it be
an idea to have some Internet-based feedback on a session (in real
time)?
Kind regards,
Peter Kriens
On 28 Aug 2008, at 07:31, Scott Rosenbaum wrote:
All,
Last year we used yes/no polling for the first time. We
actually got a lot of feedback, with almost 7,000 ballots cast over
the four days and 226 sessions. I realize that this polling data is
not very detailed, and the yes/no format probably nudges people
towards a better approval rating than you might get with a
different scale, but I still like the data for its simplicity and
the volume of feedback.
At a very high level we should be pleased that better than 90% of the
people liked (+) the sessions that they attended. On the other
hand, it is a little troubling to me that 1 in 10 people walked
away from a session somehow unsatisfied
(-). This is why you will hear me continuing to harp on quality
this year. I hope to see an improvement in the overall ratings next
year, but that will only happen if you work with the submitters in
your category to improve their talks and make sure that they are
prepared to give great presentations.
The attached PDF provides a fairly good overview of the results of
last year's feedback. One of the things that surprised me, and
really stood out, was how well the Short Talks (96% +) did on average
compared to the Tutorials (89% +) and Long Talks (90% +). This
is one of the reasons that we have expanded the number of short talk
sessions this year and will be actively pushing you to
recruit and organize more short talks.
One particular fact that stood out was that of the ten lowest rated
sessions, eight were tutorials (44%-62% approval). I think
that the takeaway is that if you are going to advertise a tutorial,
the presenter had better be prepared to do more than just talk about
the kewl stuff that they can do. The demos have to be clear and
well rehearsed. There needs to be concrete takeaway content.
Finally, a tutorial should have some level of hands-on involvement.
This is one of the reasons that I would like to see more commercial
trainers recruited to come and provide tutorials this year.
Do you feel that we as a program committee can take on a content
review of all of the tutorials? Would you be willing to do that
with the tutorials in your categories? Thoughts?
Scott Rosenbaum
<overall_analysis.pdf>