
Re: [eclipse.org-eclipsecon-program-committee] Yes / No Polling Analysis

I am not sure the difference between Tutorials (89%) and Short Talks (96%) is statistically significant. I also think that anything over 90% is very good and hard to improve on. When you look at more detailed reviews, you always find some people who thought a session was too technical and some who thought it was not technical enough, so 5% on each side of the curve seems quite normal.
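One rough way to check this intuition is a two-proportion z-test. The thread only gives the overall total of roughly 7,000 ballots, not the per-format counts, so the sample sizes below are purely hypothetical placeholders; with real counts the same function would give the actual answer. A minimal sketch using only the standard library:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test: returns (z statistic, two-sided p-value).

    x1/n1 and x2/n2 are the 'yes' counts and ballot totals for the
    two session formats being compared.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                 # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided normal tail
    return z, p_value

# Hypothetical ballot counts -- the real per-format numbers are not
# in the thread. 96% of 500 short-talk ballots vs 89% of 500 tutorial
# ballots as an illustration:
z, p = two_proportion_z(480, 500, 445, 500)
print(f"z = {z:.2f}, p = {p:.2g}")
```

Whether the 7-point gap is significant hinges entirely on how many ballots each format actually received: at a few hundred ballots per format the gap would be significant, at a few dozen it would not.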

I agree about the tutorial quality and I am ready to do more reviews.

As a suggestion for next year, I'd like to have orange buckets to indicate "neutral". Currently neutral and good are bundled together, and to identify the really good sessions it would help to keep the two separate.

With the avalanche of iPhones and people carrying portables, would it be an idea to have some internet-based feedback on a session (in real time)?

Kind regards,

	Peter Kriens


On 28 aug 2008, at 07:31, Scott Rosenbaum wrote:

All,

Last year we went to the yes/no polling for the first time. We actually got a lot of feedback, with almost 7,000 ballots cast over the four days and 226 sessions. I realize that this polling data is not very detailed, and the Yes/No nature probably games people towards a better approval rating than you might get with a different scale, but I still like the data for its simplicity and the volume of feedback. At a very high level we should be pleased that better than 90% of the people liked (+) the session that they attended. On the other hand, it is a little troublesome to me that 1 in 10 people walked away from a session having decided that they were somehow unsatisfied (-). This is why you will hear me continuing to harp on quality this year. I hope to see an improvement in the overall ratings next year, but it will only happen if you work with the submitters in your category to improve their talks and make sure that they are prepared to give great presentations.

The attached PDF provides a fairly good overview of the results of last year's feedback. One of the things that surprised me, and really stood out, was how well the Short Talks (96% +) did on average compared to the Tutorials (89% +) and Long Talks (90% +). This is one of the reasons that we have expanded the number of short talk sessions this year and will be actively pushing you to recruit and organize more short talks.

One particular fact that stood out was that of the ten lowest-rated sessions, 8 were tutorials (44%-62% approval). I think the takeaway is that if you are going to advertise a tutorial, the presenter had better be prepared to do more than just talk about the kewl stuff that they can do. The demos have to be clear and well rehearsed. There needs to be concrete takeaway content. Finally, a tutorial should have some level of hands-on involvement. This is one of the reasons that I would like to see more commercial trainers recruited to come and provide tutorials this year. Do you feel that we as a program committee can take on a content review of all of the tutorials? Would you be willing to do that with the tutorials in your categories? Thoughts?


Scott Rosenbaum



<overall_analysis.pdf>


