
Re: [eclipse.org-eclipsecon-program-committee] Yes / No Polling Analysis

Perhaps instead of a Neutral bucket we need a "Freakin' Awesome" bucket.  Rather than striving to not suck, shoot for being awesome.  <tongue somewhat in cheek/>

Seriously, there will always be a few crappy talks, and while I completely agree with trying to reduce the number, elimination is a battle of diminishing returns.  On the other hand, I quite liked the direction discussed previously to increase the number of "you had to be there" moments.  I've been trying to think of strategies there.  The best I can think of so far is based on the idea that collectively we have been to a large number of talks.  I'm sure we all have some that are particularly memorable.  Was there something in their format or structure that helped them be memorable?  Perhaps we collect these up, massage them a bit, and make them available to speakers as inspiration?  "Patterns for a memorable talk"

Jeff

Scott Rosenbaum wrote:
All,

We went to the Yes/No polling as an attempt to a) get more data, and b) get the data more quickly.  My experience with multi-question speaker evaluations is that they have very low participation levels and usually take a long time to compile and publish.  The time lag between delivery and feedback is particularly bad for improving your presentation.  I also feel that the most important feedback on a talk is the open-ended comments, which our system supports.

So I agree that Yes/No is not the best, but it is better than the alternative.  In terms of a neutral bucket, I have no problem with that idea, but I wonder if it would not be better to focus on getting an accurate head count for each session, with the assumption that anyone who doesn't vote was neutral.  So the report for a given talk would look like:

Talk Name       Num Attendants                      % Positive       % No Vote       % Negative
Reporting 101   100 (head count by door monitor)    35% (35 votes)   62% (no vote)   3% (3 votes)
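
To make the arithmetic concrete, here is a minimal sketch of how such a report row could be computed from the door-monitor head count and the ballots.  It is purely illustrative; the function and field names are my own, not part of any existing system:

    def session_report(name, head_count, positive, negative):
        # Anyone counted at the door who did not cast a ballot is assumed neutral.
        no_vote = head_count - positive - negative

        def pct(n):
            return 100.0 * n / head_count

        return (name, head_count, pct(positive), pct(no_vote), pct(negative))

    # The hypothetical "Reporting 101" row: 100 attendees, 35 yes, 3 no.
    print(session_report("Reporting 101", 100, 35, 3))
    # -> ('Reporting 101', 100, 35.0, 62.0, 3.0)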

I like the idea of on-line voting too, and it seems like that would also work with this system.  Thoughts?  Is there a better way to quickly get attendee feedback and publish it?

So now I am going to have to put my damned liar hat on and use a few statistics to defend my goal of improving the quality rating.  If all of last year's talks had had a 90% approval rating, then there would be very little room for improvement.  The issue is that the vast majority of last year's presentations had better than 90% approval.  Consider that two thirds of our talks had a 100% approval rating (85 out of 127), and more than half of the talks had better than a 95% approval rating.*  So our 90% approval rating could have been much higher except for a few outliers on the dissatisfied side.
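
For the curious, the tally behind those figures is simple to reproduce.  A sketch, using made-up per-talk (yes, no) ballot counts since the raw data isn't in this mail:

    # Made-up (yes, no) ballot pairs per talk; the real data covered 127 talks.
    talks = [(40, 0), (25, 0), (33, 2), (18, 1), (50, 6)]

    ratings = [yes / (yes + no) for yes, no in talks]
    perfect = sum(r == 1.0 for r in ratings)    # talks at a 100% approval rating
    above_95 = sum(r > 0.95 for r in ratings)   # talks above 95%
    overall = sum(y for y, _ in talks) / sum(y + n for y, n in talks)
    print(perfect, above_95, f"{overall:.0%}")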

I would like to see improvement in the overall quality through the reduction or elimination of poorly received talks.
So to re-phrase my goal another way: EclipseCon 2009 - No Crappy Talks

My feeling is that given the way our system skews towards the positive, all a speaker has to do is put the effort in and people will give them a pass or neutral mark.  I am asking the program committee to actively coach and work with their presenters to make sure that they:
   - have chosen an appropriate scope for their talk
   - are prepared
   - focus on their audience (which is not the same as focusing on what they want to say)


Scott

*Like I said, our Yes/No system is probably a bit skewed towards the positive, but I still think it is the best measurement system I have seen used.

Ed Merks wrote:
Guys,

I think we should take all statistics with a grain of salt, given that there are lies, damned lies, and statistics.  I personally find it a bit incredible that a 90% approval rating is seen as something in need of improvement.  It seems to me that 1 in 10 people will always have cause for complaint no matter what.  As was pointed out already, the whole Goldilocks principle is at play: what's just right for most will be too hard or too soft, or too hot or too cold, for some.

Even the measure of turnout for a talk seems like a good thing, but I'd advise caution here.  For example, something about Java is likely to generate a much larger turnout than something about C++, but does that mean we should not bother to have C++ sessions, on the premise that the low turnout represents a lack of quality?  Eclipse serves many communities, some large and some small, so I'd be concerned if we ended up serving primarily the larger communities, ensuring they get larger, while serving the smaller communities poorly, ensuring they stay small.  That becomes a self-fulfilling and ultimately self-defeating practice.  That being said, of course it makes sense to offer content in proportion to the levels of interest...


Donald Smith wrote:
Just to play devil's advocate, I would propose that "yes/no" % is not
necessarily an indicator of quality.  Perhaps it's an indicator of
agreeability of the speaker.  

For the business track, I will be looking more at how many votes were cast overall as a measure of interest, and will only look to correct if the response was overwhelmingly negative.  In other words, I think I would recruit someone who was 75-25 last year over someone who was 9-1.  The first talk was obviously more popular and elicited more response.
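
A sketch of that heuristic, assuming per-talk yes/no tallies; the cutoff value is my own guess at what "overwhelmingly negative" means:

    def recruit_priority(yes, no, overwhelming=0.5):
        # Rank speakers by engagement (total ballots cast), unless the
        # response was overwhelmingly negative, in which case correct instead.
        total = yes + no
        if total == 0:
            return 0           # no ballots, no signal
        if no / total > overwhelming:
            return None        # overwhelmingly negative: look to correct
        return total           # more ballots = more interest

    print(recruit_priority(75, 25))  # 100 -> recruit first
    print(recruit_priority(9, 1))    # 10  -> well liked, but less response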


 - Don

-----Original Message-----
From: eclipse.org-eclipsecon-program-committee-bounces@xxxxxxxxxxx
[mailto:eclipse.org-eclipsecon-program-committee-bounces@xxxxxxxxxxx] On
Behalf Of Peter Kriens
Sent: August 28, 2008 2:03 AM
To: Eclipsecon Program Committee list
Subject: Re: [eclipse.org-eclipsecon-program-committee] Yes / No Polling
Analysis

I am not sure the difference between Tutorials at 89% and Short Talks at 96% is statistically significant.  I also think that anything over 90% is very good and hard to improve on.  When you look at more detailed reviews you always find some people who thought it was too technical and some who thought it was not technical enough.  So 5% on each side of the curve seems quite normal.
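
Whether that 89% vs 96% gap is significant is easy to check with a standard two-proportion z-test once the per-format ballot counts are known.  A sketch; the counts below are assumptions, since only the percentages were circulated:

    from math import sqrt, erf

    def two_proportion_z(x1, n1, x2, n2):
        # Two-sided z-test for a difference between two approval proportions.
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Assumed counts: 500 tutorial ballots at 89%, 800 short-talk ballots at 96%.
    z, p = two_proportion_z(445, 500, 768, 800)
    print(f"z = {z:.2f}, p = {p:.4f}")  # a large p would support Peter's point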

I agree about the tutorial quality and I am ready to do more reviews.

As a suggestion for next year, I'd like to have orange buckets to indicate "neutral".  Currently neutral and good are bundled together, and to find out what the good stuff was, it would be nice to know what was really good.

With the avalanche of iPhones and people with portables, would it be an idea to have some internet-based feedback on a session (in real time)?

Kind regards,

	Peter Kriens


On 28 Aug 2008, at 07:31, Scott Rosenbaum wrote:

All,

Last year we went to the yes/no polling for the first time.  We actually got a lot of feedback, with almost 7,000 ballots cast over the four days and 226 sessions.  I realize that this polling data is not very detailed, and the Yes/No format probably nudges people towards a better approval rating than you might get with a different scale, but I still like the data for its simplicity and the volume of feedback.

At a very high level we should be pleased that better than 90% of the people liked (+) the session that they attended.  On the other hand, it is a little troublesome to me that 1 in 10 people walked away from a session somehow unsatisfied (-).  This is why you will hear me continuing to harp on quality this year.  I hope to see an improvement in the overall ratings next year, but it will only happen if you work with the submitters in your category to improve their talks and make sure that they are prepared to give great presentations.

The attached PDF provides a fairly good overview of the results of last year's feedback.  One of the things that surprised me, and really stood out, was how well the Short Talks (96% +) did on average compared to the Tutorials (89% +) and Long Talks (90% +).  This is one of the reasons that we have expanded the number of short talk sessions this year and will be actively pushing you to recruit and organize more short talks.

One particular fact that stood out was that of the ten lowest rated sessions, eight were tutorials (44% - 62% approval).  I think the take-away is that if you are going to advertise a tutorial, the presenter had better be prepared to do more than just talk about the kewl stuff that they can do.  The demos have to be clear and well rehearsed.  There needs to be concrete take-away content.  Finally, a tutorial should have some level of hands-on involvement.  This is one of the reasons that I would like to see more commercial trainers recruited to come and provide tutorials this year.

Do you feel that we as a program committee can take on a content review of all of the tutorials?  Would you be willing to do that with the tutorials in your categories?  Thoughts?


Scott Rosenbaum



<overall_analysis.pdf>
