Re: I dont understand this voting system


I just stumbled onto this thread after viewing the results of the first DPR challenge I entered and noticing the "outlying" ratings for the top images. I thought I would add my two cents worth.

Note that I was not unhappy about how my own entry was rated - actually a little better than I figured - but I was curious about why the distribution was so "abnormal" (still don't know).

Normal or slightly skewed distributions - the familiar bell curve, or a somewhat lopsided version of it - might be predicted when asking a population to "score" some parameter (like the quality of an image) if that parameter has generally known and common expectations (e.g. challenge rules and photographic aspects such as focus, sharpness, composition etc. to name a few). An example: when people are asked to rate the taste of comparable food items, the results are often nearly symmetrical distributions. In such a case, comparing the average (mean) ratings of the items tells you with reasonable accuracy which taste better or worse.
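To make that concrete, here's a tiny sketch (with made-up ratings, not real data) of why plain means are a fair comparison when the distributions are roughly symmetrical:

```python
import statistics

# Hypothetical 1-5 ratings for two comparable food items.
# Both distributions cluster symmetrically around their centers.
item_a = [3, 4, 4, 5, 4, 3, 4, 5, 4, 4]
item_b = [2, 3, 3, 4, 3, 2, 3, 4, 3, 3]

# With no heavy tails or outliers, the means alone rank the items reliably.
print(statistics.mean(item_a))  # 4.0
print(statistics.mean(item_b))  # 3.0
```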

When rating distributions are expected to be "normal" but are not, "fixes" can be applied to the data in order to eliminate bias and derive comparable results (such as winners in a contest). Bayesian theory encompasses a rather elaborate family of equations which are sometimes used to this end. You can look up the gory details, but I don't recommend it - lots of heavy math and specialized statistics vocabulary.

In essence, the data "fixing" works by determining an "expected" rating for a parameter - think of it as the average rating - along with the variability of the ratings (how much scatter there is). The fancy equations then essentially compress the distribution, diminishing the "importance" of outlying data points. If a statistical test shows that some low ratings fall outside the range expected from the bulk of the votes cast, the average ends up shifted upwards. Think of it like dropping the high and low scores in a competition, but more precise: points are de-weighted in proportion to how far they fall from the average.

This is a simplistic interpretation - DPR's method must use an actual mathematical treatment - but in the end, I believe the results are basically fair, and the images that most people like best end up in the higher spots. Can attempted manipulation knock an entry a few spots out of its rightful place? Probably. Can it turn a well-liked entry into a bottom dweller or garbage into a winner? No.
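As a rough illustration of that de-weighting idea - this is my own sketch, not DPR's actual formula - here is a weighted mean whose weights shrink the further a vote sits from the raw average:

```python
import statistics

def robust_mean(ratings, k=1.5):
    """Hypothetical outlier de-weighting: each rating's weight falls off
    with its distance from the raw mean, measured in standard deviations.
    NOT DPR's actual algorithm - just a sketch of the general technique."""
    mu = statistics.mean(ratings)
    sigma = statistics.stdev(ratings)
    if sigma == 0:
        return mu
    # Votes near the mean get weight ~1; far-out votes get much less.
    weights = [1.0 / (1.0 + (abs(r - mu) / (k * sigma)) ** 2) for r in ratings]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

votes = [4.5, 4.0, 4.5, 5.0, 4.5, 0.5]   # one suspiciously low vote
print(statistics.mean(votes))            # raw mean, dragged down by the 0.5
print(robust_mean(votes))                # de-weighted mean, nearer the bulk
```

The effect is just what the paragraph describes: the low outlier still counts, but much less, so the adjusted average sits back up near where most voters put it.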

What I really don't get is why anyone would try to influence the results. Regardless of that, the best way to negate the effect of manipulation is to input more ratings. Assuming that most people are honest, the bulk of their "true" ratings will squelch the squirrely ones.
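A quick check of that last point, with made-up numbers: the same two manipulated votes move the average less and less as honest votes accumulate.

```python
import statistics

honest_vote = 4.5            # what most voters honestly think
squirrely = [0.5, 0.5]       # two manipulated low votes

few_voters = [honest_vote] * 5 + squirrely
many_voters = [honest_vote] * 50 + squirrely

# More honest ratings pull the average back toward the "true" value.
print(statistics.mean(few_voters))
print(statistics.mean(many_voters))
```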

Apologies for blathering. Regards, Brian