The system does work.

OldArrow wrote:
Tim A2 wrote:
corneaboy wrote:

:-):-) Well, I do think the scoring system works pretty well, but that doesn't mean it can't be improved. After my initial post I thought about it some more, and it seemed to me the best approach was to encourage better voting. Then I thought: if people are cheating by downgrading other entrants, it would be revealed by showing the histogram of each entrant's personal voting in that challenge along with the voting results. One of the advantages would be that you wouldn't have to accuse anyone of cheating, as it would be obvious to all. Accusing someone of cheating could bring about its own set of problems, which I'm sure dpr would want to avoid.

I don't seem to be getting any support for my idea, so I think I will go do some photography.
Don't feel bad about not getting support for your idea. That is pretty rare around here. I'm not saying that's a bad thing. I'm just sayin. I do think it may be naive to think dpr would ever reveal how individuals vote. The best we can hope for is to see the same histograms the hosts see now. Enjoy your photography and enjoy the challenges, just avoid the ones where unqualified entries get out of hand. My experience is that you get a fair shake by the majority of voters.

Tim
Maybe I failed to explain it clearly.
Actually, OldArrow, you were quite clear, and I thank you for the reply; I apologize for not thanking you sooner and bringing the conversation to a proper close. After further thought on information about the voting, I now think the most useful information would be a single histogram for each challenge showing the distribution of all voters' votes combined. It would also be interesting to see periodic histograms of all votes across all challenges. Do you agree? Has that been suggested before?


Tim
 
Maybe there would be some useful information in such a display if there were any foreseeable consistency in voting principles across the votes cast. I'm afraid this isn't the case...

For instance, an entry might (and often does) receive the whole range of vote values, from 0.5 through 5.0. What is there to see?
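Purely as an illustration (a hypothetical Python sketch with made-up numbers; dpreview publishes no per-entry vote data), here is what tallying one entry's votes into the 0.5-5.0 half-point bins would look like. A spread like this tells you almost nothing:

```python
from collections import Counter

# Made-up votes for a single entry on the 0.5-5.0 half-point scale;
# no such data feed exists, so this is purely illustrative.
votes = [0.5, 1.0, 2.0, 2.5, 3.0, 3.0, 3.5, 4.0, 4.5, 5.0]

histogram = Counter(votes)

# Print one row per possible vote value, 0.5 through 5.0.
for step in range(1, 11):
    value = step / 2
    print(f"{value:3.1f} | {'#' * histogram.get(value, 0)}")
```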

Some voted fairly, but out of their own experience and/or knowledge, which is an unknown quantity.

Others perhaps voted for reasons of their own, which may or may not have had anything to do with the entry's quality. Apart from a few obvious attempts, the rest of such voting is again an unknown.

Viewers quite often battle between "attractive photography" and "attractive theme". There will be inconsistencies there too. A good photo of a repulsive scene might receive lower votes, and a snapshot of something endearing might get higher ones, people voting from the heart instead of from a photography standpoint.

That's why I have been proposing changes to the Challenge system along the lines of:

- maximum use of the measurable elements in photo submissions (automatic pruning of entries by rules)

- voting on image quality only, not on rule conformance AND quality on the same value scale

- ensuring the maximum number of votes (via an "all-voters-vote-on-all-entries" approach), and thereby

- eliminating vote-slanting influence through the above principle

- host picking (and removing) based on something more accurate than randomness

If these were applied, you might expect more precise insight to emerge from the challenge results, since it would no longer matter much whether the final top list derived from Bayesian or additive processing.
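To make that last point concrete, here is a minimal sketch of the two methods (assuming a common "pull toward the global mean" form for the Bayesian one; the formula the challenges actually use is not public, and global_mean / prior_weight are assumed values):

```python
# A minimal sketch of additive vs. Bayesian processing. The "Bayesian"
# form here is the common pull-toward-the-global-mean average; the
# actual challenge formula is not public.

def additive_score(votes):
    """Plain arithmetic mean of all votes cast on an entry."""
    return sum(votes) / len(votes)

def bayesian_score(votes, global_mean=3.0, prior_weight=10):
    """Mean pulled toward global_mean; the fewer the votes,
    the closer the score stays to the prior. Both parameters
    are assumed values, not dpreview's."""
    return (prior_weight * global_mean + sum(votes)) / (prior_weight + len(votes))

few = [5.0, 5.0]          # two enthusiastic voters
many = [4.0] * 40         # forty moderately positive voters

print(additive_score(few), additive_score(many))   # 5.0, 4.0   -> "few" wins
print(bayesian_score(few), bayesian_score(many))   # ~3.33, 3.8 -> "many" wins
```

Under the "all-voters-vote-on-all-entries" principle, every entry would have the same number of votes, and the two methods would then produce the same ordering, which is exactly why the choice of processing would stop mattering.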
 