How to use DPReview Challenges (Help)
This article should help DPR members understand the Challenge system. Although it has been published anew, it still leaves open a wide range of problems that Hosts and Challenge Entrants have been grappling with for a long time.
It's been a year since the Hosts asked for certain changes that would get rid of various, dramatically widespread misuses of the Challenge system, such as cheating, vote slanting, and lazy or inept hosting. With due respect for and understanding of other priorities, let me try to offer some suggestions that would make the Challenges better, more educational, and at the same time more attractive to those members who would like to measure their personal advancement in photography by their placement in the final challenge rank, or "stats".
To that effect, the Challenges should be re-framed, some hosting tools should become available, and the challenge creation page should help set the particular challenge rules in an automatically checkable manner, so as to reduce the number of wrongly prepared entries already at the point of submission. There is more, but let me first point out the problems emerging from the current concept.
What we don't have at all, and probably miss the most, is someone open to a serious and effective discussion of the problems the hosts grapple with, day in and day out.
We don't have tools to expose cheaters, and when they are discovered, we can't do anything to remove them, at least from the challenges / series (when we might reasonably expect DPR to remove such members from the site altogether).
We don't see anything done to help us run fair, honest and educational Challenges.
We the hosts DON'T HAVE sufficient contact / response from DPR!
We have promises from DPR that changes will be made, but we do not seem able to influence either the kind of changes or the time when they can be expected.
And we do have some fine photographers and honest folk who have left the Challenges out of sheer frustration, when they realized that no one on the site side of things seems to care enough to extend some help to the hosts!
We also have members with multiple accounts, doing their thing.
We have an ambiguous voting scale, mixing quality and appropriateness and confusing the voters.
We also have Hosts with language, knowledge and responsibility problems. We have Hosts, Entrants and Voters who are outright Cheaters, yet they keep on hosting, participating and voting!
Fortunately, we also have Hosts who do their job responsibly, but they can't do anything against "less-than-honest" entrants and / or voters.
The purpose of all the proposed changes is to get rid of the frustrations on both the Host and the Entrant side.
Thanks for your patience.
As above. A Bayesian system may be right in some situations, but when one has to deal with improper voting from members hell-bent on winning at all costs, the only way to remove the malicious influence is to gather as many votes as possible (disarm by dilution). Various suggestions have been offered (see the majority of threads in this Forum), but the fairest and most precise way would be to have all voters vote on all challenge entries. This becomes feasible if there is an upper limit on total entries that will not scare away the potential voters! Then one could use a simple additive system instead of the current one, and have the most precise results.
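A minimal sketch of the simple additive system proposed above, assuming every voter rates every entry (the function name and the sample ratings are hypothetical, not part of any DPR system):

```python
def additive_scores(votes):
    """votes: {entry: [rating, ...]} where every entry has the same
    number of ratings (each voter rated each entry).  With equal vote
    counts, summing ratings ranks entries the same as averaging them,
    and no confidence weighting is needed."""
    totals = {entry: sum(ratings) for entry, ratings in votes.items()}
    # Highest total first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Three entries, each rated by the same three voters.
ranked = additive_scores({
    "A": [4.5, 5.0, 3.5],
    "B": [3.0, 3.5, 4.0],
    "C": [5.0, 4.5, 4.5],
})
# → [('C', 14.0), ('A', 13.0), ('B', 10.5)]
```

The point of the equal-votes precondition is that it removes the main argument for Bayesian weighting: when every entry has the same number of votes, a plain sum is already fair.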
- Rate as many, or as few, images as you wish. We chose a rating system over a ranking system because it frees you (the judges) from having to judge every image. Voting on only a subset of images in a challenge has no negative impact on the fairness of the final result, so take it easy.
... except when there was no rating. An unrated entry places last, and the reason need not be that entry's quality at all; it is enough that no voters happened to evaluate it, which is quite useless to that image's entrant.
- Every rating affects an image's final rank. You cannot actively give a 'neutral' rating (what would be the point?). The closest you can come to that is not expressing an opinion at all - and you don't even need an internet connection to do that.
If not all entries in a given challenge batch are voted upon, then it is possible that some will remain without votes. How will such images be ranked? They can't possibly "share joint last place", right? Thus, all images should be voted upon (to ensure equal exposure).
- Judge each image on its own merits. The final ranking system assumes that not all people will judge (or necessarily even see) every image. As images are shuffled during judging (to ensure equal exposure), better images will still receive more high-value ratings, pushing them to the top in the final ranking.
There will be no genuine feedback as long as the voting process isn't as exacting as possible, which also means removing the vote slanters and other "winning" tacticians.
- Rate high if images satisfy point #1, rate low if they don't. Don't be afraid to award low ratings to images which don't make the cut. Entrants want genuine feedback and everybody knows we're not playing for sheep stations here. Don't forget that there are 'half' stars (0.5, 1.5, etc.).
Thus, by the beginning of the Voting Phase, only those entries that fit the rules and aims of the challenge should remain within that challenge batch. If the Hosts do not do what they're supposed to do, they disfigure the whole challenge, which is unfair to all the entrants and annuls the challenge's purpose. Such hosts are useless.
There can be an excellent image that does not fit the Challenge at all. Some voters will judge the quality, while those who have read the rules will be forced to cast a low-value vote; in effect, they will be doing the Host's duty. Voters shouldn't have to do that at all.
*This is a point causing much confusion. There is one single voting scale which is to be used for two different purposes: image quality AND image appropriateness.
Rate each image according to your own interpretation of:
- how aesthetically pleasing is the image? AND
- how well does the image meet (your interpretation of) the challenge's criteria (name, description & rules)
Clear as mud? Well, don't worry, the beauty of the system is that one need not (and in fact should not) consider ranking mechanics when rating images. Just follow the judging guide below and have fun.
A very rough paraphrasing of our actual ranking algorithm would be: "each image's final rank is a combination of its average rating and our confidence in that average based on the number of people who rated it, compared to its peers"
It's natural that people want to know how their input contributes towards each image's final ranking. Early in the design phase of the challenges system it was clear to us that simple techniques (counts, averages etc.) could not possibly handle the multitude of variables in our challenges (arbitrary number of votes, arbitrary number of users, subjective criteria, arbitrary number of images). Therefore the ranking algorithm we finally settled upon utilizes Bayesian techniques instead.
How does the ranking process work?
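DPR has not published its actual algorithm, but the paraphrase above ("average rating combined with our confidence in that average") describes a classic Bayesian average. A minimal sketch of that general technique, with an assumed prior mean and prior weight (both hypothetical, purely for illustration):

```python
def bayesian_rank_score(ratings, prior_mean, prior_weight=5.0):
    """Shrink an image's average rating toward a prior (e.g. the
    challenge-wide mean).  Images with few votes stay close to the
    prior; images with many votes converge to their own average, which
    is how 'confidence based on the number of raters' enters the rank."""
    n = len(ratings)
    if n == 0:
        return prior_mean  # an unrated image falls back to the prior
    avg = sum(ratings) / n
    return (prior_weight * prior_mean + n * avg) / (prior_weight + n)

# Two images with the same 5.0 average, judged against a prior mean of 3.0:
few = bayesian_rank_score([5.0, 5.0], prior_mean=3.0)    # ≈ 3.57
many = bayesian_rank_score([5.0] * 20, prior_mean=3.0)   # 4.6
# The image rated 5.0 by twenty voters outranks the one rated 5.0 by two.
```

This also illustrates the complaint above about slanted votes: with few total votes, a handful of malicious low ratings moves the score far more than it would in a large, well-diluted pool.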
Any dpreview user can take part in the judging process for any challenge by rating images during that challenge's voting phase (entrants may not judge their own entries). Judging images is ultimately a subjective process and the challenge judging system reflects this. However, to address the most frequently asked questions we offer the following:
To become a challenge host, the applicant should meet the following minimal criteria:
- (DPR membership time / activity / images shown / challenge participations)
- (reasonable photographic knowledge made evident / membership recommendation)
- (has not been found cheating in entry submissions and/or voting)
dpreview.com challenges are online photo competitions defined by, open to and judged by members of the dpreview community.
Any member of the dpreview community may apply to become a challenge host. Themes, rules and entry limits are set by hosts, who can (and indeed should) disqualify entries which don't meet a challenge's criteria.