The system does work.

corneaboy

I have a pretty good background in statistics and have looked at the scoring results carefully to see if I can come up with suggestions for improving the system. Many instances of scoring bias are obvious attempts to scam the system, but I think in some cases it is a misunderstanding of how scoring should be done. In any case, it does seem that for the most part the better photos rise to the top.

The best way to get the most valid results is to vote on every photo in the challenge. That way the scammers will be lost in the noise.

I always wonder what the cheaters do if they should win. Do they put the results on their wall or show it to the local photo club?
 
Last year, a winner of a challenge I hosted proudly put his winning photo and statistics on his web site, which was an advertisement for a photography studio. While the photo was good, it clearly won because of friend voting; not against the rules, and not the best photo in the bunch, but he was probably only here to get a "win" for his web site. It seemed to work out well in his favor.
 
corneaboy wrote:
barb_s wrote:

Last year, a winner of a challenge I hosted proudly put his winning photo and statistics on his web site, which was an advertisement for a photography studio. While the photo was good, it clearly won because of friend voting; not against the rules, and not the best photo in the bunch, but he was probably only here to get a "win" for his web site. It seemed to work out well in his favor.
 
RaptorUK, I would agree that the relative system might work better. However, it requires that the judge examine all of the photos first to set the boundary conditions and then repeat the process for scoring. I don't think many people would be willing to spend the time it takes to do that.
 
I think the voting system works wonders, considering what it has to work with. Most people here just don't seem to understand Bayesian rating, let alone appreciate its power to get meaningful results from the messy voting that goes on around here. Your idea a while back to tweak the system by removing possible outliers is the only thing that I think could be improved. I do think the dpr guys are doing some trimming, but they need to be more aggressive to deal with the blatant "vote twisting" (thanks OldArrow for the term) that occurs.
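For anyone curious what "Bayesian rating" and "trimming" mean in practice, here is a rough Python sketch of the general idea only. dpr has never published its algorithm, so the prior weight and the challenge-wide mean below are illustrative assumptions, not dpr's actual values.

```python
# sketch only: dpr has not published its algorithm. the prior weight
# and challenge-wide mean below are illustrative assumptions.

def bayesian_average(votes, prior_mean, prior_weight):
    """Pull an entry's raw mean toward the challenge-wide mean.

    Entries with few votes stay near the prior; entries with many
    honest votes converge to their own raw average, which is how a
    handful of rogue votes can get lost in the noise.
    """
    n = len(votes)
    if n == 0:
        return prior_mean
    raw_mean = sum(votes) / n
    return (prior_weight * prior_mean + n * raw_mean) / (prior_weight + n)

def trimmed_votes(votes, k=1):
    """Drop the k lowest and k highest votes before averaging,
    one simple way of removing possible outliers."""
    if len(votes) <= 2 * k:
        return list(votes)
    return sorted(votes)[k:-k]

site_mean = 3.0                                       # assumed
print(bayesian_average([5.0] * 3, site_mean, 10))     # ~3.46: few votes
print(bayesian_average([5.0] * 30, site_mean, 10))    # 4.5: many votes
print(trimmed_votes([1, 5, 5, 10]))                   # [5, 5]
```

The point of the example: two entries with the same raw average land in very different places depending on how many people voted on them.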

Tim
 
I realized my previous post may seem like I am giving the voting system more credit than it is due. To be clear, I think the bulk of the credit for the cream rising to the top in the challenges goes to those voters who are responsible and capable judges of the photos entered.

Tim
 
i too remember enough of my college statistics course to agree with corneaboy that voters need to vote on all the entries. in addition to that, if you enter a challenge you ought to be required to vote. statistically you need a better sampling.... the challenge should be more of a contest than a poll.

i haven't done a lot of challenges but did just participate in the 'we are on our own'.

one hundred entries, you could enter two, so you have to figure probably more than sixty entrants, yet the 'winner' garnered votes from only 30 members. those at the other end of the challenge were getting around 20 votes.
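a rough numbers check on why the sampling matters: the uncertainty in an entry's average score shrinks roughly as one over the square root of the number of votes, so 30 votes leaves a much wider margin than full participation would. the 1.0 vote standard deviation below is just an assumed value, sketched in python:

```python
# rough illustration, not dpr's math: the standard error of a mean
# shrinks like sd / sqrt(n). the vote standard deviation of 1.0
# (on whatever scale is in use) is an assumed value.
import math

def standard_error(sd, n):
    return sd / math.sqrt(n)

for n in (20, 30, 100):
    print(n, round(standard_error(1.0, n), 3))  # 0.224, 0.183, 0.1
```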

more participation and voting for all the entries and this system can work and overlook your 'cheaters' at the same time.

get the vote out!!!

cheers.

el
 
corneaboy wrote:

RaptorUK, I would agree that the relative system might work better. However, it requires that the judge examine all of the photos first to set the boundary conditions and then repeat the process for scoring. I don't think many people would be willing to spend the time it takes to do that.
You have already stated that . . . "The best way to get the most valid results is to vote on every photo in the challenge." . . . so you already agree that the voters have to look at every image . . . so I suggest they look at every image, select their top x, and rank them 1 to x

x could be adjusted by number of entries, for example: minimum of 3 or 10% of the number of entries




Also make it mandatory for all entrants to vote in the Challenges they enter, and don't allow voters to vote in other Challenges until they have entered a minimum number of Challenges, maybe 5 . . .
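The "top x, ranked 1 to x" ballot above could be tallied Borda-style. The 10%-or-minimum-3 rule below is taken from the post; the point scheme (rank 1 gets x points, down to 1 point for rank x) is my own assumption about how such ballots might be scored, sketched in Python:

```python
# sketch of the "top x, ranked 1 to x" ballot. the 10%-or-minimum-3
# rule comes from the post above; the Borda-style points (rank 1
# gets x points, rank x gets 1) are my own assumption.

def picks_per_voter(num_entries, minimum=3, percent=10):
    # ceil(num_entries * percent / 100), done with integer math
    return max(minimum, -(-num_entries * percent // 100))

def tally(ballots, x):
    """ballots: lists of entry ids ranked best-first by each voter."""
    points = {}
    for ballot in ballots:
        for rank, entry in enumerate(ballot[:x]):
            points[entry] = points.get(entry, 0) + (x - rank)
    return points

x = picks_per_voter(20)                     # 20 entries -> rank your top 3
ballots = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(tally(ballots, x))                    # {'A': 8, 'B': 6, 'C': 4}
```

One nice property of this scheme: a voter can only help their favorites, not dump minimum scores on everyone else, which removes most of the incentive for vote twisting.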
 
RaptorUK wrote:
corneaboy wrote:

RaptorUK, I would agree that the relative system might work better. However, it requires that the judge examine all of the photos first to set the boundary conditions and then repeat the process for scoring. I don't think many people would be willing to spend the time it takes to do that.
You have already stated that . . . "The best way to get the most valid results is to vote on every photo in the challenge." . . . so you already agree that the voters have to look at every image . . . so I suggest they look at every image, select their top x, and rank them 1 to x

x could be adjusted by number of entries, for example: minimum of 3 or 10% of the number of entries

Also make it mandatory for all entrants to vote in the Challenges they enter, and don't allow voters to vote in other Challenges until they have entered a minimum number of Challenges, maybe 5 . . .
 
Since I do not know what algorithm is being used in the data analysis, I will give the dpr people the benefit of the doubt. There are so many potential problems in analyzing relative data, such as linearity of scaling, that it is very difficult to be fair. More sophisticated statistical analysis will never make up for poor data.

I would rather see participants encouraged to vote on all photos rather than make it compulsory, which might make them just quickly assign a bunch of low votes.

How about making the voting results distribution for each participant displayed as a histogram under his entry after the voting has been completed? I assume it should have some semblance of a normal distribution. It should make it very apparent who is doing the cheating. It might also reveal to some honest people that their voting is distorted in one direction or another.

I am also concerned about accusing anyone of cheating since that seems to invite legal challenges. Perhaps voting inconsistency or some other term might be used.
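Here is a sketch of the histogram idea in Python. The 1-to-5 voting scale and the 60% "slanted" threshold are my own assumptions for illustration, not the site's actual values:

```python
from collections import Counter

# sketch of the per-voter histogram idea. the 1-5 scale and the 60%
# "slanted" threshold are assumptions, not dpr's actual values.

def vote_histogram(scores, scale=range(1, 6)):
    """Count how often a voter used each score value."""
    counts = Counter(scores)
    return {s: counts.get(s, 0) for s in scale}

def looks_slanted(scores, scale=range(1, 6), threshold=0.6):
    """Crude flag: most of a voter's scores piled on the lowest or
    highest value suggests vote twisting rather than judging."""
    if not scores:
        return False
    lo, hi = min(scale), max(scale)
    worst = max(scores.count(lo), scores.count(hi))
    return worst / len(scores) > threshold

honest = [2, 3, 3, 4, 3, 5, 2, 4]
twister = [1, 1, 1, 1, 2, 1, 5, 1]
print(vote_histogram(twister))   # {1: 6, 2: 1, 3: 0, 4: 0, 5: 1}
print(looks_slanted(honest))     # False
print(looks_slanted(twister))    # True
```

Displaying something like the first output under each entrant would make a pile of 1s with a single 5 stand out at a glance, without anyone having to level an accusation.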
 
corneaboy wrote:

Since I do not know what algorithm is being used in the data analysis, I will give the dpr people the benefit of the doubt. There are so many potential problems in analyzing relative data, such as linearity of scaling, that it is very difficult to be fair. More sophisticated statistical analysis will never make up for poor data.
See here: http://www.dpreview.com/learn/?/Help/ChallengesHelp_01.htm
I would rather see participants encouraged to vote on all photos rather than make it compulsory, which might make them just quickly assign a bunch of low votes.
Encouraging does not work (proof: any rule in any challenge; these are either disregarded or not applied). Compulsory makes the voting process precise, even if done without any qualification (and we can't assess the voters' level of expertise anyway).

I'm sorry if it looks like "honesty enforcement", but so far, no other enticement produced positive results.
How about making the voting results distribution for each participant displayed as a histogram under his entry after the voting has been completed? I assume it should have some semblance of a normal distribution. It should make it very apparent who is doing the cheating. It might also reveal to some honest people that their voting is distorted in one direction or another.
Hosts do see such a histogram... it helps warn us of the illogical value distribution. The rest of the thing is a problem.
I am also concerned about accusing anyone of cheating since that seems to invite legal challenges. Perhaps voting inconsistency or some other term might be used.
Before accusing anyone of cheating, one has to gather evidence, and that evidence has to be undeniably obvious. Until that moment, there is only "reasonable suspicion".

What finally gets to be entered into anyone's Blacklist is based upon enough evidence to boot the perp out of DPR. Now if I was... and I'm not. So, nothing. :)
 
I have settled into a routine: play the slideshow to the end, making a mental note of the best and worst, then back up one by one, placing votes. Finally, close the slideshow and look at the thumbnail summary to review. My top entrants will show clearly and I can see inconsistencies.

This is probably common but I detail it to make a point - IT WORKS WELL AND IS FUN ONLY IF THE ENTRY NUMBER IS LIMITED - I think more participation would come with lower limits.
 
OldArrow wrote:
corneaboy wrote:
How about making the voting results distribution for each participant displayed as a histogram under his entry after the voting has been completed? I assume it should have some semblance of a normal distribution. It should make it very apparent who is doing the cheating. It might also reveal to some honest people that their voting is distorted in one direction or another.
Hosts do see such a histogram... it helps warn us of the illogical value distribution. The rest of the thing is a problem.
OldArrow, you are a very tough taskmaster when assessing new ideas, and I respect that. What do you think of the idea that hosts let us know when they see suspicious voting patterns in a challenge? It might even be helpful if the rogue voters knew their activity was being made public.

Tim
 
The hosts are doing that, using this Forum, but unless there is sufficient proof, no-one should tag anyone a "cheater". In that sense, even reasonable suspicions should IMO be phrased carefully, until there is enough evidence.

Sometimes errors happen. This, of course, excludes data manipulations... but genuine errors have a self-explainable way of being what they are. It is the host's task to discover those and straighten the matter out with the entrant. If the host isn't doing what s/he's supposed to do, the Forum is a good place to let others know.

Sometimes there are things that can't be avoided, like when using an old lens with a modern camera. The lens then can't supply its part of the metadata relating to performance. This has to be taken into consideration, and Challenges requesting full EXIF may have to modify the meaning of that rule...

There will be more things like that. In all cases, the host will be the most important decision-maker, and the host's actions, in a nutshell, are aimed at making the Challenges equally just, honest and fun for all. This means sticking to carefully formed challenges, rules and explanations.

It goes without saying that the dishonest are to be kept away from the Challenges by all possible means. Otherwise, the whole thing has lost its purpose.
 
corneaboy wrote:

Since I do not know what algorithm is being used in the data analysis, I will give the dpr people the benefit of the doubt. There are so many potential problems in analyzing relative data, such as linearity of scaling, that it is very difficult to be fair. More sophisticated statistical analysis will never make up for poor data.
I may have missed the point. I thought it was a given that the application of sophisticated statistical methods was out of the question because of the "poor data", and that it was impressive how well the ratings worked in spite of that fact. The general Bayesian method is well known. We don't know how dpr tweaks it for their purposes, though. Since they belong to Amazon, it could be they use a formula similar to the other Amazon companies. We will never know, because revealing their formula would be like throwing raw meat to a pack of hungry wolves.

Voters have different ideas on how to distribute their votes among the challenges and the dpr system accommodates that fact.

I thought the point of this thread was to acknowledge how well the ratings turn out. You went from saying the system does work, to saying it works most of the time, to now saying it can work. I think it works amazingly well, all things considered, thanks mostly to the efforts of those voters who give their honest evaluations and the efforts of most of the hosts. The rating system just quietly hums along in the background. If a newcomer visits this forum they will find a very negative tone that would give the wrong impression of just how well the challenges do work. The negativity is mostly a consequence of taking care of necessary business. Of the two problems, voting and poor hosting on the part of some hosts, I think the poor hosting is the bigger one, but at least that problem is mostly isolated to specific challenges.

Tim
 
:-):-) Well I do think the scoring system works pretty well, but that doesn't mean it can't be improved. After my initial post I thought about it some more, and it seemed to me the best approach was to encourage better voting. Then I thought, if people are cheating by downgrading other entrants, it would be revealed by showing the histogram of each entrant's personal voting in that challenge along with the voting results. One of the advantages would be that you wouldn't have to accuse anyone of cheating, as it would be obvious to all. Accusing someone of cheating could bring about its own set of problems, which I'm sure dpr would want to avoid.

I don't seem to be getting any support for my idea, so I think I will go do some photography.
 
 
corneaboy wrote:

:-):-) Well I do think the scoring system works pretty well, but that doesn't mean it can't be improved. After my initial post I thought about it some more, and it seemed to me the best approach was to encourage better voting. Then I thought, if people are cheating by downgrading other entrants, it would be revealed by showing the histogram of each entrant's personal voting in that challenge along with the voting results. One of the advantages would be that you wouldn't have to accuse anyone of cheating, as it would be obvious to all. Accusing someone of cheating could bring about its own set of problems, which I'm sure dpr would want to avoid.

I don't seem to be getting any support for my idea, so I think I will go do some photography.
Don't feel bad about not getting support for your idea. That is pretty rare around here. I'm not saying that's a bad thing. I'm just sayin. I do think it may be naive to think dpr would ever reveal how individuals vote. The best we can hope for is to see the same histograms the hosts see now. Enjoy your photography and enjoy the challenges, just avoid the ones where unqualified entries get out of hand. My experience is that you get a fair shake by the majority of voters.


Tim
 
Tim A2 wrote:
corneaboy wrote:

:-):-) Well I do think the scoring system works pretty well, but that doesn't mean it can't be improved. After my initial post I thought about it some more, and it seemed to me the best approach was to encourage better voting. Then I thought, if people are cheating by downgrading other entrants, it would be revealed by showing the histogram of each entrant's personal voting in that challenge along with the voting results. One of the advantages would be that you wouldn't have to accuse anyone of cheating, as it would be obvious to all. Accusing someone of cheating could bring about its own set of problems, which I'm sure dpr would want to avoid.

I don't seem to be getting any support for my idea, so I think I will go do some photography.
Don't feel bad about not getting support for your idea. That is pretty rare around here. I'm not saying that's a bad thing. I'm just sayin. I do think it may be naive to think dpr would ever reveal how individuals vote. The best we can hope for is to see the same histograms the hosts see now. Enjoy your photography and enjoy the challenges, just avoid the ones where unqualified entries get out of hand. My experience is that you get a fair shake by the majority of voters.

Tim
Maybe I failed to explain it clearly.

The hosts see the voting patterns of individual voters, and it's usually not hard to spot when someone among the voters casts their votes in an illogical way. Comparing entry quality, conformance with the rules, etc. shows it right away (unless the hosts know nothing about photography).

Whether entrants see how the individual voters cast their votes is not so relevant, since it suggests that one can tell how well each voter understands what s/he is doing. Since that isn't possible, it's enough if the hosts see that the voting was fair and honest, thus ascertaining that the challenge runs as it should. This, in turn, depends upon the hosts' responsibility.

All the proposed changes have been suggested in order to make the hosts', entrants' and voters' behavior uniform, as the challenges will only work properly if everyone involved does their part responsibly. Since this can't merely be wished for or expected, some segments have to be enforced, by challenge system reconstructions, as many unguarded spots have been found and used by the "morally challenged" to their advantage. These required changes are quite extensive, and have all been discussed elsewhere.


The host's problem is in the anonymization code. Hosts alone should be able to see that voter's ID in order to remove such a manipulation. In other words, obvious vote slanting should be removed.

For that, hosts need some efficient tools. We've been waiting for challenge changes which will allow hosts to really moderate their challenges, and not just sit and feel helpless when they see how the dishonest turn their best intentions to garbage.

My advice to all entrants would be to watch for hosts who allow entries outside of their own rules to run without any timely intervention, so as to be able to avoid those. Also, avoid all challenges run by hosts who have been caught cheating. That way, everyone can reduce the manipulators' field of play.

Use the available tools (complaint, feedback, mail, Forum, etc.) to let hosts / DPR know about improper behavior. In short, do whatever you see fit to make challenges fair and straight (again).
 