List of best rated lenses on B+H and Amazon

JohnNEX

I sometimes put up a list of all E/FE lenses with various stats. You can find the latest list, from around a month ago, here.

That list is ordered by focal length. Just for something different, below is a list of FE lenses ordered by their ratings (out of 5) on B&H and Amazon. I've updated a lot of the ratings recently.

The list is ordered by a Bayesian average, which is how a lot of websites rank things (e.g. the IMDb Top 250 films and BoardGameGeek). It works such that a lens with a single rating of 5 does not shoot to the top. Instead, it finds a balance between having a high rating and a good number of ratings. If you want to argue the merits of the Bayesian average then please do that someplace else.
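(For the curious, a standard way to compute it - and the one assumed here - adds C 'dummy' ratings at a prior mean m, giving:

Bayesian average = (C × m + n × raw average) / (C + n)

where n is the number of real ratings a lens has and 'raw average' is its plain mean rating.)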



[Table: FE lenses ranked by Bayesian average of their B&H and Amazon ratings]

This is a ranking of the best-rated lenses on two websites. It is NOT necessarily a ranking of the best lenses. You can probably come up with lots of reasons why the B&H and Amazon ratings are somehow flawed - I can too (e.g. low ratings because of Amazon's delivery service etc etc etc). Don't get too hung up that your BFF lens is ranked only at number 11 or whatever rather than number 1 where it really deserves to be. It's just a list for general interest. The ranking for lenses with very few ratings (e.g. the 100mm STF and the Loxia 85mm) will obviously change as more ratings come in.

That said, the rankings unsurprisingly line up pretty well with what 'general opinion' seems to be. My rough impression is that people base their ratings on a combination of absolute quality and value for money, weighted around 60/40 or 70/30 I reckon. In a few cases something else is a significant factor - the obvious one being the GM 85mm f/1.4, which got a lot of low ratings because of noisy AF.

Almost all of the lenses here are really very good - just because a lens is in the second half of the list doesn't mean it's bad, and even the bottom few lenses have their supporters. The key is to do your research and understand what you are buying before it arrives.
 
Love data analysis like this. Thanks.

Q. Is the Zeiss Batis 18mm really manual focus?

Would love to see the same stats for any mount :-)
 
Thanks for the info & good analysis.

Just about to pull the trigger on the FE 85 f/1.8, which, like you said about the scores, is an overall thing when looking at performance / value for money. If money were no object then I would just get the G Master, but at double the price?

Getting the FE 85 f/1.8 for £519, & eBay has 8x Nectar points so that's another £20 off overall.

Just gotta pick me a UWA.
 
I have #s 1, 6, and 9. I sold #4 and then bought #19. For what I do, I do not see a difference in IQ between #4 and #19. The Voigtlander 15/4.5 didn't make the list, but it may be my next purchase (I rented it and loved it) or the Laowa 15/2 when it emerges.

I am surprised the FE 28 is rated so highly, but I suppose it's a great value for the money and that's certainly reflected in the ratings. Perhaps my best picture of all time was taken with the FE 28 (maybe 2 of my top 3 for that matter).
 
Thanks for doing this, it's really interesting. It may be helpful to add a "date released" column. A lot of the top lenses have been around a while (i.e., have the most ratings).
 
Love data analysis like this. Thanks.

Q. Is the Zeiss Batis 18mm really manual focus?

Would love to see the same stats for any mount :-)
Nice pick-up! The Batis 18mm is autofocus - will correct that for next time - thanks!
 
Thanks for the info & good analysis.

Just about to pull the trigger on the FE 85 f/1.8, which, like you said about the scores, is an overall thing when looking at performance / value for money. If money were no object then I would just get the G Master, but at double the price?

Getting the FE 85 f/1.8 for £519, & eBay has 8x Nectar points so that's another £20 off overall.

Just gotta pick me a UWA.
The GM has better IQ, but is weight no object as well? The 85 GM is much heavier, being an f/1.4.
 
I have #s 1, 6, and 9. I sold #4 and then bought #19. For what I do, I do not see a difference in IQ between #4 and #19. The Voigtlander 15/4.5 didn't make the list, but it may be my next purchase (I rented it and loved it) or the Laowa 15/2 when it emerges.

I am surprised the FE 28 is rated so highly, but I suppose it's a great value for the money and that's certainly reflected in the ratings. Perhaps my best picture of all time was taken with the FE 28 (maybe 2 of my top 3 for that matter).
The Voigtlander 15mm is at number 17 on the list.

Yes, the 28mm f2 seems to be a very decent lens at a very good price.
 
Excellent guide. Thank you. While I have both #1 and #2, and think they do belong at the top, my most-used lens is the 16-35mm f/4 and it's far down the list. Selecting a kit is always complicated.
 
Thanks for the info & good analysis.

Just about to pull the trigger on the FE 85 f/1.8, which, like you said about the scores, is an overall thing when looking at performance / value for money. If money were no object then I would just get the G Master, but at double the price?

Getting the FE 85 f/1.8 for £519, & eBay has 8x Nectar points so that's another £20 off overall.

Just gotta pick me a UWA.
The GM has better IQ, but is weight no object as well? The 85 GM is much heavier, being an f/1.4.
The weight would come into it after a while. I would put up with the weight initially, but I know it would start to annoy me. I have gone from an RX10 "all in one" with just that around my neck to an A7R2 with 5 lenses, & I have started humping around a tripod as well, so the weight is building. I have started leaving the 70-200 at home as my lenses have now outgrown my bag.

PS... ordered the FE 85, so my main arsenal will be the FE28, FE55 & FE85, with the 70-200 in reserve. I have the 21mm converter for the 28mm, but was thinking of getting rid of the 21 & possibly the 28, to be replaced with the 16-35 - though I might be interested in the new 12-24.
 
This might be an interesting approach. The problem is that a cursory look shows that the list is full of errors.

For example, in 31st place is the Venus 15mm, with a rating of 4 on both sites, from 1 and 6 ratings on Amazon and B&H respectively. It is followed by the Mitakon, which has much higher ratings on both sites (4.5 and 4.37) and many more ratings (almost eight and a half times as many in total). Since the raw ratings for the Venus are lower on both sites, there is no weighting system that could account for the position of the Venus versus the Mitakon.

The same problem exists for the next pair - the only way the Yasuhara in 33rd could be ahead of the Sony in 34th is another error or errors. The Venus in 31st is above both and should be above neither.

#36 and #37 show the same problem.

To be of any use at all, the analysis would have to be redone.
 
This might be an interesting approach. The problem is that a cursory look shows that the list is full of errors.

For example, in 31st place is the Venus 15mm, with a rating of 4 on both sites, from 1 and 6 ratings on Amazon and B&H respectively. It is followed by the Mitakon, which has much higher ratings on both sites (4.5 and 4.37) and many more ratings (almost eight and a half times as many in total). Since the raw ratings for the Venus are lower on both sites, there is no weighting system that could account for the position of the Venus versus the Mitakon.

The same problem exists for the next pair - the only way the Yasuhara in 33rd could be ahead of the Sony in 34th is another error or errors. The Venus in 31st is above both and should be above neither.

#36 and #37 show the same problem.

To be of any use at all, the analysis would have to be redone.


[Image: excerpt of the ranking table, positions #31-#37]

As you can see, #31 should be below all of these except, perhaps, #33 and #36.
 
This might be an interesting approach. The problem is that a cursory look shows that the list is full of errors.

For example, in 31st place is the Venus 15mm, with a rating of 4 on both sites, from 1 and 6 ratings on Amazon and B&H respectively. It is followed by the Mitakon, which has much higher ratings on both sites (4.5 and 4.37) and many more ratings (almost eight and a half times as many in total). Since the raw ratings for the Venus are lower on both sites, there is no weighting system that could account for the position of the Venus versus the Mitakon.

The same problem exists for the next pair - the only way the Yasuhara in 33rd could be ahead of the Sony in 34th is another error or errors. The Venus in 31st is above both and should be above neither.

#36 and #37 show the same problem.

To be of any use at all, the analysis would have to be redone.
What you point out is what happens with the Bayesian average methodology. As I said in the original post, the Bayesian average is used in a wide variety of contexts but it does seem a bit counter-intuitive when the number of observations is low.

The Bayesian average (leaving out a bunch of theory) basically adds a number of 'dummy' ratings to the calculation of each average. This means that lenses with fewer ratings tend more towards the dummy rating.
 
I am surprised the FE 28 is rated so highly, but I suppose it's a great value for the money and that's certainly reflected in the ratings. Perhaps my best picture of all time was taken with the FE 28 (maybe 2 of my top 3 for that matter).
I must say, given the normal price-to-performance ratio amongst lenses, I did not expect such good results from the 28mm. It is a great performer at a very reasonable price and I use mine quite a bit.
 
Those lists have one big shortcoming: they assume that the only lenses worth having are native mount ones.

I have some great lenses, but not a single one on those lists.... ;-)
I suspect the time it takes to add every non-native mount lens would be quite prohibitive. I appreciate the effort of simply doing the native mount lenses.
 
JohnNEX wrote:

What you point out is what happens with the Bayesian average methodology. As I said in the original post, the Bayesian average is used in a wide variety of contexts but it does seem a bit counter-intuitive when the number of observations is low.

The Bayesian average (leaving out a bunch of theory) basically adds a number of 'dummy' ratings to the calculation of each average. This means that lenses with fewer ratings tend more towards the dummy rating.
I agree that a counter-intuitive result isn’t necessarily an incorrect one. My initial language was too stark. I believe that the particular assumptions you have chosen to apply here have resulted in a list that includes problematic results.

When one lens is ranked above another lens even though it has both much lower raw ratings and many fewer observations, it ought to lead to a re-examination of the assumptions made, in my view.

I will use the example of the Venus 15mm lens, at #31 and the Mitakon at #32, which has higher raw scores.

Why did this particular re-ranking occur?

When re-ordering a list like this, the analyst makes various assumptions to develop new expected values of the utility of each item on the list, which he or she is assuming may differ from the apparent observed utility. If all of the lenses had many ratings, then the raw and post-analysis lists would be very close – the chosen algorithm and other assumptions would have relatively little impact on the re-ordering. The lower the number of ratings, the more the analyst’s assumptions affect the repositioning of a particular lens.

Since the assumptions that went into the calculations weren’t discussed, I have made some guesses – please correct me if I am wrong. I am assuming that not all of the lens ratings data on the Amazon and B&H websites were included in the analysis. I am assuming only the ratings data for the most highly-rated lenses were included. The results of the analysis will reflect this assumption, which is key. How many "dummy" observations were included in the calculations?

The ratings of this arbitrary “best lens” subset are upwardly biased, obviously, compared to the ratings for the total population of lenses. So, the position of the Venus 15mm lens was arbitrarily moved up, above its raw position, because of the use of a biased (compared to the entire lens population) subset and the number of “dummy” observations included in the calculations. If more lenses and/or fewer “dummy” observations were included, it is likely that the Venus would have been moved down, reflecting the resulting lower sample mean rating combined with the low n values of the Venus ratings.

The results are not endogenous – they reflect the assumptions used. One could arbitrarily change the order of the final list by changing the assumptions used.
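To make this concrete, here is a minimal Python sketch (the 50 dummy ratings, the two prior means, and the lens figures are all illustrative, loosely based on the Venus/Mitakon numbers quoted above):

```python
def bayesian_average(n, raw_avg, dummy_n, dummy_avg):
    # Weighted blend of dummy_n 'dummy' ratings at dummy_avg with n real ratings.
    return (dummy_n * dummy_avg + n * raw_avg) / (dummy_n + n)

# Illustrative figures: lens A has few, lower ratings; lens B has many, higher ones.
lens_a = (7, 4.0)    # (number of ratings, raw average) - roughly the Venus
lens_b = (60, 4.4)   # roughly the Mitakon

for prior in (4.6, 4.0):  # prior mean from a 'best lens' subset vs. a broader population
    a = bayesian_average(*lens_a, 50, prior)
    b = bayesian_average(*lens_b, 50, prior)
    print(f"prior {prior}: A={a:.2f}, B={b:.2f}")
# prior 4.6: A=4.53, B=4.49  (A ranks above B)
# prior 4.0: A=4.00, B=4.22  (B ranks above A)
```

The order of the two lenses flips depending solely on the prior mean chosen, which is exactly the point.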

So it is not the approach per se that is the issue; it is the particular assumptions used, in my view.

Personally, I would rather have a raw list than one rejiggered with a problematic assumption set. To each his own.
 
I'll try to give an example of how it works, but first a bit of 'theory'. Keep in mind that the interpretation of Bayes' ideas is still debated, even though the 'Bayesian' average is widely used.

The Bayesian idea is that all statistics come with assumptions.

The assumption behind a 'raw' average is that the sample of observations used for the average is representative of the population, even if that sample is very small. Or, put another way, we assume that as more data comes in, the average won't change. This is clearly silly for a lens with only one rating of 5.

The Bayesian idea is that you can impose a different, but also valid, assumption and get a different, but also valid (and perhaps more accurate), result.

A commonly used 'prior' assumption is that, in the absence of any better information, the observed value will simply be the average over all of the options. For example, if you had no ratings at all for a particular lens, where would you expect its rating to end up after a whole lot of people had rated it? A reasonable answer is whatever the average rating is for all lenses.

But what if you had just one rating for the lens? Where would you expect the end rating to be? A reasonable answer is still the overall average rating for all lenses, with a little nudge in the direction of the one rating. As more ratings come in, you are prepared to move further away from the assumption and towards the data.

That's the core of the Bayesian average. You have a prior assumption that the rating will simply be the average (or something else), and each additional rating will push you further from that assumption towards where the data is leading.

It's not that the Bayesian average is based on assumptions and the raw average is not. They are both based on assumptions. You can see that this could (and does) get very philosophical, which is why I made the comment in the original post.

In practice it works by adding a number of 'dummy' ratings at your assumed value, so that they initially play a stronger role than the data; as more data comes in, the dummy ratings play less of a role.

Here is an example.

Let's say that on average lenses have 50 ratings, with an average rating of 4.3 out of 5 (most people are pretty generous in their ratings).

A lens with 100 ratings and an average rating of 4 will have a Bayesian average rating of 4.10 (including the 50 'dummy' ratings of 4.3).

A lens with 10 ratings and an average rating of 3.8 will have a Bayesian average rating of 4.22 (including the 50 'dummy' ratings of 4.3).

So, the lens with fewer ratings and a lower average rating will actually have a higher Bayesian average rating! This is because the lens with fewer ratings has not moved as far away from the overall average rating as the lens with more ratings.

Put another way, we are more sure of the result (it's a below-average lens) for the lens with more ratings.
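If you want to check the arithmetic, here is a minimal Python sketch of the calculation above (the function name and the default values are just for illustration):

```python
def bayesian_average(n, raw_avg, dummy_n=50, dummy_avg=4.3):
    # Blend the lens's n real ratings with dummy_n 'dummy' ratings of dummy_avg.
    return (dummy_n * dummy_avg + n * raw_avg) / (dummy_n + n)

# The two lenses from the example above:
print(f"{bayesian_average(100, 4.0):.2f}")  # 4.10
print(f"{bayesian_average(10, 3.8):.2f}")   # 4.22
```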

I understand that the Bayesian average is a better predictor of the 'end' rating than a 'raw' average, which is obviously why it is used a lot. I believe this can be proven mathematically as well, but I just don't know how to do it.

Anyway, I've put both the 'raw' ratings and the Bayesian average in the table, so you can use whichever you like.

But Bayes' followers would say: don't think that just because the 'raw' average is simple to calculate it therefore has no assumptions underneath it. The 'raw' average just has different (and less predictive) assumptions than the Bayesian average. And vigorous discussion ensues :-)

(apologies for the long rambling post which really has nothing to do with lenses)
 
