buynoski, joined Aug 9, 2021
sh10453: Bla bla bla.
How hard is it to find the area if you know the length and width???
How hard is it to find the diagonal if you know the length and width???
And you dare to say "the engineer in me ..."???
How embarrassing!!!
Pythagoras is now trying to rise from his grave!!!
That must be aimed at me :-) since I'm the only one who invoked the word "engineer." The length and width are useful, but I thought the area and aspect ratio a better pair of statistics. You can, of course, compute both the area and aspect ratio from length and width, and compute length and width (and the diagonal if you find that useful; I don't) from the area and aspect ratio. So the two are more or less equivalent; it's only a matter of small conveniences (which numbers are given, which you have to compute) to choose one or the other. You may find 24 X 36 more convenient than 864, 2:3 and that's fine; for me, the most important numbers I want to look at are the total area and the aspect ratio, which is why I suggested (in the post down below) what I did. If we really want to get into details, perhaps the best thing to add is an individual pixel's area, which has significance concerning noise and dynamic range.
The engineer in me asks: why not just use the area (to the nearest square millimeter) and aspect ratio? The "old system" is (as DPR has noted) essentially useless, so cast away all traces of it. This way a "full-frame" (also a misnomer, even if entrenched usage) sensor is an 864, 2:3 sensor (864 square millimeters, 2 to 3 aspect ratio). One of those 6.6mm X 8.8mm sensors will be indicated by 58, 3:4, and so on. Simple, carries all the needed information, and actually means something. Oh, and by the way, this method makes it hard indeed for marketing departments to obscure reality with hype.
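For anyone who wants to play with the two descriptions, here is a minimal sketch of the conversion both ways (Python, with illustrative function names of my own; the only assumption is that the aspect ratio is quoted short:long, e.g. 2:3):

```python
from fractions import Fraction
import math

def to_area_aspect(width_mm, height_mm):
    """Sensor width x height (mm) -> (area in mm^2, aspect ratio as short:long)."""
    short, long_side = sorted((width_mm, height_mm))
    ratio = Fraction(str(short)) / Fraction(str(long_side))  # e.g. 24/36 -> 2/3
    return short * long_side, (ratio.numerator, ratio.denominator)

def to_width_height(area_mm2, aspect_short, aspect_long):
    """(area in mm^2, aspect ratio short:long) -> (short side, long side) in mm."""
    # area = s * l and s / l = aspect_short / aspect_long, so s = sqrt(area * short / long)
    short = math.sqrt(area_mm2 * aspect_short / aspect_long)
    return short, area_mm2 / short

print(to_area_aspect(36, 24))      # (864, (2, 3))     the "full frame" case
print(to_area_aspect(8.8, 6.6))    # (~58.1, (3, 4))   the 6.6mm x 8.8mm example
print(to_width_height(864, 2, 3))  # (24.0, 36.0)
```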
In the grand prize winner, the bird with the vole is a juvenile (the brown feathering is the ID characteristic), so it is possible the second bird (no brown feathering) is the juv's parent. Either that, or it's another adult kite about to attempt kleptoparasitism :-)
3Percent: "And are we really surprised? The end of the DSLR as a mainstream product was clear ..."
I guess it's clear now, Richard. Nobody with years of ties to the major companies will go on record saying so, but this is as much confirmation as I need.
Personally, I think Nikon and Canon may come to regret fully abandoning these legacy mounts. There are still photographers who will buy, and prefer to use, DSLRs for as long as photography exists.
I have no issue with slowing the introduction of new models, or with reducing the total number of models; in fact, there were obviously too many DSLR models even during the heyday of the DSLR.
But abandoning them completely doesn't sit well with me. Leica has kept making the dang rangefinder, which has been around MUCH longer than SLRs and DSLRs have, because they saw a market for it.
People do seemingly (to me) odd things in photography. Consider film cameras, which nearly vanished but are now having something of a renaissance, in spite of the fact that digital has better specs. In the same vein, some 'old' lenses have a 'special look' to them, in the view of some folks (by no means all of them old fossils like me :-), compared to the technically superior 'new' lenses designed with better and far faster computer-aided equipment and more exotic optical glasses. So it will come as no surprise to me if DSLRs have a resurgence after the "NEW! DIFFERENT!" pizzazz of mirrorless wears off. Expect, for example, paeans for the "see it as it really is" characteristic of an optical viewfinder, with its infinite (well, OK, speed-of-light "limited") refresh rate and no artificial electronic screen in the way...
BackToNature1: The Nikon AF-S NIKKOR 500mm f/5.6E PF ED VR Lens is $3,596.95. That is for the F-Mount Lens/FX Format. So the mirrorless version should be a lot cheaper? I would be more than happy with a 500mm f/5.6E. That's way more reasonable to me price-wise.
Expanding a bit on what Mr. JasonTheBirder said, tradeoffs exist between a 500mm and an 800mm. Clearly the 500mm is lighter (3.2 lb vs. 5 lb) and easier to carry around, but doesn't have the "reach" of the 800mm. Not quite so obvious is the ease-of-tracking vs. "reach" tradeoff. Going "too far" in the focal length department makes tracking more difficult because of the narrower field of view of the longer lens (at infinity focus for a full-frame sensor, 2.6 degrees for 800mm vs. 4.1 degrees for 500mm), at least for me. When I put a teleconverter on the 500mm (so an effective 700mm), I find it harder to find and track the bird. This is almost certainly something that will vary from person to person (because hand-eye coordination ability does), and a narrower field of view is a more stringent test of that coordination. That is especially true for smaller, faster birds that dart this way and that without much warning.
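For anyone who wants to check those field-of-view figures, here is a quick sketch using the standard angle-of-view formula across the long (36mm) side of a full-frame sensor at infinity focus; the code is just illustrative, and the results are rounded:

```python
import math

def angle_of_view_deg(focal_length_mm, sensor_side_mm=36.0):
    """Angle of view (degrees) across one sensor dimension, at infinity focus."""
    return math.degrees(2 * math.atan(sensor_side_mm / (2 * focal_length_mm)))

# Horizontal (36mm side) field of view on a full-frame sensor:
for f in (500, 700, 800):
    print(f"{f}mm: {angle_of_view_deg(f):.1f} degrees")
# 500mm: 4.1 degrees
# 700mm: 2.9 degrees   (the 500mm plus 1.4x teleconverter case)
# 800mm: 2.6 degrees
```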
probert500: Looks good for street photography.
...especially if the street is in the next county :-)
buynoski: Hmmm...the equivalence discussed is incomplete. That's because the blur due to depth of field (and, if you wish to be more accurate, including diffraction and other blurs as well) must be multiplied up from its value on the sensor to its size in the final presented image (be it on a monitor or a print). Because smaller sensors require their image to be magnified more to reach a given viewing size, their blurs are also magnified more and this effect undoes much of the so-called advantage in depth of field.
Computing all this is not trivial, but the net result is that any on-sensor depth of field advantage of a smaller sensor is greatly reduced if you require the final viewed image blur to be the same. And, that additional multiplication also raises other blurs: camera motion, subject motion, lens aberrations (e.g. coma, chromatic, astigmatic), diffraction, etc. Net result: not much if any advantage left.
CarVac, the diffraction blur is 0.854N(m+1) = 0.854Nu/(u-F) microns if you use the radius of the Airy disc as the "effective size" of the blur, or 1.708Nu/(u-F) if you use the diameter. Since the effect is much stronger toward the center of the Airy disc, I tend to use 0.854. Now, if you have the same blur on the sensor (which is what you seem to be arguing), then when magnifying up to viewing size, the 16X24 APS-C sensor must be magnified by 1.5X more than the 24X36 sensor, and that will also magnify the on-sensor diffraction blur of the APS-C by 1.5X as well (0.82X replaces 1.5X when comparing the 33X44 sensor with the 24X36 sensor). So the APS-C diffraction blur is larger in the final viewing image; nobody looks at sensor-sized photos, be it on the camera's back-panel screen, in a print, or on a monitor. There was some of that in the "old days." Remember contact prints? (though they were usually viewed with a magnifying loupe, increasing the sensor-viewing magnification).
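To put rough numbers on that scaling argument, here is a small sketch using the 0.854N(m+1) form above and the crop factor as the extra sensor-to-viewing magnification; the f-number, focal length, and subject distance are made-up example values, not anything from the article:

```python
def diffraction_blur_um(N, u_mm, F_mm, k=0.854):
    """On-sensor diffraction blur (microns): k * N * (m + 1), with m = F / (u - F)."""
    m = F_mm / (u_mm - F_mm)
    return k * N * (m + 1)

# Hypothetical example: f/8, 500mm lens, subject at 20 m.
blur = diffraction_blur_um(N=8, u_mm=20_000, F_mm=500)
print(f"on-sensor diffraction blur: {blur:.2f} microns")   # ~7.0 microns

# The blur on the sensor is the same for both formats, but at equal viewing size
# the APS-C frame is enlarged 1.5x more, so its blur in the viewed image is 1.5x larger.
print(f"APS-C blur at the same viewing size, full-frame equivalent: {1.5 * blur:.2f} microns")
```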
Ed, I'm not sure that's quite right. DoF at the sensor level depends on magnification, and that is a function of focal length and distance to subject only, at least in the classic thin-lens approximation: m = F/(u-F) and DoF = 2Nc(m+1)/(m^2), where u is the distance to the subject, F is the focal length, N is the focal ratio, c is the circle-of-confusion diameter, and m is the magnification from the subject onto the sensor. It does not depend on aspect ratio. If you plug the m formula into the DoF formula, you get DoF = 2Nc(u/F)((u/F)-1), which depends on N, c, and the u/F ratio (rather than on u and F separately). See Kingslake, "Optics in Photography." However, if the final viewing images are all to be the same size, then aspect ratio can affect the sensor-to-viewing magnification, and therefore needs consideration in that respect.
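A quick numerical check that the two forms of the formula agree (the focal length, f-number, distance, and circle of confusion below are just example values I picked):

```python
def dof_via_magnification(N, c_mm, u_mm, F_mm):
    """DoF = 2*N*c*(m + 1) / m^2, with m = F / (u - F)."""
    m = F_mm / (u_mm - F_mm)
    return 2 * N * c_mm * (m + 1) / m**2

def dof_via_ratio(N, c_mm, u_mm, F_mm):
    """The same formula after substituting m: DoF = 2*N*c*(u/F)*((u/F) - 1)."""
    r = u_mm / F_mm
    return 2 * N * c_mm * r * (r - 1)

# Example: 50mm lens at f/4, subject at 3 m, c = 0.03mm (all hypothetical values).
args = dict(N=4, c_mm=0.03, u_mm=3000, F_mm=50)
print(dof_via_magnification(**args))  # ~850mm
print(dof_via_ratio(**args))          # same value
```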
A "rogue" planet entering the solar system will very likely be on a trajectory and at a speed that will not result in capture. It would have to lose energy somehow to end up in orbit around the Sun. A collision with something might do it, but another result of that is going to be a whopping shower of large space debris all over the place, with some of that crashing down onto Earth (eeek!).
whitelens: A viewfinder refresh rate of 60 FPS is not what I would consider PROFESSIONAL.
I think people used to shooting professional DSLR Nikons will enjoy that experience.
I must agree with Whitelens (the original post in this series). I mostly shoot birds in flight, and thus do a lot of panning. No mirrorless camera I've looked through (with a very long focal length lens attached) appeared anything other than blurry, compared to the effectively "infinite refresh rate" viewfinder of my D850, when following a bird in flight. In static shooting situations, probably no difference will be noticed, but (at least for me) that does not hold up in dynamic situations.
Cheapo Marx: Name That Place!
Looks like the Italian peninsula, then the Adriatic, then the Balkans.
Automobile applications for integrated circuits are very different from those for high-speed logic (the chips with the smallest dimensions, and the hardest to make from a lithographic point of view). Many auto. apps require higher voltages and resistance to voltage spikes caused by switching inductive loads. Auto mfrs. notice chip-induced customer complaints at very low ppm levels, so low that semiconductor companies can't fully test for them without testing so expensive as to be uncompetitive; this makes it very hard to convince auto mfrs. to allow changes to proven processes. The temperature and humidity ranges for automotive chips are much wider than for high-speed logic; cars must work from Siberian winters to the Sahara desert. Take it from an old semiconductor engr., automotive chip requirements are very tough indeed, in the running for the toughest in the manufacturing world.
There being some argument about what's a crow, and what's a raven in Australia, here is a list of the members of genus Corvus in Australia and Melanesia, with both the scientific and common names:
Little Crow, Corvus bennetti
Australian Raven, Corvus coronoides
Bismarck Crow, Corvus insularis
Brown-headed Crow, Corvus fuscicapillus
Bougainville Crow, Corvus meeki
Little Raven, Corvus mellori
New Caledonian Crow, Corvus moneduloides
Torresian Crow, Corvus orru
Forest Raven, Corvus tasmanicus
Relict Raven, Corvus (tasmanicus) boreus
Grey Crow, Corvus tristis
Long-billed Crow, Corvus validus
White-billed Crow, Corvus woodfordi
The words "raven" and "crow" are not scientific names. That said, generally (at least here in N. America...I'm a N.A. bird photographer...Australian bird photographers will know much more about their local species) those members of genus Corvid called ravens are larger than those called crows.
W5JCK: Wow, what a loud drone!
Since that is in Australia, they are heading into spring now, so the raven probably has a nest nearby. Just protecting her babies and territory. Don't mess with mom!
Yes...that noise is perhaps what drove the bird (dare I say it?) stark raven mad :-)
badi: and that is why you must always test your lens sharpness on cat photos!
Break, yes (remembers the time a cat rolled a lens onto a concrete floor while the photographer was changing lenses). As to make, I doubt the cats would condescend to doing actual work for pay :-)
Not being a Facebook or Instagram user, I leave this comment to suggest an additional comparison video: setups for birds-in-flight photography. You can have some minor controversies (OVF vs. EVF, perhaps, or how much speed is needed), several price-point categories (the lenses seem to come in three price "levels": $1000-2000, $3000-5000, and $10000+), you get to vicariously "spend" gigantic heaps of cash, and there are all sorts of camera bodies to play with. Could be fun. Could be expensive, too; renting many of those lenses may bankrupt somebody somewhere :-)
stateit: @Don Komarechka
Don, why the distracting white lab coat?
If you really insist on it, at least have a couple of pens sticking out of the breast pocket.
Thanks.
And a pocket protector! Can't possibly forget the pocket protector :-)
MikeRan: Interesting the image circle required for this sensor is 7% larger than the standard full frame image circle.
They probably did that so the individual pixel areas could be larger, to improve the dynamic range (it computes to almost exactly 25 square microns given the article's figures for sensor size and megapixel count). The larger image circle also leads one to wonder whether lenses designed for full-frame 35mm cameras will vignette in the corners.
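The per-pixel area is easy to check from the sensor dimensions and pixel count; here is a minimal sketch, where the 36 x 24 mm size and megapixel count are placeholders I chose to illustrate the arithmetic, not the article's actual numbers:

```python
def pixel_area_um2(sensor_width_mm, sensor_height_mm, megapixels):
    """Approximate area of one pixel, in square microns."""
    sensor_area_um2 = (sensor_width_mm * 1000.0) * (sensor_height_mm * 1000.0)
    return sensor_area_um2 / (megapixels * 1e6)

# Placeholder example: a 36 x 24 mm sensor at 34.56 MP works out to a 25 square-micron pixel.
print(pixel_area_um2(36, 24, 34.56))  # 25.0
```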