Galaxy S24 Ultra: A missed opportunity?

AksCT

Forum Enthusiast
I have been using my S21 Ultra for about 3 years; it offers a great lens/sensor set for photography. I had been eagerly awaiting the S24U to replace my phone, but it seems Samsung has missed the mark with the new model.

Certainly better than the S21U in many respects, but I'm not sure it's worth the upgrade.

I'd love to hear hands-on experience from those who will be using the S24U for photography.
 
Why do you lead your question with such negativity? It's a new phone, and it's better than the S21 Ultra in every way. Why such a pessimistic spin on it?

Simply ask whether people who have the phone like it. Then sit back and see what your polled group says.

There are tons of reviews online now specifically looking at photo and video quality and comparing it with DSLRs and other smartphones.
 
What breakthrough updates did you expect from the S24 Ultra? Sony A9 quality with a built-in 1,200 mm lens? I think manufacturers are now concentrating on things other than cameras, since those are deemed "good enough". The S24U's main updates are "AI" and 7 years of software support. If that is not enough for you, well, then you have saved a lot of money.

It is a substantial upgrade from the S21U though, especially outside the US (Snapdragon instead of Exynos).
 
Out of curiosity, what do you think it missed on?
 
Thanks for asking. Please note that my comment is about photographic features, not other aspects such as the CPU or screen. Also, I am not saying there are no improvements from the S21U to the S24U.

I wish that, instead of keeping the 3x optical zoom, they had kept the 10x optical zoom with a better sensor/lens combo.

The 200MP and 50MP sensors are amazing, but I wish they had used physically larger sensors, and thus achieved better SNR.

From what I read, the AI features seem half-baked and gimmicky at this stage.
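To put the "larger sensor → better SNR" point in rough numbers: in the shot-noise-limited case, SNR scales with the square root of the light collected, which is roughly proportional to sensor area for the same scene and exposure. A back-of-envelope sketch — the sensor areas below are illustrative, not Samsung's actual specs:

```python
import math

def snr_gain(area_a_mm2, area_b_mm2):
    # Shot-noise-limited: SNR ~ sqrt(photons collected) ~ sqrt(sensor area),
    # assuming the same scene, exposure time and equivalent optics
    return math.sqrt(area_a_mm2 / area_b_mm2)

def stops_advantage(area_a_mm2, area_b_mm2):
    # The same ratio expressed in photographic stops (factors of 2 in light)
    return math.log2(area_a_mm2 / area_b_mm2)

# Illustrative areas: a ~1/1.3" phone sensor (~75 mm^2) vs a 1/2.55" one (~25 mm^2)
print(f"SNR gain:  {snr_gain(75, 25):.2f}x")              # ~1.73x
print(f"Advantage: {stops_advantage(75, 25):.2f} stops")  # ~1.58 stops
```

So tripling the sensor area buys you only about 1.7x in shot-noise SNR, which is why sensor-size gains feel incremental from one generation to the next.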
 
Just use GCam with your S21 Ultra, and your image quality will take a significant leap forward.
 
A missed opportunity? Not according to DxOMark (18th place), no. Samsung has the ability to release a knockout punch to every phone manufacturer but prefers to shuffle along year in, year out. Shareholders are loving it, though.
 
Mobile raw photography has actually plateaued.

The focus now is on computational photography, low-light performance, and packing in bigger sensors.

Someone on Reddit recently compared an iPhone 11 with the latest iPhone 15, and the results were very similar:

https://www.reddit.com/r/iPhoneography/s/sPF2q8ArR1
I agree with your comment about computational photography, which is in its infancy. With improvements in algorithms and hardware, AI tools will be incorporated into both mobile and traditional photography.

However, I am not sure mobile photography has plateaued. Based on existing technologies, there are so many sensor/lens improvements that could be incorporated today. I am guessing the inclusion or exclusion of new hardware and features is not only a matter of availability but also of economics and business competition.

Considering how much sensor technology has improved (with more to come), it is likely that the smartphone sector could go through a product bifurcation at some point, i.e. (1) smartphone models for general use (smartphones with photo/video capabilities) and (2) mobile camera systems with smartphone capabilities. The Samsung S Ultra series seems to be moving in the second direction.

Companies like Samsung have the full technical capability to develop such devices now or later (similar to the failed multi-camera "Lens" project; Lens was a dedicated camera system but lacked smartphone features).

The key questions are the price point customers are willing to pay and market demand.
 
I am fully aware the technology is there.

But I was answering your post and telling you to manage your expectations.

I use the word plateaued here to mean companies offering the same as in the past, or only minimal improvements. Take your original post: you say the S24 Ultra was a missed opportunity? You probably thought there is not much difference from, say, the S22 Ultra or S23 Ultra, hence that statement.

What companies are offering today is not much different from what was available 4-5 years ago (for example, the iPhone 11 vs iPhone 15 test pictures).
 
I agree with DavidoffCafe; mobile raw photography is not interesting anymore. There is no such thing as mobile raw photography anyway. The RAW output from devices released in the last few years is not really raw: even these files involve some level of computational photography. No phone will ever output a single frame of RAW by default; if it did, nobody would want to take RAW photos on their phone. Right now, the RAW photos we get from our phones are a stack of many frames, merged together and then enhanced through processing. Of course, this is a good thing for users.
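The "stack of many frames" idea can be illustrated with a toy simulation: averaging N aligned noisy frames cuts random noise by roughly √N. This is only a sketch of the principle — real pipelines also align frames, reject ghosts, and weight exposures:

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat gray "scene" captured as 8 noisy frames (noise modelled as Gaussian)
scene = np.full((64, 64), 128.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(8)]

single_noise = (frames[0] - scene).std()
stacked = np.mean(frames, axis=0)      # naive stacking: average aligned frames
stacked_noise = (stacked - scene).std()

print(f"single frame noise: {single_noise:.1f}")    # ~10
print(f"8-frame stack noise: {stacked_noise:.1f}")  # ~10 / sqrt(8) ~ 3.5
```

Eight frames buy you almost three times less noise before any "enhancement" even starts, which is why stacked output looks so much cleaner than any single exposure could.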

Google, which pioneered this approach, has used the same camera sensors in its Pixel devices for almost 6 generations and has taken their photo capabilities to the next level in each series. For some reason, the brands expected to be at the forefront of technology have always been cautious and slow. Brands such as Apple, Samsung, Google, Sony and LG have always lagged behind in terms of hardware. Thanks to Chinese brands gaining ground in the market in the last few years, the big brands have started to show some development, because they are afraid of losing their position.

Brands such as Samsung and Apple have released the same or similar hardware for years, generation after generation, and have still been successful thanks to their brand strength.

However, now that users are more demanding and curious, the mobile market has gained momentum.
 
 
To be honest, RAW photography is a big thing, at least on iPhones.

Not sure which generation introduced it, but I used it for every photo from the iPhone X up to the 13 Pro.

You get a real RAW here: pixel data before demosaicing, with no sharpening, noise reduction, or anything else.

And the sensor size of an iPhone 14 or 15 Pro isn't far from premium compact cameras like the RX100. They aren't class-leading, but these sensors can produce amazingly clear and detailed images at low ISO with quite good dynamic range.

In fact, 100% crops shot at base ISO with the main camera don't look that different from my mFT or APS-C setups. The difference is that the iPhone's real RAWs have only 12MP, while mFT comes with 20MP and APS-C with 26MP. So the big cameras offer more crop potential (besides being less prone to flares/CAs thanks to better lenses, and having better dynamic range).

Keeping the ISO low is easy with non-moving subjects, as the IBIS allows me to take handheld photos at up to 1 s most of the time.



The iPhone 14 Pro / 15 Pro changed that in some way, as you get 48MP with "ProRAW", which is a stacked and pre-computed format.
I guess manufacturers don't offer real RAWs of their high-resolution output, as most of them use pixel binning (quad-Bayer filter).
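The pixel-binning trade-off can be sketched too: a quad-Bayer sensor averages each 2×2 block of same-colour pixels into one output pixel, trading resolution for noise. A simplified single-channel toy (real binning works per colour channel on the raw Bayer data, often in sensor hardware):

```python
import numpy as np

def bin_2x2(raw):
    # Average each 2x2 block: e.g. a 48MP quad arrangement -> 12MP output
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(1)
raw = np.full((100, 100), 100.0) + rng.normal(0, 8, (100, 100))

binned = bin_2x2(raw)
print(raw.shape, "->", binned.shape)                                    # (100, 100) -> (50, 50)
print(f"noise: {(raw - 100).std():.1f} -> {(binned - 100).std():.1f}")  # ~8 -> ~4
```

Averaging four pixels halves the random noise, which is exactly why the binned 12MP output is the default and the full-resolution readout only appears in pre-processed formats.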



So in good lighting, ProRAW might look better. But with high contrast or poor lighting, you often see smudged areas in the shadows due to overly heavy noise reduction.



A 12MP real RAW is often more pleasing to the eye, even more so in the days of AI noise reduction. As those real RAWs aren't already demosaiced, you can take full advantage of, e.g., Adobe's AI noise reduction features, which can't be used with the pre-processed, stacked formats.



And this is a big advantage, as that kind of processing is too computationally heavy to be done on the fly. So noise reduction in the stacked output formats of the otherwise very "smart" phone processing is still lagging behind (which will most likely change in a few years).



Because of this, I still shoot 12MP RAW when a situation can't be reproduced and a picture ruined by the "smartness" of the phone might be gone forever.

If there's a lot of good light, I often try ProRAW, and if there's enough time I take both. 😉

I can't compare the situation with Android, as I've never owned one.
 
First of all, thank you for your detailed response.

I just reread my earlier message and realized my wording wasn't quite right.

What I really wanted to say is that single-frame RAW is not preferred as much as it used to be (I am talking about smartphones). Although some offer a single-frame option as you mentioned, as an exception, all smartphone manufacturers stack RAW photos, and in the last few years this has increased with sensors using pixel binning. These stacked images look pretty good in RAW or JPEG format, so the manufacturer makes its device look superior and the majority of users are satisfied. This is what I was trying to explain: even some professional photographers now prefer stacked RAW images, pre-configured by the manufacturer, over single-frame RAW. This way they get a partially enhanced RAW image and save time, something ordinary users never invest anyway.

But it can be done if desired. Many third-party applications on Android can capture raw images at various bit depths within the capabilities of the smartphone, even raw video if desired.

Leaving personal taste aside, thanks to computational photography a smartphone can give us in a second an image that would otherwise take an hour of work to develop. Even if some of the algorithms are overkill, it's possible to get an image with very good detail and texture that is at the same time remarkably free of noise and distortion.
 
Sure, stacked RAW formats have come a long way and bring some very interesting and helpful features. 🙂

Especially since many smartphone camera reviews seem mainly concerned with image noise, and manufacturers therefore tend to produce overly clean results.

With stacked photos you get better-looking dark areas most of the time, due to better shadow detail and less noise reduction needed in the process. You also end up with better dynamic range.

Many phones also deliver very good tone mapping (darker skies, brightened shadows) without a too-heavy HDR look.
And at least with Apple's ProRAW in Adobe apps, you can regulate this tone mapping: as it's not baked into the RAW image but added as an additional layer in the RAW file, you can adjust its strength from a natural photo look at one end to an HDR look at the other.
The default slider position gives an image that looks exactly like a JPG from the phone.

Like you said, this can save a lot of post-processing work, and I would love to see something like that for my larger cameras as well.


So, my main problem with these formats is the lack of control and reliability in some cases, like smudged shadows or faces when the phone couldn't shoot or use a frame with a long enough shutter speed.

For example, a moving person might show aggressive noise reduction because the phone decided to freeze them but could only use a very short shutter-speed frame, applying more noise reduction there than on the rest of the image.
This can result in an image with an irritating look.

On the next image the phone might decide to keep the person's motion blur with less noise reduction, as it doesn't recognize them as the main subject of the photo, and use a longer shutter-speed frame.

Both decisions might be in my interest, or completely wrong, depending on the scene. 😉

Or smudged shadow detail, because the camera didn't take a frame with a shutter speed long enough for the high dynamic range; this sometimes happens when shooting in bright sunlight with some low-contrast dark areas.
Details can be killed by noise reduction very quickly here. At least this point could see huge improvements once phones can run better AI noise reduction fast enough to apply it on the fly while shooting/saving the file.

That's why I tend to stick with RAW in some situations.
But I agree that most phone photographers aren't interested in spending that much effort on their photos.
And stacked photography has increased quality for the masses by a lot. 👍
 
Here are some expectations:

1: get rid of all the 10MP and 12MP sensors

2: give us modern eye-tracking AF for humans and animals that isn't limited when shooting at more than 12MP

3: a smart shutter-speed limit you can set according to the subject's motion, but that uses a faster speed if the scene gets brighter (instead of the overexposure you get with a fixed shutter speed in pro mode)

4: fix RAW support
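Request #3 comes down to a little exposure arithmetic. Using the standard relation 2^EV = N²/t at ISO 100, a "smart limit" would hold the shutter at the motion cap and raise ISO in the dark, but shorten the shutter (rather than overexpose) when the scene brightens. A hypothetical sketch — the function name and defaults are mine, not any phone's API:

```python
def pick_exposure(scene_ev100, max_shutter_s, aperture=1.7,
                  base_iso=50, max_iso=3200):
    # Correct exposure: 2^EV100 = N^2 / (t * ISO/100)
    # => t = N^2 * 100 / (2^EV100 * ISO)
    n2 = aperture ** 2
    t_needed = n2 * 100 / (2 ** scene_ev100 * base_iso)
    if t_needed <= max_shutter_s:
        # Bright scene: shorten the shutter at base ISO instead of overexposing
        return t_needed, base_iso
    # Dark scene: hold the motion-limit shutter and raise ISO instead
    iso_needed = n2 * 100 / (2 ** scene_ev100 * max_shutter_s)
    return max_shutter_s, min(iso_needed, max_iso)

print(pick_exposure(14, 1/30))  # daylight: short shutter, base ISO
print(pick_exposure(5, 1/30))   # dim: shutter pinned at 1/30, ISO raised to ~271
```

A fixed shutter speed in pro mode only solves the dark half of this; the bright half is where the clipping comes from.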
 
Mobile raw photography has actually plateaued.
Regressed:

ExpertRAW is a total scam.

ProMode no longer has shadow or highlight recovery.
 
A 12MP real RAW is often more pleasing to the eye.
50MP RAW is even better, as the finer noise looks better when scaled down.
 
Of course it is. 🙂
But the current iPhones won't give you access to pure unstacked RAW data at the higher resolution.
So it's 12MP real RAW vs. 48MP stacked/pre-processed files, where you might be confronted with surfaces that are already too denoised (which you can't undo in post).
But again, the situation for some Android flagship models might be different.
 
Someone on Reddit recently compared an iPhone 11 with the latest iPhone 15, and the results were very similar:

https://www.reddit.com/r/iPhoneography/s/sPF2q8ArR1
As always, the scene and its complexity make one better than the other. Also, can we see the EXIF data? Where are the RAW files?

If you think a 1/2.55" sensor (~25 mm²) can match a 1/1.28" sensor (~80 mm²) from a much newer generation, you couldn't be further from the truth.
 
