And I don't even mean deep technical things like readout noise or sensor readout speeds, etc... Even basic decisions like exposure interfere with my creative process. That's why I'm a huge fan of camera automation (AF, exposure, etc...), and also why I believe many of today's best photos are taken with smartphones. Photographers sometimes say "that's a great photo in spite of it being taken with a smartphone". I say that's a great photo *because* it was taken with a smartphone. Camera automation liberates me from the workload of shooting and, more importantly, keeps my brain hemispheres separated, allowing me to focus on just seeing.
For a long time, I've tended to distrust camera automation, because it frequently gets things wrong. I still almost universally distrust autoexposure, partly because I know roughly how most camera AE algorithms work and I disagree with them. (Specifically, very few cameras offer effective highlight metering.)
Smartphones (especially Google's) are different: Google has published multiple papers describing various aspects of their pipeline.
I know that Google's phones meter exposures to preserve highlights. I agree with this approach.
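As a rough illustration of what "metering for highlights" means, here is my own sketch (not Google's metering code; the percentile and headroom numbers are made up): choose the exposure so a high percentile of scene luminance lands just below clipping, rather than dragging the average toward middle gray.

```python
import numpy as np

def highlight_metered_exposure(linear_luma, percentile=99.5, headroom=0.95):
    """Pick an exposure gain so the given luminance percentile lands just
    below clipping (1.0), instead of pushing the mean to ~18% gray.
    (Toy sketch; the percentile/headroom values are arbitrary.)"""
    bright = np.percentile(linear_luma, percentile)
    if bright <= 0:
        return 1.0
    return headroom / bright

# Toy scene: mostly dim, with a small bright region that average metering would blow out.
scene = np.concatenate([np.full(9000, 0.05), np.full(1000, 0.9)])
avg_gain = 0.18 / scene.mean()               # classic mid-gray metering
hl_gain = highlight_metered_exposure(scene)  # highlight-protecting metering
print(f"mid-gray metering gain: {avg_gain:.2f} (clips highlights: {scene.max() * avg_gain > 1.0})")
print(f"highlight metering gain: {hl_gain:.2f} (clips highlights: {scene.max() * hl_gain > 1.0})")
```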
I know that metering for highlights carries a risk of reduced SNR not only in the shadows but also in midtones. I also know from reading their published research that Google compensates for this risk by taking multiple exposures and stacking them.
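The SNR recovery from stacking is easy to demonstrate with a toy model of additive Gaussian noise; averaging N statistically independent frames improves SNR by roughly sqrt(N). This is a sketch of the general principle, not Google's merge:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.1          # a dim midtone, in linear light
noise_sigma = 0.02    # per-frame noise (toy, Gaussian)
n_frames = 9

frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, 100_000))

single_snr = signal / frames[0].std()
stacked = frames.mean(axis=0)   # naive average; real merges weight/reject per tile
stacked_snr = signal / stacked.std()

print(f"single frame SNR:   {single_snr:.1f}")
print(f"{n_frames}-frame stack SNR: {stacked_snr:.1f}  (~sqrt({n_frames}) = {n_frames**0.5:.1f}x better)")
```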
I know that exposure merging can have issues with motion unless the pipeline allows subpixel alignment and/or does local motion estimation and compensation (as opposed to global motion estimation/compensation). I know from reading their published research that their newer implementations support this: legacy HDR+ did not, while the modern MFSR pipeline that originally shipped in Night Sight and is now the default in all modes does.
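To make the terminology concrete, here is a toy per-tile (local) motion estimator using integer block matching. It is only a sketch of the idea, not Google's align/merge; real pipelines additionally refine these offsets to subpixel precision and merge robustly:

```python
import numpy as np

def local_align(ref, alt, tile=32, search=4):
    """Per-tile integer motion estimation (toy block matching).
    Returns a (rows, cols, 2) array of per-tile offsets mapping alt -> ref."""
    h, w = ref.shape
    rows, cols = h // tile, w // tile
    offsets = np.zeros((rows, cols, 2), dtype=int)
    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * tile, c * tile
            ref_tile = ref[y0:y0 + tile, x0:x0 + tile]
            best, best_off = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + tile > h or x1 + tile > w:
                        continue
                    cand = alt[y1:y1 + tile, x1:x1 + tile]
                    err = np.abs(ref_tile - cand).sum()  # SAD cost
                    if err < best:
                        best, best_off = err, (dy, dx)
            offsets[r, c] = best_off
    return offsets

# Toy test: second frame shifted by (2, 3) pixels; inner tiles should all report it.
rng = np.random.default_rng(1)
ref = rng.random((128, 128))
alt = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(local_align(ref, alt)[1:-1, 1:-1].reshape(-1, 2))  # -> [[2 3] [2 3] ...]
```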
I know that exposure compensation/tonemapping involves tradeoffs that an automated system can get wrong. I also know from experience that Mertens' exposure fusion algorithm (aka enfuse) happens to be highly robust and rarely gets things "wrong" in a way that I dislike. (Yes, I know this is a matter of taste, but personally I'm almost always happy with the results from enfuse with little to no need to tweak things.) Last, I know from Google's published research that they use a variation of Mertens' algorithm to perform their local tonemapping.
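If you want to try Mertens fusion yourself, OpenCV ships a stock implementation. This uses OpenCV's version, not Google's variant, and the file names are placeholders:

```python
import cv2

# Bracketed exposures of the same scene (file names are placeholders).
paths = ["under.jpg", "normal.jpg", "over.jpg"]
images = [cv2.imread(p) for p in paths]

# Mertens exposure fusion: per-pixel weights from contrast, saturation and
# well-exposedness, blended across a multiresolution pyramid. No HDR merge,
# no explicit tonemapping curve; the output is directly displayable.
merge = cv2.createMergeMertens()
fused = merge.process(images)   # float32 result, roughly in [0, 1]

cv2.imwrite("fused.jpg", (fused.clip(0, 1) * 255).astype("uint8"))
```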
I distrust neural networks, but I know that Google only uses them for preview and for AWB in their "default" pipelines/settings (i.e. no Magic Eraser or whatever).
This knowledge of the image processing fundamentals, and of the design of the automation system, helps me gain trust in the automation. That trust has led me to use my Pixel far more often than my Sonys nowadays: the Pixel is so much more compact, I always have it with me, and I trust most of the fundamental decisions made in its design. (I disagree with Google's stance on what Adobe calls the "LinearRAW" photometric interpretation, i.e. storing an image that has been demosaiced but not color converted. Apple handles this one better; Google's approach is inferior when your stacking algorithm does subpixel alignment.)
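A toy illustration of why subpixel-aligned stacking changes the picture here: once frames are aligned onto a common grid, each output location can collect direct red, green, and blue samples from different frames, so no demosaicing interpolation is needed. This is my own sketch with integer shifts and an RGGB pattern purely for demonstration; it is nothing like a production merge:

```python
import numpy as np

def rggb_mask(h, w):
    """Which color each Bayer (RGGB) photosite measures: 0=R, 1=G, 2=B."""
    colors = np.ones((h, w), dtype=int)   # green everywhere...
    colors[0::2, 0::2] = 0                # ...red on even/even sites
    colors[1::2, 1::2] = 2                # ...blue on odd/odd sites
    return colors

# Four frames of the same scene, shifted by one photosite in each direction
# (stand-ins for the offsets alignment would recover from natural hand shake).
h, w = 6, 6
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]

# For each output pixel, which color channels get a *direct* sample from some frame?
covered = np.zeros((h, w, 3), dtype=bool)
for dy, dx in shifts:
    colors = np.roll(rggb_mask(h, w), shift=(dy, dx), axis=(0, 1))
    for ch in range(3):
        covered[..., ch] |= (colors == ch)

print("every pixel sees all of R, G, B directly:", covered.all())
```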