New negative conversion tool

Status
Not open for further replies.
You can't make any such assumptions just from looking at a histogram (which, as we all know, is just a graph of pixel distribution) without seeing the accompanying image and judging whether any clipped data was relevant to the integrity of the image or not.
How is this any different to what I've been saying... that a person has to view the image and make such a determination?

What I'm working on here is PURELY the conversion step, which ought initially to preserve all information so that these other determinations can be made afterwards.
 
The term works just fine for me, so I'll stick with it. In the color example you presented, the range of tonal values is clearly larger afterwards.

But since it doesn't work for you, what term do you prefer to use when you take a flat, unpleasant, contrast-lacking "true" inverted capture of a film negative and modify it so it will look more like the actual subject did?

Also, you didn't answer the question I asked: Can existing scanner software not produce the same output you're seeking with your tool?
Ok, so when it comes to terminology, if you work in programming on image pipelines, or in a studio pushing images around between different folks, etc... these are the terms in the textbooks, they get used to mean certain things, else communication doesn't work. But that's fine... colloquially, I don't mind how someone wants to think of a certain term, don't really have a preference. As long as we can hash out what we mean... cool.

But notice what's happening in the hists you post. For sure, most of the tonal information is in the lower two thirds of the range, but there IS information right up to the right-hand edge, and after adjustment, this all gets smooshed together, a good bunch of it clipping. Information has been lost.
But any clipping, if it exists, is what we actually want in this case, so it's fine.
Technically, even though SOME of the information has been spread over a greater tonal space, the actual amount of tonal information has reduced, which is to say the tonal range has been reduced.
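That kind of loss is easy to demonstrate numerically. A minimal sketch with made-up 8-bit tonal values (illustrative numbers, not the actual posted histogram):

```python
import numpy as np

# made-up 8-bit tonal values: most sit in the lower two thirds,
# but there is information right up near the top of the range
values = np.array([10, 40, 80, 120, 160, 200, 230, 250], dtype=np.float64)

# a linear stretch mapping [10, 170] onto [0, 255], the kind of
# contrast adjustment being discussed
stretched = (values - 10) * (255.0 / (170.0 - 10.0))

# everything pushed past 255 clips to white: three formerly
# distinct input values collapse into one output value
clipped = np.clip(stretched, 0.0, 255.0)
print(clipped[-3:])   # -> [255. 255. 255.]
```

Once several distinct inputs map to the same output value, no later adjustment can separate them again.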
I disagree that the tonal range has been reduced, but there's no point in arguing semantics.
[attached image]

And that's fine, but it's gonna be an image by image thing. In this example, it's cool, that kind of stretch is pretty much what you would want to do. But consider another possible example.

What if you had a scene, shot at night, low key lighting, with a little sheen on the road from some streetlight reflection. The histogram could look pretty similar to this. If you performed the same stretch on that image, then that sheen would go bright white, very harsh, and it would probably look like crap.

Thus, the converter shouldn't just be doing this by default, it has to be something that's decided on by the user.
I have never said that the digitizing tool should always take over those adjustments. We agree that the user should have control over that aspect. I said so in an earlier post.

But I like having such control built into the scanning software that I use. I can preview an image and perform a wide range of manual tonal adjustments, or allow the software to make those choices. (The software choices are typically very good, BTW.) Although I could do that work with different tools, I personally consider it an integral feature of digitizing software, which is what prompted my original comment in the thread.
As to your question on existing software... I don't know, maybe it can. But, I don't have access to the source code, so I can't find out. Much better is to work through the process myself, where I can fully analyse the results of every operation that is performed in the conversion process, and thus figure out what "system" of operations will give me what I want, the most plain, unaltered version of the negative's contents for then doing the further work.
Okay. At some point you might examine a trial version of VueScan. The developer has dedicated many years of work to that tool and provided extensive documentation, and will answer email inquiries.
 
I have never said that the digitizing tool should always take over those adjustments. We agree that the user should have control over that aspect. I said so in an earlier post.

But I like having such control built into the scanning software that I use. I can preview an image and perform a wide range of manual tonal adjustments, or allow the software to make those choices. (The software choices are typically very good, BTW.) Although I could do that work with different tools, I personally consider it an integral feature of digitizing software, which is what prompted my original comment in the thread.
Agreed. I want to finish the process with the scanning software. Personally I don’t want to do anything to the image once I’ve left the scanning software - and I don’t want to do much in there either (one reason I like SilverFast).

I hadn’t realised immediately that the images being shown were the end result of the proposed inversion - they need far too much work to be a usable route for me. Every extra program that needs to be used is just a time sink.
As to your question on existing software... I don't know, maybe it can. But, I don't have access to the source code, so I can't find out. Much better is to work through the process myself, where I can fully analyse the results of every operation that is performed in the conversion process, and thus figure out what "system" of operations will give me what I want, the most plain, unaltered version of the negative's contents for then doing the further work.
The source code to DarkTable is available to examine, plus the mathematics behind ColorPerfect has been published. The patents for the Fuji scanners are a matter of public record and have the algorithms listed. A number of Kodak patents have been published. A good read of Photo Engineer on Photrio discussing colour inversion will give you an appreciation of how difficult the process is. For example https://www.photrio.com/forum/threads/color-bias-in-characteristic-curve-in-negative-films.155654/
I have never said that the digitizing tool should always take over those adjustments. We agree that the user should have control over that aspect. I said so in an earlier post.

But I like having such control built into the scanning software that I use.
See, I don't really care about having it all under one hood. I'm accustomed to multi-app workflows.
The patents for the Fuji scanners are a matter of public record and have the algorithms listed. A number of Kodak patents have been published. A good read of Photo Engineer on Photrio discussing colour inversion will give you an appreciation of how difficult the process is. For example https://www.photrio.com/forum/threads/color-bias-in-characteristic-curve-in-negative-films.155654/
Yeah, I have done the background reading, I haven't just gone in blind you know. And while there's certainly some complexity, it's not difficult. As PE says in that thread you link, it's all linear algebra and calculus (which, ofc it is cos colour is a 3d structure with no linear response functions)... which is fine, cos I've been using that kinda math for years. The trick is finding the right little details, coefficients, etc. That's the tougher bit.
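As a sketch of that linear-algebra core, here's what undoing inter-layer coupling looks like as a plain linear solve. The 3x3 matrix and its coefficients below are made-up illustrative numbers, not measured film data:

```python
import numpy as np

# hypothetical coupling matrix modelling crosstalk between the
# three dye layers (illustrative coefficients, not real film data)
crosstalk = np.array([
    [1.00, 0.15, 0.05],
    [0.10, 1.00, 0.10],
    [0.05, 0.20, 1.00],
])

def unmix(measured_densities):
    # if measured = crosstalk @ true, then recovering the true
    # per-layer densities is a linear solve
    return np.linalg.solve(crosstalk, measured_densities)

true_d = np.array([0.8, 0.5, 0.3])
measured = crosstalk @ true_d
print(np.round(unmix(measured), 6))   # recovers [0.8 0.5 0.3]
```

The machinery is simple; as the post says, the hard part is finding the right coefficients, not performing the algebra.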
plus the mathematics behind ColorPerfect has been published,
Parts of that were an interesting read. They don't really describe their mask removal method, though they do walk through a Photoshop method which, while an ok-ish approximation that gets you close and may work for many, is in fact wrong.

What was more interesting was their noting of performing the actual inversion step as an inversion of density rather than intensity. That's something I'd not considered. They do, however, state that subtracting density is the same as dividing intensity... I don't think that can be right, since density is log(1/transmission)... but again, I suspect it offers a close approximation. Either way, I'm def gonna have to experiment with that idea some.

So cheers for pointing that one out.
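That idea can be checked numerically before experimenting. Since D = log10(1/T), subtracting a constant density turns out to correspond exactly (not just approximately) to multiplying transmission by a constant:

```python
import math

def density(t):
    # optical density from transmission: D = log10(1/T)
    return math.log10(1.0 / t)

def transmission(d):
    # inverse relationship: T = 10^(-D)
    return 10.0 ** (-d)

t = 0.25                    # an arbitrary transmission value
d_inv = density(t) - 0.3    # subtract a density offset

# subtracting 0.3 in density == multiplying transmission by 10**0.3,
# because 10^-(d - c) = 10^-d * 10^c
assert abs(transmission(d_inv) - t * 10 ** 0.3) < 1e-12
```

So "subtract in density" and "divide in intensity" are two views of the same operation, differing only by the log10 change of variable.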
 
This makes zero sense. The color palette of a final print was set during PRINTING via color filtration and choice of paper. That's why color negative film is called print film. Film stock has very little to do with it - it only provides the starting point.

... Your software will never know how I want my Fuji 400H to look like. The best you can do is to generate a neutral, low-contrast, inverted image for further tweaking.
See... there's a yes and no side to all that. Film is really an intermediate format used for "data collection" (odd to think of in the analogue world, but there it is), the purpose of which is to capture maximal information to provide flexibility in look development later. So to that extent it's true that it "can" look any way you want. But the fact that image data, or audio, or whatever else, can be manipulated into different forms doesn't mean there's no original base.
No disagreement here. I was not disputing the existence of the "original base". My point is that the final look of a photograph is entirely up to a person doing the scanning. The film of choice simply provides the starting point, and fully automatic color inversion is meaningless.
And when it comes to film stocks, why were there so many (more historically than now ofc)? Why did fashion photographers gravitate to one type, landscapers to another if they were all just so interchangeable?
Great question. If you look into Kodak's current or historical lineup, you will clearly see that their print films are spread across the following two dimensions:
  • Speed
  • Cost
That's it. Color has nothing to do with it. Even VC/NC saturation split has been retired once scanning replaced 100% chemical workflows. All pro films are simply spread on a grain/speed trade-off spectrum. The same is true for consumer films that have been optimized for a pretty look out of a mini-lab (Gold).

This is even more obvious with cine films. They differ strictly in speed. A film director decides on what kind of color grading they want.
It's because at "base" there were colour pipelines (profiles if you will) designed and baked in. If you took, say, Fuji Superia, developed it in the standard chemistry laid out by Fuji, printed it on the "recommended" paper by Fuji, dev'd it again as standard... it gave a certain look, which would be distinct from a given Kodak or Agfa, or even a different Fuji stock put through their respective pipelines.
Their design aim for pro grade films was always to deliver 100% parallel CMY curves. Any kind of deviation you see is a manufacturing/storage defect or a mass-printing optimization. Gold was made to look gold because it allows an automatic machine in a 1-hr mini-lab to produce 36 kind-of-OK looking prints from a roll with exposures all over the place, that's been stored @80F for two years. Trying to reproduce these "defaults" when scanning at home makes no sense.
The "grading" was baked into this process by the design and engineering of the separate components. Of course you could always deviate from that by intervening during the processing pipeline, but again, being able to manipulate things away from the default look doesn't mean there is no default look.
I didn't say there's no default look. I said that look is never meant to be preserved, and for that reason fully automatic inversion makes no sense. It's called print film for a reason - colors were set by color filtration and a choice of paper, which means "during scanning" in 2022.

However, a case can be made that people simply can't be bothered. They want one-click inversion that's not great, but isn't terrible either. That's more or less what Noritsu scanners do, but they use presets. You don't get "true Portra color" from them, you get "true Noritsu color" from them, i.e. once again - the film base is discarded in exchange for what Noritsu engineers thought would be pleasing for most.
 
I didn't say there's no default look. I said that look is never meant to be preserved
And there's the thing...

Yes, had the engineers making film been able to nail perfectly accurate colour representation no matter the conditions, there'd have been one stock (or one at various speeds) and that would have been that.

The characteristic "looks" that different stocks did and do have are really nothing more than imperfections, and engineers being forced to make trade-offs. Such characteristics were never meant to be preserved and with digitisation, they can be corrected for completely... sure.

But they did exist, and still do, and have consequently become individual aesthetics that people pursue for their character. To say that the preservation of such characteristics is meaningless isn't to argue a matter of fact, it's to argue a matter of taste.

And while it may well be true that a Noritsu scan's colours say more about the Noritsu than the film stock run through it... the alterations applied are still happening after that initial base point, and thus I'm saying I'd prefer to be the one making the decisions about those alterations, rather than simply accepting the ones put upon me by Noritsu (or whomever).

I'm not saying that the preserved, base, "true" look should be the final look (maybe it could be), but that's something I want to make the choices about.
 
There's a deeper and almost philosophical conclusion in this argument though. You are trying to deliver something you've never seen. The "true film look" that you believe in, you never saw it. You only saw prints and scans, and all of them were someone's interpretation of what's captured by an emulsion.

You have no idea what "it" looks like, and there's no way to measure how far you are from "it".

Nate of Negative Lab Pro fame is very honest about his intentions: he's emulating lab scanners. Andre, the creator of Negmaster, is honest about giving us a neutral starting point for further edits.

You are not honest. You claim to deliver "true film look" without ever seeing it. That's why you're sharing these results here: you're craving external validation of your delusions, but I am calling BS. Sorry. Print films don't have a look. They don't offer a complete image, just an intermediate building block used by a scanner operator to create an image they need.
 
Sometime around '98 or '99 when I'd kinda gotten my initial year of drum scanning under my belt and was acting as the chief beta tester for ColorByte, I started shooting some of my architectural interiors (Capital Studios for one) on Portra 160 because it had such a wide dynamic range and it just didn't care what light you shot it with. Nominally a daylight balanced film, I did shoot some with an 80A (it's been a while so if that's not right, y'all know what I mean) to balance for the tungsten lighting, but then I either forgot to put the filter on or just got lazy, and what I found is that it just didn't matter.

I'd just tape those Portra 4x5's to the drum (yeah, under C-42 mylar and Kami fluid) and let Trident do its magic, inverting the orange mask, setting the white and black end points, setting a neutral if there was one and just adjusting overall on a Sony Artisan or Barco Calibrator to make it look right and finally setting the aperture to 16 microns to match the grain of the film, and of course, turning all sharpening off in the scanning software. In a way, I was printing those negs directly to the calibrated monitor, and from there we'd convert to whatever was appropriate for the intended output. Not really any different than what we do today, only with film.
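A generic sketch of the endpoint-and-neutral steps described above. The function names, numbers, and normalisation scheme are illustrative only; this is not how Trident actually works internally:

```python
import numpy as np

def set_points(img, black, white):
    # per-channel black/white end points: values at or below black
    # map to 0.0, values at or above white map to 1.0
    return np.clip((img - black) / (white - black), 0.0, 1.0)

def grey_balance(img, neutral_rgb):
    # scale channels so a patch chosen as neutral comes out grey
    gains = neutral_rgb.mean() / neutral_rgb
    return np.clip(img * gains, 0.0, 1.0)

img = np.array([[[0.30, 0.40, 0.35]]])           # one illustrative pixel
img = set_points(img,
                 black=np.array([0.05, 0.10, 0.05]),
                 white=np.array([0.90, 0.95, 0.80]))
img = grey_balance(img, neutral_rgb=img[0, 0])   # treat that pixel as grey
print(img[0, 0])   # all three channels now equal
```

The order matters: the neutral is picked after the endpoints are set, so the grey balance corrects only the residual colour cast.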

The point is that I would make those scans look like I wanted them to, not how some preset somewhere thought they should look. You also learned how much you could do in the scanning software and what needed to be finished off in Ps. Well, everything needs to be finished there anyway, but some things were better done or faster in Trident and the more you were aware of that, the more efficient your workflow was.

I think, but I'm not sure, that I can bring a 16-bit TIFF, non-converted, into Trident and treat it as if it just came off the scanner. I'll have to try that when I get a chance. It just gets a bit cumbersome as the latest Mac OS it will run on is Mac OS 9.2.2 - y'know, circa 1998.
 
The patents for the Fuji scanners are a matter of public record and have the algorithms listed. A number of Kodak patents have been published. A good read of Photo Engineer on Photrio discussing colour inversion will give you an appreciation of how difficult the process is. For example https://www.photrio.com/forum/threads/color-bias-in-characteristic-curve-in-negative-films.155654/
Yeah, I have done the background reading, I haven't just gone in blind you know. And while there's certainly some complexity, it's not difficult. As PE says in that thread you link, it's all linear algebra and calculus (which, ofc it is cos colour is a 3d structure with no linear response functions)... which is fine, cos I've been using that kinda math for years. The trick is finding the right little details, coefficients, etc. That's the tougher bit.
I think if your one take-home from that was that it was just some matrix calculations and calculus then you must have been reading a different thread to me.
 
I think if your one take-home from that was that it was just some matrix calculations and calculus then you must have been reading a different thread to me.
No... just that that's the stuff most folk would consider to be "the hard part". I presume then that you're talking about the details of the overlap between the different colour layers, the interaction of the mask, etc, etc. That's really little different to information found elsewhere.

It may be true that finding the exact solutions to all these details might not be doable without access to the proprietary data, or very advanced measurement apparatus... but that doesn't mean one can't come very close. There is after all only one way to find out.

But the fundamental takeaway is that, while these many gotchas exist, none of it is magic, and it IS fundamentally just a math problem.
 
My reading of it was that correct inversion was dependent on access to large amounts of proprietary data, test equipment, personnel with tens of years of experience, etc. Things that no one since “peak film” (2004) has had realistic access to. That’s the hard part; hard-won expertise always is.
 
The band was called Black Rose and I took that photo. Here is my scan of the neg, from my files. I posted it in a previous DPR thread back in 2014, and I guess Google found it there. So much for copyright.

[attached image: scan of the negative]

And here's my conversion, done in 2014 :

[attached image: the 2014 conversion]

Don Cox
 