New negative conversion tool


NegativeCreep

Hi... first post.

So, I'm currently working on creating a new tool for negative conversion. I'm getting back into photography after many years away, and at some point I'll want to convert any negs I shoot, plus go back and rescan some old ones at higher quality.

As such, I started looking over options for scanning and converting, was struck by the somewhat varied options for conversion, and after picking over the details... decided to just create my own software for it.

The aim is to produce a conversion that is very automated and whose output is as accurate as possible to the "true" look of the film stock.

Of course, with various ideas flying round my head about how best to do it, I've gone down this path before even digging my old negs out from under the stairs, so at the moment all my tests have been on various negative images I've dug up through Google search... I've attached them to show the current state of conversion. Note: some of the examples do not have visible film borders (I haven't cropped them out; they're just not present in the images I've gathered... getting an accurate result under such circumstances is one of the problems I'm trying to solve).

It'd be good to hear any feedback, input, etc... and if anyone has neg scans they wanna post for me to run through the pipeline as a test, I'd be most grateful to give 'em a try and see what you think of the output.

Hello again, and cheers.



[Attachments: six example conversions]
 
You are Lukas Büsse and I claim my £100! :-)

If you're not Lukas Büsse, you probably need to credit them with some of the shots.
 
Copyright aside, there are a number of pieces of conversion software available which do a pretty good job of inverting negatives. Do you have any specific experience in negative inversion that would give yours a USP? For example, when Fujifilm and Kodak built inversion software in the late 1990s, they could call upon their years of experience in colour science.
 
Has DPReview done an article on this issue? If not, it would be nice to have Mike Tomlin do one.
 
I think the examples lack punch and are too dark. The first one just lacks contrast and saturation.
 
What copyright issues? It's transformative work undertaken for research and private study, thus exempt.

Speaking to experience... of negative correction specifically, no... but I did spend 15 years as a post-production technical director in film and video production (commercials mostly, sadly), so I have a pretty good knowledge of colour pipelines from a technical perspective.

As for what I'm trying to do differently... well, I don't have PS or LR, so the usual plugins aren't much good to me, there's not a great deal on offer as standalone desktop software, and what there is shows a fair bit of variability in results.

My aim is to produce an output with as little "interpretation" as possible thrown into the process: a conversion that's as close as possible to the "true look" of a given film stock, as intended by the stock's original designers/engineers. Editing, correction, etc. are fine, but I feel that point is the best place to start from before doing any of it.
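Purely as a rough sketch of the kind of "minimal interpretation" step I have in mind (not my actual code, just illustrative numpy, assuming a linear scan normalised to 0..1 and a sampled patch of clear film base; the function name and values are made up):

```python
import numpy as np

def invert_negative(scan, base_rgb):
    """scan: float32 HxWx3 linear scan in 0..1; base_rgb: sampled film-base colour."""
    base = np.asarray(base_rgb, dtype=np.float32)
    masked = np.clip(scan / base, 1e-4, 1.0)   # divide out the orange mask, per channel
    positive = 1.0 / masked                    # invert density (reciprocal of transmittance)
    return positive / positive.max()           # normalise only; no colour or contrast decisions

# e.g. with a dummy scan standing in for real data:
scan = np.full((4, 6, 3), 0.5, dtype=np.float32)
print(invert_negative(scan, base_rgb=[0.85, 0.55, 0.35]).shape)
```

Everything beyond that - balance, contrast, "look" - stays in post.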

Plus, y'know... a lot of folks into photography like to do things for themselves, be that dev, print, edit, etc. I'm a programmer who knows imaging pipelines; of course I'm gonna DIY this part of the process.
 
I think the examples lack punch and are too dark. The first one just lacks contrast and saturation.
They do to greater or lesser degrees, and yes, the first is low contrast, and desaturated.

But... is that just the underlying characteristic of the film stock? That's what I'm trying to get from the conversion: not some edited, punched-up, already-jazzed thing.

(Admittedly, this will be a far easier question to answer once I actually get round to scanning and converting my own pics, and will know exactly what stock was used and what it should look like).
 
What copyright issues? It's transformative work undertaken for research and private study, thus exempt.
Oddly, people on a photography site can be a bit picky about copyright, and as far as I'm aware those images weren't originally published in the USA (they look like the ones out of Silvergrain Classics). I think you at least need to credit them; that's pretty standard practice.
Speaking to experience... of negative correction specifically, no... but I did spend 15 yrs as a post production technical director in film and video production (commercials mostly, sadly), from where I have a pretty good knowledge of colour pipelines from a technical perspective.

As for what I'm trying to do differently... well, I don't have PS or LR, so the usual plugins aren't much good to me, and there's not a great deal on offer as standalone for the desktop, and what there is shows a fair bit of variability of results.
There are a few things outside of Photoshop and Lightroom, for example DarkTable. I thought there was a project in play to replace the negadoctor module within DarkTable with something based on some lapsed Kodak patents? Using DarkTable to hang it all in would remove a lot of the I/O and GUI issues.
My aim is to try and produce an output with as little "interpretation" as possible thrown into the process, to produce a conversion that's as close as possible to the "true look" of a given film stock (as intended by the stock's original designers/engineers), as, while editing, correction, etc is fine, I feel that such a point is the best place to start from before doing any editing.
I totally agree with you there - I find that the negative inversion process gets drawn into post processing. To me, making the image “look nice” is separate from inverting it. That's one of the reasons I like slides: you can have a calibrated workflow.
Plus, y'know... a lot of folks into photography like to do things for themselves, be that dev, print, edit, etc. I'm a programmer who knows imaging pipelines, of course I'm gonna DIY this part of the process.
There’s a lot of it about….
 
Oddly people on a photography site can be a bit picky about copyright
Yeah, 15 years in the creative industries; I've seen enough picky people.
I think you at least need to credit them, that’s pretty standard practice.
Fair, though honestly I just searched "colour negative" in Google Images, and that's the total extent of my knowledge of their provenance.
There are a few things outside of Photoshop and Lightroom, for example DarkTable. I thought there was a project in play to replace the negadoctor module within DarkTable with something based on some lapsed Kodak patents ?
I like DT, and use it for post. Negadoctor is OK, but it's still hard to know how "accurate" its output is to the "film truth". Seems investigating for oneself is a good way to find out.
I totally agree with you there - I find that the negative inversion processes gets drawn in to post processing. To me making the image “look nice” is separate from inverting the image. One of the reasons I like slides as you can have a calibrated workflow.
It does, and most if not all of the tools have adjustments for additional colour balancing, shadows/highlights, etc... which is fine from an editing perspective, but when the instruction is "you need to massage the colour to make it look right"... that's the same as saying the correct result isn't nailed in pure conversion alone.
 
Oddly people on a photography site can be a bit picky about copyright
Yeah, 15 yrs in the creative industries, I've seen enough picky people.
I think you at least need to credit them, that’s pretty standard practice.
Fair, though honestly I just searched "colour negative" in google images, and that's the honest total knowledge I have on the provenance.
I think unfortunately you hit on the images that NegMaster uses to advertise their product - https://negmaster.com :-)
There are a few things outside of Photoshop and Lightroom, for example DarkTable. I thought there was a project in play to replace the negadoctor module within DarkTable with something based on some lapsed Kodak patents ?
I like DT, and use it for post. Negadoctor is ok, but it's still hard to know how "accurate" it's output is to the "film truth". Seems investigating for oneself is a good way to find out
I've been shooting a Macbeth chart with various colour stocks and using them to build colour profiles in Negadoctor. They're pretty good, apart from a slight colour cast they seem to add. This was my first attempt https://www.dpreview.com/forums/post/65761678 after using DarkTable for about an hour or so (the inversion parameters were fixed on a Macbeth chart and then applied to the image of the rose).
I totally agree with you there - I find that the negative inversion processes gets drawn in to post processing. To me making the image “look nice” is separate from inverting the image. One of the reasons I like slides as you can have a calibrated workflow.
It does, and most if not all of the tools have adjustments for additional colour balancing, shadows/highlights, etc... which is fine from an editing perspective, but when the instruction is "you need to massage the colour to make it look right"... that's the same as saying the correct result isn't nailed in pure conversion alone.
I believe the colour cast I'm getting in DarkTable is down to a DarkTable setting; I just haven't had a chance to change it yet. If that plays out OK then it will give me an automated route, particularly for Vision3, which I'm having to use ColorPerfect for at the moment.
 
You say you don't have Lr or Ps. You're selling yourself short, as you really need access to the tools in Ps (not Lr so much). None of these images has had the basic process of ranging - setting proper RGB values for highlights, shadows and neutrals - hence they just look flat and lifeless and also lack proper saturation. For that you need Curves. Nothing else will do.

The absolute best color neg conversion software I've used - and I've used a ton of them - stopped development in 1998, and it's still the best by far. It's Trident 4.0, the software that ColorByte wrote to drive the Howtek desktop drum scanners back then. It still runs fine on Mac OS 9.2.2.

For me, Trident is the gold standard by which all others are measured, but then you kinda need a Howtek to go with it.

I have done some very good color neg conversions both in Photoshop and in Capture One, just for fun to see what they can do. Both are capable of great conversions but are probably too difficult and time consuming for the average user.
 
One thing missing at this point is 'proper' interpretation of tonal range. Film generally compresses a wide dynamic range into a narrower one, and expanding that again is part of the digitizing process.
...hence they just look flat and lifeless and also lacking in proper saturation. For that you need Curves...
Yes, but all that is POST.

For instance, look at the following adjustment, a simple lift. The second image looks much better: clearer, brighter, etc. Much more "finished". And no part of it has blown out; all seems good. But do a channel examination - in this example, the green channel.

Obviously the second looks fine - brighter, as you'd expect. But if you divide the results, you can see the scaling-factor difference (the final single image). It's mostly grey, representing a uniform scaling factor that has simply lifted the overall brightness, but it has white streaks too, and these are areas where the green channel has been pushed "out of bounds" and thus clipped... meaning information has been lost.

And that's fine to do when editing towards the final look, to throw away image data... but you certainly don't want to do it in the conversion step, as it can't be recovered later. Conversion needs to preserve the full tonal information of the film original.
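As a rough numpy sketch of the same check (synthetic values standing in for the real frame; the lift factor and sizes are purely illustrative):

```python
import numpy as np

# 'img' stands in for the original frame: float RGB, normalised to 0..1.
rng = np.random.default_rng(0)
img = (rng.random((480, 640, 3)) * 0.9).astype(np.float32)

lift = 1.35                               # the simple multiplicative lift
lifted = np.clip(img * lift, 0.0, 1.0)    # display/storage clamps at white

# Per-pixel ratio of output to input. Where the lift was applied cleanly the
# ratio sits at the lift factor; where a channel hit the ceiling it drops
# below that, flagging pixels whose information has been thrown away.
ratio = lifted / np.maximum(img, 1e-6)
green_clipped = ratio[..., 1] < (lift - 1e-3)
print(f"green-channel pixels clipped by the lift: {green_clipped.mean():.1%}")
```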



[Attachments: the before/after comparison and the channel-divide image described above]
 
One thing missing at this point is 'proper' interpretation of tonal range. Film generally compresses a wide dynamic range into a narrower one, and expanding that again is part of the digitizing process.

Sasquatchian wrote:

...hence they just look flat and lifeless and also lacking in proper saturation. For that you need Curves...
Yes, but all that is POST.
I don't consider it post, in that it comes out 'properly' from my flatbed and film scanners, usually requiring little if any further treatment.
... you certainly dont want to do that in the conversion step, as it can't be recovered later.
I very much want that during scanner conversion (along with the ability to make optional manual tonal tweaks based on a fast pre-scan), though I accept that not everyone wants it.

It's interesting to note that you said this earlier:

'The aim, is to produce a conversion that is very automated, and which produces as accurate an output to the "true" look of the film stock as possible.'

I guess it's a matter of whether the 'true' look of the film stock means the look that does not incorporate the need to expand the tonal range ... the look that requires more processing for a result that we'd want to see.

Also, apparently you're using this with output from cameras and an Epson flatbed. Camera digitizing is one thing with some special challenges ... but can existing scanner software not produce the same output you're seeking with your tool?
 
I don't consider it post, in that it comes out 'properly' from my flatbed and film scanners, usually requiring little if any further treatment.

I guess it's a matter of whether the 'true' look of the film stock means the look that does not incorporate the need to expand the tonal range ... the look that requires more processing for a result that we'd want to see.
Just because the software has chosen to make post-processing tweaks for you doesn't mean there's no post processing.

As for the "tonal range"... are you perhaps confusing dynamic range with tonal range? Dynamic range is the difference between the brightest and darkest areas represented (the contrast ratio, more or less); tonal range is the number of tonal gradations captured.

Imagine, if you will, a 1-bit image (pixels are black or white, nothing else). Given a very (very, very) high dynamic range display, the white pixel could be as bright as the surface of the sun and the black as dark as a black hole, but any tone in between those two extremes would be crushed to one side or the other. Massive dynamic range, pretty much nothing in the way of tonal range.

The tonal range captured cannot be "expanded" (other than by interpolation, which is basically what happens when you take, say, an 8-bit image in PS and throw it into 16-bit mode).
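A quick sketch of that point (illustrative numpy rather than PS's exact internals, but the principle is the same):

```python
import numpy as np

# Promote 8-bit data to a 16-bit container: the numeric range widens,
# but the number of distinct tones does not increase.
img8 = np.random.default_rng(1).integers(0, 256, size=(512, 512), dtype=np.uint8)
img16 = img8.astype(np.uint16) * 257     # 0..255 -> 0..65535, a simple promotion

print(np.unique(img8).size)              # at most 256 distinct tones
print(np.unique(img16).size)             # still at most 256, just spaced further apart
```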

So during the conversion, the full tonal range of the image that was captured should be preserved (to whatever bit depth the scan was taken at). If you've ever seen or worked with log video, or Cineon-format scans from motion picture negative, you'll know how "flat" they are: they retain maximum tonal information to allow the greatest flexibility in post.

As to the "true" look of a given film stock... if a given process (be it manual or something the software has done for you) has crushed/thrown away colour or tone information present in the original negative... then by definition that cannot be a true representation.
 
The aim, is to produce a conversion that is very automated, and which produces as accurate an output to the "true" look of the film stock as possible.
This makes zero sense. The color palette of a final print was set during PRINTING, via color filtration and choice of paper. That's why color negative film is called print film. The film stock has very little to do with it - it only provides the starting point. There's no "true" look of a film stock; it's a YouTube myth.

When scanning, a human decides what color palette they want. Therefore, it's impossible to scan well in fully automatic mode. That's why lab scanners use presets and auto-color/auto-tune algorithms: they produce mediocre results for scanning at volume and create myths like "Fuji greens". When people talk about Portra colors, they are talking about what Noritsu EZ Controller thinks Portra looks like. And the truth is that it can look like anything, because it's a print film! :)

Any film can produce any color palette, in other words all films can look the same after scanning. Only speed, cost and grain matter.

This is true for cine film as well. Ever wondered how Mexico always looks yellow in the movies? No, there's no "Mexico" film stock, it's cinematographers color grading their movies to get what they want.

In other words, you're trying to solve the unsolvable. Your software will never know how I want my Fuji 400H to look. The best you can do is generate a neutral, low-contrast, inverted image for further tweaking. But that's what Negmaster already does quite well.
 
The tonal range captured cannot be "expanded" (other than by interpolation, which is basically what happens when you take say an 8 bit image in PS and throw it into 16bit mode).
The term works just fine for me, so I'll stick with it. In the color example you presented, the range of tonal values is clearly larger afterwards.

[Attached histograms: before and after the adjustment]

But since it doesn't work for you, what term do you prefer to use when you take a flat, unpleasant, contrast-lacking "true" inverted capture of a film negative and modify it so it will look more like the actual subject did?

Also, you didn't answer the question I asked: Can existing scanner software not produce the same output you're seeking with your tool?
 
The term works just fine for me, so I'll stick with it. In the color example you presented, the range of tonal values is clearly larger afterwards.

But since it doesn't work for you, what term do you prefer to use when you take a flat, unpleasant, contrast-lacking "true" inverted capture of a film negative and modify it so it will look more like the actual subject did?

Also, you didn't answer the question I asked: Can existing scanner software not produce the same output you're seeking with your tool?
OK, so when it comes to terminology: if you work in programming on image pipelines, or in a studio pushing images around between different folks, etc... these are the terms in the textbooks; they get used to mean certain things, else communication doesn't work. But that's fine... colloquially, I don't mind how someone wants to think of a certain term, I don't really have a preference. As long as we can hash out what we mean... cool.

But notice what's happening in the histograms you posted. For sure, most of the tonal information is in the lower two thirds of the range, but there IS information right up to the right-hand edge, and after the adjustment this all gets smooshed together, a good bunch of it clipping. Information has been lost.

Technically, even though SOME of the information has been spread over a greater tonal space, the actual amount of tonal information has reduced, which is to say the tonal range has been reduced.

[attached image]

And that's fine, but it's gonna be an image-by-image thing. In this example it's cool; that kind of stretch is pretty much what you would want to do. But consider another possible example.

What if you had a scene shot at night, low-key lighting, with a little sheen on the road from some streetlight reflection? The histogram could look pretty similar to this. If you performed the same stretch on that image, the sheen would go bright white, very harsh, and it would probably look like crap.

Thus, the converter shouldn't just be doing this by default; it has to be something the user decides on.
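To put rough numbers on it, here's an illustrative numpy sketch of that sort of stretch (made-up data, and the 0.66 white point is just an example):

```python
import numpy as np

def stretch(img, white_point):
    """Map [0, white_point] -> [0, 1]; anything above white_point clips to pure white."""
    return np.clip(img / white_point, 0.0, 1.0)

# Stand-in frame; in the night-scene case the bright 'sheen' pixels live up here too.
frame = np.random.default_rng(2).random((400, 600)).astype(np.float32)

out = stretch(frame, white_point=0.66)
print(f"tones collapsed to pure white: {(frame > 0.66).mean():.1%}")
```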

As to your question on existing software... I don't know; maybe it can. But I don't have access to the source code, so I can't find out. Much better to work through the process myself, where I can fully analyse the results of every operation performed in the conversion, and thus figure out what "system" of operations will give me what I want: the most plain, unaltered version of the negative's contents, to then do the further work on.
 
"Technically, even though SOME of the information has been spread over a greater tonal space, the actual amount of tonal information has reduced, which is to say the tonal range has been reduced."

You can't make any such assumptions just from looking at a histogram, which, as we all know, is just a graph of pixel distribution, without seeing the attending image and judging whether any clipped data was relevant to the integrity of the image or not. In virtually every case of digital image tonal manipulation, you sacrifice *some* pixels in the image to make the whole image much better. Make a Curves adjustment and, by the very definition of the tool, you're compressing data in one area while expanding it in another. Oh my god, you're throwing away data. Yep, you sure are. But assuming you're working on an image that has enough native levels of data, it doesn't matter, because you can't see it - ever.

So the question then becomes: is it better to preserve the integrity of every single original pixel in the image, or is it better to make the image look its absolute best, a few lost pixels be damned? And, btw, have you ever seen a linear histogram coming out of a drum scanner? That's how they scan internally, but we never deliver scans in a linear color space because they're unusable. We toss a crapload of data to make the images look great, but we're, hopefully, doing it with full 16-bit-per-channel hardware, so it doesn't make a difference.

Sometimes when you try to reinvent the wheel, you just get another wheel.
 