Camera scanning: extremely blue results from NLP

madlantern

I find I’m getting wildly fluctuating and inconsistent results (especially color temp) from batch converting the photos from the same roll and the same scan in NLP.

My process is: I import the raw files into LR, set the color temp by white-balancing on a blank edge area of the negative, crop, and then run the frames through NLP with the default settings.

Some rolls of film come out looking great. And then sometimes, I get this. The first two images come out looking incredibly blue. And then the rest of the images are normal-ish looking:

[attached screenshot: the first two frames converted blue, the rest looking normal]


The blue image settings:

[settings screenshot]

I then tried converting the blue image individually in NLP. It actually works out pretty well:

[attached image: the same frame converted individually, with normal color]


I tried throwing the ghastly blue image into SmartConvert and FilmLab, and they both produced relatively reliable results. So am I doing something wrong in NLP?

Here’s the raw file in case anyone wants to have a try: https://1drv.ms/i/c/499d6900d1cab18c/EbRaKUUkM3xChbamyKhKf1wBlXU8JjNto6aUJ6QWPWVMVw?e=4fVWUb
 
When you do your white balance on the film border, are you syncing that and your crop settings to every image from the roll? Are you white balancing EVERY roll? Inconsistencies in base tone exist from roll to roll due to the film stock and the age of the film.
 
Yes, I am balancing every roll. I color balance on the blank strips of the negative on either side, then sync the settings to every photo in that roll. Then I crop the photos so basically only the image is showing. Also, I set a 10% border buffer in NLP in case I missed something.
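(For anyone unsure what that buffer does conceptually, here is a rough sketch. It is not NLP's actual implementation, just an illustration of trimming a percentage off each edge of the cropped frame before analysis; the array sizes and function name are made up.)

```python
# Illustration only (not NLP's code): a border buffer ignores a percentage
# of the frame on each edge when analysing the image, so any sliver of film
# rebate left inside the crop can't skew the analysis.
import numpy as np

def analysis_region(frame: np.ndarray, buffer_pct: float = 10.0) -> np.ndarray:
    """Return the central part of an HxWx3 array, trimming buffer_pct from every edge."""
    h, w = frame.shape[:2]
    dy, dx = int(h * buffer_pct / 100), int(w * buffer_pct / 100)
    return frame[dy:h - dy, dx:w - dx]

# example with a stand-in frame:
# frame = np.random.rand(2000, 3000, 3)
# region = analysis_region(frame, buffer_pct=10)   # 1600 x 2400 central area
```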
 
I get fairly consistent results with the following process:

1) Before taking the shots of the negatives: set a custom white balance on the camera so that the orange mask reads as close to grey as possible (it may not be perfect yet). Lock the camera to that color balance.

2) Take the shots of the negatives with this fixed balance. Take one shot that includes enough blank film to be used again later in NLP.

3) Depending on your light source this step might not be needed: switch to NLP, retake the color balance on the shot mentioned above, and copy that balance in NLP to all frames. The unexposed portions of the film should now all be neutral grey. (A rough sketch of this idea follows after step 5.)

4) Add individual crops to exclude stray borders. Verify NLP's setting for excluding borders from analysis (the percentage of the frame excluded from analysis).

5) Convert, then make final individual adjustments.
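The sketch mentioned in step 3, for illustration only: it assumes the frames are already loaded as linear RGB arrays (the loader, border coordinates and function names here are hypothetical), and it is not how NLP itself works. The point is simply that the grey-balance gains are derived once from the blank film base and then applied identically to every frame of the roll.

```python
# Sketch of steps 1-3: derive gains from the blank film base of ONE
# reference frame, then apply the SAME gains to every frame of the roll,
# so the whole roll shifts together and nothing gets balanced twice.
import numpy as np

def mask_gains(reference_frame: np.ndarray, border_box: tuple) -> np.ndarray:
    """Per-channel gains that make the sampled film base neutral grey.

    reference_frame: HxWx3 float array in linear RGB (assumed).
    border_box:      (y0, y1, x0, x1) region of unexposed film base.
    """
    y0, y1, x0, x1 = border_box
    base = reference_frame[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    return base.mean() / base

def balance_roll(frames: list, gains: np.ndarray) -> list:
    """Apply one set of gains to every frame of the roll."""
    return [np.clip(f * gains, 0.0, 1.0) for f in frames]

# usage (hypothetical loader and coordinates):
# frames = [load_linear_rgb(p) for p in raw_paths]
# gains  = mask_gains(frames[0], border_box=(0, 60, 0, 4000))
# frames = balance_roll(frames, gains)
```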

--
- Alfred
 
Do you think it's a problem with shooting or with some step in editing / post-production? I included an ARW file in the original post in case you want to take a look.
 
Anyone?...
The blue may well be from inverting the orange mask, but as to why it's doing it I don't know. If you've got software other than NLP you could always use that (I'm not convinced NLP is the best, it was just one of the first)?
 
Assuming all the pictures are from the same masked color negative film: if a frame turns blue, that may be the result of a double correction, i.e. one correction to compensate for the orange mask and then the same correction applied a second time, as if you had photographed, or converted again, an already corrected picture.
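To make that concrete, here is a toy calculation (the base color below is made up, and this is not how NLP computes its correction): dividing out the mask color once makes the film base neutral, while applying the same gains a second time crushes red and inflates blue, which is exactly a blue cast.

```python
# Toy numbers only: why applying the orange-mask correction twice goes blue.
import numpy as np

base = np.array([0.90, 0.55, 0.25])   # assumed film-base color (R, G, B), linear
gains = base.mean() / base            # gains that make the base neutral grey

print(base * gains)                   # ~[0.57, 0.57, 0.57] -> neutral, as intended
print(base * gains * gains)           # ~[0.36, 0.58, 1.28] -> red crushed, blue blown out
```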

I have never had that result using the steps mentioned in my post above. Did you check the photographed negatives before converting in NLP? They should all look similar, without any noticeable global shifts. If so, the outliers you got could only have been produced in the conversion step or afterwards, and then you have to look at your conversion process.

I would normally crop all negatives individually, then select the whole roll and only then convert everything at once with NLP. Then, still inside NLP, I start from the beginning and fine-tune the pictures individually, without leaving NLP. In particular, I make no additional color changes in Lightroom alone on a converted picture directly after conversion, only through NLP. I think a manipulation in Lightroom on the converted negatives could have produced your blue outliers.

If you want to do adjustments in Lightroom only, without NLP, it is better to save the converted film as a separate converted copy. Then the original negative-to-positive conversion settings will no longer be active, and you can handle the pictures normally in LR alone.
 
