B&W camera mod: B+W camera modification to monochrome (black&white / black+white / black & white)

Timmbits

Does anyone here have experience, or definitive knowledge, whether it is possible to convert existing cameras to monochrome?

I suppose that it is akin to an infra-red conversion, where you remove the color filter from in front of the sensor and add a UV filter in front of your lens... or is it?

The color filter array allows only red, or blue, or green to pass through, blocking out most of the light, and using each photosite for only one specific tiny part of the color spectrum.

Removing the CFA would multiply by magnitudes the amount of light and colors that enter each single photosite, and the individual photosites would no longer be exposed to only one of the 3 "primary" colors that our eyes can see, but to magnitudes more light.

The result is multiplying sharpness (resolution if you prefer) by a factor of 4 (2 green dots plus one red and one blue dot makes 4), and you would radically increase the amount of light reaching each photosite - imagine the ISO equivalent you would have, even in the lower ISO ranges, with all that extra light. You could use your lowest ISO settings, resulting in no noise, in conditions where you would have had to increase it. Night shots all of a sudden offer new possibilities.

So I guess my question is, has anyone done it, or does anyone know of it having been done, and the nuts-and-bolts of it (how to do it, step-by-step)? And need we only switch our mode to B&W and all will be fine, or is the firmware not compatible with such a change?

The Leica Monochroms are just too expensive, and IMO this would be an awesome way to recycle an outdated camera and give it a new life.

I tried to cram as many keywords as possible into the title line; hopefully it will help people find the thread and share their knowledge and experience.

EXAMPLE OF THE RESULT, Credit to Steve Throndson:

This company, LDP LLC - MaxMax, converts cameras to 'monochrome visible light'. Looks like you can buy one off the shelf for $2315.

https://www.maxmax.com/b&w_conversion.htm

But remember, friends, this thread is about doing it yourself, with an old camera, not about buying a new one for this purpose.
 
I think the color array is bonded to the image sensor.
 
Does anyone here have experience, or definitive knowledge, whether it is possible to convert existing cameras to monochrome?
Yes. I've done it.

I also put together a business plan involving it a few years ago. But I felt that the risks were too great for the returns.
I suppose that it is akin to an infra-red conversion, where you remove the color filter from in front of the sensor and add a UV filter in front of your lens... or is it?
Nothing like that. The infrared blocking filter sits in front of the camera sensor case. The red/green/blue filters are inside the sensor case, printed directly on the surface of the chip itself. The procedure is:
  1. Remove the sensor and board from camera totally.
  2. "Breach" the sensor. This entails removing the cover glass from the sensor using heat and dangerous solvents (and possibly, a hammer and chisel, depending on the adhesives used).
  3. Dissolve the color filters and the microlens array from the sensor using other dangerous solvents, and possibly more heat. There's nothing more fun than hot solvents: the heat multiplies their strength as irritants, carcinogens, and explosives.
  4. Clean the sensor surface, without damaging the microscopic "bond wires" that surround it.
  5. Place the sensor in an inert gas chamber and seal on a new cover glass.
The color filter array allows only red, or blue, or green to pass through, blocking out most of the light, and using each photosite for only one specific tiny part of the color spectrum.

Removing the CFA would multiply by magnitudes the amount of light and colors that enter each single photosite, and the individual photosites would no longer be exposed to only one of the 3 "primary" colors that our eyes can see, but to magnitudes more light.
No. A "magnitude" is 10x. The color filters cut light by about 1/2. The microlens array that you lose in the process of making a monochrome camera increases light by about the same factor, so the end result is near equivalent sensitivity.

There's a net gain due to the fact that noise isn't as objectionable in B&W, but it's not "magnitudes", it's about a stop.
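
To put rough numbers on that (round factors assumed here, not measurements from any particular sensor), the back-of-envelope looks like this in Python:

    import math

    # Rough assumed factors -- not measured values for any specific sensor.
    cfa_transmission = 0.5   # Bayer filters pass roughly half the light
    microlens_gain = 2.0     # microlenses roughly double the light on the photodiode

    stock = cfa_transmission * microlens_gain   # = 1.0 with these round numbers
    converted = 1.0                             # no CFA, but no microlenses either

    # Raw sensitivity barely moves; the practical ~1 stop comes from
    # B&W tolerating noise better, not from extra light.
    print(f"raw sensitivity change: {math.log2(converted / stock):+.1f} stops")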
The result is multiplying sharpness (resolution if you prefer) by a factor of 4 (2 green dots plus one red and one blue dot makes 4),
Nope. Resolution is linear, not a function of area, so even if camera pixel density worked the way you describe ("2 green dots plus one red and one blue dot makes 4") you would only increase resolution 2x. But data density doesn't work that way, the Bayer demosaic algorithms exploit mutual information between the channels, so the net resolution change is 1.4x, or about 40% more.
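
A quick sanity check of those figures (the sqrt(2) value is the rule of thumb, not a property of any specific demosaic algorithm):

    import math

    # Resolution is linear (e.g. line pairs per picture height), so even a
    # 4x change in pixel *count* is only a 2x change in linear resolution:
    print(f"naive gain: {math.sqrt(4):.1f}x")

    # Bayer demosaicing already recovers much of the luminance detail, so
    # the practical gain from dropping the CFA is closer to sqrt(2):
    print(f"realistic gain: {math.sqrt(2):.2f}x")   # ~1.41x, about 40% more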
and you would radically increase the amount of light reaching each photosite
Sorry, no. Not "radically", not "orders of magnitude", just a stop, that's all.
- imagine the ISO equivalent you would have, even in the lower ISO ranges, with all that extra light. You could use your lowest ISO settings, resulting in no noise, in conditions where you would have had to increase it. Night shots all of a sudden offer new possibilities.
Again, you're way overestimating it.
So I guess my question is, has anyone done it,
Yes. I did a D100 and a D70. Iliah Borg did a D2X.
or does anyone know of it having been done, and the nuts-and-bolts of it (how to do it, step-by-step)? And need we only switch our mode to B&W and all will be fine, or is the firmware not compatible with such a change?
The firmware is fine. You just do a custom white balance, and you end up with a nice B&W preview and liveview.
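
And the raw files get simpler, not harder: with the CFA gone, every photosite is already a plain luminance sample, so there is no demosaic step at all. A minimal numpy sketch (the 12-bit random array is a stand-in for a real raw file):

    import numpy as np

    # Hypothetical 12-bit raw frame from a converted (CFA-less) sensor.
    raw = np.random.randint(0, 4096, size=(3000, 4000), dtype=np.uint16)

    # No color filters means no Bayer pattern to interpolate: normalizing
    # the photosite values directly yields the finished grayscale image.
    gray = raw.astype(np.float32) / 4095.0
    print(gray.shape, float(gray.min()), float(gray.max()))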
The Leica Monochroms are just too expensive, and IMO this would be an awesome way to recycle an outdated camera and give it a new life.
No. Because the first camera of that particular type you do is the big investment: learning to tear that model down, building the fixtures to hold the sensor board for the breaching, experimenting with the right solvents for that camera's filters (because they change from sensor to sensor, and some sensors have filters that cannot be dissolved).
I tried to cram as many keywords as possible into the title line; hopefully it will help people find the thread and share their knowledge and experience.

EXAMPLE OF THE RESULT, Credit to Steve Throndson:

This company, LDP LLC - MaxMax, converts cameras to 'monochrome visible light'. Looks like you can buy one off the shelf for $2315.

https://www.maxmax.com/b&w_conversion.htm

But remember, friends, this thread is about doing it yourself, with an old camera, not about buying a new one for this purpose.
The only reason that the MaxMax people are able to pull this off is that they only worked out their procedures and tooling for 5 camera models, and even though they can spread the cost of all that work over multiple conversions of each model, the conversion still jacks the price of the camera up $2,000-4,000.
  • Canon T5i retailing for $649 is $2,515 converted.
  • Canon 5D III retailing for $2,499 is $6,550 converted.
  • Nikon D800 retailing for $2,799 is $6,050 converted.
You will also note that MaxMax doesn't list the latest cameras, like T6s, 7D II, 5DS, or D810, just ones at least a year old. Can't be selling that well, if they don't bother to keep the procedures, tooling, or inventory current, can it?

So seriously, there's a steep learning curve to get professionals skilled in the art of dealing with camera innards AND insanely hazardous chemicals to the point where we can do this without investing $10,000 worth of effort per camera. Save your money, your sanity, and perhaps your lungs, eyesight, skin, or any other body parts you can harm with the chemicals required, and scrap this idea.
 
You will also note that MaxMax doesn't list the latest cameras, like T6s, 7D II, 5DS, or D810, just ones at least a year old.
They wait until the spare sensors become available, rework those, and put them into the first batch of cameras. That adds greatly to the price and to the delay with new cameras, but it allows experimenting with new sensors without the need to purchase 2 or 3 cameras while figuring out the removal procedure. And once that is figured out, the replaced sensor can go into the removal procedure.
 
An alternative approach, which is much less trouble, is to use a Sigma camera and take the monochrome image from only the top layer of the sensor.

This top layer is panchromatic (like normal B&W film), and you get full resolution, just as you would with a Bayer sensor with the filters removed.

The current Quattro models have a decent resolution of 19 Megapixels, which is good when there is no de-Bayering involved.

This is from the earlier Merrill series, with 14 Megapixels.



[sample image: 14 MP Sigma Merrill top-layer monochrome]
 
You will also note that MaxMax doesn't list the latest cameras, like T6s, 7D II, 5DS, or D810, just ones at least a year old.
They wait until the spare sensors become available, rework those, and put them into the first batch of cameras.
Which means you need the manufacturer's noise mapping software in order to complete the operation. :(

Anyway, thank you Iliah, for that very valuable insight into how the business works.
That adds greatly to the price and to the delay with new cameras, but it allows experimenting with new sensors without the need to purchase 2 or 3 cameras while figuring out the removal procedure. And once that is figured out, the replaced sensor can go into the removal procedure.
 
Joseph, thank you very much for sharing your experience.

OK, so I used the word "magnitudes" way too much - perhaps I shouldn't have used it at all. Please forgive my ignorance. You set me straight on that. I never tried this, and don't know, so obviously it was a guesstimate, and it was wrong. Fine. Let's move on past that, and have a productive conversation.

There are also some statements that you made that I would like us to clarify a little bit, if you don't mind.
Does anyone here have experience, or definitive knowledge, whether it is possible to convert existing cameras to monochrome?
  3. Dissolve the color filters and the microlens array from the sensor using other dangerous solvents, and possibly more heat. There's nothing more fun than hot solvents: the heat multiplies their strength as irritants, carcinogens, and explosives.
The color filter array allows only red, or blue, or green to pass through, blocking out most of the light, and using each photosite for only one specific tiny part of the color spectrum.

Removing the CFA would multiply by magnitudes the amount of light and colors that enter each single photosite, and the individual photosites would no longer be exposed to only one of the 3 "primary" colors that our eyes can see, but to magnitudes more light.
No. A "magnitude" is 10x. The color filters cut light by about 1/2. The microlens array that you lose in the process of making a monochrome camera increases light by about the same factor, so the end result is near equivalent sensitivity.
Consider it this way: a modern monitor, with 24-bit color, maps 16 million colors, but you can slice it up into even finer increments than that. This gives you an idea of how many different wavelengths/frequencies photons come in at. Now what the CFA allows through is only a tiny part of that - a very specific portion of only green, a very specific slice of the red, and the same for blue. Only a few hues of each. Much of the other stuff gets blocked out: the purple hues, the yellow ones, the oranges, etc. Only red, green, and blue get through, and that is what the image gets composed with. Incoming light - purple, for example - MAY be a mix of red and blue... but it can ALSO be a wavelength that is pure purple, with no red or blue. We just happen to have 3 types of cones, and detect only 3 colors in our eyes (just like dogs only have cones for yellow, no other colors, plus rods, like we also have, to detect all light, i.e. B&W). Is this all correct?

If I've got the physics of light right, then it seems to me that a lot of light is being blocked out by the CFA.
There's a net gain due to the fact that noise isn't as objectionable in B&W, but it's not "magnitudes", it's about a stop.
The result is multiplying sharpness (resolution if you prefer) by a factor of 4 (2 green dots plus one red and one blue dot makes 4),
Nope. Resolution is linear, not a function of area, so even if camera pixel density worked the way you describe ("2 green dots plus one red and one blue dot makes 4") you would only increase resolution 2x. But data density doesn't work that way, the Bayer demosaic algorithms exploit mutual information between the channels, so the net resolution change is 1.4x, or about 40% more.
Yes, I was counting linearly too. It's 2 green plus 1 red plus 1 blue - I haven't squared or multiplied anything. I was just trying to make the point that an image from our camera sensors is composed of 50% green, 25% blue, and 25% red dots, expressed in different degrees of saturation/brightness. For example, on a 16MP sensor, out of 16 million dots, there will be 4 million photosites detecting red, 4 million detecting blue, and 4+4 (8) million detecting green. Out of a box of 4, each photosite has a 1/4 share of the total. Remove the CFA, and each one will detect red + green + blue + everything else.
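
Spelling those counts out for a hypothetical 16 MP RGGB sensor:

    total = 16_000_000      # hypothetical 16 MP Bayer sensor

    green = total // 2      # 8,000,000 sites (2 per RGGB tile)
    red = total // 4        # 4,000,000 sites
    blue = total // 4       # 4,000,000 sites
    print(green, red, blue)

    # After CFA removal, all 16,000,000 sites respond to the full spectrum.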
and you would radically increase the amount of light reaching each photosite
Sorry, no. Not "radically", not "orders of magnitude", just a stop, that's all.
- imagine the ISO equivalent you would have, even in the lower ISO ranges, with all that extra light. You could use your lowest ISO settings, resulting in no noise, in conditions where you would have had to increase it. Night shots all of a sudden offer new possibilities.
Again, you're way overestimating it.
Now, if you are also removing the microlens array, as you state above... I suspect THAT is where you are losing LOTS of light.

I have seen pictures of microlens arrays published here, and I am of the understanding that each microlens acts as a funnel. It takes the light from a larger area than the photosite and concentrates it into the reduced area of the photosite's "entrance" - that hole where the photons go in to be detected - in between the circuits, in between the dividers.

Remove the microlenses, and all of a sudden, you are losing 2/3 to 3/4 of the light, as it is hitting the top of the dividers between the photosites, and not entering them to be detected.

So is the result that one-stop advantage that you speak of... instead of a few stops, perhaps?
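
Converting those fractions into stops makes it easy to compare my guess against Joseph's ~2x microlens figure (all fractions assumed, for comparison only):

    import math

    def stops_lost(fraction_kept):
        """Express a light loss as photographic stops."""
        return -math.log2(fraction_kept)

    print(f"{stops_lost(1/3):.1f} stops")   # ~1.6 stops if only 1/3 of the light survives
    print(f"{stops_lost(1/4):.1f} stops")   # ~2.0 stops if only 1/4 survives
    print(f"{stops_lost(1/2):.1f} stops")   # 1.0 stop for a ~2x microlens gain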

What I am trying to illustrate is how much of the light gets blocked out when the limitation imposed on the technology is our retina's cones, which detect only 3 colors (we like to call them "primary" colors for that reason - but that is a bad term for it, as the universe doesn't work that way; there are millions, perhaps billions, more wavelengths out there - we don't know how many, because our equipment can only detect so much).

I had seen some vids on some infrared conversions. I seem to remember that in one of them, the guy was saying that you scrape off the CFA, taking care not to scrape too hard, so as not to damage the microlenses.

I think he used a plastic spatula or something of the sort. A different approach from using chemicals to dissolve everything, including the microlenses.

Perhaps that is the key?
 
Timmbits,

Just ran across this old thread on the subject of monochrome cameras . . . .

http://www.dpreview.com/forums/post/55486476
@Steve: thank you for that.

@Joseph: the 3rd entry on that thread explains it better than I did. There they don't use the word "magnitudes", but instead say that "significantly more light reaches the sensor". I should have played it safe and used "significantly". ;)
 
... now what the CFA allows through is only a tiny part of that - a very specific portion of only green, a very specific slice of the red, and the same for blue. Only a few hues of each. Much of the other stuff gets blocked out: the purple hues, the yellow ones, the oranges, etc. Only red, green, and blue get through, and that is what the image gets composed with. Incoming light - purple, for example - MAY be a mix of red and blue... but it can ALSO be a wavelength that is pure purple, with no red or blue. We just happen to have 3 types of cones, and detect only 3 colors in our eyes (just like dogs only have cones for yellow, no other colors, plus rods, like we also have, to detect all light, i.e. B&W). Is this all correct?
not correct

the blue filter lets through all the wavelengths that are more blue than they are red or green (ideally - the actual boundaries aren't so sharp)

similarly for red and green filters

so between them, all wavelengths are being recorded

so by removing the filters you aren't getting more wavelengths, you are just getting them all at each site

depending on the purity of the filters that probably results in more or less light than the sum of the three filtered sites, but i wouldn't expect that difference to be very significant
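
a toy model of that partition, with made-up sharp cutoffs (real filter curves overlap a lot, this is just the idea):

    # idealized: every wavelength passes exactly one of the three filters
    def channel(nm):
        if nm < 490:
            return "B"
        elif nm < 580:
            return "G"
        return "R"

    # between the three filtered sites, the whole visible band gets
    # recorded somewhere:
    assert {channel(nm) for nm in range(400, 701, 10)} == {"R", "G", "B"}

    # an unfiltered site sees the whole band at once -- roughly 3x the
    # light of any single filtered site under this idealization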

I often wonder why the array doesn't have R + G + B + unfiltered, to do an RGB analogue of printing's CMYK, which would enhance shadow detail - albeit in monochrome, which is how our eyes tend to see it anyway.

--

[Due to mental disabilities, I may be erratically unable to respond to comments or even direct questions. Apologies to those who are aggrieved or frustrated, gratitude to those who are patient and understanding.]
 
Sheesh - Correct that:

The blue filter passes light for all wavelengths that can be represented by any combination of blue + red and/or green

similarly red & green filters

so without a filter, that site would (almost?) always receive significantly LESS light than the sum of adjacent R + G + B sites receiving the same mix of wavelengths

_

and as to my RGBW instead of RGBG question, I suppose that would increase issues with noise, but might also simplify noise reduction, at the cost of requiring computing resources not available a decade or four (1976 for Bayer patent) ago, but surely available in modern cameras
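
For illustration, here is a standard RGGB tile next to the RGBW tile I'm describing (a pattern sketch only, not any real sensor's layout):

    # A standard Bayer tile vs. the RGBW idea: one green swapped for an
    # unfiltered, full-spectrum luminance site ("W").
    bayer_tile = [["R", "G"],
                  ["G", "B"]]
    rgbw_tile = [["R", "G"],
                 ["W", "B"]]

    for b_row, w_row in zip(bayer_tile, rgbw_tile):
        print(" ".join(b_row), " | ", " ".join(w_row))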
 
... now what the CFA allows through is only a tiny part of that - a very specific portion of only green, a very specific slice of the red, and the same for blue. Only a few hues of each. Much of the other stuff gets blocked out: the purple hues, the yellow ones, the oranges, etc. Only red, green, and blue get through, and that is what the image gets composed with. Incoming light - purple, for example - MAY be a mix of red and blue... but it can ALSO be a wavelength that is pure purple, with no red or blue. We just happen to have 3 types of cones, and detect only 3 colors in our eyes (just like dogs only have cones for yellow, no other colors, plus rods, like we also have, to detect all light, i.e. B&W). Is this all correct?
not correct

the blue filter lets through all the wavelengths that are more blue than they are red or green (ideally - the actual boundaries aren't so sharp)

similarly for red and green filters

so between them, all wavelengths are being recorded
OK, if you are sure about that, that is better than what I suspected.
so by removing the filters you aren't getting more wavelengths, you are just getting them all at each site
OK, so, potentially it can multiply the sensitivity by 3, not more. That is still quite significant.
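
In stops, that idealized 3x works out to log2(3), about 1.6; subtract the microlens loss discussed earlier and it lands near Joseph's one-stop figure (both factors assumed):

    import math

    ideal_gain = math.log2(3)       # ~1.6 stops if each site really saw 3x the light
    microlens_loss = math.log2(2)   # ~1.0 stop if the microlenses gained ~2x
    print(f"net: {ideal_gain - microlens_loss:+.1f} stops")   # ~ +0.6 stops raw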
depending on the purity of the filters that probably results in more or less light than the sum of the three filtered sites, but i wouldn't expect that difference to be very significant
I agree - that would be the least of our concerns.
I often wonder why the array doesn't have R + G + B + unfiltered, to do an RGB analogue of printing's CMYK, which would enhance shadow detail - albeit in monochrome, which is how our eyes tend to see it anyway.
I had exactly the same thought.

Now that I think of it, didn't a manufacturer finally introduce some sort of prototype with that last year?
 
so without a filter, that site would (almost?) always receive significantly LESS light than the sum of adjacent R + G + B sites receiving the same mix of wavelengths
No, it is the opposite. The filter blocks the other wavelengths, allowing only the desired ones through.

It cannot magically increase the number of photons coming in from the lens; it can only reduce it.

 
Joseph explained the physics and chemistry involved in a DIY conversion.
Yes indeed, and we really appreciate his input. Experience is golden.

But the microlenses have to stay... so we need to see if there is a way to scratch off the CFA without removing the microlenses.
You basically have to "de-manufacture" the sensor.

It sounded to me to be a lot of trouble to go to for something so easily obtained by other means.

But I'm not a hot rodder either.
Yes, it is a lot of trouble. But it is the difference between spending thousand$$$ on a new monochrome camera versus saving an old camera that you no longer need and giving it a new and useful life... and since you already replaced your old camera, you are getting, through recycling and modification, the monochrome one for free. What's more, the increased sensitivity makes up for the lower sensitivity of older models.

Wouldn't you rather learn to mod that old camera, instead of spending $2-5k? I think that many of us would.
 
I often wonder why the array doesn't have R + G + B + unfiltered, to do an RGB analogue of printing's CMYK, which would enhance shadow detail - albeit in monochrome, which is how our eyes tend to see it anyway.
Sensors with RGBW do exist. I believe Sony offers a sensor like this for use in phones.
 
Joseph explained the physics and chemistry involved in a DIY conversion.
Yes indeed, and we really appreciate his input. Experience is golden.

But the microlenses have to stay... so we need to see if there is a way to scratch off the CFA without removing the microlenses.
You basically have to "de-manufacture" the sensor.

It sounded to me to be a lot of trouble to go to for something so easily obtained by other means.

But I'm not a hot rodder either.
Yes, it is a lot of trouble. But it is the difference between spending thousand$$$ on a new monochrome camera versus saving an old camera that you no longer need and giving it a new and useful life... and since you already replaced your old camera, you are getting, through recycling and modification, the monochrome one for free. What's more, the increased sensitivity makes up for the lower sensitivity of older models.

Wouldn't you rather learn to mod that old camera, instead of spending $2-5k? I think that many of us would.
A Sigma Quattro costs less than $1K.
 
A Sigma Quattro costs less than $1K.
Could they have made it any uglier?

I am not familiar with the Foveon multi-layer sensor. I can only imagine that letting light through in stages makes for much smaller photosites... not sure about the sensitivity loss there.

So they can use the top layer for B+W... that is interesting, especially if it is under $1K.

We would have to investigate how good this is at night photography, compared to a true monochrome model.

But it still doesn't resolve our problem of wanting to recycle an old camera instead of putting it into retirement. (Mine is a Samsung NX20 that I would love to transform. I need an excuse to get a new camera... not happening if I can't reassign this one to B+W. This is but one example... I suspect that many are in a similar situation, where you can't quite justify a new purchase because the one you have is quite fine already.) ;)
 
I often wonder why the array doesn't have R + G + B + unfiltered, to do an RGB analogue of printing's CMYK, which would enhance shadow detail - albeit in monochrome, which is how our eyes tend to see it anyway.
Sensors with RGBW do exist. I believe Sony offers a sensor like this for use in phones.
I read somewhere recently about an RBW sensor, which makes a bit of sense; we use a two-axis color matrix in LAB or YCC.
 
Joseph explained the physics and chemistry involved in a DIY conversion.
Yes indeed, and we really appreciate his input. Experience is golden.
Glad you enjoyed.
But the microlenses have to stay... so we need to see if there is a way to scratch off the CFA without removing the microlenses.
Unfortunately, the CFA is underneath the microlenses. It might be possible to get rid of the microlenses without bothering the CFA, but you can't go the other way around.
You basically have to "de-manufacture" the sensor.

It sounded to me to be a lot of trouble to go to for something so easily obtained by other means.

But I'm not a hot rodder either.
Yes, it is a lot of trouble. But it is the difference between spending thousand$$$ on a new monochrome camera versus saving an old camera that you no longer need and giving it a new and useful life... and since you already replaced your old camera, you are getting, through recycling and modification, the monochrome one for free. What's more, the increased sensitivity makes up for the lower sensitivity of older models.

Wouldn't you rather learn to mod that old camera, instead of spending $2-5k? I think that many of us would.
I'm dead serious: the cost of acquiring the skill set to do this, and the equipment, is way past $2-5k.
 
