Silly RAW conversion question

Hi David,

I am not a chip designer or a programmer, so I may make some errors here, but stay with me; the meat of it is true.

The specialized chips are custom designed to do one thing only. You can see an example of how fast they work by taking a picture in JPEG mode in your camera. The image data starts off in a CCD-RAW (or CMOS-RAW for our Canon friends) form. This is the same data that is saved to the CF card if you have set your camera to RAW or NEF. Your camera then applies white balance, sharpening, and any color or saturation settings. The camera then applies JPEG compression and writes this data out to the CF card. How long does this take? On my cameras this process takes about a second or less; I am guessing it is the same for yours as well.
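To make those steps concrete, here is a rough sketch in Python (using numpy and Pillow) of the kind of pipeline I am describing. The gains, the sharpening trick, and the fake sensor data are all made up, and I am skipping demosaicing just like I skipped steps above, so treat it purely as an illustration:

    # Toy version of the in-camera pipeline described above: white balance,
    # sharpening, saturation, then JPEG compression.  Real firmware differs;
    # every number here is illustrative only.
    import numpy as np
    from PIL import Image

    def raw_to_jpeg(raw_rgb_12bit, wb_gains=(2.0, 1.0, 1.5), saturation=1.2,
                    out_path="out.jpg"):
        img = raw_rgb_12bit.astype(np.float32) / 4095.0       # 12-bit -> 0..1
        img *= np.array(wb_gains, dtype=np.float32)           # white balance
        blurred = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                       + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
        img = img + 0.5 * (img - blurred)                      # crude unsharp-mask sharpening
        gray = img.mean(axis=2, keepdims=True)
        img = gray + saturation * (img - gray)                 # saturation boost
        img8 = (np.clip(img, 0.0, 1.0) * 255).astype(np.uint8)
        Image.fromarray(img8, "RGB").save(out_path, quality=90)  # JPEG compression

    # Fake a 6 MP sensor readout (3000 x 2000) of random 12-bit data.
    fake_raw = np.random.randint(0, 4096, (2000, 3000, 3), dtype=np.uint16)
    raw_to_jpeg(fake_raw)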

Now when you capture an image as a RAW or NEF, the camera skips most if not all of the steps listed above and just writes the CCD data to the CF card. You then open this file on your computer, and the software does the same steps of white balance, etc., and writes out a JPEG file to your disk. As we all know, this is much slower.

It is much slower for a number of reasons. (Here is where I might get in trouble.) The biggest reason, I think, is that the conversion code on our computers is running in software, as opposed to running in hardware as it does in the camera. Much like my example of the MPEG CODEC and Ron's better example of a 3D video card. Also, this conversion software is not optimized for our particular hardware; it is designed to run on a "general" computer.

At some point our computers will be powerful enough to overcome this issue and be able to process a 6Mp RAW/NEF as fast as today's cameras. But by that time cameras will be producing 12Mp or bigger RAWs and we will be back at the same place. The end result is still that a dedicated piece of hardware running specialized code is faster than a generalized machine running optimized or non optimized code.
It is special digital circuitry designed to perform JPEG
processing in a fast way.
The image data leaves the CCD sensor and is then written to memory
(yes, yes, I'm leaving out a few steps). Ok, it writes JPEG, or
TIF, or NEF

My computer reads the JPEG and TIF at about twice the speed the
camera uses to write the file. My computer reads the NEF data about
twenty times slower. (450 MHz G4)

Some users with much faster machines read the NEF data about 5 or 6
times slower.

Are the NEF import drivers that bad? Certainly it would appear that
Windows machines are faster than Macs and my machine is relatively
slow. But even so...

I think there's more to it, Ron Parr notwithstanding.

Dave
--
Valliesto
'A hero is no braver than an ordinary man, but he is brave five
minutes longer.'
  • R.W. Emerson
Want to see my gear list? It's listed in my profile.
 
okay the argument that dedicated hardware is faster than a software solution is true.. but.. the hardware accelerator 3d card used in that example still needs a powerful cpu to drive it and it also needs super fast memory to feed it..

my computer with its athlon 75 watt heat dissipation cpu and noisy cooling fans kinda doesn't compare with the mickey mouse low-powered chips in a digicam.. dedicated or not..

i also think there is more to it than a simple hardware versus software answer.. though this quite clearly does play a part..

basically the camera does not have to perform the same operations as the computer does.. quite what the camera does have to do or how it does it.. i have no idea.. i know it has to take data from the ccd and turn it into a usable file format.. but that's about all.. he he

trog100
I think there's more to it, Ron Parr notwithstanding.
Aliens?

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
okay the argument that dedicated hardware is faster than a
software solution is true.. but.. the hardware accelerator 3d card
used in that example still needs a powerful cpu to drive it and it
also needs super fast memory to feed it..
If a general purpose CPU were better at this than a GPU, then ATI and NVIDIA wouldn't waste millions on developing and producing GPUs; they'd just put another CPU on the graphics card.
basically the camera does not have to perform the same operations
as the computer does.. quite what the camera does have to do or how
it does it.. i have no idea.. i know it has to take data from the
ccd and turn it into a usable file format.. but that's about all..
It has to do exactly the same thing the computer does.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Hi Valliesto

First, let me say I appreciate your post. Unlike Ron Parr, I do not claim to be an expert in these matters. Indeed, if Ron had bothered to read my posts, this would be quite clear. This question has surprised me; I never thought of it before, and now I've been enjoying my speculations. Obviously, what's happening is being done by the camera.

As regards your post, my camera is quite capable of displaying a NEF on the LCD almost instantly.

My computer, as obsolete as it is, has no trouble loading in just about any format in a very practical time period, except NEF.

Ok, I am using a D1x, $4850 (now down to $4600 and falling). Others use the D100, D60, still expensive.

But the Nikon 5700 and Minolta Dimage also use RAW, and these cameras are quite a bit cheaper. Perhaps there are even cheaper cameras that use this format.

One of the things I've learned is that software USUALLY beats hardware. For example, the importance of an FPU in central processors has been mentioned. I won't argue with that because it's self-evident. However I have a vector program that runs rings around Illustrator, and does this without using the FPU. Clearly this software optimises the RAM of the computer in ways that Illustrator even with an FPU does not.

Once again, clearly, this is happening in camera. Bibble is twice as fast as Nikon View on my machine. Bibble does this without making use of the G4 chip, while Nikon View claims to.

Once again thanks for the post!

Perhaps Ron will return from Pluto and further enlighten us with his professional knowledge.

Dave
 
But the Nikon 5700 and Minolta Dimage also use RAW, and these
cameras are quite a bit cheaper. Perhaps there are even cheaper
cameras that use this format.
All cameras read the RAW data from the CCD and make a picture from it. All of them, without exception. Not a single one does anything else.
One of the things I've learned is that software USUALLY beats
hardware.
Hardware always beats software.
For example, the importance of an FPU in central
processors has been mentioned. I won't argue with that because it's
self-evident.
Definitely. When doing floating point operations.
However I have a vector program that runs rings
around Illustrator, and does this without using the FPU. Clearly
this software optimises the RAM of the computer in ways that
Illustrator even with an FPU does not.
You might realize that Illustrator uses floating point for top precision, while the other vector program sacrifices precision by using integer calculations.

Integer calculations are a lot faster than floating point, even when an FPU is available. It's like comparing 2D graphics to 3D graphics; at the same processing speed, 2D will be vastly faster than 3D.

Illustrator doesn't take shortcuts. The other vector program does. For your needs that may not matter, but for other people it might. It has nothing to do with any mystical RAM optimization; it's very basic common sense.
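To illustrate the trade-off, here is a toy Python sketch of the same scaling done once in floating point and once in 16.16 fixed point (integer multiplies and shifts only). The values are arbitrary, and Python itself won't show the speed difference; the point is the rounding that the integer version accepts:

    # Same multiplication in float and in 16.16 fixed point.  The fixed-point
    # path uses only integer multiplies and shifts, the classic speed trick,
    # at the cost of rounding everything to steps of 1/65536.
    FIX_ONE = 1 << 16                      # 1.0 in 16.16 fixed point

    def to_fix(x: float) -> int:           # float -> fixed point
        return int(round(x * FIX_ONE))

    def fix_mul(a: int, b: int) -> int:    # multiply two fixed-point numbers
        return (a * b) >> 16

    scale = 2 ** 0.5                       # sqrt(2), full float precision
    point = 12345.6789

    float_result = point * scale
    fixed_result = fix_mul(to_fix(point), to_fix(scale)) / FIX_ONE

    print(float_result)                    # full-precision result
    print(fixed_result)                    # close, but off in the last decimals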
Once again, clearly, this is happening in camera.
If so, the camera uses inferior algorithms and out-of-camera software would do a much better job. In practice, this also tends to be true, except for manufacturer-provided software, which is simply crud.
Bibble is twice
as fast as Nikon View on my machine. Bibble does this without
making use of the G4 chip, while Nikon View claims to.
So what does Bibble use? The video chip? Maybe the serial port controller? All software you run makes use of the CPU in your machine. That it isn't optimized makes a few percent of performance difference, that's it.

--
Jesper
 
At some point our computers will be powerful enough to overcome
this issue and be able to process a 6Mp RAW/NEF as fast as today's
cameras. But by that time cameras will be producing 12Mp or bigger
RAWs and we will be back at the same place. The end result is still
that a dedicated piece of hardware running specialized code is
faster than a generalized machine running optimized or non
optimized code.
If this were true though, why wouldn't we just leave our cameras hooked up to the computer, then with the software on our computer figure out what conversion we want after looking at the picture (i.e. WB, saturation), then send the file back to the camera, have it process it in only 1 or 2 seconds, then send it back to the computer. Even after adding in the transfer times we would still be way ahead of the game. And the 30 seconds that I first mentioned is, I think, the best case scenario, no? I hear of people easily waiting 60 seconds. So why not just do what I say? Hmmmm... I might be onto something here.... disregard what I said... It's my idea!!! Seriously though, especially with the USB 2.0 or firewire, the file transfer would be very quick, the conversion done in 2 seconds by the camera, and your file could be on your hard drive in 10 seconds flat. Doesn't this sound good???
 
Hi Jesper
But the Nikon 5700 and Minolta Dimage also use RAW, and these
cameras are quite a bit cheaper. Perhaps there are even cheaper
cameras that use this format.
All cameras read the RAW data from the CCD and make a picture from
it. All of them, without exception. Not a single one does anything else.
Yes, but few "save" as RAW.
One of the things I've learned is that software USUALLY beats
hardware.
Hardware always beats software.
Sorry, I should have added "all things being equal." IOW, poor software can make good hardware look bad.
However I have a vector program that runs rings
around Illustrator, and does this without using the FPU. Clearly
this software optimises the RAM of the computer in ways that
Illustrator even with an FPU does not.
You might realize that Illustrator uses floating point for top
precision, while the other vector program sacrifices precision by
using integer calculations.

Integer calculations are a lot faster than floating point, even
when an FPU is available. It's like comparing 2D graphics to 3D
graphics; at the same processing speed, 2D will be vastly faster
than 3D.

Illustrator doesn't take shortcuts. The other vector program does.
For your needs that may not matter, but for other people it might.
It has nothing to do with any mystical RAM optimization; it's very
basic common sense.
Please give me a test that I can duplicate to check on whether my software has the limitations you describe.
Bibble is twice
as fast as Nikon View on my machine. Bibble does this without
making use of the G4 chip, while Nikon View claims to.
So what does Bibble use? The video chip? Maybe the serial port
controller? All software you run makes use of the CPU in your
machine. That it isn't optimized makes a few percent of performance
difference, that's it.
The manufacturer of Bibble tells me that the present version does NOT take advantage of the Velocity Engine built into the CPU. This is basically the only difference between the G3 and G4 chips. If you still feel you're correct, argue with him.

Dave
 
Hi Kiran

As you can see, when a layperson has a bit of fun with speculation, the nitpickers (the Swede) and people suffering from ego problems (Ron Parr) are forced to let the reader know who is boss. I must in fairness add that I visited Ron's site and liked all his images with one exception. This was a picture of a raw strawberry being eaten with chocolate, a clear example that the man needs help.

So I'm glad to point out to you that I strapped a car battery, adapter, and computer to my back, and I'm hiking out to the woods tomorrow to continue my odyssey of taking low-light pictures of my dog that don't come out as a blown-highlight blur.

Dave
 
the fpu analogy is a bad one and so is the 3d graphics card one.. nvidia do put an extra cpu on their graphics cards.. it's now being called a gpu.. graphics processing unit.. the main point of this is to take the load off of the main processing unit.. the same applied to the fpu.. floating point unit.. it simply took the load off of the main processing unit..

the fact that two pieces of separate hardware will do a job quicker than one piece of hardware does.. doesn't automatically mean that the hardware solution is faster..

what we should be talking about here is that a piece of hardware designed simply to do one job.. everything else being equal.. will be faster at it than a piece of hardware designed to do several jobs.. but everything else isn't equal..

but either way.. a huge piece of general purpose hardware could quite likely be able to do the job faster than a small low-power piece of dedicated hardware.. assuming that the dedicated hardware has fundamental limitations such as power and heat dissipation holding it back.. the chips in the camera do have such fundamental limitations holding them back..

heat dissipation is the thing that holds the average cpu back.. it's quite easy to tell it to go faster.. but it simply gets too hot.. and consumes more power..

i still think my huge power-hungry computer cpu with its load lessened by all the dedicated chips that surround it.. should at least be able to keep up with anything a little low-power camera chip can do... dedicated or not.. he he

basically to argue that a small power-deprived device (power deprived to lengthen battery life) will always be able to do something quicker than a whopping great PC simply because it is dedicated is pure nonsense..

it might just be true in this particular case.. but the general line that it must be faster because it's dedicated is pure rubbish.. everything else being equal yes.. but in the case of a camera and a pc it certainly isn't..

trog100
 
Hardware always beats software.
Sorry, I should have added "all things being equal." IOW, poor
software can make good hardware look bad.
If all things are equal, hardware will beat software, period. Only if the software is written extremely well and the hardware is extremely poor, or not designed for the task it's doing, will software beat hardware.

Your orthogonal argument that bad software can make good hardware perform poorly is completely accurate.
Please give me a test that I can duplicate to check on whether my
software has the limitations you describe.
Test the internal precision of a vector manipulation program? I'm sure there are tests for that; I wouldn't know where to find one, but that is really irrelevant. It doesn't use the FPU, therefore it doesn't use internal floating point representation. It would simply be too slow.

All floating point software uses the FPU; the few highly specialized ultra-precision calculation programs that do not use it make watching grass grow seem exciting.

Not that it matters; integer representation is by far powerful enough, so I doubt you'll ever encounter any practical limitations.
The manufacturer of Bibble tells me that the present version does
NOT take advantage of the Velocity Engine built into the CPU. This
is basically the only difference between the G3 and G4 chips. If
you still feel you're correct, argue with him.
It uses the CPU. The CPU is fast. Whether or not it uses the new <insert hype word here> doesn't make much difference, regardless of what the chip maker's benchmarks say.

Besides, they're both software; badly written software will always be outperformed by well written software; I don't see how that proves anything.

You're trying to argue that a software solution on a generic integer CPU can and (even USUALLY) will outperform a specialized hardware solution at the same clock speed, doing the same task. That is simply inaccurate, magical RAM optimizations notwithstanding. All else being equal, hardware will beat software.

--
Jesper
 
Seriously though, especially with the USB
2.0 or firewire, the file transfer would be very quick, the
conversion done in 2 seconds by the camera, and your file could be
on your hard drive in 10 seconds flat. Doesn't this sound good???
Maybe, but I can process a RAW file in about 5 seconds, so 10 seconds sounds like a step backwards. But seriously, I made a post in a different forum thinking about having a dedicated decoder board, which would function much like your idea of connecting to the camera for processing. 5 seconds is not bad, but when I have 400 files to go through, I would love to cut that time by 80%.
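(To put that in perspective, going by the numbers above: 400 files at roughly 5 seconds each is about 2,000 seconds, a little over half an hour of conversion time; an 80% cut would bring that down to around 6 or 7 minutes.)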
--
Valliesto
'A hero is no braver than an ordinary man, but he is brave five
minutes longer.'
  • R.W. Emerson
Want to see my gear list? It's listed in my profile.
 
okay the argument that dedicated hardware is faster than a
software solution is true.. but.. the hardware accelerator 3d card
used in that example still needs a powerful cpu to drive it and it
also needs super fast memory to feed it..

my computer with its athlon 75 watt heat dissipation cpu and noisy
cooling fans kinda doesn't compare with the mickey mouse low-powered
chips in a digicam.. dedicated or not..
Your Athlon has to work hard because it is a generalized chip that is asked to do many different tasks. The "mickey mouse" chips in your camera do the same amount of work as your Athlon on a RAW conversion. They do it with less power, far less heat, and faster as well. So how would those chips qualify as "mickey mouse" in any way? When you take a JPEG with your camera, those chips do the same amount of work as your computer when it converts a RAW to a JPEG, but in a fraction of the time, for a fraction of the cost both in power and money.
i also think there is more to it than a simple hardware versus
software answer.. though this quite clearly does play a part..

basically the camera does not have to perform the same operations
as the computer does.. quite what the camera does have to do or how
it does it.. i have no idea.. i know it has to take data from the
ccd and turn it into a usable file format.. but that's about all..
he he
This is both true and false. The camera does not have to worry about running games or Office or things like that. But as I said above, the camera does the same calculations as your computer when converting RAWs to JPEGs or TIFFs. This simplicity of design is what allows the camera to do its tasks so quickly and efficiently.
--
Valliesto
'A hero is no braver than an ordinary man, but he is brave five
minutes longer.'
  • R.W. Emerson
Want to see my gear list? It's listed in my profile.
 
i still think my huge power-hungry computer cpu with its load
lessened by all the dedicated chips that surround it.. should at
least be able to keep up with anything a little low-power camera
chip can do... dedicated or not.. he he
It would be nice if things worked this way, but they don't.
basically to argue that a small power-deprived device (power
deprived to lengthen battery life) will always be able to do
something quicker than a whopping great PC simply because it is
dedicated is pure nonsense..
A dedicated device does not need as much power simply because it does not need to do so many things. Think of a PC as a jack of all trades: it does a fairly good job at a great number of tasks. It needs to have lots of power because it uses brute force to accomplish many of these tasks. A dedicated chip is a master of only one task; it is not concerned with any function except the one it was designed for. Therefore it can use far less power, be far smaller, and even run at a far lower clock speed. And it will run circles around a PC chip. Even the chip in my image processing machine, which can process a RAW to TIFF in about 5 seconds, does not hold a candle to what the chip in my camera can do.
it might just be true in this particular case.. but the general
line that it must be faster because it's dedicated is pure rubbish..
Yes, if it is stated that a dedicated system will always outperform a non-dedicated system, that would be inaccurate. But in general and under most circumstances, a dedicated chip with optimized code will perform faster.
everything else being equal yes.. but in the case of a camera and a
pc it certainly isn't..

trog100
--
Valliesto
'A hero is no braver than an ordinary man, but he is brave five
minutes longer.'
  • R.W. Emerson
Want to see my gear list? It's listed in my profile.
 
I do not know how exactly the RAW to JPEG processing is implemented in a digital camera (DC). I can think of two ways:

1. Using a digital signal processor (DSP). The DSP executes a set of special instructions to perform the signal processing, in this case the JPEG compression. This approach has the advantage of higher flexibility because of the programmability of the DSP.

2. Using special circuitry. This circuitry does not execute instructions. It takes the image input (with some parameter settings) and processes the image (NOT by executing instructions). Take a simple analogy. The CPU in a computer executes instructions. It sends data and control signals to the Arithmetic Logic Unit (ALU) to process data (e.g. ADD). The ALU does not execute instructions; the CPU does. Similarly, there must be a CPU somewhere in the DC to control various operations. The special circuitry works under the control of the CPU to process the image data. I believe this is most likely the approach used in a DC because of the fast operation required (this is commonly known as real-time).

The above may explain why it is faster to perform JPEG processing in a DC.
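As a loose software analogy (not how any actual camera is built), consider the difference between doing the work one interpreted instruction at a time and handing the whole image to a single specialized routine. The Python sketch below uses numpy as a stand-in for the "dedicated" path; the data and gain are arbitrary:

    # Analogy only: a plain Python loop plays the role of a CPU stepping
    # through instructions; the numpy call plays the role of dedicated
    # circuitry that chews through the whole array in one specialized pass.
    import time
    import numpy as np

    pixels = np.random.randint(0, 4096, 1_000_000, dtype=np.uint16)
    gain = 1.3

    t0 = time.perf_counter()
    slow = [min(int(p * gain), 4095) for p in pixels]            # instruction by instruction
    t1 = time.perf_counter()
    fast = np.minimum((pixels * gain).astype(np.uint16), 4095)   # one dedicated pass
    t2 = time.perf_counter()

    print("per-pixel loop: %.3f s" % (t1 - t0))
    print("one-pass array: %.3f s" % (t2 - t1))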

There is also another factor to consider.

The results of JPEG processing in a DC and in a computer may not be the same. Why? JPEG is a standard. It does not dictate the implementation. In the camera, because of the cost and the fast operational requirements, there may be some compromises. In the computer, the timing is less critical. Since the processing is not real-time, we can afford to implement a more sophisticated algorithm for the JPEG conversion to give better JPEG images. In fact, this is not surprising. We have seen different JPEG quality from different software packages.

That is also why, to get better quality, it is better to shoot in RAW and let the computer do the JPEG conversion. A new release of RAW conversion software may introduce better JPEG processing. Some years down the road, there may be better algorithms for JPEG processing or even better standards than JPEG. With RAW, you can re-process your favoured images to get better quality images. Save the RAW files of your favoured images; they may become surprisingly better in the future. If a manufacturer does not offer RAW on their prosumer/professional cameras, ask them for it. Do not give up your consumers' rights if the manufacturers have different visions on this issue. Hopefully they will eventually listen if enough people ask. If you do not ask for RAW, they will never give RAW to you.

Regards,

K. Tse
 
David, I think you missed his point... he said since the camera did the saving to RAW, why not use the camera to convert the RAW file to a TIFF, JPEG, or whatever and then send it to the computer... capisce!!

He makes very good sense!!

Jon J. Both (Edgeman)
 
OK, this may seem silly to some, but I've been plagued by this so
excuse my ignorance. I keep seeing posts over and over again about
how long it takes to convert a RAW image on a computer, something
like 30 seconds or more, and this is with dual processors and the
whole bit.

What I wonder is how does the camera do it so quickly? I'm
assuming that the brains inside the D60 or D100 are nowhere near as
quick as those of a dual processor system. So how can your JPEG
file be saved in several seconds, while it takes many times
longer to do on the computer? I'm assuming of course that the
camera captures RAW information, which I guess it has to, but then
it has to process it to JPEG given your white balance, compression,
saturation, etc. settings. So how does the camera go from the
RAW information from the sensor to a JPEG file on your card in
seconds, while it takes well over 30 seconds for the
computer to come up with the same JPEG? Thank you.

Kiran
--

My personal image processing system (eye, retina, optic nerve, visual center of the brain, etc.) is built using fairly slow hardware (nerve impulses, etc.). But it is special-purpose dedicated hardware, so I can process images with lots of pixels (more than my camera has) at a rate of several images per second, fast enough to process moving images. It does this by having hardware that's dedicated to the task at hand. Instead of processing one pixel (the image from a single cone in the retina) at a time, the retina processes them all at the same time using the extensive nerve interconnections in the retina. Then instead of sending the pixels one at a time to the brain, they're all sent in parallel over the optic nerve.

This is one example of how using specialized, simple hardware can greatly improve the speed of an image processing system.

I spent several years working for Intel. I was primarily involved in software development there, but I was also on several teams involved in the design of new chips. It continually amazed me how fast dedicated hardware could be compared to the same process done in software on a general purpose processor -- hundreds or thousands of times faster.

I have not studied the particulars of the conversion of images from a packed 10- or 12-bit RAW format to JPEG. But it seems to me that there are probably several aspects of the process that are slow on a general processor and could be greatly sped up by some simple, dedicated, low-cost hardware. For example, on the general processor, I might go through a process like:

1. increment an index so that it points to the next source pixel (which is bit- rather than byte-aligned)

2. load the packed pixel from memory (which is very inefficient for bit-aligned data on a CPU oriented to addressing bytes or 4-byte words)

3. decode the packed data from the pixel so that we have the individual color components of the pixel (R, G, B)

4. apply the appropriate adjustments to the pixel for contrast, color, and sharpness (by comparing to the values saved from the adjacent pixel processed the previous time through the loop)

5. process each of the pixel colors to compress them into fewer bits (JPEG compression)

6. increment an index so that it points to the next destination pixel location

7. write the compressed pixel out (which may be very slow if I have to do a bit-aligned write)

8. repeat the process until all the pixels have been processed

Since there is so much repetition, it's fairly easy to build several dedicated hardware circuits, one for each of these steps. Each step would then be a dedicated instruction on my dedicated special-purpose processor. Also, it would be quite simple (and much faster) if my memory system could address individual pixels directly, rather than having to use software to shift data around because the general purpose CPU has to address bit-aligned data in a memory system that's byte-oriented.
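To make the bit-alignment cost in steps 1-3 concrete, here is a small Python sketch that unpacks a hypothetical layout where two 12-bit pixels are stored in three bytes. Real RAW formats such as NEF use their own layouts, often with compression on top, so treat this only as an illustration of the shifting and masking a byte-oriented CPU has to do for every pair of pixels:

    # Unpack 12-bit pixels from a byte stream, assuming two pixels packed
    # into every three bytes (a made-up layout for illustration).
    def unpack_12bit(packed):
        pixels = []
        for i in range(0, len(packed) - 2, 3):
            b0, b1, b2 = packed[i], packed[i + 1], packed[i + 2]
            # first pixel: 8 high bits from b0, 4 low bits from the top of b1
            pixels.append((b0 << 4) | (b1 >> 4))
            # second pixel: 4 bits from the bottom of b1, 8 low bits from b2
            pixels.append(((b1 & 0x0F) << 8) | b2)
        return pixels

    # Two pixels, 0xABC and 0x123, packed as the bytes 0xAB 0xC1 0x23.
    assert unpack_12bit(bytes([0xAB, 0xC1, 0x23])) == [0xABC, 0x123]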

This is probably a simplified explanation, but it's the answer I came up with when I first considered this question myself.
 
David, I think you missed his point... he said since the camera did
the saving to RAW, why not use the camera to convert the RAW file
to a TIFF, JPEG, or whatever and then send it to the
computer... capisce!!
Yes, but that means you are using the JPEG mode, not "RAW" mode.

Regards,

K. Tse
 
Hi Valliesto

First, let me say I appreciate your post. Unlike Ron Parr, I do not
claim to be an expert in these matters. Indeed, if Ron had bothered
to read my posts, this would be quite clear.
I never presumed that you were an expert.
One of the things I've learned is that software USUALLY beats
hardware. For example, the importance of an FPU in central
processors has been mentioned. I won't argue with that because it's
self-evident. However I have a vector program that runs rings
around Illustrator, and does this without using the FPU. Clearly
this software optimises the RAM of the computer in ways that
Illustrator even with an FPU does not.
Hardware is always faster than software. Whether your program is using the FPU or the ALU, it's still a software program. You've merely discovered that many floating point operations can be closely approximated by integer operations at higher speed.

I don't know what you mean by "optimizing RAM."

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Hi Kiran

As you can see, when a layperson has a bit of fun with speculation,
the nitpickers (the Swede) and people suffering from ego problems
(Ron Parr) are forced to let the reader know who is boss.
Thanks for the psychoanalysis. I now have much more insight into why I'm wasting my time here...
I must in
fairness add that I visited Ron's site and liked all his images
with one exception. This was a picture of a raw strawberry being
eaten with chocolate, a clear example that the man needs help.
Thanks... I think. Can you explain how the strawberries relate to my ego problems? Will you send me a bill for this?

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
the fpu analogy is a bad one and so is the 3d graphics card one..
nvidia do put an extra cpu on their graphics cards.. it's now being
called a gpu.. graphics processing unit.. the main point of this is
to take the load off of the main processing unit.. the same applied
to the fpu.. floating point unit.. it simply took the load off of
the main processing unit..

the fact that two pieces of separate hardware will do a job quicker
than one piece of hardware does.. doesn't automatically mean that
the hardware solution is faster..
Right, but the point was that if, as David suggested, general purpose software solutions were better than hardware solutions, the GPU could be replaced with another CPU, i.e., a dual processor PC with no GPU would be faster at rendering than a single processor PC with a GPU. This isn't the case. The GPU, which has a much lower clock speed than the CPU, runs circles around it for rendering.

--
Ron Parr
FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
