Quarkcharmed wrote:
PetteriJ wrote:
Quarkcharmed wrote:
PetteriJ wrote:
Quarkcharmed wrote:
PetteriJ wrote:
If the workload (i.e. number of bytes) doubles, so does the processing time
Not necessarily. You're assuming (a) linearity and (b) that no part of the processing is independent of the input data size.
Ok, you're right - if the images come from different cameras, they might carry extra information (like DLO, Digital Lens Optimizer, data) that takes more time to process. It would be better to say "twice the information, twice the time".
Nope, that only holds for a limited class of algorithms. You might just as well get "processing time t ~ log2(information)", or "t ~ information^2".
Ok, now I see what you mean: Big O notation. It's true that many algorithms (e.g. sorting) need n log(n) time as n increases.
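The difference between those growth rates is easy to see numerically. A quick sketch (illustrative only, not from any real image-processing code): when the input size n doubles, a linear algorithm takes exactly 2x as long, an n log n one slightly more than 2x, and a quadratic one 4x.

```python
import math

# Illustrative only: the time multiplier when the input size n doubles,
# under the three growth rates mentioned in the thread.
def slowdown_when_doubled(n):
    return {
        "t ~ n": (2 * n) / n,                                         # exactly 2x
        "t ~ n log n": (2 * n * math.log2(2 * n)) / (n * math.log2(n)),
        "t ~ n^2": (2 * n) ** 2 / (n ** 2),                           # exactly 4x
    }

for n in (1_000, 1_000_000):
    print(n, slowdown_when_doubled(n))
```

Note that the n log n multiplier creeps toward 2x as n grows, which is why "twice the data, twice the time" is often a decent approximation even for linearithmic algorithms.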
Big O describes asymptotic behaviour (often worst cases); it doesn't help here.
The size of CPU caches, multithreading, use of the GPU etc. can significantly distort big O estimates in practice.
But I still wonder: does this also hold for image processing? All the traditional algorithms I can think of (noise removal, sharpening, merging layers etc.) scale in proportion to pixel count. Can you think of an operation that behaves differently?
Just as an example: if you apply the same batch processing to two sets of raw files from an R5 and an R6 but with the same target JPEG dimensions, some or all of your algorithms will be applied to the target pixels, not the source pixels. Writing to the HDD may take longer than the actual processing, depending on the nature of the processing. Etc etc etc.
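That target-vs-source distinction can be sketched like this. The pixel dimensions for the R5 and R6 are real; the cost model is a made-up illustration where "cost" is simply the number of pixels an O(pixels) step touches, and the pipeline steps are hypothetical:

```python
# Hypothetical two-step pipeline: resize to a fixed target, then sharpen.
# "Cost" here is just pixels touched by each O(pixels) step.

TARGET = (3000, 2000)  # same target JPEG dimensions for both cameras

def pipeline_cost(src_w, src_h, target=TARGET):
    resize_cost = src_w * src_h            # resize reads every source pixel
    sharpen_cost = target[0] * target[1]   # sharpening runs on target pixels only
    return resize_cost, sharpen_cost

r5 = pipeline_cost(8192, 5464)  # Canon R5, ~45 MP raws
r6 = pipeline_cost(5472, 3648)  # Canon R6, ~20 MP raws
```

More than doubling the source megapixels more than doubles the resize step, but leaves the sharpening step completely unchanged - so total time is nowhere near proportional to input size.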
And don't forget to include the user in the equation. Some computer actions are fast enough that their speed, relative to the time it takes the user to kick them off, is not very relevant. Gone are the days of needing progressive JPEGs because transferring the files took so long, for example.