Dialing in your printer - SpyderPrint and Spyder5

I am following up on my own reply with the results of my latest adventure with SpyderPrint. Using all the software defaults, I generated a profile from the one-sheet High Quality target and applied it to a photo image. I then generated a Delta-E report comparing the original image to the one produced with the new printer profile, using a color sampling template on the image.

Here are the results:

>>>>>>

dE Report

Number of Samples: 140

Delta-E Formula dE2000

1). Overall - (140 colors)

--------------------------------------------------

Average dE: 5.09

Max dE: 18.17

Min dE: 1.22

StdDev dE: 2.85

2). Best 90% - (125 colors)

--------------------------------------------------

Average dE: 4.28

Max dE: 6.99

Min dE: 1.22

StdDev dE: 1.16

3). Worst 10% - (15 colors)

--------------------------------------------------

Average dE: 11.84

Max dE: 18.17

Min dE: 7.62

StdDev dE: 3.76

<<<<<<
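(For anyone who wants to reproduce this style of summary outside of ColorThink: a minimal Python sketch of how the Overall / Best 90% / Worst 10% statistics are computed, assuming the per-patch dE2000 values have already been exported. The values in the list are made up for illustration.)

```python
# Minimal sketch: ColorThink-style dE summary from per-patch dE values.
# The list below is hypothetical; substitute your exported dE2000 values.
import statistics

per_patch_de = [1.22, 2.1, 2.9, 3.4, 4.28, 5.0, 5.6, 6.99, 7.62, 11.84, 18.17]

def de_report(values, label):
    print(f"{label} - ({len(values)} colors)")
    print(f"  Average dE: {statistics.mean(values):.2f}")
    print(f"  Max dE:     {max(values):.2f}")
    print(f"  Min dE:     {min(values):.2f}")
    if len(values) > 1:                      # stdev needs two or more values
        print(f"  StdDev dE:  {statistics.stdev(values):.2f}")

de = sorted(per_patch_de)
cut = round(len(de) * 0.9)                   # boundary: best 90% / worst 10%
de_report(de, "Overall")
de_report(de[:cut], "Best 90%")
de_report(de[cut:], "Worst 10%")
```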

Note the 10% of samples with the largest Delta-E, which runs from 7.62 to 18.17. After reviewing the color list of this sampled image, there appear to be about a dozen outliers compared with all the other color samples. I looked through all the measured colors and found that each patch looked like a good measurement was taken. This appears to be the same result as the last time I used the SpyderPrint.

I want to add that it took several calibrations of the measurement device before I started to obtain what appear to be valid measurements. Also, FWIW, there appears to be a tonal shift in many of the measured patches, as if some degree of grey had been added to each of these colors. So I think this means the white point is still off? There is no good white. Colors that appear more saturated in the original image suffer from this problem too after the profile has been applied.

So I do not know why this is happening. Any thoughts? Perhaps there is a problem with my measurement device, a sort of quality control problem at the manufacturer. That could explain why some users can generate good profiles with SpyderPrint and others cannot. ???

Bob

--
With sincere regards, Bob
 
Note the 10% of samples with the largest Delta-E, which runs from 7.62 to 18.17.
Yeah, that's not good. I can't tell you why, sorry. Mark might be far more help as he's got the product. If I understand, you're comparing the reference values that built the target to the actual target run through the profile?

What I think you might do is measure the chart a couple times and report on that dE. IOW, if you get high dE values after measuring the same target 3 times, that's bad! You want the max dE (the one worst patch) to be below dE 1.
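Something like this would do it in Python, as a hedged sketch: it assumes both measurement passes were exported as plain-text Lab files (the file names are hypothetical) and uses simple Euclidean dE76 rather than dE2000 to keep it short.

```python
# Repeatability check: dE between two passes over the SAME target.
# File names are hypothetical. Plain dE76 (Euclidean distance in Lab) is
# used for brevity; dE2000, as in the reports above, needs a library such
# as colormath or a much longer formula.
import numpy as np

pass_a = np.loadtxt("scan_pass_a.txt")   # shape (N, 3): L*, a*, b* per patch
pass_b = np.loadtxt("scan_pass_b.txt")   # same target, same patch order

de = np.sqrt(((pass_a - pass_b) ** 2).sum(axis=1))
print(f"Average dE: {de.mean():.2f}   Max dE: {de.max():.2f}")

# The one worst patch should be below dE 1; flag anything that isn't.
for i in np.flatnonzero(de > 1.0):
    print(f"patch {i}: dE {de[i]:.2f} -> remeasure")
```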
 
I have to do the same when using my very accurate Gretag Spectrolinos to measure my light fade test targets here at AaI&A, and I would bet one should be doing the same with ColorMunki, EyeOne Pro instruments, etc., as well.
... scans like the X-rite devices which take 100 samples per second and do the averaging
Actually, according to the specs the i1Pro2 is 200 Hz, compared to the i1Pro which was 100 for Rev A and 200 for Rev B-D.

[Just getting back into the thread, which is now hard to read with the lack of indents.]

Brian A
What puzzles me about that specification is this: if the X-Rite device can truly compile 200 full spectral data sets per second, analyze and throw out any that occurred while the aperture was crossing the leading and trailing edges of each color patch, then average all the good data sets together, all while keeping up with scanning mode, then why wouldn't a single-patch reading mode on this device be blazingly fast? It's not. So I suspect that during those 200 pulses per second, the device is really only collecting a very small subset of wavelengths and using that to detect leading and trailing edge position. But I would be pleased to stand corrected :-)

Anyway, the SpyderPrint Pro does not have this pulse-mode sophistication. In "scan" mode it uses just one dedicated LED, which gets pulsed quickly to detect the leading edge; it then literally counts 0.5 seconds on a simple timer and reads the full color value with its full LED array at that point in time. Hence, once that leading edge is detected and the timer starts, if you go too slow you won't be fully on the solid color patch, and if you move the device too fast, you will land on the trailing edge or, worse yet, the next color patch. Obviously the unit will flag an error if you are way off, and it posts the color in the software so you can visually see a gross color error, but the more subtle operator error that goes undetected is when, say, you are 95% of the way into the correct patch position but not quite all the way. Then you will get a plausible color measurement that even looks more or less correct as posted in the software, but it will nonetheless have more error in it than is desirable to build an optimal profile. My advice: don't use Datacolor's scan mode feature. You can become just as quick doing the readings manually, because the device takes a reading very quickly when you press the "read" button on the device or the return key on your computer keyboard.
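(To make that failure mode concrete, here's a toy calculation based on the fixed half-second timer described above; the patch width and hand speeds are invented numbers, not Datacolor specifications.)

```python
# Toy model of the scan-mode timing described above: the device detects a
# patch's leading edge, waits a fixed 0.5 s, then reads. Where the reading
# lands depends entirely on how fast the device is being dragged.
# PATCH_WIDTH_MM and the speeds are made-up numbers, not Datacolor specs.
PATCH_WIDTH_MM = 8.0
READ_DELAY_S = 0.5

for speed in (3, 10, 16, 22):                # hand speed in mm/s
    pos = speed * READ_DELAY_S               # mm past the leading edge at read time
    if pos < 0.25 * PATCH_WIDTH_MM:
        verdict = "too slow: still near the leading edge"
    elif pos <= 0.75 * PATCH_WIDTH_MM:
        verdict = "OK: reading lands mid-patch"
    elif pos <= PATCH_WIDTH_MM:
        verdict = "marginal: near the trailing edge"
    else:
        verdict = "too fast: reading is off the patch entirely"
    print(f"{speed:2d} mm/s -> read at {pos:4.1f} mm: {verdict}")
```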

cheers,
Mark
 
Note the 10% samples that have the largest Delta-E which goes from 7.62 to 18.17.
Yeah that not good. I can't tell you why, sorry. Mark might be far more help as he's got the product. If I understand, you're comparing the reference values that built the target to the actual target run through the profile?
I'm having a little trouble following what the actual input data was that served as the reference for these reported delta E errors. The only valid delta E test one can do is to "round trip" the data (which ColorThink can probably do; I don't own ColorThink, so I'm not positive). Round tripping means you compare the profile's predicted LAB colors (not the source file's LAB colors) to the actual measured printed values. And if you are using the same instrument that generated the profile's data set, then any systematic error in the instrument gets nulled out in this round-tripping procedure, so delta E values should be quite low, which in turn tells us that the profile's predicted output and the printer's actual output agree with each other. That's what you should get with a properly profiled system.
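(A hedged sketch of how that round trip could be scripted: Pillow's ImageCms converts the target's device values through the printer profile to Lab with absolute colorimetric intent, which approximates the profile's predicted print color, and those predictions are compared against the instrument's measured Lab values. File names are hypothetical, and simple dE76 stands in for dE2000.)

```python
# Round-trip sketch: PROFILE-PREDICTED Lab vs. MEASURED Lab per patch.
# Assumes Pillow and an 8-bit RGB target image with one pixel per patch.
# All file names are hypothetical.
import numpy as np
from PIL import Image, ImageCms

printer = ImageCms.getOpenProfile("my_printer.icc")
lab_pcs = ImageCms.createProfile("LAB")
xform = ImageCms.buildTransform(
    printer, lab_pcs, "RGB", "LAB",
    renderingIntent=ImageCms.INTENT_ABSOLUTE_COLORIMETRIC)

target = Image.open("target_1px_per_patch.tif").convert("RGB")
pred = np.asarray(ImageCms.applyTransform(target, xform), dtype=float)
pred = pred.reshape(-1, 3)
pred[:, 0] *= 100.0 / 255.0       # decode Pillow's 8-bit L* encoding
pred[:, 1:] -= 128.0              # decode the a*/b* offset

measured = np.loadtxt("measured_lab.txt")    # one L* a* b* row per patch
de = np.sqrt(((pred - measured) ** 2).sum(axis=1))
print(f"Round-trip average dE: {de.mean():.2f}, max: {de.max():.2f}")
```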

It seems in the case we are discussing here that the source file color values are being used as the reference values, not the predicted color values, and if so, very large delta E errors are indeed likely, even with excellent profiles. For example, if my image target file has a pure white patch at RGB 255,255,255 and thus L* = 100, no paper on earth is going to print it at L* = 100. It will land on media white at perhaps L* = 96 (which translates into a delta E = 4 error, as a minimum, between source file data and printed output, even if the a* and b* values are a perfect match). Likewise for max black: the source file might have L* = 0 (RGB 0,0,0), but a matte paper can't go much below L* = 15. So the profile prediction will be L* = 15, and that's approximately what you should measure, but that's a far cry (delta E >= 15) from the target file's L* = 0 reference value.

I hope this makes sense.

It sounds like the OP needs to gain confidence in the instrument readings first, then move on to the more advanced validation of the printed output. If you have a Macbeth ColorChecker handy, that target serves well as an absolute reference, and its CIELAB patch values are published. When I measure mine, the SpyderPrint Pro returns reasonable measurements, not quite as accurate as my other spectrophotometers, but certainly close enough that good profiles can be made. As I said earlier, the challenge is to verify that there are no operator errors in the measured data sets. I found the best way to do that was to make two independent measurements and compare them to confirm that all readings were good. ColorThink might be able to do that comparison for you; I did it in Excel. Over several hundred measurements, as careful as I was, I still routinely found one or two patches that needed to be remeasured, with the corrected value used, in order to ensure that the subsequent profile behaved well over all colors and tones. Datacolor should build a "compare" feature right into its software. It is an oversight that Datacolor really ought to address. As noted earlier, the workaround is to compare two exported data sets in Excel, ColorThink, etc.

Perhaps the ColorMunki is indeed far more forgiving than the SpyderPrint Pro in actual use (I don't own a ColorMunki). In any case, the SpyderPrint Pro and the ColorMunki seem to be the only two choices for low-cost instrumentation these days. Both work when proper care is taken. For SpyderPrint, the Achilles heel appears to be hand-held operator mistakes, whereas for the ColorMunki, the end user has to have clear knowledge of short-term system color drift and make sure to wait long enough before generating the second, iteratively selected color target.

Cheers,

--

Mark McCormick
 
The only valid delta E test one can do is to "round trip" the data (which ColorThink can probably do; I don't own ColorThink, so I'm not positive).
Yes, it can do this.

I'm a bit confused for a number of reasons, primarily not having any experience with the product under discussion.

It appears there could be issues with the measurement data? That's why I suggested first and foremost a test of the device whereby one reads the same target several times and, using ColorThink (which the OP has), builds a dE report. If the device isn't consistently reading the targets, there's no need to go further with a round-trip dE analysis.
It seems in the case we are discussing here that the source file color values are being used as the reference values, not the predicted color values, and if so, very large delta E errors are indeed likely, even with excellent profiles.
Agreed there will be dE differences, but the ones reported by the OP seem a tad high. A max dE of 18 (the one worst patch out of only 140) is a red flag.

An average dE of over 5 with only 140 patches again seems a tad high. But I'd test the profile on actual images. An image like this really shows profile issues (Bill's Balls are great for this):


My custom profile for a 3880 vs. the canned Epson profiles (which are pretty good) shows how much better mine is when viewing the ball gradients. Here's a quick and dirty comparison. Notice the smoothness of the blue ball with the custom profile versus how Epson's is not only dark but applies a lot of 'black' instead of dark blue:



Also notice the dark, blackish ring the Epson profile puts on the left ball.

It sounds like the OP needs to gain confidence in the instrument readings first, then move on to the more advanced validation of the printed output.
Absolutely agree. That's why, if possible, he should measure the target used to build the profile (which I suspect has more than 140 patches) a couple of times and see what ColorThink reports.
As I said earlier, the challenge is to verify that there are no operator errors in the measured data sets.
So doesn't the product provide some kind of reference values for the target? If not, ColorThink can build one from the target itself. I assume it produces some TIFF file; let's say for this example it's only 140 colors. In Photoshop, resample that target using Nearest Neighbor so that each patch produces exactly 1 pixel. That can be loaded into ColorThink. Use the Extract All Colors command; it will produce what it calls a color list: Lab values for each individual color that creates a patch. Scan the target, and one should now have a text file with one Lab value per patch. Use CT with both color lists to produce a dE report. We can now see how the measured values compare to the reference values without looking at all at the subsequent profile, round trip or otherwise.
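(A sketch of that resampling step scripted with Pillow rather than Photoshop, assuming a hypothetical 14 x 10 grid of 140 patches with no gutters between them; real targets may need cropping first. The color list here stays in device RGB; ColorThink handles the Lab side.)

```python
# "One pixel per patch": nearest-neighbor resample of the target image,
# then dump a simple reference color list, one triplet per patch in
# row-major order (the order you scan, row by row). Grid size and file
# names are hypothetical.
from PIL import Image

COLS, ROWS = 14, 10                          # 140 patches, hypothetical layout
target = Image.open("target.tif").convert("RGB")
tiny = target.resize((COLS, ROWS), Image.NEAREST)   # one pixel per patch

with open("reference_colors.txt", "w") as fh:
    for r, g, b in tiny.getdata():
        fh.write(f"{r}\t{g}\t{b}\n")
```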

So I see two tests here: reference vs. measured color, and measured color of the target multiple times. High dE values in either test aren't good; there is no reason to continue building a profile or analyzing it until both tests produce better data.
I found the best way to do that was to make two independent measurements and compare them to confirm that all readings were good. ColorThink might be able to do that comparison for you.
Yes, described above. Also examine the dE report the OP provided from CT; it's really useful.
Perhaps the ColorMunki is indeed far more forgiving than the SpyderPrint Pro in actual use (I don't own a ColorMunki). In any case, the SpyderPrint Pro and the ColorMunki seem to be the only two choices for low-cost instrumentation these days.
All I can tell you is that during testing of the ColorMunki, I found it produced excellent profiles that got close to those I compared built from far more patches on an iSis with i1Profiler. It scans patches which are huge and difficult to mis-measure. It uses the same color engine as i1Profiler, no issues with blues, etc.

If we can take feedback from users posting to the web, there are far fewer complaints and less need to futz around with the ColorMunki than with the Spyder. That said, X-Rite has proven over the last number of years to be rather dysfunctional in terms of writing and updating their software, and very good at producing bugs, so no love lost here. The hardware is really good. The software? Not so much. The color science and color engine are really great. But a product that runs today on, say, OS X 10.5 could die with a dot release of the OS tomorrow, and it might take X-Rite months if not longer to fix the product, only to introduce other bugs. Their QA is pitiful, and if they have any outside beta testers anymore, they either don't know how to test the product or, worse, the X-Rite engineers don't pay attention to the reports. Knock wood, i1Profiler is working fine, and I have a backup (CoPrA) which produces excellent-quality profiles.

--
Andrew Rodney
Author: Color Management for Photographers
The Digital Dog
 
Not that this is a fair comparison, but to give folks an idea of what to expect from measuring the same target twice on a device (this one being an iSis XL), here's a dE report.

567 colors with an average difference of 0.19, tiny and to be expected.

The colors were designed to be difficult to measure.

--------------------------------------------------

dE Report

Number of Samples: 567

Delta-E Formula dE2000

Overall - (567 colors)

--------------------------------------------------

Average dE: 0.19

Max dE: 0.70

Min dE: 0.01

StdDev dE: 0.10

Best 90% - (509 colors)

--------------------------------------------------

Average dE: 0.17

Max dE: 0.29

Min dE: 0.01

StdDev dE: 0.06

Worst 10% - (58 colors)

--------------------------------------------------

Average dE: 0.39

Max dE: 0.70

Min dE: 0.29

StdDev dE: 0.09
 
..., then why wouldn't a single-patch reading mode on this device be blazingly fast? It's not.
I don't know about blazing, but I don't see much delay between spot readings done with an i1Pro 2 using ColorPort 2.0. No more delay than I would imagine it takes for the information LEDs to glow, telling you to move on to the next patch.
So I suspect that during those 200 pulses per second, the device is really only collecting a very small subset of wavelengths and using that to detect leading and trailing edge position. But I would be pleased to stand corrected :-)
All I have is from the manual:

During a scan measurement the i1Pro device is performing 200 measurements per second. The automatic patch detection of the device identifies useable measurements made on a patch and unusable measurements made between two patches. Valid measurements on a patch are averaged and the device reports the averaged result to the software. Thanks to this technology the virtual aperture of the i1Pro device adapts to the length of a patch. For best measurement results the length of the patches on your test chart should be selected based on the resolution of your printer. For printers with lower resolution or a grainy screening you should increase the length of the patches on your test chart.
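(As a rough sketch of the idea, here's what "average the valid on-patch measurements, discard the transitions" could look like; the stability heuristic below is a guess at the general approach, not X-Rite's actual algorithm.)

```python
# Rough sketch of "virtual aperture" averaging: of the ~200 samples/s taken
# while crossing a patch, keep only the ones that look stable (on-patch)
# and average them; samples spanning a patch boundary change quickly and
# are discarded. The heuristic is a guess, not X-Rite's actual algorithm.
import numpy as np

def average_on_patch(samples, tol=0.02):
    """samples: (N, bands) spectral readings taken while crossing one patch."""
    samples = np.asarray(samples, dtype=float)
    change = np.abs(np.diff(samples, axis=0)).max(axis=1)  # tick-to-tick change
    stable = np.flatnonzero(change < tol)                  # quiet = on-patch
    if stable.size == 0:
        raise ValueError("no stable samples: patch too short or moved too fast")
    return samples[stable].mean(axis=0)                    # one averaged reading
```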

Brian A
 
I don't know about blazing, but I don't see much delay between spot readings done with an i1Pro 2 using ColorPort 2.0. No more delay than I would imagine it takes for the information LEDs to glow, telling you to move on to the next patch.
Wow, then I gotta get me one of those new i1Pro 2 units!!! My venerable and highly accurate Spectrolinos take about 1.5 seconds to be ready for the next patch reading, so reading my 30-patch AaI&A color targets independently twice takes a few minutes. I have asked more than one X-Rite salesperson how much faster the new instruments are (both the i1Pro 2 and the much more expensive X-Rite eXact unit), and all they would say was "a little faster". Maybe they don't really appreciate the difference between making several hundred manual-mode readings over one second apart versus making several hundred readings fractions of a second apart! The SpyderPrint Pro is ready for the next measurement in less than 0.1 second, and I'd love to use it for my research studies because it would be a real time saver. However, I can't use it for accurate standards work (nor the ColorMunki, for that matter) because it doesn't comply with the new ISO 13655 standard for measuring OBAs under the M0 or M1 conditions, but in profiling practice and at its respective price point it's "good enough". The i1Pro 2 measures M0, M1, and M2 conditions but should be factory recalibrated periodically. For a few thousand more, the eXact does the same but can be calibrated to spec at any time in the field... you get what you pay for!

Wouldn't it be nice if we could all have a Ferrari or Lamborghini in the garage!

best,
Mark
 
If we can take feedback from users posting to the web, there are far fewer complaints and less need to futz around with the ColorMunki than with the Spyder. That said,
Right, but few of those end users actually know how to validate the quality of a profile, and they have no clue how to investigate short-term drift issues with dye-based printers/inks/media, which can undermine the quality of ColorMunki-built profiles. Hence profile quality suffers at the amateur/enthusiast level, and the vendors could do a better job explaining things, but people are generally happy. Your example of the canned Epson 3880 profile versus your custom-built profile is a prime example. Most people are very happy with the canned Epson profile's performance.

All that said, your point is duly noted that more profile-building threads seem to bog down on the Datacolor product as compared to the ColorMunki. That is in all likelihood indicative of general customer satisfaction with the product. I just feel it's important to let Datacolor SpyderPrint owners know that the product can get the job done if its idiosyncrasies are understood.

As Clint Eastwood (aka "Dirty Harry") so famously said, "A man's got to know his limitations."

best,

Mark

--

Mark McCormick
 
I don't know about blazing, but I don't see much delay between spot readings done with an i1Pro 2 using ColorPort 2.0. No more delay than I would imagine it takes for the information LEDs to glow, telling you to move on to the next patch.
Wow, then I gotta get me one of those new i1Pro 2 units!!! ...
They are currently at the lowest price they have been for a while. They were $1550 (i1Photo Pro 2) just 5 months ago, and now with the $300 rebate they're just $1.1k. Although I don't know if 'just' is the right word.

I would note that it is considerably slower using i1Profiler, maybe 1/4 to 1/2 second per patch.
Wouldn't it be nice if we could all have a Ferrari or Lamborghini in the garage!
Nah, I would be satisfied if the audio system worked in my jalopy. The engine doesn't sound so good with no audio system.

Brian A
 
Right, but few of those end users actually know how to validate the quality of a profile, and they have no clue how to investigate short-term drift issues with dye-based printers/inks/media, which can undermine the quality of ColorMunki-built profiles.
All I can say is, the ratio of users who appear to have issues with the Datacolor products appears (that's key) to be far higher than for ColorMunki users. There are no profile-tweaking provisions like we see in the Spyder product because they're not necessary.
Hence profile quality suffers at the amateur/enthusiast level, and the vendors could do a better job explaining things, but people are generally happy. Your example of the canned Epson 3880 profile versus your custom-built profile is a prime example. Most people are very happy with the canned Epson profile's performance.
And the canned profiles are actually quite good; consider that many were produced using older software technology (MonacoPROFILER), so if you built two custom profiles, one with that product and one with a newer and better color engine (i1Profiler), you'd see similar results. Also, Bill's Balls and similar synthetic images built to test profiles are not the same as the images most users print.
I just feel it's important to let Datacolor SpyderPrint owners know that the product can get the job done if its idiosyncrasies are understood.
Better, IMHO, to tell them to buy a better-designed product prior to purchase. Unfortunately, once they have the product, it's water under the bridge.

That said, it would be interesting to test the two products side by side. I've got the Munki collecting dust in a drawer; you've got the Spyder and have produced results you're happy with. If you're willing, I could try to remotely build a profile with the ColorMunki, but it would take two print sessions. And I'm not sure whether I'd have to leave the software open after measuring target A while you printed and sent back target B.
 
Wow, then I gotta get me one of those new i1Pro 2 units!!!
I owned the Lino for years, a great product at the time. But the i1Pro 2 isn't your father's Spectrolino! You really should try to get one. It also supports the new M-series measurement protocol, measures ambient light, etc.

What the product can't do is utilize a polarizer, which for some is really necessary.
 
... It also supports the new M-series measurement protocol, measures ambient light, etc.

What the product can't do is utilize a polarizer, which for some is really necessary.
I can live without the polarization... I use my current Spectrolinos when polarization measurements are needed. I do intend to transition the Aardenburg light fade testing database to the M1 condition for OBA-containing products, and that has put the i1Pro 2 or eXact on my radar screen for some time, but M1 versus M0, while important from a standards-conformance point of view, doesn't really inform us of anything about the media that we don't already learn from the older M0 specification, so I haven't considered it a must-have feature.

On the other hand, I'm so weary of doing tediously slow hand-held measurements that any instrument which can speed things up for me is a big deal. Pity that X-Rite marketing folks don't understand that many of us in the industry have to have total control over target geometry layout, and we can't use scrambled patch targets with weird line spacers and such between the patches. We have to use hand-held instrument measurements for much of our work. So the speed of manual measurement acquisition is a really big deal.

Thanks for tipping me off to the spot-measuring speed improvements in the latest i1Pro 2 device. It will be worth its weight in gold to me if it cuts my measurement time in half.

best,
Mark

--
Mark McCormick

http://www.aardenburg-imaging.com
 
Pity that X-Rite marketing folks don't understand that many of us in the industry have to have total control over target geometry layout, and we can't use scrambled patch targets with weird line spacers and such between the patches.
Can't help you with line spacers, but you can absolutely deal with non-scrambled patches with the i1Pro depending on, yup, the spacers and whoever is creating the targets.

Or just get an iO with the i1Pro 2; it's a Spectrolino/SpectroScan without all the noise and much faster.

But heck, drop some real bucks on an iSis. No spacers, no scrambling, set it and forget it.
 
Or just get an iO with the i1Pro 2; it's a Spectrolino/SpectroScan without all the noise and much faster. But heck, drop some real bucks on an iSis. No spacers, no scrambling, set it and forget it.
We are definitely OT here, so sorry to the other forum readers, but it's never been clear to me that an iSis will meet my specific needs. I need X-Rite to give me a demo unit for a week or two to know for sure, but so far my X-Rite customer experience, possibly like yours, has been very poor, with no answers to my questions. I hate to spend a lot of money only to find out the iSis unit doesn't meet my needs. The issue is how compact I can make an iSis-compatible AaI&A test target. ColorPort seems to have a bit of a mind of its own, and it's not optimized to conserve precious real estate on the print media. Bigger targets mean fewer targets in test on very dear light-fade-testing-unit real estate. Hand-held instruments get one down to the pure essence of color patch real estate.

The iO unit is a sliding, physical-contact instrument, good for one or two measurements, but ultimately it damages the surface of delicate inkjet media. I have to make multiple reads over long periods of time, so the iO unit is definitely out of the question. The iSis might work, but again, as noted above, I'm just not sure.

best,

Mark
 
Yep, definitely a hijacked thread, but with many lurkers. I give back what I can, but never cease to learn. Hijacked or not, it can still be very useful for many, but perhaps not to the OP.

The users who post have all reported abrasions with thicker papers and the i1iSis, and also with substrates like canvas. It is only rated to 254 g/m² paper, but appears to be fine with 310 g/m² paper, providing you don't want to measure it a second time.

For a mere $2.5k, there is the i1iO, which automates the i1Pro 2.

Brian A
 
I have done a very careful measurement of the target and a careful analysis, focusing on a couple of observations based on repeatable results. I am now getting much more accurate readings from the probe. I had been using the plastic guide to help me measure spot values more quickly, even pressing down hard as I took each reading. This plastic "thingy" is warped, and no manner of use improves the results. So I threw it away and measured patch by patch very carefully, with the probe right in the middle of each patch. This took some time, but it was worth the effort considering the nice improvement in the results. After I measured each row, I compared the measured color on the screen directly with that of the printed target, then remeasured the patches that looked wrong. I did this for the one-sheet target that I used. I then read the entire target once again and found I obtained the best results with the first pass.

Further analysis allowed me to make some observations. The colorimeter had problems with the dark green patches; on these patches there was a shift in hue that is noticeable to a critical eye. The probe also had problems measuring some of the more saturated colors on the printed target. I reread these areas of color several times to check for consistency and ended up with the same results.

I used several reference images and some other images that I thought would challenge the profile. Once again, the major area in need of improvement for the SpyderPrint is the dark greens. I do think the green sensor is not as sensitive as it should be in my particular colorimeter. Also, despite my efforts, there still appear to be a couple of patches that were misread, with no way to identify which patches hold the offending measurements. But if it were not for the problem with dark green colors, the profile generated by SpyderPrint would be as good as or even more accurate than the ColorMunki, without any customizations on the advanced screen. Even so, the differences are not that large, perhaps one or two Delta-E units. By far the best rendering intent for SpyderPrint is the Saturation setting, the one that Datacolor has identified as the best rendering intent for their customers. The Perceptual rendering intent can be good too with this package. In comparison, the best rendering intent for the ColorMunki is Relative Colorimetric. When it comes to Perceptual, the SpyderPrint seems to do a better job in many cases. I have been referring to the stats that ColorThink Pro provides on different images. What I ended up doing is bumping the Saturation slider and the Green slider for slightly better results, but this did not fix any of these problems.

So, IMHO, the solution was to throw away that plastic "thingy", take careful measurements while comparing the results to the printed target row by row, and, for slightly better results, boost the saturation setting in the advanced screen a couple of units. I also found that my particular colorimeter has a problem reading dark colors with a green hue, and that no matter how careful I am, there will be a couple of measurements that bias the accuracy stats.

All of these are my observations; maybe someone else will notice different results with their probe. And remember that this is using the 225-patch High Quality target. Now, was this worth the effort? I must say this effort was not worthwhile to me in comparison to the ColorMunki approach. But those who have a knack for such things can get good results from SpyderPrint.

I hope this turns out to be worth reading. Oh yes, the gamuts between the two are virtually identical. And if, after taking the first few measurements, the colors are not coming out accurately enough, go back and recalibrate the device.

PS: I numerically examined the target that I used. There appear to be the same number of colors printed at various levels of luminance, but as the luminance decreases, coverage of the color spectrum gets spotty. I think this could be improved. FWIW.

--
With sincere regards, Bob
 
I was mistaken: I was using the Perceptual rendering intent for both profiles without realizing it. With some images, the Perceptual rendering intent of the SpyderPrint is a bit better than the same with the ColorMunki. But after going through dozens of images obtained from the Internet, the ColorMunki appears on average to be the more accurate profile when using the Relative Colorimetric intent, and in a few cases by a large margin.

My apologies.

--
With sincere regards, Bob
 
Interestingly, the latest iteration of i1Profiler almost negates the need for patch scrambling. By simply reversing the x-y axis, it is rare for two similarly colored patches to end up next to one another.

1.6.1 is as buggy and ill-conceived (UI-wise) as the last version, but it does have benefits.

Brian A
 
