Medium Format Forum


H3D

Hi Jurgen,

> The instructor (not an EIZO man) told us that the EIZO is the only screen on the market which can produce 16 bit colour depth images.

I don't believe he is correct. There is no video display spec that actually displays 16 bit per color, at least for standard display hardware. What he may have been confused by is what is called "16 bit mode" and is 5:6:5, or 5 red, 6 green and 5 blue bits per pixel. This is a standard display mode.
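As a sketch of what that "16 bit mode" actually packs, assuming the standard 5:6:5 layout described above (the function name is mine, just for illustration):

```python
def rgb888_to_rgb565(r, g, b):
    """Pack an 8:8:8 pixel into the 16-bit 5:6:5 display format."""
    # Drop the low bits of each channel: 5 red, 6 green, 5 blue.
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

# White keeps all retained bits set; black stays zero.
print(hex(rgb888_to_rgb565(255, 255, 255)))  # 0xffff
```

So "16 bit mode" is 16 bits *per pixel*, not 16 bits per color, which is presumably where the confusion came from.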

A typical DVI implementation supports a maximum of 8 bits/color or 24 bits/pixel normal (called 8:8:8), which is what the DVI spec allows.

Even if it did "support" 16 bits/color, your eyes are simply not capable of distinguishing that many tones. What you were seeing was probably a difference between the two monitors, having nothing to do with 16 bit/color.

Was this an LCD or a CRT? As a note, the Eizo "FlexScan" models, which appear to be their high-end monitors, only display 16.77 million colors, which would be 24 bits/pixel or 8 bits/color. I also checked their FlexScan L997, which they describe as "The high-end choice for professionals using CAD/CAM, DTP, and other imaging applications with its S-IPS panel and wide range of color control capabilities," and it is just like every other FlexScan I looked at: 16.77M colors, or 24 bits/pixel, or 8 bits/color.

If you know the model number, I can look it up...but it looks like Eizo doesn't make a monitor (at least LCD, the one CRT I looked at, the FlexScan T966, didn't specify the number of colors, much to my surprise) that claims to actually support 16 bits/color.

Regards,

Austin
 
Hello Jürgen.
I believe Austin is correct about the Eizo.
I have an Eizo ColourEdge CE240w monitor... it is beautiful (yes better than apple) but it is not Eizo's expensive flagship monitor, the ColorEdge CG221.
http://www.eizo.com/products/graphics/cg221/index.asp

Quote from the website:
"The 16-bit internal calculation of the CG221 produces grayscale rendering that is on a par with high-end CRT monitors. The result is not only a much greater degree of detail in dark areas, but overall uniformity of brightness and color throughout the entire screen."

While this is no doubt true, I don't think it means that the monitor outputs 16 bit colour, but that the calculations to display the 8 bit colour are more accurate.
I may be wrong of course.
It can display the full Adobe RGB colour space... but that is a different topic.
A very nice monitor if you can afford it.
Even though I do high end retouching, for me the larger 24inch CE240 with a good spectrophotometer to calibrate it is more than enough.
regards
Andrew
 
Hi Andrew,

> "The 16-bit internal calculation of the CG221 produces grayscale rendering that is on a par with high-end CRT monitors."

->GRAYSCALE<-

???

I also find "internal calculations" interesting. The only "calculations" I can think of, aside from tonal correction, that they would have to do in an LCD monitor (I've designed LCD controllers before believe it or not ;-) is resizing (as in, displaying a non-native resolution), but that's not what they mean when they refer to "rendering", so I don't know what they are talking about here. It may just be marketing speak, and a red herring.

They also have a 12 bit LUT (Look-Up Table), which is basically a tonal curve, and it makes sense to do those "changes" in a larger bit space. This makes sense.
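A minimal sketch of why doing the curve in a wider bit space helps, assuming a hypothetical gamma-style tonal curve (the names are mine, not Eizo's):

```python
def build_lut(curve, in_bits=8, lut_bits=12):
    """Tabulate a tonal curve at a higher bit depth than the input."""
    levels = 1 << in_bits
    max_out = (1 << lut_bits) - 1
    # Each 8-bit input level maps to one of 4096 possible outputs, so
    # steep or shallow parts of the curve lose fewer distinct tones
    # than they would if rounded back into only 256 output levels.
    return [round(curve(i / (levels - 1)) * max_out) for i in range(levels)]

lut = build_lut(lambda x: x ** 0.45)  # hypothetical gamma-like curve
```

The final output is still 8 bits/color; the extra LUT precision just reduces rounding loss inside the correction.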

But, the spec sheet clearly says it displays 16.77 Million colors, so it's only an 8 bit per color/24 bits/pixel display.

Regards,

Austin
 
Austin,
You're right IF you use the Kodak examples. After I read the specs, it does say that the accuracy can be improved with modifications to the circuitry shown, and I think the design boys at Imacon stepped up to the plate and did just that.

As for the rest of us photography buffs, TRUE 16 bit digital photography is here: Hasselblad has done it, and so have PhaseOne and Leaf. This imaging chip is analog, and yes it's the same pie, but Hasselblad has just figured a way to slice it into smaller pieces. Remember the old days in the darkroom and test strips for exposure? Well, it's the same thing. A 5-segment test strip done at 2 second intervals, compared to a 10-segment test strip done at 1 second intervals, or a 12-segment projection print scale exposed for 10 seconds: which one do you think is more accurate? From dark to light on a pixel, 8 steps, 12 steps or 16 steps, the smaller steps are more accurate. The sensor is the sensor and it's analog, meaning there are no steps or bits; the back manufacturer has to design the electronics to take the pixel's signal and store it digitally, and 16 bit just means it needs to be more complex and more accurate.
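The test-strip analogy boils down to quantization step size: more bits slice the same signal range into smaller pieces. A quick sketch, assuming an ideal converter:

```python
def step_size(full_scale, bits):
    """Size of one quantization step for an ideal A/D converter."""
    return full_scale / (1 << bits)

# The same 0..1 signal range, sliced coarsely or finely.
print(step_size(1.0, 8))   # 256 steps
print(step_size(1.0, 16))  # 65536 steps
```

Whether those finer steps carry real information is the separate question being argued in this thread.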

THE BOTTOM LINE IS...
16bit digital photography is alive and well and WORTH EVERY PENNY.

Marc, I can assure you that all your hard work and dedication to the 16 bit world is worth everything you spent on it, both time and money.

Simon,
Your assumptions are correct about the video and monitors. As for film: a few years back AGFA did some testing and found that in theory you would need a 34MP digital camera to equal one frame of 35mm ISO 50 film scanned to the point that the film grain got larger than the pixels, and I'd hazard a guess that the film has at least 16 bits of color. So if we say that 6x6 is 2.5 times the size of 35mm, we'd have an 85MP A12 film-scanned digital back.

Time to get off my soap box...
Franc
 
Franc,

The Agfa theory is long gone.
Many photographers have tested, independently of each other, that a Hasselblad with 25 MP Sinar backs and selected lenses can equal or improve on 4"x 5" quality.
It takes much less to equal 35 mm film.

Paul
 
Thanks Colin - I think I have to agree with you about film's organic qualities and that to measure anything you need an instrument with the capacity to detect and measure!

Jurgen, I'm glad the colour workshop went well for you. If you master colour 1/2 as much as you have mastered B&W, you will have done very very well indeed.


Oh, and I nearly forgot to say how much I liked Colin's $25 camera shot - taken at an Aussie BBQ beer fest?
 
Austin

It was an EIZO CG211. I have looked up the specs. Other members have done so as well, so there is no need to repeat, because it is described above.
What I don't know is if the guy who told us the 16bit story was just parroting or if he was convinced of what he told us.
But in any case, this is the end of the 16bit discussion for me.


Simon
Unfortunately, I do not have the colour management under full control yet. But I am working hard on it. I do need to calibrate my printer, because the colours I obtain in my prints are awful.
The calibration hard/software is rather expensive. (X-rite)

Colin's HOLGA digiback produces nice images. Thanks Colin.
I tried to find the product on the internet, but was not successful yet. :)
 
Jurgen,

If your monitor is correct and you use the standard ICC profiles for your printer and paper in Photoshop, you should be really close with the colours in the prints from your inkjet printer.

Franc
 
Franc

As this is far off topic, I would like to open another thread in the DIGITAL DARKROOM section.
I would be pleased to learn from your experience.
Regards Jürgen
 
Hi Franc,

> You're right IF you use the Kodak examples. After I read the specs, it does say that the accuracy can be improved with modifications to the circuitry shown, and I think the design boys at Imacon stepped up to the plate and did just that.

I don't find anywhere in the spec where it says the device specifications for dynamic range can be improved by any modifications. If you could point that out, I'd appreciate it. From my experience dealing directly with very similar devices and the actual developers/manufacturers of them, this simply cannot be done by modifying circuitry. The spec of the device IS the spec of the device, and manufacturers want to show their devices in the best light they can.

The only thing that can be done to improve noise/dynamic range performance is cooling, how much, I can't say without experimentation...but it would take more than would fit in a back that size to make it significant in my experience. I think they're lucky if they can actually meet the published Kodak spec given how small the back is and how minimal cooling they've provided.

The only thing I saw in the spec that can be "improved" is bandwidth, which has nothing to do with dynamic range.

> As for the rest of us photography buffs, TRUE 16 bit digital photography is here: Hasselblad has done it, and so have PhaseOne and Leaf. This imaging chip is analog, and yes it's the same pie, but Hasselblad has just figured a way to slice it into smaller pieces.

Unless you have specific documentation or test results that show this is possible, I stand by my claim that this is simply wrong.

> The sensor is the sensor and it's analog, meaning there are no steps or bits; the back manufacturer has to design the electronics to take the pixel's signal and store it digitally, and 16 bit just means it needs to be more complex and more accurate.

This is wrong. You're missing some very basic understanding of the operation of these devices, and that is noise. The device has a minimum noise floor. Your understanding of what analog means also seems amiss. Analog devices do in fact have "steps", per se, and those steps are the noise. You cannot "slice" (resolve) below the noise; it's just a fact of electronic design.

Systems can perform no better than the worst performing device in the system. In the case of digital image capture, you can't perform any better than the imaging device it self.
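The noise-floor argument can be put in numbers: an ideal converter gains roughly 6.02 dB of dynamic range per bit, so a sensor's measured dynamic range caps the number of meaningful bits no matter how wide the A/D is. A sketch (the 74 dB figure is the one cited elsewhere in this thread; the function name is mine):

```python
import math

def effective_bits(dynamic_range_db):
    """Meaningful bits implied by a dynamic range in dB (~6.02 dB per bit)."""
    # One bit doubles the resolvable signal range: 20*log10(2) ~= 6.02 dB.
    return dynamic_range_db / (20 * math.log10(2))

print(round(effective_bits(74.0), 1))  # ~12.3 bits, well short of 16
```

Bits past that point encode noise, not tonal information.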

> THE BOTTOM LINE IS...
> 16bit digital photography is alive and well and WORTH EVERY PENNY.

That is wishful thinking, I'm afraid, at least with respect to the Hasselblad back, as well as any other "consumer" digital camera.

Regards,

Austin
 
"The only thing that can be done to improve noise/dynamic range performance is cooling ..."

In the past three iterations of the Flexcolor firmware/software, both these aspects have visibly improved. Same with the software/firmware for my Leaf Aptus 75, which uses a Dalsa sensor (as does Sinar). There are well documented differences in the response of Hasselblad backs and Phase One backs which use the same Kodak sensor... not lesser or better, just different. Same with differences between Leaf and Sinar.

So, while the sensor may be a constant, what is done with the data afterwards seems plastic.

"Systems can perform no better than the worst performing device in the system. In the case of digital image capture, you can't perform any better than the imaging device it self."

I think I understand this concept, yet have experienced "better use" of the sensor's finite performance. Perhaps it's like saying the worst part of taking a highly corrected photograph is the limit of the lens to avoid aberrations... making it a weak link in that quest... yet the current firmware/software from Hasselblad corrects aberrations to such a visible degree that it appears as if by magic. Distortion is also eliminated when using lenses up to 80mm.
 
Hi Marc,

> "The only thing that can be done to improve noise/dynamic range performance is cooling ..."
>
> In the past three iterations of the Flexcolor firmware/software, both these aspects have visibly improved.

Firmware/software cannot improve the noise or dynamic range *specifications* or performance limits of the imaging device itself. See below.

> There are well documented differences in the response of Hasselblad backs and Phase One backs which use the same Kodak sensor... not lesser or better, just different.

Sure. The results you'll get are certainly *implementation* dependent... as in, one may have a cleaner power supply, or a better analog front end, or better shielding, or even better software/firmware (as you have indicated), but those are irrespective of the actual sensor's specifications/performance limits.

The specifications and performance limits (as outlined in the specifications) are as GOOD as it'll get (of course, there are minor variances amongst other units of the same imaging sensor), but that doesn't mean some design element, as outlined above for example, couldn't make them perform worse, or, very well, "different" as you say.

> So, while the sensor may be a constant, what is done with the data > afterwards seems plastic.

Absolutely, a very good way of putting it.

> "Systems can perform no better than the worst performing device in the system. In the case of digital image capture, you can't perform any better than the imaging device itself."
>
> I think I understand this concept, yet have experienced "better use" of the sensor's finite performance.

Better than what, though? Better than the imaging sensor specification says it's capable of? Probably not. Better than it was before (but within the specifications of the imaging sensor)? Sure.

> ...yet the current firmware/software from Hasselblad corrects aberrations to such a visible degree that it appears as if by magic. Distortion is also eliminated when using lenses up to 80mm.

That's got nothing to do with the imaging sensor though, it's post processing of the data after the A/D converter, and pretty deterministic at that.

FYI, there is a thing called "PRNU": PhotoResponse Non-Uniformity. Each sensor cell/element (photosite) is slightly different from another with respect to its output being equal to another cell's output given the same input, and of course, each sensor itself will be different.

This PRNU is a characterization/calibration that is done at the factory during manufacturing for each individual camera/back, by using an even field and measuring how "off" each sensor cell/element is, then providing a calibrated correction that is stored in the camera/back; this correction is then applied to the data just after it's read from the A/D. The same type of "characterization" and calibration can be done for lenses as well. But again, this is all done to the data after the Sensor->A/D path.
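A toy sketch of the flat-field idea behind PRNU calibration (a one-dimensional list stands in for the sensor, and the function names are mine):

```python
def prnu_gains(flat_field):
    """Per-photosite gains from an evenly lit calibration frame."""
    mean = sum(flat_field) / len(flat_field)
    # Cells that read low get gain > 1; cells that read high get gain < 1.
    return [mean / v for v in flat_field]

def correct(raw, gains):
    """Apply the stored correction just after the A/D readout."""
    return [v * g for v, g in zip(raw, gains)]

gains = prnu_gains([64.0, 128.0])
print(correct([64.0, 128.0], gains))  # [96.0, 96.0]
```

The gains are computed once at the factory and then applied to every frame, exactly as described above for the real backs.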

Thanks, this is great info. I didn't know that Hasselblad applied correction for the lenses IN the back (or does it do it in the software on the host computer?). Pretty slick (either way). I'd like to see an image with and without this "feature" enabled. I assume you can still get a raw file out of this back?

Regards,

Austin
 
Hi Marc,

> "Systems can perform no better than the worst performing device in the system."

An example that I feel is appropriate: a lens has a certain lp/mm specification that it's been tested to (characterized). Some examples of the same lens are a little better, some examples are a little worse. But you won't get an image on the film (or digital back) that exceeds that specification by any significant amount.

A number of things in the overall system can reduce the lp/mm you get from it though, and in fact, typically do. Like camera shake, focus accuracy, film grain/image sensor resolution, dirt on the lens, or a host of other things that you are probably all too familiar with.

But the point is, no matter what camera body you put that lens on (500C, 500C/M, 2003, 203), what film or imaging sensor you use, whether you anchor the camera to a 200kg block or wave it in the air while taking the picture, you can't get better lp/mm than the lens has been characterized/specified to provide... but you can get a lot worse.
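A common rule of thumb (not from the post above, just a standard approximation) captures this: component resolutions combine roughly as 1/r² terms, so the system always lands below its weakest link, never above it.

```python
import math

def system_lpmm(*components):
    """Rule-of-thumb combined resolution: 1/r_sys^2 = sum of 1/r_i^2."""
    return 1 / math.sqrt(sum(1 / r ** 2 for r in components))

# A hypothetical 80 lp/mm lens on 100 lp/mm film: the system resolves
# less than either component alone.
print(round(system_lpmm(80, 100)))  # ~62 lp/mm
```

Adding more components (shake, focus error modeled as equivalent lp/mm figures) only drives the result lower.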

Regards,

Austin
 
> I would really like to get a digital back for my Hasselblad (553/ELX and/or 501CM), but don't have funds for the "newer" generation backs. $10K USD would completely wipe my photo budget for the next couple of years (I am a serious amateur, not a pro)!!!! How do the older backs (like Kodak's discontinued Pro Back model) fare in comparison?
 
Robert, they are excellent. The Kodak backs were ahead of their time. I used one on a 555ELD and another on a Contax 645. When I got a Canon 1DsMKII, it was touted as being as good as a digital back ... which my first job shooting jewelry disproved. The Canon struggled with specular highlights, and the Kodak back handled them with ease.

Austin, not disagreeing with the capture device being a constant, just that the use of the information is getting better and better, which has accelerated the imaging abilities of these backs considerably. In other words, it seems the utilization has lagged behind the sensor performance. The question is how much more can be pulled out of the same sensor?

The lens corrections are a function of in-camera firmware and the software. What I don't quite follow is the notion that the end result cannot be any better than the base performance of the lens itself. I understand the resolving ability, but there are other aspects of lens performance such as distortion and aberration that are being made better after the fact. The lens itself doesn't perform as well as with these DAC firmware/software solutions. Put that same lens on any other body and it will fare worse than on my H3D using the DAC corrections.
 
Paul,
I think that scanner technology has come a long way as well and IMO a good 4x5 cannot be touched by any digital camera.

Austin,
I never said the dynamic range could be improved; I said the accuracy can be improved and the 74dB range can be cut into smaller pieces. I think it would be impossible to advertise 16 bit color and not truly have it. I believe it is there; everything I see tells my logical brain there is an improvement. Deny it if you like, but the proof is in the pudding. I know what I was taught and I know what I believe to be true, and that is that 16 bit color is here and you cannot change that fact. If it looks like a duck and quacks like a duck, it's a duck, and I think most here would agree.

Enough with the techno babble; I'm here for the photography, 8, 12 and 16 bit.
Franc
 
Hi Marc,

> The question is how much more can be pulled out of the same sensor?

That's why I suggested that there are tests that can be run to determine the performance of the "system" as a whole. Then you match that up with your sensor specifications, and see if it seems on par. If not, then you know there may be more to go. But, if the back is designed "correctly", as in, allows the raw output to meet the specifications of the sensor, then the back hardware is at least designed well enough to match the performance of the sensor. It would be a matter of software/firmware past that. Only a raw image can be used to test the hardware performance (sensor, A/D and possibly PRNU), since processing the image "taints" the results.
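A minimal sketch of the simplest such raw test, assuming you can extract unprocessed pixel values from the back (the helper name is mine):

```python
import statistics

def read_noise_estimate(dark_pixels):
    """Std-dev of a lens-capped (dark) raw frame approximates the noise
    floor of the sensor + A/D chain, before any software processing."""
    return statistics.pstdev(dark_pixels)

# Identical readings -> no measurable noise; any spread -> nonzero floor.
print(read_noise_estimate([100, 100, 100, 100]))  # 0.0
```

Comparing that measured floor against the sensor datasheet shows whether the back's hardware is leaving anything on the table.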

> What I don't quite follow is the notion that the end result cannot be any better than the base performance of the lens itself. I understand the resolving ability, but there are other aspects of lens performance, such as distortion and aberration, that are being made better after the fact.

That's why I picked resolution as the factor. Obviously, as we've discussed, some deterministic aberrations can be corrected for.

Regards,

Austin
 
Franc,

I respect your opinion but several other photographers who did this research as well came up with the same conclusions.

I don't care whether it is 8, 12 or 16 bit, as long as the results are good and get better all the time.


Paul
 
Hi Austin,

Being curious here, given the PRNU correction table: do image sensors exhibit 'drift' to any extent over time, due to aging or otherwise?

I know that chip manufacturing processes can yield rather different characteristics in chips coming from the production line. At least they did when I was involved with them (quite a while ago ;). But those were pure digital chips, no analog or mixed digital/analog. On digital-only chips the max clock frequency etc. vary because of this.

Wilko
 