Interviewer: sCMOS cameras are now the most common type of camera technology used for scientific imaging. In this interview, we will look at how and why these cameras have come to be so popular with Dr. Colin Coates, who has been involved in the development of sCMOS cameras at Andor Technology.
So, you are part of the group that was involved in the development of the first sCMOS cameras. Can you describe how the development of sCMOS came about?
Dr. Coates: So this goes back to the mid-'90s, really. CMOS and CCDs were developed, originally, back in the late '60s into the early '70s, but then CCD technology became dominant. One of the reasons for this is that CCDs basically had better image integrity and image quality, generally better performance than CMOS, for decades. But we noted that things were changing in the '90s. There were enough technological developments out there. A lot of the research was funded not by the need for enhanced scientific usage, but by mass commercial developments feeding into mobile phones and video cameras. Nevertheless, a lot of this technology and these advanced fabrication techniques were being put in place. So we, ourselves and a couple of partners in the scientific field, did see an opportunity to harness these developments and technological enhancements into something that could actually be quite useful for science, and to make CMOS a valid technology for scientific use.
So, it was actually us and a couple of key partners: Andor, a German camera company called PCO, and a U.S. company called Fairchild Imaging, part of BAE, which carried a lot of expertise on the sensor-development front. So we decided to join forces, pool some of our funding, and develop the first real, proper camera of this kind, which we then called Scientific CMOS. And we called it Scientific CMOS deliberately, to differentiate it from the CMOS which had been developed decades earlier, to say, "Yes, this is something different. This is something which actually can perform really, really well."
So, we saw a lot of opportunity here, because the noise characteristics in particular were looking very exciting. At a much faster read-out speed, it was showing way less noise than a CCD could manage at anywhere even approaching a tenth of those read-out speeds. This opened up a lot more application possibilities for us, basically.
Interviewer: So you mentioned CCDs, but what were the specific benefits that these new sCMOS cameras provided?
Dr. Coates: The way that we positioned it initially was really focusing on this idea of achieving lots of performance parameters simultaneously. I think the technology that CMOS, or sCMOS, really impacted the most was CCD technology.
CCDs were very dominant across a lot of application areas up until this point. And if I take microscopy, or fluorescence microscopy, as the example, a ubiquitous sensor out there would have been what was called the Sony ICX285. It had a 6.5 μm pixel and maybe about 8 or 9 e- read noise at a speed of about 10 MHz pixel read-out, which would give anywhere between 10 and 20 frames a second. And then if you wanna push CCDs faster, you impact their read noise. It's like the grassy noise floor gets larger and larger the faster you push it. And this is something that CMOS did very differently. There's massive parallelization in the read-out structure, along with some just very intelligent pixel design architecture as well. And all this serves to drastically minimize this grassy read noise floor while still being able to read out at a tremendously fast rate.
So, this is what it really brought to the table. It was a way of saying, "Well, we can give you quite fast dynamics." It lets you look at dynamic processes, fast-moving processes, or even moderately fast-moving processes, while giving you even lower read noise than you were used to with CCDs. So, if a CCD was getting, say, 8 or 10 e- before, we were reading out faster, getting faster frame rates, and getting the noise down to about 1 or 2 e-, and this was really quite significant.
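The practical impact of that noise reduction can be sketched numerically. This is a minimal illustration, assuming a simple shot-noise-plus-read-noise model and an arbitrary low-light signal level; the noise figures are the rough values quoted above, not specifications of any particular camera:

```python
import math

def snr(photoelectrons, read_noise_e):
    # Signal-to-noise ratio for one pixel: shot noise plus read noise only.
    # Dark current and fixed-pattern noise are ignored for simplicity.
    return photoelectrons / math.sqrt(photoelectrons + read_noise_e ** 2)

signal = 20  # photoelectrons per pixel, an illustrative low-light level
print(f"CCD   (8 e- read noise):   SNR = {snr(signal, 8.0):.2f}")
print(f"sCMOS (1.5 e- read noise): SNR = {snr(signal, 1.5):.2f}")
```

At this signal level the sketch gives roughly SNR 2.2 for the CCD-like noise and 4.2 for the sCMOS-like noise, which is the kind of near-doubling of image quality in low light that made the technology significant.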
Then there are other technologies which we built into Scientific CMOS to enhance dynamic range as well. All the sCMOS detectors that have been built since then have what's called a multi-gain architecture on the pixels, which means you can do something you couldn't do with CCDs before. With CCDs, you had to optimize in the software for either low light or bright light, basically maximizing either sensitivity or the well depth available. You had to choose one or the other. With this technology on sCMOS, you can have both at the same time in the same image. You can go from the read noise floor right up to the full well depth in one single image, which massively expands the dynamic range.
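The dynamic range being described is simply the ratio of full well depth to the read noise floor. A small sketch, using assumed round-number sensor values rather than the specs of any real product, shows why a lower noise floor expands it so much:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    # Dynamic range = full well capacity / read noise floor, in decibels.
    return 20 * math.log10(full_well_e / read_noise_e)

# Assumed, illustrative values only (not real product specifications):
print(f"CCD-like   (18,000 e- well, 8 e- noise):   {dynamic_range_db(18_000, 8.0):.0f} dB")
print(f"sCMOS-like (30,000 e- well, 1.5 e- noise): {dynamic_range_db(30_000, 1.5):.0f} dB")
```

With these assumed numbers the CCD-like case spans about 2,250:1 (~67 dB), while the low-noise multi-gain case spans about 20,000:1 (~86 dB) in a single image.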
Interviewer: One of the recent developments for sCMOS is back illumination. What is the difference between a back-illuminated and a front-illuminated sensor, and how does this affect their imaging performance?
Dr. Coates: Yeah, in a sense it's not much different from front-illuminated versus back-illuminated in CCDs. The problem with front illumination is that you tend to have some pixel architecture which is necessary for a pixel, and for a sensor, to function, but which impedes photons making their way into the active area of silicon. The technique is then to flip the sensor onto its reverse, putting all that architecture underneath, out of the way, and then thin the silicon down to expose the active area. It's quite an expensive technique, which is why back-illuminated sensors tend to be a bit more expensive than front-illuminated ones. Basically, it then gives you unimpeded passage of the photons through to the active silicon, and that's why quantum efficiencies exceeding 90% are commonplace in back-illuminated sensors.
I mean, having said that, it would be wrong to look at back-illuminated sensors and say, "Yeah, that will answer everything. That's all you'll ever need." For sure you have the quantum efficiency enhancement, but sensitivity is a function both of how efficiently photons are captured and of whether the photoelectrons that are created get buried in read noise, which means that you can't forget about the noise floor either. Certainly, the current generations of back-illuminated sensors don't actually have as low a noise floor as the generation of front-illuminated sensors that we first launched from Fairchild Imaging / BAE.
So, credit where credit is due, there has not yet been a technology which convincingly combines back illumination and very, very low noise in one. So there's always a trade-off there, coupled with the fact that we have been seeing some extraordinarily high quantum efficiency from front-illuminated devices. A number of years ago, we launched one in our Zyla product, the Zyla 4.2, with 82% quantum efficiency. That's not far short of what you're getting with back illumination, and that's one of the sensors which gives you the lowest possible read noise. So it would be incorrect to say, "Take a back-illuminated sensor, regard it as the best of the best, and only ever use that." It's never that straightforward. There are always trade-offs to think about.
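That trade-off between quantum efficiency and read noise can be made concrete with a small sketch. The parameter sets below are hypothetical, chosen only to illustrate the point, and are not the specifications of the Zyla 4.2 or any back-illuminated product:

```python
import math

def snr(incident_photons, qe, read_noise_e):
    # SNR with quantum efficiency folded in; shot noise plus read noise only.
    electrons = incident_photons * qe
    return electrons / math.sqrt(electrons + read_noise_e ** 2)

# Hypothetical parameter sets for illustration only:
front = {"qe": 0.82, "read_noise_e": 0.9}  # low-noise front-illuminated
back = {"qe": 0.95, "read_noise_e": 1.6}   # higher-QE back-illuminated

for photons in (5, 50, 500):
    f, b = snr(photons, **front), snr(photons, **back)
    winner = "front" if f > b else "back"
    print(f"{photons:4d} photons: front {f:.2f}, back {b:.2f} -> {winner} wins")
```

With these assumed numbers the lower-noise front-illuminated sensor actually wins at the dimmest signal level, while the higher-QE back-illuminated sensor wins once the signal grows, which is exactly why "back-illuminated is always best" is too simple.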
Interviewer: Okay, so you've talked a little bit about the quantum efficiency and improvements with the back-illuminated sCMOS cameras, what other developments have happened over this time?
Dr. Coates: Yeah. Well, I suppose over the years since we first developed sCMOS, in truth we put a lot of new functionality into the very first sensors, and for the microscopy market there have largely been variations on a theme ever since, you know. Beyond back illumination, there really hasn't been a lot of innovation on the microscopy front, just different variations of sensors. What we tried to do is focus on some different markets that we felt could avail of this technology, and some of that was in the physical sciences. For example, we put fiber-optic coupling onto the sensors and launched a version of our camera which could be adapted easily for things like X-ray tomography, you know, indirect detection of high-energy X-ray photons, things like that.
More recently, we launched a camera called the Balor, which is a very large-area Scientific CMOS detector. That takes advantage of the fact that, again, you have this massive multiplexed read-out on a sCMOS that you would never have had on a CCD, which means you can take really large, really high-resolution arrays but still read them out tremendously fast with very little noise. Now, that's really, really unique, and it actually benefits quite a few areas of astronomy. With a high-resolution CCD, which we'd use in astronomy to image a lot of the sky at once, which is very important for astronomy, you can apply an exposure time, but then you typically have about 45 seconds of absolute dead time, during which you're just sitting there reading the image off the sensor. To put that in contrast, you can get that down to less than about 40 milliseconds, typically, with a sCMOS for a similar 17-megapixel array. That's massively different! What we've done is take out 45 seconds of dead time, during which photons were just being wasted, and make that time available again.
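The throughput gain from eliminating that dead time is easy to quantify as a duty cycle. This is a rough sketch using the read-out figures quoted above and an assumed, arbitrary exposure time:

```python
def duty_cycle(exposure_s, readout_s):
    # Fraction of wall-clock time actually spent collecting photons.
    return exposure_s / (exposure_s + readout_s)

exposure = 10.0        # seconds per frame (an assumed survey exposure)
ccd_readout = 45.0     # ~45 s dead time quoted for a 17-megapixel CCD
scmos_readout = 0.04   # ~40 ms quoted for a comparable sCMOS
print(f"CCD duty cycle:   {duty_cycle(exposure, ccd_readout):.1%}")
print(f"sCMOS duty cycle: {duty_cycle(exposure, scmos_readout):.1%}")
```

For a 10-second exposure, the CCD-like case collects photons only about 18% of the time, while the sCMOS-like case is above 99.5%; shorter exposures make the gap even more dramatic.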
So you can increase the throughput of measurement; follow more dynamic processes, which are actually surprisingly prevalent in the astronomy field, it's not all slow-moving up there; or even extend your exposure times and increase your signal-to-noise into this previously dead time that we've just released. This whole concept of very fast read-out of high-resolution images is very, very interesting for astronomy in general, and we expect it to develop in the future. The Balor was basically the first release; it will probably be the first of a longer-term line of large-array sCMOS sensors.
Interviewer: So you've spent some time talking about sCMOS, but what about EMCCDs? What does the future hold for this camera technology in light of the developments that you've talked about with the sCMOS camera range?
Dr. Coates: Yes, electron-multiplying CCDs are still a very, very valid technology, really. Prior to the introduction of sCMOS, EMCCDs were really the solution for overcoming the problem I described at the start, of having high read noise when you want to read out fast. So, yes, they are essentially CCDs with an additional technology on board for amplifying the signal above this big, grassy noise floor that you normally suffer from reading a CCD out fast. But the key difference with EMCCDs is that, because you can apply this amplification factor, and the amplification factor can actually be quite large, they are truly single-photon-sensitive. And they are also back-illuminated.
All commercial EMCCDs on the market are back-thinned, so you're combining, you know, the capability to capture more than 90% of your incident photons with the fact that even a single photon event will appear as a spike on top of your noise floor. So they are single-photon sensitive. You can use them for photon counting, which is fantastic for things like, you know, entangled-photon quantum experiments. But beyond that, they simply remain the most sensitive technology out there. There are lots of application benefits from that across the life sciences and physical sciences. For applications like single-molecule detection in the life sciences, you can watch, in real time, you know, two single biomolecules interacting with each other. It's still absolutely invaluable technology, for sure. The numbers of EMCCDs that we used to ship have been impacted since we launched sCMOS. We absolutely expected that, but, you know, it's been far from cataclysmic for EMCCDs, because a lot of people have realized that, yes, for these really, really ultra-sensitive techniques, we still do need this sort of technology.
Interviewer: And I guess, just to finish off, what do you think the future holds for sCMOS? From a development point of view, an application point of view...
Dr. Coates: Yeah, yeah. It's often a tricky one to answer, this one, but there are a couple of, I guess you could call them, Achilles' heels for CMOS technology which, in time, I'm sure will be addressed. One of them is that they typically have higher dark current than CCDs, which puts an application-flexibility limitation onto them. It means right now you would purchase a sCMOS when you know you just want to do dynamic stuff, you know, stuff that maybe goes at frames per second rather than seconds per frame. But once you try to tailor your solution to say, "Okay, I want to record, you know, 40-second images for a big, long astronomy acquisition," now you start to reach the limitation of sCMOS, because the dark current is, you know, maybe a couple of orders of magnitude higher. So it doesn't lend itself quite as well as a CCD would for that sort of thing. But there's nothing to say that, with enough time and development focus, that won't be overcome. So I'd expect some movement in years to come.
Commercial Scientific CMOS detectors right now don't do true on-chip pixel binning, whereas CCDs do; they have an architecture to do so quite readily. That would be a benefit as well, because right now with Scientific CMOS, say I want to do 2x2 pixel binning to create a superpixel of four times the area, which is very commonplace for all types of imaging. Sometimes we just need that flexibility to, you know, compromise the resolution but gather so many more photons in really, really low-light scenarios. But if we do that on a Scientific CMOS, you're basically doubling the read noise, whereas when you do it on a CCD, the read noise remains unimpacted. So that's a development which would be well appreciated on CMOS detectors. And, again, I think it's surmountable.
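The "doubling" figure follows directly from how the two binning schemes combine noise, and can be sketched in a few lines. This is a simplified model of the principle described above, not a simulation of any particular sensor:

```python
import math

def binned_read_noise(read_noise_e, n_pixels, on_chip):
    # On-chip (CCD-style) binning sums charge before the single read,
    # so read noise is applied once. Digital (sCMOS-style) binning sums
    # already-digitized pixel values, so the independent per-pixel read
    # noises add in quadrature: sigma * sqrt(n).
    return read_noise_e if on_chip else read_noise_e * math.sqrt(n_pixels)

sigma = 1.5  # e-, an illustrative sCMOS-class read noise
print(binned_read_noise(sigma, 4, on_chip=True))   # CCD-style 2x2: unchanged
print(binned_read_noise(sigma, 4, on_chip=False))  # digital 2x2: doubled
```

For a 2x2 superpixel, sqrt(4) = 2, which is exactly the factor-of-two read-noise penalty described for digitally binned sCMOS.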
I've already mentioned, you know, our Balor large-area sensor; I would readily imagine the future will hold the introduction and development of even larger sensors again. Astronomy, in particular, just loves to go large. They wanna see as much of the sky as possible, so there will still be a motivation to do that. On the microscopy application side, I would likely see more development of even smaller pixels. A lot of sensors today are standardized on 6.5 μm pixel sizes, which first became popular with that Sony ICX285 CCD I mentioned earlier, but I could readily see more development of smaller-pixel devices in the future. That just lends itself to the use of lower-magnification objectives and therefore high-resolution imaging of larger and larger samples as well.