Part of the Oxford Instruments Group

Microscopy School Lesson 4 - Microscopy Cameras - Comparing Camera Technologies and Matching them to Applications

In this, the second of two modules on the topic of imaging cameras, we explore how the different camera technologies compare against the parameters covered in the first module (Lesson 3 – Microscopy Cameras - Fundamentals of Digital Imaging and Sensor Technologies). We discuss the performance of the available camera technologies in terms of sensitivity, field of view and speed.

The sensitivity needed depends on the application – high sensitivity is not always required, while EMCCD remains essential when sensitivity is the priority. Modern microscopy cameras are designed around providing a good field of view from a microscope, with sensors generally sized accordingly. As with the other parameters, field of view cannot be considered in isolation: sufficient camera speed is also required for effective operation at large fields of view. The imaging speeds listed for a camera do not straightforwardly determine its real-world speed. Other factors come into play, such as the sensitivity of the camera, the read-out mode and the background noise levels.

A number of applications are summarized against their main imaging requirements. Suitable camera types are then suggested for these based on their ability to fulfil these criteria. 

Key learning objectives:

  • The relative sensitivity of different camera technologies
  • Comparisons of field of view and how this can be matched to the microscope
  • Camera speed on paper versus what is required for running cameras in high speed imaging applications
  • sCMOS Rolling and Global shutter operation for high speed imaging  
  • Techniques to boost imaging speed and limitations to these approaches
  • Important criteria for common imaging applications, and suitable cameras

Find out more about our other microscopy training sessions offered through Andor's Complete Microscopy Training Course.

I am Dr. Allan Mullan, a product specialist for microscopy cameras at Andor, and I'd like to welcome you to lesson four in the Andor Microscopy School. So, this is the second of two lessons about microscopy cameras and the role they play within microscopy imaging. In the previous module about cameras, we looked at the different types of camera technology available and the important parameters from a camera's perspective to do with imaging. So, we were looking at the sensitivity, the field of view that they can provide, and the speed.

In this module, we're going to go on really to compare those different camera technologies using those parameters that we've discussed, and then see what we can do with that towards working out what cameras are more likely to be suitable for different applications. For people that are interested in some of the more advanced camera features, triggering, and features of EMCCD cameras and rolling and global shutters, binning, that's something that we can probably cover in a separate module. But if you want to find out more about those technical aspects, there are a number of technical articles that cover an A to Z of technical material in the Andor Learning Center.

It's important to note that no one camera does everything for everybody or suits every specific application all the time. So, there's a lot of interest in the latest back-illuminated sCMOS cameras, and indeed, from a technical point of view, they're superb: they have excellent sensitivity, provide wide fields of view, low noise, and are high speed with good dynamic range. However, since there's such a range of different microscopy techniques used for different applications, they're not always what you need. So, it's important to remember that sometimes a low-cost color CCD camera is perfect because you just don't need the level of sensitivity that these high-end scientific cameras provide. But other times, you do need a high-end, deep-cooled EMCCD camera because only those models will give you sufficient sensitivity to image what you need to.

If we briefly recap the three primary parameters for microscopy imaging cameras, they were the sensitivity, being able to detect the signals that we are working with; having a field of view that is enough for the images and the image data that we're working with; and then the speed of the camera: if it's a dynamic cell process, we need something that is sufficiently fast to resolve that temporal information. Other times, we may have very long exposure applications where there are small changes in intensity over a very long period of time.

And these different applications will all prioritize different requirements. But the first thing we want to look at is how these different camera technologies compare in terms of sensitivity, because that's a key parameter for most applications and allows us to work out whether a camera is going to be suitable or not. As we have mentioned already, different experiments operate under quite different light regimes. That is to say, if we're looking at, say, bright field at low magnification, maybe on fixed slides or cells, light is going to be plentiful, so sensitivity is maybe not quite so important.

But for many fluorescence microscopy techniques and applications, light is limited because, by performing fluorescence microscopy and looking at specific wavelengths of light, we've greatly reduced the amount of light available. Similarly, with luminescence, we have an even weaker signal than fluorescence, and we're trying to capture that weak signal over a long period of time against a very low background level.

In addition to all of this, especially when we're considering common fluorescence microscopy imaging, we're also trying to maintain the accurate cell physiology and potentially study some quite dynamic processes. So, to try and avoid damage to the cells that we're looking at, we want to try and minimize the illumination power that we're exposing the cells to and the effects that that may have on the physiology that we're trying to look at. If we're looking at dynamic processes, we also are dealing with quite short exposures, which are, again, reducing the amount of signal that is available that the camera can detect.

We've mentioned that we're only looking at specific emission wavelengths, not a broad range of light wavelengths. If we're dealing with thicker samples, such as organoids, thicker tissues, and model organisms, there's also going to be absorption and scattering of the signal. And if we want to study the fate of individual cells over time, looking at cell division and replication and developmental studies, we want to keep the illumination and exposures low so that we can study those cells over a long period of time.

And the techniques that we use to do these studies are often about controlling light. With optical sectioning approaches such as spinning disk confocal, for example, we have pinholes, so we've greatly reduced the amount of light getting through in order to gain improved resolution and clarity in our image. The same applies to techniques such as TIRF and light sheet microscopy, or SIM, structured illumination microscopy.

We're using a range of techniques to restrict and control the light effectively and cut down the unwanted background fluorescence that we would otherwise have. Whenever we consider the labeling, we have to consider the abundance of label that is going to generate the signal. For a virology application, say, if you have a directly immuno-labeled capsid, you're only going to have a small amount of signal. Whereas if we're looking at components of a cell which are in abundance, and we have a strong label, a lot of label attached, then you are going to get a much stronger signal. So, depending on what experiments we're doing, you can see already that we have quite a range in the sensitivity that any camera would need to provide.

When we're comparing the sensitivity of the different cameras, we're primarily interested in comparing the signal-to-noise ratios of those cameras, which we covered in the previous module. And the first of those factors is looking at the quantum efficiency, and that being the ability of those sensors to convert the signal in the form of photons into electrons. And in this graph, we see a range of different profiles of the QE responses for some of the different camera technologies that we would be dealing with. The first of these being an example of a Compact Monochrome CMOS camera. If this was a color camera, it would have separate blue, green, and red profiles and that would further reduce the efficiency of that camera because light would have to hit the corresponding pixel that is responsive to that wavelength.

In terms of the sCMOS cameras, we can see this blue profile for the front-illuminated sCMOS. This has increased over the years, reaching a peak of around 80%. Then, with the latest generation of sCMOS cameras, which have been around for the last three or four years, back-illumination boosted the QE response up to a peak at a similar level to the CCD and EMCCD cameras. You'll also note, especially with the sCMOS cameras, that the maximum QE is centered around the visible wavelengths, and for good reason: this is where the majority of the fluorophores and dyes that we're using for measurements lie, within this range of high-400s to, say, 700 nanometers.

But of course, with the back-illuminated CCD and EMCCD cameras, you also have a range of sensor coatings that extend the standard QE response down into the blue and UV regions, and also, probably of more biological interest, into the near-infrared region, where you can take advantage of less scattering and less damage to biological tissues at those longer wavelengths. But currently, sCMOS cameras are generally optimized within the visible region.

The second component that feeds into that signal-to-noise ratio and allows us to make the comparisons are the noise components of the camera, which, to recap, are the read noise, which is a main component, especially in short exposures. But then, if we start to extend the exposure time for certain applications, then the dark current also becomes very important.

Whenever we look at the cameras, even the quite entry-level microscopy cameras can look to have some quite good specs on paper. Here, from some examples, we see the read noise falling within a range from under 10 electrons right down to 2, and in some cases even less than that. But what's often left out of the spec sheets for those cameras is the dark current component. That's normally because those cameras tend to be passively cooled, and when you don't cool a camera, dark current can build up to quite high levels. So, this isn't a misprint: these are actual values of dark current that those kinds of cameras can have.
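To see why an uncooled camera's dark current matters once exposures get longer, we can sketch the combined noise in a few lines of Python. The read noise and dark current figures here are hypothetical, chosen only to illustrate the contrast between a passively cooled entry-level camera and a cooled scientific one:

```python
import math

def total_noise(read_noise_e, dark_current_e_s, exposure_s):
    """Total background noise in electrons RMS: read noise and the shot noise
    of the accumulated dark signal add in quadrature."""
    dark_electrons = dark_current_e_s * exposure_s
    return math.sqrt(read_noise_e ** 2 + dark_electrons)

# Hypothetical figures: uncooled entry-level CMOS vs a cooled scientific camera
for label, read, dark in [("uncooled CMOS", 2.0, 10.0), ("cooled sCMOS", 1.6, 0.2)]:
    for t in (0.01, 1.0, 60.0):
        print(f"{label:>13}, {t:>5}s: {total_noise(read, dark, t):5.1f} e- noise")
```

At a 10-millisecond exposure the two cameras are nearly identical; by 60 seconds the uncooled camera's noise floor is dominated entirely by dark current.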

So, in the early days of digital microscopy, along with much higher read noise, lower QE, and higher dark current, you were faced with quite a few challenges with those cameras. Even these entry-level models have now got quite a lot better than they initially were. But this dark current can still remain a problem if you have to extend exposures beyond very short ones. For the scientific-grade microscopy cameras, starting with the front-illuminated sCMOS cameras, these have very low read noise, as you can see here.

And the dark current, compared to the previous cameras I've mentioned, is often very low. Then moving on to the latest generation of back-illuminated sCMOS cameras, such as the Sona models, you can see that, again, read noise is very low, slightly higher than the previous generation of sCMOS cameras in most conditions. And the dark current is also low, much lower than the basic microscopy cameras.

Whenever we look then at the back-illuminated CCD and EMCCD cameras, like the iXon Life and iKon-M models that we've mentioned, we can see that it's really the dark current that stands out as being particularly low. It can be 100 or 1,000-fold lower than an sCMOS camera. So, even with a camera like this Sona sCMOS, cooling it drives the dark current down as low as it can go, but with the CCD cameras it's still possible to get much lower dark current.

And that becomes very important once you extend those exposures, in applications that require exposure times beyond seconds and even into minutes, as long as 20 minutes in some cases for some bioluminescence applications. Again, just to point out, with the EMCCD cameras, it's common to refer to them as having sub-electron read noise. But in reality, as we discussed, they do have a read noise; it is the electron multiplication, amplifying the signal up so high, that essentially mitigates the read noise so that the usable signal extends well beyond the noise floor.
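That mitigation can be put into numbers. In the sketch below, the read noise, EM gain and quantum efficiency are illustrative values rather than any particular camera's specifications; the roughly √2 excess noise factor of the stochastic multiplication process is included:

```python
import math

def emccd_snr(photons, qe=0.95, read_noise=45.0, em_gain=300.0):
    """SNR for an EM-amplified pixel: the gain divides the effective read noise,
    while the stochastic multiplication scales shot noise by ~sqrt(2)."""
    signal = qe * photons
    excess = math.sqrt(2.0)
    noise = math.sqrt(excess ** 2 * signal + (read_noise / em_gain) ** 2)
    return signal / noise

print(45.0 / 300.0)   # effective read noise 0.15 e-, hence "sub-electron"
print(emccd_snr(5))   # a usable SNR from only a handful of photons
```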

So, next, we will look at how the quantum efficiency and the noise components of the cameras fit together to build up our signal-to-noise ratio comparisons. Taking those various signal and noise components of the camera, we can combine them into our signal-to-noise ratio equation and plot the results according to the number of photons per pixel, or alternatively, we could plot that as photons per unit area of the sensor. I've just left it as photons per pixel for the sake of this comparison.
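That equation can be sketched in a few lines of Python. The QE and read noise values below are hedged, spec-sheet-style numbers for illustration, not measured figures for any specific model:

```python
import math

def snr(photons, qe, read_noise_e, dark_e_s=0.0, exposure_s=0.0):
    """Per-pixel signal-to-noise ratio: photon shot noise, read noise and
    accumulated dark current add in quadrature beneath the detected signal."""
    signal = qe * photons
    noise = math.sqrt(signal + read_noise_e ** 2 + dark_e_s * exposure_s)
    return signal / noise

# Illustrative values only:
print(snr(20, qe=0.95, read_noise_e=1.6))    # back-illuminated sCMOS, dim signal
print(snr(20, qe=0.60, read_noise_e=7.0))    # compact CMOS, same dim signal
print(snr(2000, qe=0.60, read_noise_e=7.0))  # compact CMOS with abundant light
```

Note that the same compact CMOS that struggles at 20 photons per pixel does perfectly well at 2,000.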

And in terms of signal to noise, the higher the signal-to-noise ratio, the better. If we only had a signal-to-noise ratio of one, which we have here, that would mean half of our signal is the signal that we're after and half of the image information would be made up of noise. If we look at a light regime that you may have for fluorescence microscopy, just within this region here, we can see the example of a back-illuminated sCMOS camera giving us a very high signal-to-noise ratio. And of course, we then have a nice, sharp, clear image with a lot of detail and high contrast.

But under these circumstances, if we had been using the Compact CMOS camera, we don't get a good signal-to-noise ratio at all; it is actually very close to one. As a result, in this example, we can see a grainy image, and it's quite difficult to pick out the structures and the wanted signal information against the background noise. But whenever light is more abundant, we can see that the Compact CMOS cameras are capable of providing a very good signal-to-noise ratio, so they are going to be suitable.

And the back-illuminated sCMOS cameras are also giving us this high signal-to-noise ratio under slightly more demanding conditions. Then ultimately, thanks to the EM technology that EMCCD cameras have, whenever it really comes down to these very low levels of light, the EMCCD cameras are still giving us the better image. And again, remember that those cameras can operate with very small numbers of photons per pixel, which may not come across as effectively here as it does in real life.

The next thing we can look at and compare is the field of view that the cameras can provide. There's a really overwhelming range of sensor formats and pixel sizes, which combined together result in many different sensor sizes. So, we'll just have a quick look at some key examples. For front-illuminated sCMOS, we have many cameras which have this standard 4.2-megapixel format with a 6.5-micron pixel size. So, this fits in well with the different microscopes that are available. And again, that's seen with the back-illuminated sCMOS cameras, with cameras with that format.

And also, the first of those came out with a larger pixel, which gave more priority to photon collection efficiency, but also resulted in a much larger sensor size. EMCCD and CCD cameras are also available in that format, although with different pixel sizes. A smaller sensor format, such as you would see with many EMCCD cameras, would be much smaller than this common 4.2-megapixel format that we see especially in the sCMOS cameras. But we do also have some CCD and EMCCD cameras within that sensor format size of about 18.8 millimeters diagonal.
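The diagonal figures quoted here follow directly from pixel count and pixel pitch; a quick sketch, taking the commonly published sensor geometries as assumptions:

```python
import math

def sensor_diagonal_mm(width_px, height_px, pixel_um):
    """Sensor diagonal in millimetres from pixel count and pixel pitch."""
    w_mm = width_px * pixel_um / 1000.0
    h_mm = height_px * pixel_um / 1000.0
    return math.hypot(w_mm, h_mm)

# The common 4.2-megapixel sCMOS format with 6.5 µm pixels:
print(sensor_diagonal_mm(2048, 2048, 6.5))   # ~18.8 mm
# The same 4.2-megapixel count with larger 11 µm pixels gives a much bigger sensor:
print(sensor_diagonal_mm(2048, 2048, 11.0))  # ~31.9 mm
```

This is why cameras sharing a megapixel count can still differ greatly in field of view: the pixel pitch sets the physical sensor size.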

The larger format sCMOS cameras, with the Sona 4.2B-11 model being the largest of those for microscopy applications, give you this massive 32-millimeter diagonal image. So, if you compare that back against these other formats, you can see that you're able to image a much larger area. There are even larger format CCD cameras, such as this iKon-L model, which is bigger again. And there are others for astronomy, much larger again, which are really not suitable for microscopy.

But importantly, what we need to do is match those to what's available through the microscope. And there are two things we need to look at. We need the pixel size to achieve the full resolution of the microscope for the objective that we're working with. And we also need a sensor size which makes the best use of the field of view of the microscope port and objective combination that we have. We want to do that without any darkening effects from non-uniform illumination towards the edges of the image, which we call vignetting.
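The pixel-size half of that matching can be estimated with the Nyquist criterion: at least two pixels across the smallest resolvable feature as projected onto the sensor. A sketch using the Rayleigh resolution formula, with example objectives chosen purely for illustration:

```python
def max_pixel_um(magnification, na, wavelength_nm=510.0):
    """Largest camera pixel (µm) that still Nyquist-samples the objective.
    Rayleigh resolution at the sample is 0.61*lambda/NA; projected onto the
    sensor it spans magnification times that, and Nyquist needs 2 pixels across."""
    d_sample_um = 0.61 * (wavelength_nm / 1000.0) / na
    return d_sample_um * magnification / 2.0

print(max_pixel_um(60, 1.4))   # ~6.7 µm: a 6.5 µm sCMOS pixel just fits
print(max_pixel_um(20, 0.8))   # ~3.9 µm: at low magnification, smaller pixels needed
```

This is why smaller pixels help maintain the full resolution of the microscope at lower magnifications.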

So, if we have one of those smaller format sensors, we don't make the full use of the microscope field of view that we could have. And again, then, if we have a very large sensor area, we're missing out on some of the sensor area that we're actually paying for because the larger the sensor size, the more it's going to cost. And of course, with that other standard sensor format that we had, this is pretty much a standard fit for many microscope ports.

To summarize for field of view, microscopy cameras are generally designed so that they will match the microscope port size and the objectives that people would typically be using. With that said, in general, sCMOS cameras are going to provide options to allow for much wider fields of view, whatever the conditions may be. The sCMOS cameras will also have smaller pixels, allowing you to maintain the resolution of the microscope at lower magnifications. CCD cameras and EMCCD cameras are available with large fields of view.

But certainly, in terms of the CCD cameras, the use of those large fields of view is limited due to the lower speeds that they will provide, which we'll come to shortly. Modern microscopes and software, however, also allow you to do image stitching, where you can combine multiple small images into a much larger single montage image. So, that's another solution that some people may have to allow for wide fields of view over something like a zebrafish or a larger model organism that they may be studying.

You can, of course, use additional magnification to reduce or increase the projected image size to fill the area available through the microscope and the objective used. That will also affect the effective pixel size: if you add magnification, that will reduce the effective pixel size, and you can also demagnify to increase it. These will also impact the field of view. Additional magnification will reduce your field of view, while demagnifying will increase it. However, bear in mind that you're adding an additional optical element to the system, so you may lose some small percentage of sensitivity by adding that extra magnification.

Whenever it comes to talking about the imaging speed or the frame rate performance of a camera, there are a number of things that we have to carefully consider. For these comparisons, I haven't included the Compact CMOS cameras, simply because there's such a wide range of performance among them, and it's unclear whether they can actually achieve those values under real conditions. In general, from what we've discussed previously, we know that sCMOS cameras are certainly capable of much higher frame rates for the same field of view compared to CCD and EMCCD cameras.

We can see here a comparison considering two regions of interest: 1024 by 1024 pixels, and a smaller region of interest of 512 by 512. With the sCMOS cameras that we have here, the Zyla and Sona camera models, at 1024 by 1024, shown in blue, we can easily get over 100 to 200 frames per second. And we can reduce the region further to get up to 400 frames per second without making any other compromises to our images at all.

In terms of the EMCCD cameras, you can see that they offer much higher speed performance than the equivalent CCD camera. And by using optical cropping, you can boost the iXon Ultra here up to very high speeds, just short of 100 frames per second at 512 by 512. So, the EMCCD cameras are certainly capable of providing high frame rates.

Speed can be improved as well; there are a number of things that we can do. We can crop the sensor further. These sCMOS cameras are already cropped to allow comparison with the generally smaller CCD and EMCCD sensors, and the only compromise we're really making there is having less field of view. We can also do things such as pixel binning, where we combine pixels into larger pixels.

And certainly for CCD cameras, we can take advantage of much higher speeds, with the downside that by grouping those smaller pixels together, in groups of four or even more, we're losing resolution. With sCMOS, you will see some advantage in speed through pixel binning, but certainly not as much as with CCD, because the readout is row by row with sCMOS as opposed to pixel by pixel. Another thing you typically see with sCMOS cameras is reducing the bit depth of the image. So, for high dynamic range, we might be talking about a 16-bit image, and for a high-speed mode, we might be reducing that to 12-bit, and in some cases, even lower than that.
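As a minimal sketch of what 2x2 binning does to the image data (on a CCD the summing happens on-chip before readout, so read noise is paid only once per super-pixel; here we simply mimic the sums):

```python
def bin2x2(image):
    """Sum each 2x2 block of pixel values into one larger 'super-pixel'.
    Image dimensions are assumed to be even."""
    return [[image[y][x] + image[y][x + 1] + image[y + 1][x] + image[y + 1][x + 1]
             for x in range(0, len(image[0]), 2)]
            for y in range(0, len(image), 2)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(bin2x2(img))   # [[14, 22], [46, 54]]: a quarter of the pixels, 4x the signal each
```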

Speed is not only important for dynamic processes occurring within the cell; it's also important from a throughput point of view for imaging large samples. If we look at a relatively high frame rate of 40 frames per second, that works out to exposures of about 25 milliseconds in length. And if we run up to 100 frames per second, that drops us down to 10-millisecond exposures. So, those are very short exposures for a camera to collect photons within.

But that being said, even for what you would consider relatively fast applications, being able to hit 40 frames per second at the full sensor size of an sCMOS camera is very good. And of course, you can reduce the region of interest and increase the frame rate performance. But often, we only require 100 to 250 millisecond exposures to capture enough signal, and that equates to just 4 to 10 frames per second.

But there are some cases where we may be imaging highly dynamic samples and processes such as for calcium imaging, where we're seeing a rapid change from a low to high signal intensity, and at a very short period of time. Or else we may be looking at objects moving across a field of view at high speeds or things like flagella or parts of cells that may be moving.

In addition to the frame rate performance of the camera, there are other things going on as well. The camera itself has certain processing overheads. And importantly, the data connection that's used may limit the actual real-world speed that the camera will be able to run at on a continuous basis. So, even the same sensor running from a CameraLink, CoaXPress, or other high-speed interface will allow for much higher speeds than if it was connected through an earlier-generation, but still practical, USB 3.0 connection.

As I mentioned, if we look at the impact of reducing the exposure time to allow for high frame rates, we can see that at 10 milliseconds, compared to 25 milliseconds, the camera has a much shorter period of time in which to collect photons to generate a good signal, and hence a good signal-to-noise ratio. In fact, reducing from 25 milliseconds to 10 milliseconds means that you're going to have 60% fewer photons to play with.
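Those figures are simple arithmetic on the frame time, worth sketching because it makes the trade-off concrete:

```python
def max_exposure_ms(fps):
    """Upper bound on exposure at a given frame rate (readout overheads ignored)."""
    return 1000.0 / fps

e40, e100 = max_exposure_ms(40), max_exposure_ms(100)
print(e40, e100)                                       # 25.0 and 10.0 ms
print(f"{1 - e100 / e40:.0%} fewer photons at 100 fps than at 40 fps")   # 60%
```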

And on top of this, whenever cameras run in these high-speed modes, they're typically boosting the speed at which the sensors run, and that generally imparts higher noise to those cameras. So, that is going to increase the noise floor of the camera. Adding together these two factors, the smaller signal and the higher background noise, means that, in practice, we might not be able to run the sensor at that high speed for the application that we want.

The sensor, in running in some of these high-speed modes, may also not have a very good dynamic range and it may not be able to capture the increases in signal intensity effectively. For example, if we run a camera at an 8-bit mode, we only have 256 levels within that signal, and that may not be enough depending on what we are imaging. With sCMOS cameras, one of the things that is important to look at in a bit more detail in terms of speed is to do with rolling and global shutter.

Generally, we will see sCMOS cameras having a rolling shutter mode of operation. It's called this because readout proceeds in a line-by-line fashion, like a rolling wave through the sensor. And there are good advantages to this mode: it means lower noise, and it allows a higher ultimate frame rate for the camera. But since each line is exposed before the next, not all of the image is captured at the same point in time. So, it is not effectively a snapshot; there is a time delay between each row and the next.

For some sensors, this operates from top to bottom; for others, the rolling shutter proceeds outwards from the centre. This is fine for a static image, but whenever we apply it to a moving subject, you can end up with weird optical effects happening due to these time delays during the readout process.
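The size of those time delays is easy to estimate: the first and last rows read out one line time apart per row. The line time below is a hypothetical figure rather than a specific camera's, but it shows the scale, and why centre-out readout halves the skew:

```python
def rolling_shutter_skew_ms(rows, line_time_us, from_centre=False):
    """Time offset (ms) between the first and last rows read out.
    Centre-out readout halves the skew: the two sensor halves read in parallel."""
    effective_rows = rows / 2 if from_centre else rows
    return effective_rows * line_time_us / 1000.0

# Hypothetical 2048-row sensor with a 10 µs line time:
print(rolling_shutter_skew_ms(2048, 10))                    # 20.48 ms top-to-bottom
print(rolling_shutter_skew_ms(2048, 10, from_centre=True))  # 10.24 ms centre-out
```

A skew of tens of milliseconds is negligible for a static slide but quite visible on a fast-moving subject.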

For sCMOS cameras that can operate in global shutter mode, this works like a snapshot, which older Interline CCD and other cameras would have had. It allows all pixels to expose at the same time, and that avoids any temporal distortion occurring across the image, which is important if there's any motion happening. But this mode of operation has a lower frame rate and also a higher read noise.

So, in global shutter, the sensor will continually gather signal over time as it exposes, and then it will read that out. Whenever it comes down to a moving subject, you're able to see that it captures the motion effectively compared to rolling shutter. Of course, rolling shutter generally works quite well for most applications, but it is an important consideration that we have to make whenever we're talking about high-speed imaging.

There's another mode that you'll sometimes come across with sCMOS cameras, sometimes called simulated global shutter or global reset. So, what is that? And does it avoid the issues that you would see with normal rolling shutter? Well, in true global shutter, exposure happens continuously. So, throughout the cycle of the camera, there's no dead time; it's always exposing, collecting signal throughout the cycle. It's also easy to synchronize exposure starts and stops with illumination sources and other parts of the equipment you may be using. And the benefit of that is it eliminates any motion artifacts.

The simulated global shutter mode, by contrast, is still effectively operating as a rolling shutter. But what it does is clear the sensor of charge before it starts to gather signal during the exposure, and that works with the 'fire all' signal. This does help you synchronize those cameras with different light sources, and it may reduce motion artifacts to some degree. But you can see here that, compared to true global shutter, it is really not doing the same job.

During the cycle time, it's only actually capturing signal for 50% of the time, and reading out for the other 50%. So, it still has that row-by-row readout, which means it does not totally remove motion artifact effects. And although it can be synchronized, that mode is still quite difficult to use. For cameras that only operate with a rolling shutter, you will see this mode offered, but it does not truly replace global shutter for those applications that really do need it.

So, to summarize for camera speed: a camera that looks fast on a specification sheet may offer several hundred, or even 1,000, frames per second. In practice, it may not always work quite that well or be able to run that fast in a given high-speed imaging application. Importantly, for these applications, we need high sensitivity to capture the limited number of photons that are going to be present within that very short exposure time. And in addition to that, we need to keep control of the background noise to allow for a good signal-to-noise ratio. We don't want the background noise of the camera to creep up too high.

We can make some compromises to obtain high speeds, even with cameras that may not have very impressive headline high-speed performance. Often, reducing the region of interest will help you run the camera at higher speeds. And for sCMOS, remember that the readout of those cameras is per row, so you can reduce the region of interest on a height basis: you still have the full width of the image while improving the speed of those cameras, which is already quite high.
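A sketch of why cropping height helps: with row-by-row readout, the frame time is essentially one line time per row, so halving the rows roughly doubles the frame rate. The 10 µs line time here is a hypothetical figure for illustration:

```python
def rolling_shutter_fps(rows, line_time_us):
    """Maximum frame rate when readout dominates: one line time per row.
    Reducing the ROI height raises speed; reducing width alone does not."""
    return 1_000_000.0 / (rows * line_time_us)

for rows in (2048, 1024, 512):
    print(f"{rows} rows -> {rolling_shutter_fps(rows, 10):.0f} fps")
```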

For some applications, we may be able to use pixel binning if resolution is not so important. It could be some fluorescence correlation spectroscopy kind of application where we want to run really fast but can sometimes sacrifice some spatial resolution. This is particularly useful for the CCD cameras, which are slightly slower in speed. We can also use binning to improve the signal-to-noise ratio, but of course, we're losing that spatial resolution.

It's important to be aware of image artifacts that could happen during high-speed imaging. If you have objects that are moving particularly fast within the field of view, then you do have to be aware of this. So, instead of the standard rolling shutter mode of operation of sCMOS cameras, a global shutter, which provides that snapshot and avoids any temporal distortions happening, may sometimes be beneficial. Note also the dynamic range that the camera is running at to achieve high speeds.

Typically, with sCMOS cameras, you will see high-speed mode being in 12-bit, and that is seen as generally being okay for most things that you may need to do. Some cameras do offer lower 8-bit modes, which is fine for displays such as monitors and so on. But whenever you're talking about making quantitative measurements, this is not so good. And then you also have to factor in that if you have a high noise floor, that will be really reducing the dynamic range that you have to work with for your image.
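The grey-level counts, and the way a raised noise floor eats into dynamic range regardless of ADC bit depth, can both be put in numbers. The full-well capacity below is hypothetical, purely for illustration:

```python
import math

def grey_levels(bits):
    """Number of distinct digital levels an ADC of the given bit depth provides."""
    return 2 ** bits

def usable_dynamic_range_bits(full_well_e, noise_floor_e):
    """Ratio of the largest to smallest distinguishable signal, expressed in bits."""
    return math.log2(full_well_e / noise_floor_e)

print(grey_levels(8), grey_levels(12), grey_levels(16))   # 256 4096 65536
# Hypothetical 45,000 e- full well: a higher noise floor reduces the usable
# dynamic range no matter how many ADC bits are used.
print(usable_dynamic_range_bits(45000, 1.6))   # ~14.8 bits
print(usable_dynamic_range_bits(45000, 6.0))   # ~12.9 bits
```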

So, moving on to selecting the right camera for your application. There is a wide range of different technologies around, and a lot of different marketing messages out there claiming that various cameras are the most sensitive detector technology, which seemingly contradicts statements elsewhere. Others make more general claims, such as the camera being ideal for fluorescence.

So, what we can do now is look at some application examples and, using what we've discussed previously, make some practical and logical determinations of which camera is going to be the most suitable for those different applications. For histology imaging applications, typically we're working with stained, fixed tissue sections that have been prepared, observing them under low magnifications and screening the samples for any signs of anomalies, for certain cancers, for example.

And because of the type of microscopy we're using there, we're going to end up with very high levels of light. The camera requirement then, given those high light levels, is not one of the very high sensitivity, deep cooled cameras. A color model may even be useful, providing some extra flexibility in recording exactly what we've seen on the screen with no further manipulation of the images required. And since we're working at low magnifications, we need small pixels to preserve the image detail there.

And with fixed slides, we have no need for a camera that can do high frame rates. The only exception would be if some automation were involved: for a high-throughput screening application, you may want something with a high frame rate. With those considerations in mind, a compact color CMOS camera will probably be perfect for these requirements. An added benefit is that these come at a much lower cost than some of the higher-value cameras, which we really don't need in this case.

For many people doing microscopy, widefield fluorescence is the mainstay of their imaging. It covers a very wide range of applications and experiments, which means there is a wide range in the signal levels present. We could be looking at things such as fixed, stained slides, or at live-cell imaging studies. The camera requirements then are generally to provide a balanced performance.

For live cells, we additionally need good sensitivity to allow us to keep the exposure levels low and maintain the viability and function of those cells over the time we're trying to draw measurements and information from the imaging experiments. We also want, where we can, to view as large an area as possible, so the camera will need to have a wide field of view. And typically, we may be working at 60x or 40x magnification, which means we need a suitable pixel size to maintain the resolution.
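As a sketch of how pixel size pairs with magnification: Nyquist sampling requires at least two pixels across the optical resolution element as projected onto the camera. The objective parameters below are typical assumed values, not vendor specifications:

```python
def max_pixel_size_um(magnification: float, numerical_aperture: float,
                      wavelength_um: float = 0.51) -> float:
    """Largest camera pixel (um) that still satisfies Nyquist sampling.

    Rayleigh lateral resolution r = 0.61 * lambda / NA at the sample;
    Nyquist requires at least 2 pixels per r at the camera, so the
    pixel must be no larger than magnification * r / 2.
    """
    resolution_um = 0.61 * wavelength_um / numerical_aperture
    return magnification * resolution_um / 2.0

# Assumed, typical objectives for illustration (not vendor specs).
for mag, na in ((40, 1.15), (60, 1.40), (100, 1.45)):
    print(f"{mag}x / NA {na}: pixel <= {max_pixel_size_um(mag, na):.1f} um")
```

Under these assumptions, a 6.5 um pixel, which is common on sCMOS sensors, sits close to the Nyquist limit for a 60x/1.4 objective at green emission wavelengths.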

And if we're looking at dynamic processes, which could involve wide signal variation from low-level through to high-level information, such as for neurons, and at imaging things at high speed, we need that capacity within the type of camera that we use. The cameras typically used for fluorescence microscopy are sCMOS cameras, the most common sensor type, and for good reason: they meet these requirements very well. Of these, back-illuminated models will provide a very well-balanced performance, but you can also consider the previous generations of sCMOS cameras as they are highly capable too.

When we move to confocal fluorescence microscopy, we're starting to operate at lower light levels than we would for typical widefield fluorescence applications. Although we do get a sharper image from the rejection of out-of-focus light, by using a spinning disc confocal, for example, this results in reduced light levels. The camera requirements then lean towards higher sensitivity to work at these lower light levels. And because we may want to be running live-cell experiments, we want to maintain low exposures and low illumination intensities, which keeps the cells under study viable.

We could be working with small organisms, say bacteria or yeast, right up to larger model organisms, from C. elegans through to zebrafish. For those different studies, a wide field of view is generally something we would be interested in. And if we're starting to study more dynamic cellular processes, then of course we need the ability to reach moderate frame rates. Camera recommendations for this are then a bit wider: we would be looking at potentially sCMOS and/or EMCCD cameras. Sometimes we still need EMCCD cameras to provide the highest possible sensitivity for confocal.

But other times, we can get away with using the sCMOS cameras, which provide those wider fields of view and are more natively suited to the lower 40x and 60x magnifications that could be used. For these reasons, you can sometimes see both camera technologies on one system. For example, the Andor Dragonfly spinning disc confocal system has EMCCD and sCMOS cameras, so users can select between the different technologies as they need them for their different imaging requirements.

Ratiometric imaging includes, for example, calcium imaging, where we're looking at things like calcium sparks, using a range of different chemical indicators or some of the genetically encoded calcium indicators. Sometimes you may use just a single wavelength, but often we want to take advantage of a dual-wavelength ratiometric style of experiment. These have traditionally been used for neurons especially, but also for cardiomyocytes and other cell types in various studies.

The camera requirements for these types of applications are the ability to capture these dynamic changes in signal intensity. Some of these changes may be quite small, so you need to be able to quantify them with a high degree of accuracy. For these applications, sCMOS cameras tend to be very suitable because they allow for high speeds and can capture the wide dynamic ranges of the signals that may be involved.
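A minimal sketch of the dual-wavelength measurement itself: the ratio of the two background-subtracted channels (e.g. the 340/380 nm excitation pair used with Fura-2-style indicators) cancels out dye concentration and illumination differences, leaving a value that tracks calcium concentration. All intensity values below are assumed, illustrative numbers:

```python
def ratiometric_value(i_340: float, i_380: float,
                      bg_340: float = 0.0, bg_380: float = 0.0) -> float:
    """Background-subtracted 340/380 excitation ratio (Fura-2 style).

    Ratioing removes factors common to both channels (dye amount,
    illumination, path length), so the result reflects the indicator's
    calcium-dependent spectral shift rather than raw brightness.
    """
    return (i_340 - bg_340) / (i_380 - bg_380)

# Assumed illustrative intensities from two interleaved exposures.
rest = ratiometric_value(1200.0, 2400.0, bg_340=200.0, bg_380=200.0)
transient = ratiometric_value(2600.0, 1800.0, bg_340=200.0, bg_380=200.0)
print(f"resting ratio: {rest:.2f}, during calcium transient: {transient:.2f}")
```

The camera's job in this scheme is to deliver both channels quickly and with enough dynamic range that small shifts in the ratio are quantifiable.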

Sometimes the global shutter, which we mentioned earlier, can be useful. And that can be if we have a wide field of view and we're trying to get information with accurate timing across that whole field of view. Also, sometimes EMCCD cameras can be very useful for these types of studies because they can work at these very short exposures, albeit at a fairly small field of view, and it allows you to pick up small variations in intensity of very weak signals.

Super-resolution microscopy has revolutionized light microscopy and research into cell biology. This is because it allows us to view processes and structures that would otherwise be hidden below the classical diffraction limit of light, and that covers many processes and structures within the cell. The super-resolution techniques, of which there are now many, such as STORM, PALM, STED, SRRF, SIM, DNA-PAINT, and more, may be based on localization microscopy or on other optical approaches.

Camera requirements for these are generally that we want to keep our laser powers low to maintain cell function, which means the cameras need high sensitivity to operate at very short exposures. We also want good contrast and high signal-to-noise, so we need cameras with very low noise levels. Small or medium-sized pixels can be helpful for improving sampling, although we don't want a pixel so small that it loses too much photon-gathering ability and compromises the signal-to-noise ratio.

For some techniques, such as SRRF-Stream, we may want to capture multiple frames very quickly, so we need a camera capable of high-speed imaging. Camera recommendations for super-resolution microscopy would then be EMCCD and/or sCMOS cameras; both are widely used, and it just depends on the balance of sensitivity versus field of view that is required. A further advantage of the sCMOS cameras, provided they offer enough sensitivity, is that you get that wide field of view as well as the improved sampling that their smaller pixels can sometimes provide.

Bioluminescence imaging is slightly different from the other, fluorescence-based techniques we've largely covered so far. Luminescence-based genetic reporter systems allow for exceptional sensitivity, much higher than is possible using fluorescence-based techniques, but they produce very low signal levels over time relative to fluorescence. Importantly, achieving a high signal-to-noise ratio requires a very low background level.

The camera requirements for imaging well in bioluminescence applications start with high sensitivity, because luminescence reactions themselves generate a very small number of photons, so the camera needs to be sensitive enough to detect them. And with that low level of signal, the signal takes time to build up sufficiently, so the camera needs a very low dark current to be able to collect that signal over many minutes against a very low background noise inherent to the camera.
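To put rough numbers on why dark current dominates here: the dark signal grows linearly with exposure time and contributes shot noise proportional to its square root. The dark current values below are assumed order-of-magnitude figures for illustration, not specifications of any particular camera:

```python
import math

def dark_noise_e(dark_current_e_per_s: float, exposure_s: float) -> float:
    """Dark-signal shot noise (e- rms) accumulated over one exposure."""
    return math.sqrt(dark_current_e_per_s * exposure_s)

exposure = 300.0  # a 5-minute luminescence exposure, in seconds

# Assumed, order-of-magnitude dark currents (e-/pixel/s), not vendor specs.
cameras = {"deep-cooled CCD": 0.0001, "typical sCMOS": 0.1}
for name, dc in cameras.items():
    print(f"{name}: dark signal {dc * exposure:.2f} e-, "
          f"dark noise {dark_noise_e(dc, exposure):.2f} e- rms")
```

Under these assumptions, the sCMOS accumulates tens of electrons of dark signal over a 5-minute exposure, swamping a photon signal of only a few electrons, while the deep-cooled CCD's dark contribution remains a small fraction of an electron.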

Imaging speed, of course, is not important here because we're dealing with these long exposures. In this case, it is the deep cooled CCD cameras that are most prevalent and most suitable for these techniques, because they have the lowest possible dark current along with a high level of sensitivity. The downsides of that technology, namely the slow readout allowing only very low frame rates, do not matter here.

For whole-cell imaging, then, the deep cooled CCD cameras are perfect. But in some cases, if you're going down to the single-cell level with things like bacteria, you may need some extra sensitivity, so EMCCD cameras may sometimes be suitable for in vivo plant imaging or bacterial luminescence-based studies.

Single-molecule imaging is probably one of the most challenging and demanding areas, both in terms of the experiments and the detectors. But it's a very powerful technique that lets you study processes right down to the single-molecule level, where you can start to look at binding and other relationships between different molecules. Because single molecules sit below the diffraction limit of light, these are all diffraction-limited imaging experiments, and single molecules inherently emit very low levels of light.

The camera requirements here, because we're working at the lowest levels of light possible, prioritize high sensitivity over many of the other parameters. We're talking about the highest possible sensitivity from the detector, which includes having larger pixels and deep sensor cooling to minimize background noise. And if we're looking at single-molecule trafficking studies, we do need moderate frame rate capability and field of view for practical purposes.

EMCCD remains the detector of choice for these kinds of studies simply because it has the highest sensitivity available in any detector technology. In addition to the high QE, which sCMOS cameras also have, and in some cases the large pixels, it's the ability to use EM gain. Whenever we make single-molecule comparisons between EMCCD cameras and back-illuminated sCMOS cameras, it is possible to detect weaker signals, and more signals of interest, using the EMCCD technology.
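A simple signal-to-noise sketch shows why. With high EM gain, the effective read noise becomes negligible, at the cost of an excess noise factor of sqrt(2) from the stochastic multiplication, which doubles the effective shot-noise variance; a low-read-noise back-illuminated sCMOS pulls ahead once more than a handful of photons arrive per pixel. All parameters below (QE, read noise, gain) are assumed representative values, not vendor specifications:

```python
import math

def snr_scmos(photons: float, qe: float = 0.95, read_noise: float = 1.6) -> float:
    """Shot-noise plus read-noise SNR for a back-illuminated sCMOS pixel."""
    s = qe * photons
    return s / math.sqrt(s + read_noise**2)

def snr_emccd(photons: float, qe: float = 0.95, read_noise: float = 50.0,
              em_gain: float = 300.0) -> float:
    """EMCCD SNR: EM gain divides the read noise to a negligible level,
    but stochastic multiplication adds an excess noise factor of sqrt(2),
    i.e. the shot-noise variance is doubled."""
    s = qe * photons
    return s / math.sqrt(2.0 * s + (read_noise / em_gain) ** 2)

for n in (2, 10, 100):
    print(f"{n:>3} photons: EMCCD SNR {snr_emccd(n):.2f}, "
          f"sCMOS SNR {snr_scmos(n):.2f}")
```

Under these assumptions, EMCCD gives the better SNR only at the very lowest photon counts, which matches its continued dominance in single-molecule work while sCMOS takes over at brighter signal levels.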

For some experiments, however, if we're dealing with brighter signals, quantum dots being one potential example, it may be possible to use back-illuminated sCMOS cameras. Where we can use them, that is quite useful because we can make use of the wider field of view those cameras offer.

To conclude and summarize some of the key points of lesson four, we can see that camera technology has progressed very far in the last 10 to 20 years with sCMOS and EMCCD camera technologies. These have played a key part in new microscopy techniques such as light sheet microscopy and super-resolution, and these microscopy techniques have really helped to further our understanding of the fundamentals of cell biology, allowing us to see much more than we ever have before.

But still, as far as detectors go, no one camera does it all for every application. For some applications, such as brightfield microscopy, a compact sCMOS or CMOS camera should be perfectly capable, and of course these come at a very low price relative to some of the other camera technologies. But when we look at fluorescence microscopy research applications and working with short exposures, it's the scientific sCMOS cameras that normally prove the most suitable, which is why we see those as the dominant detector technology used today.

There are still some important niches for what would be seen as older technology, EMCCD and CCD. For single-molecule imaging especially, the sensitivity that EMCCD cameras offer is often essential. And for longer-exposure applications such as luminescence, we really still need deep cooled CCD cameras, which can have dark current values 100 to 1,000-fold lower than sCMOS cameras. That makes them suitable for luminescence studies with exposures running into the minutes, where an sCMOS camera, designed around millisecond-scale exposures, simply cannot provide a good signal-to-noise ratio.

And as we look through the parameters for different cameras, we have to be mindful and careful with the camera specifications we see, because not all of those specifications can be delivered simultaneously. For example, sometimes we can't have high speed plus low noise, or a wide field of view at high speed, so we have to make a compromise somewhere. But from what we covered earlier, we can see how to make compromises in areas that will not have an impact on the types of experiments we are going to do.

We should also be much better informed now about how to carefully match the important parameters of the camera to what we actually need for our applications. Do we really need a high-speed camera? Do we need something that's very sensitive? And how much field of view do we actually need?
