
Andor Microscopy School – Imaging and Sensors: Digital Microscopy Cameras and Sensors

Microscopy cameras play an important, and for the most part largely unseen, role in our imaging experiments. Modern microscopy covers a broad range of imaging requirements that can place quite different demands on the imaging camera. Sometimes we need to see broad fields of view across tissues; sometimes we need to see single molecules below the limit of diffraction. Biological processes themselves occur over a wide range of timescales: milliseconds for the impulse of a neuron, seconds and minutes for intracellular trafficking, while development may need to be studied over several days.

In this, the first of two modules on this topic, we cover the fundamentals of microscopy cameras. We explore the key parameters of the images we take on a microscope from a camera's perspective. Then we break down the anatomy of a modern camera. Finally, we look at the different sensor technologies that have been developed, leading up to those in use today. We address some common questions, such as how cameras work, why you would use a mono camera or a colour camera for imaging, and what back-illumination means.

This sets the background for the following lesson that compares the different camera technologies and how they suit different microscopy applications. 

Key learning objectives:

  • Understand the important imaging parameters for microscopy cameras
  • Signal to noise ratio: why is this important to our imaging?
  • Sources of noise: where does noise come from?
  • Camera anatomy: what are the main features of a modern microscopy camera?
  • What are the current sensor technologies?

Find out more about our other microscopy training sessions offered through Andor's Complete Microscopy Training Course.

Previous Lesson: Microscopy School Lesson 2 - Transmitted Light Microscopy

Next Lesson: Microscopy School Lesson 4 - Microscopy Cameras - Comparing Camera Technologies and Matching them to Applications

Hello. My name is Dr. Alan Mullan, and I am a product specialist for microscopy cameras at Andor Technology. In this module, we are going to look at microscopy cameras. At a high level, we can say that we have some kind of detector that is sensitive to light and will be able to detect light from our sample in the form of photons. These could be either transmitted through the sample or they could come from one or more fluorescent labels that we have in the sample. The detector will convert those photons into an electrical, digitized signal. The signal will be very low, perhaps on the order of even single figures of photons per pixel, so it will need to be boosted by an amplifier, and then the data is sent on to the PC, where we can view and analyze it within a specialized software package.

The agenda for this module is to look at the fundamentals of digital imaging and sensor technologies. We will look at an overview of the imaging parameters from a camera point of view, then start to look at the microscopy cameras themselves, with an overview of the key parts of a camera. Then we'll look in more detail at the sensors and the camera technologies that are used. In the following module, we will look at how we compare those camera technologies against the performance criteria discussed in this module, and then how we match them to different applications.

To start off with, in part one we're going to look at the image parameters from a very high level overview, from the camera's point of view. So let's consider what kinds of images we may be trying to capture using a camera. We may have a range of different imaging techniques, depending on what we're looking at: it could be brightfield, DIC, fluorescence, as we have here, or it could be luminescence. The scale we're looking at could go from whole organisms for developmental studies, to large areas of tissue at low magnification, to individual cells, right down to the single molecule level.

We also have different timescales to consider. We could be looking at quite dynamic processes and events within cells happening within a small number of milliseconds, through to events that happen over a much longer duration, such as developmental studies taking many hours or even a few days. And for luminescence studies, we may need to capture a very weak signal over a longer period, on the order of minutes. Then we also have different sample formats to consider: we could be using standard slides, plates, or multi-well plates, for example.

So even though there is a wide range of different types of images that we may be capturing, using a wide range of different techniques over different scales of time, and we may just want structural information or we may want to quantify things, we can rationalize this all down to a number of primary imaging parameters. The main one of these is sensitivity: simply, can we find and visualize what we're interested in? Then the field of view: how much of an area do we need to see at any one time? Then the speed: is the process something fast that needs very high imaging speeds, or is it something that happens over a longer period of time?

There are also a number of secondary criteria which are also very important, such as resolution: how much resolution do we need for what we're looking at? And the dynamic range: is there a range of low level and high level signals in the image? Different applications are going to have a bias towards one of these areas, or one area may not be particularly important. But the different camera technologies and models have their own strengths and weaknesses, so strength in one area may result in a weakness in another. Improvements in sensitivity have been one of the main drivers of camera development, and it's pretty obvious why: it is fundamental that we are able to detect the signals we're interested in and get a very good signal to noise ratio to work with.

Whenever we're talking about the signal to noise ratio, later on we will break this down into the different elements involved in the signal and also the sources of noise. A low signal to noise is represented here, where we see a golf ball partly hidden behind the noise of the grass: there is only a very small difference between the signal and the noise, which makes it very difficult to discern any detail within the image. But if we have a very good, high signal to noise, it makes it very easy to see the object. So images with a high signal to noise ratio are what we want to achieve, and that comes about by trying to maximize the signal and minimize the noise elements. We will look into those in part three.

For field of view, sometimes we might need to capture a large area. Tissue sections, for example, we may want to have a good look at for histology studies, and we're looking at very low magnification. But we also might want to look at the development of an embryo over time, such as Drosophila or zebrafish, for example. And then we might want to start looking at cell membranes, or even single molecules as diffraction limited spots, at very high magnification. We also need to consider the microscope that we're attaching the camera to: the camera needs to be able to mount to the microscope port and also match up with the field of view that's possible from the objective lens.
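As a rough guide to how sensor size and magnification set the field of view at the sample, here is a minimal Python sketch. The 13.3 mm sensor size and 60x objective are illustrative values only, and the sensor diagonal still needs to fit within the field of view supported by the microscope port.

```python
def sensor_fov_at_sample_um(sensor_width_mm: float, sensor_height_mm: float,
                            objective_mag: float, extra_mag: float = 1.0) -> tuple:
    """Field of view projected back to the sample plane for a given sensor and total magnification."""
    total_mag = objective_mag * extra_mag
    return (sensor_width_mm * 1000 / total_mag, sensor_height_mm * 1000 / total_mag)

# Example (illustrative): a 13.3 x 13.3 mm sensor behind a 60x objective, no extra magnification
print(sensor_fov_at_sample_um(13.3, 13.3, 60))  # roughly 222 x 222 micrometres at the sample
```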

And generally speaking, a wider field of view means more data is required to be transferred from the camera to the control PC, and it will take more time, as there is more information to transmit. And as we know, as we increase the magnification in microscopy, the field of view generally decreases. Whenever we're talking about cameras and the imaging speed they can provide, or what they need to do, there are two aspects to this. The first of these is the nature of the process that we want to study. Very dynamic events, such as cell signaling, really need resolution within a window of a few milliseconds in some cases, so we need to be able to run very fast.

But for the most part, much of what we're dealing with in normal cell biology doesn't need to run at those kinds of speeds. It's still a good idea to run exposures as short as we can and also to knock down the illumination intensity, because we want to minimize the effects on the cells we are looking at. Some experiments, of course, need much longer acquisitions. The signals involved in experiments using luminescence, whether this is bioluminescence from an animal system or, quite commonly, plant imaging, are very low. So we're having to capture a small amount of signal and accumulate and build that up over the course of as many as 5, 10, even 20 minutes, so there are other considerations to do with that.

The other side to this, then, is simply getting more data faster. A faster camera is really going to come into its own if we have an experiment with large datasets. It could be screening of multi-well plates, or a large sample which we may be constructing from multiple tiles and also imaging in Z to build up a 3D image. That will take some time, and a faster camera will help with that. But of course we are also limited by the wider system the camera sits in, to do with stages and so on, too. Camera speed is normally expressed in frames per second; alternatively, sometimes we see it expressed in hertz. So 10 frames per second, or 10 Hz, would work out to be a 100 millisecond exposure time that the camera would be running at.

The camera also needs to be able to capture enough signal during this exposure to allow it to run at those high speeds. A camera may be technically capable of running at a very high frame rate, but in the real world we need to remember that if we go from a 100-millisecond exposure to a 10-millisecond exposure, the number of photons captured will be correspondingly reduced. So we could be limited by how much signal we have in terms of how fast we can actually run that camera. With cameras and sensors, there are quite a lot of things we can do. We can improve the speed of the camera by cutting down the region of interest to a smaller sub-region of the sensor so that it can be read out much more quickly. Or we can use another technique, especially for CCD cameras, called binning, where we combine a number of pixels into one larger pixel, which we call a super pixel, and then sample the output from that one larger pixel to run the camera more quickly. But this will reduce the spatial resolution, because we've effectively gone from having a pixel of a certain size and scaled it up to a much larger size.
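Below is a minimal Python/NumPy sketch of the two ideas just described, cropping to a region of interest and 2x2 binning into super pixels; the function names and the simulated frame are purely illustrative.

```python
import numpy as np

def crop_roi(frame: np.ndarray, row0: int, col0: int, height: int, width: int) -> np.ndarray:
    """Return a rectangular sub-region of the sensor frame (fewer pixels to read out)."""
    return frame[row0:row0 + height, col0:col0 + width]

def bin_2x2(frame: np.ndarray) -> np.ndarray:
    """Combine each 2x2 block of pixels into one 'super pixel' by summing the charge.
    Signal per output pixel rises roughly 4x, but spatial resolution is halved in each axis."""
    h, w = frame.shape
    trimmed = frame[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Example: a simulated 2048 x 2048 frame of weak (Poisson-distributed) signal
frame = np.random.poisson(5.0, size=(2048, 2048))
print(crop_roi(frame, 512, 512, 1024, 1024).shape)  # (1024, 1024)
print(bin_2x2(frame).shape)                         # (1024, 1024)
```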

Resolution, then, in terms of resolving the details: we need to be able to resolve the image information that we're interested in. This could be structural information, where we want to see if components are co-localized to a cell membrane or a microtubule, for example. Contrast is very important in terms of resolution, as it is easier to distinguish two high contrast features than two low contrast features. We need to ensure that we are matching the resolution that the microscope itself is capable of, and for that we look to meet the Nyquist criterion to ensure that we are sufficiently sampling the image that is getting through to the camera.
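As a hedged worked example of the Nyquist sampling point, the sketch below estimates the largest camera pixel that still adequately samples a diffraction-limited widefield image, using the Abbe resolution d = lambda / (2 NA); the wavelength, NA, and magnification values are illustrative.

```python
def max_pixel_size_um(wavelength_nm: float, numerical_aperture: float, magnification: float) -> float:
    """Largest camera pixel (in micrometres) that still satisfies Nyquist sampling
    for a diffraction-limited widefield image (Abbe resolution d = lambda / (2 NA))."""
    d_sample_um = (wavelength_nm / 1000.0) / (2.0 * numerical_aperture)  # resolution at the sample
    d_camera_um = d_sample_um * magnification                            # size of that detail on the sensor
    return d_camera_um / 2.0                                             # Nyquist: at least 2 pixels per resolved distance

# Example: 520 nm emission with a 100x / 1.4 NA oil objective
print(round(max_pixel_size_um(520, 1.4, 100), 2))  # ~9.29 um, so a 6.5 um pixel samples adequately
```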

So for that, in camera terms, pixel size is probably the key parameter that we want to consider. And of course there are always compromises to these things, and in this case a smaller pixel is not necessarily an advantage; we'll cover this in more detail later on. Contrast and resolution may be combined together into a single specification called the MTF, or modulation transfer function. Sometimes this is mentioned, and other times it is not. The dynamic range is the difference between the very lowest and the highest signal level that can be in the image. A typical image can have a range of intensities within it, and for many images we will have low level information that is important to us and also brighter areas.

And in these cases, it's very important to have a camera with a wide dynamic range that's able to capture this information. Typically, cameras with low noise and large well depths are going to have a very wide dynamic range: they have the capacity to handle these large variations in signal whenever they're appropriately configured. In this example here, we can see that whenever we don't have enough dynamic range for the image, we end up with an oversaturation effect where the signal is maxed out, and we're getting no useful image information in those regions. However, whenever we have a sufficiently wide dynamic range, we can capture the low level information and also the high level information, where we can now see the detail.
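A common way to express dynamic range is the ratio of the full well depth to the read noise, quoted as a ratio, in decibels, or in bits. The short sketch below assumes illustrative values roughly in the range of a modern sCMOS camera, not the specification of any particular model.

```python
import math

def dynamic_range(full_well_e: float, read_noise_e: float) -> tuple:
    """Dynamic range as a plain ratio, in decibels, and in bits (illustrative values only)."""
    ratio = full_well_e / read_noise_e
    return ratio, 20.0 * math.log10(ratio), math.log2(ratio)

# Example: 30,000 e- well depth and 1.6 e- read noise
ratio, db, bits = dynamic_range(30000, 1.6)
print(f"{ratio:.0f}:1, {db:.1f} dB, {bits:.1f} bits")  # 18750:1, ~85.5 dB, ~14.2 bits
```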

So before we get into the details of the cameras themselves, let's take a basic overview of a modern microscopy camera. The first thing we have here on the front face of the camera is the mounting of the camera to the microscope, or whatever the optical setup is. Commonly this will be a C-mount, and then some of the larger field of view cameras and microscope configurations above 22 millimeters can use an F-mount. Additionally, there will be some mounting locations for other kinds of optical setups. These all need to be light-tight connections to avoid any light leakage, as these cameras are so sensitive that any stray light will tend to be picked up by the camera and we will see that impacting our image.

There may also be some additional mounting points, as we can see here on the camera, that allow us to secure the camera to an optical table. Some of these cameras can be anywhere between one and three kilograms in weight, so this avoids any stress on the optical train and also avoids any issues to do with vibration. Something that's sometimes overlooked is the camera window. This is a window which protects the sensor from debris and damage, so it has quite an important job. The window is also an important optical element of the system, and it needs to be able to transmit at the wavelengths of interest. The materials commonly used for this are quartz or UV-grade silica.

Different anti-reflection coatings can be applied to this camera window to optimize its transmission characteristics. Normally this isn't something we need to consider when doing standard kinds of experiments using fluorophores in widefield microscopy. But whenever we start working outside the visible spectrum, either down into the blue and UV region or up into the near infrared, then we may have to look at other coatings for those camera windows, otherwise we will start to lose some of the efficiency of the overall system.

Additionally, in this space between the mounting plate and the camera window, some cameras may have shutters. Normally you will see these on older CCD cameras; most modern microscopy cameras will not have a shutter. Its role is really to stop light shining on the sensor between acquisitions. Then we have the sensor chamber. The sensor is very delicate in nature, and also, to reduce noise as we will cover later, the sensor may be cooled. In this kind of environment the sensor itself is generating heat while also being cooled, so we have hot and cold elements, and we need to be careful to prevent any condensation forming. That means that the sensor is going to need to be in a sealed chamber of some sort without any moisture.

There are different ways of doing this. One of the most common with cameras is a process called backfilling, where the chamber is filled with a dry gas; it could be dry air or nitrogen, for example. Some desiccant may be added in there to mop up any excess moisture, and then the chamber is sealed up. The other, and ideal, way to house the camera sensor is in a hermetically sealed vacuum chamber. This is only really found on higher performance cameras because it is quite a difficult process from a manufacturing point of view, but such a vacuum system allows the sensor to operate within a sealed, moisture-free chamber environment.

The sensor, whether CCD or sCMOS, is composed of silicon and will have an array of many pixels: modern sensors may have between 1 and 4.2 megapixels. Although that's a lot lower than what we would know from the consumer cameras on our mobile phones, as we'll go into later, these are matched to the types of magnifications and fields of view used in microscopy. The sensor itself may be a color or a mono design. Again, we will look at why you may want to use a color camera and why many of these high end cameras tend to be of mono design, so they will not be color. Like many electronic components, sensors generate heat through their operation, and that heat needs to be dissipated.

In the case of sensors, it needs to be dissipated because it can result in noise affecting the measurement, and the faster or the larger the sensor is, the more heat is going to be generated. Small, slow sensors may just require passive cooling, where the heat generated is simply removed through heat sinks, which may include the casework of the camera itself. But larger, high performance sensors need active cooling. That means a thermoelectric cooler is attached to the sensor assembly, and then either air cooling, using fans to extract and remove that heat, or water cooling as an alternative way to remove the heat from the sensor.

Water cooling adds some extra complexity to the system in needing a water cooling setup, but it also minimizes the potential for any vibration, so for particularly sensitive measurements water cooling would be an option to consider with the camera. Modern cameras have onboard electronics that perform a range of different functions. These include things such as noise correction. We also have, especially for sCMOS cameras, corrections or maps applied on a pixel by pixel basis to make sure that we have a uniform response to light across every pixel within the sensor. And scientific cameras are normally controlled by a control PC and software, so the camera has to interface with that PC and software package.

Connections-wise, the camera will have a number of power and signal cables. The most basic cameras may only need a single USB cable to be powered as well. At other times, whenever we have active cooling, we need dedicated power supplies for those cameras: it requires a lot of power to cool cameras down to minus 25, or even, in the case of iXon EMCCD cameras, to minus 100 degrees C. And especially with modern sensors, we need to be able to support the high quantity of data being streamed off the camera. So we see that older cameras, which may have used USB 2, have been replaced by other formats such as CameraLink, more recently CoaXPress, GigE, or later versions of USB 3. These interfaces allow the camera to run at 40, 50, or even 100 frames per second in 16-bit mode, so that's a lot of data that gets transferred.
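To get a feel for why these faster interfaces are needed, the sketch below estimates the uncompressed data rate for a hypothetical 2048 x 2048, 16-bit sensor running at 100 frames per second; the figures are illustrative only.

```python
def data_rate_mb_per_s(width_px: int, height_px: int, bit_depth: int, frames_per_s: float) -> float:
    """Approximate uncompressed data rate streamed off the camera, in megabytes per second."""
    bytes_per_frame = width_px * height_px * bit_depth / 8
    return bytes_per_frame * frames_per_s / 1e6

# Example: a 2048 x 2048 (about 4.2 megapixel) sensor at 100 fps in 16-bit mode
print(round(data_rate_mb_per_s(2048, 2048, 16, 100)))  # ~839 MB/s, far beyond USB 2 and in the range
                                                       # that interfaces such as CoaXPress are designed for
```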

In part three of this module, we're going to start to look at things in a bit more detail, going into the sensor and camera technologies involved. We will look at a number of the CCD and sCMOS camera types, how they work, and some of the common questions that come up with these technologies. Then, in the following module, we're going to go into the real detail, compare the different cameras on the performance parameters we've mentioned, and see how they suit different applications.

Modern imaging sensors are generally based on either CCD or CMOS technology. Both of these sensor types have been around for quite some time, from around the late 1960s, and have been gradually developed over time, as we'll talk about. CCD stands for charge coupled device; we'll also see later the EMCCD, which adds electron multiplication. Then we have cameras based on CMOS technology, complementary metal oxide semiconductor. The variation of this that we are talking about for microscopy applications is sCMOS, which stands for scientific CMOS. As we mentioned before, the role of the sensor is to convert photons into usable electrical signals, so let's go on to look at how this actually happens.

How photons are converted into an electrical signal within the individual pixels can be summarized as follows. A bias voltage is applied across each pixel via its gate electrodes so that a depleted region is created within the silicon. In this depleted region, if incident photons have sufficient energy, they will liberate an electron, and this can subsequently be detected as an electrical charge. The electrons themselves can be transferred through the sensor by applying precisely timed clocking voltages. The efficiency of this process is called the quantum efficiency and is expressed as a percentage: fifty percent QE would mean that half of the incoming signal is converted into electrons.

Shorter wavelengths, into the blue and UV region, tend to be absorbed near the surface of the silicon, so they don't reach this photosensitive region in as high a percentage, and you will normally see a drop-off in QE in the blue region. In the red region as well, the longer wavelengths may pass entirely through the depleted region, so again there can be a drop-off in QE. You'll typically see a QE response that peaks in the visible range. Wavelengths above 1.1 microns or so don't have enough energy to generate electrons within silicon, so you then have to look at other types of detector technologies to study those longer wavelengths.

One of the terms you will hear fairly regularly about sensors is back illuminated versus front illuminated. So what does this mean? Well, as we saw earlier, each of these pixels is a photosensitive element, and it needs a number of additional elements as part of that: gate structures and various wiring, which allow voltages to be applied to the pixel for this conversion to happen and which also carry the electrical information out from that pixel. From a manufacturing point of view, front illuminated sensors have this circuitry placed above the photosensitive area of the sensor. But while that's good for manufacturing, obviously this is not going to be good for capturing the signal that we want to measure, as the signal has to pass through this circuitry, or network, we'll call it.

While some of the incoming signal will pass through, these structures obstruct it, and this reduces our quantum efficiency and the conversion of our signal into an electrical signal we can work with later. The back illuminated sensor design addresses this problem. As you can see here, what has happened is that we have essentially flipped the sensor upside down. But there's a bit more to it than that from a manufacturing point of view: we have to back-thin, or etch away, the excess silicon substrate. This is why these back illuminated sensors are sometimes called back-thinned sCMOS devices. It means the sensor can now obtain very high conversion rates of photons into electrons, as we have removed these obstructions.

The latest models can now reach an efficiency of about 95%, whereas for front illuminated designs we would expect around 60% or so. Using microlenses and other strategies, it was quite an impressive feat that some of these front illuminated devices were able to get up to around 80% efficiency. But back to the back illuminated sensors: this is great, we have maximized our conversion of photons to electrons. The downside is that, of course, it's going to cost more, as we have added additional manufacturing processes, and you also get reduced yields of those sensors. So there is a bit of a price premium to pay for these kinds of designs.

Another important thing to discuss briefly is the use of color or mono sensors. Typically, a standard low cost camera that comes supplied with a microscope could be a color camera. Whenever we start to do more demanding work and look to replace that camera, we find that the camera we're going to replace it with is potentially a mono camera, so it will not immediately generate a color image. Why? This is to do with sensitivity. Color cameras are not going to be as sensitive as mono cameras, though they are going to be suitable when the sample is relatively bright. They are less sensitive because there are specific red, green, and blue pixels that will only respond to light within a specific range of the spectrum.

That also means there's going to be less chance of a given photon hitting a pixel that responds to it, and these pixels need to be placed in an array pattern over the sensor. So we're not going to get as good a sensitivity as we can from a sensor design where every single pixel is sensitive to light across a much wider part of the spectrum. That being said, color cameras are very flexible, and for quite a lot of work they can be quite useful, for example some brightfield work at low magnification where we may be looking at a range of samples and simply want to use the camera to get some basic information from them.

Modern mono cameras, on the other hand, have pixels that respond with very high efficiency to light across that much broader spectrum, and they're optimized across that spectrum. That will cover the majority of the typical fluorescent dyes and proteins that are used, whether that's DAPI, GFP, or any of the Alexa Fluor dyes. The way we then generate color images is that we use excitation and emission filters to capture information which is specific to the wavelengths of interest for those fluorophores, and then we use software to scale the intensity of the detected signal into a color channel. And of course we don't have to be limited to choosing blue to represent a particular fluorescent protein; we can recolor these to suit our needs, as effectively we're just trying to label structures of interest.
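As a minimal illustration of this recoloring step, the sketch below scales two mono channel images and assigns them to the blue and green channels of an RGB composite; the channel names (DAPI, GFP) and the simple min/max scaling are purely illustrative, and real acquisition software offers far more control.

```python
import numpy as np

def to_rgb_composite(dapi: np.ndarray, gfp: np.ndarray) -> np.ndarray:
    """Scale two mono channel images to 0..1 and assign them display colours
    (blue for the DAPI channel, green for the GFP channel) in an RGB composite."""
    def normalise(ch: np.ndarray) -> np.ndarray:
        ch = ch.astype(float)
        return (ch - ch.min()) / max(ch.max() - ch.min(), 1e-12)

    rgb = np.zeros(dapi.shape + (3,))
    rgb[..., 2] = normalise(dapi)   # blue display channel
    rgb[..., 1] = normalise(gfp)    # green display channel
    return rgb

# The colour assignment is arbitrary: swapping the indices simply recolours the channels.
```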

Now we start to look at the sources of noise as they apply to the different sensors. To get a good image we want to make sure that we get the best possible signal and minimize any sources of noise. Before we even think of the sensor, light itself has an inherent noise associated with it, called photon shot noise. This is to do with the nature of light and the uncertainty of the signal: if we monitor a signal, it will fluctuate up and down. For the signal to noise ratio, then, which we'll look at, we have the useful signal information that we can do something with, and then we have the other background noise that we want to minimize.

For cameras, this can be translated into a simplified equation with the following relationship: the signal is the quantum efficiency at the wavelength hitting the sensor, times the number of photons hitting it. Then we have the noise sources involved in the camera itself. Dark current is noise generated within the silicon of the sensor itself; it's called dark current because it is present even when no light is falling on the sensor, and it scales with exposure, so you see the dark current multiplied by the exposure time here. Then the read noise is the noise generated by the sensor as a result of the readout process.
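The simplified relationship referred to here is commonly written as SNR = (QE x P) / sqrt(QE x P + D x t + Nr^2), with P the photons per pixel, D the dark current, t the exposure time, and Nr the read noise. The sketch below is a hedged reconstruction of that form; the parameter names and example numbers are illustrative.

```python
import math

def snr(qe: float, photons: float, dark_current_e_per_s: float,
        exposure_s: float, read_noise_e: float) -> float:
    """Simplified camera signal-to-noise ratio:
    signal = QE x photons; noise = sqrt(shot noise + dark current x exposure + read noise squared)."""
    signal = qe * photons
    noise = math.sqrt(signal + dark_current_e_per_s * exposure_s + read_noise_e ** 2)
    return signal / noise

# Example: 95% QE, 50 photons per pixel, negligible dark current over a 10 ms exposure, 1.6 e- read noise
print(round(snr(0.95, 50, 0.1, 0.01, 1.6), 1))  # ~6.7
```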

For one of the camera technologies we're going to look at shortly, the EMCCD, there is an additional noise factor due to the unique process it uses, but we'll talk about this specifically when we come to EMCCDs. And as I mentioned, there is also the noise associated with the light itself. Now let's work our way through the different types of sensors available in these common types of camera, starting off with the CCD. Charge coupled devices have been around in various forms since the late 1960s, so they are now very well understood, and the designs have been well optimized over that time to achieve high imaging performance.

A number of designs have appeared, but in these designs there is normally one amplifier in one of the corners, and the charge from each of the pixels is sequentially transferred down, line by line, in a serial process and is output through that amplifier. So CCD sensors operate in a very serial way. The first of the CCD designs is called the full frame CCD. These are the most basic designs: essentially there is one image area, and the charge is transferred from each pixel to the next using a series of voltages, down through the columns, and then it is shifted out horizontally. An important point to note is that, in terms of imaging, light must not continue to fall on the sensor during that readout, or whenever the charge is being shifted, because otherwise new electrons will be generated in those pixels and you will end up with an image smear effect.

So for some of these devices, you need to use a shutter to avoid this, and that means those configurations and devices are going to be slow. That is the limitation of full frame CCDs. Frame transfer CCDs are a type of CCD that goes some way towards resolving the speed and image smear issues we would have had with a full frame device. There is a second area of silicon, which is masked from light, and the electrons are transferred from the imaging area down into this masked area. While that happens, the signal can be acquired again through the imaging area of the sensor. So that allows the sensor to run much more quickly and generally speeds up the overall process.

There are really two downsides to that. You have doubled up the amount of silicon, because you have replicated the imaging area in this masked-off area, so it's going to be more expensive. And because the transfer of all that information across the full sensor down into the masked-off area is not instantaneous, a small amount of smearing is still possible. Another way to progress CCD sensor devices is with interline CCDs. With these, we have an additional column placed beside each column of imaging pixels. Again, this is a masked-off area, or column of pixels, and this time it allows you to shift the charge very quickly from the imaging pixel into the masked-off area and then transfer that information down through that column in the masked-off region. That really allows you to speed up the sensor, and also, because you're only shifting across by one pixel, it avoids the image smear issues of the other designs.

But of course there is always a downside to things like this. Since we've introduced this extra masked-off series of columns across the sensor, we have reduced the light collection ability of that area. This is called the fill factor. What can be done to improve this fill factor, and make sure that we're collecting all of the photons landing over the sensor area, is to use micro lenses: a micro lens will extend out past the imaging pixel so that it fills up that space. These work very well for microscopy because normally the incident light is effectively parallel, so the light is channeled down through the micro lens and reaches the imaging area and eventually the photosensitive area of the silicon. For other applications, using micro lenses may not be quite so good, because once you have light coming in off axis, you tend not to be able to converge the light to the required area, and you can end up with cross talk between pixels and various other effects that reduce efficiency.

Although interline CCDs have largely been replaced by the latest sCMOS cameras, and the normal CCD versions of cameras are really restricted to luminescence and longer exposure applications, one type of CCD camera which remains very much relevant is the EMCCD camera. This is because they have an exclusive additional on-chip process which they can make use of, and that makes them particularly suitable for the most challenging imaging applications, such as single molecule studies. They have an additional electron multiplication register, and they make use of high voltages to drive a process called impact ionization. By moving the charge through the register and applying these voltages, the sensor is able to generate more electrons than you started with, and these accumulate and build up as the charge moves through the register.

That allows us to boost the signal manifold before we read it out from the sensor. So you may only have a very small number of photons, two, three, four, or even one photon, for example, and you can still get a useful signal from an EMCCD camera by this means. Sometimes people may say that an EMCCD has effectively zero read noise, or less than one electron read noise. That's not strictly true: the noise floor of the camera is still there. What you have done, though, is negate that read noise floor by boosting the signal many times, say 500-fold, so that the signal, whenever you read it out, is so much in excess of that background noise. So EMCCD cameras are one of the camera technologies which still remain relevant to this day.
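As a hedged sketch of why this works, the snippet below uses the commonly quoted simplification in which the EM gain divides the effective read noise while the multiplication process adds an excess noise factor of roughly 1.41 (the square root of 2) to the shot noise term; dark current is ignored and all parameter names and values are illustrative.

```python
import math

def emccd_snr(qe: float, photons: float, em_gain: float, read_noise_e: float,
              excess_noise_factor: float = math.sqrt(2)) -> float:
    """Simplified EMCCD SNR: EM gain divides the effective read noise, while the
    multiplication process multiplies the shot noise term by an excess noise factor (~1.41)."""
    signal = qe * photons
    shot_term = (excess_noise_factor ** 2) * signal
    read_term = (read_noise_e / em_gain) ** 2
    return signal / math.sqrt(shot_term + read_term)

# Example: 3 photons per pixel, 95% QE, 500x EM gain, 50 e- readout noise
print(round(emccd_snr(0.95, 3, 500, 50), 2))  # ~1.19: still usable, where a 50 e- read noise alone would bury it
```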

With the CCD sensors, we have seen that they have been around for quite some time, and over that length of time a number of incremental improvements to the technology happened, so that more speed and better performance were extracted from those sensors. However, they were inherently limited by the serial nature of the readout process. So, other than the EMCCD, CCD cameras have typically been replaced by sCMOS technology. CMOS, to go back to where this started from, is also a technology that has been around for a very long time; it's not new at all. But these sensors have on-pixel amplifiers, and this allows them to perform in parallel some of the steps that a CCD would have been trying to route through one common pathway.

This means that the readout can be much faster. But earlier CMOS sensors, which you would have seen in camcorders and other consumer and low cost devices, did not have image quality anywhere near the level that you could get from the equivalent scientific CCD sensors. Nevertheless, the high speed, low noise aspects of those sensors were seen as something that could be good for scientific applications. Recognizing the potential of CMOS for scientific imaging applications, there was a joint collaboration between one of the sensor manufacturers, E2V Fairchild Imaging, and two of the camera manufacturers, PCO and Andor Technology.

The result of this was what is now known as scientific CMOS, or sCMOS for short. On the introduction of these sCMOS cameras, the benefits over the interline CCD cameras of the time were very clear, and they were a major step forward in terms of imaging, not only for microscopy but for a wider range of applications. The first of these designs was the Andor Neo, released in about 2009 or 2010, so this technology has now been around and refined for over 10 years. The benefits of sCMOS are that you can have large sensor sizes and high speeds along with low noise, which gives you very good attributes for microscopy. In addition, they allow you to have high resolution and good dynamic range. And as I said, this formula was so successful that the Neo, the Zyla, and other cameras based on sCMOS quickly displaced the interline CCD cameras, which were the most common detector type at the time.

We've looked at some of the fundamental aspects of digital microscopy imaging in terms of microscopy cameras. The role of a microscopy camera is to collect photons from our image and convert them into usable, digitized electronic signals. Some of the important aspects in considering cameras are their sensitivity, their field of view, and their imaging speed capabilities. There are also secondary aspects to consider, such as: can they resolve the image information that's coming through the microscope, and do they have the dynamic range necessary for the kind of samples we're dealing with? To meet these requirements, a number of different technologies, most significantly CCD and CMOS, have been developed to tackle these imaging needs, and over the years they have continued to be refined and updated.

In the next module, we're going to start to compare the key cameras of interest: CCD, EMCCD, and sCMOS. What we want to do is match them back up against these original criteria of sensitivity, field of view, and speed, and see how they compare relative to one another. Then we can look at how we match those different camera technologies against the specific needs of different applications, whether that's calcium imaging, something that's very fast, some kind of confocal imaging experiment, or luminescence studies over a longer period of time.
