
Microscopy School Lessons 9 & 10 - Super-Resolution Microscopy

 

Super-resolution microscopy allows users to resolve fluorescently-labelled structures on size scales beyond the diffraction limit (200-300 nm). In the past few years, super-resolution microscopy techniques have become increasingly commonplace methods in cell biology. This increase in popularity has been due to an active community of developers, growing numbers of commercial instruments, novel biological discoveries, and the awarding of the 2014 Nobel Prize in Chemistry to pioneers in the field. However, understanding how these techniques work can be somewhat daunting due to the range of optical, photophysical, and computational concepts underpinning them. In this talk, we will cover the basics of how the three most common super-resolution techniques (SIM, STED and SMLM) work, including factors such as labelling, hardware, image processing, and the advantages and disadvantages of the methods. We will also see examples of how different super-resolution techniques have been used to successfully address different biological problems. Finally, we will examine the current state of the art and future directions for the field of super-resolution microscopy.

Learning objectives

By the end of the talk, you should understand:

  • How the super-resolution methods SIM, STED and SMLM achieve resolutions beyond the diffraction limit.
  • The relative advantages and disadvantages of different super-resolution techniques.
  • Examples of where super-resolution microscopy has been used successfully.
  • What the next generation of super-resolution microscopy techniques might look like.

Lesson 9 Transcript

My name is Siân Culley and I'm going to be giving you two lectures on super-resolution microscopy, covering the very basics of the techniques, so we can get to grips with what this whole family of microscopy methods is all about. I'm currently a postdoctoral research associate at the MRC Laboratory for Molecular Cell Biology at University College London. And I've been doing super-resolution microscopy for the best part of a decade now, which is slightly worrying and makes me feel quite old.

Okay, so this is just an overview of what we're going to be talking about in the next couple of seminars. So we're going to talk about the basic principles of resolution, what limits it, and why we need this thing called super-resolution microscopy. We're then going to have a bit of kind of a deep dive into the three main techniques that are available to us. And those are SIM, STED, and SMLM. And I promise the acronyms will become clear as we go on.

And finally, we're going to finish up by talking about where the field is at the moment, what kind of current challenges are with super-resolution microscopy, and what you can expect to see in the next few years, what's the cutting edge of super-resolution microscopy. So, to begin with, let's just think about size scales in cell biology. And I'm assuming most people here are kind of cell biologists of some description. And I really like this animation here, because it shows us the full range of what we're interested in when it comes to cells. So, right at the smallest end we have viruses, come up an order of magnitude, and then we have, for example, bacteria. We go through bacteria, and then we'll come another order of magnitude, we'll have eukaryotic cells, and we'll start getting bigger and more complex.

And so, if we're doing fluorescence microscopy, there's a really large range of size scales that are actually interesting to us. So you might want to look at a whole cell, but you might want to look at very small objects, such as a virus or small structures within larger cells, for example, mitochondria, or other organelles. And so, to begin with, let's think about what actually limits the smallest size objects that we can resolve with fluorescence microscopy, what's our limit?

The thing that limits our resolution is diffraction, so it's then called the diffraction limit. So, what I'm showing you here is a laser beam that's being shone through a slit, which you can see on the bottom, that's the slit, and this is what the laser beam looks like afterwards. Now, let's say we want to try and make this spot here smaller, the obvious thing that you would do is make this slit smaller, so close that down. And we'd hope that makes our laser spot smaller on the other side. So let's just watch this and see what happens to this spot in the middle as we close down the laser slit. So it gets smaller, but then something alarming happens. And our spot actually gets much wider and spreads out again, which is somewhat counterintuitive and unexpected.

And that's because what's happening is diffraction. So, on the left here, if we have a plane wave, so, light is a wave, we have a plane wave that's incident upon a slit that's quite big, then you just get that plane wave propagating through and just kind of truncated by your slit. However, if we start making that slit smaller, kind of towards the scale of the wavelength, then what actually happens is our wavefronts spread out, so it becomes a circular wavefront, we haven't got a plane wave incident anymore. And this happens with all sorts of waves anyway, will do this. And here's a really nice example of it happening at sea. So this video shows that somewhere over here we have the sea waves coming in, again, a straight wavefront. They're coming through this small kind of wall. And as you can see, they're spreading out in a circular wavefront. And that's exactly what's happening with light passing through a slit. But it's just a little easier to visualize when it's a lovely beach where we can all imagine that we are rather than learning about microscopy.

So, if we think about our microscope, then we have a situation like what's on the right here. So our light source, in this case, it's a fluorophore, so the tiny little GFP there emitting plane wavefront. And the slit itself is actually our objective in this case. So this small aperture is actually the aperture of your objective. And that's what's causing this diffraction and spreading of the light through the microscope.

So let's put a number on this: what's the smallest thing that we can see with fluorescence microscopy? I've got our little GFP here, which we can imagine to be pretty much a point-like source of light. The GFP molecule itself is, I think, around 5 nanometers in size, which is very, very small, as we all know. And as its light passes through the objective, what you get is not a point of light that's 5 nanometers small, but something called an Airy disc pattern, which is also known as the point spread function.

And that's the interference pattern of diffraction through a circular aperture, like our objective. And you can see it's got this central maximum, and then smaller maxima going out towards the edges. And the size of this central maximum is about 300 nanometers, okay? But what limits this? Why is this 300 nanometers? Well, to explain that, we have our friend over here, Ernst Abbe, a German physicist with an excellent beard. And Abbe proposed that the smallest resolvable distance between two objects in your microscope, so if you've got two things next to each other, how close together can they be and still be seen apart, is equal to the wavelength of the light divided by two n sine theta. And n sine theta, what on earth is that? Well, it's actually written on your objective: it's the numerical aperture of your objective.

So, if we substitute some numbers in, we've got light of 600 nanometers, so red light, and a numerical aperture of 1.4; then our resolution is about 214 nanometers. Okay. So that gives us a numerical representation of the smallest resolvable distances we can see with our light microscope. And just to show you an example of that here. Here's a structure that I've drawn, and you should be able to see that it's a pair of lines that are getting closer together: at one end they're 800 nanometers apart, at the other, they're 10 nanometers apart. If this object was imaged with a fluorescence microscope that was limited by diffraction, as you've seen here, then you get something that looks like this.
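If you want to substitute your own numbers in, the Abbe formula is easy to sketch in a few lines of Python (the 600 nm wavelength and 1.4 NA are the example values from the lecture):

```python
# Abbe diffraction limit: d = wavelength / (2 * n * sin(theta)) = wavelength / (2 * NA)
def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable distance (in nm) for a given wavelength and objective NA."""
    return wavelength_nm / (2 * numerical_aperture)

print(abbe_limit(600, 1.4))  # ~214 nm: red light through a 1.4 NA objective
```

A higher-NA objective or a shorter wavelength pushes the limit down, which is exactly why the NA printed on the objective matters so much here.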

So above the diffraction limit, you can still resolve the two different lines. But as we get to our diffraction limit, about 210 nanometers for this simulation, that's when these two objects are no longer resolvable as separate things, they blur into one, okay? And that's what the resolution limit is: the smallest distance at which you can still make out that there are two objects rather than one. And you can imagine this can be really, really problematic, right? So, for this simulation, we have context, we're like, "Okay, cool. Well, I'm pretty sure this is some kind of V-shaped structure." But if we didn't have the added context of the lower-resolution structure that we could resolve, if we just had this, you'd have no idea what's within that blur, which is why resolution is a bit of a problem.
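You can convince yourself of this blurring-into-one behaviour with a quick simulation. This is only a rough sketch, using a Gaussian stand-in for the point spread function with a roughly 210 nm full width at half maximum:

```python
import numpy as np

# Rough sketch of the two-line simulation: two point sources blurred by a
# Gaussian stand-in for the PSF (~210 nm full width at half maximum).
def blurred_pair(separation_nm, fwhm_nm=210.0):
    sigma = fwhm_nm / 2.355                 # convert FWHM to Gaussian sigma
    x = np.arange(-1000.0, 1000.0)          # positions in nm, 1 nm steps
    spot = lambda c: np.exp(-((x - c) ** 2) / (2 * sigma ** 2))
    return spot(-separation_nm / 2) + spot(separation_nm / 2)

def is_resolved(separation_nm):
    profile = blurred_pair(separation_nm)
    centre = profile[len(profile) // 2]     # intensity midway between the sources
    return bool(centre < profile.max())     # a dip between peaks means "resolved"

print(is_resolved(400), is_resolved(100))   # True False
```

With a 400 nm separation there is still a dip between the two blurred peaks, so you can tell there are two objects; at 100 nm the two spots have merged into a single peak, just like the lines at the narrow end of the drawn structure.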

So here's a kind of just another schematic of our different size scales in cell biology. And what we want is we want appropriate imaging techniques for these different objects. So, fluorescence microscopy is great, I love it. And it's really good because we can look at live structures, live samples, we can look at dynamics, we can get our samples to express fluorescence proteins, we have really large palette of labeling tools available to us, loads of advantages for fluorescence microscopy. But of course, the big disadvantage is its resolution, which as you can see, I've limited to about 300 nanometers.

Of course, we also have electron microscopy. And electron microscopy is also limited by diffraction. But the way it gets around this is that the wavelength is much, much smaller. So, visible light uses wavelengths of hundreds of nanometers, in the 400 to 700 nanometer range. Electron microscopy uses beams of electrons, which have much, much smaller wavelengths. And this is, of course, why electron microscopy, even though it's still limited by diffraction, can get much, much higher resolution: with particle averaging, down to a few angstroms in some cases.

Now, the trade-off for this incredible resolution in electron microscopy is, of course, that you can't use all your labels and you can't have live samples; you've got a dead sample that's gone through quite an extreme sample preparation process. And you've lost all your lovely fluorescent proteins and dyes and labelling techniques that you have in fluorescence microscopy. So, super-resolution microscopy essentially exists to fill in this gap here. But before I get onto that, we can have a quick comparison of fluorescence and electron microscopy. So, as I said, with fluorescence microscopy you've got lots of labels, it's live-cell compatible, and sample preparation is pretty straightforward; we can get really nice, high temporal resolution as well, so we can see dynamic processes. But electron microscopy wins out massively when it comes to spatial resolution. We've got one, two, three orders of magnitude, perhaps, higher resolution with electron microscopy.

Okay, so what I was going to say was that super-resolution essentially exists to fill in this gap here. And here we go. So super-resolution techniques basically try to extend the resolving capabilities of fluorescence microscopy down towards what you can achieve with electron microscopy, or at least part of the way there. And importantly, super-resolution microscopy techniques want to preserve all the benefits associated with fluorescence microscopy. So it's trying to make fluorescence microscopy approach the resolutions of electron microscopy without getting rid of all these benefits. So if we now have a look at this table and replace this with super-resolution: do we still have lots of labels we can use? Yep. Is it live-cell compatible? Yes, in theory, and this is something that we'll come on to quite a bit throughout the next couple of talks. Sample preparation is still pretty straightforward. And the temporal resolution can suffer a bit, and of course, we'll talk about why this is. But you'll still get quite good temporal resolution in super-resolution microscopy, down to seconds maybe, which is still pretty good.

But of course, the thing we're all kind of looking for here is the spatial resolution. And super-resolution techniques can now get us resolutions of 20 to 150 nanometers, depending on what technique we're using. So that's pretty good. We're not down to that kind of real nanometer scale of electron microscopy, but we're definitely better than conventional fluorescence microscopy.

So, hopefully, from that intro section, you should be comfortable with why the resolution of fluorescence microscopy is limited, what limits it, and various comparisons between fluorescence microscopy and electron microscopy. Okay, great. So let's kind of get into super-resolution now. So there are three major super-resolution techniques that exist, there is a Structured Illumination Microscopy, or SIM, Stimulated Emission Depletion microscopy, STED, and Single Molecule Localization Microscopy, SMLM. And there are loads more acronyms that we will come across within the next however long this takes. But these are the kind of real big three that you need to remember.

The first inklings of these techniques started in the early '90s, and they've really taken off since then. And in 2014, the Nobel Prize in Chemistry was awarded to some of the pioneers of the field: Stefan Hell, William Moerner, and Eric Betzig. So, this is all to say that these super-resolution microscopy techniques are kind of a big deal. So we're going to go through them one by one, and talk about how they work, what's interesting about them, when we can use them, and what the operating parameters of each are.

We're going to start with structured illumination microscopy. And I'm going to own up to the fact that this is the one that I personally struggled a lot to understand; it's the least intuitive of the three, in my opinion. So don't worry if it doesn't make sense. I've used this technique very often, and I still sometimes sit around and think, "Do I really understand this?" It's a bit of a brain-teaser, this one. But we'll do our best. And if it doesn't make sense, don't worry, I've provided some reviews and some other resources where you can go if you want to work through it a bit more slowly. Okay?

So, before we get onto structured illumination microscopy at all, I just want to make sure that we're familiar with the concept of frequencies in images. So here's an image of, I think, MitoTracker in a fixed cell. And this is the kind of image that we're very familiar with as microscopists; this is our structure in the spatial domain. So each pixel is a physical distance, and we could associate a size with every single structure in that image. An alternative way of representing exactly the same information is in the frequency domain: this is the inverse, or reciprocal, domain. And we get there by something called a Fourier transform. If you use ImageJ or Fiji, for example, you can actually do this to images; it's called FFT, Fast Fourier Transform, if you're interested. If you do that to our structure in the spatial domain, then you'll get this really weird-looking plot.

And this is representing the same information, but in the frequency domain, okay? And, of course, you can get back to your original structure by performing an inverse Fourier transform on something that looks like this. So, with these Fourier transforms, you can skip back and forward between the spatial domain and the frequency domain. And what this plot shows us is, essentially, the different spatial frequencies in our image. Higher frequencies are towards the edge and lower frequencies are towards the middle. And spatial frequency is the inverse of distance or size. So, the center of this plot contains our large structures, and smaller structures, the finer detail, are out towards the edge. And then beyond this central ring, that's where we have high-frequency noise beyond our diffraction limit.

And, like I said, we've got this central circle where all our kind of spatial information is really contained. And that is the frequency cutoff of the microscope or the diffraction limit. Okay? So, outside of this high frequency noise, right in the middle, low frequency, all kind of big, fat structures, and towards the edge, that's all really fine detail. And to show you this, what I'm going to do is I'm actually going to show you a movie, where we make images just from different parts of our Fourier domain, just to get you comfortable with the idea that it's the same information. And different parts of our Fourier transform represent different size structures in real life.

So here I have our Fourier image, and what I've done to begin with is an inverse fast Fourier transform (the other F is for fast) of only the frequencies within this circle, up to a highest frequency of 6.51 per micron. And if I do that, you can see, yeah, indeed, this is the big fat structures, this is the low-frequency information. And if I play this movie, what I do is add more and more frequencies each time. And as I add those frequencies, what you should have seen is that you get more and more detail in your image, okay? So the further out we go in Fourier space, the more detail we get in our image. And that isn't part of SIM specifically; this is a fundamental property of any fluorescence imaging, okay? Just to get you comfortable with the idea that images and frequencies are the same thing, just the inverse of each other. Okay, great.
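You can reproduce the spirit of that movie in a few lines of NumPy. This is just an illustrative sketch on a random test image: keeping only the frequencies inside a small circle in Fourier space gives back a smooth, low-detail version of the image.

```python
import numpy as np

# Illustrative sketch (random test image): low-pass filtering in the Fourier
# domain keeps coarse structure and discards fine detail, mimicking the
# "only keep frequencies inside a circle" movie.
rng = np.random.default_rng(0)
image = rng.random((128, 128))                 # stand-in for a fluorescence image

fft = np.fft.fftshift(np.fft.fft2(image))      # low frequencies now at the centre

# Keep only frequencies within 16 pixels of the centre of Fourier space
yy, xx = np.mgrid[:128, :128]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 <= 16 ** 2
low_pass = np.real(np.fft.ifft2(np.fft.ifftshift(fft * mask)))

# The filtered image varies much more smoothly from pixel to pixel
print(np.abs(np.diff(image)).mean() > np.abs(np.diff(low_pass)).mean())  # True
```

Growing the radius of the mask admits higher and higher spatial frequencies, which is exactly what the movie does frame by frame as the detail comes back.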

What does this have to do with SIM? Oh, there we go, let's just play it again, why not, now that we know what to look for. Okay. So, this is from one of my favorite comics, which is XKCD. And I'm not going to sing it, because this is being recorded, and I don't think I need that kind of incriminating evidence in my life. But our chap over here is saying, "I took a picture of my computer screen. Why is the photo covered in these weird rainbow patterns?" And the very smart woman in her armchair sings, "When a grid's misaligned with another behind, that's a moiré."

Okay. So if you're not familiar with what these two people in the comic are talking about, then it's this. So here we've got two photos: a man in a rather natty striped shirt, and a corrugated barn door. And what you can see are these weird rainbow patterns, these strange, almost oil-spill-like patterns over the top of the shirt and the barn door. And you don't see them in other parts of the barn image, for example. These weird loopy patterns are moiré interference patterns. And these moiré patterns are the result of two higher-frequency patterns interfering.

So, what are the two patterns in each case? Here, we have the stripy pattern of the shirt, and that's one high-frequency pattern. And when we take a picture, you've also got another pattern, right? You've got the array of pixels inside our camera. So that's our second high-frequency pattern. So the structure of the shirt stripes interferes with the structure of the pixel grid in our camera, and you get these lower-frequency moiré fringes, as they're called, this interference pattern. And the same with the barn door. Again, we've got a high-frequency pattern in the image, which is the corrugated door, and we've got another high-frequency pattern, which is the pixel array in our camera. We take our photo, and as well as getting our other spatial information, we also get interference between these two high-frequency patterns as moiré fringes.

And here's a little video just to kind of, again, just show that moiré fringes in these cases are annoying. But you can cleverly exploit them to find more information about your structures. So here I've got two high frequency patterns. One has some kind of structure inside it, but you can't really see what it is, and the other is just a straight grid, okay? It's just grid lines. And you can already see where I've overlapped them, you can see the moiré fringes, this lower frequency information. Okay. Let's see what happens if I pass one high frequency pattern over the other. You can see that in these moiré fringes, you're actually beginning to get information about one of our high frequency patterns. And in this case, because I am a crazy cat lady, it's a picture of a cat. And this is really cool.

So you've got our unknown high-frequency pattern, we move a known high-frequency pattern, just a stripe, over it, and we suddenly get all this interesting information from the interference of the two. So, low-frequency moiré patterns contain information about both of the underlying components, okay? That's really important. It's the product of two frequencies interfering in space. So how on earth do we use this in SIM to get super-resolution? What you do is you think: okay, patterns. I've got one unknown high-frequency pattern that I want to interrogate, and that is the distribution of fluorescent molecules within my sample. How do I generate some moiré fringes that could tell us more about that? Well, let's illuminate it with a known high-frequency pattern, for example, a striped illumination field. So, instead of illuminating with a flat field, we illuminate our sample with a striped field. And this results in an image that looks like this, and you might think, that's ugly, why would I do that? But this contains moiré information; this contains that interference information. And if you have a little look at the Fourier transform over here, you can see something we didn't see earlier, which is these little spots in frequency space. And this is from the moiré interference, and this contains information about both patterns at the high-frequency level.
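The frequency mixing at the heart of this can be demonstrated in one dimension. In this sketch, an "unknown" 50-cycle pattern multiplied by a known 46-cycle illumination stripe produces a moiré component at the difference frequency of 4 cycles (the specific frequencies are made up purely for illustration):

```python
import numpy as np

# 1-D sketch of moiré frequency mixing: multiplying an "unknown" 50-cycle
# pattern by a known 46-cycle illumination stripe creates a component at the
# difference frequency, 4 cycles (the product-to-sum identity for cosines).
x = np.linspace(0, 1, 4096, endpoint=False)
sample = 1 + np.cos(2 * np.pi * 50 * x)         # unknown high-frequency pattern
illumination = 1 + np.cos(2 * np.pi * 46 * x)   # known striped illumination

detected = sample * illumination                # what the camera would record

spectrum = np.abs(np.fft.rfft(detected))
strongest = np.argsort(spectrum[1:])[-4:] + 1   # four strongest non-DC frequencies
print(sorted(int(k) for k in strongest))        # [4, 46, 50, 96]
```

The low 4-cycle component is the moiré fringe: it sits well inside the detectable band, yet its amplitude and phase encode information about the high-frequency sample pattern.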

But importantly, remember, we can't detect anything outside the resolution support of our microscope. The interference has brought this moiré pattern into our resolvable field, okay? And even though it's low-frequency information, it contains information about the higher frequencies within our sample. So this is how SIM acquisition works. It's a widefield microscopy technique, and you pass your illumination through some kind of pattern maker, okay? That could be a diffraction grating, or it could be a spatial light modulator; you can do really fun things with interference to generate some kind of regular pattern. And you basically move the phase and the rotation of this gridded pattern and take a series of images of your structure.

And so, for each rotation and phase of your illumination, you get different moiré patterns, and you can actually see them rotating around in Fourier space. So that's cool. How do we use this to increase our resolution? Now we've got all these moiré patterns that are the interference of the diffraction-limited image of something at a higher frequency with our grid. Well, you can think about this as a series of simultaneous equations. In reality, in the SIM algorithms, this is all done in frequency space (all SIM calculations are done in frequency space), but you don't need to worry about that because it's all done for you most of the time.

So, to understand it, we're just going to think about this in real space. Let's imagine we've got our first moiré pattern; we know that that is our true structure, which we don't know at very high resolution, modulated by a known illumination pattern. Okay? So we've got one equation here: illumination pattern one, combined with the structure, gives moiré pattern one. So you know two things in this equation. Then, let's say you rotate your illumination: you get a new moiré interference pattern, and you, again, know your illumination pattern, and your structure is the same, okay? So, essentially, you do this for maybe 25 images, if you combine all your rotations and your phases. And you get a big matrix of simultaneous equations, where you've got your observed moiré interference patterns with low-frequency information, you've got one high-frequency pattern that you know, and now you can use this combination to back-calculate your actual structure with higher resolution than you had before. Okay?

And this is what it looks like if I run all those moiré patterns, those illuminated grid patterns through the reconstruction algorithm, then we go from our diffraction limited image here to our SIM reconstruction in the middle. And you can see visually, if we just look at these two images to begin with, you get a lovely increase in resolution. For example, we can have a look at the fine detail here, you can see, we're beginning to see the lumen of these mitochondria, for example. Then if we have a look at the Fourier transform, so frequency domain image of our SIM reconstruction, you can see that we are now filling up much more frequency space, right? So this dashed circle with the original resolution limit of our image, that's where the Fourier transform information of our diffraction limit image would live. We've actually now got frequencies from further out. So we're kind of reaching further into higher frequency land. And those come out in our image as higher resolution.

And our SIM circle, as it were here, is double the radius of our diffraction limited kind of circle. So the SIM has doubled the resolution, which is very cool. So, to recap, the key features of SIM are, it's a wide field technique, it can work pretty well with live cells. And that's because it doesn't really care what fluorescent molecule you use. You can use fluorescent proteins very rapidly with SIM. And the grid pattern doesn't take too long to rotate. So it doesn't take a huge amount of time to create this raw data for a SIM image, which means you've got quite a good temporal resolution of about one second normally.

And multi-colour imaging, for example, is also quite straightforward. And, again, it's not doing anything weird with fluorescence. It's not doing anything weird types of fluorophores. All it's doing is generating interference patterns and then reconstructing those, okay? So those are the kind of key features of SIM. Just a quick look at some of the applications of SIM microscopy. There are SIM microscopes available from lots of commercial manufacturers, for example, ZEISS have the Elyra, and Nikon have N-SIM, GE/Applied Precision, I think their name is, have the DeltaVision OMX. There's lots of commercial solutions out there for SIM, or you can build your own if you're feeling very bold. But this is just an example of how SIM has been used.

So you can extend SIM into three dimensions. I'm not going to talk about that now for the sake of time, but it's doable. And here's a couple of examples of how three-dimensional and multi-coloured SIM has been used for looking at nuclear organization. So, this is something from Lothar Schermelleh's group. And it's really beautiful imaging. And you can see here, what they've done is they've imaged DAPI, so, staining the chromatin and the nuclear lamina. And you can see in the confocal microscopy image it is quite cool. You can see the Lamin kind of between some of the chromatin. If we then go to SIM, you see much higher resolution information, and you can actually see the lamina form these little tubules between chromatin globules. So that's very cool.

And a more recent example, this is 3D SIM imaging showing a DNA kind of break repair protein, forming a shell around a break site. So this is our little mark of a break site. This is the DNA repair protein. And to kind of protect that site from damage while it gets fixed and to make sure that mutations don't get propagated, etc. 3D SIM is used to say, actually, yeah, these are the proteins. And this 53 bp forms an actual protective shell in three dimensions around this tiny DNA break site, which is very cool.

So, like I said, SIM can be used pretty readily for live-cell imaging. On the left here, we have an example of imaging an immunological synapse in the T-cell. And what we're looking at is stabley formed immunological synapse where actin is labeled with GFP. And this is SIM imaging every 800 milliseconds, so that's pretty fast. And what you can see is you can see for example, the flow and quantified speed and direction of this actin within this immunological synapse.

On the right here, this is pretty cool. This is SIM hardware that's been adapted for imaging dendritic spines in vivo. So, as you can imagine, that's pretty challenging. But again, you can see difference between the deconvolved widefield and the SIM imaging of fluorescently labeled dendritic spines in a living mouse, again, which is very, very cool. Okay, so, what are the limitations of SIM microscopy?

Well, one big limitation is you can only double the resolution relative to diffraction limit. And the reason for this is because that pattern that we're illuminating with is also diffraction limited. If we could image with a beyond diffraction limited pattern, we could extend the resolution even further, but we can't, diffraction always limits us. So, if you through all the SIM max, which is a bit painful, then you come out the other end saying, okay, you can maximum double the resolution in SIM.

Now, in theory, you can go further with techniques such as non-linear SIM, where you use saturated fluorescence to generate harmonics, and you can use photoswitchable fluorescent proteins, again, to get more information. But that's not plastic SIM as it were. And of course, once you start saturating fluorescence with high intensities, then you start losing the benefits of live cell imaging, for example. Another big limitation or kind of just warning with SIM is that the reconstructions, you put your raw images through a reconstruction algorithm to get your final image, those can be very prone to artifacts. So this is an example of a good SIM reconstruction on the left here from the data we looked at earlier. And on the right, I deliberately made some of the reconstruction parameters bad, I made bad choices.

And you can see it's done kind of suspicious things to the image that you might not notice if you're not a SIM user, like regularly. So, for example, this mitochondrial structure here, you can see in the reconstruction, it's just one kind of tubule. Whereas in our balancing reconstruction, that same structure suddenly looks like two thinner, intertwining structures. And that's worrying, that's what you need to be really careful of. There's another example of this here. So, if you have out-of-focus fluorescence, for example, SIM is only kind of really good at pretty flat samples, it's not great for thick tissue, and out-of-focus fluorescence is one thing that can really ruin your day in terms of artifacts.

And you can see in this example, all these kind of little lattice artifacts appearing in the image. So, that's kind of as far as we're going to go with SIM. Hopefully, you should be comfortable with what happens if you eliminate a sample with patterned information and generation of moiré fringes, the basics of how SIM uses that to increase the resolution, and some of the main advantages and disadvantages of SIM, and where it is a tricky thing to get your head around. So here's kind of a few resources that might help if you want to kind of go back and sit down and look through the maths and some more in-depth about how SIM works.

Okay. So the next technique we're going to talk about is Stimulated Emission Depletion microscopy, or STED. STED is the only confocal super-resolution technique of the main three: SIM and SMLM are both widefield, STED is confocal. So let's have a little refresher of confocal microscopy. In confocal microscopy, you have a laser spot that scans across our labelled sample. So let's say this is a simulation, and this is our labelled sample; we'll have an excitation beam that scans across the area. It'll be much faster than in this simulation, but you get the idea.

Every time our beam passes over a labelled part of the sample, there's going to be fluorescence emitted, and you can see that here. So here there's no structure; as our beam passes over the structure, we get some emitted fluorescence. What happens to this emitted fluorescence? Well, we don't detect the structure directly, we just get the total number of photons that came from that beam position, which go into our PMT. So, low number, high number, etc. And this number gets assigned to each beam position, i.e. each pixel in your final image. Great. Hopefully, that wasn't too shocking, and that was nice and familiar.
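This picture of confocal image formation, where each pixel is the total fluorescence collected with the beam parked at that position, can be sketched in a few lines of code. This is a toy numerical model (the array sizes, beam width, and sample are invented for illustration), not real acquisition software:

```python
import numpy as np

# Toy model of confocal image formation: the detected value at each
# pixel is the total fluorescence excited while the beam is parked at
# that position, i.e. the sample convolved with the beam profile.
# All sizes and the beam width are made-up illustration values.

def gaussian_beam(radius, sigma):
    """2D Gaussian excitation profile of the scanning spot."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    beam = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return beam / beam.sum()

def scan_image(sample, beam):
    """'Scan' the beam over the sample: each output pixel is the sum of
    fluorescence excited at that beam position (a plain convolution)."""
    r = beam.shape[0] // 2
    padded = np.pad(sample, r)
    out = np.empty_like(sample, dtype=float)
    for i in range(sample.shape[0]):
        for j in range(sample.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2*r + 1, j:j + 2*r + 1] * beam)
    return out

# A sample with two point-like fluorophores 4 pixels apart.
sample = np.zeros((32, 32))
sample[16, 14] = 1.0
sample[16, 18] = 1.0

image = scan_image(sample, gaussian_beam(radius=8, sigma=3.0))
# Even pixels with no fluorophore underneath get signal, because the
# finite-width beam excites molecules some distance from its centre.
print(image[16, 16] > 0)   # True
```

With a beam this wide relative to the 4-pixel separation, the two points also blur into a single blob, which is exactly the resolution problem discussed next.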

So, what affects the resolution of confocal microscopy? Now, our scanning spot, of course, has a finite diameter because of diffraction through the optics. Think back to that slit video that we showed; that's what's happening in our microscope, and we can't make our scanning spot arbitrarily small just by changing our slit or our objective.

So, what does this do to our resolution? Well, for example, when our scanning beam is parked on this pixel here, even though there aren't any fluorophores directly beneath that position, the edges of the beam are still exciting molecules that are quite far away. And so when we look at our image, that pixel doesn't have zero fluorescence; it's actually got a fluorescence value. So our finite-sized spot means that fluorophores quite a long way from the spot center are excited, which leads to not a great resolution.

Now, one of my favorite things to do in microscopy is pretend I'm the god of physics. And if I'm the god of physics, I'm not going to get rid of diffraction, but I'm going to ask, "What happens if we just make our spot smaller?" So, that's our limit, the physical limit on spot diameter from diffraction. Okay, back to being the god of physics for the moment. What happens if I can just make that spot smaller? What does that look like? Well, of course, if you have a smaller spot, then for that same pixel the spot is in the same place, but because it isn't as fat, you're not exciting fluorescent molecules a long way away, and you get a nice, higher-resolution image.

Okay. So, of course, we can't just make the spot smaller; diffraction says no. What STED does is create the effect of a smaller spot using some clever photophysics. In STED microscopy, we have a Gaussian-shaped excitation beam, as in confocal microscopy: intense in the middle, quite fat, and dimmer towards the periphery. And we're always going to think about what's happening at this position here, for example. What STED does is put in a second laser beam that's overlapped with your confocal spot, and that's shaped like a donut; we call it the depletion beam. The STED beam and the depletion beam are the same thing: this donut-shaped beam. And what it does is prevent fluorescence in the intense part of the beam. Okay? So we're going to overlap these two beams, like this.

And so, fluorescence is prevented in the intense areas of our donut beam. So, in the beam periphery, we're still getting excitation from our Gaussian-shaped excitation beam at that position, but then the STED beam is saying, "Nope, no fluorescence allowed." So you don't actually get fluorescence from that part. The only place fluorescence is allowed from is the dim center of the donut, i.e. the middle here, which looks a lot like a smaller spot that scans through your sample. So you only get photons detected from the central region of the scanning beam as it moves through the sample, which is effectively similar to having a smaller scanning spot.
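One common way to model this effective smaller spot is to multiply the excitation profile by the probability that an excited molecule survives depletion, which falls off exponentially with the local STED intensity (a standard simplified model). A rough numerical sketch, where the beam widths and the saturation scaling are invented illustration values:

```python
import numpy as np

# Sketch of how the STED donut carves out a smaller effective spot.
# Fluorescence survives only where the depletion beam is weak; the
# survival probability is modelled as exp(-I_STED / I_sat), a standard
# simplification. Beam widths below are illustrative, not measured.

r = np.linspace(-400, 400, 801)            # radial position, nm
excitation = np.exp(-r**2 / (2 * 120**2))  # Gaussian excitation spot
donut = (r**2 / 150**2) * np.exp(1 - r**2 / 150**2)  # dark centre, bright ring

def effective_psf(sted_power):
    """Effective fluorescent spot for a given STED power (I_max / I_sat)."""
    survival = np.exp(-sted_power * donut)
    return excitation * survival

def fwhm(profile):
    """Full width at half maximum of a 1D profile, in nm."""
    above = r[profile >= profile.max() / 2]
    return above.max() - above.min()

print(fwhm(effective_psf(0)))    # no STED: diffraction-limited width
print(fwhm(effective_psf(20)))   # strong STED: a much narrower spot
```

Turning `sted_power` up further keeps shrinking the effective spot, which is the "squeeze the donut hole" behaviour described below.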

So, how does STED do this? How does this work at all? What's this magic that means you can just turn off, or suppress, your fluorescence? We're going to have a quick look at the photophysics underlying this. To begin with, we're just going to think about what's happening in the middle of our donut, where our excitation beam is intense but our STED beam is dim.

So, what these abstract lines are meant to represent are the energy levels of our fluorescent molecule. Down here we've got the ground state of the molecule, with its little sub-vibrational levels, and here we've got the first excited state of our molecule, again with its sub-vibrational levels. Molecules are lazy, so before the beam hits them, they're going to be hanging out in the ground state, the lowest energy state. Our excitation light comes in at the center of the donut, that energy is absorbed by a molecule, and it's promoted to the excited state. After a very short amount of time, we're talking picoseconds or shorter, we get vibrational relaxation. And then, after a while, we get spontaneous emission, or fluorescence, from the lower vibrational levels of the excited state back down to the ground state.

And the energy that's released during that transition from high energy to low energy comes out as a photon. So that's your fluorescence that you can detect. Then you get vibrational relaxation, and we're ready for another cycle. So, hopefully, that isn't unfamiliar. And of course, if we think about the spectrum of our fluorescent molecule, if it's a green molecule we'll excite around the absorption peak, for example, and then we'll detect all of this redder, longer-wavelength fluorescence on our PMT to make our image.

Okay. So, that's exactly the same thing as happens in normal confocal fluorescence microscopy at the spot center. All right, what about at the beam periphery? Now we have excitation light and we have STED light, and our STED beam is intense. To begin with, it's exactly the same: our molecule in the ground state absorbs the excitation light and is promoted to the first excited state. The STED light doesn't interact with the energy levels at this stage, okay? But what it does is essentially set up a standing wave between two energy levels: normally one of the lower vibrational levels of the first excited state and one of the higher vibrational levels of the ground state. So it's not absorbed by the molecule, it's not doing anything in terms of energy change at this stage, but it's there in the background; it's set up a standing wave in the background.

So, our excitation light is absorbed, a molecule becomes excited, and vibrational relaxation proceeds as standard. Now, this is where the STED light has its effect, once we've got to this stage. The STED light is basically a catalyst for one specific energy transition. Okay? Normally, if the STED light wasn't there, the molecule would, at random, fall back down to the ground state, and it does that at quite a random wavelength. That's why we've got a broad spectrum: it can happen at many different wavelengths, and it's a pretty stochastic process.

When the STED light is present, it's really catalyzing the molecule to fall down one specific energy transition, and that energy transition perfectly matches the energy of a STED photon. This is a process called stimulated emission. Our excited molecule basically resonates with our STED light, okay? So the STED light isn't absorbed; it stays there the whole time and actually comes out the other side, so you get your original donut photon back. And it basically forces your molecule to come back down to the ground state via a transition with the same energy as your STED photon.

And of course, that means that energy still needs to be emitted. So you get a second photon out from your stimulated emission that looks identical to your original STED photon. So, excitation goes in and is absorbed; one STED photon comes in, and essentially two STED photons come out: your original photon, and the one from forcing your fluorescent molecule down this very specific transition. And then there's vibrational relaxation again.

So the only big difference here is that fluorescence can happen at any old wavelength, whereas stimulated emission only happens at the wavelength of your STED beam, and so you can filter that out really well. You're still getting light out, but you know exactly what wavelength it's at. So, if we look at the spectra, you'll have your excitation light at your excitation peak, and then you'll have your STED light somewhere in the emission spectrum tail.

So your detected fluorescence will be between these two wavelengths, and then you'll filter out any wavelengths at your STED light and beyond, because that will basically be the donut coming back out the other side. Okay, cool.

So the only fluorescence you're detecting is light from the center that hasn't undergone this STED process, the stimulated emission. What determines the resolution in STED microscopy? Well, the size of the hole in the center of the donut. So, what's important to know is that the optics that generate your donut in STED microscopy are still diffraction-limited, okay? You're not doing anything to break the diffraction limit in the way you make your donut.

What you actually do is make the hole in the middle smaller by increasing the intensity of the laser that's making the donut. So these rings are all exactly the same size, but I've just increased the intensity. If you imagine a Gaussian, a kind of bell-shaped curve, when you increase the intensity you're increasing the peak, but also its wings, and those wings encroach into the center of the donut. So we get higher and higher intensity, which squeezes that hole smaller and smaller. And you can see this in this little animation from the paper, where 24-nanometer-diameter fluorescent beads are being imaged. And if you don't have your STED beam on, so zero STED intensity, then you get your diffraction-limited blur.

If you increase the intensity of your STED beam, you make your donut hole smaller and smaller, and you increase your resolution, so you can see all these individual little beads. Really cool.
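This square-root scaling of resolution with depletion intensity is usually written as a modified Abbe formula, Δr ≈ λ / (2·NA·√(1 + I/I_sat)). A quick calculation with typical illustrative values for the wavelength and numerical aperture:

```python
import math

# The widely quoted extension of Abbe's formula for STED: resolution
# improves with the square root of the depletion intensity (relative
# to the saturation intensity I_sat). Wavelength and NA below are
# typical illustrative values, not taken from a specific instrument.

def sted_resolution(wavelength_nm, na, i_over_isat):
    """Approximate resolvable distance in nm for a given I/I_sat ratio."""
    return wavelength_nm / (2 * na * math.sqrt(1 + i_over_isat))

# Without the STED beam (I = 0) this is just the diffraction limit:
print(round(sted_resolution(640, 1.4, 0)))     # 229 (nm)
# Turning the depletion intensity up 100x squeezes the donut hole:
print(round(sted_resolution(640, 1.4, 100)))   # 23 (nm)
```

Note the trade-off this formula makes explicit: a tenfold resolution improvement needs roughly a hundredfold more depletion intensity, which is why sample damage (discussed later) becomes an issue.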

So the key features of STED: it's a confocal technique, and it has all those benefits of confocal microscopy, like optical sectioning, and you can get very high temporal resolution if you scan a very small area. And it should work with any fluorophore, as long as you've got the right wavelengths for excitation and depletion. All fluorophores are capable of undergoing STED; stimulated emission is just a fundamental photophysical process. In fact, it's what a laser does: the acronym LASER stands for light amplification by stimulated emission of radiation.

So anything can undergo stimulated emission; this isn't just a fluorescent-molecule thing. Anything will STED if you try hard enough. The spatial resolution is technically unlimited: in theory, you can make that hole infinitely small by making your STED beam infinitely intense. And it doesn't require any computational post-processing in the way that SIM did, for example. So, basically, you turn the STED beam on, get your image, and it's already at super-resolution.

And there are a couple of commercial implementations. Leica make a STED microscope, and Abberior Instruments also do various STED microscopy platforms. A lot of STED microscopy, in reality, has been done on home-built systems; the Abberior ones are becoming very popular now, but the home-built, kind of homebrew, STED systems have classically had the best resolution so far.

So let's have a look at a couple of applications of STED microscopy. This is an example of structural discovery with STED. This is two-colour STED microscopy of two structural proteins at the neuromuscular junction in Drosophila. In the confocal image, you can see that our magenta protein, the Drosophila RIM-binding protein, is kind of co-localized with a larger blob of the green Bruchpilot protein. If we look with STED, what you actually see is that it forms a ring around the central protein. And this could be used to make really exquisite three-dimensional models of the molecular structure at the neuromuscular junction. Very cool.

And here's an example of some two-colour live STED microscopy. This is a very cell-biology application, looking at the small G protein Arf1 and how it is involved in forming Golgi tubules. So, again, our Golgi apparatus is in green, and the Arf1 protein is in magenta. And you can see nice high-resolution imaging, here every 10 seconds, of that structure.

STED is also really popular for neuroscience in particular. And that's, of course, because it's an advanced confocal technique, so we can penetrate quite deep into tissue. And here's an example of actual in vivo STED microscopy. This is in a living mouse, looking at dendritic spine morphology. So the mouse is expressing some kind of fluorescent protein within these dendrites, and here you can see really fast temporal resolution and really high spatial resolution, capturing changes in dendritic spine morphology in a living animal. Very cool.

Limitations of STED microscopy: as you might imagine, multi-colour imaging can get quite difficult. Remember, in normal confocal microscopy you need just one excitation beam per fluorophore. In STED microscopy, you need an excitation beam and a depletion beam for each fluorophore, okay? And what you don't want is for the depletion beam of your first fluorophore to start exciting fluorescence from another fluorophore in your sample.

And although STED has technically unlimited resolution, that requires very high laser intensities. This is just a little example from a paper of increasing the laser intensity in STED microscopy. And this ominous yellow thing shows that the sample has actually ruptured, and there's leakage of the contents of the sample: you're causing physical damage to the sample, because in trying to increase the resolution you're depositing huge amounts of energy into it. And so that can be a bit tricky.

I actually did some STED microscopy during my PhD, looking at it from quite a photophysical angle, and here's just something I did. One of these fluorescent crystals over here was imaged using STED, so you get these nice fluorescent crystals in a nice structured image. Hopefully my face isn't in the way. It is on this screen, but not on my other screen. So, apologies if you can't see this, but there's a melted crystal underneath where my face is that literally exploded because I was doing STED microscopy on it.

So, that's all for the first talk on STED. Hopefully, you should be comfortable with why a smaller beam means better resolution in general, how STED uses a combination of beam shaping and the photophysical process of stimulated emission to increase the resolution, and the advantages and disadvantages of STED microscopy. And this is really nicely reviewed in this paper here. So, thank you for joining me for the first half of this talk, and I'll see you next time to look at single molecule localization microscopy and future avenues for super-resolution.

Lesson 10 Transcript

Welcome back. This is the second super-resolution microscopy lecture that I'm giving. In lecture one, we covered the diffraction limit, SIM, and STED, and in this talk we're going to cover single molecule localization microscopy and more advanced techniques: where the field is going, what's the state of the art, what's cutting-edge. So, single molecule localization microscopy, also called SMLM... and there are going to be some other acronyms, which I'll grapple with in a little bit.

So to begin with, let's think: what does an image of a single fluorescent molecule look like? So here is a real image of a real molecule. This is an image of one molecule of Alexa Fluor 647, okay? One molecule that I've imaged. Cool. Boring, but cool. So I want you to look at that image and think: do I know where the center of that molecule is, and how confident am I about where that center is? So if I look at this I'm like, "Well, I'm pretty sure the center of the molecule is somewhere around here." You know, it's not going to be over here, it's not going to be over here, it's not even really going to be here; maybe here, but most likely here.

So we can already infer higher-resolution information from this image of a single molecule than is actually in the image: we can pinpoint that center mentally better than we can see it in the image. And a lot of the time, if you can do something mentally, you can do it computationally. So what I can actually do is fit a two-dimensional Gaussian to find the center of this molecule very accurately. This is exactly the same information, where intensity is on the Z-axis here and X and Y are here, and this is a fit to the intensity of this molecule. And from the parameters of that fit, I can say, "Okay, cool, the center is there," and I can relate that back to my original image to find the center. Okay.
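The centre-finding step can be sketched as follows. For simplicity this uses an intensity-weighted centroid on a clean simulated spot; for a noise-free, symmetric spot this recovers the same centre as the full 2D Gaussian least-squares fit that real SMLM software performs. The spot position, width, and photon count are made-up illustration values:

```python
import numpy as np

# The "find the centre of a single-molecule image" idea, in miniature.
# A centroid stands in for the full Gaussian fit described in the
# lecture; on clean symmetric data both give the same answer.

def gaussian_spot(shape, cx, cy, sigma, photons):
    """Simulate a noise-free single-molecule image (made-up parameters)."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))
    return photons * g / g.sum()

def localise(img):
    """Sub-pixel centre estimate via intensity-weighted centroid."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

# True centre deliberately placed between pixels, at (7.3, 8.6).
spot = gaussian_spot((16, 16), cx=7.3, cy=8.6, sigma=1.5, photons=1000)
cx, cy = localise(spot)
print(round(cx, 1), round(cy, 1))   # 7.3 8.6, i.e. well below one pixel
```

The point is that the recovered centre is far more precise than the pixel size, which is exactly the "infer more than you can see" argument above.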

So if we have an image of one single molecule, then we can pretty easily and accurately work out where the center is from the Gaussian fit, and the quality of this Gaussian fit, how confident we are in the fit, can then be related to our confidence in where that center is. So this represents the error in our localization, basically. If we can fit a Gaussian function to an image of a single molecule, we can obtain the center and essentially a confidence interval on where that center actually is. Something like this. So we can turn our raw data into that higher-resolution information. Great.

So now think about a widefield fluorescence microscopy image. For example, this is some microtubules labeled with Alexa Fluor 647, the same dye we looked at before. And all this image is, is lots and lots and lots of those single-molecule images on top of each other, okay? So it's one image containing really complex spatial information. What we do in single molecule localization microscopy is try to find those single molecules again. And to do that, instead of taking one image where all of our molecules are emitting fluorescence at the same time, we instead take a really large number of images where a very small number of fluorescent molecules are emitting in each frame.

So we go from one image containing very complex spatial information to a large number of images each containing much simpler spatial information. And you can see this is called blinking because it really looks like blinking. The way this is done, and we'll talk about how we generate the blinking, don't worry, is we actually get the fluorescent molecules turning on and off. And if we look at the difference between these individual blinking images, I reckon you could even, with just a pen and paper, pinpoint the center of each of those molecules and start to build up a really high-resolution image of our data. But we don't do it with a pen and paper, obviously.

We do it with this Gaussian fitting method: we fit Gaussians all over the image to find the single molecules and then render the result as an image. So this is the image building up as it finds more and more molecules. Cool. And you can see we've got a much higher resolution here than we did before. So, to very briefly summarize single molecule localization microscopy: you spread the fluorescence out in time so you can see single molecules, and then you accurately localize the center of each of those single molecules and build a map of where they are inside the sample. Great.
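That whole detect-fit-accumulate loop can be sketched in miniature. Detection here is a naive local-maximum test and refinement a 3x3 centroid, both stand-ins for the more robust spot detection and Gaussian fitting that real SMLM software uses; the frames and molecule positions are simulated:

```python
import numpy as np

# Minimal sketch of the SMLM pipeline: per frame, find isolated bright
# spots, refine each centre to sub-pixel precision, and accumulate all
# centres into one list that can be rendered as the final image.

def find_spots(frame, threshold):
    """Return (row, col) of pixels brighter than all 8 neighbours."""
    peaks = []
    for i in range(1, frame.shape[0] - 1):
        for j in range(1, frame.shape[1] - 1):
            patch = frame[i-1:i+2, j-1:j+2]
            if frame[i, j] >= threshold and frame[i, j] == patch.max():
                peaks.append((i, j))
    return peaks

def refine(frame, i, j):
    """Sub-pixel refinement: centroid of the 3x3 patch around a peak."""
    patch = frame[i-1:i+2, j-1:j+2]
    dy, dx = np.mgrid[-1:2, -1:2]
    w = patch.sum()
    return i + (dy * patch).sum() / w, j + (dx * patch).sum() / w

# Two toy frames, each containing one blinking molecule.
frames = []
for (ci, cj) in [(10, 12), (20, 7)]:
    y, x = np.mgrid[0:32, 0:32]
    frames.append(np.exp(-((y - ci)**2 + (x - cj)**2) / (2 * 1.2**2)))

localisations = []
for frame in frames:
    for (i, j) in find_spots(frame, threshold=0.5):
        localisations.append(refine(frame, i, j))

print(len(localisations))   # 2: one localisation per blink
```

Rendering the accumulated `localisations` list as a 2D histogram (or as Gaussians scaled by their precision) gives the super-resolved image that builds up molecule by molecule.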

So, as you can imagine, one of the most important parts of this whole technique is how we actually achieve that blinking. If we go to the microscope, put our sample in, and just acquire a long time series, it doesn't magically start blinking. To do this, you need a fluorescent labeling system with an on-state and an off-state, okay? Our on-state is fluorescent, emitting light we can see; our off-state is dark. And we need to be able to transition in at least one direction between these. There's an off-switching rate and an on-switching rate, determining how fast you go between the fluorescent on-state and the non-fluorescent off-state. And the difference between a lot of single molecule localization techniques comes down to how you generate the blinking.

One of these is called PALM, photoactivated localization microscopy, and in PALM you achieve the blinking by using fluorescent proteins which have a dark state and a bright state, some kind of on-state and off-state you can transition between. Another method is STORM, stochastic optical reconstruction microscopy; in this case you don't have fluorescent proteins, you have organic dyes, such as the Alexa Fluor dyes, that can go between off-states and on-states. And the final popular way of achieving blinking is DNA-PAINT, where PAINT stands for point accumulation for imaging in nanoscale topography. This uses the binding and melting of DNA to bring short labeled oligonucleotides closer to and further from the sample; that's your on-state and your off-state, the distance from your DNA molecule to the structure of interest.

So I'm going to go through each of these individually, just to get a flavor of what they're like. So, how do I achieve blinking in PALM? This is where we use fluorescent proteins. The first method is using photoactivatable fluorescent proteins, such as photoactivatable GFP. Photoactivatable GFP is, kind of, a broken GFP. So if you just label a sample with photoactivatable GFP, you've got this broken chromophore in the middle of your fluorophore, and it's dark, okay? If I try to image it like normal GFP, I won't get any fluorescence out. So that's my off-state. If, however, I shine some ultraviolet activation light onto some of my molecules, they randomly undergo a rearrangement of hydrogen bonds that activates the chromophore, okay?

So if I apply UV activation, then some of my GFP molecules will become active: they'll turn into a functional GFP, which can then be excited and emit fluorescence. Great. And then after a while this can go back to the dark state, or the on-state can be permanently bleached, so we have these on and off transitions. And so you can imagine, while you're imaging, you have UV activation at a level that continually puts fluorophores into the on-state, and your excitation, which gets you fluorescence and then converts molecules back to the off-state. So there's a balancing act between your activation light and your excitation light to get your blinking rate right, so you see individual molecules.

Another class of fluorescent proteins used in PALM are photoswitchable fluorescent proteins, for example Dendra2. For Dendra2, the off-state is quite a bad green fluorescent protein: if Dendra2 is in its off-state and you try to image it with 490-nanometer excitation, you'll get some quite bad, low-intensity fluorescence out. But the other thing that happens when you excite this off-state of Dendra2 is that, again, it can cause a change in the chromophore structure in the center, in this case carbon-carbon double bond formation. So illuminating Dendra2 with 490 nanometers can actually turn this green fluorophore into a red fluorophore, okay? In that case, if we then also put on some excitation light at 561 nanometers, we'll get lots of red fluorescence.

So in PALM, our blinking is mediated by the laser intensities and wavelengths. For our photoactivatable proteins, we balance the UV activation and the excitation light to get the right ratio of on-state to off-state; for the photoswitchable fluorescent proteins, we have our conversion light, which switches molecules into the red state, simultaneously with our excitation light, which generates the fluorescence that we actually detect. So that's how PALM works.

STORM is slightly different. This uses organic fluorophores instead of fluorescent proteins. The on-state in STORM is basically the same state that we use in all conventional fluorescence microscopy, where we have our excitation and our emission, and we're just aiming to go around this cycle lots of times, generate quite a lot of emission, and image our structure. Now, these aren't the only states that a fluorescent molecule has; this is just a representation of the singlet states. There's also a state called the triplet state in molecules, and technically, according to quantum mechanics, transitions between singlet and triplet states are forbidden. It's a quantum mechanically forbidden transition, but, because quantum mechanics is wacky, you can break that law. You can break that forbidden transition if you try hard enough. What does trying hard enough mean?

Well, let's say we really increase the intensity of our excitation. What does that mean for this molecule? Well, as soon as it comes back down from the excited state and emits fluorescence, it'll immediately be driven back into the excited state. So when we really strongly increase the intensity of our excitation, our molecules end up spending way longer in the excited state than they would at a lower intensity. And if you're in the excited state, there's a small but non-negligible probability that you'll slip into the triplet state, okay? You can't go there from the ground state; you can only go there from the excited state. It's a very low-probability process, and you increase the probability by essentially making sure your molecules are up in the excited state as much as possible.

Okay, so we go into the triplet state. Cool. So this will happen randomly, with low probability; occasionally a molecule will just fall in there. If you're in the triplet state, you can't do fluorescence, so you're dark, okay? And that's great, right? We've got a small number of on-state molecules, and we end up with quite a large number of off-state molecules because, again, you can't do fluorescence in the triplet state, and the triplet state has a much longer lifetime than the singlet state. So once a molecule is in the triplet state, even though getting there is quite a low-probability event, it stays there for ages, okay? And that can be up to seconds: one fluorescence cycle takes a few nanoseconds, while the triplet state lifetime can be in excess of seconds. So that's how the blinking is generated in STORM: we knock our molecules into the triplet state, and from there they can undergo some more transitions. From the triplet state, they can go to a radical state, and from that radical state they can undergo permanent bleaching; you've broken your molecule.

Alternatively, you can bring that molecule back to the singlet state via a chemically mediated transition, normally using the imaging buffer. You can also mediate this transition back from the triplet state to the singlet state with UV light. So again, technically transitions between singlet and triplet states are forbidden, but you can engineer conditions where they're rather more likely than you'd expect, okay? So blinking in STORM is mediated by several things. Firstly, the excitation intensity: this is what shoves molecules over into the off-state, the triplet state. Then the buffer, which provides the chemically mediated transitions that bring molecules back from the off-state to the on-state, so fluorescence again. And finally any UV light that you put on the sample, because that can also help recovery from the off-state to the on-state. So if you have the right combination of excitation intensity, correct buffer conditions, and UV illumination, you can quite nicely fine-tune the blinking statistics of your molecules.
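The separation of timescales that makes this work (short fluorescent bursts, long dark periods) can be illustrated with a toy two-state Monte Carlo. The per-cycle probabilities below are invented purely to show the effect, not measured photophysical rates:

```python
import random

# Toy Monte Carlo of STORM-style on/off statistics: an excited molecule
# occasionally crosses into the long-lived triplet (dark) state, and a
# buffer/UV-mediated step slowly brings it back. Both probabilities are
# invented to show the separation of timescales, nothing more.

random.seed(1)

P_SHELVE = 0.001     # chance per fluorescence cycle of slipping to triplet
P_RECOVER = 0.00001  # chance per cycle of buffer/UV recovery from triplet

def simulate(cycles):
    state, on_time, off_time = "on", 0, 0
    for _ in range(cycles):
        if state == "on":
            on_time += 1
            if random.random() < P_SHELVE:
                state = "off"          # intersystem crossing to triplet
        else:
            off_time += 1
            if random.random() < P_RECOVER:
                state = "on"           # chemically mediated return
    return on_time, off_time

on_time, off_time = simulate(1_000_000)
# The molecule spends far longer dark than fluorescing, which is why
# only a sparse subset of molecules is visible in any one camera frame.
print(off_time > on_time)   # True
```

Changing `P_RECOVER` (the buffer/UV knob) or `P_SHELVE` (the excitation-intensity knob) changes the on/off duty cycle, which mirrors the fine-tuning of blinking statistics described above.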

Okay. The final way of generating blinking is PAINT, specifically DNA-PAINT. So in DNA-PAINT, this is our off-state: instead of labeling the target molecule that we want to image with a primary antibody that's got a fluorescent dye conjugated to it, or fusing it to a fluorescent protein, we do antibody labeling, but instead of a fluorophore we attach a short strand of DNA called the docking strand, okay? It's a single strand of DNA. That's how we label our sample, except, as you can see, there's no fluorophore there yet. You then also put imager DNA strands into your sample: these carry the complementary sequence to our docking strand, with a fluorophore attached. And the idea is that you can design these DNA strands in such a way that you get random binding and unbinding, really exploiting the melting of DNA to bring your fluorescent molecule closer to and then further away from your target.

And this happens randomly, right? Every so often an imager strand will diffuse by and bind to your docking strand, and that's your on-state, emitting fluorescence. After a while, because of the sequence and base content of your DNA strands, it'll melt off and diffuse away, and you get this on-off, on-off behavior as imager strands in your imaging solution transiently bind and unbind your target of interest. And the things that mediate your blinking probability, your blinking rate, in this case are the sequence of the DNA strands and the concentration of imager strands: the more imager strands you put in, the more likely it is that one will come close enough to your docking strand to bind and make a blink, basically.
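This concentration dependence has a simple quantitative form: the mean dark time between binding events at a docking site is roughly 1/(k_on × c). A back-of-envelope sketch, where the k_on value is only a typical order of magnitude for short oligonucleotides, used purely to illustrate the scaling:

```python
# Back-of-envelope DNA-PAINT kinetics: the mean wait between blinks at
# a docking site is 1 / (k_on * c), so doubling the imager concentration
# halves the dark time. K_ON is an illustrative order-of-magnitude value.

K_ON = 1e6          # association rate constant, per molar per second

def mean_dark_time(imager_conc_molar):
    """Average seconds a docking site waits between binding events."""
    return 1.0 / (K_ON * imager_conc_molar)

# At 1 nM imager strands, a docking site waits on average:
print(round(mean_dark_time(1e-9)))    # 1000 (seconds)
# At 10 nM, ten times less:
print(round(mean_dark_time(1e-8)))    # 100 (seconds)
```

This is also why DNA-PAINT acquisitions can be slow: keeping the blinking sparse means keeping the imager concentration, and hence the blink rate per site, low.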

But basically, the analysis doesn't care how the blinking is generated, okay? The computational process that looks for single molecules is pretty agnostic to how you generated them. The key thing is that no matter how you actually get your blinking, at the end of the day it always looks pretty similar: little short flashes of light in your sample. So regardless of how you label your sample, it's normally analyzed in exactly the same way. The only thing that makes a real difference is how dense your blinking is. If you have lots of molecules overlapping, so more towards a denser regime, things can get difficult: if your algorithm can't accurately tell whether it's looking at one molecule or two molecules overlapping, you do start to get problems. Ideally, you'll have nice, sparse blinking where it's really easy to see individual fluorescent molecules blinking, however you generate that.

So, the key features of single molecule localization microscopy: it's a widefield technique, and the optics are pretty simple. For SIM, we needed to put in some kind of grating or pattern generator to make the high-frequency pattern via interference; for STED, we needed to put in a second laser beam with optics to make the donut. In single molecule localization microscopy, you just need a widefield microscope with high-intensity lasers, so we can do STORM and PALM and make sure we're mediating those transitions correctly. And the spatial resolution of single molecule localization microscopy is also technically unlimited: the only thing that limits your resolution is how good your fit to a single molecule is. If you had an infinitely bright blink that you could fit really, really accurately, you'd have infinitely high resolution.
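The "brighter blink, better fit" point can be made quantitative with the standard scaling for localisation precision: roughly the PSF width divided by the square root of the number of photons collected (this ignores background and pixelation). The PSF width used here is an illustrative value for a high-NA objective:

```python
import math

# Standard scaling for localisation precision in SMLM: precision is
# approximately the PSF width over sqrt(photons). Background noise and
# pixelation, which worsen this in practice, are ignored here.

def precision_nm(psf_sigma_nm, photons):
    """Approximate localisation precision in nm (idealised, no background)."""
    return psf_sigma_nm / math.sqrt(photons)

# A dim blink of 100 photons versus a bright blink of 10,000 photons:
print(round(precision_nm(130, 100), 1))     # 13.0 (nm)
print(round(precision_nm(130, 10_000), 1))  # 1.3 (nm)
```

So a hundredfold increase in collected photons buys a tenfold improvement in precision, which is why bright, photostable dyes like Alexa Fluor 647 are the workhorses of STORM.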

So what are some applications of single molecule localization microscopy? This is one of the coolest things that has been found using it: the cytoskeletal substructure of neurons, and again, this can be done in three dimensions. I won't talk about how you image in three dimensions in single molecule localization microscopy, but it can be done, and I'll provide some reviews if you want to learn more about that. But this is three-dimensional single molecule localization microscopy — STORM, in this case.

So what was seen was that if you did STORM on various cytoskeletal components such as actin, spectrin, and adducin, you see this really beautiful striped pattern along axons, in this very periodic manner, and looking at different proteins, you see they actually intercalate with this very regular, repeatable spacing in the pattern that forms along axons. And from this, they could actually make a model showing how the cytoskeleton is organized within the axon. Each of these actin rings is separated by about 190 nanometers, so they couldn't be seen with conventional microscopy, and this is thought to serve a really important role in compartmentalizing things like sodium channels to specific regions of axons. So this is really cool. This is one of the most beautiful super resolution studies, in my opinion. It's a real structure discovery.

More recently, this is an example of STORM being used for structure discovery in virology. This is Vaccinia virus. It's a very large virus — about 300 nanometers in size — but in the context of the diffraction limit, still problematic, right? And what was seen here was that when you label different membrane proteins in Vaccinia with Alexa Fluor 647 or other STORM dyes and image them, you can actually see that there's some kind of organization of different proteins within the virus membrane itself. For example, these proteins here polarize to the tips of the membrane, and this plays quite an important role in its ability to fuse to the host cell. So that's, kind of, mad — structure discovery on a virus membrane has been one of the uses of STORM.

You can do live-cell single molecule localization microscopy, but it's not easy. Here's an example of focal adhesions imaged with live-cell PALM. If you're doing live-cell single molecule localization microscopy, you're almost always going to be using PALM, because it uses fluorescent proteins — unless you're doing something on the cell surface and you can use some kind of surface receptor antibodies, etc. But here you can actually see these focal adhesions as the cell is migrating, using live-cell PALM.

So what are the limitations of single molecule localization microscopy? Well, the big one is that you really need to get your sample preparation right. You need very specialized fluorescent labels, and you need to make sure you're using the right buffer. If you use the wrong label, it just won't work at all, and some fluorescent proteins are really difficult to get blinking. Also, the buffers and the laser powers that you need to mediate the switching events are often really not live-cell friendly at all. The buffer is often a redox buffer — controlling oxidation states — and cells don't really like them, and the laser powers can again get very intense very quickly. You also need a long acquisition time, right?

So if you've got very few fluorescent molecules emitting their fluorescence per frame, you need a really large number of frames, right? You want to make sure you capture each molecule in the sample at least once. If you don't take enough frames, then you're going to get incomplete structures — you're not going to actually capture the blinking from every single molecule — so you need really long acquisition times. That's bad for live-cell imaging, of course, if you're already using quite high laser intensities, but it's also a bit problematic for dynamic structures. Say you've got a sample that's moving — then your fluorophore location has moved before you can actually capture it, for example.

There's an example of this here. This is from the same data set as in the last slide, and it's showing that how much time you allocate to a single PALM image can actually change what it looks like. This is what the focal adhesions looked like from 30 seconds' worth of imaging, this is what they looked like from 180 seconds' worth, and this from 270 seconds' worth — so this isn't a time series; each is a single super resolution frame. They're overlaid here in three different colors — cyan, yellow, and magenta — and you can see that your structure is going to be different depending on how many frames you acquire to make your single super resolution frame, okay?

And finally, you can get quite bad artifacts when you're trying to localize the molecules. As I mentioned briefly, if your blinking is quite dense — if it's not very sparse and you occasionally get overlapping fluorophores — that can make the reconstruction go a bit haywire. The algorithms often don't really know what to do with those overlapping molecules. Imagine you've got one molecule here and one molecule here: it's very easy to localize those as two separate objects. If they overlap, then the algorithm often just thinks it's one big fat molecule, and so you actually lose information. There's an example of this here.

So this is a good single molecule reconstruction of some microtubules. You can see nice, clear individual filaments. On the right is the same data set, but analyzed differently — analyzed in a way for which overlapping molecules are a real problem — and you can see you get this blurring and merging of structures, incomplete structures, etc. So you can get some really nasty artifacts in single molecule localization microscopy.

In terms of manufacturers, most of the major ones offer it. Nikon definitely does single molecule localization microscopes, Zeiss do, and there's a company called Oxford Nanoimaging which makes a lot of tabletop single molecule localization microscopes. Again, these systems are pretty easy to find because you don't need any additional optics — you just need powerful lasers. So yeah, that's single molecule localization microscopy.

So hopefully, after that, you're now comfortable with why we can accurately determine the center of a single emitting fluorophore — they always look the same each time, so you can accurately work out where the center is — how you can generate blinking in different ways, and the various advantages and disadvantages of single molecule localization microscopy. This is the technique I use the most, so there's a really nice review of it here, and there are also some slides and demonstrations of the analysis that I've done for other talks that might be useful if you want to look at this further.

So those are the main super resolution techniques. Hopefully one thing that you'll realize from these two talks is that there's no one best technique, right? They all have advantages and disadvantages. For example, single molecule localization microscopy reliably has the highest resolution, but it's probably the most likely to kill your sample. SIM, on the other hand, doesn't have great spatial resolution, but it's actually pretty straightforward to do. So it's really: what do you want out of your sample? What kind of information are you trying to see? And a lot of the time, diffraction-limited imaging is a very good way to go.

Now, I've come into these talks saying super resolution is great and that it fills this gap in resolution between fluorescence and electron microscopy, but all of these methods do have quite strong drawbacks. So if you're struggling to get microscopy working in a diffraction-limited regime, don't go to super resolution, okay? Super resolution should only really be done once you've optimized and pushed your diffraction-limited imaging as far as you can take it. Never go into an imaging problem with straight-up super resolution, because you'll cause yourself more problems than you'll solve.

There are two really nice resources here for comparing different super resolution methods and when you might use them. There's this review in "Nature Cell Biology," and there's this really nice poster which actually plots out the advantages and disadvantages and shows you example datasets.

Okay. So what's going on in super resolution microscopy at the moment? SIM, STED, and single molecule localization microscopy are all pretty established. They've all been used for a long time, and everyone's very familiar with them. So let's think back to the beginning of the last talk: why aren't we just doing electron microscopy, right? That's got really, really high resolution. And the answer is that super resolution should work with live cells — otherwise we might as well do electron microscopy. So what I'm going to show you here is a fluorescence and transmitted-light image of some cells that are expressing tubulin-GFP. You've got two interphase cells that are nicely spread and two mitotic cells which are rounded up, and I've got the intensity of the laser illumination up in the corner here.

Now, what I'm going to do in this movie is just increase the intensity of our laser illumination and see what happens. So, increasing the intensity — and bad things start happening quite soon. You can see really bad blebbing on these mitotic cells, you can see shrinkage of these interphase cells, and as we increase that intensity, things get very, very bad for our live cells. And the intensity where this starts happening — where bad things start happening to our cells — is lower than you might think. Even though techniques such as SIM in particular are touted as gentle and very live-cell friendly, SIM actually sits quite substantially into the regime where you start seeing these bad phototoxicity artifacts. So once again, here is about where you use SIM, and that's just when your cells are really starting to show signs of utter distress. And this last frame here would be a typical STORM imaging intensity, and you can see you just don't want to be doing that with live cells, okay?

And, you know, the reason these cells aren't happy is phototoxicity, and there are various reasons why light can be phototoxic to cells. If you're using UV illumination — remember, in single molecule localization microscopy we needed that for facilitating some of the blinking transitions — you can actually induce an apoptotic UV response, and you can directly damage DNA strands. But the most common cause of phototoxicity in super resolution microscopy in general is that you generate reactive oxygen species, and that's just a byproduct of exciting lots of fluorescence from the thing you're trying to image and from other fluorophores and molecules in your sample.

Whenever you're using high-intensity illumination, you're generating reactive oxygen species, which then gather around the cell and oxidize everything and make your cell very unhappy. Okay, so oxidizing the cell is a problem, right? We want to be doing live-cell super resolution microscopy because, you know, if we didn't care about resolution, we'd use normal fluorescence microscopy, and if we didn't care about the cell being alive, we'd use electron microscopy.

So there are various strategies that we can use to tackle phototoxicity in super resolution microscopy. Obviously, you can try to deliver less light to the sample. Here's an example of three imaging regimes: widefield, where we illuminate everything; confocal, where you illuminate everything within a thin beam of light; and two-photon, where you only excite molecules at the focal plane. You can use inhomogeneous illumination to try to reduce the light dose to your sample — for example, multi-focal illumination, or delivering your light in temporal chunks. We can also do things like spatially adaptive illumination, where depending on the fluorescence in your sample, you make your light more intense or less intense, only delivering light to excite molecules where it's needed, for example.

And you can also use light sheet illumination regimes, where you only illuminate the molecules that you're imaging. With Gaussian light sheets, for example, if you illuminate parallel to your imaging plane, you get a nice flat plane of light that you can just collect straight from, or with lattice light sheet microscopy you can illuminate a really skinny plane of light, so you don't even get the problems of illuminating and exciting molecules above and below the focal plane.

So we can think about the spatially adaptive techniques, and this is something that's been done a lot in STED microscopy — Oxford Instruments, for example, now have commercial systems that can do this. You have things like RESCue STED, where you scan with your confocal spot, your excitation laser, and you only turn on your STED beam when there's fluorescence. You don't want to be delivering your STED beam into places in your sample where there aren't fluorescent molecules; you only want the STED beam on, at full intensity, where you actually have molecules within your sample. Okay, so this is cool: the excitation laser is scanning through — oh, there's fluorescence, turn STED on; scanning, oh, no fluorescence, turn the STED off. So that reduces the amount of STED light that goes into a sample.

And a similar kind of technique is DyMin. This actually modulates the power of the STED beam — the intensity of the STED beam — until you reach some kind of resolution threshold. Remember, say we're scanning along and we're like, "Okay, we found a molecule, but we want to increase the resolution by this much. We've done it. Let's move on." So you increase the power of the STED beam until you've got just enough, because resolution scales with STED beam intensity.
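That intensity-resolution trade-off is usually written with the standard STED square-root scaling law, d ≈ λ / (2·NA·√(1 + I/I_sat)). Here's a small sketch of the "just enough depletion power" idea behind DyMin; the wavelength and NA values used in the comments and tests are illustrative, not from any particular instrument:

```python
import math

def sted_resolution(wavelength_nm, na, i_over_isat):
    """Approximate STED resolution via the standard square-root
    scaling law: the diffraction limit lambda/(2*NA) shrinks as the
    depletion beam intensity I rises relative to the fluorophore's
    saturation intensity I_sat."""
    return wavelength_nm / (2 * na * math.sqrt(1 + i_over_isat))

def min_depletion_for_target(wavelength_nm, na, target_nm):
    """DyMin-style 'just enough' logic: the smallest I/I_sat that
    reaches a target resolution, obtained by inverting the scaling
    law above (zero if the target is already diffraction-limited)."""
    d0 = wavelength_nm / (2 * na)  # diffraction limit, no STED beam
    return max((d0 / target_nm) ** 2 - 1, 0.0)
```

For example, with λ = 640 nm and NA = 1.4 (hypothetical values), reaching 50 nm needs roughly I/I_sat ≈ 20, while 200 nm needs almost none — which is why ramping the depletion beam only as far as each spot requires saves so much light dose.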

Lattice light sheet microscopy is very cool. This is what a lattice light sheet microscope looks like — it doesn't look like a microscope — and it allows you to illuminate a sample with very thin sheets of light, shown here, and then detect perpendicular to them. And this can basically be combined with other widefield techniques, so you can treat each of these sheets of light as a potential imaging plane for SIM or single molecule localization microscopy. So that's pretty cool: less light delivered — not going through the whole sample — but still doing super resolution within those thin sheets. This is an example of lattice light sheet SIM, for actin in a living HeLa cell. You can see very, very high temporal resolution and very nice spatial resolution as well.

And here on the right, we see an example of using lattice light sheet PALM. In this case, it's Dendra2 — that photoconvertible fluorescent protein we talked about earlier, the one that goes from green to red — conjugated to Lamin A on the nuclear lamina, and you can see it building up this really nice high-resolution image of the nuclear lamina. And again, you can do it in three dimensions as you take that light sheet through your sample, and you get this absolutely incredible, beautiful three-dimensional resolution. I'll just say, I watched this for a while because I think it's absolutely gorgeous. And again, here we're coming back through the nucleus in various slices, compared to the diffraction-limited version on the right. Okay, so lattice light sheet microscopy is pretty cool. It delivers less light to your sample because you don't illuminate big fat chunks of sample, just skinny little sheets, and you can do super resolution imaging within a single sheet of that sample.

So SIM has a lot of other hardware developments as well, and this is just one example, because I think the data is just so beautiful. This is a technique called grazing incidence SIM. With normal SIM, you'd put your excitation light through the whole sample, so you'd still get out-of-focus fluorescence and photodamage throughout the column of light as your beam goes through your sample. Grazing incidence SIM allows you to restrict your SIM imaging to a thin plane around the focal plane. And you can have all of this with really beautiful, very fast, double-resolution, multi-color SIM imaging, which is absolutely beautiful for this modality. And again, really lovely cell dynamics, helped by the fact that you're not damaging your sample as much, because you're not putting light into the whole thickness of the sample — you're just restricting it to a little strip.

One thing that's quite interesting is the advent of pixel reassignment methods — things like Airyscan and iSIM. People will often say that pixel reassignment methods aren't super resolution methods because you don't really get a huge increase in resolution: you get maybe a 1.4-fold increase, compared to SIM's 2-fold increase and the much higher resolutions of STED and single molecule localization microscopy. However, these are really popular at the moment because they're not as phototoxic and not as photo-damaging. They're also really easy to use, so if you're thinking about what super resolution technique to use for your imaging, these are often a very good place to start, because they can show you what a small amount of resolution increase can do for your imaging. That might be enough, and it does so without complicating your life to the extent that some of the other techniques do. So here's an example — I think this is iSIM — and you can see you get really nice dynamics in this image. There's a really lovely review here of current and future developments for SIM in particular, which includes a lot of pixel reassignment methods. I'll play the movie again.

So finally, another way of tackling phototoxicity in super resolution microscopy. I've shown you all these fancy things — adaptive illumination, light sheet illumination — for reducing the dose of light into a sample. But what if we just turn down the laser intensity? What's the worst that can happen? So this is an example for Alexa Fluor 647. This is some real blinking data where the incident illumination intensity was about 2 kilowatts per square centimeter. For comparison, as a benchmark, the illumination of the sun at the Earth's surface is about 0.1 watts per square centimeter, so this is a high-intensity situation. What would this data set look like — same sample, same imaging — if I'd just turned down the laser intensity?

This is the super resolution rendering of that data set. And at lower intensity? Well, it would look like this. This is at 40 milliwatts per square centimeter, and as you'd expect, it kind of looks like regular fluorescence imaging — you can't really see that blinking at all. And if I tried to do something quite silly and just put this data set through my single molecule localization algorithm, trying to find individual molecules, then of course we get a terrible image. It can't find anything that looks like single molecules in that data set. So one thing you can do is use a family of computational techniques that actually try to get computational super resolution from low-intensity data.

One of these was developed in my current lab, Ricardo Henriques' lab. It's called SRRF — super-resolution radial fluctuations — and it's a method of trying to find the fluorescent molecules without making this binary decision of "is this a molecule, yes or no?" Instead it uses probabilities to say where it is most likely that there are molecules in the image. And for this image here, you can see we actually get back a nice high-resolution image without all the missing information of a standard single molecule localization reconstruction.

One of the nice things about SRRF, for example, is that it can be used with any fluorophore on any microscope. This is an example of utrophin-GFP imaged on a confocal microscope. SRRF normally uses 10 to 100 frames per single SRRF image, so it analyzes the fluorescence in a series of images and compresses it into one, in the same way that single molecule localization algorithms do, but without trying to find single molecules. And you can see this is live-cell data — the cells are pretty happy in there, and we've got nice high temporal resolution. The more fluctuations, the more intensity variations, the more blinking you have, the higher the resolution you'll get with something like SRRF. But as a starting point, this is quite a good place to start: you're not going to get the highest resolution, but you're going to start seeing where you could potentially go, without needing to buy a new microscope or do some incredibly fancy sample preparation.
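To give a flavour of fluctuation-based analysis: this is not the actual SRRF algorithm (which combines temporal statistics with radial-symmetry "radiality" maps), just a minimal SOFI-style sketch of the temporal half of the idea — pixels covering independently fluctuating emitters stand out when a frame series is collapsed with per-pixel temporal statistics rather than a simple average.

```python
def temporal_variance_image(stack):
    """Toy fluctuation analysis: collapse a time series of frames into
    one image via per-pixel temporal variance. Pixels over a blinking
    emitter fluctuate strongly; steady diffuse background does not,
    so contrast improves without ever detecting single molecules.

    `stack` is a list of frames; each frame is a list of rows of
    pixel values (plain nested lists, to stay dependency-free).
    """
    n = len(stack)
    h, w = len(stack[0]), len(stack[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [frame[y][x] for frame in stack]
            mean = sum(vals) / n
            out[y][x] = sum((v - mean) ** 2 for v in vals) / n
    return out
```

A flickering pixel (say alternating 0 and 10 counts) gets a large variance, while a constant-background pixel collapses to zero — the same intuition that lets SRRF pull structure out of low-intensity data.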

And to finish off with the software developments at the cutting edge of super resolution, one thing that's coming to the fore is deep learning-based methods. These use neural networks and powerful artificial-intelligence-style computing to infer super resolution from maybe substandard data sets. So here's an example of a neural network-based technique that reconstructs SIM data. This is data that was reconstructed with a standard SIM reconstruction, and the same data that went through this scU-Net reconstruction. So it looks pretty good — it looks like you've got much more resolution there.

And there are others — again, this is a really, really fast-growing field. Another example is ANNA-PALM, which learns how to make good super resolution reconstructions from far fewer frames than you'd normally put into a single molecule reconstruction. So here's an example of a widefield image. If you used 300 frames to make your PALM image, it would look like this; if you used 30,000 frames, it would look like this. And the cool thing with ANNA-PALM is that it learns how to make an image that looks like it was acquired with 30,000 frames from data that's only 300 frames. So again, that's pretty cool. A word of warning with anything deep learning, though: you're never really sure what it's doing, okay? So you could get artifacts and not be aware of them. Artifacts in super resolution microscopy in general are present, but you normally know why they're there. With deep learning, you're never really sure whether they're there or where they came from, so that's just a word of warning.

Okay. So hopefully by the end of that, you're now comfortable with how light can cause damage to living cells, how we can use hardware to minimize phototoxicity in super resolution imaging by changing the way we deliver light to the sample, and also how we can use software to deal with lower-quality data and try to get more resolution out of it. And we did a review very recently about phototoxicity in super resolution microscopy, which, again, goes into much more detail on all of this.

Just to wrap up, this is something that you should always bear in mind if you're doing basically any microscopy that isn't just taking an image — anything involving processing, or any kind of super resolution. How do you assess your super resolution data and what it's actually telling you? For example, what does the intensity mean in the super resolution image? What's the resolution of your image, and how do you measure it? And how can you tell if you have artifacts in your image? So be very, very careful about inferring intensity-based information — like protein counts, or any kind of absolute intensity quantification — from super resolution images.

In SIM, by contrast, intensity should be representative of the true number of molecules, but single molecule localization microscopy intensities can be really misleading. For example, let's say we've got two molecules that we're imaging, and just by pure chance, our left molecule blinks three times more than our right molecule. If you don't analyze that data correctly, and if you don't appreciate what an intensity in a single molecule localization microscopy image means, then you might end up with an image where you say, "Oh, there's more protein, or there's more fluorophore, on the left than on the right," when in reality that molecule just blinked more than the one on the right. So that's something you need to bear in mind — be really critical when you're looking at super resolution data. Think: what is the author saying this tells us, and is that actually what the information is telling us?
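That blinking bias can be made concrete with a toy rendering. The coordinates and blink counts below are invented for illustration; real localization pipelines also merge repeated localizations within a small radius and across gaps in time, which the exact-coordinate idealization here skips:

```python
from collections import Counter

def naive_render(localizations):
    """Naive SMLM rendering: every localization adds one count at its
    (binned) position, so the rendered 'intensity' reports blink
    counts, not molecule counts -- a fluorophore that happens to
    blink often simply looks brighter."""
    return Counter(localizations)

def molecule_positions(localizations):
    """Crude molecule map: count each distinct position once, which
    removes the blink-count bias (under the idealization that all
    blinks of one molecule land on identical coordinates)."""
    return set(localizations)

# Hypothetical two-molecule sample: the left molecule blinks six
# times, the right one only twice -- same number of molecules.
locs = [(0, 0)] * 6 + [(10, 0)] * 2
```

`naive_render(locs)` makes the left spot look three times brighter even though both spots are one molecule, while `molecule_positions(locs)` correctly reports two molecules.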

And again, the big goal of super resolution microscopy is to push the resolution as high as possible, but just be wary that high resolution doesn't necessarily equal high quality. So here's a pair of conventional and super resolution images of the same structure. I made some software a couple of years ago — I'm still developing a version two — called SQUIRREL. It's a method for quantifying errors in super resolution microscopy, and it will give you an error map, in this case saying where the super resolution image is performing well in comparison to a diffraction-limited image. You can quantify resolution in various different ways. One popular way, and this is inside SQUIRREL as well, is Fourier ring correlation, but there are also newer techniques such as image decorrelation analysis.
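Fourier ring correlation is simple enough to sketch from scratch. This is a minimal, pure-Python illustration (a real implementation would use numpy's FFT, and the resolution-threshold convention — e.g. the 1/7 crossing — varies): build two independent reconstructions from split halves of the data, Fourier transform both, and correlate them ring by ring of spatial frequency; the frequency where correlation falls off marks the resolution.

```python
import cmath

def dft2(img):
    """Naive 2D DFT of a square image (nested lists); numpy.fft.fft2
    would replace this in any real analysis."""
    n = len(img)
    out = [[0j] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0j
            for y in range(n):
                for x in range(n):
                    s += img[y][x] * cmath.exp(-2j * cmath.pi * (u * y + v * x) / n)
            out[u][v] = s
    return out

def frc(img1, img2):
    """Fourier ring correlation between two independent reconstructions
    of the same structure: normalized cross-correlation of their
    Fourier transforms, accumulated per ring of spatial frequency.
    Returns {ring_radius: correlation}; values near 1 mean the two
    half-data-set images agree at that frequency."""
    n = len(img1)
    f1, f2 = dft2(img1), dft2(img2)
    rings = {}
    for u in range(n):
        for v in range(n):
            du = u if u <= n // 2 else u - n  # centre the frequencies
            dv = v if v <= n // 2 else v - n
            r = round((du * du + dv * dv) ** 0.5)
            acc = rings.setdefault(r, [0j, 0.0, 0.0])
            acc[0] += f1[u][v] * f2[u][v].conjugate()
            acc[1] += abs(f1[u][v]) ** 2
            acc[2] += abs(f2[u][v]) ** 2
    return {r: (num / (d1 * d2) ** 0.5).real if d1 > 0 and d2 > 0 else 0.0
            for r, (num, d1, d2) in rings.items()}
```

Two identical images give a correlation of 1 in every ring; for genuinely independent half-data-set reconstructions, the curve drops at high frequencies, and the crossing of a chosen threshold is quoted as the image resolution.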

But what you can also do in SQUIRREL, for example, is measure the resolution in different parts of the image, which gives you a map that looks like this. So SQUIRREL is a software tool that can be used to assess the resolution and quality of your images — a useful thing, I hope, and a shameless plug for my own research. But be really careful about mentally equating high resolution with high quality. In some cases, higher-resolution imaging is high-quality imaging: if we have a look in this yellow ring, we've got really nice high-resolution imaging, that's the widefield equivalent, and the error map isn't really saying that there's anything particularly wrong. But if we look at this gray ring here, that's quite a good resolution — 66 nanometers, pretty similar to the resolution in the other part of the image — but actually the intensities have gone pretty mad compared to our widefield image, and it's showing up in the error map.

So again, this is more a kind of lesson in being critical of data. If you're looking at a super resolution paper and they're saying, "Oh, we got 30 nanometers resolution," how did they measure that resolution? What's that resolution telling you? And most importantly, high resolution doesn't mean high quality. It could be, but that's not necessarily the same thing. Okay, then, on that cautionary note: thank you very much for listening to these seminars. I hope they were interesting and informative, and if not, I hope that at least there are some references that you can go to and find more information about the various things we've talked about. Good luck on your microscopy journeys.
