Over the last half-decade, super-resolution microscopy has matured and moved from specialist optics labs into the hands of biologists. Commercial microscope solutions exist for the three main approaches to achieving optical super-resolution: single molecule localization microscopy (SMLM), stimulated emission depletion microscopy (STED), and structured illumination microscopy (SIM)1,2. SMLM techniques such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) have been the most popular, largely due to the simplicity of the optical setup and the promise of high spatial resolution, readily down to 20 nm. However, super-resolution microscopy via single molecule localization comes with an intrinsic trade-off: the attainable spatial resolution depends on accumulating a sufficient number of individual fluorophore localizations, which limits the temporal resolution. Imaging dynamic processes in live cells therefore becomes problematic, as one must sample the movement of the structure of interest finely enough to prevent motion artifacts while also acquiring enough localization events in that time to reconstruct an image. To meet these requirements, live cell SMLM demonstrations have increased fluorophore photoswitching rates by greatly increasing the excitation power; this in turn causes phototoxicity and oxidative stress, limiting sample survival times and biological relevance3.
A clear advantage of STED over both SIM and SMLM is that it can image with super-resolution in thick samples: for example, a lateral resolution of around 60 nm has been achieved in organotypic brain slices at depths of up to 120 µm4. Imaging at such depths with single objective implementations of SMLM or SIM is unfeasible, but becomes possible with either single-molecule light sheet or lattice light sheet microscopy5. Video-rate STED has also been demonstrated and used to map synaptic vesicle mobility, although so far this has been limited to small fields of view6.
For applications in cell biology and molecular self-assembly reactions7-12 that require imaging with high temporal resolution over many time points, structured illumination microscopy (SIM) is well-suited, as it does not depend on the photophysical properties of a particular fluorescent probe. Despite this inherent advantage, the use of SIM has until now been mainly confined to imaging fixed cells or slow-moving processes. This is due to the limitations of commercially available SIM systems: their acquisition frame rate is limited by the rotation speed of the gratings used to generate the required sinusoidal illumination patterns, as well as by the polarization-maintaining optics. The newest generation of commercial SIM instruments is capable of fast imaging, but these systems are prohibitively expensive for all but central imaging facilities.
This protocol presents a guide to the construction of a flexible SIM system for imaging fast processes in thin samples and near the basal surface of living cells. It employs total internal reflection fluorescence (TIRF) to generate an illumination pattern which penetrates no deeper than approximately 150 nm into the sample13, which vastly reduces the out-of-focus background signal. The idea of combining SIM with TIRF is almost as old as SIM itself14, but was not realized experimentally before 200615. The first in vivo images obtained with TIRF-SIM were reported in 200916, achieving frame rates of 11 Hz to visualize tubulin and kinesin dynamics, and two-color TIRF-SIM systems have since been presented17,18. Most recently, a guide for the construction and use of a single-color two-beam SIM system was published, featuring frame rates of up to 18 Hz19,20.
The set-up presented here is capable of SIM super-resolution imaging at 20 Hz in three colors, two of which can be operated in TIRF-SIM. The whole system is constructed around an inverted microscope frame and uses a motorized xy translation stage with a piezo-actuated z stage. To generate the sinusoidal excitation patterns required for TIRF-SIM, the system presented uses a ferroelectric spatial light modulator (SLM). Binary grating patterns are displayed on the SLM and the resulting ±1 diffraction orders are filtered, relayed and focused into the TIR ring of the objective lens. The necessary phase shifts and rotations of the gratings are applied by changing the displayed SLM image. This protocol describes how to build and align such an excitation path, details the alignment of the emission path, and presents test samples for ensuring optimal alignment. It also describes the issues and challenges particular to high speed TIRF-SIM regarding polarization control and synchronization of components.
Design Considerations and Constraints
Before assembling the TIRF-SIM system presented in this protocol, there are several design constraints to consider which determine the choice of optical components. All abbreviations of optical components refer to Figure 1.
Spatial Light Modulator (SLM)
A binary ferroelectric SLM is used in this setup as it is capable of sub-millisecond pattern switching. Grayscale nematic SLMs may be used instead, but these have considerably slower switching times. Each on or off pixel in a binary phase SLM imparts either a π or 0 phase offset to the incident plane wavefront; therefore, if a periodic grating pattern is displayed on the SLM, it operates as a phase diffraction grating.
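As an illustration of this principle, the following sketch computes the far-field diffraction pattern of an idealized binary 0/π phase grating (a hypothetical 8-pixel period with 50% duty cycle, not the actual SLM patterns used in the protocol). It shows that the zeroth order is cancelled and the power is concentrated in the ±1 orders, which is why those two orders can be filtered out and relayed to the objective:

```python
import numpy as np

# Idealized 1D binary phase grating: period of 8 pixels, half "on" (pi phase)
# and half "off" (0 phase). The far field is the Fourier transform of the
# complex transmission exp(i * phase).
period, n_pixels = 8, 1024
pixel_idx = np.arange(n_pixels)
phase = np.pi * ((pixel_idx % period) < period // 2)
far_field = np.fft.fftshift(np.fft.fft(np.exp(1j * phase))) / n_pixels
intensity = np.abs(far_field) ** 2

center = n_pixels // 2
order_spacing = n_pixels // period  # frequency bins between diffraction orders
zeroth = intensity[center]
first = intensity[center + order_spacing]
# A 0/pi grating with 50% duty cycle cancels the 0th order; each +-1 order
# carries ~43% of the power here (4/pi^2 ~ 40.5% in the continuous limit).
print(f"0th order: {zeroth:.2e}, +-1 orders: {first:.3f} each")
```

The suppression of the zeroth order is a direct consequence of the π phase contrast with equal on/off areas; real SLMs leave some residual zeroth order, which is why a physical beam block is still needed in the Fourier plane.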
Total internal reflection (TIR)
To achieve TIR and produce an evanescent field, the incident angle of the excitation beams at the glass-sample interface must be greater than the critical angle $\theta_c = \arcsin(n_2/n_1)$, where $n_1$ and $n_2$ are the refractive indices of the glass and the sample medium. This sets the minimum incident angle required, and hence also the maximum spacing, or period, of the evanescent illumination pattern. The maximum incident angle $\theta_{max}$ (the acceptance angle) is limited by the numerical aperture (NA) of the objective lens, which can be calculated from the definition $\mathrm{NA} = n_1 \sin\theta_{max}$. This determines the minimum pattern spacing achievable according to the Abbe formula $d_{min} = \lambda / (2\,\mathrm{NA})$, which links NA and wavelength $\lambda$ to the minimum pattern spacing $d_{min}$. In practice, a 1.49 NA oil immersion TIRF objective yields a maximum angle of incidence of around 79° and a minimum pattern period on the sample of 164 nm using an excitation wavelength of 488 nm. These two angles define a ring in the back aperture of the objective over which the instrument achieves TIR illumination (i.e. the TIR ring) and in which the two excitation foci must be accurately positioned and precisely rotated to generate each illumination pattern.
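The numbers quoted above can be verified with a few lines of arithmetic. This sketch assumes refractive indices of 1.515 for the immersion oil/coverslip and 1.33 for the aqueous sample medium (typical values, not stated explicitly in the text):

```python
import math

# Assumed refractive indices: glass/immersion oil and water
n_glass, n_water = 1.515, 1.33
NA, wavelength_nm = 1.49, 488.0

# Critical angle for TIR at the glass-water interface
theta_c = math.degrees(math.asin(n_water / n_glass))

# Maximum (acceptance) angle set by the objective NA: NA = n * sin(theta_max)
theta_max = math.degrees(math.asin(NA / n_glass))

# Minimum pattern spacing from the Abbe formula: d_min = lambda / (2 * NA)
d_min = wavelength_nm / (2 * NA)

print(f"critical angle ~ {theta_c:.1f} deg")     # ~61 deg
print(f"acceptance angle ~ {theta_max:.1f} deg") # ~79.6 deg, "around 79" in the text
print(f"min pattern period ~ {d_min:.1f} nm")    # ~164 nm
```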
Reconstruction of TIRF-SIM images requires the acquisition of a minimum of three phase shifts per pattern rotation, therefore the SLM pattern period in pixels must be divisible by 3 so that each phase step corresponds to a whole-pixel translation of the displayed grating (see Fig 1): for example, a period of 9 pixels for 488 nm illumination and 12 pixels for 640 nm illumination. For a comprehensive discussion of SLM pattern design, including sub-pixel optimization of pattern spacing using sheared gratings, see the previous work of Kner et al16 and Lu-Walther et al20. The two excitation foci must lie inside the TIR ring for all wavelengths; however, the diffraction angle of the ±1 orders from the SLM is wavelength dependent. For standard SIM, multicolor imaging can be achieved by optimizing the grating period for the longest wavelength and tolerating a loss in performance for the shorter channels. For TIRF-SIM, however, optimizing for one wavelength means that the foci of the other wavelengths no longer fall within the TIR ring. For example, a grating period of 9 pixels is sufficient to provide TIRF for 488 nm, as the foci sit at 95% of the back aperture diameter and within the TIR ring, but for 640 nm the same period would position the foci outside the aperture. For this reason a different pixel pattern spacing must be used for each excitation wavelength.
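The wavelength dependence can be made concrete with a small check of where the ±1 order foci land in the back aperture. This sketch assumes the SLM pixel maps to roughly 38 nm at the sample plane (consistent with the lens combination described below) and n = 1.33 for the sample medium; the function and constant names are illustrative, not from the protocol:

```python
# Assumed sample-plane SLM pixel size (nm), objective NA, and water index
PIXEL_NM, NA, N_WATER = 38.1, 1.49, 1.33

def pupil_fraction(wavelength_nm, period_pixels):
    """Radial position of a +-1 order focus as a fraction of the
    back-aperture radius: r/R = lambda / (grating period * NA)."""
    grating_period_nm = period_pixels * PIXEL_NM
    return wavelength_nm / (grating_period_nm * NA)

def in_tir_ring(frac):
    # TIR ring: between n_water/NA (critical angle) and 1.0 (aperture edge)
    return N_WATER / NA < frac <= 1.0

print(f"488 nm,  9 px: {pupil_fraction(488, 9):.2f}")   # ~0.95, inside the TIR ring
print(f"640 nm,  9 px: {pupil_fraction(640, 9):.2f}")   # ~1.25, outside the aperture
print(f"640 nm, 12 px: {pupil_fraction(640, 12):.2f}")  # ~0.94, inside the TIR ring
```

This reproduces the statement in the text: a 9-pixel period places the 488 nm foci at about 95% of the aperture radius, but would push the 640 nm foci well beyond the aperture edge, hence the 12-pixel period for 640 nm.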
The alignment of the TIRF-SIM excitation path is extremely sensitive to small changes in the position of the dichroic mirror (DM4 in Fig 1) in the microscope body, much more so than in conventional SIM. Use of a rotating filter cube turret is therefore not recommended; instead, use a single multi-band dichroic mirror, kept in a fixed position and designed specifically for the excitation wavelengths used. It is essential that only the highest quality dichroic mirrors are used: these require thick substrates of at least 3 mm and are often designated "imaging flat" by manufacturers. Thinner substrates lead to intolerable aberration and image degradation in TIRF-SIM.
Polarization control
To achieve TIRF-SIM it is essential to rotate the polarization state of the excitation light in synchrony with the illumination pattern such that it remains azimuthally polarized in the objective pupil plane with respect to the optical axis (i.e. s-polarized). Alignment of the polarization control optics will depend on the specific optical element employed, for example a Pockels cell21 or a half-wave plate in a motorized rotation stage22. In this protocol a custom liquid crystal variable retarder (LCVR) is used, designed to provide full-wave (2π) retardance over the wavelength range 488 to 640 nm, as it allows fast (~ms) switching. If using a liquid crystal retarder it is essential to choose a high quality component: standard components are typically not stable enough to maintain a constant retardance over the length of the camera exposure time, which blurs out the illumination pattern and lowers the modulation contrast. Liquid crystal retarders are also strongly temperature dependent and require built-in temperature control.
Synchronization
The lasers must be synchronized with the SLM. Binary ferroelectric SLMs are internally balanced by switching between an on state and an off state. The pixels act as half-wave plates only in their on or off states, not during the interframe switching time. Therefore the lasers should be switched on only during the on/off states, via the LED Enable signal from the SLM, to prevent a reduction in pattern contrast caused by the intermediate state of the pixels. An acousto-optic modulator (AOM) can alternatively be used as a fast shutter if the lasers cannot be digitally modulated.
Choice of lenses
Based on these constraints, the required demagnification of the SLM plane onto the sample plane to produce the desired illumination patterns can be determined. This allows calculation of the focal lengths of the two lenses L3 and L4 in the image relay telescope and the excitation tube lens L5. In this system a 100X/1.49 NA oil immersion objective lens is used with 488 nm and 640 nm excitation; focal lengths of 140 mm and 300 mm are therefore used for L3 and L4, and 300 mm for L5, giving a total demagnification of 357x, equivalent to an SLM pixel size of 38 nm at the sample plane. Using this combination of lenses, SLM grating periods of 9 pixels for 488 nm illumination and 12 pixels for 640 nm give pattern spacings of 172 and 229 nm at the sample, corresponding to angles of incidence of 70° and 67° respectively. For a glass-water interface the critical angle is 61°, independent of wavelength, so both pattern spacings allow TIRF excitation. An objective lens equipped with a correction collar is useful for correcting spherical aberrations introduced by variations in coverslip thickness or by operation at 37 °C.
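The pattern spacings and incidence angles quoted above can be recovered from the 38 nm sample-plane pixel size. In this sketch the 13.62 µm SLM pixel pitch is an assumption used to reproduce that figure (the text only quotes the 357x demagnification and the resulting 38 nm pixel), and n = 1.515 is assumed for the glass/immersion oil:

```python
import math

# 357x demagnification is from the text; the SLM pixel pitch is assumed
slm_pixel_um, demag = 13.62, 357.0
pixel_sample_nm = slm_pixel_um * 1000 / demag  # ~38 nm at the sample plane

n_glass = 1.515  # assumed index of coverslip/immersion oil

def pattern_spacing_nm(period_pixels):
    # Two-beam interference halves the projected grating period
    return period_pixels * pixel_sample_nm / 2

def incidence_angle_deg(wavelength_nm, spacing_nm):
    # Pattern spacing p relates to the beam angle via p = lambda / (2 n sin(theta))
    return math.degrees(math.asin(wavelength_nm / (2 * n_glass * spacing_nm)))

for wl, period in [(488, 9), (640, 12)]:
    p = pattern_spacing_nm(period)
    theta = incidence_angle_deg(wl, p)
    print(f"{wl} nm, {period} px: spacing {p:.0f} nm, incidence {theta:.0f} deg")
# Both angles exceed the ~61 deg critical angle, so both patterns are in TIRF.
```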
Image Reconstruction
Once the raw SIM data have been acquired, generating super-resolved images is a matter of computation, carried out in a two-step process. First, the illumination pattern must be determined for every image; second, the components of the SIM spectrum must be separated and recombined so as to double the effective OTF support (see Figure 6 insets).
Precise knowledge of the projected illumination patterns is paramount, as the super-resolved frequency components have to be unmixed as accurately as possible to prevent artifacts caused by the residual parts of overlapping components. We determine the illumination pattern parameters a posteriori from the raw image data following the procedure introduced by Gustafsson et al.23. In short, a set of illumination parameters that describes a normalized two-dimensional sinusoid has to be found for each of the excitation patterns $I_n(\mathbf{r})$:

$I_n(\mathbf{r}) = 1 + m_n \cos(\mathbf{k}\cdot\mathbf{r} + \phi_n)$

Hereby $m_n$ and $\phi_n$ describe the fringe contrast and the pattern starting phase of each individual image, respectively. The components of the wave vector, $k_x$ and $k_y$, only change with different orientations of the pattern and can be assumed to be otherwise constant. To coarsely determine the components of the wave vector, a cross correlation of raw image spectra is performed, which is then refined by applying subpixel shifts to one of the cross-correlated images so as to optimize the overlap. This is done via multiplication by real-space phase gradients, which induce a subpixel shift in frequency space. Note that it is useful to have a good estimate of the wave vectors prior to the actual pattern estimation; this can be found by imaging a fluorescent bead layer.
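The subpixel-shift mechanism mentioned above can be demonstrated on a toy signal (this is an illustration of the principle, not the authors' implementation): multiplying an image by a real-space phase gradient shifts its Fourier spectrum by a fractional number of bins, allowing a frequency peak to be localized beyond integer-pixel precision.

```python
import numpy as np

# A pure spatial frequency at a non-integer bin, as for a real SIM pattern
N = 256
x = np.arange(N)
k0 = 40.3
image = np.exp(2j * np.pi * k0 * x / N)

# Coarse (integer-bin) peak from a plain FFT: the 0.3-bin remainder is lost
coarse_peak = int(np.argmax(np.abs(np.fft.fft(image))))
print(coarse_peak)  # 40

# Multiplying by a real-space phase gradient shifts the spectrum by -0.3 bins,
# so the peak lands exactly on an integer bin with maximal magnitude
shift = 0.3
shifted = image * np.exp(-2j * np.pi * shift * x / N)
refined = np.abs(np.fft.fft(shifted))
print(int(np.argmax(refined)), refined.max())  # peak magnitude now exactly N
```

In the actual procedure, this trial shift is varied to maximize the overlap of the cross-correlated spectra, yielding the wave vector to subpixel accuracy.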
As the phase step between shifted patterns is exactly $2\pi/3$, i.e. $\phi_n = \phi_0 + 2\pi n/3$ for $n = 0, 1, 2$, the separation of frequency components can be performed by a Fourier transform along the "phase axis". The global phase $\phi_0$ and the fringe contrast $m$ can then be determined using complex linear regression of the different components. The individual separated components are then combined using a generalised Wiener filter. For a detailed description of both parameter extraction and implementation of the generalized Wiener filter we refer the reader to Gustafsson et al.23, where the same algorithm is used.
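The phase-axis Fourier transform can be verified on synthetic data. The following toy example (assumptions: no PSF blur, known pattern parameters; all names are illustrative) builds three raw images with exact 2π/3 phase steps and shows that a DFT along the phase axis cleanly separates the widefield component from the two modulated components:

```python
import numpy as np

N = 128
y, x = np.mgrid[0:N, 0:N]
sample = np.random.default_rng(0).random((N, N))  # stand-in for the object
kx, ky, m, phi0 = 10.0, 4.0, 0.8, 0.3             # assumed pattern parameters

# Three raw images with phases phi0 + 2*pi*n/3, n = 0, 1, 2
phases = phi0 + 2 * np.pi * np.arange(3) / 3
raw = np.stack([
    sample * (1 + m * np.cos(2 * np.pi * (kx * x + ky * y) / N + p))
    for p in phases
])

# DFT along the phase axis: component l = (1/3) sum_n raw[n] exp(-2pi i n l / 3)
components = np.fft.fft(raw, axis=0) / 3

# components[0] is the widefield part; components[1] and [2] carry the shifted
# (super-resolution) information, modulated by exp(+-i(k.r + phi0))
widefield = components[0].real
demod = components[1] * np.exp(-1j * (2 * np.pi * (kx * x + ky * y) / N + phi0))
print(np.allclose(widefield, sample), np.allclose(demod, m / 2 * sample))  # True True
```

With real data the recovered components are additionally filtered by the OTF, which is why the subsequent recombination uses a generalized Wiener filter rather than this direct demodulation.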