This page provides a practical guide for microscope quality control. By following the outlined steps, utilizing the provided template files, and running the included scripts, you will have everything needed to easily generate a comprehensive report on your microscope's performance.

Equipment used

  • Thorlabs Power Meter (PM400) and sensor (S170C)
  • Thorlabs Fluorescent Slides (FSK5)
  • TetraSpeck™ Fluorescent Microspheres Size Kit (mounted on slide) ThermoFisher (T14792)

Software used

Excel Templates and Scripts


Please note that during quality control, you may, and likely will, encounter defects or unexpected behavior. This practical guide is not intended to assist with investigating or resolving these issues. With that said, we wish you the best of luck and are committed to providing support. Feel free to reach out to us at microscopie@cib.umontreal.ca



Illumination Warmup Kinetic

When starting light sources, they require time to reach a stable and steady state. This duration is referred to as the warm-up period. To ensure accurate performance, it is essential to record the warm-up kinetics at least once a year to precisely define this period. For a detailed exploration of illumination stability, refer to the Illumination Power, Stability, and Linearity Protocol by the QuaRep Working Group 01.

Acquisition protocol

  • Place a power meter sensor on the microscope stage.

  • Center the sensor and the objective.

  • Zero the sensor to ensure accurate readings.

  • Select the wavelength of the light source you want to monitor, using your power meter controller or associated software.

  • Turn on the light source and immediately record the power output over time until it stabilizes.

    I personally use the Thorlabs Power Meter Software in Monitoring mode, recording every 10 seconds for up to 24 hours, and stop the recording once the power has been stable for 1 h.

  • Save the recorded data as a CSV file.
  • Repeat for each light source.

    Keep the light source turned on at all times. Depending on your hardware, the light source may remain continuously on or be automatically shut down by the software when not in use.

Results

  • Use the provided spreadsheet template, Illumination_Warmup Kinetic_Template.xlsx
  • Copy and paste the data from the recorded CSV file into the highlighted cells of the template to visualize your results.
  • For each light source, plot the measured power output (in mW) against time to analyze the data.

  • Calculate the relative power using the formula: Relative Power = (Power / Max Power) for each wavelength. Then, plot the Relative Power (%) against Time to visualize the data.

We observe some variability in the power output during the first 20 min. It is especially noticeable for the 385 nm light source.

To assess the stability:

  • Define a Stability Duration Window: This is a time period (e.g., 10 minutes) during which the power output should remain stable.

  • Specify a Maximum Coefficient of Variation (CV) Threshold: This is the acceptable limit for the CV (e.g., 0.01%).

  • Calculate the Coefficient of Variation (CV): CV = (Standard Deviation / Mean)

  • Plot the calculated CV over time to analyze the stability of the power output.
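The rolling-window check described above can be sketched in Python. This is a minimal illustration, assuming `times` (in seconds) and `powers` (in mW) arrays loaded from the recorded CSV; the function name and the defaults (10-minute window, 0.01% CV threshold) are placeholders matching the example values given above.

```python
import numpy as np

def warmup_time(times, powers, window_s=600, max_cv=0.0001):
    """Return the first time (s) at which the coefficient of variation
    (std / mean) over the following `window_s` seconds drops below
    `max_cv` (0.0001 = 0.01%), or None if it never stabilizes."""
    times = np.asarray(times, dtype=float)
    powers = np.asarray(powers, dtype=float)
    for i, t in enumerate(times):
        in_window = (times >= t) & (times < t + window_s)
        p = powers[in_window]
        if len(p) < 2:
            break  # not enough points left for a full window
        if p.std() / p.mean() <= max_cv:
            return t
    return None
```

For a 385 nm source warming up over ~20 minutes, this would return the start of the first 10-minute window whose CV stays under the threshold.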

We observe that most light sources stabilize quickly, within less than 10 minutes, while the 385 nm light source takes approximately 41 minutes to reach the stability threshold. The template also calculates the Stability Factor (S) using the formula: S (%) = 100 × (1 - (Pmax - Pmin) / (Pmax + Pmin))

  • Report the results in a table

                                               385nm    475nm    555nm    630nm

Stabilisation time (min)
(Max CV 0.01% for 10 min)                         41        3        3        8

Stability Factor (%) Before Warmup             99.7%    99.9%   100.0%   100.0%
Stability Factor (%) After Warmup             100.0%   100.0%   100.0%    99.9%

 

Metrics

  • The Stability Factor indicates a higher stability the closer to 100% and focuses specifically on the range of values (difference between max and min) relative to their sum, providing an intuitive measure of how tightly the system's behavior stays within a defined range.
  • The Coefficient of Variation focuses on the dispersion of all data points (via the standard deviation) relative to the mean. Lower Coefficient indicates a better stability around the mean.

Conclusion

The illumination warmup time for this instrument is approximately 40 minutes. This duration is essential for ensuring accurate quantitative measurements, as the Coefficient of Variation (CV) threshold is strict, with a maximum allowable variation of 0.01% within a 10-minute window.

Illumination Maximum Power Output

This measure assesses the maximum power output of each light source, considering both the quality of the light source and the components along the light path. Over time, we expect a gradual decrease in power output due to the aging of hardware, including the light source and other optical components. These measurements will also be used to track the performance of the light sources over their lifetime (see Long-Term Illumination Stability section). For a detailed exploration of illumination properties, refer to the Illumination Power, Stability, and Linearity Protocol by the QuaRep Working Group 01.

Acquisition protocol

  • Warm up the light sources for the required duration (see previous section).
  • Place the power meter sensor on the microscope stage.
  • Center the sensor and the objective.
  • Zero the sensor to ensure accurate readings.
  • Select the wavelength of the light source you want to monitor, using your power meter controller or associated software.
  • Turn the light source on to 100% power.
  • Record the average power output over 10 seconds.
    I personally re-use the data collected during the warm-up kinetic experiment (when the power has been stable for 1h) for this purpose.
  • Repeat for each light source.

Results

These results are informative on their own but become even more meaningful when compared to the manufacturer’s specifications.

  • Calculate the Relative PowerSpec using the formula: Relative PowerSpec = (Measured Power / Manufacturer Specifications) and plot the Relative Power for each light source.


  • Report the results in a table

           Manufacturer           Measurements         Relative
           Specifications (mW)    2024-11-22 (mW)      PowerSpec (%)
385nm      150.25                 122.2                81%
470nm      110.4                  95.9                 87%
555nm      31.9                   24                   75%
630nm      52                     39.26                76%
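As a minimal sketch, the Relative PowerSpec column can be reproduced from the specification and measured values in the table above:

```python
# Specification and measured values (mW) from the table above.
specs = {"385nm": 150.25, "470nm": 110.4, "555nm": 31.9, "630nm": 52.0}
measured = {"385nm": 122.2, "470nm": 95.9, "555nm": 24.0, "630nm": 39.26}

# Relative PowerSpec = Measured Power / Manufacturer Specification
relative_power_spec = {wl: 100 * measured[wl] / specs[wl] for wl in specs}

for wl, pct in relative_power_spec.items():
    print(f"{wl}: {pct:.1f}%")
```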

Metrics

  • The Maximum Power indicates how much light is provided by the instrument.
  • The Relative PowerSpec indicates how much power is provided compared to the specifications.

Conclusion

This instrument provides 80% of the power specified by the manufacturer. These results are consistent, as the manufacturer’s specifications are based on a different objective, and likely different filters and mirrors, which can affect the measured power output.

Illumination Stability

The light sources used on a microscope should remain constant or at least stable over the time scale of an experiment. For this reason, illumination stability is recorded across four different time scales:

  • Real-time Illumination Stability: Continuous recording for 1 minute. This represents the duration of a z-stack acquisition.
  • Short-term Illumination Stability: Recording every 1-10 seconds for 5-15 minutes. This represents the duration needed to acquire several images.
  • Mid-term Illumination Stability: Recording every 10-30 seconds for 1-2 hours. This represents the duration of a typical acquisition session or short time-lapse experiments. For longer time-lapse experiments, a longer duration may be used.
  • Long-term Illumination Stability: Recording once a year or more over the lifetime of the instrument.

For a detailed exploration of illumination stability, refer to the Illumination Power, Stability, and Linearity Protocol by the QuaRep Working Group 01.

Real-time Illumination Stability

Acquisition protocol

  • Warm up the light sources for the required duration (see previous section).
  • Place the power meter sensor on the microscope stage.
  • Center the sensor and the objective.
  • Zero the sensor to ensure accurate readings.
  • Select the wavelength of the light source you want to monitor, using your power meter controller or associated software.
  • Turn the light source on to 100% power.
  • Record the power output every 100 ms for 1 minute. For some microscopes dedicated to fast imaging, it might be necessary to record stability at a faster rate; the Thorlabs S170C sensor can record at 1 kHz!

    I personally acquire this data immediately after the warm-up kinetic experiment, without turning off the light source.

  • Repeat for each light source.

Results

  • Use the provided spreadsheet template Illumination_Stability_Template.xlsx
  • Copy and paste the data from the recorded CSV file into the highlighted cells of the template to visualize your results.
  • For each light source, plot the Measured Power Output (in mW) over Time.

  • Calculate the Relative Power using the formula: Relative Power = (Power / Max Power). Then, plot the Relative Power (%) over Time.

  • Calculate the Stability Factor (S) using the formula: S (%) = 100 × (1 - (Pmax - Pmin) / (Pmax + Pmin)).
  • Calculate the Coefficient of Variation (CV) using the formula: CV = Standard Deviation / Mean.
  • Report the results in a table.

           Stability Factor    Coefficient of Variation
385nm      99.99%              0.002%
475nm      99.99%              0.002%
555nm      99.97%              0.004%
630nm      99.99%              0.002%
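The two stability metrics defined above can be computed with a short sketch; the function name is arbitrary, and the CV is returned as a percentage to match the table:

```python
import numpy as np

def stability_metrics(powers):
    """Stability Factor S (%) and Coefficient of Variation (%):
    S = 100 * (1 - (Pmax - Pmin) / (Pmax + Pmin)), CV = std / mean."""
    p = np.asarray(powers, dtype=float)
    s = 100 * (1 - (p.max() - p.min()) / (p.max() + p.min()))
    cv = 100 * p.std() / p.mean()
    return s, cv
```

The same function applies unchanged to the short-term and mid-term traces in the following sections.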

From the Stability Factor results, we observe that the difference between the maximum and minimum power is less than 0.03%. Additionally, the Coefficient of Variation indicates that the standard deviation is less than 0.004% of the mean value, demonstrating excellent real-time power stability.

Conclusion

The light sources exhibit very high stability (>99.9%) during a 1-minute period.

Short-term Illumination Stability

Acquisition protocol

  • Warm up the light sources for the required duration (see previous section).
  • Place the power meter sensor on the microscope stage.
  • Center the sensor and the objective.
  • Zero the sensor to ensure accurate readings.
  • Select the wavelength of the light source you want to monitor, using your power meter controller or associated software.
  • Turn the light source on to 100% power.
  • Record the power output every 10 seconds for 15 minutes.

    I personally re-use the data collected during the warm-up kinetic experiment (when the power has been stable for 1h) for this purpose.

  • Repeat for each light source.

Results

  • Use the provided spreadsheet template Illumination_Stability_Template.xlsx
  • Copy and paste the data from the recorded CSV file into the highlighted cells of the template to visualize your results.
  • For each light source, plot the Measured Power Output (in mW) over Time.

  • Calculate the Relative Power using the formula: Relative Power = (Power / Max Power). Then, plot the Relative Power (%) over Time.

  • Calculate the Stability Factor (S) using the formula: S (%) = 100 × (1 - (Pmax - Pmin) / (Pmax + Pmin)).
  • Calculate the Coefficient of Variation (CV) using the formula: CV = Standard Deviation / Mean.
  • Report the results in a table.


           Stability Factor    Coefficient of Variation
385nm      100.00%             0.000%
475nm      100.00%             0.002%
555nm      100.00%             0.003%
630nm      99.99%              0.004%

From the Stability Factor, we observe that the difference between the maximum and minimum power is less than 0.01%. Additionally, the Coefficient of Variation indicates that the standard deviation is less than 0.004% of the mean value, demonstrating excellent short-term power stability.

Conclusion

The light sources exhibit high stability, maintaining >99.9% stability during a 15-minute period.

Mid-term Illumination Stability

Acquisition protocol

  • Warm up the light sources for the required duration (see previous section).
  • Place the power meter sensor on the microscope stage.
  • Center the sensor and the objective.
  • Zero the sensor to ensure accurate readings.
  • Select the wavelength of the light source you want to monitor, using your power meter controller or associated software.
  • Turn the light source on to 100% power.
  • Record the power output every 10 seconds for 1 hour.

    I personally re-use the data collected during the warmup kinetic experiment.

  • Repeat for each light source

Results

  • Use the provided spreadsheet template Illumination_Stability_Template.xlsx
  • Copy and paste the data from the recorded CSV file into the highlighted cells of the template to visualize your results.
  • For each light source, plot the Measured Power Output (in mW) over Time

  • Calculate the Relative Power using the formula: Relative Power = (Power / Max Power). Then, plot the Relative Power (%) over Time.

  • Calculate the Stability Factor (S) using the formula: S (%) = 100 × (1 - (Pmax - Pmin) / (Pmax + Pmin)).
  • Calculate the Coefficient of Variation (CV) using the formula: CV = Standard Deviation / Mean.
  • Report the results in a table.

           Stability Factor    Coefficient of Variation
385nm      99.98%              0.013%
475nm      99.98%              0.011%
555nm      99.99%              0.007%
630nm      99.97%              0.020%

From the Stability Factor, we observe that the difference between the maximum and minimum power is less than 0.03%. Additionally, the Coefficient of Variation indicates that the standard deviation is less than 0.02% of the mean value, demonstrating excellent mid-term power stability.

Conclusion

The light sources exhibit exceptional stability, maintaining a performance of >99.9% during a 1-hour period.

Long-term Illumination Stability

Long-term illumination stability measures the power output over the lifetime of the instrument. Over time, we expect a gradual decrease in power output due to the aging of hardware, including the light source and other optical components. These measurements are not a separate experiment per se; they simply track the maximum power output over time.

Acquisition protocol

  • Warm up the light sources for the required duration (see previous section).
  • Place the power meter sensor on the microscope stage.
  • Center the sensor and the objective.
  • Zero the sensor to ensure accurate readings.
  • Select the wavelength of the light source you want to monitor, using your power meter controller or associated software.
  • Turn the light source on to 100% power.
  • Record the average power output over 10 seconds.
    I personally re-use the data collected for the maximal power output section and plot it over time.
  • Repeat for each light source.

Results

  • Use the provided spreadsheet template Illumination_Long-Term_Stability_Log.xlsx
  • Copy and paste the data from the recorded CSV file into the highlighted cells of the template to visualize your results.
  • For each light source, plot the Measured Power Output (in mW) over Time

  • Calculate the Relative Power using the formula: Relative Power = (Power / Max Power). Then, plot the Relative Power (%) over Time.

  • Calculate the Relative PowerSpec by comparing the measured power to the manufacturer’s specifications using the following formula: Relative PowerSpec = Power / PowerSpec
  • Plot the Relative PowerSpec (% Spec) over Time.

We expect a gradual decrease in power output over time due to the aging of hardware. Light sources should be replaced when the Relative PowerSpec falls below 50%.

  • Report the results in a table.

           Relative PowerSpec
385nm      80.53%
475nm      83.61%
555nm      65.83%
630nm      67.12%
  • Keep the Log file to append future measurements.
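The 50% replacement rule above can be sketched as a simple threshold check on the logged values (values taken from the table above; the variable names are illustrative):

```python
# Latest Relative PowerSpec (%) per light source, from the table above.
relative_power_spec = {"385nm": 80.53, "475nm": 83.61, "555nm": 65.83, "630nm": 67.12}
THRESHOLD = 50.0  # replace a light source when it falls below 50% of spec

to_replace = [wl for wl, pct in relative_power_spec.items() if pct < THRESHOLD]
print(to_replace)  # currently empty: all sources are above 50%
```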

Conclusion

The light sources have been reasonably stable over the last 2 years, although a gradual decrease in the maximum power output is visible.

Illumination Stability Conclusions

Stability Factor

           Real-time    Short-term    Mid-term
           1 min        15 min        1 h
385nm      99.99%       100.00%       99.98%
475nm      99.99%       100.00%       99.98%
555nm      99.97%       100.00%       99.99%
630nm      99.99%       99.99%        99.97%

The light sources are highly stable (Stability >99.9%).

Metrics

  • The Stability Factor indicates a higher stability the closer to 100% and focuses specifically on the range of values (difference between max and min) relative to their sum, providing an intuitive measure of how tightly the system's behavior stays within a defined range.
  • The Coefficient of Variation focuses on the dispersion of all data points (via the standard deviation) relative to the mean. Lower Coefficient indicates a better stability around the mean.


Illumination Input-Output Linearity

This measure compares the power output as the input varies. A linear relationship is expected between the input and the power output. For a detailed exploration of illumination linearity, refer to the Illumination Power, Stability, and Linearity Protocol by the QuaRep Working Group 01.

Acquisition protocol

  • Warm up the light sources for the required duration (see previous section).
  • Place the power meter sensor on the microscope stage.
  • Center the sensor and the objective.
  • Zero the sensor to ensure accurate readings.
  • Select the wavelength of the light source you want to monitor, using your power meter controller or associated software.
  • Adjust its intensity to 100%, then incrementally decrease the input intensity to 90%, 80%, 70%, and so on, down to 0%.
    I typically collect this data immediately after the warm-up kinetic phase and once the real-time power stability data has been recorded.
  • Record the power output corresponding to each input level.
  • Repeat for each light source

Results

  • Use the provided spreadsheet template Illumination_Linearity_Template.xlsx
  • Enter your measurement into the highlighted cells of the template to visualize your results.
  • For each light source, plot the Measured Power output (in mW) as a function of the Input (%).

  • Calculate the Relative Power using the formula: Relative Power = Power / MaxPower.
  • Plot the Relative Power (%) as a function of the Input (%).

  • Determine the equation for each curve, which is typically a linear relationship of the form: Output = K × Input
  • Report the Slope (K) and the Coefficient of Determination (R²) for each curve in a table.

Illumination Input-Output Linearity

           Slope     R²
385nm      0.9969    1
475nm      0.9984    1
555nm      1.0012    1
630nm      1.0034    1
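The slope and coefficient of determination above can be obtained with a least-squares fit through the origin, matching the Output = K × Input model; this is a sketch with an arbitrary function name, assuming paired input/output arrays from your measurements:

```python
import numpy as np

def linearity_fit(input_pct, relative_power_pct):
    """Fit Relative Power = K * Input (line through the origin) and
    return the slope K and the coefficient of determination R^2."""
    x = np.asarray(input_pct, dtype=float)
    y = np.asarray(relative_power_pct, dtype=float)
    k = np.sum(x * y) / np.sum(x * x)        # least-squares slope through origin
    ss_res = np.sum((y - k * x) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
    return k, 1 - ss_res / ss_tot
```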

The slopes demonstrate a nearly perfect linear relationship between the input and the measured output power, with values very close to 1. The coefficient of determination (R²) indicates a perfect linear fit, showing no deviation from the expected relationship.

Metrics

  • The Slope indicates the rate of change between Input and Output.
  • The Coefficient of Determination indicates how well the data fits a linear relationship.

Conclusion

The light sources are highly linear, with an average Slope = 0.999 and a perfect fit (R² = 1).

Objectives and Cubes Transmittance

Since we are using a power meter, we can easily assess the transmittance of the objectives and filter cubes. This measurement compares the power output when different objectives and filter cubes are in the light path. It evaluates the transmittance of each objective and compares it with the manufacturer’s specifications. This method can help detect defects or dirt on the objectives. It can also verify the correct identification of the filters installed in the microscope.

Objectives Transmittance

Acquisition protocol

  • Warm up the light sources for the required duration (see previous section).
  • Place the power meter sensor on the microscope stage.
  • Center the sensor and the objective.
  • Zero the sensor to ensure accurate readings.
  • Select the wavelength of the light source you want to monitor, using your power meter controller or associated software.
  • Turn on the light source to 100% intensity.
  • Record the power output for each objective, as well as without the objective in place.

    I typically collect this data after completing the warm-up kinetic phase, followed by the real-time power stability measurements, and immediately after recording the power output linearity.

  • Repeat for each light source and wavelength

Results

  • Use the provided spreadsheet template Objective and cube transmittance_Template.xlsx
  • Enter your measurement into the highlighted cells of the template to visualize your results.
  • For each light source, plot the Measured Power output (in mW) as a function of the wavelength (in nm).

  • Calculate the Relative Transmittance using the formula: Relative Transmittance = Power / PowerNoObjective.
  • Plot the Relative Transmittance (%) as a function of the wavelength (in nm).

  • Calculate the average transmittance for each objective
  • Compare the average transmittance to the specifications provided by the manufacturer
  • Report results in a table.

                 Average          Specifications    Average Transmittance
                 Transmittance    [470-630]         [470-630]
2.5x-0.075       77%              >90%              84%
10x-0.25-Ph1     60%              >80%              67%
20x-0.5 Ph2      62%              >80%              68%
63x-1.4          29%              >80%              35%

The measurements are generally close to the specifications, with the exception of the 63x-1.4 objective. This deviation is expected, as the 63x objective has a smaller back aperture, which reduces the amount of light it can receive.
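The Relative Transmittance and its average can be computed as below; the power readings here are purely hypothetical placeholders for one objective, to be replaced with your own measurements per wavelength:

```python
# Hypothetical readings (mW) per wavelength, with and without the objective.
power_no_objective = {470: 95.9, 555: 24.0, 630: 39.26}
power_objective = {470: 62.3, 555: 15.8, 630: 26.9}

# Relative Transmittance = Power / PowerNoObjective, per wavelength
relative_transmittance = {
    wl: 100 * power_objective[wl] / power_no_objective[wl]
    for wl in power_no_objective
}
# Average transmittance over the measured wavelength range
average_transmittance = sum(relative_transmittance.values()) / len(relative_transmittance)
```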

Conclusion

The objectives are transmitting light properly.

Cubes Transmittance

Acquisition protocol

  • Warm up the light sources for the required duration (see previous section).
  • Place the power meter sensor on the microscope stage.
  • Center the sensor and the objective.
  • Zero the sensor to ensure accurate readings.
  • Select the wavelength of the light source you want to monitor, using your power meter controller or associated software.
  • Turn on the light source to 100% intensity.
  • Record the power output for each filter cube.

    I typically collect this data after completing the warm-up kinetic phase, followed by the real-time power stability measurements, and immediately after recording the power output linearity.

  • Repeat for each light source and wavelength

Results

  • Use the provided spreadsheet template Objective and cube transmittance_Template.xlsx
  • Enter your measurement into the highlighted cells of the template to visualize your results.
  • For each light source, plot the Measured Power output (in mW) as a function of the wavelength (in nm).

  • Calculate the Relative Transmittance using the formula: Relative Transmittance = Power / PowerMaxFilter.
  • Plot the Relative Transmittance (%) as a function of the wavelength (in nm).

  • Calculate the Average Transmittance for each filter at the appropriate wavelengths
  • Report the results in a table.

                      385     475     555     590     630
DAPI/GFP/Cy3/Cy5      100%    100%    100%    100%    100%
DAPI                  14%     0%      0%      8%      0%
GFP                   0%      47%     0%      0%      0%
DsRed                 0%      0%      47%     0%      0%
DHE                   0%      0%      0%      0%      0%
Cy5                   0%      0%      0%      0%      84%
  • The DAPI cube transmits only 14% of the excitation light compared to the Quad Band Pass DAPI/GFP/Cy3/Cy5. While it is still usable, it will provide a low signal. This is likely because the excitation filter within the cube does not match the light source properly. Since an excitation filter is already included in the light source, the filter in this cube could be removed.

  • The GFP and DsRed cubes transmit 47% of the excitation light compared to the Quad Band Pass DAPI/GFP/Cy3/Cy5, and they are functioning properly.

  • The DHE cube does not transmit any light from the Colibri. This cube may need to be removed and stored.

  • The Cy5 cube transmits 84% of the excitation light compared to the Quad Band Pass DAPI/GFP/Cy3/Cy5, and it is working properly.

Conclusion

Actions to be Taken:

  • Remove the excitation filter from the DAPI cube, as it does not properly match the light source and is redundant with the excitation filter already included in the light source.
  • Remove and store the DHE cube, as it does not transmit any light from the Colibri and is no longer functional.

We are done with the power meter.

Field Illumination Uniformity

Having confirmed the stability of our light sources and verified that the optical components (objectives and filter cubes) are transmitting light effectively, we can now proceed to evaluate the uniformity of the illumination. This step assesses how evenly the illumination is distributed. For a comprehensive guide on illumination uniformity, refer to the Illumination Uniformity by the QuaRep Working Group 03.

Acquisition protocol 

  1. Place a fluorescent plastic slide (Thorlabs FSK5) onto the stage.
  2. Center the slide and the objective.
  3. Adjust the focus to align with the surface of the slide.

    I typically use a red fluorescent slide and focus on a scratch mark on its surface for alignment.

  4. Slightly adjust the focus deeper into the slide to minimize the visibility of dust, dirt, and scratches.

  5. Modify the acquisition parameters to ensure the image is properly exposed.

    I typically use the auto-exposure feature, aiming for a targeted intensity of 30%.

  6. Capture a multi-channel image.
  7. Repeat steps 5 and 6 for each objective and filter combination.

Processing

You should have acquired several multi-channel images that now need processing to yield meaningful results.

I initially used the Field Illumination analysis function of the MetroloJ_QC plugin for FIJI but eventually branched away to write my own processing plugin, named QC Scope. For more information about QC Scope, please refer to the QC Scope repository on GitHub. For more information about the MetroloJ_QC plugin, please refer to the manual available on the MontpellierRessourcesImagerie repository on GitHub.

  1. Download FIJI for your operating system
  2. Download QC Scope (aka QC_Scope.jar)
  3. Unzip FIJI.app to your favorite location on your computer (usually your Desktop but not in a System Directory).
  4. Start FIJI (aka FIJI.app, ImageJ-win.exe)
  5. Drag and drop QC_Scope.jar into the FIJI toolbar
  6. Click Save
  7. Click OK
  8. Close FIJI
  9. Reopen FIJI

After installation, QC Scope will be available in the FIJI menu under Plugins>QC Scope. The QC Scope floating toolbar is available for rapid and convenient access. To load it, select QC Scope Toolbar in the FIJI menu under Plugins>QC Scope>QC Scope Toolbar. Additionally, for even more convenient usage, the QC Scope Toolbar can be loaded automatically when FIJI starts: click the Autostart button or select QC Scope Toolbar Autostart in the FIJI menu under Plugins>QC Scope>QC Scope Toolbar Autostart.

  1. Open FIJI.
  2. Launch the QC Scope Toolbar by navigating to Plugins>QC Scope>QC Scope Toolbar.
  3. Click on Uniformity.
    1. If one or more images are already open, QC Scope will process them.
    2. If no image is open, QC Scope will prompt you to select a folder and process all images within that folder (and its subfolders).

      QC Scope file format compatibility

      For now, QC Scope only processes images with the following extensions: ".tif", ".tiff", ".jpg", ".jpeg", ".png", ".czi", ".nd2", ".lif", ".lsm", ".ome.tif", ".ome.tiff"

  4. QC Scope will try to read the metadata from the first image and pre-process all the channels with the default or last-used processing settings
  5. It will display the metadata, the initial results and the processing options in a dialog
    1. Microscope Metadata: Objective Magnification, NA, and Immersion media
    2. Image Calibration status, Pixel Width, Height, Voxel Size, Unit
    3. For each channel: Name and Emission Wavelength
    4. Processing Settings:
      1. Binning Method:
        1. Iso-Density (preferred): This method divides the image into 10 bins containing an equal number of pixels: Nb Pixels Per Bin = (Width x Height) / 10. It assigns a new pixel value of 25 to the Nb Pixels Per Bin darkest pixels, 50 to the next darkest Nb Pixels Per Bin, and so on, up to 250 for the brightest Nb Pixels Per Bin pixels.
        2. Iso-Intensity: This method divides the image into 10 bins of equal intensity width: Bin Width = (Max - Min) / 10. It assigns a new pixel value of 25 to all pixels with an intensity between Min and Min + Bin Width, 50 for intensities between Min + Bin Width and Min + 2 x Bin Width, and so on, up to 250 for intensities between Min + 9 x Bin Width and Max.
      2. Gaussian Blur: Apply a Gaussian blur with the given Gaussian Blur Sigma before processing the image channel
      3. Channel: The selected channel will be processed with the entered processing parameters and displayed as part of the testing process to define optimal processing parameters.
      4. Batch Mode: If activated, QC Scope will re-use the settings without displaying the dialog unless metadata differs
      5. Save Individual Files: For each image, QC Scope will save the individual processed images (1 per channel) and a CSV file with all measured parameters.
      6. Prolix Mode: Display all the QC Scope actions in the Log
      7. Test Processing: When selected the Dialog will keep appearing. This is useful to test the Processing Settings

  6. QC Scope will save files in a folder named Output on your desktop:

    1. At least 2 CSV files:

      1. Field Uniformity_All-Data_Merged.csv gathers all the measured parameters
      2. Field Uniformity_Essential-Data_Merged.csv gathers only the essential information

    2. Optionally, if Save Individual Files is selected, QC Scope will also save:
      1. 1 CSV file per image NameOfYourImage_Uniformity-Data.csv with one row per channel containing all the measured parameters
      2. 1 TIF file per channel for every processed image showing the binned (Iso-density or Iso-Intensity) image map with the Reference Center indicated as an overlay

Note: QC Scope never overwrites files. It will check for the existence of files and increment a number until it can safely write the output file.
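The Iso-Density and Iso-Intensity binning methods described in the Processing Settings above can be sketched with NumPy. This is an illustration of the algorithms as described, not the actual QC Scope implementation; function names are arbitrary.

```python
import numpy as np

def iso_intensity_bins(img, n_bins=10):
    """Assign each pixel the value 25, 50, ..., 250 according to which of
    n_bins equal-width intensity ranges (between Min and Max) it falls in."""
    img = np.asarray(img, dtype=float)
    edges = np.linspace(img.min(), img.max(), n_bins + 1)
    idx = np.clip(np.digitize(img, edges[1:-1]), 0, n_bins - 1)
    return (idx + 1) * 25

def iso_density_bins(img, n_bins=10):
    """Assign each pixel the value 25, 50, ..., 250 so that each bin holds
    (approximately) the same number of pixels."""
    img = np.asarray(img, dtype=float)
    ranks = img.ravel().argsort().argsort()   # 0 = darkest pixel, N-1 = brightest
    idx = ranks * n_bins // ranks.size        # bin index 0 .. n_bins-1
    return ((idx + 1) * 25).reshape(img.shape)
```

On a uniform gradient image both methods agree; on a vignetted field image, Iso-Density guarantees equally populated bins regardless of the intensity distribution.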




Description of the QC Scope Field Uniformity Results (the results included in Essential Data are shown in bold)

Key Order | Field Name | Data Example | Data Type | Description
1 | Filename | 10x_Quad_Exp-01.czi | String | Name of the processed image
2 | Channel Nb | 4 | Integer | Number of the channel, from 1 to n
3 | Channel Name | DAPI | String | Name of the channel
4 | Channel Wavelength EM (nm) | 465 | Integer | Channel emission wavelength
5 | Objective Magnification | 10x | String | Objective magnification
6 | Objective NA | 0.25 | Float | Objective numerical aperture
7 | Objective Immersion Media | Air | String | Objective immersion media
8 | Gaussian Blur Applied | TRUE | Boolean | Whether a Gaussian blur was applied
9 | Gaussian Sigma | 10 | Integer | Sigma of the Gaussian blur
10 | Binning Method | Iso-Density | String | Binning method used
11 | Batch Mode | TRUE | Boolean | Boolean key to process images in batch mode (no dialog)
12 | Save Individual Files | FALSE | Boolean | Boolean key to save individual files (1 CSV per image with 1 row per channel, 1 TIF binned image per channel)
13 | Prolix Mode | FALSE | Boolean | Boolean key to display detailed plugin actions in the log
14 | Image Min Intensity | 856 | Integer | Raw image minimum pixel intensity
15 | Image Max Intensity | 1080 | Integer | Raw image maximum pixel intensity
16 | Image Mean Intensity | 959.1 | Float | Raw image mean pixel intensity
17 | Image Standard Deviation Intensity | 24.3 | Float | Raw image standard deviation of pixel intensities
18 | Image Median Intensity | 959 | Integer | Raw image median pixel intensity
19 | Image Mode Intensity | 110 | Integer | Raw image mode pixel intensity
20 | Image Width (pixels) | 1388 | Integer | Image width in pixels
21 | Image Height (pixels) | 1040 | Integer | Image height in pixels
22 | Image Bit Depth | 16 | Integer | Image bit depth
23 | Pixel Width (um) | 0.645 | Float | Image pixel width (unit/px)
24 | Pixel Height (um) | 0.645 | Float | Image pixel height (unit/px)
25 | Pixel Depth (um) | 1 | Float | Image voxel depth (unit/voxel)
26 | Space Unit | micron | String | Raw image space unit
27 | Space Unit Standard | um | String | Standardized space unit (nm, um, cm, m)
28 | Calibration Status | TRUE | Boolean | Boolean key displaying the calibration status
29 | Standard Deviation (GV) | 24.3 | Float | Raw image standard deviation of pixel intensities
30 | Uniformity Standard (%) | 79.3 | Float | Uniformity as calculated by MetroloJ_QC. Uniformity_Standard = 100 * (Min / Max)
31 | Uniformity Percentile (%) | 95.8 | Float | Uniformity calculated with the average of the 5% and 95% pixel intensities. Uniformity_Percentile = (1 - (Avg_Intensity95 - Avg_Intensity5) / (Avg_Intensity95 + Avg_Intensity5)) * 100
32 | Coefficient of Variation | 0.0253 | Float | Coefficient of variation. CV = Std_Dev / Mean
33 | Uniformity CV based | 97.5 | Float | Uniformity calculated from the coefficient of variation. Uniformity_CV = (1 - CV) * 100
34 | X Center (pixels) | 694 | Integer | X coordinate in pixels of the center of the image (ideal centering). Image Width (pixels) / 2
35 | Y Center (pixels) | 520 | Integer | Y coordinate in pixels of the center of the image (ideal centering). Image Height (pixels) / 2
36 | X Ref (pixels) | 230.4 | Float | X coordinate in pixels of the centroid of the largest particle identified in the last bin. Used to calculate the Centering Accuracy
37 | Y Ref (pixels) | 876.4 | Float | Y coordinate in pixels of the centroid of the largest particle identified in the last bin. Used to calculate the Centering Accuracy
38 | X Ref (um) | 148.6 | Float | X coordinate in scaled units of the centroid of the largest particle identified in the last bin
39 | Y Ref (um) | 565.3 | Float | Y coordinate in scaled units of the centroid of the largest particle identified in the last bin
40 | Centering Accuracy (%) | 32.6 | Float | Centering Accuracy = 100 - 100 * (2 / sqrt(Image Width**2 + Image Height**2)) * sqrt((X_Ref_Pix - Image Width/2)**2 + (Y_Ref_Pix - Image Height/2)**2)


  1. Open FIJI.
  2. Load your image by dragging it into the FIJI bar.
  3. Launch the MetroloJ_QC plugin by navigating to Plugins > MetroloJ QC.
  4. Click on Field Illumination Report.
  5. Enter a Title for your report; I typically use the date.
  6. Type in your Name.
  7. Click Filter Parameters and input the filter's names, excitation, and emission wavelengths.
  8. Check Remove Noise using Gaussian Blur.

  9. Enable Apply Tolerance to the Report and reject uniformity and accuracy values below 80%.

  10. Click File Save Options.

  11. Select Save Results as PDF Reports.

  12. Select Save Results as spreadsheets.

  13. Click OK.

  14. Repeat steps 4 through 13 for each image you have acquired.

This will generate detailed results stored in a folder named Processed. The Processed folder will be located in the same directory as the original images, with each report and result saved in its own sub-folder.

The following R script processes the Illumination Uniformity results generated by MetroloJ_QC: Process Illumination Uniformity Results.R. To use it, simply drag and drop the file into the R interface, or open it with RStudio and click the Source button. The script will:

  • Prompt the user to select an input folder.
  • Load all _results.xls files located in the selected folder and subfolders
  • Process and merge all results generated by the MetroloJ_QC Field Illumination.
  • Split the filenames using the _ character as a separator and create columns named Variable-001, Variable-002, etc. This will help in organizing the data if the filenames are formatted as Variable-A_Variable-B, for example.
  • Save the result as Field-Unifromity_Merged-Data.csv in an Output folder on the user's Desktop 
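The filename-splitting convention above can be sketched in Python (the function name is ours, for illustration; the actual processing happens in the R script):

```python
import os

def split_filename_variables(filename):
    """Split a result filename on '_' into Variable-001, Variable-002, ... fields.

    Illustrative helper mirroring the R script's behaviour; not part of it.
    """
    stem = os.path.splitext(os.path.basename(filename))[0]
    stem = stem.replace("_results", "")  # drop the MetroloJ_QC suffix if present
    parts = stem.split("_")
    return {"Variable-%03d" % (i + 1): part for i, part in enumerate(parts)}
```

A file named Obj-10x_Filter-DAPI_results.xls thus yields Variable-001 = Obj-10x and Variable-002 = Filter-DAPI.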


This script will generate a CSV file that can be saved as an XLSX and manipulated with a pivot table to generate informative graphs and statistics. From the pivot table, copy and paste your data into the orange cells of the following spreadsheet: Illumination_Uniformity_Template.xlsx

Results

Plot the uniformity and centering accuracy for each objective.

Metrics

  • The Uniformity indicates the range between the minimum and maximum intensities in the image: U = (Min / Max) * 100. 100% uniformity indicates a perfectly homogeneous image; 50% uniformity indicates the minimum is half the maximum.
  • The Centering Accuracy indicates how far the center of the illumination (the centroid of the maximum-illumination bin) is from the center of the image. 100% indicates the illumination centroid aligns perfectly with the center of the image; 0% indicates that the center of the illumination is as far from the center of the image as possible.
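The uniformity definitions above (and the formulas from the results table) can be checked on any raw image with a short numpy sketch; the function name is ours, and MetroloJ_QC itself runs inside FIJI:

```python
import numpy as np

def uniformity_metrics(pixels):
    """Field-illumination metrics from a 2D array of raw grey values."""
    px = np.asarray(pixels, dtype=float)
    u_standard = 100.0 * px.min() / px.max()   # Uniformity_Standard = 100 * Min / Max
    p5, p95 = np.percentile(px, [5, 95])
    avg5 = px[px <= p5].mean()                 # average of the darkest 5%
    avg95 = px[px >= p95].mean()               # average of the brightest 5%
    u_percentile = (1 - (avg95 - avg5) / (avg95 + avg5)) * 100
    cv = px.std() / px.mean()                  # Coefficient of Variation
    u_cv = (1 - cv) * 100                      # Uniformity_CV
    return u_standard, u_percentile, cv, u_cv
```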



Objective | Uniformity | Centering Accuracy
2x | 97.5% | 92.7%
10x | 97.0% | 94.5%
20x | 97.3% | 97.1%
63x | 96.6% | 96.7%


Plot the uniformity and centering accuracy for each filter set.

Filter | Uniformity | Centering Accuracy
DAPI | 98.3% | 99.4%
DAPIc | 95.8% | 84.9%
GFP | 98.1% | 99.1%
GFPc | 96.5% | 93.3%
Cy3 | 97.6% | 96.5%
Cy3c | 96.8% | 97.9%
Cy5 | 97.0% | 99.6%
Cy5c | 96.7% | 91.3%


This specific instrument has a quad-band filter as well as individual filter cubes. We can plot the uniformity and centering accuracy per filter type.

Filter Type | Uniformity | Centering Accuracy
Quad band | 97.7% | 98.7%
Single band | 96.5% | 91.8%

Store the original field illumination images to be able to perform shading corrections after acquisition.

Conclusion

The uniformity and centering accuracy are excellent across all objectives and filters, consistently exceeding 90%. However, the single-band filter cubes exhibit slightly lower uniformity and centering accuracy compared to the quad-band filter cube.


XYZ Drift

This experiment evaluates the stability of the system in the XY and Z directions. As noted earlier, when an instrument is started, it requires a warmup period to reach a stable steady state. To determine the duration of this phase accurately, it is recommended to record a warmup kinetic at least once per year. For a comprehensive guide on Drift and Repositioning, refer to the Stage and Focus Precision by the QuaRep Working Group 06.

Acquisition protocol 

  1. Place 4 µm diameter fluorescent beads (TetraSpeck™ Fluorescent Microspheres Size Kit, mounted on a slide) on the microscope stage.

  2. Center an isolated bead under a high-NA dry objective.

  3. Crop the acquisition area to visualize only one bead, but keep it large enough to accommodate a potential drift along the X and Y axes (a 100 µm field of view should be enough)
  4. Select an imaging channel appropriate for the fluorescent beads

    I typically use the Cy5 channel, which is very bright and resistant to bleaching. This channel has a lower resolution, but that does not really matter here.

  5. Acquire a large Z-stack at 1-minute intervals for a duration of 24 hours.

    To ensure accurate measurements, it is essential to account for potential drift along the Z-axis by acquiring a Z-stack that is substantially larger than the visible bead size. I typically acquire a 40 µm Z-stack.

Processing

  1. Open your image in FIJI
  2. If necessary, crop the image to focus on a single bead for better visualization.
  3. Use the TrackMate plugin included in FIJI to detect and track spots over time (Plugins > Tracking > TrackMate).
  4. Apply Difference of Gaussians (DoG) spot detection with a detection size of 4 µm
  5. Enable sub-pixel localization for increased accuracy
  6. Click Preview to visualize the spot detection
  7. Set a quality threshold (click and slide) high enough to detect a single spot per frame
  8. Click Next and follow the detection and tracking process
  9. Save the detected Spots coordinates as a CSV file for further analysis

Results

  • Open the spreadsheet template XYZ Drift Kinetic_Template.xlsx and fill in the orange cells.
  • Copy and paste the XYZT and Frame columns from the TrackMate spots CSV file into the corresponding orange columns in the spreadsheet.
  • Enter the numerical aperture (NA) and emission wavelength used during the experiment.
  • Calculate the relative displacement in X, Y, and Z using the formula: Relative Displacement = Position - Position_Initial.
  • Finally, plot the relative displacement over time to visualize the system's drift.

We observe an initial drift that stabilizes over time in X (+2 um), Y (+1.3 um) and Z (-10.5 um). Calculate the displacement, 3D Displacement = Sqrt((X2-X1)^2 + (Y2-Y1)^2 + (Z2-Z1)^2), and plot the displacement over time. Calculate the resolution of your imaging configuration, Lateral Resolution = Lambda_Emission / (2 * NA), and plot the resolution over time (constant).

Identify visually the time when the displacement falls below the resolution of the system. This instrument takes 120 min to reach stability. Calculate the velocity, Velocity = (Displacement2 - Displacement1) / (T2 - T1), and plot the velocity over time.

Calculate the average velocity before and after stabilisation and report the results in a table.
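The spreadsheet calculations above can be sketched as a small Python function (names and the column handling are illustrative; the template does the same arithmetic in Excel):

```python
import math

def drift_analysis(times_min, xs_um, ys_um, zs_um, na=0.5, lambda_em_nm=705):
    """Relative 3D displacement and frame-to-frame velocity from bead positions.

    times_min: acquisition time of each frame, in minutes.
    xs_um, ys_um, zs_um: TrackMate positions, in micrometres.
    Returns (resolution_nm, displacements_nm, velocities_nm_per_min).
    """
    # Lateral Resolution = Lambda_Emission / (2 * NA)
    resolution_nm = lambda_em_nm / (2.0 * na)
    x0, y0, z0 = xs_um[0], ys_um[0], zs_um[0]
    # 3D Displacement = Sqrt((X-X0)^2 + (Y-Y0)^2 + (Z-Z0)^2), converted to nm
    disp_nm = [1000.0 * math.sqrt((x - x0) ** 2 + (y - y0) ** 2 + (z - z0) ** 2)
               for x, y, z in zip(xs_um, ys_um, zs_um)]
    # Velocity = (Displacement2 - Displacement1) / (T2 - T1)
    vel = [(d2 - d1) / (t2 - t1) for d1, d2, t1, t2
           in zip(disp_nm, disp_nm[1:], times_min, times_min[1:])]
    return resolution_nm, disp_nm, vel
```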

Objective NA | 0.5
Wavelength (nm) | 705
Resolution (nm) | 705
Stabilisation time (min) | 122
Average velocity, Warmup (nm/min) | 113
Average velocity, System Ready (nm/min) | 14

Metrics

  • The Stabilisation Time indicates the time, in minutes, necessary for the instrument's drift to fall below the resolution of the system.
  • The Average Velocity indicates the speed of drift in all directions (XYZ), in nm/min.

Conclusion

The warmup time for this instrument is approximately 2 hours. After the warmup period, the average displacement velocity is 14 nm/min, which falls within an acceptable range.

Stage Repositioning Dispersion

This experiment evaluates how accurately the system repositions in XY by measuring the dispersion of repositioning. Several variables can affect repositioning: i) time, ii) traveled distance, iii) speed, and iv) acceleration. For a comprehensive guide on stage repositioning, refer to the Stage and Focus Precision by the QuaRep Working Group 06 and the associated XY Repositioning Protocol.

Acquisition protocol

  1. Place 4 µm diameter fluorescent beads (TetraSpeck™ Fluorescent Microspheres Size Kit, mounted on a slide) on the microscope stage.

  2. Center an isolated bead under a high-NA dry objective.

  3. Crop the acquisition area to visualize only one bead, but keep it large enough to accommodate a potential drift along the X and Y axes (a 100 µm field of view should be enough)
  4. Select an imaging channel appropriate for the fluorescent beads

    I typically use the Cy5 channel, which is very bright and resistant to bleaching, even though it has a lower resolution.

  5. Acquire a Z-stack at positions separated by 0 µm (drift), 1000 µm and 10 000 µm in both the X and Y directions.

  6. Repeat the acquisition 20 times

  7. Acquire 3 datasets for each condition

  • Your stage might have a smaller range!
  • Lower the objectives during movement to avoid damage

Processing

  1. Open your image in FIJI
  2. If necessary, crop the image to focus on a single bead for better visualization.
  3. Use the TrackMate plugin included in FIJI to detect and track spots over time (Plugins > Tracking > TrackMate).
  4. Apply Difference of Gaussians (DoG) spot detection with a detection size of 4 µm
  5. Enable sub-pixel localization for increased accuracy
  6. Click Preview to visualize the spot detection
  7. Set a quality threshold (click and slide) high enough to detect a single spot per frame
  8. Click Next and follow the detection and tracking process
  9. Save the detected Spots coordinates as a CSV file for further analysis

Results

  • Open the spreadsheet template Stage Repositioning_Template.xlsx and fill in the orange cells.
  • Copy and paste the XYZT and Frame columns from the TrackMate spots CSV file into the corresponding orange columns in the spreadsheet.
  • Enter the numerical aperture (NA) and emission wavelength used during the experiment.
  • Calculate the relative position in X, Y, and Z using the formula: Position_Relative = Position - Position_Initial.
  • Finally, plot the relative position over time to visualize the system's stage repositioning dispersion.


We observe an initial movement in X and Y that stabilises. Calculate the displacement, 2D Displacement = Sqrt((X2-X1)^2 + (Y2-Y1)^2), and plot the 2D displacement over time. Calculate the resolution of your imaging configuration, Lateral Resolution = Lambda_Emission / (2 * NA), and plot the resolution over time (constant).


This experiment shows a significant initial displacement between Frame 0 and Frame 1, ranging from 1000 nm to 400 nm, which decreases to 70 nm by Frame 2. To quantify this variation, calculate the dispersion for each displacement using the formula: Dispersion = StandardDeviation(Displacement). Report the results in a table.
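The dispersion calculation can be sketched in Python, assuming a simple (condition, x, y) row layout for the TrackMate output (the actual CSV has more columns):

```python
import math
import statistics
from collections import defaultdict

def dispersion_by_condition(rows):
    """Dispersion = StandardDeviation(Displacement), per condition.

    rows: (condition, x_um, y_um) tuples in frame order; positions in um.
    Returns the SD of each frame's 2D displacement from the first frame, in nm.
    """
    by_cond = defaultdict(list)
    for cond, x, y in rows:
        by_cond[cond].append((x, y))
    out = {}
    for cond, pts in by_cond.items():
        x0, y0 = pts[0]
        disp_nm = [1000.0 * math.hypot(x - x0, y - y0) for x, y in pts]
        out[cond] = statistics.pstdev(disp_nm)
    return out
```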

Traveled Distance | 0 mm | 1 mm | 10 mm
X Dispersion (nm) | 4 | 188 | 121
Y Dispersion (nm) | 4 | 141 | 48
Z Dispersion (nm) | 10 | 34 | 53
Repositioning Dispersion 3D (nm) | 6 | 227 | 91
Repositioning Dispersion 2D (nm) | 2 | 226 | 90

Conclusion

The system has an effective Stage Repositioning Dispersion of 230 nm. The results are higher than expected because of the initial shift in the first frame that eventually stabilizes. Excluding the first frame significantly improves the measurements, reducing the repositioning dispersion to 40 nm. Further investigation is required to understand the underlying cause.

Traveled Distance | 0 mm | 1 mm | 10 mm
X Dispersion (nm) | 3 | 28 | 52
Y Dispersion (nm) | 3 | 68 | 35
Z Dispersion (nm) | 10 | 26 | 40
Repositioning Dispersion 3D (nm) | 6 | 43 | 36
Repositioning Dispersion 2D (nm) | 2 | 40 | 36

Metrics

The Repositioning Dispersion indicates how spread out the repositioning is, in nm. The lower the value, the more accurate the stage.


Further Investigation


We observed a significant shift in the first frame, which was unexpected and invites further investigation. These variables can affect repositioning dispersion: i) Traveled distance, ii) Speed, iii) Acceleration, iv) Time, and v) Environment. We decided to test the first three.

Methodology

To test if these variables have a significant impact on the repositioning, we followed the XYZ repositioning dispersion protocol with the following parameters:

  • Distances: 0 µm, 1 µm, 10 µm, 100 µm, 1000 µm, 10 000 µm, 30 000 µm
  • Speed: 10%, 100%
  • Acceleration: 10%,100%
  • For each condition, 3 datasets were acquired

Processing

This experimental protocol generated a substantial number of images. To process them automatically in ImageJ/FIJI using the TrackMate plugin, we use the following script: Stage Repositioning with Batch TrackMate-v7.py

This script automates the process of detecting and tracking spots using the TrackMate plugin for ImageJ/FIJI. To use it:

  • Drop the script into the FIJI toolbar and click Run.

If images are already opened:

  1. Prompt User for Configuration:

    • The user is prompted to configure settings such as enabling subpixel localization, adjusting spot diameter, setting the threshold, and applying median filtering. These settings can be loaded from previous configurations stored in ImageJ’s preferences or set to default values. It’s recommended to run TrackMate manually on a representative image to fine-tune detection parameters (ideally detecting one spot per frame).
  2. User Selection for Image Processing:

    • The user can choose to process all open images or just the active one.
  3. For Each Image:

    • Configure TrackMate Detector and Tracker: The TrackMate detector is configured using the Difference of Gaussians method, and the tracker is set to SparseLAPTracker.
    • Analyze and Filter Tracks: Features are analyzed, and tracks are filtered according to user-defined settings.
    • Process Image: The image is processed, spots are tracked, and results are visualized in the active window.
    • Export Spot Data: Detected spot data is exported to a uniquely named CSV file stored in an "Output" directory on the desktop.
    • Save Settings: User-defined settings are saved to ImageJ’s preferences for future use.
  4. Summary:

    • A dialog is displayed summarizing the number of processed images and the location of the saved output.

If no image is opened:

  1. Prompt User to Select Folder:

    • If no image is open, the user is prompted to select a folder containing images for processing.
  2. Process the First Image:

    • The first image is opened, and the processing workflow is the same as described for open images. The user can choose to process just this image or all images in the folder.
  3. For Subsequent Images:

    • Batch Mode Handling:
      • If batch mode is enabled, images are processed without additional user input, and the settings are applied automatically.
      • If batch mode is not enabled, the user is prompted to configure settings for each subsequent image.
  4. Summary:

    • A dialog is displayed summarizing the number of processed images and the location of the saved output.

Post-Processing:

  • A summary message is logged, detailing the number of processed images and the location where the CSV files are saved.
  • The script merges and processes CSV files into a single "Merged-Data" file stored in the output directory.
  • The user is notified when the merging and processing are completed successfully.

From v7 onward, the FIJI script performs all the necessary tasks.

Prior to v7, the script generated a CSV file for each image, which can be aggregated for further analysis using the accompanying R script, Process Stage Repositioning Results.R. This R script processes all CSV files in a selected folder and saves the merged file as Stage-Repositioning_Merged-Data.csv in an "Output" folder on the user's desktop for streamlined data analysis.

This R code automates the processing of multiple CSV files containing spot tracking data.

  1. Installs and loads required libraries
  2. Sets directories for input (CSV files) and output (merged data) on the user's desktop.
  3. Lists all CSV files in the input directory and reads in the header from the first CSV file.
  4. Defines a filename cleaning function to extract relevant metadata from the filenames (e.g., removing extensions, and extracting variables).
  5. Reads and processes each CSV file:
    • Skips initial rows and assigns column names.
    • Cleans up filenames and adds them to the dataset.
    • Calculates displacement in the X, Y, and Z axes relative to the initial position with the formula Position_Relative = Position - Position_Initial, and computes both 2D and 3D displacement values with the following formulas: 2D_Displacement = Sqrt((X2-X1)^2 + (Y2-Y1)^2); 3D_Displacement = Sqrt((X2-X1)^2 + (Y2-Y1)^2 + (Z2-Z1)^2)
  6. Merges all the processed data into a single dataframe.
  7. Saves the results as Stage-Repositioning_Merged-Data.csv located in the selected input folder.

This script generates a single CSV file that can be further processed and summarized with a pivot table, as shown in the following spreadsheet: Stage-Repositioning_Template.xlsx

Using the first frame as a reference we can plot the average XYZ position for each frame.

 

As observed earlier, there is a significant displacement between Frame 0 and Frame 1, particularly along the X-axis. For this analysis, we will exclude the first two frames and focus on the variables of interest: (i) Traveled distance, (ii) Speed, and (iii) Acceleration and will come back to the initial shift later.

Repositioning Dispersion: Impact of Traveled Distance

Results

Plot the 2D displacement versus the frame number for each condition of traveled distance.


The data look good now with the first two frames ignored. Now we can calculate the average of the standard deviation of the 2D displacement and plot these values against the traveled distance.


We observe a power-law relationship, described by the equation: Repositioning Dispersion = 8.2 * Traveled Distance^0.2473

Traveled Distance (um) | Repositioning Dispersion (nm)
0 | 4
1 | 6
10 | 20
100 | 19
1000 | 76
10000 | 56
30000 | 107
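The power-law trendline above can be reproduced with an ordinary least-squares fit in log-log space (a sketch; Excel's power trendline does the same, and the 0 µm point must be skipped because its logarithm is undefined):

```python
import math

def fit_power_law(distances_um, dispersions_nm):
    """Fit Dispersion = a * Distance**b by linear regression on log-log values."""
    pts = [(math.log(d), math.log(s))
           for d, s in zip(distances_um, dispersions_nm) if d > 0]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # exponent
    a = math.exp((sy - b * sx) / n)                 # prefactor
    return a, b
```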

Conclusion

In conclusion, we observe that the traveled distance significantly affects the repositioning dispersion (one-way ANOVA). However, this dispersion is much lower than the lateral resolution of the system (705 nm).

Repositioning Dispersion: Impact of Speed and Acceleration

Results

Generate a plot of the 2D displacement as a function of frame number for each combination of Speed and Acceleration conditions. This visualization will help assess the relationship between displacement and time across the different experimental settings.


As noted earlier, there is a significant displacement between Frame 0 and Frame 1, particularly along the X-axis (600 nm) and, to a lesser extent, the Y-axis (280 nm). To refine our analysis, we will exclude the first two frames and focus on the key variables of interest: (i) Speed and (ii) Acceleration. To better understand the system's behavior, we will visualize the average standard deviation of the 2D displacement for each combination of Speed and Acceleration conditions.

Our observations indicate that both Acceleration and Speed contribute to an increase in 2D repositioning dispersion. However, a two-way ANOVA reveals that only Speed has a statistically significant effect on 2D repositioning dispersion. Post-hoc analysis further demonstrates that the dispersion for the Speed-Fast, Acc-High condition is significantly greater than that of the Speed-Low, Acc-Low condition.


Condition | 2D Repositioning Dispersion (nm)
Speed-Slow, Acc-Low | 32
Speed-Slow, Acc-High | 49
Speed-Fast, Acc-Low | 54
Speed-Fast, Acc-High | 78

Conclusion

In conclusion, we observe that Speed, but not Acceleration, increases the 2D Repositioning Dispersion.

What about the initial shift?

Right, I almost forgot about that. See below.

Results

Plotting the 3D displacement for each tested condition from the previous data.

We observe a single outlying point that corresponds to the displacement between Frame 0 and Frame 1. This leads me to hypothesize that the discrepancy may be related to the stage's dual motors, each controlling a separate axis (X and Y). Each motor operates in two directions (positive and negative). Since the shift occurs only at the first frame, it likely relates to how the experiment is initiated.

To explore this further, I decided to test whether these variables significantly impact the repositioning. We followed the XYZ repositioning dispersion protocol, testing the following parameters:

  • Distance: 1000 µm
  • Speed: 100%
  • Acceleration: 100%
  • Axis: X, Y, XY
  • Starting Point: Centered (on target), Positive (shifted positively from the reference position), Negative (shifted negatively from the reference position) 
  • For each condition, three datasets were acquired.

Data Stage-Repositining_Diagnostic-Data.xlsx was processed as mentioned before, and we plotted the 2D displacement as a function of the frame number for each condition.

When moving along the X-axis only, we observe a shift in displacement when the starting position is either centered or positively shifted, but no shift occurs when the starting position is negatively shifted. This suggests that the behavior of the stage’s motor or the initialization of the experiment may be affected by the direction of the shift relative to the reference position, specifically when moving in the positive direction.

When moving along the Y-axis only, we observe a shift in displacement when the starting position is positively shifted, but no shift occurs when the starting position is either centered or negatively shifted. This indicates that the stage's motor behavior or initialization may be influenced by the direction of the shift, particularly when starting from a positive offset relative to the reference position.

When moving along both the X and Y axes simultaneously, a shift is observed when the starting position is centered. This shift becomes more pronounced when the starting position is positively shifted in any combination of the X and Y axes (+X+Y, +X-Y, -X+Y). However, the shift is reduced when the starting position is negatively shifted along both axes.

Conclusion

In conclusion, the observed shifts in repositioning dispersion are influenced by the initial starting position of the stage. When moving along the X and Y axes simultaneously, the shift is most significant when the starting position is centered and increases with positive shifts in both axes. Conversely, when the starting position is negatively shifted along both axes, the shift is reduced. These findings suggest that the initialization of the stage's position plays a crucial role in the accuracy of movement.

Channel Co-Alignment

Channel co-alignment, or co-registration, refers to the process of aligning image data collected from multiple channels. This ensures that signals originating from the same location in the sample are correctly overlaid. This process is essential in multi-channel imaging to maintain spatial accuracy and avoid misinterpretation of co-localized signals. For a comprehensive guide on channel co-registration, refer to the Chromatic Aberration and Co-Registration protocol by the QuaRep Working Group 04.

Acquisition protocol

  1. Place 4 µm diameter fluorescent beads (TetraSpeck™ Fluorescent Microspheres Size Kit, mounted on a slide) on the microscope stage.

  2. Center an isolated bead under a high-NA dry objective.

  3. Crop the acquisition area to visualize only one bead, but keep it large enough to anticipate a potential chromatic shift along the XY and Z axes
  4. Acquire a multi-channel Z-stack

    I usually acquire all available channels, with the understanding that no more than seven channels can be processed simultaneously. If you have more than seven channels, split them into sets of seven or fewer, keeping one reference channel in all sets.

  5. Acquire 3 datasets

  6. Repeat for each objective

Processing

You should have acquired several multi-channel images that now need processing to yield meaningful results. To process them, use the Channel Co-registration analysis feature of the MetroloJ_QC plugin for FIJI. For more information about the MetroloJ_QC plugin, please refer to the manual available from MontpellierRessourcesImagerie on GitHub.

  1. Open FIJI.
  2. Load your image by dragging it into the FIJI bar.
  3. Launch the MetroloJ_QC plugin by navigating to Plugins > MetroloJ QC.
  4. Click on Channel Co-registration Report.
  5. Enter a Title for your report.
  6. Type in your Name.
  7. Click Microscope Acquisition Parameters and enter:
    1. The imaging modality (Widefield, Confocal, Spinning-Disk, Multi-photon) 
    2. The Numerical Aperture of the Objective used.
    3. The refractive index of the immersion media (Air: 1.0; Water: 1.33; Oil: 1.515)
    4. The Emission Wavelength in nm for each channel.
  8. Click OK

  9. Click Bead Detection Options
    1. Bead Detection Threshold: Legacy
    2. Center Method: Legacy Fit ellipses
  10. Enable Apply Tolerance to the Report and reject co-registration ratios above 1.

  11. Click File Save Options.

  12. Select Save Results as PDF Reports.

  13. Select Save Results as spreadsheets.

  14. Select Save result images.
  15. Select Open individual pdf report(s).
  16. Click OK.

  17. Repeat steps 4 through 16 for each image you have acquired

Good to know: settings are preserved between Channel Co-alignment sessions. You may want to process images acquired with the same objective and the same channels together.

This will generate detailed results stored in a folder named Processed. The Processed folder will be located in the same directory as the original images, with each report and result saved in its own sub-folder.

The following R script processes the channel co-alignment results generated by the MetroloJ_QC Channel Co-registration function: Process Chanel Co-Registration Results.R.

To use it, simply drag and drop the file into the R interface. You may also open it with RStudio and Click the Source button. The script will:

  • Prompt the user to select an input folder.
  • Load all _results.xls files located in the selected folder and subfolders
  • Process and merge all results generated by the MetroloJ_QC Channel Co-registration.
  • Split the filenames using the _ character as a separator and create columns named Variable-001, Variable-002, etc. This will help in organizing the data if the filenames are formatted as Variable-A_Variable-B, for example.
  • Save the result as Channel_Co-Registration_Merged-Data.csv in an Output folder on the user's Desktop

 

After much work, I ended up writing my own script for ImageJ/FIJI to detect and compute Channel Registration Data. This script is available here Channel_Registration_Batch-TrackMate_v12.py

To use it, simply drop the script into the FIJI toolbar and click Run.

  1. If images are already opened it will process them

  2. If no images are opened it will prompt to select a folder and will process all images available in the folder and subfolders

  3. It reads files metadata or load information from previously run instances

  4. The first image is pre-detected and settings are displayed in a dialog

  5. The dialog keeps being displayed until the detection is done properly (1 spot per channel)

  6. Once the detection is correct it will compute the channel registration information
  7. Data is saved as CSV files Channel-Registration_Essential-Data_Merged and Channel-Registration_Full-Data_Merged in an Output directory on the user desktop
  8. In batch mode, the settings set up for the first image are re-used for subsequent images unless a discrepancy is found or detection fails

Importantly, this script does not use the exact same method as the MetroloJ_QC Channel Registration algorithm:

The Nyquist Sampling Ratio and the Resolution are calculated for both the lateral and axial axes in the same way as MetroloJ_QC. However, if the Nyquist Sampling Ratio is above 1, we compute a Maximal Achievable Resolution (named Lateral Practical Resolution and Axial Practical Resolution) by multiplying the Theoretical Resolution by the Nyquist Ratio.
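In code, the rule above amounts to (a sketch; the names are ours):

```python
def practical_resolution(theoretical_res, nyquist_ratio):
    """Maximal Achievable Resolution: when sampling is worse than Nyquist
    (ratio > 1), the theoretical resolution is scaled by the Nyquist ratio."""
    if nyquist_ratio > 1:
        return theoretical_res * nyquist_ratio
    return theoretical_res
```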

We use the Practical Resolutions to compute the Colocalization Ratios. We calculate 3 ratios:

  • Lateral Colocalization Ratio: the distance in the XY plane between the spots identified in Ch2 and Ch1, divided by half of the Practical Lateral Resolution.
    Ratio_Lateral = Dxy / (PracticalResolution_Lateral / 2)
  • Axial Colocalization Ratio: the distance along the Z axis between the spots identified in Ch2 and Ch1, divided by half of the Practical Axial Resolution.
    Ratio_Axial = Dz / (PracticalResolution_Axial / 2)
  • 3D Colocalization Ratio: the 3D distance between the spots identified in Ch2 and Ch1, divided by a Reference Distance. The Reference Distance is the length between Spot 1 and a point projected, along the line formed by Spot 1 and Spot 2, onto the ellipse centered on Spot 1 whose semi-minor axis is half of the Practical Lateral Resolution and whose semi-major axis is half of the Practical Axial Resolution. The coordinates of the Reference Point are computed iteratively by moving along the Spot1-Spot2 line and minimizing the distance to the ellipse surface; the iteration stops when the point lies on the surface.
    Ratio_3D = Distance_3D / Distance_Ref
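The three ratios can be sketched as follows. Note that the reference distance is computed here in closed form (the radius of the resolution ellipse along the Spot1-Spot2 direction), which is the value the script's iterative projection converges to:

```python
import math

def colocalization_ratios(dx, dy, dz, lateral_res, axial_res):
    """Lateral, Axial and 3D colocalization ratios as defined above.

    dx, dy, dz: Ch2 - Ch1 spot offsets; all values share one unit (e.g. um).
    lateral_res, axial_res: the Practical Resolutions.
    """
    a = lateral_res / 2.0          # semi-minor axis (lateral)
    b = axial_res / 2.0            # semi-major axis (axial)
    dxy = math.hypot(dx, dy)       # lateral distance in the XY plane
    ratio_lateral = dxy / a
    ratio_axial = abs(dz) / b
    d3 = math.sqrt(dxy ** 2 + dz ** 2)
    if d3 == 0.0:
        return 0.0, 0.0, 0.0       # spots coincide: perfect co-registration
    # Ellipse radius along the displacement direction: the Reference Distance
    cos_t, sin_t = dxy / d3, dz / d3
    d_ref = 1.0 / math.sqrt((cos_t / a) ** 2 + (sin_t / b) ** 2)
    return ratio_lateral, ratio_axial, d3 / d_ref
```

A purely lateral offset equal to half the practical lateral resolution therefore gives all three ratios at the tolerance limit of 1.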

Metrics

  • 3D Colocalization Ratio: a ratio above 1 indicates that the spot in Channel 2 is farther away than the effective resolution of the system. The Lateral and Axial Colocalization Ratios should then be checked.
  • Lateral and Axial Colocalization Ratios: a ratio above 1 indicates that the spot in Channel 2 is farther away than the effective resolution of the system in the respective plane.
    • A Lateral Colocalization Ratio above 1 indicates that the Ch2 image should be shifted in X and Y by the values indicated in the Pixel Shift Table.
    • An Axial Colocalization Ratio above 1 indicates that the Ch2 image should be shifted in Z by the values indicated in the Pixel Shift Table.

This method is the approach I am using here.

Results

The following spreadsheet provides a dataset that can be manipulated with a pivot table to generate informative graphs and statistics Channel_Co-registration_Template.xlsx.

Metrics

  • The Nyquist ratio evaluates how closely the images align with the Nyquist sampling criterion. It is calculated as: Nyquist Ratio = Pixel Dimension / Nyquist Dimension
    • A ratio of 1 indicates that the image acquisition complies with the Nyquist criterion.
    • A ratio above 1 signifies that the pixel dimensions of the image exceed the Nyquist criterion.
    • A ratio below 1 is the desired outcome, as it ensures proper sampling.
  • The Co-Registration Ratios measure the spatial alignment between two channels by comparing the distance between the centers of corresponding beads in both channels to a reference distance. The reference distance is defined as the size of the fitted ellipse around the bead in the first channel.
    • A ratio of 1 means the center of the bead in the second channel is located on the edge of the ellipse fitted around the bead in the first channel.
    • A ratio above 1 indicates the center of the bead in the second channel lies outside the ellipse around the first channel's bead center.
    • A ratio below 1 is the desired outcome, indicating that the center of the bead in the second channel is within a range smaller than the system's 3D resolution.

This method is the approach used in the MetroloJ QC Channel Co-Registration function.
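As an illustration, the Nyquist Ratio can be computed directly from the acquisition settings. The sketch below approximates the lateral Nyquist dimension with the common widefield rule of thumb λ_em / (4·NA); treat that formula as an assumption and substitute your own Nyquist dimension if you compute it differently:

```python
def nyquist_pixel_size_nm(emission_wavelength_nm, numerical_aperture):
    """Approximate lateral Nyquist pixel size (widefield rule of thumb)."""
    return emission_wavelength_nm / (4.0 * numerical_aperture)

def nyquist_ratio(pixel_size_nm, emission_wavelength_nm, numerical_aperture):
    """Nyquist Ratio = Pixel Dimension / Nyquist Dimension (below 1 desired)."""
    return pixel_size_nm / nyquist_pixel_size_nm(
        emission_wavelength_nm, numerical_aperture
    )
```

For example, a 100 nm pixel imaging GFP emission (~520 nm) through a 1.4 NA objective gives a ratio of about 1.08, i.e. slightly undersampled.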

Let's look at the 3D Colocalization Ratio for all pairs of channels.

For the 2x objective, the 3D Colocalization Ratio is above 1 for the DAPI x GFP and DAPI x Cy5 pairs. This indicates that the chromatic shift is larger than the effective resolution of the system. A correction should be applied to the images after acquisition; it is sometimes possible to apply it before acquisition, directly in the acquisition software. The correction values are provided by the Pixel Shift tables. Highlighted values correspond to a 3D Colocalization Ratio above 1.

 

These results show a widefield instrument using a quad-band pass filter: a single cube filtering four wavelengths. This instrument also has individual filter cubes; with those, the Colocalization Ratios are understandably higher because of the mechanical shift induced by the filter turret.

With the corresponding Pixel Shift Table





Why should you care? When acquiring a multi-channel image, you may see a significant shift between channels. This is particularly true for the combination of the DAPI and Cy3 channels with the 10x objective.


Report the Pixel Shift Table for each objective and each filter combination. This table can (and should) be used to correct a multi-channel image by displacing Channel 2 relative to Channel 1 by the XYZ pixel values indicated.




Pixel shift of Channel_2 relative to Channel_1 (pixels):

| Objective | Channel_1 | Axis | Channel_2: DAPI | GFP | Cy3 | Cy5 |
|---|---|---|---|---|---|---|
| 2x | DAPI | X | | 0.89 | -0.14 | -0.35 |
| | | Y | | 0.19 | 1.63 | 2.00 |
| | | Z | | 0.89 | 3.67 | 1.58 |
| | GFP | X | -0.89 | | -1.04 | -1.25 |
| | | Y | -0.19 | | 1.44 | 1.81 |
| | | Z | -0.89 | | 2.78 | 0.70 |
| | Cy3 | X | 0.14 | 1.04 | | -0.21 |
| | | Y | -1.63 | -1.44 | | 0.37 |
| | | Z | -3.67 | -2.78 | | -2.08 |
| | Cy5 | X | 0.35 | 1.25 | 0.21 | |
| | | Y | -2.00 | -1.81 | -0.37 | |
| | | Z | -1.58 | -0.70 | 2.08 | |
| 10x | DAPI | X | | 0.46 | -0.85 | -1.16 |
| | | Y | | 0.50 | 1.79 | 2.27 |
| | | Z | | 4.22 | 4.44 | 1.91 |
| | GFP | X | -0.46 | | -1.31 | -1.61 |
| | | Y | -0.50 | | 1.29 | 1.77 |
| | | Z | -4.22 | | 0.22 | -2.31 |
| | Cy3 | X | 0.85 | 1.31 | | -0.30 |
| | | Y | -1.79 | -1.29 | | 0.48 |
| | | Z | -4.44 | -0.22 | | -2.53 |
| | Cy5 | X | 1.16 | 1.61 | 0.30 | |
| | | Y | -2.27 | -1.77 | -0.48 | |
| | | Z | -1.91 | 2.31 | 2.53 | |
| 20x | DAPI | X | | 0.58 | -0.77 | -1.06 |
| | | Y | | 0.13 | 1.23 | 1.54 |
| | | Z | | 3.31 | 3.95 | 2.09 |
| | GFP | X | -0.58 | | -1.35 | -1.64 |
| | | Y | -0.13 | | 1.10 | 1.41 |
| | | Z | -3.31 | | 0.64 | -1.22 |
| | Cy3 | X | 0.77 | 1.35 | | -0.29 |
| | | Y | -1.23 | -1.10 | | 0.31 |
| | | Z | -3.95 | -0.64 | | -1.86 |
| | Cy5 | X | 1.06 | 1.64 | 0.29 | |
| | | Y | -1.54 | -1.41 | -0.31 | |
| | | Z | -2.09 | 1.22 | 1.86 | |
| 63x | DAPI | X | | 0.13 | -1.52 | -2.03 |
| | | Y | | 0.13 | 1.19 | 1.66 |
| | | Z | | 0.79 | 1.31 | 0.93 |
| | GFP | X | -0.13 | | -1.65 | -2.16 |
| | | Y | -0.13 | | 1.06 | 1.53 |
| | | Z | -0.79 | | 0.52 | 0.13 |
| | Cy3 | X | 1.52 | 1.65 | | -0.51 |
| | | Y | -1.19 | -1.06 | | 0.47 |
| | | Z | -1.31 | -0.52 | | -0.39 |
| | Cy5 | X | 2.06 | 2.22 | 0.51 | |
| | | Y | -1.73 | -1.59 | -0.47 | |
| | | Z | -0.92 | -0.12 | 0.39 | |


Conclusion


Legend (Wait for it)...

For a comprehensive guide on Detectors, refer to the Detector Performances of the QuaRep Working Group 02.

Acquisition protocol 


Results


Conclusion


Legend (Wait for it...) dary

For a comprehensive guide on Lateral and Axial Resolution, refer to the Lateral and Axial Resolution of the QuaRep Working Group 05.

Acquisition protocol 


Results


Conclusion




List of Templates

  File Modified
Microsoft Excel Spreadsheet Channel_Co-registration_Data_Template.xlsx Dec 25, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Channel_Co-registration_Tempalte.xlsx Dec 20, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Channel_Co-registration_Template.xlsx Dec 25, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Field_Uniformity_Template.xlsx Dec 18, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Illumination_Linearity_Template.xlsx Dec 19, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Illumination_Long-Term_Stability_Log.xlsx Jan 06, 2025 by Nicolas Stifani
Microsoft Excel Spreadsheet Illumination_Maximum Power Output_Template.xlsx Jan 06, 2025 by Nicolas Stifani
Microsoft Excel Spreadsheet Illumination_Stability_Template.xlsx Jan 06, 2025 by Nicolas Stifani
Microsoft Excel Spreadsheet Illumination_Uniformity_Template.xlsx Dec 18, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Illumination_Warmup Kinetic_Template.xlsx Dec 19, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Illumination Power Linearity_Template.xlsx Dec 17, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Illumination Stability_Template.xlsx Dec 18, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Illumination Warmup Kinetic_Template.xlsx Dec 17, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Maximum Illumination Power Output_Template.xlsx Dec 18, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Objective and cube transmittance_Template.xlsx Jan 06, 2025 by Nicolas Stifani
Microsoft Excel Spreadsheet Objective and cube transmittance.xlsx Dec 12, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Stage-Repositining_Diagnostic-Data.xlsx Dec 26, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Stage Repositioning_Template.xlsx Jan 06, 2025 by Nicolas Stifani
Microsoft Excel Spreadsheet Stage-Repositioning_Template.xlsx Dec 30, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet Stage Repositioning Dispersion_Template.xlsx Dec 30, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet XY Repositioning Accuracy_Template_All-Files.xlsx Dec 15, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet XY Repositioning Accuracy_Template.xlsx Dec 19, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet XYZ_Repositining_Diagnostic_Data.xlsx Dec 19, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet XYZ_Repositioning-Accuracy_Template.xlsx Dec 19, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet XYZ_Repositioning Dispersion_Template.xlsx Dec 19, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet XYZ_Repositioning Dispersion_Traveled Distance_Template.xlsx Dec 19, 2024 by Nicolas Stifani
Microsoft Excel Spreadsheet XYZ Drift Kinetic_Template.xlsx Dec 19, 2024 by Nicolas Stifani
