
This page provides a practical guide for microscope quality control. By following the outlined steps, utilizing the provided template files, and running the included scripts, you will have everything needed to easily generate a comprehensive report on your microscope's performance.

Equipment used

  • Thorlabs Power Meter (PM400) and sensor (S170C)
  • Thorlabs Fluorescent Slides (FSK5)
  • TetraSpeck™ Fluorescent Microspheres Size Kit (mounted on slide) ThermoFisher (T14792)

Software used

Please note that during quality control, you may, and likely will, encounter defects or unexpected behavior. This practical guide is not intended to assist with investigating or resolving these issues. With that said, we wish you the best of luck and are committed to providing support. Feel free to reach out to us at microscopie@cib.umontreal.ca



Illumination Warmup Kinetic

When starting light sources, they require time to reach a stable and steady state. This duration is referred to as the warm-up period. To ensure accurate performance, it is essential to record the warm-up kinetics at least once a year to precisely define this period. For a detailed exploration of illumination stability, refer to the Illumination Power, Stability, and Linearity Protocol by the QuaRep Working Group 01.

Acquisition protocol

  1. Place a power meter sensor (e.g., Thorlabs S170C) on the microscope stage.

  2. Center the sensor with the objective to ensure proper alignment.

  3. Zero the sensor to ensure accurate readings.

  4. Using your power meter controller (e.g., Thorlabs PM400) or compatible software, select the wavelength of the light source you want to monitor.

  5. Turn on the light source and immediately record the power output over time until it stabilizes.

    I personally record every 10 seconds for up to 24 hours and stop once the output has been stable for 1 h.

  6. Repeat steps 3 to 5 for each light source you want to monitor

    Keep the light source turned on at all times. Depending on your hardware, the light source may remain continuously on or be automatically shut down by the software when not in use.

Results

Use the provided spreadsheet template, Illumination_Warmup Kinetic_Template.xlsx, and fill in the orange cells to visualize your results. For each light source, plot the measured power output (in mW) against time to analyze the data.

Calculate the relative power using the formula: Relative Power = (Power / Max Power). Then, plot the Relative Power (%) against time to visualize the data.

We observe some variability in the power output for the 385 nm light source.

To assess the stability:

  1. Define a Stability Duration Window:
    Select a time period (e.g., 10 minutes) during which the power output should remain stable.

  2. Specify a Maximum Coefficient of Variation (CV) Threshold:
    Determine an acceptable variability limit for the selected window (e.g., 0.01%).

  3. Calculate the Coefficient of Variation (CV):
    Use the formula: CV = (Standard Deviation / Mean)
    Compute the CV for the specified stability duration window.

  4. Visualize Stability:
    Plot the calculated CV over time to analyze the stability of the power output.

We observe that most light sources stabilize quickly, in less than 10 minutes, while the 385 nm light source takes approximately 41 minutes to reach the stability threshold. The template also calculates the Stability Factor (S) using the formula: S (%) = 100 × (1 - (Pmax - Pmin) / (Pmax + Pmin))
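If you prefer scripting to the spreadsheet, the same calculations can be reproduced in a few lines. The sketch below is illustrative only: it assumes a CSV export with time_s and power_mW columns (hypothetical names) sampled every 10 s, computes the rolling CV over a 10-minute window and the Stability Factor, and reports the first time the CV drops below 0.01%.

Warm-up Kinetic Analysis (sketch)
import pandas as pd

WINDOW_S = 10 * 60        # stability duration window: 10 minutes
SAMPLING_S = 10           # one reading every 10 seconds
CV_MAX = 0.0001           # maximum coefficient of variation: 0.01%

df = pd.read_csv("warmup_385nm.csv")                      # hypothetical export with time_s, power_mW columns
df = df.sort_values("time_s").reset_index(drop=True)

# Relative power with respect to the maximum measured power
df["relative_power"] = df["power_mW"] / df["power_mW"].max()

# Rolling coefficient of variation over the stability window
window = int(WINDOW_S / SAMPLING_S)
rolling = df["power_mW"].rolling(window)
df["cv"] = rolling.std() / rolling.mean()

# Stability Factor over the whole recording: S (%) = 100 * (1 - (Pmax - Pmin) / (Pmax + Pmin))
p_max, p_min = df["power_mW"].max(), df["power_mW"].min()
stability_factor = 100 * (1 - (p_max - p_min) / (p_max + p_min))

# Warm-up time = first time the rolling CV falls below the threshold
stable = df[df["cv"] < CV_MAX]
warmup_time_min = stable["time_s"].iloc[0] / 60 if not stable.empty else None

print("Stability Factor: %.2f %%" % stability_factor)
print("Warm-up time (min):", warmup_time_min)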

Report the results in a table


                                                  | 385 nm | 475 nm | 555 nm | 630 nm
Stabilisation time (min, Max CV 0.01% for 10 min) | 41     | 3      | 3      | 8
Stability Factor (%) Before Warmup                | 99.7%  | 99.9%  | 100.0% | 100.0%
Stability Factor (%) After Warmup                 | 100.0% | 100.0% | 100.0% | 99.9%

 Selected Stability Duration Window (min): 10 min and Maximum Coefficient of Variation: 0.01%.

Metrics

  • The Stability Factor focuses on the range of values (the difference between the maximum and minimum) relative to their sum, providing an intuitive measure of how tightly the system's behavior stays within a defined range; the closer to 100%, the higher the stability.
  • The Coefficient of Variation focuses on the dispersion of all data points (via the standard deviation) relative to the mean; a lower coefficient indicates better stability around the mean.

Conclusion

The illumination warm-up time for this instrument is approximately 40 minutes. This duration is essential for ensuring accurate quantitative measurements, as the Coefficient of Variation (CV) threshold is strict, with a maximum allowable variation of 0.01% within a 10-minute window.

Illumination Maximum Power Output

This measure assesses the maximum power output of each light source, considering both the quality of the light source and the components along the light path. Over time, we expect a gradual decrease in power output due to the aging of hardware, including the light source and other optical components. These measurements will also be used to track the performance of the light sources over their lifetime (see Long-Term Illumination Stability section). For a detailed exploration of illumination properties, refer to the Illumination Power, Stability, and Linearity Protocol by the QuaRep Working Group 01.

Acquisition protocol

  1. Warm up the light sources for the required duration (see previous section).
  2. Place the power meter sensor (e.g., Thorlabs S170C) on the microscope stage.
  3. Center the sensor with the objective to ensure proper alignment.
  4. Zero the sensor to ensure accurate readings.
  5. Using your power meter controller (e.g., Thorlabs PM400) or software, select the wavelength of the light source you wish to monitor.
  6. Turn the light source on to 100% power.
  7. Record the average power output over 10 seconds.
    I personally re-use the data collected during the warm-up kinetic experiment (when the power is stable) for this purpose.
  8. Repeat steps 5 to 7 for each light source and wavelength you wish to monitor.

Results

Fill in the orange cells in the Illumination_Maximum Power Output_Template.xlsx spreadsheet to visualize your results. For each light source, plot the measured maximum power output (in mW).

Plot the measured maximum power output (in mW) and compare it to the manufacturer’s specifications. Calculate the Relative Power using the formula: Relative Power = (Measured Power / Manufacturer Specifications) and plot the relative power for each light source


Report the results in a table


       | Manufacturer Specifications (mW) | Measurements 2024-11-22 (mW) | Relative Power (%)
385nm  | 150.25 | 122.2 | 81%
470nm  | 110.4  | 95.9  | 87%
555nm  | 31.9   | 24    | 75%
630nm  | 52     | 39.26 | 76%

Metrics

  • The Maximum Power indicates how much light is provided by the instrument.
  • The Relative Power to Specifications indicates how much power is provided compared to the specifications.

Conclusion

This instrument provides 80% of the power specified by the manufacturer. These results are expected, as the manufacturer's specifications are based on a different objective, and likely different filters and mirrors, which can affect the measured power output.

Illumination Stability

The light sources used on a microscope should remain constant or at least stable over the time scale of an experiment. For this reason, illumination stability is recorded across four different time scales:

  • Real-time Illumination Stability: Continuous recording for 1 minute. This represents the duration of a z-stack acquisition.
  • Short-term Illumination Stability: Recording every 1-10 seconds for 5-15 minutes. This represents the duration needed to acquire several images.
  • Mid-term Illumination Stability: Recording every 10-30 seconds for 1-2 hours. This represents the duration of a typical acquisition session or short time-lapse experiments. For longer time-lapse experiments, a longer duration may be used.
  • Long-term Illumination Stability: Recording once a year or more over the lifetime of the instrument.

For a detailed exploration of illumination stability, refer to the Illumination Power, Stability, and Linearity Protocol by the QuaRep Working Group 01.

Real-time Illumination Stability

Acquisition protocol

  1. Warm up the light sources for the required duration (see previous section).
  2. Place the power meter sensor (e.g., Thorlabs S170C) on the microscope stage.
  3. Center the sensor with the objective to ensure proper alignment.
  4. Zero the sensor to ensure accurate readings.
  5. Using your power meter controller (e.g., Thorlabs PM400) or software, select the wavelength of the light source you wish to monitor.
  6. Turn the light source on to 100% power.
  7. Record the power output every 100 ms for 1 minute. For some microscopes dedicated to fast imaging, it might be necessary to record stability at a faster rate; the Thorlabs S170C sensor can record at 1 kHz.

    I personally acquire this data immediately after the warm-up kinetic experiment, without turning off the light source.

  8. Repeat steps 5 to 7 for each light source and wavelength you wish to monitor.

 Results

Fill in the orange cells in the Illumination_Stability_Template.xlsx spreadsheet to visualize your results. For each light source, plot the measured power output (in mW) over time.

Calculate the relative power using the formula: Relative Power = (Power / Max Power). Then, plot the Relative Power (%) over time.

Calculate the Stability Factor (S) using the formula: S (%) = 100 × (1 - (Pmax - Pmin) / (Pmax + Pmin)). Also, calculate the Coefficient of Variation (CV) using the formula: CV = Standard Deviation / Mean.

Report the results in a table.


       | Stability Factor | Coefficient of Variation
385nm  | 99.99% | 0.002%
475nm  | 99.99% | 0.002%
555nm  | 99.97% | 0.004%
630nm  | 99.99% | 0.002%

From the Stability Factor results, we observe that the difference between the maximum and minimum power is less than 0.03%. Additionally, the Coefficient of Variation indicates that the standard deviation is less than 0.004% of the mean value, demonstrating excellent power stability.

Conclusion

The light sources exhibit a very high stability, >99.9% during a 1-minute period.

Short-term Illumination Stability

Acquisition protocol

  1. Warm up the light sources for the required duration (see previous section).
  2. Place the power meter sensor (e.g., Thorlabs S170C) on the microscope stage.
  3. Ensure the sensor is properly centered with the objective for accurate measurements.
  4. Zero the sensor to ensure precise readings.
  5. Using your power meter controller (e.g., Thorlabs PM400) or software, select the desired wavelength of the light source.
  6. Turn on the light source to 100% intensity.
  7. Record the power output every 10 seconds for 15 minutes.

    I personally re-use the data collected during the warm-up kinetic experiment (when the power is stable) for this purpose.

  8. Repeat steps 5 to 7 for each light source you wish to monitor.

Results

Fill in the orange cells in the Illumination_Stability_Template.xlsx spreadsheet to visualize your results. For each light source, plot the measured power output (in mW) over time.

Calculate the relative power using the formula: Relative Power = Power / Max Power. Then, plot the Relative Power (%) over time.

Calculate the Stability Factor (S) using the formula: S (%) = 100 × (1 - (Pmax - Pmin) / (Pmax + Pmin)). Also, calculate the Coefficient of Variation (CV) using the formula: CV = Standard Deviation / Mean.

Report the results in a table.


       | Stability Factor | Coefficient of Variation
385nm  | 100.00% | 0.000%
475nm  | 100.00% | 0.002%
555nm  | 100.00% | 0.003%
630nm  | 99.99%  | 0.004%

From the Stability Factor results, we observe that the difference between the maximum and minimum power is less than 0.01%. Additionally, the Coefficient of Variation indicates that the standard deviation is less than 0.004% of the mean value, demonstrating excellent power stability.

Conclusion

The light sources exhibit high stability, maintaining >99.9% stability during a 15-minute period.

Mid-term Illumination Stability

Acquisition protocol

  1. Warm up the light sources for the required duration (see previous section).
  2. Place the power meter sensor (e.g., Thorlabs S170C) on the microscope stage.
  3. Ensure the sensor is properly centered with the objective for accurate measurements.
  4. Zero the sensor to ensure precise readings.
  5. Using your power meter controller (e.g., Thorlabs PM400) or software, select the desired wavelength of the light source.
  6. Turn on the light source to 100% intensity.
  7. Record the power output every 10 seconds for 1 hour.

    I personally re-use the data collected during the warmup kinetic experiment.

  8. Repeat steps 5 to 7 for each light source you wish to monitor

Results

Fill in the orange cells in the Illumination_Stability_Template.xlsx spreadsheet to visualize your results. For each light source, plot the measured power output (in mW) over time to assess stability.

Calculate the relative power using the formula: Relative Power = Power / Max Power. Then, plot the Relative Power (%) over time.

Calculate the Stability Factor (S) using the formula: S (%) = 100 × (1 - (Pmax - Pmin) / (Pmax + Pmin)). Also, calculate the Coefficient of Variation (CV) using the formula: CV = Standard Deviation / Mean. Report the results in a table.


       | Stability Factor | Coefficient of Variation
385nm  | 99.98% | 0.013%
475nm  | 99.98% | 0.011%
555nm  | 99.99% | 0.007%
630nm  | 99.97% | 0.020%

From the Stability Factor results, we observe that the difference between the maximum and minimum power is less than 0.03%. Additionally, the Coefficient of Variation indicates that the standard deviation is less than 0.02% of the mean value, demonstrating excellent power stability.

Conclusion

The light sources exhibit exceptional stability, maintaining a performance of >99.9% during a 1-hour period.

Long-term Illumination Stability

Long-term illumination stability measures the power output over the lifetime of the instrument. Over time, we expect a gradual decrease in power output due to the aging of hardware, including the light source and other optical components. These measurements are not a separate experiment per se; rather, they track the maximum power output over time.

Acquisition protocol

  1. Warm up the light sources for the required duration (see previous section).
  2. Place the power meter sensor (e.g., Thorlabs S170C) on the microscope stage.
  3. Center the sensor with the objective to ensure proper alignment.
  4. Zero the sensor to ensure accurate readings.
  5. Using your power meter controller (e.g., Thorlabs PM400) or software, select the wavelength of the light source you wish to monitor.
  6. Turn the light source on to 100% power.
  7. Record the average power output over 10 seconds.
    I personally re-use the data collected for the maximal power output section and plot it over time.
  8. Repeat steps 5 to 7 for each light source and wavelength you wish to monitor.

Results

Fill in the orange cells in the Illumination_Stability_Template.xlsx spreadsheet to visualize your results. For each light source, plot the measured power output (in mW) over time to assess the stability of the illumination.

Calculate the relative power using the formula: Relative Power = Power / Max Power. Then, plot the Relative Power (%) over time.

Calculate the Relative PowerSpec by comparing the measured power to the manufacturer’s specifications using the following formula: Relative PowerSpec = Power / PowerSpec. Then, plot the Relative PowerSpec (% Spec) over time.

We expect a gradual decrease in power output over time due to the aging of hardware. Light sources should be replaced when the Relative PowerSpec falls below 50%.
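As a complement to the template, the following sketch shows one way to track the relative power against the specification year over year and flag a source for replacement below the 50% threshold. The specification and 2024 values are taken from the tables in this report; the earlier measurements are hypothetical placeholders.

Long-term Power vs Specification (sketch)
import pandas as pd

spec_mW = {"385nm": 150.25, "470nm": 110.4, "555nm": 31.9, "630nm": 52.0}   # manufacturer specifications (from the table above)

# Hypothetical yearly maximum-power measurements (mW); only the 2024 values come from this report
history = pd.DataFrame({
    "date":  ["2022-11-20", "2023-11-21", "2024-11-22"],
    "385nm": [135.0, 128.4, 122.2],
    "630nm": [46.1, 42.8, 39.26],
})

for source, spec in spec_mW.items():
    if source not in history.columns:
        continue
    relative = history[source] / spec                      # Relative PowerSpec = Power / PowerSpec
    latest = relative.iloc[-1]
    status = "replace (below 50% of spec)" if latest < 0.5 else "OK"
    print("%s: %.0f%% of spec -> %s" % (source, 100 * latest, status))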

Report the results in a table.


Stability Factor Coefficient of Variation
385nm94.51%3.49%
475nm93.59%4.42%
555nm88.96%6.86%
630nm89.46%6.71%

Conclusion

The light sources are reasonably stable over the last two years, but a decrease in the maximum power output is apparent.

Illumination Stability Conclusions

Stability Factor | Real-time (1 min) | Short-term (15 min) | Mid-term (1 h)
385nm | 99.99% | 100.00% | 99.98%
475nm | 99.99% | 100.00% | 99.98%
555nm | 99.97% | 100.00% | 99.99%
630nm | 99.99% | 99.99%  | 99.97%

The light sources are highly stable (Stability >99.9%).

Metrics

  • The Stability Factor focuses on the range of values (the difference between the maximum and minimum) relative to their sum, providing an intuitive measure of how tightly the system's behavior stays within a defined range; the closer to 100%, the higher the stability.
  • The Coefficient of Variation focuses on the dispersion of all data points (via the standard deviation) relative to the mean; a lower coefficient indicates better stability around the mean.


Illumination Input-Output Linearity

This measure compares the power output as the input varies. A linear relationship is expected between the input and the power output. For a detailed exploration of illumination linearity, refer to the Illumination Power, Stability, and Linearity Protocol by the QuaRep Working Group 01.

Acquisition protocol

  1. Warm up the light sources for the required duration (refer to the previous section).
  2. Place the power meter sensor (e.g., Thorlabs S170C) on the microscope stage.
  3. Ensure the sensor is properly centered with the objective for accurate measurements.
  4. Zero the sensor to ensure precise readings.
  5. Using your power meter controller (e.g., Thorlabs PM400) or software, select the wavelength of the light source you wish to monitor.
  6. Turn on the light source and adjust its intensity to 0%, then incrementally increase to 10%, 20%, 30%, and so on, up to 100%.
    I typically collect this data immediately after the warm-up kinetic phase and once the real-time power stability data has been recorded.
  7. Record the power output corresponding to each input level.
  8. Repeat steps 5 to 7 for each light source you wish to monitor

Results

Fill in the orange cells in the Illumination_Linearity_Template.xlsx spreadsheet to visualize your results. For each light source, plot the measured power output (in mW) as a function of the input (%).

Calculate the Relative Power using the formula: Relative Power = Power / MaxPower. Then, plot the Relative Power (%) as a function of the input (%).

Determine the equation for each curve, which is typically a linear relationship of the form: Output = K × Input. Report the slope (K) and the coefficient of determination (R²) for each curve in a table.
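The slope and R² can also be computed outside the spreadsheet. The sketch below fits Output = K × Input through the origin on placeholder data (after normalizing both axes to their maxima so the expected slope is 1) and reports the coefficient of determination.

Linearity Fit (sketch)
import numpy as np

input_pct = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
power_mW  = np.array([0.0, 12.1, 24.3, 36.6, 48.8, 61.2, 73.4, 85.5, 97.8, 110.0, 122.2])  # placeholder readings

# Normalise both axes so the expected slope is 1
x = input_pct / input_pct.max()
y = power_mW / power_mW.max()

# Least-squares slope through the origin: K = sum(x*y) / sum(x*x)
K = np.sum(x * y) / np.sum(x * x)

# Coefficient of determination of the fit
residuals = y - K * x
r_squared = 1 - np.sum(residuals**2) / np.sum((y - y.mean())**2)

print("Slope K = %.4f, R^2 = %.4f" % (K, r_squared))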


Illumination Input-Output Linearity

       | Slope  | R²
385nm  | 0.9969 | 1
475nm  | 0.9984 | 1
555nm  | 1.0012 | 1
630nm  | 1.0034 | 1

The slopes demonstrate a nearly perfect linear relationship between the input and the measured output power, with values very close to 1. The coefficient of determination (R²) indicates a perfect linear fit, showing no deviation from the expected relationship.

Metrics

  • The Slope indicates the rate of change between Input and Output.
  • The Coefficient of Determination indicates how well the data fit a linear relationship.

Conclusion

The light sources are highly linear, with an average slope of 0.999 and a perfect fit (R² = 1).

Objectives and Cubes Transmittance

Since we are using a power meter, we can easily assess the transmittance of the objectives and filter cubes. This measurement compares the power output when different objectives and filter cubes are in the light path. It evaluates the transmittance of each objective and compares it with the manufacturer’s specifications. This method can help detect defects or dirt on the objectives. It can also verify the correct identification of the filters installed in the microscope.

Objectives Transmittance

Acquisition protocol

  1. Warm up the light sources for the required duration (refer to the previous section).
  2. Place the power meter sensor (e.g., Thorlabs S170C) on the microscope stage.
  3. Ensure the sensor is properly centered with the objective.
  4. Zero the sensor to ensure accurate readings.
  5. Using your power meter controller (e.g., Thorlabs PM400) or software, select the wavelength of the light source you wish to monitor.
  6. Turn on the light source to 100% intensity.
  7. Record the power output for each objective, as well as without the objective in place.

    I typically collect this data after completing the warm-up kinetic phase, followed by the real-time power stability measurements, and immediately after recording the power output linearity.

  8. Repeat steps 5 to 7 for each light source and wavelength

Results

Fill in the orange cells in the Objective and cube transmittance_Template.xlsx spreadsheet to visualize your results. For each objective, plot the measured power output (in mW) as a function of the wavelength (in nm).

Calculate the Relative Transmittance using the formula: Relative Transmittance = Power / PowerNoObjective. Then, plot the Relative Transmittance (%) as a function of the wavelength (in nm).

Calculate the average transmittance for each objective and report the results in a table. Compare the average transmittance to the specifications provided by the manufacturer to assess performance.
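If you prefer scripting, the sketch below shows one way to compute the relative transmittance per wavelength and average it over the 470-630 nm range for comparison with the manufacturer's figure. The power readings and column names are hypothetical placeholders.

Objective Transmittance (sketch)
import pandas as pd

# Hypothetical power readings (mW) per excitation wavelength
data = pd.DataFrame({
    "wavelength_nm": [385, 475, 555, 630],
    "no_objective":  [12.0, 9.5, 2.4, 3.9],
    "20x-0.5":       [6.6, 6.5, 1.7, 2.7],
})

# Relative Transmittance = Power / PowerNoObjective
data["relative_transmittance"] = data["20x-0.5"] / data["no_objective"]

# Average transmittance over the 470-630 nm range, for comparison with the specification
in_range = data["wavelength_nm"].between(470, 630)
avg_470_630 = data.loc[in_range, "relative_transmittance"].mean()
print("20x-0.5 average transmittance [470-630 nm]: %.0f%%" % (100 * avg_470_630))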


             | Average Transmittance | Specifications [470-630] | Average Transmittance [470-630]
2.5x-0.075   | 77% | >90% | 84%
10x-0.25-Ph1 | 60% | >80% | 67%
20x-0.5 Ph2  | 62% | >80% | 68%
63x-1.4      | 29% | >80% | 35%

The measurements are generally close to the specifications, with the exception of the 63x-1.4 objective. This deviation is expected, as the 63x objective has a smaller back aperture, which reduces the amount of light it can receive. Additionally, you can compare the shape of the transmittance curves to further assess performance.

Conclusion

The objectives are transmitting light properly.

Cubes Transmittance

Acquisition protocol

  1. Warm up the light sources for the required duration (refer to the previous section).
  2. Place the power meter sensor (e.g., Thorlabs S170C) on the microscope stage.
  3. Ensure the sensor is properly centered with the objective.
  4. Zero the sensor to ensure accurate readings.
  5. Using your power meter controller (e.g., Thorlabs PM400) or software, select the wavelength of the light source you wish to monitor.
  6. Turn on the light source to 100% intensity.
  7. Record the power output for each filter cube.

    I typically collect this data after completing the warm-up kinetic phase, followed by the real-time power stability measurements, and immediately after recording the power output linearity.

  8. Repeat steps 5 to 7 for each light source and wavelength

Results

Fill in the orange cells in the Objective and cube transmittance_Template.xlsx spreadsheet to visualize your results. For each filter cube, plot the measured power output (in mW) as a function of the wavelength (in nm).

Calculate the Relative Transmittance using the formula: Relative Transmittance = Power / PowerMaxFilter. Then, plot the Relative Transmittance (%) as a function of the wavelength (in nm).

Calculate the average transmittance for each filter at the appropriate wavelengths and report the results in a table.


                 | 385  | 475  | 555  | 590  | 630
DAPI/GFP/Cy3/Cy5 | 100% | 100% | 100% | 100% | 100%
DAPI             | 14%  | 0%   | 0%   | 8%   | 0%
GFP              | 0%   | 47%  | 0%   | 0%   | 0%
DsRed            | 0%   | 0%   | 47%  | 0%   | 0%
DHE              | 0%   | 0%   | 0%   | 0%   | 0%
Cy5              | 0%   | 0%   | 0%   | 0%   | 84%
  • The DAPI cube transmits only 14% of the excitation light compared to the Quad Band Pass DAPI/GFP/Cy3/Cy5. While it is still usable, it will provide a low signal. This is likely because the excitation filter within the cube does not match the light source properly. Since an excitation filter is already included in the light source, the filter in this cube could be removed.

  • The GFP and DsRed cubes transmit 47% of the excitation light compared to the Quad Band Pass DAPI/GFP/Cy3/Cy5, and they are functioning properly.

  • The DHE cube does not transmit any light from the Colibri. This cube may need to be removed and stored.

  • The Cy5 cube transmits 84% of the excitation light compared to the Quad Band Pass DAPI/GFP/Cy3/Cy5, and it is working properly.

Conclusion

Actions to be Taken:

  • Remove the excitation filter from the DAPI cube, as it does not properly match the light source and is redundant with the excitation filter already included in the light source.
  • Remove and store the DHE cube, as it does not transmit any light from the Colibri and is no longer functional.

We are done with the power meter (wink).

Field Illumination Uniformity

Having confirmed the stability of our light sources and verified that the optical components (objectives and filter cubes) are transmitting light effectively, we can now proceed to evaluate the uniformity of the illumination. This step assesses how evenly the illumination is distributed. For a comprehensive guide on illumination uniformity, refer to the Illumination Uniformity by the QuaRep Working Group 03.

Acquisition protocol 

  1. Place a fluorescent plastic slide (Thorlabs FSK5) onto the stage.
  2. Center the slide and the objective.
  3. Adjust the focus to align with the surface of the slide.

    I typically use a red fluorescent slide and focus on a scratch mark on its surface for alignment.

  4. Slightly adjust the focus deeper into the slide to minimize the visibility of dust, dirt, and scratches.

  5. Modify the acquisition parameters to ensure the image is properly exposed.

    I typically use the auto-exposure feature, aiming for a targeted intensity of 30%.

  6. Capture a multi-channel image.
  7. Repeat steps 5 and 6 for each objective and filter combination.

Processing

You should have acquired several multi-channel images that now need processing to yield meaningful results. To process them, use the Field Illumination analysis function of the MetroloJ_QC plugin for FIJI. For more information about the MetroloJ_QC plugin, please refer to the manual available from MontpellierRessourcesImagerie on GitHub.

  1. Open FIJI.
  2. Load your image by dragging it into the FIJI bar.
  3. Launch the MetroloJ_QC plugin by navigating to Plugins > MetroloJ QC.
  4. Click on Field Illumination Report.
  5. Enter a Title for your report; I typically use the date.
  6. Type in your Name.
  7. Click Filter Parameters and input the filter's names, excitation, and emission wavelengths.
  8. Check Remove Noise using Gaussian Blur.

  9. Enable Apply Tolerance to the Report and reject uniformity and accuracy values below 80%.

  10. Click File Save Options.

  11. Select Save Results as PDF Reports.

  12. Select Save Results as spreadsheets.

  13. Click OK.

  14. Repeat steps 4 through 13 for each image you have acquired.

This will generate detailed results stored in a folder named Processed. The Processed folder will be located in the same directory as the original images, with each report and result saved in its own sub-folder.

The following R script, Process Illumination Uniformity Results.R, will process the Illumination Uniformity results generated by MetroloJ_QC. To use it, simply drag and drop the file into the R interface, or open it with RStudio and click the Source button. The script will:

  • Prompt the user to select an input folder.
  • Load all _results.xls files located in the selected folder and subfolders
  • Process and merge all results generated by the MetroloJ_QC Field Illumination.
  • Split the filenames using the _ character as a separator and create columns named Variable-001, Variable-002, etc. This will help in organizing the data if the filenames are formatted as Variable-A_Variable-B, for example.
  • Save the result as Field-Uniformity_Merged-Data.csv in an Output folder on the user's Desktop
Process Illumination Uniformity Results
# Clear the workspace
rm(list = ls())

# Ensure required packages are installed and loaded
if (!require(tcltk)) install.packages("tcltk", dependencies = TRUE)
library(tcltk)

# Set default input and output directories
default_input_dir <- file.path(Sys.getenv("USERPROFILE"), "Desktop", "Input")  # Default to "Input" on Desktop
InputFolder <- default_input_dir  # Use default folder
InputFolder <- tclvalue(tkchooseDirectory(initialdir = default_input_dir, title = "Select a folder containing XLS files"))

# If no folder is selected (dialog cancelled), fall back to the default
if (is.na(InputFolder) || !nzchar(InputFolder)) {
  InputFolder <- default_input_dir
}

# Specify the Output folder (this is a fixed folder on the Desktop)
OutputFolder <- file.path(Sys.getenv("USERPROFILE"), "Desktop", "Output")
if (!dir.exists(OutputFolder)) dir.create(OutputFolder, recursive = TRUE)

# List all XLS files ending with "_results.xls" in the folder and its subfolders
xls_files <- list.files(path = InputFolder, pattern = "_results.xls$", full.names = TRUE, recursive = TRUE)
if (length(xls_files) == 0) stop("No _results.xls files found in the selected directory.")

# Define the header pattern to search for
header_pattern <- "Filters combination/set.*Uniformity.*Field Uniformity.*Centering Accuracy.*Image.*Coef. of Variation.*Mean c fit value"

# Initialize a list to store tables
all_tables <- list()

# Loop over all XLS files
for (file in xls_files) {
  # Read the file as raw lines (since it's tab-separated)
  lines <- readLines(file)
  
  # Find the line that matches the header pattern
  header_line_index <- grep(header_pattern, lines)
  
  if (length(header_line_index) == 0) {
    cat("No matching header found in file:", file, "\n")
    next  # Skip this file if the header isn't found
  }
  
  # Extract the table starting from the header row
  start_row <- header_line_index[1]
  
  # The table might end at an empty line, so let's find the next empty line after the header
  blank_lines <- grep("^\\s*$", lines[start_row:length(lines)])
  if (length(blank_lines) == 0) {
    end_row <- length(lines)  # If no empty line is found, use the last line
  } else {
    end_row <- min(blank_lines) + start_row - 1
  }
  
  # Extract the table rows (lines between start_row and end_row)
  table_lines <- lines[start_row:end_row]
  
  # Convert the table lines into a data frame
  table_data <- read.table(text = table_lines, sep = "\t", header = TRUE, stringsAsFactors = FALSE)
  
  # Add the filename as the first column (without extension)
  filename_no_extension <- tools::file_path_sans_ext(basename(file))
  table_data$Filename <- filename_no_extension
  
  # Split the filename by underscores and add as new columns
  filename_parts <- unlist(strsplit(filename_no_extension, "_"))
  for (i in 1:length(filename_parts)) {
    table_data[[paste0("Variable-", sprintf("%03d", i))]] <- filename_parts[i]
  }
  
  # Rename columns based on the exact names found in the data
  colnames(table_data) <- gsub("Filters.combination.set", "Channel", colnames(table_data))
  colnames(table_data) <- gsub("Uniformity....", "Uniformity (%)", colnames(table_data))
  colnames(table_data) <- gsub("Field.Uniformity....", "Field Uniformity (%)", colnames(table_data))
  colnames(table_data) <- gsub("Centering.Accuracy....", "Centering Accuracy (%)", colnames(table_data))
  colnames(table_data) <- gsub("Coef..of.Variation", "CV", colnames(table_data))
  
  # Remove the "Mean.c.fit.value" column if present
  colnames(table_data) <- gsub("Mean.c.fit.value", "", colnames(table_data))
  
  # Drop any columns that were renamed to an empty string
  table_data <- table_data[, !grepl("^$", colnames(table_data))]
  
  # Divide the relevant columns by 100 to convert percentages into decimal values
  table_data$`Uniformity (%)` <- table_data$`Uniformity (%)` / 100
  table_data$`Field Uniformity (%)` <- table_data$`Field Uniformity (%)` / 100
  table_data$`Centering Accuracy (%)` <- table_data$`Centering Accuracy (%)` / 100
  
  # Store the table in the list with the file name as the name
  all_tables[[basename(file)]] <- table_data
}

# Check if any tables were extracted
if (length(all_tables) == 0) {
  stop("No tables were extracted from the files.")
}

# Optionally, combine all tables into a single data frame (if the structures are consistent)
combined_data <- do.call(rbind, all_tables)

# Function to create unique filenames by incrementing the suffix
get_unique_filename <- function(base_path) {
  count <- 0
  new_file <- base_path
  while (file.exists(new_file)) {
    count <- count + 1
    new_file <- paste0(sub("\\.csv$", "", base_path), sprintf("-%03d.csv", count))
  }
  return(new_file)
}

output_combined_file <- file.path(OutputFolder, "Field-Uniformity_Merged-Data.csv")
output_combined_file <- get_unique_filename(output_combined_file)  # Ensure unique filename
write.csv(combined_data, output_combined_file, row.names = FALSE)
cat("Combined data saved to:", output_combined_file, "\n")

This script will generate a CSV file that can be saved as an XLSX and manipulated with a pivot table to generate informative graphs and statistics. From the pivot table, copy and paste your data into the orange cells of the Illumination_Uniformity_Template.xlsx spreadsheet.

Results

Plot the uniformity and centering accuracy for each objective.

Metrics

  • The Uniformity indicates the range between the minimum and maximum intensities in the image: U = (Min / Max) × 100. 100% uniformity indicates a perfectly homogeneous image; 50% uniformity indicates the minimum is half the maximum.
  • The Centering Accuracy indicates how far the center of the illumination (the centroid of the maximum-illumination bin) is from the center of the image. 100% indicates the illumination is perfectly centered on the image; 0% indicates that the center of the illumination is as far as possible from the center of the image.
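For a quick sanity check outside MetroloJ_QC, the sketch below computes both metrics directly from a single-channel flat-field image using numpy and scikit-image. The file name is a placeholder, and the normalization used for centering accuracy (distance to the image center divided by the half-diagonal) is one plausible choice; MetroloJ_QC's exact binning and normalization may differ.

Field Uniformity Metrics (sketch)
import numpy as np
from skimage import io, filters   # scikit-image, used here for reading and Gaussian smoothing

img = io.imread("flatfield_GFP.tif").astype(float)   # hypothetical file name; assumes a single-channel image
img = filters.gaussian(img, sigma=2)                 # remove noise before taking min/max

# Uniformity: U = (Min / Max) * 100
uniformity = 100.0 * img.min() / img.max()

# Centering accuracy: centroid of the brightest pixels vs. the image centre
threshold = 0.9 * img.max()                          # approximate the "max illumination bin" as the top 10%
ys, xs = np.nonzero(img >= threshold)
cy, cx = ys.mean(), xs.mean()
h, w = img.shape
dist = np.hypot(cy - (h - 1) / 2.0, cx - (w - 1) / 2.0)
half_diagonal = np.hypot(h / 2.0, w / 2.0)
centering_accuracy = 100.0 * (1.0 - dist / half_diagonal)

print("Uniformity: %.1f%%  Centering accuracy: %.1f%%" % (uniformity, centering_accuracy))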



Objective | Uniformity | Centering Accuracy
2x        | 97.5% | 92.7%
10x       | 97.0% | 94.5%
20x       | 97.3% | 97.1%
63x       | 96.6% | 96.7%


 Plot the uniformity and centering accuracy for each filter set. 

Filter | Uniformity | Centering Accuracy
DAPI   | 98.3% | 99.4%
DAPIc  | 95.8% | 84.9%
GFP    | 98.1% | 99.1%
GFPc   | 96.5% | 93.3%
Cy3    | 97.6% | 96.5%
Cy3c   | 96.8% | 97.9%
Cy5    | 97.0% | 99.6%
Cy5c   | 96.7% | 91.3%


This specific instrument has a quad-band filter as well as individual filter cubes. We can plot the uniformity and centering accuracy per filter type.

Filter Type | Uniformity | Centering Accuracy
Quad band   | 97.7% | 98.7%
Single band | 96.5% | 91.8%


Conclusion

The uniformity and centering accuracy are excellent across all objectives and filters, consistently exceeding 90%. However, the single-band filter cubes exhibit slightly lower uniformity and centering accuracy compared to the quad-band filter cube.

Store the original field illumination images to be able to perform shading corrections after acquisition.

XYZ Drift

This experiment evaluates the stability of the system in the XY and Z directions. As noted earlier, when an instrument is started, it requires a warmup period to reach a stable steady state. To determine the duration of this phase accurately, it is recommended to record a warmup kinetic at least once per year. For a comprehensive guide on Drift and Repositioning, refer to the Stage and Focus Precision by the QuaRep Working Group 06.

Acquisition protocol 

  1. Place 4 µm diameter fluorescent beads (TetraSpeck™ Fluorescent Microspheres Size Kit, mounted on a slide) on the microscope stage.

  2. Center an isolated bead under a high-NA dry objective.

  3. Crop the acquisition area to visualize only one bead, but keep it large enough to accommodate potential drift along the X and Y axes (a 100 µm FOV should be enough).
  4. Select an imaging channel appropriate for the fluorescent beads

    I typically use the Cy5 channel, which is very bright and resistant to bleaching; this channel has a lower resolution, but that does not really matter here.

  5. Acquire a large Z-stack at 1-minute intervals for a duration of 24 hours.

    To ensure accurate measurements, it is essential to account for potential drift along the Z-axis by acquiring a Z-stack that is substantially larger than the visible bead size. I typically acquire a 40 µm Z-stack.

Processing

  1. Open your image in FIJI
  2. If necessary, crop the image to focus on a single bead for better visualization.
  3. Use the TrackMate plugin included in FIJI to detect and track spots over time (Plugins > Tracking > TrackMate).
  4. Apply Difference of Gaussians (DoG) spot detection with a detection size of 4 µm
  5. Enable sub-pixel localization for increased accuracy
  6. Click Preview to visualize the spot detection
  7. Set a quality threshold (click and slide) high enough to detect a single spot per frame
  8. Click Next and follow the detection and tracking process
  9. Save the detected Spots coordinates as a CSV file for further analysis

Results

  • Open the spreadsheet template XYZ Drift Kinetic_Template.xlsx and fill in the orange cells.
  • Copy and paste the XYZT and Frame columns from the TrackMate spots CSV file into the corresponding orange columns in the spreadsheet.
  • Enter the numerical aperture (NA) and emission wavelength used during the experiment.
  • Calculate the relative displacement in X, Y, and Z using the formula: Relative Displacement = Position - PositionInitial.
  • Finally, plot the relative displacement over time to visualize the system's drift.

We observe an initial drift that stabilizes over time in X (+2 µm), Y (+1.3 µm) and Z (-10.5 µm). Calculate the 3D displacement: 3D Displacement = Sqrt( (X2-X1)² + (Y2-Y1)² + (Z2-Z1)² ), and plot the displacement over time. Calculate the resolution of your imaging configuration: Lateral Resolution = LambdaEmission / (2 × NA), and plot the resolution over time (constant).

Visually identify the time at which the displacement becomes lower than the resolution of the system. On this instrument, it takes approximately 120 minutes to reach stability. Calculate the velocity: Velocity = (Displacement2 - Displacement1) / (T2 - T1), and plot the velocity over time.

Calculate the average velocity before and after stabilisation and report the results in a table.
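The same quantities can be computed directly from the TrackMate spots export. The sketch below assumes the usual TrackMate column names (POSITION_X/Y/Z in µm, POSITION_T in seconds, FRAME; check your own export, some versions add extra header rows that must be skipped) and uses one possible stabilization criterion: the first time point after which every frame-to-frame step stays below the lateral resolution.

XYZ Drift Analysis (sketch)
import numpy as np
import pandas as pd

NA, lambda_em_nm = 0.5, 705
lateral_resolution_nm = lambda_em_nm / (2 * NA)            # 705 nm for this configuration

# Hypothetical file; adjust column names (and skiprows) to match your TrackMate export
spots = pd.read_csv("drift_spots.csv").sort_values("FRAME").reset_index(drop=True)

# Relative displacement with respect to the initial position (um -> nm)
for axis in ("X", "Y", "Z"):
    spots["d" + axis] = (spots["POSITION_" + axis] - spots["POSITION_" + axis].iloc[0]) * 1000.0

# 3D displacement relative to the initial position, and velocity between consecutive time points
spots["disp_3d_nm"] = np.sqrt(spots["dX"]**2 + spots["dY"]**2 + spots["dZ"]**2)
dt_min = spots["POSITION_T"].diff() / 60.0                  # assumes POSITION_T is in seconds
spots["velocity_nm_per_min"] = spots["disp_3d_nm"].diff().abs() / dt_min

# Stabilisation: first time point after which every frame-to-frame step stays below the resolution
step_nm = np.sqrt(spots["dX"].diff()**2 + spots["dY"].diff()**2 + spots["dZ"].diff()**2).fillna(0)
stable_from_min = None
for i in range(len(spots)):
    if (step_nm.iloc[i:] < lateral_resolution_nm).all():
        stable_from_min = spots["POSITION_T"].iloc[i] / 60.0
        break

if stable_from_min is not None:
    ready = spots["POSITION_T"] / 60.0 >= stable_from_min
    print("Stabilisation time: %.0f min" % stable_from_min)
    print("Average velocity, warm-up: %.1f nm/min" % spots.loc[~ready, "velocity_nm_per_min"].mean())
    print("Average velocity, system ready: %.1f nm/min" % spots.loc[ready, "velocity_nm_per_min"].mean())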

Objective NA                           | 0.5
Wavelength (nm)                        | 705
Resolution (nm)                        | 705
Stabilisation time (min)               | 122
Average velocity Warmup (nm/min)       | 113
Average velocity System Ready (nm/min) | 14

Metrics

  • The Stabilisation Time indicates the time in minutes necessary for the instrument to have a drift lower than the resolution of the system.
  • The Average Velocity indicates the speed of drift in all directions XYZ in nm/min.

Conclusion

The warmup time for this instrument is approximately 2 hours. After the warmup period, the average displacement velocity is 14 nm/min, which falls within an acceptable range.

Stage Repositioning Dispersion

This experiment evaluates how accurate the system is in XY by measuring the dispersion of repositioning. Several variables can affect repositioning: i) Time, ii) Traveled distance, iii) Speed, and iv) Acceleration. For a comprehensive guide on Stage Repositioning, refer to the Stage and Focus Precision by the QuaRep Working Group 06 and the associated XY Repositioning Protocol.

Acquisition protocol

  1. Place 4 µm diameter fluorescent beads (TetraSpeck™ Fluorescent Microspheres Size Kit, mounted on a slide) on the microscope stage.

  2. Center an isolated bead under a high-NA dry objective.

  3. Crop the acquisition area to visualize only one bead, but keep it large enough to accommodate potential drift along the X and Y axes (a 100 µm FOV should be enough).
  4. Select an imaging channel appropriate for the fluorescent beads

    I typically use the Cy5 channel, which is very bright and resistant to bleaching, even though this channel has a lower resolution.

  5. Acquire a Z-stack at positions separated by 0 µm (Drift), 1000 µm, and 10 000 µm in both the X and Y directions.

  6. Repeat the acquisition 20 times

  7. Acquire 3 datasets for each condition

  • Your stage might have a smaller range!
  • Lower the objectives during movement to avoid damage

Processing

  1. Open your image in FIJI
  2. If necessary, crop the image to focus on a single bead for better visualization.
  3. Use the TrackMate plugin included in FIJI to detect and track spots over time (Plugins > Tracking > TrackMate).
  4. Apply Difference of Gaussians (DoG) spot detection with a detection size of 4 µm
  5. Enable sub-pixel localization for increased accuracy
  6. Click Preview to visualize the spot detection
  7. Set a quality threshold (click and slide) high enough to detect a single spot per frame
  8. Click Next and follow the detection and tracking process
  9. Save the detected Spots coordinates as a CSV file for further analysis

Results

  • Open the spreadsheet template Stage Repositioning Dispersion_Template.xlsx and fill in the orange cells.
  • Copy and paste the XYZT and Frame columns from the TrackMate spots CSV file into the corresponding orange columns in the spreadsheet.
  • Enter the numerical aperture (NA) and emission wavelength used during the experiment.
  • Calculate the relative position in X, Y, and Z using the formula: Relative Position = Position - PositionInitial.
  • Finally, plot the relative position over time to visualize the system's stage repositioning dispersion.


We observe an initial movement in X and Y that stabilizes. Calculate the 2D displacement: 2D Displacement = Sqrt( (X2-X1)² + (Y2-Y1)² ), and plot the 2D displacement over time. Calculate the resolution of your imaging configuration: Lateral Resolution = LambdaEmission / (2 × NA), and plot the resolution over time (constant).


This experiment shows a significant initial displacement between Frame 0 and Frame 1, ranging from 1000 nm to 400 nm, which decreases to 70 nm by Frame 2. To quantify this variation, calculate the dispersion for each displacement using the formula: Dispersion = StandardDeviation(Displacement). Report the results in a table.
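The dispersion can also be computed from the same TrackMate export. The sketch below interprets Displacement as the frame-to-frame step (one value per repositioning cycle) and reports its standard deviation per axis and in 2D/3D, with and without the first frame. The file name is a placeholder.

Repositioning Dispersion (sketch)
import numpy as np
import pandas as pd

# Hypothetical file: one tracked bead per frame over 20 repositioning cycles
spots = pd.read_csv("repositioning_1mm_spots.csv").sort_values("FRAME").reset_index(drop=True)

def dispersion_nm(df):
    """Standard deviation of the frame-to-frame displacement, per axis and in 2D/3D (nm)."""
    dx = df["POSITION_X"].diff() * 1000.0
    dy = df["POSITION_Y"].diff() * 1000.0
    dz = df["POSITION_Z"].diff() * 1000.0
    return {
        "X": dx.std(), "Y": dy.std(), "Z": dz.std(),
        "2D": np.sqrt(dx**2 + dy**2).std(),
        "3D": np.sqrt(dx**2 + dy**2 + dz**2).std(),
    }

print("All frames:          ", dispersion_nm(spots))
print("First frame excluded:", dispersion_nm(spots.iloc[1:]))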

Traveled Distance                | 0 mm | 1 mm | 10 mm
X Dispersion (nm)                | 4    | 188  | 121
Y Dispersion (nm)                | 4    | 141  | 48
Z Dispersion (nm)                | 10   | 34   | 53
Repositioning Dispersion 3D (nm) | 6    | 227  | 91
Repositioning Dispersion 2D (nm) | 2    | 226  | 90

Conclusion

The system has an effective Stage Repositioning Dispersion of 230 nm. The results are higher than expected because of the initial shift in the first frame that eventually stabilizes. Excluding the first frame significantly improves the measurements, reducing the repositioning dispersion to 40 nm. Further investigation is required to understand the underlying cause.

Traveled Distance                | 0 mm | 1 mm | 10 mm
X Dispersion (nm)                | 3    | 28   | 52
Y Dispersion (nm)                | 3    | 68   | 35
Z Dispersion (nm)                | 10   | 26   | 40
Repositioning Dispersion 3D (nm) | 6    | 43   | 36
Repositioning Dispersion 2D (nm) | 2    | 40   | 36

Metrics

The Repositioning Dispersion indicates how spread out the repositioning is, in nm. The lower the value, the more accurate the repositioning.


Further Investigation


We observed a significant shift in the first frame, which was unexpected and invites further investigation. These variables can affect repositioning dispersion: i) Traveled distance, ii) Speed, iii) Acceleration, iv) Time, and v) Environment. We decided to test the first three.

Methodology

To test if these variables have a significant impact on the repositioning, we followed the XYZ repositioning dispersion protocol with the following parameters:

  • Distances: 0um, 1um, 10um, 1000um, 10 000um, 30 000um 
  • Speed: 10%, 100%
  • Acceleration: 10%,100%
  • For each condition, 3 datasets were acquired

 Processing

This experimental protocol generated a substantial number of images. To process them automatically in ImageJ/FIJI using the TrackMate plugin, we use the following script Stage Repositioning with Batch TrackMate-v7.py

Stage Repositioning with Batch TrackMate
import os  # Importing the os module to interact with the operating system
import sys  # Importing the sys module to access system-specific parameters and functions
from ij import IJ, WindowManager, Prefs  # Importing classes and methods from the ImageJ library
from ij.gui import NonBlockingGenericDialog  # Importing NonBlockingGenericDialog from ImageJ's GUI module
from java.io import File  # Importing the File class from Java's io package
from javax.swing import JOptionPane, JFileChooser  # Importing JOptionPane and JFileChooser from Java's Swing package
from fiji.plugin.trackmate import Model, Settings, TrackMate, SelectionModel, Logger  # Importing TrackMate-related classes from Fiji
from fiji.plugin.trackmate.detection import DogDetectorFactory  # Importing DogDetectorFactory for spot detection in TrackMate
from fiji.plugin.trackmate.tracking.jaqaman import SparseLAPTrackerFactory  # Importing SparseLAPTrackerFactory for tracking in TrackMate
from fiji.plugin.trackmate.features import FeatureFilter  # Importing FeatureFilter from TrackMate
from fiji.plugin.trackmate.features.track import TrackIndexAnalyzer  # Importing TrackIndexAnalyzer from TrackMate
from fiji.plugin.trackmate.gui.displaysettings import DisplaySettingsIO  # Importing DisplaySettingsIO for TrackMate display settings
from fiji.plugin.trackmate.visualization.table import AllSpotsTableView  # Importing AllSpotsTableView for displaying spots in a table
from fiji.plugin.trackmate.visualization.hyperstack import HyperStackDisplayer  # Importing HyperStackDisplayer for visualizing tracks in a hyperstack
from loci.plugins import BF  # Importing Bio-Formats from the loci.plugins package
from loci.plugins.in import ImporterOptions  # Importing ImporterOptions from loci.plugins.in

import csv  # Importing the csv module for handling CSV files
import glob  # Importing the glob module for file pattern matching
import math  # Importing the math module for mathematical functions

# Constants
DEBUG = False  # Debug mode flag
SAVE_SPOTS = False  # Flag to save detected spots
OUTPUT_DIR = os.path.join(os.path.expanduser("~"), "Desktop", "Output")  # Output directory for saving files
IMAGE_EXTENSIONS = ('.tif', '.tiff', '.jpg', '.jpeg', '.png', '.czi', '.nd2', '.lif', '.lsm', '.ome.tif', '.ome.tiff')  # Supported image file extensions
SAVE_INDIVIDUAL_CSVs=False

# Ensure UTF-8 encoding
reload(sys)  # Reload the sys module to change the default encoding
sys.setdefaultencoding('utf-8')  # Set the default encoding to UTF-8

# Template for TrackMate settings
TRACKMATE_SETTINGS_TEMPLATE = {
	'StageRepositioning.TrackMate.Subpixel_Localization': True,  # Enable subpixel localization
	'StageRepositioning.TrackMate.Spot_Diameter': 4.0,  # Spot diameter in microns
	'StageRepositioning.TrackMate.Threshold_Value': 20.904,  # Threshold value for spot detection
	'StageRepositioning.TrackMate.Median_Filtering': False,  # Enable/disable median filtering
	'StageRepositioning.TrackMate.BatchMode': False  # Batch mode flag
}

def log_message(message):
	"""Logs a message if debug mode is enabled."""
	if DEBUG:  # Check if debug mode is enabled
		IJ.log(message)  # Log the message using ImageJ's logging system

def read_preferences(settings):
	"""Reads the user preferences for TrackMate settings."""
	log_message("Reading Preferences...")  # Log the start of reading preferences
	output = {}  # Initialize an empty dictionary to store the preferences
	for key, default_value in settings.items():  # Iterate over the settings
		value = Prefs.get(key, str(default_value))  # Get the preference value
		if isinstance(default_value, bool):  # Convert to appropriate type
			value = bool(int(value))
		elif isinstance(default_value, float):
			value = float(value)
		elif isinstance(default_value, list):
			value = value.split(",")
		elif isinstance(default_value, str):
			value = str(value)
		output[key] = value  # Store the converted value in the output dictionary
	log_message("Reading Preferences: Done")  # Log the completion of reading preferences
	return output  # Return the preferences

def save_preferences(settings):
	"""Saves the user preferences for TrackMate settings."""
	log_message("Saving Preferences...")  # Log the start of saving preferences
	for key, value in settings.items():  # Iterate over the settings
		if isinstance(value, list):  # Convert to appropriate format for saving
			value = ",".join(map(str, value))
		elif isinstance(value, bool):
			value = int(value)
		Prefs.set(key, value)  # Save the preference value
	Prefs.savePreferences()  # Save all preferences
	log_message("Saving Preferences: Done")  # Log the completion of saving preferences

def get_image_info(imp):
	"""Retrieves and returns information about the image."""
	log_message("Getting Image Info...")  # Log the start of getting image info
	file_info = imp.getOriginalFileInfo()  # Get the file information of the image
	filename = file_info.fileName  # Get the filename
	basename, extension = os.path.splitext(filename)  # Split the filename into base name and extension
	input_dir = file_info.directory  # Get the directory of the file
	input_file_path = os.path.join(input_dir, filename)  # Get the full file path
	image_name = imp.getTitle()  # Get the title of the image
	width = imp.getWidth()  # Get the width of the image
	height = imp.getHeight()  # Get the height of the image
	nb_channels = imp.getNChannels()  # Get the number of channels in the image
	nb_slices = imp.getNSlices()  # Get the number of slices in the image
	nb_timepoints = imp.getNFrames()  # Get the number of timepoints in the image
	bit_depth = imp.getBitDepth()  # Get the bit depth of the image
	current_channel = imp.getChannel()  # Get the current channel being displayed
	current_slice = imp.getSlice()  # Get the current slice being displayed
	current_frame = imp.getFrame()  # Get the current frame being displayed
	calibration = imp.getCalibration()  # Get the calibration of the image
	pixel_width = calibration.pixelWidth  # Get the pixel width
	pixel_height = calibration.pixelHeight  # Get the pixel height
	pixel_depth = calibration.pixelDepth  # Get the pixel depth
	space_unit = calibration.getUnit()  # Get the unit of space
	time_unit = calibration.getTimeUnit()  # Get the unit of time
	frame_interval = calibration.frameInterval  # Get the frame interval
	
	# Store all image information in a dictionary
	image_info = {
		'InputFilePath': input_file_path,
		'InputDir': input_dir,
		'Filename': filename,
		'Basename': basename,
		'Extension': extension,
		'ImageName': image_name,
		'Width': width,
		'Height': height,
		'NbChannels': nb_channels,
		'NbSlices': nb_slices,
		'NbTimepoints': nb_timepoints,
		'BitDepth': bit_depth,
		'CurrentChannel': current_channel,
		'CurrentSlice': current_slice,
		'CurrentFrame': current_frame,
		'Calibration': calibration,
		'PixelWidth': pixel_width,
		'PixelHeight': pixel_height,
		'PixelDepth': pixel_depth,
		'SpaceUnit': space_unit,
		'TimeUnit': time_unit,
		'FrameInterval': frame_interval
	}
	
	log_message("Getting Image Info: Done.")  # Log the completion of getting image info
	return image_info  # Return the image information

def generate_unique_filepath(directory, basename, suffix, extension):
	"""Generates a unique file path to avoid overwriting existing files."""
	log_message("Generating Unique Filepath...")  # Log the start of generating unique filepath
	file_counter = 1  # Initialize a counter for file numbering
	while True:  # Loop until a unique filepath is found
		filename = "{}_{}-{:03d}{}".format(basename, suffix, file_counter, extension)  # Generate a filename
		filepath = os.path.join(directory, filename)  # Generate the full file path
		if not os.path.exists(filepath):  # Check if the file path exists
			log_message("Generating Unique Filepath: Done.")  # Log the completion of generating unique filepath
			return filepath  # Return the unique file path
		file_counter += 1  # Increment the counter if the file path exists

def select_folder(default_path):
	"""Opens a dialog for the user to select a folder."""
	log_message("Selecting Folder...")  # Log the start of selecting folder
	chooser = JFileChooser(default_path)  # Create a file chooser with the default path
	chooser.setFileSelectionMode(JFileChooser.DIRECTORIES_ONLY)  # Set the file chooser to select directories only
	chooser.setDialogTitle("Choose a directory containing the images to process")  # Set the dialog title
	return_value = chooser.showOpenDialog(None)  # Show the open dialog
	if return_value == JFileChooser.APPROVE_OPTION:  # Check if the user approved the selection
		input_dir = chooser.getSelectedFile().getAbsolutePath()  # Get the selected directory
	log_message("Selecting Folder: Done.")  # Log the completion of selecting folder
	return input_dir  # Return the selected directory

def open_image_bioformats(filepath):
	"""Opens an image using Bio-Formats."""
	log_message("Importing with Bioformats...")  # Log the start of importing with Bioformats
	options = ImporterOptions()  # Create importer options
	options.setId(filepath)  # Set the file path in the importer options
	try:
		imps = BF.openImagePlus(options)  # Open the image using Bio-Formats
		if imps and len(imps) > 0:  # Check if images were found
			log_message("Importing with Bioformats: Done.")  # Log the completion of importing with Bioformats
			return imps[0]  # Return the first image
		else:
			log_message("Importing with Bioformats: No Images found in the file: " + filepath + ".")  # Log if no images were found
			return None  # Return None if no images were found
	except Exception as e:
		log_message("Importing with Bioformats: Error opening file" + str(e) + ".")  # Log the error if an exception occurred
		return None  # Return None if an exception occurred

def run_trackmate_predetection(imp, trackmate_settings, save_spots):
	"""Runs the pre-detection step using TrackMate to detect spots in the image."""
	log_message("Running PreDetection...")  # Log the start of pre-detection
	image_info = get_image_info(imp)  # Get the image information
	
	max_quality_threshold_values = []  # Initialize a list to store max quality threshold values
	nb_detected_spots = 0  # Initialize the number of detected spots
	
	imp.setDisplayMode(IJ.COLOR)  # Set the display mode to color
	imp.setC(image_info['CurrentChannel'])  # Set the current channel
	imp.updateAndDraw()  # Update and draw the image
	model = Model()  # Create a new TrackMate model
	trackmate_settings_obj = Settings(imp)  # Create a new TrackMate settings object
	model.setPhysicalUnits(image_info['SpaceUnit'], image_info['TimeUnit'])  # Set the physical units in the model
	trackmate_settings_obj.detectorFactory = DogDetectorFactory()  # Set the detector factory
	trackmate_settings_obj.detectorSettings = {
		'DO_SUBPIXEL_LOCALIZATION': trackmate_settings['StageRepositioning.TrackMate.Subpixel_Localization'],  # Set subpixel localization
		'RADIUS': trackmate_settings['StageRepositioning.TrackMate.Spot_Diameter'] / 2,  # Set the spot radius
		'TARGET_CHANNEL': image_info['CurrentChannel'],  # Set the target channel
		'THRESHOLD': trackmate_settings['StageRepositioning.TrackMate.Threshold_Value'],  # Set the threshold value
		'DO_MEDIAN_FILTERING': trackmate_settings['StageRepositioning.TrackMate.Median_Filtering']  # Set median filtering
	}

	trackmate_settings_obj.trackerFactory = SparseLAPTrackerFactory()  # Set the tracker factory
	trackmate_settings_obj.trackerSettings = trackmate_settings_obj.trackerFactory.getDefaultSettings()  # Get the default tracker settings
	trackmate_settings_obj.trackerSettings['ALLOW_TRACK_SPLITTING'] = True  # Allow track splitting
	trackmate_settings_obj.trackerSettings['ALLOW_TRACK_MERGING'] = True  # Allow track merging
	trackmate_settings_obj.addAllAnalyzers()  # Add all analyzers

	trackmate = TrackMate(model, trackmate_settings_obj)  # Create a new TrackMate instance
	
	ok1 = trackmate.checkInput()  # Check the input
	ok2 = trackmate.process()  # Process the input
	if not ok1 or not ok2:  # Check if the input check or processing failed
		nb_detected_spots = 0  # Set the number of detected spots to 0
		max_quality_threshold_value = 100  # Set the max quality threshold value to 100
		return nb_detected_spots, max_quality_threshold_value  # Return the number of detected spots and max quality threshold value

	selection_model = SelectionModel(model)  # Create a new selection model
	ds = DisplaySettingsIO.readUserDefault()  # Read the user default display settings
	ds.setSpotDisplayRadius(0.9)  # Set the spot display radius
	ds.setSpotDisplayedAsRoi(True)  # Set the spots to be displayed as ROI
	ds.setSpotShowName(True)  # Set the spots to show names
	ds.setSpotTransparencyAlpha(0.7)  # Set the spot transparency alpha
	ds.setSpotFilled(True)  # Set the spots to be filled
	displayer = HyperStackDisplayer(model, selection_model, imp, ds)  # Create a new hyperstack displayer
	displayer.render()  # Render the displayer
	displayer.refresh()  # Refresh the displayer

	nb_detected_spots = int(model.getSpots().getNSpots(False))  # Get the number of detected spots
	
	max_quality = 0  # Initialize the max quality
	if nb_detected_spots > 0:  # Check if there are detected spots
		for spot in model.getSpots().iterable(True):  # Iterate over the detected spots
			spot_quality = spot.getFeature('QUALITY')  # Get the quality of the spot
			if max_quality is None or spot_quality > max_quality:  # Check if the current spot quality is higher than the max quality
				max_quality = spot_quality  # Update the max quality
		max_quality_threshold_values.append(max_quality)  # Append the max quality to the threshold values
		
		if save_spots:  # Check if saving spots is enabled
			all_spots_table = AllSpotsTableView(model, selection_model, ds, image_info['Filename'])  # Create a new all spots table view
			output_csv_path = generate_unique_filepath(OUTPUT_DIR, image_info['Basename'], "Spots-", ".csv")  # Generate a unique file path for the CSV
			all_spots_table.exportToCsv(output_csv_path)  # Export the spots to CSV
		
	if max_quality_threshold_values:  # Check if there are max quality threshold values
		max_quality_threshold_value = max(max_quality_threshold_values)  # Set the max quality threshold value to the max of the threshold values
	else:
		max_quality_threshold_value = 100  # Set the max quality threshold value to 100
	log_message("Running PreDetection: Done.")  # Log the completion of pre-detection
		
	return nb_detected_spots, max_quality_threshold_value  # Return the number of detected spots and max quality threshold value

def display_stage_repositioning_dialog(imp, trackmate_settings, nb_detected_spots, test_detection, max_quality_threshold_value, dialog_counter, SAVE_INDIVIDUAL_CSVs):
	"""Displays a dialog for the user to adjust TrackMate settings for stage repositioning."""
	global DEBUG
	log_message("Displaying Stage Repositioning Dialog...")  # Log a message indicating that the dialog is being displayed.
	image_info = get_image_info(imp)  # Retrieve image information.
	gd = NonBlockingGenericDialog("Stage Repositioning with Batch TrackMate")  # Create a non-blocking dialog titled "Stage Repositioning with Batch TrackMate".
	gd.addMessage("=== Detection Settings ===")  # Add a message to the dialog for detection settings.
	gd.addCheckbox("Enable Subpixel Localization", trackmate_settings['StageRepositioning.TrackMate.Subpixel_Localization'])  # Add a checkbox for subpixel localization.
	gd.addNumericField("Spot Diameter (microns):", trackmate_settings['StageRepositioning.TrackMate.Spot_Diameter'], 2, 5, "")  # Add a numeric field for spot diameter.
	gd.addSlider("Threshold Value:", 0, max_quality_threshold_value, trackmate_settings['StageRepositioning.TrackMate.Threshold_Value'])  # Add a slider for threshold value.
	gd.addCheckbox("Apply Median Filtering", trackmate_settings['StageRepositioning.TrackMate.Median_Filtering'])  # Add a checkbox for median filtering.
	gd.addCheckbox("Batch Mode", trackmate_settings['StageRepositioning.TrackMate.BatchMode'])  # Add a checkbox for batch mode.

	# Handle the number of detected spots and provide feedback to the user.
	if nb_detected_spots == False:
		nb_detected_spots = 0
	if nb_detected_spots > image_info['NbTimepoints']:
		gd.addMessage("Nb of detected spots too high. Increase the threshold.")  # Add a message if detected spots are too high.
	elif nb_detected_spots < image_info['NbTimepoints']:
		gd.addMessage("No of detected spots too low. Decrease the threshold.")  # Add a message if detected spots are too low.
	elif nb_detected_spots == image_info['NbTimepoints']:
		gd.addMessage("Exactly one spot detected per Frame. You can proceed.")  # Add a message if detected spots are correct.
	
	gd.addMessage("Nb of detected spots: " + str(nb_detected_spots) + ". Nb of Frame: " + str(image_info['NbTimepoints']))  # Display the number of detected spots and frames.
	gd.addCheckbox("Test Detection", test_detection)  # Add a checkbox for test detection.
	gd.addCheckbox("Save individual CSVs", SAVE_INDIVIDUAL_CSVs)
	gd.addCheckbox("Chatty Mode", DEBUG)  # Add a checkbox for chatty mode (debug mode).
	gd.showDialog()  # Show the dialog to the user.

	if gd.wasOKed():
		user_click = "OK"  # Set user click status to "OK".
		trackmate_settings_user = {}
		trackmate_settings_user['StageRepositioning.TrackMate.Subpixel_Localization'] = gd.getNextBoolean()  # Get user input for subpixel localization.
		trackmate_settings_user['StageRepositioning.TrackMate.Spot_Diameter'] = gd.getNextNumber()  # Get user input for spot diameter.
		trackmate_settings_user['StageRepositioning.TrackMate.Threshold_Value'] = gd.getNextNumber()  # Get user input for threshold value.
		trackmate_settings_user['StageRepositioning.TrackMate.Median_Filtering'] = gd.getNextBoolean()  # Get user input for median filtering.
		trackmate_settings_user['StageRepositioning.TrackMate.BatchMode'] = gd.getNextBoolean()  # Get user input for batch mode.
		test_detection = gd.getNextBoolean()  # Get user input for test detection.
		SAVE_INDIVIDUAL_CSVs=gd.getNextBoolean()
		DEBUG = gd.getNextBoolean()  # Get user input for debug mode.
		save_preferences(trackmate_settings_user)  # Save the user preferences.
		dialog_counter += 1  # Increment the dialog counter.
	elif gd.wasCanceled():
		user_click = "Cancel"  # Set user click status to "Cancel".
		trackmate_settings_user = trackmate_settings  # Keep the incoming settings so the values returned below are defined.
	
	log_message("Displaying Stage Repositioning Dialog: Done.")  # Log a message indicating that the dialog has been displayed.
	return trackmate_settings_user, user_click, nb_detected_spots, image_info['NbTimepoints'], test_detection, dialog_counter, SAVE_INDIVIDUAL_CSVs  # Return the updated settings and status.

def run_trackmate_detection(imp, trackmate_settings_stored, SAVE_INDIVIDUAL_CSVs):
	"""Runs the TrackMate detection process on the image."""
	log_message("Running TrackMate Detection...")  # Log a message indicating that TrackMate detection is starting.
	image_info = get_image_info(imp)  # Retrieve image information.
	model = Model()  # Create a new TrackMate model.
	trackmate_settings_obj = Settings(imp)  # Create a new TrackMate settings object.
	trackmate_settings_obj.detectorFactory = DogDetectorFactory()  # Set the detector factory to DogDetectorFactory.
	model.setPhysicalUnits(image_info['SpaceUnit'], image_info['TimeUnit'])  # Set the physical units for the model.
	trackmate_settings_obj.detectorSettings = {
		'DO_SUBPIXEL_LOCALIZATION': trackmate_settings_stored['StageRepositioning.TrackMate.Subpixel_Localization'],  # Set subpixel localization.
		'RADIUS': trackmate_settings_stored['StageRepositioning.TrackMate.Spot_Diameter'] / 2,  # Set the radius for spot detection.
		'TARGET_CHANNEL': image_info['CurrentChannel'],  # Set the target channel.
		'THRESHOLD': trackmate_settings_stored['StageRepositioning.TrackMate.Threshold_Value'],  # Set the threshold value.
		'DO_MEDIAN_FILTERING': trackmate_settings_stored['StageRepositioning.TrackMate.Median_Filtering']  # Set median filtering.
	}

	trackmate_settings_obj.trackerFactory = SparseLAPTrackerFactory()  # Set the tracker factory to SparseLAPTrackerFactory.
	trackmate_settings_obj.trackerSettings = trackmate_settings_obj.trackerFactory.getDefaultSettings()  # Get the default tracker settings.
	trackmate_settings_obj.trackerSettings['ALLOW_TRACK_SPLITTING'] = True  # Enable track splitting.
	trackmate_settings_obj.trackerSettings['ALLOW_TRACK_MERGING'] = True  # Enable track merging.
	trackmate_settings_obj.addAllAnalyzers()  # Add all analyzers to the settings.

	trackmate = TrackMate(model, trackmate_settings_obj)  # Create a new TrackMate instance.
	if not trackmate.checkInput() or not trackmate.process():
		return 0, 100  # Return if TrackMate fails.

	selection_model = SelectionModel(model)  # Create a new selection model.
	ds = DisplaySettingsIO.readUserDefault()  # Read the user default display settings.
	ds.setSpotDisplayRadius(0.9)  # Set the spot display radius.
	ds.setSpotDisplayedAsRoi(True)  # Set spots to be displayed as ROI.
	ds.setSpotShowName(True)  # Show spot names.
	ds.setSpotTransparencyAlpha(0.7)  # Set spot transparency.
	ds.setSpotFilled(True)  # Set spots to be filled.
	displayer = HyperStackDisplayer(model, selection_model, imp, ds)  # Create a new hyperstack displayer.
	displayer.render()  # Render the display.
	displayer.refresh()  # Refresh the display.

	nb_detected_spots = model.getSpots().getNSpots(False)  # Get the number of detected spots.
	
	spots_data = []  
	if nb_detected_spots > 0:
		for spot in model.getSpots().iterable(True):  # Iterate over the detected spots
			spot_data = {
				'SPOT_ID': spot.ID(),
				'QUALITY': spot.getFeature('QUALITY'),
				'POSITION_X': spot.getFeature('POSITION_X'),
				'POSITION_Y': spot.getFeature('POSITION_Y'),
				'POSITION_Z': spot.getFeature('POSITION_Z'),
				'POSITION_T': spot.getFeature('POSITION_T'),
				'FRAME': spot.getFeature('FRAME'),
				'RADIUS': spot.getFeature('RADIUS'),
				'VISIBILITY': spot.getFeature('VISIBILITY'),
				'MANUAL_SPOT_COLOR': spot.getFeature('MANUAL_SPOT_COLOR'),
				'MEAN_INTENSITY_CH1': spot.getFeature('MEAN_INTENSITY_CH1'),
				'MEDIAN_INTENSITY_CH1': spot.getFeature('MEDIAN_INTENSITY_CH1'),
				'MIN_INTENSITY_CH1': spot.getFeature('MIN_INTENSITY_CH1'),
				'MAX_INTENSITY_CH1': spot.getFeature('MAX_INTENSITY_CH1'),
				'TOTAL_INTENSITY_CH1': spot.getFeature('TOTAL_INTENSITY_CH1'),
				'STD_INTENSITY_CH1': spot.getFeature('STD_INTENSITY_CH1'),
				'CONTRAST_CH1': spot.getFeature('CONTRAST_CH1'),
				'SNR_CH1': spot.getFeature('SNR_CH1'),
				'SourceFile': image_info['Filename']
			}
			spots_data.append(spot_data)
	if SAVE_INDIVIDUAL_CSVs:
		all_spots_table = AllSpotsTableView(model, selection_model, ds, image_info['Filename'])  # Create a table view of all spots.
		output_csv_path = generate_unique_filepath(OUTPUT_DIR, image_info['Basename'], "Stage-Repositioning", ".csv")  # Generate a unique file path for the CSV.
		all_spots_table.exportToCsv(output_csv_path)  # Export the spots table to CSV.
	
	log_message("Running TrackMate Detection: Done.")  # Log a message indicating that TrackMate detection is done.
	return nb_detected_spots, image_info['NbTimepoints'], spots_data  # Return the number of detected spots, the number of timepoints, and the spot data.

def process_image(imp, SAVE_INDIVIDUAL_CSVs):
	"""Processes the image using TrackMate with user-defined settings."""
	image_name = imp.getTitle()  # Get the title of the image.
	log_message("Processing Image:" + str(image_name) + "...")  # Log a message indicating that the image is being processed.
	test_detection = False  # Initialize test detection as False.
	dialog_counter = 0  # Initialize the dialog counter.
	user_click = None  # Initialize user click status as None.
	all_spots_data = []  # List to accumulate spots data
	while True:
		trackmate_settings_stored = read_preferences(TRACKMATE_SETTINGS_TEMPLATE)  # Read the stored TrackMate settings.
		nb_detected_spots, max_quality_threshold_value = run_trackmate_predetection(imp, trackmate_settings_stored, SAVE_SPOTS)  # Run pre-detection to get the number of detected spots and max quality threshold value.
		trackmate_settings_user, user_click, nb_detected_spots, nb_timepoints, test_detection, dialog_counter, SAVE_INDIVIDUAL_CSVs = display_stage_repositioning_dialog(imp, trackmate_settings_stored, nb_detected_spots, test_detection, max_quality_threshold_value, dialog_counter, SAVE_INDIVIDUAL_CSVs)  # Display the stage repositioning dialog.
		save_preferences(trackmate_settings_user)  # Save the user-defined TrackMate settings.
		
		trackmate_settings_stored_filtered = {key: value for key, value in trackmate_settings_stored.items() if key != 'StageRepositioning.TrackMate.BatchMode'}  # Filter out batch mode from stored settings.
		trackmate_settings_user_filtered = {key: value for key, value in trackmate_settings_user.items() if key != 'StageRepositioning.TrackMate.BatchMode'}  # Filter out batch mode from user settings.
		
		# Check conditions to break the loop or handle user cancellation.
		if user_click == "OK" and nb_detected_spots == nb_timepoints and trackmate_settings_stored_filtered == trackmate_settings_user_filtered and not test_detection:
			break
		elif user_click == "Cancel":
			log_message("Processing Image:" + str(image_name) + ": User canceled Operation.")  # Log a message indicating user cancellation.
			if image_from_folder:
				imp.close()  # Close the image if it is from a folder.
			sys.exit(1)  # Exit the program.
	
	nb_detected_spots, nb_timepoints, spots_data = run_trackmate_detection(imp, trackmate_settings_stored, SAVE_INDIVIDUAL_CSVs)  # Run TrackMate detection with the stored settings.
	all_spots_data.extend(spots_data)  # Accumulate spots data
	return trackmate_settings_user, nb_timepoints, SAVE_INDIVIDUAL_CSVs, all_spots_data  # Return the user-defined settings, the number of timepoints, the CSV flag, and the accumulated spot data.
	
def process_image_batch(imp, close, SAVE_INDIVIDUAL_CSVs):
	"""Processes the image in batch mode using TrackMate with stored settings."""
	image_name = imp.getTitle()  # Get the title of the image.
	log_message("Processing in Batch Image:" + str(image_name) + "...")  # Log a message indicating the start of batch processing for the image.
	trackmate_settings_stored = read_preferences(TRACKMATE_SETTINGS_TEMPLATE)  # Read the stored TrackMate settings.
	nb_detected_spots, nb_timepoints, spots_data  = run_trackmate_detection(imp, trackmate_settings_stored, SAVE_INDIVIDUAL_CSVs)  # Run TrackMate detection with the stored settings.
	if close:
		imp.close()  # If the close flag is True, close the image.
	log_message("Processing in Batch Image:" + str(image_name) + ": Done.")  # Log a message indicating that batch processing is done for the image.
	return nb_detected_spots, nb_timepoints, spots_data  # Return the number of detected spots, the number of timepoints, and the spot data.

def clean_filename(filename):
	"""Cleans the filename by removing specific substrings."""
	log_message("Cleaning Filename "+str(filename)+" ...")
	filename = filename.replace(" - ", "_")
	filename = filename.replace(".czi", "")
	filename = filename.replace("_spots", "")
	filename = filename.replace(".csv", "")
	
	log_message("Cleaning Filename "+str(filename)+": Done")
	return filename

def is_float(value):
	"""Checks if a value can be converted to a float."""
	try:
		float(value)
		return True
	except ValueError:
		return False

def merge_and_process_spots_data(all_spots_data, output_csv_path):
	"""Merges, processes, and orders spots data from a list of dictionaries."""
	log_message("Merging and Processing Spots Data...")  # Log start

	if not all_spots_data:
		log_message("No spots data available to merge.")
		return

	# Define the desired column order, including placeholders for the known columns
	column_order = [
		'SPOT_ID', 'QUALITY', 'POSITION_X', 'POSITION_Y', 'POSITION_Z', 'POSITION_T', 'FRAME',
		'RADIUS', 'VISIBILITY', 'MANUAL_SPOT_COLOR', 'MEAN_INTENSITY_CH1', 'MEDIAN_INTENSITY_CH1',
		'MIN_INTENSITY_CH1', 'MAX_INTENSITY_CH1', 'TOTAL_INTENSITY_CH1', 'STD_INTENSITY_CH1',
		'CONTRAST_CH1', 'SNR_CH1', 'SourceFile', 'Time (sec)', 'Time (min)', 'X (nm)', 'Y (nm)', 'Z (nm)',
		'Displacement 3D (nm)', 'Displacement 2D (nm)'
	]

	# Ensure all data has required fields and dynamically add 'Variable-xxx' keys to each spot
	max_variables = 0
	for spot_data in all_spots_data:
		source_file = spot_data.get("SourceFile", "")
		# Clean the source file name and split into variables
		cleaned_filename = clean_filename(source_file)
		spot_data['SourceFile'] = cleaned_filename
		
		# Split the cleaned filename by underscores and add variables to the spot_data
		filename_parts = cleaned_filename.split('_')
		for i, part in enumerate(filename_parts):
			variable_key = 'Variable-%03d' % (i+1)  # String formatting for Python 2
			spot_data[variable_key] = part
		
		max_variables = max(max_variables, len(filename_parts))  # Track the maximum number of variables

	# Add 'Variable-xxx' to the column order
	for i in range(max_variables):
		column_order.append('Variable-%03d' % (i+1))

	# Group data by SourceFile
	grouped_data = {}
	for spot_data in all_spots_data:
		source_file = spot_data.get("SourceFile", "")
		cleaned_filename = clean_filename(source_file)
		spot_data['SourceFile'] = cleaned_filename
		if cleaned_filename not in grouped_data:
			grouped_data[cleaned_filename] = []
		grouped_data[cleaned_filename].append(spot_data)

	# Process and compute differences and displacements per SourceFile
	processed_data = []
	for source_file, spots in grouped_data.items():
		# Sort spots by Frame
		sorted_spots = sorted(spots, key=lambda x: int(x.get("FRAME", 0)))

		# Initialize reference and previous positions
		ref_x = ref_y = ref_z = None
		prev_x = prev_y = prev_z = None

		for i, spot in enumerate(sorted_spots):
			# Set reference positions only for the first frame (FRAME == 0)
			if i == 0 and int(spot.get("FRAME", -1)) == 0:
				if is_float(spot["POSITION_X"]):
					ref_x = float(spot["POSITION_X"])
				if is_float(spot["POSITION_Y"]):
					ref_y = float(spot["POSITION_Y"])
				if is_float(spot["POSITION_Z"]):
					ref_z = float(spot["POSITION_Z"])

			# Compute relative positions in nm (from the reference)
			x = float(spot["POSITION_X"]) if is_float(spot["POSITION_X"]) else None
			y = float(spot["POSITION_Y"]) if is_float(spot["POSITION_Y"]) else None
			z = float(spot["POSITION_Z"]) if is_float(spot["POSITION_Z"]) else None

			spot["X (nm)"] = (x - ref_x) * 1000 if x is not None and ref_x is not None else 0
			spot["Y (nm)"] = (y - ref_y) * 1000 if y is not None and ref_y is not None else 0
			spot["Z (nm)"] = (z - ref_z) * 1000 if z is not None and ref_z is not None else 0

			# Compute differences relative to the previous frame
			x_diff = y_diff = z_diff = 0  # Default differences are 0
			if prev_x is not None and x is not None:
				x_diff = (x - prev_x) * 1000  # Convert to nm
			if prev_y is not None and y is not None:
				y_diff = (y - prev_y) * 1000  # Convert to nm
			if prev_z is not None and z is not None:
				z_diff = (z - prev_z) * 1000  # Convert to nm

			# Compute 2D and 3D displacements
			spot["Displacement 2D (nm)"] = math.sqrt(x_diff ** 2 + y_diff ** 2)
			spot["Displacement 3D (nm)"] = math.sqrt(x_diff ** 2 + y_diff ** 2 + z_diff ** 2)
			spot['Time (sec)'] = float(spot["POSITION_T"]) if is_float(spot["POSITION_T"]) else None
			spot['Time (min)'] = spot['Time (sec)'] / 60.0 if spot['Time (sec)'] is not None else None

			# Update previous positions for the next iteration
			prev_x, prev_y, prev_z = x, y, z

			processed_data.append(spot)

	# Sort all processed data by SourceFile and Frame for final output
	final_data = sorted(processed_data, key=lambda x: (x.get("SourceFile", ""), int(x.get("FRAME", 0))))

	# Write the processed data to CSV (Python 2 compatible)
	with open(output_csv_path, mode='wb') as output_file:
		log_message("Writing Data to CSV...")
		writer = csv.DictWriter(output_file, fieldnames=column_order)
		writer.writeheader()
		for row in final_data:
			# Convert all values to UTF-8 if they are unicode
			writer.writerow({k: v.encode('utf-8') if isinstance(v, unicode) else v for k, v in row.items()})

	log_message("Merging and Processing Spots Data: Done.")




### Main Script here

# Create output directory on the user's desktop if it doesn't already exist
if not os.path.exists(OUTPUT_DIR):
    os.makedirs(OUTPUT_DIR)  # Create the output directory for saving files

# Initialize variables
image_from_folder = False  # Flag to track if the images are coming from a folder
open_images = WindowManager.getImageTitles()  # Get titles of all open images in ImageJ
all_spots_data = []  # Initialize an empty list to store all spot data

# Check if there are any open images in ImageJ
if open_images:
    # Loop through each open image and process it
    for index, image_title in enumerate(open_images):
        imp = WindowManager.getImage(image_title)  # Get the ImagePlus object for the current image
        image = WindowManager.getFrame(imp.getTitle())  # Get the image's frame
        image.toFront()  # Bring the image window to the front
        image_name = imp.getTitle()  # Get the title of the current image
        
        # For the first image, run the full processing pipeline
        if index == 0:
            trackmate_settings_user, nb_timepoints, SAVE_INDIVIDUAL_CSVs, spots_data = process_image(imp, SAVE_INDIVIDUAL_CSVs)  # Process the image with user settings
            IJ.log("Success processing " + str(image_name) + ".")  # Log success message
            all_spots_data.extend(spots_data)  # Add the spot data from this image to the list
        else:
            # For subsequent images, check if batch mode is enabled
            if trackmate_settings_user['StageRepositioning.TrackMate.BatchMode']:
                nb_detected_spots, nb_timepoints, spots_data = process_image_batch(imp, False, SAVE_INDIVIDUAL_CSVs)  # Process the image in batch mode
                if nb_detected_spots == nb_timepoints:  # Check if the number of detected spots matches the number of timepoints
                    IJ.log("Success in batch processing " + str(image_name) + ".")  # Log success message
                    all_spots_data.extend(spots_data)  # Add the spot data from this image to the list
                else:
                    log_message("Detection failed for " + str(image_name) + ".\nDetected spots " + str(nb_detected_spots) + ". Nb of Frame: " + str(nb_timepoints))  # Log failure message
                    trackmate_settings_user, nb_timepoints, SAVE_INDIVIDUAL_CSVs, spots_data = process_image(imp, SAVE_INDIVIDUAL_CSVs)  # Reprocess the image
                    all_spots_data.extend(spots_data)  # Add the spot data from this image to the list
                    IJ.log("Success processing " + str(image_name) + ".")  # Log success after reprocessing
            else:
                trackmate_settings_user, nb_timepoints, SAVE_INDIVIDUAL_CSVs, spots_data = process_image(imp, SAVE_INDIVIDUAL_CSVs)  # Process the image if batch mode is not enabled
                all_spots_data.extend(spots_data)  # Add the spot data from this image to the list
                IJ.log("Success processing " + str(image_name) + ".")  # Log success message
else:
    # If no images are open, prompt the user to select a folder containing images
    folder_input_path = select_folder(default_path=os.path.join(os.path.expanduser("~"), "Desktop"))  # Open folder selection dialog with default path to Desktop
    if folder_input_path:
        # Get a list of valid image files in the selected folder
        image_files = [
            f for f in os.listdir(folder_input_path)
            if f.lower().endswith(IMAGE_EXTENSIONS)  # Only include files with valid image extensions
        ]
        
        if not image_files:
            # Show error message if no valid image files were found in the selected folder
            JOptionPane.showMessageDialog(None, "No valid image files found in the selected folder.", "Error", JOptionPane.ERROR_MESSAGE)
            sys.exit(1)  # Exit the program
       
        image_from_folder = True  # Set flag indicating images are coming from a folder
        # Process each image in the folder
        for index, image_file in enumerate(image_files):
            imp = open_image_bioformats(os.path.join(folder_input_path, image_file))  # Open image using Bio-Formats
            if imp is None:
                continue  # Skip the image if it could not be opened
            image_name = imp.getTitle()  # Get the image title
            
            # For the first image, run the full processing pipeline
            if index == 0:
                trackmate_settings_user, nb_timepoints, SAVE_INDIVIDUAL_CSVs, spots_data = process_image(imp, SAVE_INDIVIDUAL_CSVs)  # Process the image with user settings
                all_spots_data.extend(spots_data)  # Add the spot data from this image to the list
                IJ.log("Success processing " + str(image_name) + ".")  # Log success message
                imp.close()  # Close the image after processing
            else:
                # For subsequent images, check if batch mode is enabled
                if trackmate_settings_user['StageRepositioning.TrackMate.BatchMode']:
                    nb_detected_spots, nb_timepoints, spots_data = process_image_batch(imp, False, SAVE_INDIVIDUAL_CSVs)  # Process the image in batch mode
                    if nb_detected_spots == nb_timepoints:  # Check if the number of detected spots matches the number of timepoints
                        all_spots_data.extend(spots_data)  # Add the spot data from this image to the list
                        IJ.log("Success batch processing " + str(image_name) + ".")  # Log success message for batch processing
                        imp.close()  # Close the image after processing
                    else:
                        IJ.log("Detection failed for " + image_name + ".\nDetected spots " + str(nb_detected_spots) + ". Nb of Frame: " + str(nb_timepoints))  # Log failure message
                        trackmate_settings_user, nb_timepoints, SAVE_INDIVIDUAL_CSVs, spots_data = process_image(imp, SAVE_INDIVIDUAL_CSVs)  # Reprocess the image
                        all_spots_data.extend(spots_data)  # Add the spot data from this image to the list
                        IJ.log("Success processing " + str(image_name) + ".")  # Log success after reprocessing
                        imp.close()  # Close the image after reprocessing
                else:
                    trackmate_settings_user, nb_timepoints, SAVE_INDIVIDUAL_CSVs, spots_data = process_image(imp, SAVE_INDIVIDUAL_CSVs)  # Process the image if batch mode is not enabled
                    all_spots_data.extend(spots_data)  # Add the spot data from this image to the list
                    IJ.log("Success processing " + str(image_name) + ".")  # Log success message
                    imp.close()  # Close the image after processing
    else:
        # Show error message if no folder is selected by the user
        JOptionPane.showMessageDialog(None, "No folder selected.", "Error", JOptionPane.ERROR_MESSAGE)
        sys.exit(1)  # Exit the program

# Determine the number of processed images
if open_images:
    Nb_Processed_Images = len(open_images)  # Count processed images if images were open in ImageJ
elif image_files:
    Nb_Processed_Images = len(image_files)  # Count processed images if images were from a folder
else:
    Nb_Processed_Images = 1  # Default to 1 if no images were processed

# Log the success message indicating the number of processed images
MessageProcessing = "Stage Repositioning with Batch TrackMate completed.\n" + format(Nb_Processed_Images) + " images have been processed successfully."
if SAVE_INDIVIDUAL_CSVs:
    MessageProcessing = MessageProcessing + "\nFiles are saved in " + str(OUTPUT_DIR)
IJ.log(MessageProcessing)  

# Generate a unique filepath for merging the CSV files
merged_data_output_path = generate_unique_filepath(OUTPUT_DIR, basename="Stage-Repositioning", suffix="Merged-Data", extension=".csv")
IJ.log("Merging and Processing CSV Files...")
merge_and_process_spots_data(all_spots_data, merged_data_output_path)  # Merge and process the collected spot data

# Notify the user of successful completion
MessageMerging = "Merging and processing completed successfully.\nMerged Data saved as {}".format(merged_data_output_path)
IJ.log(MessageMerging)
MessageFinal = MessageProcessing + "\n" + MessageMerging  # Combine messages for final notification
JOptionPane.showMessageDialog(None, MessageFinal, "Stage Repositioning with Batch TrackMate", JOptionPane.INFORMATION_MESSAGE)

This script automates the process of detecting and tracking spots using the TrackMate plugin for ImageJ/FIJI. To use it:

  • Drop the script into the FIJI toolbar and click Run.

If images are already opened:

  1. Prompt User for Configuration:

    • The user is prompted to configure settings such as enabling subpixel localization, adjusting spot diameter, setting the threshold, and applying median filtering. These settings can be loaded from previous configurations stored in ImageJ’s preferences or set to default values. It’s recommended to run TrackMate manually on a representative image to fine-tune detection parameters (ideally detecting one spot per frame).
  2. User Selection for Image Processing:

    • The user can choose to process all open images or just the active one.
  3. For Each Image:

    • Configure TrackMate Detector and Tracker: The TrackMate detector is configured using the Difference of Gaussians method, and the tracker is set to SparseLAPTracker.
    • Analyze and Filter Tracks: Features are analyzed, and tracks are filtered according to user-defined settings.
    • Process Image: The image is processed, spots are tracked, and results are visualized in the active window.
    • Export Spot Data: Detected spot data is exported to a uniquely named CSV file stored in an "Output" directory on the desktop.
    • Save Settings: User-defined settings are saved to ImageJ’s preferences for future use.
  4. Summary:

    • A dialog is displayed summarizing the number of processed images and the location of the saved output.

If no image is opened:

  1. Prompt User to Select Folder:

    • If no image is open, the user is prompted to select a folder containing images for processing.
  2. Process the First Image:

    • The first image is opened, and the processing workflow is the same as described for open images. The user can choose to process just this image or all images in the folder.
  3. For Subsequent Images:

    • Batch Mode Handling:
      • If batch mode is enabled, images are processed without additional user input, and the settings are applied automatically.
      • If batch mode is not enabled, the user is prompted to configure settings for each subsequent image.
  4. Summary:

    • A dialog is displayed summarizing the number of processed images and the location of the saved output.

Post-Processing:

  • A summary message is logged, detailing the number of processed images and the location where the CSV files are saved.
  • The script merges and processes CSV files into a single "Merged-Data" file stored in the output directory.
  • The user is notified when the merging and processing are completed successfully.

From v7, the FIJI script performs all the necessary tasks.

Prior to v7, the script generated a CSV file for each image; these files can be aggregated for further analysis using the accompanying R script, Process Stage Repositioning Results.R. This R script processes all CSV files in a selected folder and saves the merged result as Stage-Repositioning_Merged-Data.csv in an "Output" folder on the user's desktop for streamlined data analysis.

Process Stage Repositioning Results
# Load and install necessary libraries at the beginning
rm(list = ls())

# Check and install required packages if they are not already installed
if (!require(dplyr)) install.packages("dplyr", dependencies = TRUE)
if (!require(stringr)) install.packages("stringr", dependencies = TRUE)
if (!require(ggplot2)) install.packages("ggplot2", dependencies = TRUE)
if (!require(tcltk)) install.packages("tcltk", dependencies = TRUE)

# Load libraries
library(dplyr)
library(stringr)
library(ggplot2)
library(tcltk)

# Set default input and output directories
default_input_dir <- file.path(Sys.getenv("USERPROFILE"), "Desktop")  # Default to the user's Desktop
InputFolder <- tclvalue(tkchooseDirectory(initialdir = default_input_dir, title = "Select a folder containing CSV files"))

#InputFolder <- default_input_dir  # Use default folder

# Specify the Output folder (this is a fixed folder on the Desktop)
OutputFolder <- file.path(Sys.getenv("USERPROFILE"), "Desktop", "Output")
if (!dir.exists(OutputFolder)) dir.create(OutputFolder, recursive = TRUE)

# List all CSV files in the folder
csv_files <- list.files(path = InputFolder, pattern = "\\.csv$", full.names = TRUE)
if (length(csv_files) == 0) stop("No CSV files found in the selected directory.")

header <- names(read.csv(csv_files[1], nrows = 1))

# Function to clean the filename
clean_filename <- function(filename) {
  filename_parts <- strsplit(filename, " - ")[[1]]
  cleaned_filename <- sub("\\.czi", "", filename_parts[1])
  cleaned_filename <- sub("_spots", "", cleaned_filename)
  cleaned_filename <- sub("\\.csv", "", cleaned_filename)
  return(cleaned_filename)
}

# Read and merge all CSV files
merged_data <- csv_files %>%
  lapply(function(file) {
    data <- read.csv(file, skip = 4, header = FALSE)
    colnames(data) <- header
    data <- data %>% arrange(FRAME)
    
    # Clean and add source file info
    filename <- basename(file)
    data$SourceFile <- clean_filename(filename)
    
    # Extract variables from the filename
    filename_parts <- strsplit(clean_filename(filename), "_")[[1]]
    for (i in seq_along(filename_parts)) {
      variable_name <- paste("Variable-", sprintf("%03d", i), sep = "")
      data[[variable_name]] <- filename_parts[i]
    }
    
    # Add time columns if available
    if ("POSITION_T" %in% colnames(data)) {
      data$`Time (sec)` <- round(data$POSITION_T, 0)
      data$`Time (min)` <- round(data$`Time (sec)` / 60, 2)
    }
    
    # Calculate displacement columns (X, Y, Z, 3D, 2D)
    if ("POSITION_X" %in% colnames(data)) {
      first_value <- data$POSITION_X[data$FRAME == 0][[1]]
      data$`X (nm)` <- (data$POSITION_X - first_value) * 1000
    }
    if ("POSITION_Y" %in% colnames(data)) {
      first_value <- data$POSITION_Y[data$FRAME == 0][[1]]
      data$`Y (nm)` <- (data$POSITION_Y - first_value) * 1000
    }
    if ("POSITION_Z" %in% colnames(data)) {
      first_value <- data$POSITION_Z[data$FRAME == 0][[1]]
      data$`Z (nm)` <- (data$POSITION_Z - first_value) * 1000
    }
    
    # Calculate displacement (3D and 2D)
    if (all(c("X (nm)", "Y (nm)", "Z (nm)") %in% colnames(data))) {
      data$`Displacement 3D (nm)` <- sqrt(
        diff(c(0, data$`X (nm)`))^2 + 
          diff(c(0, data$`Y (nm)`))^2 + 
          diff(c(0, data$`Z (nm)`))^2
      )
      data$`Displacement 3D (nm)`[1] <- 0
    }
    if (all(c("X (nm)", "Y (nm)") %in% colnames(data))) {
      data$`Displacement 2D (nm)` <- sqrt(
        diff(c(0, data$`X (nm)`))^2 + 
          diff(c(0, data$`Y (nm)`))^2
      )
      data$`Displacement 2D (nm)`[1] <- 0
    }
    
    return(data)
  }) %>%
  bind_rows()

# Save the merged data to a new CSV file
output_file <- "Stage-Repositioning_Merged-Data.csv"
write.csv(merged_data, file = file.path(OutputFolder, output_file), row.names = FALSE)

cat("All CSV files have been merged and saved to", output_file, "\n")

This R code automates the processing of multiple CSV files containing spot tracking data.

  1. Installs and loads required libraries
  2. Sets directories for input (CSV files) and output (merged data) on the user's desktop.
  3. Lists all CSV files in the input directory and reads in the header from the first CSV file.
  4. Defines a filename cleaning function to extract relevant metadata from the filenames (e.g., removing extensions, and extracting variables).
  5. Reads and processes each CSV file:
    • Skips initial rows and assigns column names.
    • Cleans up filenames and adds them to the dataset.
    • Calculates displacement in the X, Y, and Z axes relative to the initial position with the formula PositionRelative = Position - PositionInitial, and computes both 2D and 3D displacement values between consecutive frames with the following formulas: 2D_Displacement = Sqrt( (X2-X1)² + (Y2-Y1)² ); 3D_Displacement = Sqrt( (X2-X1)² + (Y2-Y1)² + (Z2-Z1)² ). A short worked example follows this list.
  6. Merges all the processed data into a single dataframe.
  7. Saves the results as Stage-Repositioning_Merged-Data.csv in an Output folder on the user's desktop.
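
For illustration, using made-up coordinates: if a bead sits at X = 100.000 µm in Frame 0 and at X = 100.050 µm in a later frame, the relative position is X (nm) = (100.050 - 100.000) x 1000 = 50 nm; and if the frame-to-frame differences are 30 nm in X and 40 nm in Y, the 2D displacement is Sqrt( 30² + 40² ) = 50 nm.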

This script generates a single CSV file that can be further processed and summarized with a pivot table, as shown in the following spreadsheet: Stage-Repositioning_Template.xlsx

Using the first frame as a reference, we can plot the average XYZ position for each frame.

 

As observed earlier, there is a significant displacement between Frame 0 and Frame 1, particularly along the X-axis. For this analysis, we will exclude the first two frames and focus on the variables of interest: (i) traveled distance, (ii) speed, and (iii) acceleration. We will come back to the initial shift later.

 Repositioning Dispersion: Impact of Traveled Distance

Results

Plot the 2D displacement versus the frame number for each condition of traveled distance.


With the first two frames ignored, the data now looks good. We can calculate the average of the standard deviation of the 2D displacement and plot these values against the traveled distance.


 We observe a power-law relationship, described by the equation: Repositioning Dispersion = 8.2 x Traveled Distance^0.2473

Traveled Distance (µm)	Repositioning Dispersion (nm)
0	4
1	6
10	20
100	19
1000	76
10000	56
30000	107
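
As a minimal sketch of how such a power-law fit can be reproduced outside the spreadsheet (standalone Python with NumPy; the dispersion values are taken from the table above, and the 0 µm point is excluded because its logarithm is undefined, so the fitted coefficients may differ slightly from the equation quoted above):

import numpy as np

# Mean repositioning dispersion (nm) per traveled distance (µm), taken from the table above.
distance   = np.array([1, 10, 100, 1000, 10000, 30000], dtype=float)  # 0 µm excluded (log undefined)
dispersion = np.array([6, 20, 19, 76, 56, 107], dtype=float)

# Fit Dispersion = a x Distance^b as a straight line in log-log space.
b, log_a = np.polyfit(np.log(distance), np.log(dispersion), 1)
a = np.exp(log_a)
print("Repositioning Dispersion = %.1f x Traveled Distance^%.4f" % (a, b))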

Conclusion

In conclusion, we observe that the traveled distance significantly affects the repositioning dispersion (one-way ANOVA). However, this dispersion remains much lower than the lateral resolution of the system (705 nm).

Repositioning Dispersion: Impact of Speed and Acceleration

Results

Generate a plot of the 2D displacement as a function of frame number for each combination of Speed and Acceleration conditions. This visualization will help assess the relationship between displacement and time across the different experimental settings.


As noted earlier, there is a significant displacement between Frame 0 and Frame 1, particularly along the X-axis (600 nm) and, to a lesser extent, the Y-axis (280 nm). To refine our analysis, we will exclude the first two frames and focus on the key variables of interest: (i) Speed and (ii) Acceleration. To better understand the system's behavior, we will visualize the average standard deviation of the 2D displacement for each combination of Speed and Acceleration conditions.

Our observations indicate that both Acceleration and Speed contribute to an increase in 2D repositioning dispersion. However, a two-way ANOVA reveals that only Speed has a statistically significant effect on 2D repositioning dispersion. Post-hoc analysis further demonstrates that the dispersion for the Speed-Fast, Acc-High condition is significantly greater than that of the Speed-Low, Acc-Low condition.


Condition	2D Repositioning Dispersion (nm)
Speed-Slow Acc-Low	32
Speed-Slow Acc-High	49
Speed-Fast Acc-Low	54
Speed-Fast Acc-High	78
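
For readers who want to reproduce this kind of test, here is a minimal sketch of a two-way ANOVA on the merged data (standalone Python with pandas and statsmodels; the Speed and Acceleration columns are hypothetical and assumed to have been derived from the relevant Variable-00x filename columns, while the other column names are those written by the merging step described above):

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("Stage-Repositioning_Merged-Data.csv")
df = df[df["FRAME"] >= 2]  # exclude the first two frames, as discussed above

# One dispersion value (SD of the 2D displacement) per acquisition.
# 'Speed' and 'Acceleration' are assumed columns derived from the filename variables.
disp = df.groupby(["SourceFile", "Speed", "Acceleration"])["Displacement 2D (nm)"].std().reset_index(name="Dispersion")

model = ols("Dispersion ~ C(Speed) * C(Acceleration)", data=disp).fit()
print(sm.stats.anova_lm(model, typ=2))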

Conclusion

In conclusion, we observe that Speed, but not Acceleration, increases the 2D repositioning dispersion.

What about the initial shift?

Right, I almost forgot about that. See below.

Results

Plotting the 3D displacement for each tested condition from the previous data.

We observe a single outlying point that corresponds to the displacement between Frame 0 and Frame 1. This leads me to hypothesize that the discrepancy may be related to the stage's dual motors, each controlling a separate axis (X and Y). Each motor operates in two directions (positive and negative). Since the shift occurs only at the first frame, this likely relates to how the experiment is initiated.

To explore this further, I decided to test whether these variables significantly impact the repositioning. We followed the XYZ repositioning dispersion protocol, testing the following parameters:

  • Distance: 1000 µm
  • Speed: 100%
  • Acceleration: 100%
  • Axis: X, Y, XY
  • Starting Point: Centered (on target), Positive (shifted positively from the reference position), Negative (shifted negatively from the reference position) 
  • For each condition, three datasets were acquired.

The data in Stage-Repositining_Diagnostic-Data.xlsx was processed as described above, and we plotted the 2D displacement as a function of the frame number for each condition.

When moving along the X-axis only, we observe a shift in displacement when the starting position is either centered or positively shifted, but no shift occurs when the starting position is negatively shifted. This suggests that the behavior of the stage’s motor or the initialization of the experiment may be affected by the direction of the shift relative to the reference position, specifically when moving in the positive direction.

When moving along the Y-axis only, we observe a shift in displacement when the starting position is positively shifted, but no shift occurs when the starting position is either centered or negatively shifted. This indicates that the stage's motor behavior or initialization may be influenced by the direction of the shift, particularly when starting from a positive offset relative to the reference position.

When moving along both the X and Y axes simultaneously, a shift is observed when the starting position is centered. This shift becomes more pronounced when the starting position is positively shifted in any combination of the X and Y axes (+X+Y, +X-Y, -X+Y). However, the shift is reduced when the starting position is negatively shifted along both axes.

Conclusion

In conclusion, the observed shifts in repositioning dispersion are influenced by the initial starting position of the stage. When moving along the X and Y axes simultaneously, the shift is most significant when the starting position is centered and increases with positive shifts in both axes. Conversely, when the starting position is negatively shifted along both axes, the shift is reduced. These findings suggest that the initialization of the stage's position plays a crucial role in the accuracy of movement.

Channel Co-Alignment

Channel co-alignment or co-registration refers to the process of aligning image data collected from multiple channels. This ensures that signals originating from the same location in the sample are correctly overlaid. This process is essential in multi-channel imaging to maintain spatial accuracy and avoid misinterpretation of co-localized signals. For a comprehensive guide on Channel Co-Registration, refer to the Chromatic Aberration and Co-Registration protocol by the QuaRep Working Group 04.

Acquisition protocol

  1. Place 4 µm diameter fluorescent beads (TetraSpeck™ Fluorescent Microspheres Size Kit, mounted on a slide) on the microscope stage.

  2. Center an isolated bead under a high-NA dry objective.

  3. Crop the acquisition area so that only one bead is visible, but keep it large enough to accommodate a potential chromatic shift along the XY and Z axes.
  4. Acquire a multi-channel Z-stack

    I usually acquire all available channels, with the understanding that no more than seven channels can be processed simultaneously. If you have more than seven channels, split them into sets of seven or fewer, keeping one reference channel in every set.

  5. Acquire 3 datasets

  6. Repeat for each objective

 Processing

You should have acquired several multi-channel images that now need processing to yield meaningful results. To process them, use the Channel Co-registration analysis feature of the MetroloJ_QC plugin for FIJI. For more information about the MetroloJ_QC plugin, please refer to the manual available from MontpellierRessourcesImagerie on GitHub.

  1. Open FIJI.
  2. Load your image by dragging it into the FIJI bar.
  3. Launch the MetroloJ_QC plugin by navigating to Plugins > MetroloJ QC.
  4. Click on Channel Co-registration Report.
  5. Enter a Title for your report.
  6. Type in your Name.
  7. Click Microscope Acquisition Parameters and enter:
    1. The imaging modality (Widefield, Confocal, Spinning-Disk, Multi-photon) 
    2. The Numerical Aperture of the Objective used.
    3. The Refractive index of the immersion medium (Air: 1.0; Water: 1.33; Oil: 1.515).
    4. The Emission Wavelength in nm for each channel.
  8. Click OK

  9. Click Bead Detection Options
    1. Bead Detection Threshold: Legacy
    2. Center Method: Legacy Fit ellipses
  10. Enable Apply Tolerance to the Report and Reject co-registration ratio above 1.

  11. Click File Save Options.

  12. Select Save Results as PDF Reports.

  13. Select Save Results as spreadsheets.

  14. Select Save result images.
  15. Select Open individual pdf report(s).
  16. Click OK.

  17. Repeat steps 4 through 16 for each image you have acquired

 Good to know: Settings are preserved between Channel Co-alignment sessions. You may want to process images with the same objective and same channels together.

This will generate detailed results stored in a folder named Processed. The Processed folder will be located in the same directory as the original images, with each report and result saved in its own sub-folder.

The following R script, Process Channel Co-Registration Results.R, processes the channel co-alignment results generated by the MetroloJ_QC Channel Co-registration function.

To use it, simply drag and drop the file into the R interface. You may also open it with RStudio and click the Source button. The script will:

  • Prompt the user to select an input folder.
  • Load all _results.xls files located in the selected folder and its subfolders
  • Process and merge all results generated by the MetroloJ_QC Channel Co-registration.
  • Split the filenames using the _ character as a separator and create columns named Variable-001, Variable-002, etc. This will help in organizing the data if the filenames are formatted as Variable-A_Variable-B, for example.
  • Save the result as Channel_Co-Registration_Merged-Data.csv in an Output folder on the user's Desktop
Process Channel Co-Registration Results
# Clear the workspace
rm(list = ls())

# Ensure required packages are installed and loaded
if (!require(tcltk)) install.packages("tcltk", dependencies = TRUE)
library(tcltk)
if (!require(reshape2)) install.packages("reshape2", dependencies = TRUE)
library(reshape2)



# Set default input and output directories
default_input_dir <- file.path(Sys.getenv("USERPROFILE"), "Desktop", "Input")  # Default to "Input" on Desktop
#InputFolder <- default_input_dir  # Uncomment to use the default folder instead of the chooser
InputFolder <- tclvalue(tkchooseDirectory(initialdir = default_input_dir, title = "Select a folder containing XLS files"))

OutputFolder <- file.path(Sys.getenv("USERPROFILE"), "Desktop", "Output")
if (!dir.exists(OutputFolder)) dir.create(OutputFolder, recursive = TRUE)

# List all XLS files with "_results.xls" in the folder and subfolders
xls_files <- list.files(path = InputFolder, pattern = "_results.xls$", full.names = TRUE, recursive = TRUE)
if (length(xls_files) == 0) stop("No _results.xls files found in the selected directory.")

# Initialize lists for storing data
ratio_data_tables <- list()
pixel_shift_tables <- list()

# Process each XLS file
for (file in xls_files) {
  lines <- readLines(file, encoding = "latin1")
  
  # Extract "Ratios" data
  ratios_line_index <- grep("Ratios", lines)
  if (length(ratios_line_index) == 0) {
    cat("No 'Ratios' line found in file:", file, "\n")
    next
  }
  
  start_row <- ratios_line_index[1] + 1
  relevant_lines <- lines[start_row:length(lines)]
  
  channel_start_index <- which(grepl("^Channel", relevant_lines))[1]
  if (is.na(channel_start_index)) {
    cat("No rows starting with 'Channel' found after 'Ratios' in file:", file, "\n")
    next
  }
  
  contiguous_rows <- relevant_lines[channel_start_index]
  for (i in (channel_start_index + 1):length(relevant_lines)) {
    if (grepl("^Channel", relevant_lines[i])) {
      contiguous_rows <- c(contiguous_rows, relevant_lines[i])
    } else {
      break
    }
  }
  
  if (length(contiguous_rows) > 0) {
    # Read the ratio_data table from contiguous rows
    ratio_data <- read.table(text = paste(contiguous_rows, collapse = "\n"), sep = "\t", header = FALSE, stringsAsFactors = FALSE)
    
    # Extract channel names and matrix of values
    channels <- ratio_data$V1
    values <- ratio_data[, -1]
    colnames(values) <- channels
    rownames(values) <- channels
    
    # Melt the matrix into a long-format table
    ratio_data_formatted <- melt(as.matrix(values), varnames = c("Channel_1", "Channel_2"), value.name = "Ratio")
    ratio_data_formatted$Pairing <- paste(ratio_data_formatted$Channel_1, "x", ratio_data_formatted$Channel_2)
    
    # Add filename and split into parts
    filename_no_extension <- tools::file_path_sans_ext(basename(file))
    ratio_data_formatted$Filename <- filename_no_extension
    
    filename_parts <- strsplit(ratio_data_formatted$Filename, "_")
    max_parts <- max(sapply(filename_parts, length))
    for (i in 1:max_parts) {
      new_column_name <- paste0("Variable-", sprintf("%03d", i))
      ratio_data_formatted[[new_column_name]] <- sapply(filename_parts, function(x) ifelse(length(x) >= i, x[i], NA))
    }
    
    # Add channel name pairings
    channel_names <- unlist(strsplit(ratio_data_formatted$`Variable-005`[1], "-"))
    ratio_data_formatted$Channel_Name_Pairing <- apply(ratio_data_formatted, 1, function(row) {
      ch1_index <- as.integer(gsub("Channel ", "", row["Channel_1"])) + 1
      ch2_index <- as.integer(gsub("Channel ", "", row["Channel_2"])) + 1
      paste(channel_names[ch1_index], "x", channel_names[ch2_index])
    })
    # Filter columns for ratio_data: Channel_1, Channel_2, Channel_Pairing, Ratio, Filename, Variable-001, Variable-002 ..., Channel_Name_Pairing
    ratio_data_formatted <- ratio_data_formatted[, c("Channel_1", "Channel_2", "Pairing", "Ratio", "Filename", 
                                                     paste0("Variable-", sprintf("%03d", 1:7)), "Channel_Name_Pairing")]
    
    ratio_data_tables[[basename(file)]] <- ratio_data_formatted
  } else {
    cat("No contiguous 'Channel' rows found to convert into a data frame.\n")
  }
  
  # Extract "Pixel Shifts" data
  pixelShifts_line_index <- grep("pixelShifts", lines)
  if (length(pixelShifts_line_index) > 0) {
    start_row_pixel_shifts <- pixelShifts_line_index[1] + 1
    relevant_pixel_shifts_lines <- lines[start_row_pixel_shifts:length(lines)]
    pixel_shift_rows <- character(0)
    
    for (line in relevant_pixel_shifts_lines) {
      columns <- strsplit(line, "\t")[[1]]
      if (length(columns) >= 2 && grepl("shift$", columns[2])) {
        pixel_shift_rows <- c(pixel_shift_rows, line)
      }
    }
    
    if (length(pixel_shift_rows) > 0) {
      pixel_shift_data <- read.table(text = paste(pixel_shift_rows, collapse = "\n"), sep = "\t", header = FALSE, stringsAsFactors = FALSE)
      # Set the first column name to "Channel"
      colnames(pixel_shift_data)[1] <- "Channel"
      # Extract unique channel names from the `Channel` column
      # Filter out empty strings and get unique channel values
      unique_channels <- unique(pixel_shift_data$Channel[pixel_shift_data$Channel != ""])
      
      # Update column names: Keep "Channel" and "Axis", then use `unique_channels` for the rest
      new_colnames <- c("Channel", "Axis", unique_channels)
      
      # Update the column names of the data frame
      colnames(pixel_shift_data) <- new_colnames
      
      for (i in 2:nrow(pixel_shift_data)) {
        if (is.na(pixel_shift_data$Channel[i]) || pixel_shift_data$Channel[i] == "") {
          pixel_shift_data$Channel[i] <- pixel_shift_data$Channel[i - 1]
        }
      }
      
      # Melt the pixel_shift_data into long format
      pixel_shift_long <- melt(pixel_shift_data, id.vars = c("Channel", "Axis"), variable.name = "Channel_2", value.name = "Pixel_Shift")
      colnames(pixel_shift_long)[1]<-"Channel_1"
      
      # Create the Channel Pairing column
      pixel_shift_long$Pairing <- paste(pixel_shift_long$Channel_1, "x", pixel_shift_long$Channel_2)
      
      # Filter data into separate X, Y, Z based on Axis
      x_shifts <- subset(pixel_shift_long, Axis == "X shift", select = c(Pairing, Pixel_Shift))
      y_shifts <- subset(pixel_shift_long, Axis == "Y shift", select = c(Pairing, Pixel_Shift))
      z_shifts <- subset(pixel_shift_long, Axis == "Z shift", select = c(Pairing, Pixel_Shift))
      
      # Rename Pixel_Shift columns for clarity
      colnames(x_shifts)[2] <- "X"
      colnames(y_shifts)[2] <- "Y"
      colnames(z_shifts)[2] <- "Z"
      
      # Merge X, Y, Z shifts into a single data frame
      pixel_shift_data_formatted <- Reduce(function(x, y) merge(x, y, by = "Pairing", all = TRUE),
                                           list(x_shifts, y_shifts, z_shifts))
      
      
      pixel_shift_data_formatted$Filename <- filename_no_extension
      for (i in 1:max_parts) {
        new_column_name <- paste0("Variable-", sprintf("%03d", i))
        pixel_shift_data_formatted[[new_column_name]] <- sapply(filename_parts, function(x) ifelse(length(x) >= i, x[i], NA))
      }
      
      
      # Split Variable-005 into individual channel names
      channel_names <- strsplit(pixel_shift_data_formatted$`Variable-005`, split = "-")
      
      # Create a function to match channel names to Channel_Pairing
      get_channel_pairing_name <- function(pairing, names_list) {
        # Extract the channel numbers from the pairing
        channel_indices <- as.numeric(gsub("Channel ", "", unlist(strsplit(pairing, " x "))))
        # Return the corresponding names from names_list
        paste(names_list[channel_indices + 1], collapse = " x ")
      }
      
      # Apply the function to generate the Channel_Pairing_Name column
      pixel_shift_data_formatted$Channel_Name_Pairing <- mapply(
        get_channel_pairing_name,
        pairing = pixel_shift_data_formatted$Pairing,
        MoreArgs = list(names_list = unlist(channel_names[1]))
      )
      # Split Pairing into Channel_1 and Channel_2 properly
      Single_channel_nb <- strsplit(pixel_shift_data_formatted$Pairing, split = " x ")
      
      # Assign Channel_1 and Channel_2
      pixel_shift_data_formatted$Channel_1 <- sapply(Single_channel_nb, `[`, 1)
      pixel_shift_data_formatted$Channel_2 <- sapply(Single_channel_nb, `[`, 2)
      
      
      
      pixel_shift_tables[[basename(file)]] <- pixel_shift_data_formatted
    }
  }
}

# Combine and save data
combined_ratio_data <- do.call(rbind, ratio_data_tables)
output_ratio_file <- file.path(OutputFolder, "Ratio_Data.csv")
#write.csv(combined_ratio_data, output_ratio_file, row.names = FALSE)

combined_pixel_shift_data <- do.call(rbind, pixel_shift_tables)
output_pixel_shift_file <- file.path(OutputFolder, "Pixel_Shift_Data.csv")
#write.csv(combined_pixel_shift_data, output_pixel_shift_file, row.names = FALSE)

merge_columns <- c("Filename", "Channel_Name_Pairing", "Channel_1","Channel_2","Pairing", paste0("Variable-", sprintf("%03d", 1:7)))
combined_data <- merge(combined_ratio_data, combined_pixel_shift_data, by = merge_columns, all = TRUE, suffixes = c("_Ratio", "_PixelShift"))

# Split Pairing into Channel_1 and Channel_2 properly
Single_channel_Name <- strsplit(combined_data$Channel_Name_Pairing, split = " x ")

# Assign Channel_1 and Channel_2
combined_data$Channel_1_Name <- sapply(Single_channel_Name, `[`, 1)
combined_data$Channel_2_Name <- sapply(Single_channel_Name, `[`, 2)

# Function to create unique filenames by incrementing the suffix
get_unique_filename <- function(base_path) {
  count <- 0
  new_file <- base_path
  while (file.exists(new_file)) {
    count <- count + 1
    new_file <- paste0(sub("\\.csv$", "", base_path), sprintf("-%03d.csv", count))
  }
  return(new_file)
}

output_combined_file <- file.path(OutputFolder, "Channel_Co-Registration_Merged-Data.csv")
output_combined_file <- get_unique_filename(output_combined_file)  # Ensure unique filename
write.csv(combined_data, output_combined_file, row.names = FALSE)

 

After much work, I ended up writing my own script for ImageJ/FIJI to detect and compute Channel Registration Data. This script is available here Channel_Registration_Batch-TrackMate_v12.py

To use it, simply drop the script into the FIJI toolbar and click Run.

  1. If images are already opened it will process them

  2. If no images are opened it will prompt to select a folder and will process all images available in the folder and subfolders

  3. It reads files metadata or load information from previously run instances

  4. The first image is pre-detected and the settings are displayed in a dialog

  5. The dialog will keep being displayed until the detection is properly done (1 spot per channel)

  6. Once the detection is correct it will compute the channel registration information
  7. Data is saved as CSV files Channel-Registration_Essential-Data_Merged and Channel-Registration_Full-Data_Merged in an Output directory on the user desktop
  8. In batch mode, the settings set up for the first image are re-used for subsequent images unless a discrepancy is found or detection fails

Importantly, this script does not use exactly the same method as the MetroloJ_QC Channel Registration algorithm:

The Nyquist sampling Ratio and the Resolution are calculated for both the Lateral and Axial axes in the same way as MetroloJ_QC. However, if the Nyquist sampling Ratio is above 1, we compute a Maximal Achievable Resolution (named Lateral Practical Resolution and Axial Practical Resolution) by multiplying the Theoretical Resolution by the Nyquist Ratio.

We use the Practical Resolutions to compute the Colocalization Ratios. We calculate 3 ratios (a minimal sketch of the calculation follows this list):

  • Lateral Colocalization Ratio: the distance in the XY plane between the spots identified in Ch2 and Ch1, divided by half of the Practical Lateral Resolution.
    RatioLateral = Dxy / ( PracticalResolutionLateral / 2 )
  • Axial Colocalization Ratio: the distance along the Z axis between the spots identified in Ch2 and Ch1, divided by half of the Practical Axial Resolution.
    RatioAxial = Dz / ( PracticalResolutionAxial / 2 )
  • 3D Colocalization Ratio: the 3D distance between the spots identified in Ch2 and Ch1, divided by a Reference Distance. The Reference Distance is the length between Spot 1 and a point projected along the Spot 1 - Spot 2 line onto the ellipse centered on Spot 1, whose semi-minor axis is half of the Practical Lateral Resolution and whose semi-major axis is half of the Practical Axial Resolution. The coordinates of the Reference Point are computed iteratively by moving along the Spot 1 - Spot 2 line and minimizing the distance to the ellipse surface; the search stops when the point lies on the surface.
    Ratio3D = Distance3D / DistanceRef
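
To make the three formulas concrete, here is a minimal Python sketch of the calculation for a single pair of spots. It is not an excerpt from Channel_Registration_Batch-TrackMate_v12.py: the bisection search, the variable names and all numeric values are illustrative assumptions.

# Minimal sketch (hypothetical values): colocalization ratios for one pair of spots
import math

def reference_distance(dxy, dz, lat_res, ax_res, tol=1e-6):
    """Distance from Spot 1 to the point where the Spot1-Spot2 line meets the
    ellipse centred on Spot 1 (semi-minor axis = lat_res/2, semi-major = ax_res/2)."""
    a, b = lat_res / 2.0, ax_res / 2.0
    d3d = math.hypot(dxy, dz)
    ux, uz = dxy / d3d, dz / d3d              # direction split into lateral / axial parts
    # Move along the Spot1-Spot2 line until the point sits on the ellipse,
    # i.e. (t*ux/a)^2 + (t*uz/b)^2 = 1 (a simple bisection mirrors the iterative search)
    lo, hi = 0.0, max(a, b)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if (mid * ux / a) ** 2 + (mid * uz / b) ** 2 < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Practical resolutions: theoretical resolution scaled by the Nyquist ratio when that
# ratio is above 1 (theoretical values and ratios here are made up)
lat_res = 0.50 * max(1.0, 1.2)   # µm
ax_res = 1.80 * max(1.0, 1.1)    # µm

# Hypothetical Ch2 - Ch1 distances measured on one bead, in µm
dxy, dz = 0.25, 1.40
d3d = math.hypot(dxy, dz)

ratio_lateral = dxy / (lat_res / 2.0)
ratio_axial = dz / (ax_res / 2.0)
ratio_3d = d3d / reference_distance(dxy, dz, lat_res, ax_res)
print(ratio_lateral, ratio_axial, ratio_3d)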

Metrics

  • 3D Colocalization Ratio: A ratio above 1 indicates that the spot of Channel 2 lies further from the spot of Channel 1 than the effective resolution of the system. The Lateral and Axial Colocalization Ratios should then be checked.
  • Lateral and Axial Colocalization Ratios: A ratio above 1 indicates that the spot of Channel 2 lies further from the spot of Channel 1 than the effective resolution of the system along the corresponding axis.
    • A Lateral Colocalization Ratio above 1 indicates that the Ch2 image should be shifted in X and Y by the values indicated in the Pixel Shift Table.
    • An Axial Colocalization Ratio above 1 indicates that the Ch2 image should be shifted in Z by the value indicated in the Pixel Shift Table.

This method is the approach I am using here.

Results

The following spreadsheet provides a dataset that can be manipulated with a pivot table to generate informative graphs and statistics: Channel_Co-registration_Template.xlsx.

Metrics

  • The Nyquist ratio evaluates how closely the images align with the Nyquist sampling criterion. It is calculated as: Nyquist Ratio = Pixel Dimension / Nyquist Dimension (a short worked example follows this list).
    • A ratio of 1 indicates that the image acquisition complies with the Nyquist criterion.
    • A ratio above 1 signifies that the pixel dimensions of the image exceed the Nyquist criterion.
    • A ratio below 1 is the desired outcome, as it ensures proper sampling.
  • The Co-Registration Ratios measure the spatial alignment between two channels by comparing the distance between the centers of corresponding beads in both channels to a reference distance. The reference distance is defined as the size of the fitted ellipse around the bead in the first channel.
    • A ratio of 1 means the center of the bead in the second channel is located on the edge of the ellipse fitted around the bead in the first channel.
    • A ratio above 1 indicates the center of the bead in the second channel lies outside the ellipse around the first channel's bead center.
    • A ratio below 1 is the desired outcome, indicating that the center of the bead in the second channel is within a range smaller than the system's 3D resolution.
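
Here is the short worked example announced above, with hypothetical numbers that are not taken from the dataset presented here:

pixel_dimension_um = 0.65    # assumed lateral pixel size of the acquisition
nyquist_dimension_um = 0.60  # assumed Nyquist-compliant pixel size for this objective and wavelength
nyquist_ratio = pixel_dimension_um / nyquist_dimension_um
print(round(nyquist_ratio, 2))  # ~1.08, above 1: the acquisition slightly exceeds the Nyquist criterion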

This method is the approach used in the MetroloJ QC Channel Co-Registration function.

Let's look at the 3D Colocalization Ratio for all pairs of channels.

For the 2x Objective, we see that the 3D Colocalization Ratio is above 1 for the DAPI x GFP and DAPI x Cy5 pairs. This indicates that the chromatic shift is larger than the effective resolution of the system. A correction should be applied to the images after acquisition. It is sometimes possible to correct it before acquisition, directly in the acquisition software. The correction values are provided by the Pixel Shift tables. Highlighted values correspond to a 3D Colocalization Ratio above 1.

 

These results show a widefield instrument using a quadband pass filter: a single cube filtering 4 wavelengths. This instrument also possesses individual filter cubes; with those, the Colocalization Ratios are unsurprisingly higher because of the mechanical shift induced by the filter turret.

With the corresponding Pixel Shift Table





Why should you care? Well, when you are acquiring a multi-channel image you might see a significant shift between two channels. This is particularly true for the combination of the DAPI and Cy3 channels with the 10x Objective.


Report the Pixel Shift Table for each objective and each filter combination. This table can (and should) be used to correct a multi-channel image by displacing Channel 2 relative to Channel 1 by the XYZ pixel values indicated (see the sketch after the table).




Pixel Shift Table (values are the shift of Channel_2, given by the column, relative to Channel_1, given by the row, in pixels):

| Objective | Channel_1 | Axis | DAPI  | GFP   | Cy3   | Cy5   |
|-----------|-----------|------|-------|-------|-------|-------|
| 2x        | DAPI      | X    |       | 0.89  | -0.14 | -0.35 |
| 2x        | DAPI      | Y    |       | 0.19  | 1.63  | 2.00  |
| 2x        | DAPI      | Z    |       | 0.89  | 3.67  | 1.58  |
| 2x        | GFP       | X    | -0.89 |       | -1.04 | -1.25 |
| 2x        | GFP       | Y    | -0.19 |       | 1.44  | 1.81  |
| 2x        | GFP       | Z    | -0.89 |       | 2.78  | 0.70  |
| 2x        | Cy3       | X    | 0.14  | 1.04  |       | -0.21 |
| 2x        | Cy3       | Y    | -1.63 | -1.44 |       | 0.37  |
| 2x        | Cy3       | Z    | -3.67 | -2.78 |       | -2.08 |
| 2x        | Cy5       | X    | 0.35  | 1.25  | 0.21  |       |
| 2x        | Cy5       | Y    | -2.00 | -1.81 | -0.37 |       |
| 2x        | Cy5       | Z    | -1.58 | -0.70 | 2.08  |       |
| 10x       | DAPI      | X    |       | 0.46  | -0.85 | -1.16 |
| 10x       | DAPI      | Y    |       | 0.50  | 1.79  | 2.27  |
| 10x       | DAPI      | Z    |       | 4.22  | 4.44  | 1.91  |
| 10x       | GFP       | X    | -0.46 |       | -1.31 | -1.61 |
| 10x       | GFP       | Y    | -0.50 |       | 1.29  | 1.77  |
| 10x       | GFP       | Z    | -4.22 |       | 0.22  | -2.31 |
| 10x       | Cy3       | X    | 0.85  | 1.31  |       | -0.30 |
| 10x       | Cy3       | Y    | -1.79 | -1.29 |       | 0.48  |
| 10x       | Cy3       | Z    | -4.44 | -0.22 |       | -2.53 |
| 10x       | Cy5       | X    | 1.16  | 1.61  | 0.30  |       |
| 10x       | Cy5       | Y    | -2.27 | -1.77 | -0.48 |       |
| 10x       | Cy5       | Z    | -1.91 | 2.31  | 2.53  |       |
| 20x       | DAPI      | X    |       | 0.58  | -0.77 | -1.06 |
| 20x       | DAPI      | Y    |       | 0.13  | 1.23  | 1.54  |
| 20x       | DAPI      | Z    |       | 3.31  | 3.95  | 2.09  |
| 20x       | GFP       | X    | -0.58 |       | -1.35 | -1.64 |
| 20x       | GFP       | Y    | -0.13 |       | 1.10  | 1.41  |
| 20x       | GFP       | Z    | -3.31 |       | 0.64  | -1.22 |
| 20x       | Cy3       | X    | 0.77  | 1.35  |       | -0.29 |
| 20x       | Cy3       | Y    | -1.23 | -1.10 |       | 0.31  |
| 20x       | Cy3       | Z    | -3.95 | -0.64 |       | -1.86 |
| 20x       | Cy5       | X    | 1.06  | 1.64  | 0.29  |       |
| 20x       | Cy5       | Y    | -1.54 | -1.41 | -0.31 |       |
| 20x       | Cy5       | Z    | -2.09 | 1.22  | 1.86  |       |
| 63x       | DAPI      | X    |       | 0.13  | -1.52 | -2.03 |
| 63x       | DAPI      | Y    |       | 0.13  | 1.19  | 1.66  |
| 63x       | DAPI      | Z    |       | 0.79  | 1.31  | 0.93  |
| 63x       | GFP       | X    | -0.13 |       | -1.65 | -2.16 |
| 63x       | GFP       | Y    | -0.13 |       | 1.06  | 1.53  |
| 63x       | GFP       | Z    | -0.79 |       | 0.52  | 0.13  |
| 63x       | Cy3       | X    | 1.52  | 1.65  |       | -0.51 |
| 63x       | Cy3       | Y    | -1.19 | -1.06 |       | 0.47  |
| 63x       | Cy3       | Z    | -1.31 | -0.52 |       | -0.39 |
| 63x       | Cy5       | X    | 2.06  | 2.22  | 0.51  |       |
| 63x       | Cy5       | Y    | -1.73 | -1.59 | -0.47 |       |
| 63x       | Cy5       | Z    | -0.92 | -0.12 | 0.39  |       |
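
As an illustration, here is a minimal Python sketch of such a correction applied with scipy.ndimage.shift. The shift values correspond to the 10x, DAPI x Cy3 pair of the table above; the placeholder data and, importantly, the sign convention (whether Channel 2 must be moved by the tabulated values or by their opposite) are assumptions to verify on your own images.

import numpy as np
from scipy.ndimage import shift

# Placeholder (Z, Y, X) stack standing in for the channel to correct (Channel 2 = Cy3)
cy3_stack = np.random.rand(20, 512, 512).astype(np.float32)

# Tabulated pixel shift for the 10x objective, Channel_1 = DAPI, Channel_2 = Cy3
dx, dy, dz = -0.85, 1.79, 4.44

# Assumed convention: the table gives the displacement of Channel 2 relative to Channel 1,
# so Channel 2 is moved by the opposite amount to bring it back onto Channel 1
cy3_corrected = shift(cy3_stack, shift=(-dz, -dy, -dx), order=1, mode="nearest")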


Conclusion


Legend (Wait for it)...

For a comprehensive guide on detectors, refer to the Detector Performances guide of the QuaRep Working Group 02.

Acquisition protocol 


 Results


Conclusion


Legend (Wait for it...) dary

For a comprehensive guide on Lateral and Axial Resolution, refer to the Lateral and Axial Resolution guide of the QuaRep Working Group 05.

Acquisition protocol 


 Results


Conclusion


