
Regular assessment of a microscope's quality and performance is crucial for maintaining reliable results. This blog post aims to provide a practical guide to effective microscope quality control.

Illumination power warmup kinetic

When starting an instrument, it takes time to reach a stable steady state. This duration is known as the warmup period. It is critical to record a warmup kinetic at least once to accurately define this period.

Acquisition protocol

  1. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  2. Center the sensor and the objective
  3. Zero the sensor to ensure accurate readings
  4. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  5. Turn on the light source and immediately record the power output over time (every 10 seconds for 1h is a good start but can be adjusted) until it stabilizes
  6. Repeat steps 3 to 5 for each light source you wish to monitor

Keep the light source ON at all times.
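The recording loop in steps 4-5 can be sketched in Python. This is a minimal sketch, not a turnkey tool: `read_power_mw` is a hypothetical placeholder for whatever call your power meter software exposes (a Thorlabs PM400 can be driven through vendor or third-party instrument libraries), and the defaults mirror the suggested 10 s interval over 1 h.

```python
import csv
import time

def record_warmup_kinetic(read_power_mw, duration_s=3600, interval_s=10,
                          out_path="warmup_kinetic.csv"):
    """Log power readings (mW) at a fixed interval and save them as CSV.

    read_power_mw: caller-supplied function returning the current power
    meter reading in mW (hypothetical; replace with your power meter's
    actual API call).
    """
    rows = []
    start = time.time()
    while time.time() - start < duration_s:
        rows.append((round(time.time() - start, 1), read_power_mw()))
        time.sleep(interval_s)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time_s", "power_mW"])
        writer.writerows(rows)
    return rows
```

The resulting CSV can be pasted directly into the orange columns of the spreadsheet template.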

Results

Fill in the orange cells in the following spreadsheet template Illumination Warmup Kinetic_Template.xlsx to visualize your results. 

For each light source plot the measured power output (mW) over time.


Calculate the relative power: Relative Power = Power/MaxPower and plot the Relative Power (%) over time.

Visually identify the time required to reach 99.5% of the maximum power.
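The stabilisation time can also be determined numerically instead of visually: find the first time point after which the relative power stays at or above 99.5% of the maximum. A minimal sketch, assuming the readings are (time_s, power_mW) pairs ordered in time:

```python
def stabilisation_time(readings, threshold=0.995):
    """Return the first time (s) from which Power/MaxPower stays >= threshold.

    readings: list of (time_s, power_mW) pairs, ordered in time.
    Returns None if the power never stabilises within the recording.
    """
    max_power = max(p for _, p in readings)
    for i, (t, p) in enumerate(readings):
        # The source must remain above threshold for ALL later readings
        if all(q / max_power >= threshold for _, q in readings[i:]):
            return t
    return None
```

Requiring the threshold to hold for all subsequent readings avoids reporting a transient spike as the stabilisation point.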

Report the results


                         385nm     475nm     555nm     630nm
Stabilisation time (s)   500                           To be acquired
Min Power (mW)           121.9     95.4      24.0
Max Power (mW)           122.5     95.4      24.0
Stability Factor (%)     99.75%    99.98%    99.97%

Conclusion

The illumination warmup time for this specific instrument is about 5 minutes.

Maximum illumination power output

This measure evaluates the maximum power output of each light source, considering both the quality of the light source and the components along the light path. Over time, we anticipate a gradual decrease in power output, accounting for the aging of the hardware, including the light source and other optical components.

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 100%
  7. Record the average power output for 10 seconds
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Maximum Illumination Power Output_Template.xlsx to visualize your results. 

For each light source plot the measured maximal power output (mW).


Plot the maximal power output (mW) measured and compare it to the specifications from the manufacturer. Calculate the relative power: Relative Power = Measured Power / Specifications.


Report the results


         Manufacturer           Measurements         Relative
         Specifications (mW)    2024-11-22 (mW)      Power (%)
385nm    150.25                 122.2                81%
470nm    110.4                  95.9                 87%
555nm    31.9                   24                   75%
630nm    52                     39.26                76%

Conclusion

This instrument provides about 80% of the power given in the manufacturer specifications. These results are consistent, given that the manufacturer specifications were obtained with a different objective and likely different dichroic mirrors.

Illumination stability

The light sources used on a microscope should be constant, or at least stable, over the time scale of an experiment. For this reason, power stability is recorded over 4 different timescales.

This measure compares the power output over time. Four different timescales are measured:

  • Real-time illumination stability: Continuous recording for 1 min. This represents the duration of a z-stack acquisition.
  • Short-term illumination stability: Every 1-10 seconds for 5-15 min. This represents the duration of several images.
  • Mid-term illumination stability: Every 10-30 seconds for 1-2 hours. This represents the duration of a typical acquisition session or short time-lapse experiments. For longer time-lapse experiments, longer duration may be used.
  • Long-term illumination stability: Once a year or more over the lifetime of the instrument (this is measured in the Maximum Power Output section comparing with previous measurements)

The Stability factor is then calculated S (%) = 100 x (1- (Pmax-Pmin)/(Pmax+Pmin)).
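The stability factor formula translates directly into a short helper that can be applied to any of the recordings below:

```python
def stability_factor(powers):
    """Stability factor S (%) = 100 * (1 - (Pmax - Pmin) / (Pmax + Pmin)).

    powers: iterable of power readings (mW) from one recording.
    """
    p_max, p_min = max(powers), min(powers)
    return 100 * (1 - (p_max - p_min) / (p_max + p_min))
```

A perfectly constant source gives S = 100%; the further the power excursions, the lower the factor.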

Real-time illumination stability

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 100%
  7. Record the power output as fast as possible for 1 minute
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Illumination Stability_Template.xlsx to visualize your results.

For each light source plot the measured power output (mW) over time.


Calculate the relative power: Relative Power = Power/MaxPower and plot the Relative Power (%) over time.

Calculate the Stability factor S (%) = 100 x (1 - (Pmax-Pmin)/(Pmax+Pmin)) and report the results in a table.


         Stability Factor Real-Time
385nm    99.98%
475nm    99.96%
555nm    99.95%
630nm    99.94%

Conclusion

The light sources are highly stable (>99.9%) during a 1 min period.

Short-term illumination stability

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 100%
  7. Record the power output every 10 seconds for 15 minutes
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Illumination Stability_Template.xlsx to visualize your results.

For each light source plot the measured power output (mW) over time.


Calculate the relative power: Relative Power = Power/MaxPower and plot the Relative Power (%) over time.

Calculate the Stability factor S (%) = 100 x (1 - (Pmax-Pmin)/(Pmax+Pmin)) and report the results in a table.


         Stability Factor Short-Term
385nm    99.72%
475nm    99.89%
555nm    99.99%
630nm    99.95%

Conclusion

The light sources are highly stable (>99.7%) during a 15 min period.

Mid-term illumination stability

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 100%
  7. Record the power output every 10 seconds for 1 hour
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Illumination Stability_Template.xlsx to visualize your results.

For each light source plot the measured power output (mW) over time.

Calculate the relative power: Relative Power = Power/MaxPower and plot the Relative Power (%) over time.


Calculate the Stability factor S (%) = 100 x (1 - (Pmax-Pmin)/(Pmax+Pmin)) and report the results in a table.


         Stability Factor Mid-Term
385nm    99.63%
475nm    99.98%
555nm    99.97%
630nm    To be acquired

Conclusion

The light sources are highly stable (>99.5%) during a 1 h period.

Long-term illumination stability

Long-term illumination stability measures the power output over the lifetime of the instrument. This is measured in the Maximum Power Output section by comparing with previous measurements.

Illumination stability conclusion


         Real-time    Short-term    Mid-term
         1 min        15 min        1 h

385nm    99.98%       99.72%        99.63%
475nm    99.96%       99.89%        99.98%
555nm    99.95%       99.99%        99.97%
630nm    99.94%       99.95%        To be acquired

The light sources are highly stable (>99.5%).

Illumination Input-Output Linearity

This measure compares the power output when the input varies. We expect a linear relationship between the input and the power output.

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 0%, 10%, 20%, 30%, …, 100%
  7. Record the power output for each input
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Illumination Power Linearity_Template.xlsx to visualize your results.

For each light source plot the measured power output (mW) function of the input (%).


Calculate the relative power: Relative Power = Power/MaxPower and plot the Relative Power (%) function of the input (%).

Determine the equation for each curve, typically a linear relationship of the form Output = K × Input. Report the slope (K) and the coefficient of determination (R2), which should be as close to 1 as possible.
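The slope and coefficient of determination can be computed without a plotting tool. A minimal sketch, assuming a zero-intercept least-squares fit of the form Output = K × Input (the spreadsheet template may use a slightly different fitting convention):

```python
def linearity_fit(inputs_pct, outputs):
    """Fit Output = K * Input (zero intercept) by least squares.

    inputs_pct: input settings (e.g. 0, 10, ..., 100 %).
    outputs: measured power for each input (same length).
    Returns (K, r_squared).
    """
    # Least-squares slope for a line through the origin
    k = sum(x * y for x, y in zip(inputs_pct, outputs)) / sum(x * x for x in inputs_pct)
    mean_y = sum(outputs) / len(outputs)
    ss_res = sum((y - k * x) ** 2 for x, y in zip(inputs_pct, outputs))
    ss_tot = sum((y - mean_y) ** 2 for y in outputs)
    return k, 1 - ss_res / ss_tot
```

Running it on the relative power values (both axes in %) yields a slope close to 1 for a well-behaved source.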


Illumination Input-Output Linearity

         Slope     R2
385nm    0.9969    1
475nm    0.9984    1
555nm    1.0012    1
630nm    1.0034    1

Conclusion

The light sources are highly linear.

Objectives and cubes transmittance

Since we are using a power meter we can easily assess the transmittance of the objectives and the filter cubes. This measure compares the power output when different objectives and cubes are in the light path. It evaluates the transmittance of each objective and compares it with the manufacturer specifications. It can detect defects or dirt on objectives.

Objectives transmittance

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 100%
  7. Record the power output for each objective as well as without objective
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Objective and cube transmittance_Template.xlsx to visualize your results.

For each objective plot the measured power output (mW) function of the wavelength (nm).

Calculate the relative transmittance: Relative Transmittance = Power/PowerNoObjective and plot the Relative Transmittance(%) function of the wavelength (nm).

Calculate and report the average transmittance for each objective.
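The average transmittance is simply the relative transmittance (Power/PowerNoObjective) averaged over the measured wavelengths. A minimal sketch, using dicts keyed by wavelength as an assumed data layout:

```python
def average_transmittance(power_with_objective, power_without_objective):
    """Average relative transmittance (%) over all measured wavelengths.

    Both arguments are dicts mapping wavelength (nm) -> measured power (mW),
    with and without the objective in the light path.
    """
    ratios = [power_with_objective[wl] / power_without_objective[wl]
              for wl in power_with_objective]
    return 100 * sum(ratios) / len(ratios)
```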


                Average transmittance
2.5x-0.075      77%
10x-0.25 Ph1    60%
20x-0.5 Ph2     62%
63x-1.4         29%

Compare the average transmittance to the specification provided by the manufacturer.


                Specification    Average transmittance
                [400-750]        [470-630]
2.5x-0.075      >90%             84%
10x-0.25 Ph1    >80%             67%
20x-0.5 Ph2     >80%             68%
63x-1.4         >80%             35%

Here we see that the measurements are close to the specifications, with the exception of the 63x-1.4 objective. This is expected because the 63x objective has a smaller back aperture, which reduces the amount of light received. You can also compare the complete transmittance curves.

Conclusion

The objectives are transmitting light properly.

Cubes transmittance

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 100%
  7. Record the power output for each filter cube
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Objective and cube transmittance_Template.xlsx to visualize your results.

For each filter cube plot the measured power output (mW) function of the wavelength (nm).

Calculate the relative transmittance: Relative Transmittance = Power/PowerofMaxFilter and plot the Relative Transmittance(%) function of the wavelength (nm).

Calculate and report the average transmittance for each filter cube at the appropriate wavelength.


                    Transmittance
DAPI/GFP/Cy3/Cy5    100%
DAPI                14%
GFP                 47%
DsRed               47%
DHE                 0%
Cy5                 84%
  • The DAPI cube only transmits 14% of the excitation light compared to the Quad Band Pass DAPI/GFP/Cy3/Cy5. It is usable but will provide a low signal. This is likely because the excitation filter within the cube does not properly match the light source. This filter could be removed, since an excitation filter is already included within the light source.
  • The GFP and DsRed cubes transmit 47% of the excitation light compared to the Quad Band Pass DAPI/GFP/Cy3/Cy5. They work properly.
  • The DHE cube does not transmit any light from the Colibri. This cube could be removed and stored.
  • The Cy5 cube transmits 84% compared to the Quad Band Pass DAPI/GFP/Cy3/Cy5. It works properly.

Conclusion

Actions have to be taken for the DAPI and DHE cubes.

XYZ Drift

This experiment assesses the stability of the system in the XY and Z directions. As previously noted, when an instrument is started, it requires time to reach a stable steady state, a phase known as the warmup period. To accurately determine this duration, it is essential to record a warmup kinetic at least once a year.

Acquisition Protocol

  1. Place 1 µm diameter fluorescent beads (TetraSpec Fluorescent Microspheres Size Kit, mounted on a slide) on the stage

  2. Center the sample under a high-NA dry objective

  3. Select an imaging channel (e.g., Cy5)

  4. Acquire a large Z-stack every minute for 24 hours
    It is crucial to account for potential drift in the Z-axis by acquiring a Z-stack that is significantly larger than the visible bead size (e.g., 40 µm).

Results

  1. Use the TrackMate plugin for FIJI to detect and track spots over time
  2. Apply Difference of Gaussians (DoG) spot detection with a detection size of 1 µm
  3. Set a quality threshold greater than 20 and enable sub-pixel localization for increased accuracy
  4. Export the detected spot coordinates as a CSV file for further analysis

Fill in the orange cells in the following spreadsheet template XYZ Drift Kinetic_Template.xlsx to visualize your results. Simply copy and paste the X, Y, Z, T and Frame columns from the TrackMate spots CSV file into the orange columns of the XLSX file. Fill in the NA and emission wavelength used.

Calculate the relative displacement in X, Y and Z: Relative Displacement = Position - PositionInitial and plot the relative displacement over time.

We observe an initial drift that stabilizes over time in X (+2.3 µm), Y (+1.3 µm) and Z (-10.5 µm).

Calculate the displacement: Displacement = Sqrt((X2-X1)^2 + (Y2-Y1)^2 + (Z2-Z1)^2) and plot the displacement over time.

Calculate the resolution of your imaging configuration: Resolution = Lambda / (2 x NA) and plot the resolution over time (constant).


Visually identify the time when the displacement becomes lower than the resolution of the system. On this instrument, it takes 120 min to reach stability.

Calculate the velocity: Velocity = (Displacement2 - Displacement1) / (T2 - T1) and plot the velocity over time.
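The drift metrics above can be computed in one pass over the bead track. A minimal sketch, assuming the track is a list of (t_min, x_um, y_um, z_um) tuples as exported from TrackMate (positions in µm, time in min):

```python
import math

def drift_metrics(track, wavelength_nm, na):
    """Per-frame 3D displacement (nm) and velocity (nm/min) from a bead
    track, plus the (constant) lateral resolution Lambda / (2 * NA).

    track: list of (t_min, x_um, y_um, z_um) tuples ordered in time.
    """
    resolution_nm = wavelength_nm / (2 * na)
    t0, x0, y0, z0 = track[0]
    # Displacement of each frame relative to the first position, in nm
    displacement_nm = [
        1000 * math.sqrt((x - x0) ** 2 + (y - y0) ** 2 + (z - z0) ** 2)
        for _, x, y, z in track
    ]
    # Velocity between consecutive frames, in nm/min
    velocity_nm_min = [
        (d2 - d1) / (t2 - t1)
        for (t1, *_), (t2, *_), d1, d2
        in zip(track, track[1:], displacement_nm, displacement_nm[1:])
    ]
    return resolution_nm, displacement_nm, velocity_nm_min
```

The stabilisation time is then the frame where the displacement curve drops below `resolution_nm`.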


Calculate the average velocity before and after stabilisation and report the results in a table

Objective NA                              0.5
Wavelength (nm)                           705
Resolution (nm)                           705
Stabilisation time (min)                  122
Average velocity Warmup (nm/min)          113
Average velocity System Ready (nm/min)    14

Conclusion

The warmup time for this specific instrument is about 2 hours. The average displacement velocity after warmup is 14 nm/min which is acceptable.

XYZ Repositioning accuracy

This experiment evaluates how accurate the system is in XY by measuring the repositioning accuracy. Several variables can affect repositioning accuracy: i) time, ii) traveled distance, iii) speed and iv) acceleration.

Acquisition protocol

  1. Place 1 um diameter fluorescent beads (TetraSpec Fluorescent Microspheres Size Kit mounted on slide) on the stage.
  2. Center the sample under a high NA dry objective.
  3. Select an imaging channel (e.g., Cy5)
  4. Acquire a Z-stack at 2 different positions separated by 0 µm, 1 µm, 10 µm, 100 µm, 1 000 µm, 10 000 µm, 80 000 µm in the X and Y directions

  5. Repeat the acquisition 20 times
    Be careful your stage might have a smaller range!
    Be careful not to damage the objectives (lower the objectives during movement)

I recommend acquiring 3 datasets for each condition.

Results

  1. Use the TrackMate plugin for FIJI to detect and track spots over time
  2. Apply Difference of Gaussians (DoG) spot detection with a detection size of 1 µm
  3. Set a quality threshold greater than 20 and enable sub-pixel localization for increased accuracy
  4. Export the detected spot coordinates as a CSV file for further analysis


Fill in the orange cells in the following spreadsheet template XY Repositioning Accuracy_Template.xlsx to visualize your results. Simply copy and paste the X, Y, Z, T and Frame columns from the TrackMate spots CSV file into the orange columns of the XLSX file. Fill in the NA and emission wavelength used.

This experiment shows the displacement in X, Y and Z after +30 mm and -30 mm movements in X and Y, repeated 20 times.


Report the results in a table.

Objective NA                   0.5
Wavelength (nm)                705
Resolution (nm)                705
X Accuracy (nm)                195
Y Accuracy (nm)                175
Z Accuracy (nm)                52
Repositioning Accuracy (nm)    172
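One common way to express repositioning accuracy is the standard deviation of the detected bead position over the repeated moves, per axis. A minimal sketch under that convention (the spreadsheet template may combine the axes differently):

```python
import math

def repositioning_accuracy(positions_um):
    """Per-axis repositioning accuracy as the standard deviation (nm) of
    repeated bead positions, plus a combined 3D value.

    positions_um: list of (x, y, z) positions (µm), one per repeat.
    """
    def std_nm(values):
        mean = sum(values) / len(values)
        return 1000 * math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

    sx = std_nm([p[0] for p in positions_um])
    sy = std_nm([p[1] for p in positions_um])
    sz = std_nm([p[2] for p in positions_um])
    return sx, sy, sz, math.sqrt(sx ** 2 + sy ** 2 + sz ** 2)
```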


Because several variables can affect repositioning accuracy (time, traveled distance, speed and acceleration), we decided to test them. To do this, we use the following code to automatically process an open image in ImageJ/FIJI using the TrackMate plugin. It will save the spot detections as a CSV file on your Desktop.

Automatic Trackmate
import os
import sys
import json
from ij import IJ, WindowManager
from java.io import File
from ij.gui import GenericDialog
from fiji.plugin.trackmate import Model, Settings, TrackMate, SelectionModel, Logger
from fiji.plugin.trackmate.detection import DogDetectorFactory
from fiji.plugin.trackmate.tracking.jaqaman import SparseLAPTrackerFactory
from fiji.plugin.trackmate.features import FeatureFilter
from fiji.plugin.trackmate.features.track import TrackIndexAnalyzer
from fiji.plugin.trackmate.gui.displaysettings import DisplaySettingsIO
from fiji.plugin.trackmate.gui.displaysettings.DisplaySettings import TrackMateObject
from fiji.plugin.trackmate.visualization.table import TrackTableView
from fiji.plugin.trackmate.visualization.hyperstack import HyperStackDisplayer

# Ensure UTF-8 encoding
reload(sys)
sys.setdefaultencoding('utf-8')

# Path to the config file that will store last user settings
config_file_path = os.path.join(os.path.expanduser("~"), "Desktop", "trackmate_config.json")

# Default settings in case there's no previous config file
default_settings = {
    'subpixel_localization': True,
    'spot_diameter': 0.5,  # Default 0.5 microns
    'threshold_value': 20.904,  # Default threshold value
    'apply_median_filtering': False
}

# Function to load settings from the config file
def load_settings():
    if os.path.exists(config_file_path):
        with open(config_file_path, 'r') as f:
            return json.load(f)
    else:
        return default_settings

# Function to save settings to the config file
def save_settings(settings):
    with open(config_file_path, 'w') as f:
        json.dump(settings, f)

# Load previous settings (or use default if no file exists)
user_settings = load_settings()

# Create output directory on the user's desktop
output_dir = os.path.join(os.path.expanduser("~"), "Desktop", "Output")
if not os.path.exists(output_dir):
    os.makedirs(output_dir)

# Get the currently selected image
imp = WindowManager.getCurrentImage()
if imp is None:
    sys.exit("No image is currently open.")

# Get the image title for naming output files
filename = imp.getTitle()

# ----------------------------
# Create the model object
# ----------------------------
model = Model()
model.setLogger(Logger.IJ_LOGGER)  # Send messages to the ImageJ log window

# ------------------------
# Create input dialog
# ------------------------

# Create the dialog box for user input
gd = GenericDialog("TrackMate Parameters")

# Add fields to the dialog, prefill with last saved values
gd.addCheckbox("Enable Subpixel Localization?", user_settings['subpixel_localization'])  # Default: from saved settings
gd.addNumericField("Spot Diameter (microns):", user_settings['spot_diameter'], 2)  # Default diameter: from saved settings
gd.addSlider("Threshold Value:", 0, 255, user_settings['threshold_value'])  # Slider for threshold value (0-255 range, default: from saved settings)
gd.addCheckbox("Apply Median Filtering?", user_settings['apply_median_filtering'])  # Default: from saved settings

# Show the dialog
gd.showDialog()

# Check if the user canceled the dialog
if gd.wasCanceled():
    sys.exit("User canceled the operation.")

# Get user inputs from the dialog
subpixel_localization = gd.getNextBoolean()  # Whether to enable subpixel localization
spot_diameter = gd.getNextNumber()  # Spot diameter in microns
threshold_value = gd.getNextNumber()  # Threshold value from the slider
apply_median_filtering = gd.getNextBoolean()  # Whether to apply median filtering

# Save the new settings to the configuration file
user_settings = {
    'subpixel_localization': subpixel_localization,
    'spot_diameter': spot_diameter,
    'threshold_value': threshold_value,
    'apply_median_filtering': apply_median_filtering
}
save_settings(user_settings)

# ------------------------
# Prepare settings object
# ------------------------
settings = Settings(imp)

# Configure detector
settings.detectorFactory = DogDetectorFactory()
settings.detectorSettings = {
    'DO_SUBPIXEL_LOCALIZATION': subpixel_localization,  # Set subpixel localization
    'RADIUS': spot_diameter / 2,  # Convert diameter to radius for detector
    'TARGET_CHANNEL': 1,  # Target channel (customize if needed)
    'THRESHOLD': threshold_value,  # User-defined threshold value
    'DO_MEDIAN_FILTERING': apply_median_filtering,  # Apply median filtering if selected
}

# Configure tracker
settings.trackerFactory = SparseLAPTrackerFactory()
settings.trackerSettings = settings.trackerFactory.getDefaultSettings()
settings.trackerSettings['ALLOW_TRACK_SPLITTING'] = True
settings.trackerSettings['ALLOW_TRACK_MERGING'] = True

# Add all known feature analyzers to compute track statistics
settings.addAllAnalyzers()

# Configure track filters
track_filter = FeatureFilter('TRACK_DISPLACEMENT', 10, False)  # Filter tracks with displacement >10 pixels
settings.addTrackFilter(track_filter)

# -------------------
# Instantiate plugin
# -------------------
trackmate = TrackMate(model, settings)

# --------
# Process
# --------
if not trackmate.checkInput():
    sys.exit("Error checking input: {}".format(trackmate.getErrorMessage()))
if not trackmate.process():
    sys.exit("Error during processing: {}".format(trackmate.getErrorMessage()))

# ----------------
# Display results
# ----------------
selection_model = SelectionModel(model)

# Read the default display settings
ds = DisplaySettingsIO.readUserDefault()
ds.setTrackColorBy(TrackMateObject.TRACKS, TrackIndexAnalyzer.TRACK_INDEX)
ds.setSpotColorBy(TrackMateObject.TRACKS, TrackIndexAnalyzer.TRACK_INDEX)

# Display tracks and spots on the image
displayer = HyperStackDisplayer(model, selection_model, imp, ds)
displayer.render()
displayer.refresh()

# -----------------
# Export results
# -----------------
# Export spot table as CSV
spot_table = TrackTableView.createSpotTable(model, ds)
output_csv_path = os.path.join(output_dir, "{}_spots.csv".format(filename))
spot_table.exportToCsv(File(output_csv_path))



This should create a lot of CSV files that need to be aggregated for the following analysis. The following R script processes all CSV files placed in an Output folder on your Desktop.

# Load and install necessary libraries at the beginning
rm(list = ls())

# Check and install required packages if they are not already installed
if (!require(dplyr)) install.packages("dplyr", dependencies = TRUE)
if (!require(stringr)) install.packages("stringr", dependencies = TRUE)
if (!require(ggplot2)) install.packages("ggplot2", dependencies = TRUE)
if (!require(corrplot)) install.packages("corrplot", dependencies = TRUE)

# Load libraries
library(dplyr)
library(stringr)
library(ggplot2)
library(corrplot)

# Set default input and output directories
default_input_dir <- file.path(Sys.getenv("USERPROFILE"), "Desktop", "Output")  # Default to "Output" on Desktop
InputFolder <- default_input_dir  # Use default folder

# Specify the Output folder (this is a fixed folder on the Desktop)
OutputFolder <- file.path(Sys.getenv("USERPROFILE"), "Desktop", "Output")
if (!dir.exists(OutputFolder)) dir.create(OutputFolder, recursive = TRUE)

# List all CSV files in the folder
csv_files <- list.files(path = InputFolder, pattern = "\\.csv$", full.names = TRUE)
if (length(csv_files) == 0) stop("No CSV files found in the selected directory.")

header <- names(read.csv(csv_files[1], nrows = 1))

# Function to clean the filename
clean_filename <- function(filename) {
  filename_parts <- strsplit(filename, " - ")[[1]]
  cleaned_filename <- sub("\\.czi", "", filename_parts[1])
  cleaned_filename <- sub("_spots", "", cleaned_filename)
  cleaned_filename <- sub("\\.csv", "", cleaned_filename)
  return(cleaned_filename)
}

# Read and merge all CSV files
merged_data <- csv_files %>%
  lapply(function(file) {
    data <- read.csv(file, skip = 4, header = FALSE)
    colnames(data) <- header
    data <- data %>% arrange(FRAME)
    
    # Clean and add source file info
    filename <- basename(file)
    data$SourceFile <- clean_filename(filename)
    
    # Extract variables from the filename
    filename_parts <- strsplit(clean_filename(filename), "_")[[1]]
    for (i in seq_along(filename_parts)) {
      variable_name <- paste("Variable-", sprintf("%03d", i), sep = "")
      data[[variable_name]] <- filename_parts[i]
    }
    
    # Add time columns if available
    if ("POSITION_T" %in% colnames(data)) {
      data$`Time (sec)` <- round(data$POSITION_T, 0)
      data$`Time (min)` <- round(data$`Time (sec)` / 60, 2)
    }
    
    # Calculate displacement columns (X, Y, Z, 3D, 2D)
    if ("POSITION_X" %in% colnames(data)) {
      first_value <- data$POSITION_X[data$FRAME == 0][[1]]
      data$`X (nm)` <- (data$POSITION_X - first_value) * 1000
    }
    if ("POSITION_Y" %in% colnames(data)) {
      first_value <- data$POSITION_Y[data$FRAME == 0][[1]]
      data$`Y (nm)` <- (data$POSITION_Y - first_value) * 1000
    }
    if ("POSITION_Z" %in% colnames(data)) {
      first_value <- data$POSITION_Z[data$FRAME == 0][[1]]
      data$`Z (nm)` <- (data$POSITION_Z - first_value) * 1000
    }
    
    # Calculate displacement (3D and 2D)
    if (all(c("X (nm)", "Y (nm)", "Z (nm)") %in% colnames(data))) {
      data$`Displacement 3D (nm)` <- sqrt(
        diff(c(0, data$`X (nm)`))^2 + 
          diff(c(0, data$`Y (nm)`))^2 + 
          diff(c(0, data$`Z (nm)`))^2
      )
      data$`Displacement 3D (nm)`[1] <- 0
    }
    if (all(c("X (nm)", "Y (nm)") %in% colnames(data))) {
      data$`Displacement 2D (nm)` <- sqrt(
        diff(c(0, data$`X (nm)`))^2 + 
          diff(c(0, data$`Y (nm)`))^2
      )
      data$`Displacement 2D (nm)`[1] <- 0
    }
    
    return(data)
  }) %>%
  bind_rows()

# Save the merged data to a new CSV file
output_file <- "merged_data.csv"
write.csv(merged_data, file = file.path(OutputFolder, output_file), row.names = FALSE)

cat("All CSV files have been merged and saved to", output_file, "\n")

# Load merged data for visualization (or analysis)


This script calculates the relative position in X, Y and Z: PositionRelative = Position - PositionInitial for each axis in each file. It also calculates the 2D and 3D displacements: 2D_Displacement = Sqrt((X2-X1)^2 + (Y2-Y1)^2); 3D_Displacement = Sqrt((X2-X1)^2 + (Y2-Y1)^2 + (Z2-Z1)^2), and provides the results as a CSV file, merged_data.csv, that can be further processed and summarized with a pivot table (XY Repositioning Accuracy_Template_All-Files.xlsx).

Plot the 3D displacement for each condition as a function of the acquisition frame.

We observe a high variability at Frame 1 (the 2nd image). This variability can come from the X, Y or Z axis, or from a combination of the three. We now plot the displacement in each direction as a function of the frame.

We observe that the X axis contributes most to the high variability of the first frame. We then plot the 3D displacement as a scatter dot plot for each condition and repeat.

We observe that the recorded data is consistent, with the exception of a single value per condition.



In these graphs we observe the variability per experiment. Notice that most of the experiments show a shift in X, and some in Y. The data is then more consistent.




The traveled distance significantly affects the repositioning accuracy at 1 mm, 10 mm and 30 mm.






Conclusion



Field Illumination Uniformity


References

The information provided here is inspired by the following references:

doi.org/10.17504/protocols.io.5jyl853ndl2w/v2

https://doi.org/10.1083/jcb.202107093








What needs to be assessed?

Resolution

Field Illumination Uniformity

Channel alignment (Co-registration)

Detector Noise


