
Regular assessment of a microscope's quality and performance is crucial for maintaining reliable results. This blog post aims to provide a practical guide to effective microscope quality control.

Illumination power warmup kinetic

When starting an instrument, it takes time to reach a stable steady state. This duration is known as the warmup period. It is critical to record a warmup kinetic at least once to accurately define this period.

Acquisition protocol

  1. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  2. Center the sensor and the objective
  3. Zero the sensor to ensure accurate readings
  4. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  5. Turn on the light source and immediately record the power output over time (every 10 seconds for 1h is a good start but can be adjusted) until it stabilizes
  6. Repeat steps 3 to 5 for each light source you wish to monitor

Keep the light source ON at all times during the recording.
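The recording loop in steps 4–5 can be sketched in Python. This is a minimal sketch, not vendor code: `read_power_mW` is a hypothetical callable standing in for whatever readout your power meter SDK provides (e.g. a query through the PM400's software interface).

```python
import csv
import time

def record_warmup(read_power_mW, interval_s=10, duration_s=3600,
                  out_path="warmup_kinetic.csv"):
    """Log power readings at a fixed interval into a CSV file.

    read_power_mW : user-supplied callable returning the current power in mW
    interval_s    : seconds between readings (10 s is a good start)
    duration_s    : total recording duration (1 h is a good start)
    """
    t0 = time.time()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time_s", "power_mW"])
        while time.time() - t0 < duration_s:
            # One timestamped reading per loop iteration
            writer.writerow([round(time.time() - t0, 1), read_power_mW()])
            time.sleep(interval_s)
```

Adjust `interval_s` and `duration_s` to match the 10 s / 1 h suggestion above, or extend the duration if the source has not stabilized.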

Results

Fill in the orange cells in the following spreadsheet template Illumination Warmup Kinetic_Template.xlsx to visualize your results. 

For each light source plot the measured power output (mW) over time.


Calculate the relative power: Relative Power = Power/MaxPower and plot the Relative Power (%) over time.

Visually identify the time required to reach 99.5% of the maximum power.
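The stabilization time can also be identified programmatically. A minimal sketch, assuming the readings are two parallel lists of times and powers (names are illustrative):

```python
def stabilization_time(times, powers, threshold=0.995):
    """Return the first time from which the relative power
    (power / max power) stays at or above `threshold`."""
    pmax = max(powers)
    rel = [p / pmax for p in powers]
    for i, r in enumerate(rel):
        # Stable from index i if every later reading is above threshold too
        if all(x >= threshold for x in rel[i:]):
            return times[i]
    return None  # never stabilized within the recording
```

For example, with readings of 90, 99, 99.6, 99.8 and 100 mW at 0, 10, 20, 30 and 40 s, the source is considered stable from 20 s onward.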

Report the results


                         385nm     475nm     555nm     630nm
Stabilisation time       500                           To be acquired
Min Power (mW)           121.9     95.4      24.0
Max Power (mW)           122.5     95.4      24.0
Stability Factor (%)     99.75%    99.98%    99.97%

Conclusion

The illumination warmup time for this specific instrument is about 5 minutes.

Maximum illumination power output

This measure evaluates the maximum power output of each light source, considering both the quality of the light source and the components along the light path. Over time, we anticipate a gradual decrease in power output, accounting for the aging of the hardware, including the light source and other optical components.

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 100%
  7. Record the average power output for 10 seconds
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Maximum Illumination Power Output_Template.xlsx to visualize your results. 

For each light source plot the measured maximal power output (mW).


Plot the maximal power output (mW) measured and compare it to the specifications from the manufacturer. Calculate the relative power: Relative Power = Measured Power / Specifications.
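The relative power here is a simple ratio of measurement to specification. A one-line helper (illustrative; the example values come from the table below):

```python
def relative_power(measured_mW, spec_mW):
    """Measured power as a percentage of the manufacturer specification."""
    return 100.0 * measured_mW / spec_mW

# Example with the 385nm channel: 122.2 mW measured vs 150.25 mW specified
rel_385 = relative_power(122.2, 150.25)   # about 81%
```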


Report the results


         Manufacturer          Measurements        Relative Power (%)
         Specifications (mW)   2024-11-22 (mW)
385nm    150.25                122.2               81%
470nm    110.4                 95.9                87%
555nm    31.9                  24                  75%
630nm    52                    39.26               76%

Conclusion

This instrument delivers about 80% of the power stated in the manufacturer specifications. These results are consistent with expectations, since the manufacturer's measurements use a different objective and likely different dichroic mirrors.

Illumination stability

The light sources used on a microscope should be constant, or at least stable, over the time scale of an experiment. The power output is therefore compared over four different timescales:

  • Real-time illumination stability: Continuous recording for 1 min. This represents the duration of a z-stack acquisition.
  • Short-term illumination stability: Every 1-10 seconds for 5-15 min. This represents the duration of several images.
  • Mid-term illumination stability: Every 10-30 seconds for 1-2 hours. This represents the duration of a typical acquisition session or short time-lapse experiments. For longer time-lapse experiments, longer duration may be used.
  • Long-term illumination stability: Once a year or more over the lifetime of the instrument (this is measured in the Maximum Power Output section comparing with previous measurements)

The stability factor is then calculated as S (%) = 100 × (1 − (Pmax − Pmin) / (Pmax + Pmin)).
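A minimal Python helper for this stability factor (names are illustrative):

```python
def stability_factor(powers):
    """S (%) = 100 * (1 - (Pmax - Pmin) / (Pmax + Pmin))."""
    pmax, pmin = max(powers), min(powers)
    return 100.0 * (1.0 - (pmax - pmin) / (pmax + pmin))
```

With the min/max powers of the 385nm warmup table (121.9 and 122.5 mW), this gives the reported 99.75%; a perfectly constant source gives exactly 100%.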

Real-time illumination stability

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 100%
  7. Record the power output as fast as possible for 1 minute
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Illumination Stability_Template.xlsx to visualize your results.

For each light source plot the measured power output (mW) over time.


Calculate the relative power: Relative Power = Power/MaxPower and plot the Relative Power (%) over time.

Calculate the stability factor S (%) = 100 × (1 − (Pmax − Pmin) / (Pmax + Pmin)) and report the results in a table.


         Stability Factor Real-Time
385nm    99.98%
475nm    99.96%
555nm    99.95%
630nm    99.94%

Conclusion

The light sources are highly stable (>99.9%) during a 1 min period.

Short-term illumination stability

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 100%
  7. Record the power output every 10 seconds for 15 minutes
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Illumination Stability_Template.xlsx to visualize your results.

For each light source plot the measured power output (mW) over time.


Calculate the relative power: Relative Power = Power/MaxPower and plot the Relative Power (%) over time.


Calculate the stability factor S (%) = 100 × (1 − (Pmax − Pmin) / (Pmax + Pmin)) and report the results in a table.


         Stability Factor Short-Term
385nm    99.72%
475nm    99.89%
555nm    99.99%
630nm    99.95%

Conclusion

The light sources are highly stable (>99.7%) during a 15 min period.

Mid-term illumination stability

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 100%
  7. Record the power output every 10 seconds for 1 hour
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Illumination Stability_Template.xlsx to visualize your results.

For each light source plot the measured power output (mW) over time.

Calculate the relative power: Relative Power = Power/MaxPower and plot the Relative Power (%) over time.


Calculate the stability factor S (%) = 100 × (1 − (Pmax − Pmin) / (Pmax + Pmin)) and report the results in a table.


         Stability Factor Mid-Term
385nm    99.63%
475nm    99.98%
555nm    99.97%
630nm    To be acquired

Conclusion

The light sources are highly stable (>99.5%) during a 1 h period.

Long-term illumination stability

Long-term illumination stability measures the power output over the lifetime of the instrument. It is assessed in the Maximum Power Output section by comparing with previous measurements.

Illumination stability conclusion


         Real-time    Short-term    Mid-term
         1 min        15 min        1 h
385nm    99.98%       99.72%        99.63%
475nm    99.96%       99.89%        99.98%
555nm    99.95%       99.99%        99.97%
630nm    99.94%       99.95%        To be acquired

The light sources are highly stable (>99.5%).

Illumination Input-Output Linearity

This measure compares the power output when the input varies. We expect a linear relationship between the input and the power output.

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 0%, 10%, 20%, 30%, …, 100%
  7. Record the power output for each input
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template Illumination Power Linearity_Template.xlsx to visualize your results.

For each light source, plot the measured power output (mW) as a function of the input (%).


Calculate the relative power: Relative Power = Power/MaxPower and plot the Relative Power (%) as a function of the input (%).

Determine the equation for each curve, typically a linear relationship of the form Output = K × Input. Report the slope (K) and the coefficient of determination (R²), which should be as close to 1 as possible.
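The slope and R² can be extracted with an ordinary least-squares fit. A sketch using NumPy; the normalization to relative units is an assumption made here so that a well-behaved source yields a slope close to 1, as in the table below:

```python
import numpy as np

def input_output_linearity(inputs_pct, powers_mW):
    """Fit relative output vs relative input and return (slope, R^2)."""
    x = np.asarray(inputs_pct, dtype=float) / 100.0
    y = np.asarray(powers_mW, dtype=float)
    y_rel = y / y.max()                      # relative power, 0..1
    slope, intercept = np.polyfit(x, y_rel, 1)
    # Coefficient of determination from the residuals of the fit
    y_fit = slope * x + intercept
    ss_res = np.sum((y_rel - y_fit) ** 2)
    ss_tot = np.sum((y_rel - y_rel.mean()) ** 2)
    return slope, 1.0 - ss_res / ss_tot
```

A spreadsheet trendline gives the same numbers; the function is just a scriptable equivalent.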



Illumination Input-Output Linearity

         Slope     R²
385nm    0.9969    1
475nm    0.9984    1
555nm    1.0012    1
630nm    1.0034    1

Conclusion

The light sources are highly linear.

Objectives and cubes transmittance

Since we are using a power meter we can easily assess the transmittance of the objectives and the filter cubes. This measure compares the power output when different objectives and cubes are in the light path. It evaluates the transmittance of each objective and compares it with the manufacturer specifications. It can detect defects or dirt on objectives.

Objectives transmittance

Acquisition protocol

  1. Warmup the light sources (see previous section for the required duration)
  2. Place a power meter sensor (e.g., Thorlabs S170C) on the stage
  3. Center the sensor and the objective
  4. Zero the sensor to ensure accurate readings
  5. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software
  6. Turn on the light source to 100%
  7. Record the power output for each objective as well as without objective
  8. Repeat steps 5 to 7 for each light source/wavelength

Results

Fill in the orange cells in the following spreadsheet template to visualize your results.

For each objective, plot the measured power output (mW) as a function of the wavelength (nm).

Calculate the relative transmittance: Relative Transmittance = Power / PowerNoObjective and plot the Relative Transmittance (%) as a function of the wavelength (nm).

Calculate and report the average transmittance for each objective.
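A small sketch of this averaging, assuming per-wavelength power readings with and without the objective (the dict layout and names are illustrative):

```python
def average_transmittance(power_with_obj_mW, power_no_obj_mW):
    """Average relative transmittance (%) across wavelengths.

    Both arguments are dicts {wavelength_nm: power_mW}; the relative
    transmittance at each wavelength is the with/without power ratio.
    """
    ratios = [100.0 * power_with_obj_mW[wl] / power_no_obj_mW[wl]
              for wl in power_with_obj_mW]
    return sum(ratios) / len(ratios)
```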


              Average transmittance
2.5x-0.075    77%
10x-0.25 Ph1  60%
20x-0.5 Ph2   62%
63x-1.4       29%

Compare the average transmittance to the specification provided by the manufacturer.


               Specification    Average transmittance
               [400-750]        [470-630]
2.5x-0.075     >90%             84%
10x-0.25 Ph1   >80%             67%
20x-0.5 Ph2    >80%             68%
63x-1.4        >80%             35%

Here the measurements are reasonably close to the specifications, with the exception of the 63x-1.4 objective. This is expected because the 63x objective has a smaller back aperture, which reduces the amount of light it receives. You can also compare the complete transmittance curves.

Conclusion

The objectives are transmitting light properly.

Cubes transmittance

Acquisition Protocol

  1. Warm up the system
  2. Setup the Power Meter
    1. Place a power meter sensor (e.g., Thorlabs S170C) on the stage.
    2. Center the sensor under a low magnification objective.
  3. Prepare the Measurement
    1. Select the wavelength of the light source you wish to monitor using your power meter controller (e.g., Thorlabs PM400) or software.
    2. Zero the sensor to ensure accurate readings.
  4. Record the power output
    1. Turn on the light source to 100%
    2. Record the power output for each filter cube
  5. Repeat for Multiple Light Sources
    1. Repeat steps 3 to 4 for each light source you wish to monitor

Results

For each cube, plot the measured power output (mW) as a function of the wavelength (nm). The Excel file Objective and cube transmittance_Template.xlsx provides a template that can be filled in.


Calculate the relative transmittance: Relative Transmittance = Power / PowerReference and plot the Relative Transmittance (%) as a function of the wavelength (nm). Here the reference is the Quad Band Pass DAPI/GFP/Cy3/Cy5 cube, which is why it reads 100% in the table below.


Report the transmittance at the appropriate wavelength for each filter cube.


                     Transmittance
DAPI/GFP/Cy3/Cy5     100%
DAPI                 14%
GFP                  47%
DsRed                47%
DHE                  0%
Cy5                  84%
  • The DAPI cube transmits only 14% of the excitation light compared to the Quad Band Pass DAPI/GFP/Cy3/Cy5 cube. It is usable but will provide a low signal, likely because the excitation filter within the cube does not match the light source well. This filter could be removed, since an excitation filter is already included in the light source.
  • The GFP and DsRed cubes transmit 47% of the excitation light compared to the Quad Band Pass cube. They work properly.
  • The DHE cube does not transmit any light from the Colibri. This cube could be removed and stored.
  • The Cy5 cube transmits 84% compared to the Quad Band Pass cube. It works properly.

Conclusion

Action has to be taken for the DAPI and DHE cubes.


XYZ Drift

This experiment evaluates how stable the system is in XY and Z. As with illumination, the instrument needs time after startup to reach a stable steady state, and it is critical to record a drift kinetic at least once to accurately define this mechanical warmup period.

Acquisition protocol

  1. Place 1 um diameter fluorescent beads (TetraSpeck Fluorescent Microspheres Size Kit, mounted on a slide) on the stage.
  2. Center the sample under a high NA dry objective.
  3. Choose an imaging channel (e.g., Cy5)

  4. Acquire a large Z-stack every minute for 24 h
    It is very important to anticipate drift in Z, i.e., to acquire a z-stack that is much larger than the visible bead (40 um)

Results

Use the TrackMate plugin for FIJI to detect the spot and track it over time: DoG spot detection with 1 um object size, quality threshold > 20, and sub-pixel localization. Export the detected spot coordinates as a CSV file. The following Excel file provides a template to measure XYZ drift over time: XYZ Drift Kinetic_Template.xlsx. Copy and paste the XYZT and Frame columns from the TrackMate spots CSV file into the orange columns of the XLSX file, and fill in the NA and emission wavelength used.

Calculate the relative displacement in X, Y and Z: Relative Displacement = Position - PositionInitial and plot the relative displacement over time.

We observe an initial drift that stabilizes over time in X (+3.5 um), Y (+1 um) and Z (-12 um).

Calculate the displacement Displacement = Sqrt((X2−X1)² + (Y2−Y1)² + (Z2−Z1)²) and plot the displacement over time.

Calculate the resolution of your imaging configuration, Resolution = Lambda / (2 × NA), and plot the resolution over time (constant).


Visually identify the time when the displacement falls below the resolution of the system. On this instrument it takes 320 min to reach stability.


Calculate the velocity, Velocity = (Displacement2 − Displacement1) / (T2 − T1), and plot the velocity over time.

Calculate the average velocity before and after stabilisation.
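The displacement, resolution, and velocity formulas above can be written as small Python helpers (names are illustrative; units follow the text):

```python
import math

def displacement(p, p0):
    """Euclidean distance between positions p and p0, each (x, y, z)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, p0)))

def resolution_nm(emission_nm, na):
    """Lateral resolution: Lambda / (2 * NA)."""
    return emission_nm / (2.0 * na)

def velocity(d1, d2, t1, t2):
    """Velocity between two displacement samples: (D2 - D1) / (T2 - T1)."""
    return (d2 - d1) / (t2 - t1)
```

For example, with Cy5 emission around 670 nm and a 0.95 NA objective, the resolution is roughly 350 nm, which is the threshold the displacement curve is compared against.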

Average velocity Warmup (nm/min)          106    113
Average velocity System Ready (nm/min)    36     14

Conclusion

The warmup time for this specific instrument is quite long, about 5 hours. The average displacement velocity after warmup is 36 nm/min, which is acceptable.

XYZ Repositioning accuracy

This experiment evaluates how accurate the system is in XY by measuring the repositioning accuracy. Several variables can affect repositioning accuracy: i) time, ii) traveled distance, and iii) speed.

Acquisition protocol

  1. Place 1 um diameter fluorescent beads (TetraSpeck Fluorescent Microspheres Size Kit, mounted on a slide) on the stage.
  2. Center the sample under a high NA dry objective.
  3. Choose an imaging channel (e.g., Cy5)

  4. Acquire a Z-stack at 2 positions, 20 times

  5. Repeat step 4, moving 0 um, 1 um, 10 um, 100 um, 1 000 um, 10 000 um, 80 000 um in the X and Y directions
    Be careful: your stage might have a smaller range!
    Be careful not to damage the objectives (lower the objectives during movement)

I recommend acquiring 3 datasets for each condition

Results

Use the TrackMate plugin for FIJI to detect the spot and track it over time: DoG spot detection with 1 um object size, a quality threshold of 20, and sub-pixel localization. Export the detected spot coordinates and calculate the displacement from the initial image in nm.
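Once the spot positions are exported, the repositioning error for one condition can be summarized as the mean and spread of the distance from the reference position. A minimal sketch, assuming a list of repeated (x, y) positions in um for one target (names are illustrative):

```python
import statistics

def repositioning_accuracy_nm(positions_um):
    """Mean and standard deviation (nm) of the distance between each
    revisit and the first (reference) position.

    positions_um : list of (x, y) tuples in um; the first entry is the
    reference, the rest are the repeated repositionings.
    """
    x0, y0 = positions_um[0]
    d = [1000.0 * ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5
         for x, y in positions_um[1:]]
    return statistics.mean(d), statistics.stdev(d)
```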

The following code can automatically process an open image in ImageJ/FIJI using the Trackmate plugin. It will save the spot detection file as a CSV.

Automatic Trackmate
import sys

from ij import IJ
from ij import WindowManager

from fiji.plugin.trackmate import Model
from fiji.plugin.trackmate import Settings
from fiji.plugin.trackmate import TrackMate
from fiji.plugin.trackmate import SelectionModel
from fiji.plugin.trackmate import Logger
from fiji.plugin.trackmate.detection import LogDetectorFactory
from fiji.plugin.trackmate.detection import DogDetectorFactory
from fiji.plugin.trackmate.tracking.jaqaman import SparseLAPTrackerFactory
from fiji.plugin.trackmate.gui.displaysettings import DisplaySettingsIO
from fiji.plugin.trackmate.gui.displaysettings.DisplaySettings import TrackMateObject
from fiji.plugin.trackmate.features.track import TrackIndexAnalyzer
from fiji.plugin.trackmate.io import CSVExporter
from fiji.plugin.trackmate.visualization.table import TrackTableView
from java.io import File



import fiji.plugin.trackmate.visualization.hyperstack.HyperStackDisplayer as HyperStackDisplayer
import fiji.plugin.trackmate.features.FeatureFilter as FeatureFilter

# We have to do the following to avoid errors with UTF8 chars generated in 
# TrackMate that will mess with our Fiji Jython.
reload(sys)
sys.setdefaultencoding('utf-8')

# Get currently selected image
imp = WindowManager.getCurrentImage()
#imp = IJ.openImage('https://fiji.sc/samples/FakeTracks.tif')
#imp.show()
output = '/Users/stifanin/Desktop/Output/' #CUSTOMIZE YOUR PATH HERE
# Get the currently open image
imp2 = IJ.getImage()

# Get the image file name (title)
filename = imp2.getTitle()
#input_filename = '/Users/tinevez/Desktop/FakeTracks.xml'

# Get the file info
#file_info = imp.getOriginalFileInfo()

# Get the file path
#file_path = file_info.directory + file_info.fileName

#print("Name of the open image file: " + filename)


#----------------------------
# Create the model object now
#----------------------------

# Some of the parameters we configure below need to have
# a reference to the model at creation. So we create an
# empty model now.

model = Model()

# Send all messages to ImageJ log window.
model.setLogger(Logger.IJ_LOGGER)



#------------------------
# Prepare settings object
#------------------------

settings = Settings(imp)

# Configure detector - We use the Strings for the keys
settings.detectorFactory = DogDetectorFactory()
settings.detectorSettings = {
    'DO_SUBPIXEL_LOCALIZATION' : True, #Customize HERE
    'RADIUS' : 0.5, #CUSTOMIZE HERE
    'TARGET_CHANNEL' : 1, #CUSTOMIZE HERE
    'THRESHOLD' : 20.904, #CUSTOMIZE HERE
    'DO_MEDIAN_FILTERING' : False,
}

# Configure spot filters - Classical filter on quality
#filter1 = FeatureFilter('QUALITY', 30, False)
#settings.addSpotFilter(filter1)

# Configure tracker - We want to allow merges and fusions
settings.trackerFactory = SparseLAPTrackerFactory()
settings.trackerSettings = settings.trackerFactory.getDefaultSettings() # almost good enough
settings.trackerSettings['ALLOW_TRACK_SPLITTING'] = True
settings.trackerSettings['ALLOW_TRACK_MERGING'] = True

# Add ALL the feature analyzers known to TrackMate. They will 
# yield numerical features for the results, such as speed, mean intensity etc.
settings.addAllAnalyzers()

# Configure track filters - We want to get rid of the two immobile spots at
# the bottom right of the image. Track displacement must be above 10 pixels.

filter2 = FeatureFilter('TRACK_DISPLACEMENT', 10, False)
settings.addTrackFilter(filter2)


#-------------------
# Instantiate plugin
#-------------------

trackmate = TrackMate(model, settings)

#--------
# Process
#--------

ok = trackmate.checkInput()
if not ok:
    sys.exit(str(trackmate.getErrorMessage()))

ok = trackmate.process()
if not ok:
    sys.exit(str(trackmate.getErrorMessage()))


#----------------
# Display results
#----------------

# A selection.
selectionModel = SelectionModel( model )

# Read the default display settings.
ds = DisplaySettingsIO.readUserDefault()
# Color by tracks.
ds.setTrackColorBy( TrackMateObject.TRACKS, TrackIndexAnalyzer.TRACK_INDEX )
ds.setSpotColorBy( TrackMateObject.TRACKS, TrackIndexAnalyzer.TRACK_INDEX )

displayer =  HyperStackDisplayer( model, selectionModel, imp, ds )
displayer.render()
displayer.refresh()

# Echo results with the logger we set at start:
model.getLogger().log( str( model ) )

# Spot table. Will contain only the spots that are in visible tracks.
spot_table = TrackTableView.createSpotTable( model, ds )
spot_table_csv_file = File(output+filename+".csv" )
spot_table.exportToCsv( spot_table_csv_file )



#out_file_csv = input_filename.replace( '.xml', '.csv' )
#only_visible = True # Export only visible tracks
# If you set this flag to False, it will include all the spots,
# the ones not in tracks, and the ones not visible.
#CSVExporter.exportSpots( "TOTO", model, only_visible )


This should create many CSV files that we need to aggregate for the following analysis.

# Load necessary library (install dplyr only if missing)
rm(list = ls())
if (!requireNamespace("dplyr", quietly = TRUE)) install.packages("dplyr")
library(dplyr)

# Specify the folder containing the CSV files
InputFolder <- "C:\\Users\\stifanin\\Desktop\\Input\\"
OutputFolder <- "C:\\Users\\stifanin\\Desktop\\Output\\"
if (!dir.exists(OutputFolder)) {
  dir.create(OutputFolder, recursive = TRUE)
}

# List all CSV files in the folder
csv_files <- list.files(path = InputFolder, pattern = "\\.csv$", full.names = TRUE)


header <- names(read.csv(csv_files[1], nrows = 1))

filename_columns <- c("Date", "Measurement", "Axis", "Distance", "Speed", "Experiment")


# Read and merge all CSV files
merged_data <- csv_files %>%
  lapply(function(file) {
    # Read the file, skip the 4 header lines, and apply the header
    data <- read.csv(file, skip = 4, header = FALSE)
    colnames(data) <- header
    # Sort the data by the "FRAME" column in increasing order
    data <- data %>%
      arrange(FRAME)
    # Add a column for the filename (basename removes the path)
    data$SourceFile <- basename(file)
    
    filename <- basename(file)
    # Remove .tif from the filename if it exists
    filename <- gsub("\\.tif", "", filename)
    
    
    # Split the filename at underscores
    filename_parts <- strsplit(filename, "_")[[1]]
    
    # Ensure the number of parts matches the expected columns
    if (length(filename_parts) == length(filename_columns)) {
      for (i in seq_along(filename_columns)) {
        data[[filename_columns[i]]] <- filename_parts[i]
      }
      # Remove .tif.csv from the Experiment column
      data$Experiment <- gsub("\\.tif\\.csv$", "", data$Experiment)
    } else {
      warning(paste("Filename does not have the expected number of parts:", filename))
    }
    
    
    
    
    # Add the "Time (sec)" column, which rounds "Position_T" (assuming Position_T exists)
    if ("POSITION_T" %in% colnames(data)) {
      data$`Time (sec)` <- round(data$POSITION_T,0)
      data$`Time (min)` <- round(data$`Time (sec)` / 60, 2)  # rounded to 2 decimal places
      
    } else {
      warning(paste("Position_T column not found in file:", filename))
    }
    
    
   
    if ("POSITION_X" %in% colnames(data)) {
      first_value <- data$POSITION_X[data$FRAME==0][[1]]
      data$`X (nm)` <- (data$POSITION_X - first_value)*1000
    } else {
      warning(paste("Position_X column not found in file:", filename))
    }
    
    
    if ("POSITION_Y" %in% colnames(data)) {
      first_value <- data$POSITION_Y[data$FRAME==0][[1]]
      data$`Y (nm)` <- (data$POSITION_Y - first_value)*1000
    } else {
      warning(paste("POSITION_Y column not found in file:", filename))
    }
    
    
    if ("POSITION_Z" %in% colnames(data)) {
      first_value <- data$POSITION_Z[data$FRAME==0][[1]]
      data$`Z (nm)` <- (data$POSITION_Z - first_value)*1000
    } else {
      warning(paste("POSITION_Z column not found in file:", filename))
    }
    
    # Calculate displacement for each frame using Euclidean distance (in nm)
    if (all(c("X (nm)", "Y (nm)", "Z (nm)") %in% colnames(data))) {
      # Create a new column 'Displacement (nm)' based on Euclidean distance
      data$`Displacement (nm)` <- sqrt(
        diff(c(0, data$`X (nm)`))^2 +
          diff(c(0, data$`Y (nm)`))^2 +
          diff(c(0, data$`Z (nm)`))^2
      )
      # Set displacement of the first frame to 0 (or adjust as needed)
      data$`Displacement (nm)`[1] <- 0
    } else {
      warning(paste("Position columns missing for displacement calculation in file:", filename))
    }
    
    if ("Displacement (nm)" %in% colnames(data)) {
      displacement_sd <- sd(data$`Displacement (nm)`, na.rm = TRUE)
      data$`Displacement_SD (nm)` <- displacement_sd
    } else {
      warning(paste("Displacement column missing for standard deviation calculation in file:", filename))
    }
    return(data)
  }) %>%
  bind_rows()

# Print the merged data
#print(merged_data)

# Optionally, save the merged data to a new CSV file
output_file <- "merged_data.csv"
write.csv(merged_data, file = paste0(OutputFolder,output_file), row.names = FALSE)

# Message to indicate success
#cat("All CSV files have been merged and saved to", output_file, "\n")

This script should produce a CSV file, merged_data.csv, that can be processed and summarized with a pivot table: XY Repositioning Accuracy_Template.xlsx

Calculate the relative position in X, Y and Z: RelativePosition = Position − InitialPosition for each axis.

Calculate the displacement Displacement = Sqrt((X2−X1)² + (Y2−Y1)² + (Z2−Z1)²).

Calculate the resolution of your imaging configuration, Resolution = Lambda / (2 × NA), and compare the repositioning displacement to it.




References

The information provided here is inspired by the following references:

https://doi.org/10.17504/protocols.io.5jyl853ndl2w/v2

https://doi.org/10.1083/jcb.202107093








What needs to be assessed?

Resolution

Field Illumination Uniformity

Channel alignment (Co-registration)

Stage drift

Stage repositioning accuracy

Detector Noise


