Blog

Tiling and Stitching

A microscope can only capture a defined area of a sample. This area is called the Field of View (FOV) and depends on the optical configuration and on the acquisition device. It is a limiting feature of microscopy: to observe at a higher resolution, the total visualized area is reduced. This becomes an issue when trying to visualize features that are bigger than the FOV.

One way to deal with this issue is to acquire multiple images and stitch them together after acquisition. Instead of acquiring strictly adjacent FOVs, it is best to have partially overlapping regions: these overlaps are what allow the images to be stitched together.

While many software packages provide proprietary stitching solutions, we will focus here on the free and versatile ImageJ plugin named Grid/Collection Stitching.

Developed by Stephan Preibisch, this plugin is also part of the Fiji distribution of ImageJ.

Stitching process

Stitching usually occurs in 3 steps:

  1. The first is the "layout", which finds the adjacent images for each given image. This step approximately places the images in relation to each other.
  2. The second step finely transforms (rotation, translation) each image relative to its adjacent ones. It matches detected features in one image to the same features in the adjacent image.
  3. The last step blends the images so the result appears smooth.


Layout

Three pieces of information can be used to define the layout.

1. Images metadata

Modern microscopes use motorized stages to move the sample in X and Y. These coordinates can be stored in the image metadata and used during stitching. Knowing the approximate position of each tile greatly helps stitching, as you then only have to compute the fine matching between the different images.

2. Tiles configuration and acquisition order

If you have 25 images (Image 1, Image 2, ...) and know that they come from a 5 x 5 acquisition, from the top left to the bottom right, row by row from left to right, then you can quickly place your images at their approximate positions. Sometimes the tile configuration is directly saved in the file names (Image X1Y1, Image X2Y1, etc.); this can also be used to define the approximate tile layout.

3. Images themselves

The image data themselves can also be used to define the layout. This requires more computing power, as it usually parses all possible pairwise combinations and computes a correlation coefficient for each. Images with the highest correlation are then matched.
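As a rough sketch of the idea (the plugin actually uses phase correlation computed via Fourier transforms, which is faster, but the principle is the same), the similarity of two candidate overlap regions A and B can be scored with a normalized cross-correlation coefficient:

r(A, B) = Σ (A_i − mean(A)) × (B_i − mean(B)) / sqrt( Σ (A_i − mean(A))² × Σ (B_i − mean(B))² )

The pair of tiles (and the relative shift) giving the highest r is kept to build the layout.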

Transformation

Once the layout is defined, the images need to be finely adjusted to one another. Because microscopes are not perfect, small translations and rotations are used to finely match identified features in adjacent images. To allow this, images are usually acquired with a 10 to 20% overlapping region, which is used to finely register adjacent images.

Blending

Blending can be applied to the overlapping regions to ensure a smooth tiled result.
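For the plugin's default "Linear Blending" fusion, a reasonable mental model is a weighted average in the overlap, with weights that fall off linearly towards each tile's border:

I_fused(x) = w_A(x) × I_A(x) + w_B(x) × I_B(x),  with w_A(x) + w_B(x) = 1

where w_A(x) is proportional to the distance of pixel x from the edge of tile A. Each tile therefore contributes less and less as you approach its border, which hides the seam.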


Protocol

  • Open up FIJI
  • Open the Grid/Collection stitching plugin: menu Plugins>Stitching>Grid/Collection stitching

Your files are saved under Tile_x001_y001.tif, you know the grid size and the percentage overlap

  • Type: Filename defined position
  • Order: Defined by filename
  • Click OK
  • Indicate the grid size (for example 5x5 if you have 25 images)
  • Under directory click Browse
  • Select the folder containing your files
  • Click Choose
  • Under File names for tiles, enter Tile_x{xxx}_y{yyy}.tif

Several options are available; I recommend the following (a recorded macro call is sketched after this list):

  • Add tiles as ROIs (to check tiling quality)
  • Compute Overlap
  • Display fusion
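If you stitch regularly, the whole dialog can also be driven from a macro. The safest way to get the exact parameter string is to open Plugins>Macros>Record... before running the dialog once and to copy the generated call; checkbox options such as Add tiles as ROIs are simply recorded as extra flags in the same string. As an illustration only (parameter names can vary slightly between plugin versions, and the directory, grid size and overlap below are assumptions to adapt), the recorded call looks roughly like this:

run("Grid/Collection stitching",
  "type=[Filename defined position] order=[Defined by filename] "+
  "grid_size_x=5 grid_size_y=5 tile_overlap=20 first_file_index_x=1 first_file_index_y=1 "+
  "directory=[/Users/nicolas/Desktop/Tiles] file_names=Tile_x{xxx}_y{yyy}.tif "+
  "output_textfile_name=TileConfiguration.txt fusion_method=[Linear Blending] "+
  "regression_threshold=0.30 max/avg_displacement_threshold=2.50 absolute_displacement_threshold=3.50 "+
  "compute_overlap subpixel_accuracy "+
  "computation_parameters=[Save computation time (but use more RAM)] "+
  "image_output=[Fuse and display]");

For the two scenarios below, mostly the Type, Order and file name pattern change (for example type=[Grid: column-by-column] order=[Down & Right] file_names=Tile_{iii}.tif, or type=[Unknown position]).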


Your files are saved under Tile_001.tif, you know the grid size, the acquisition order and the percentage overlap

  • Type: Grid: Column by column
  • Order: Down & Right
  • Click OK
  • Indicate the grid size (for example 5x5 if you have 25 images)
  • Under directory click Browse
  • Select the folder containing your files
  • Click Choose
  • Under File names for tiles, enter Tile_{iii}.tif

Several options are available; I recommend the following:

  • Add tiles as ROIs (to check tiling quality)
  • Compute Overlap
  • Display fusion
  • Subpixel accuracy


You are capturing images with a manual stage. If you read this before the acquisition, I would suggest acquiring your tiles using a defined grid size and scheme (for example 3x3, snake horizontal right). This will allow you to use the process above (except with Type: Grid: snake by rows). Make sure to have some overlap between images to be able to place them finely. Since you are probably reading this after your acquisition, you have files saved as Tile_001.tif... but you do not know the grid size, the acquisition order or the percentage overlap.

  • Type: Unknown position
  • Order: All files in Directory
  • Click OK
  • Under directory click Browse
  • Select the folder containing your files
  • Check Confirm files

Several options are available; I recommend the following:

  • Add tiles as ROIs (to check tiling quality)
  • Ignore Z stage position
  • Subpixel accuracy
  • Display fusion
  • Computation parameters: Save computation time


If you choose Type: Grid, the plugin expects sequentially numbered, continuous files (i, i+1, etc.).

If you choose Positions from File, the plugin expects one single multi-series file. To combine several files into a single T-series, open the images individually or all together, then combine them as a stack (a macro version of these two steps is sketched after the instructions below):

Image>Stacks>Images to Stack

Check Use titles as labels

Then convert the stack to a T-series:

Image>Hyperstacks>Stack to Hyperstack...

Set Slices (z) = 1 and Frames (t) to the number of images (the value that previously appeared under z and that you replaced by 1).
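A minimal sketch of the same conversion as a macro (recorded menu commands; the frame count of 25 is an assumption to replace with your own number of tiles):

run("Images to Stack", "name=Stack title=[] use");
run("Stack to Hyperstack...", "order=xyczt(default) channels=1 slices=1 frames=25 display=Color");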

In my experience, proprietary acquisition software does not always encode the X and Y values properly in the metadata, so this method is not often usable.



Type: Positions from file
Use this if you want to use the image metadata to define the tile positions. This only works if you have one single input file containing all the tiles. In my experience this works when the file is saved in the acquisition software's proprietary format.
You can also use this type of stitching if you have an additional text file defining the position of each image. You can also create this file yourself.


Example of Tile Configuration File
# Define the number of dimensions we are working on
dim = 3
# Define the image coordinates (in pixels)
img_01.tif; ; (0.0, 0.0, 0.0)
img_05.tif; ; (409.0, 0.0, 0.0)
img_10.tif; ; (0.0, 409.0, 0.0)
img_15.tif; ; (409.0, 409.0, 0.0)
img_20.tif; ; (0.0, 818.0, 0.0)
img_25.tif; ; (409.0, 818.0, 0.0)


Notes

The higher the overlap, the more computing power is required.
The percentage overlap is approximate: start low and increase until the result is satisfactory.

It is much easier to stitch when the layout is known.

It is much easier to stitch images when there are many visible features: a tissue slide is easier than a sparse cell culture, and bright-field images are easier than fluorescence images.


Renaming your files

You can easily rename files on a Mac using Automator.

On a PC you can use Bulk Rename Utility
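If you prefer to stay inside ImageJ, a small macro can do the renaming as well. This is a minimal sketch under the assumption that the tiles were acquired row by row, from left to right and top to bottom, and are named Tile_001.tif, Tile_002.tif, ...; adjust GridX/GridY and the naming to your own acquisition:

// Rename sequential tiles (Tile_001.tif, ...) into Tile_x001_y001.tif style names
InputDir=getDirectory("Choose the folder containing the tiles");
GridX=5; GridY=5; // assumed grid size
for(i=0; i<GridX*GridY; i++){
OldName="Tile_"+IJ.pad(i+1, 3)+".tif";
X=(i%GridX)+1;        // column index, filling each row from left to right
Y=floor(i/GridX)+1;   // row index, from top to bottom
NewName="Tile_x"+IJ.pad(X, 3)+"_y"+IJ.pad(Y, 3)+".tif";
if(File.exists(InputDir+OldName)){
File.rename(InputDir+OldName, InputDir+NewName);
}
}// end for i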



More information

https://imagej.net/plugins/image-stitching




Quality Control

Introduction

Microscopy is an approximation of reality. The point spread function is the best example of this: a single point will appear as a blurry ellipse under light microscopy. This transformation depends on the optical components, which vary with time.

Quality control monitors this transformation over time.





Download MetroloJ QC from GitHub - MontpellierRessourcesImagerie/MetroloJ_QC

Download the iText library v5.5.13.2 from https://repo1.maven.org/maven2/com/itextpdf/itextpdf/5.5.13.2/itextpdf-5.5.13.2.jar

Download ImageJ or FIJI from imagej.net


Open ImageJ or FIJI

Install MetroloJ_QC and the iText library by dropping the .jar files onto the ImageJ status bar



Start up the microscope

Follow the setup procedure (load the test sample and focus with the lowest-magnification objective).

Adjust to obtain Köhler illumination

Remove the sample from the microscope

On a Nikon Ti2 without the ocular, a shadow appears in the upper left corner of the FOV.



Take a bright-field (BF) image to assess the widefield illumination homogeneity

Analyze the image using MetroloJ QC and check the centering accuracy and the total homogeneity

4x


Here the centering is off to the left side and the intensity falls below 30%.
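MetroloJ QC produces a full report, but if you just want a quick sanity check on the open BF image, a few macro lines give a rough homogeneity figure (this min/max ratio after smoothing is my own quick metric, not the MetroloJ QC definition):

// Rough illumination homogeneity check on the active single-channel image
run("Duplicate...", "title=Homogeneity_check");
run("32-bit");
run("Gaussian Blur...", "sigma=20"); // smooth out dust and noise
getStatistics(area, mean, min, max);
print("Homogeneity (min/max): "+d2s(100*min/max, 1)+" %");
close();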


Replace the illumination arm with the application screen light:


Removing the objective:


If the image is similar, then the issue comes from the detection side.

Test all items on the detection path: objective, filter cubes, light-path selector, camera.

Objective 20x/0.75

It improves the homogeneity but the centering is still off.

The 60x/1.4 gives perfect homogeneity.



Repeat for each objective

Change objective

Adjust Köhler

Take a BF image for Widefield illumination Homogeneity


If illumination is not homogeneous then 

Conclusion: the camera may need to be re-aligned to obtain homogeneous illumination in bright field.


The 5408nm laser line used for YFP is not working. The laser is shining outside the fiber but not reaching the objective. Has the cube been changed?


Microscopy consumables


Immersion Oil

A very important component. Mostly used for oil immersion objectives; the refractive index (RI) should match the RI of the coverslip glass and of the objective used. Different types or grades are available, A (low viscosity), B (high viscosity), N, F, FF (for fluorescence), etc., as well as different viscosities. Because the refractive index (and viscosity) varies with temperature, you should buy immersion oil matching the room temperature: usually 23°C, but you may also need 30°C or 37°C.

To date, Cargille provides the best options.

Cargille

  • Cargille Immersion Oil type FF #16212, 16 oz (473 mL), $94 ($0.20/mL), RI = 1.48
  • Cargille Immersion Oil type HF #16245, 16 oz (473 mL), $94 ($0.20/mL), RI = 1.51
  • Cargille Immersion Oil type LDF #16241, 16 oz (473 mL), $94 ($0.20/mL), RI = 1.51

Extremely low fluorescence is achieved by Type LDF and Type HF. Type FF is virtually fluorescence-free, though not ISO compliant. Type HF is slightly more fluorescent than Type LDF, but is halogen-free.

Thorlabs

  • MOIL-30 Olympus Type F, 30mL, $84, 2.8$/mL
  • MOIL-20LN Leica Type N, 20mL, $75, 3.75$/mL
  • OILCL30 Cargille Type LDF, 30mL, $28, 1.1$/mL RI=1.51
  • MOIL-10LF Leica Type F, 10mL, $59, 5.9$/mL

https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=5381

Zeiss

Edmund Optics

Lens Cleaner

Use a lens cleaner to clean objective lenses from oil.

Tiffen

Tiffen lens cleaner is designed for camera lenses, but it works great for microscope optics as well.

Edmund Optics

  • Lens Cleaner #54-828 (8oz, 236mL) 14$; 6 cts/mL
  • Purosol #57-727 (4oz, 115mL), 29$, 25 cts/mL

Zeiss, Nikon, Olympus, Leica

  • Home-made recipe: 85% n-hexane (analytical grade), 15% isopropanol (analytical grade)

Lens tissue

Edmund Optics

  • Lens Tissue #60-375 500 sheets 36$ 7cts/sheet 

Thorlabs

  • Lens Tissue #MC-50E 1250 sheets 93$ 7.6cts/sheet

Tiffen

  • Lens Tissue #EK1546027T 250 sheets 112$ 44cts/sheet

Liquid light guides

Excelitas

  • 3mm core diameter, 1.5m length sold by Digikey #1601-805-00038-ND 682$

Thorlabs

  • 3mm core diameter, 1.2m length #LLG03-4H 410$
  • 3mm core diameter, 1.8m length #LLG03-6H 490$

Edmund Optics

  • 3 mm core diameter, 1.8m length #53-689 3mm 700$
  • Adapters available #66-905

Microscope world

  • Liquid Light Guide 3mm core diameter, 1.5m length #805-00038 445$

Bulbs

AVH Technologies

HXP R 120W/45C 780$


OSRAM

XP R 120 W/45 #69119

Others consumables

  • Cotton swab
  • Absorbent polyester swabs for cleaning optical components (Alpha, Clean Foam or Absorbond series, TX743B) from www.texwipe.com
  • Rubber Blower GTAA 1900 from Giottos www.giottos.com
Image Analysis

Let's say you have many images taken the same way from two different samples: One Control group and One test group.


What will/should you do to analyse them?


The first thing I would do is open a random pair of images (one from the control and one from the test) and have a look at them...

Control                                                   Test


Then I would normalize the brightness and contrast for each channel and across images, so the display settings are the same for both images.

ImageJ Macro to Equalize Brightness and Contrast for all opened images
// This macro was created by nstifani@gmail.com
// Feel free to reach out for any question or improvement suggestions
// This macro will look for Min and Max values on all open images and then apply the Min and the Max to adjust the display brightness of all opened images.
 
// Create Arrays to store image info
ListOfCh=newArray(nImages);
ListOfSlices=newArray(nImages);
ListOfFrames=newArray(nImages);
 
// Get the dimensions of the opened Images
for(ImageI=1; ImageI<nImages+1; ImageI++){
selectImage(ImageI);
getDimensions(width, height, channels, slices, frames);
ListOfCh[ImageI-1]=channels;
ListOfSlices[ImageI-1]=slices;
ListOfFrames[ImageI-1]=frames;
}// end for ImageI
 
 
// Get some statistics
Array.getStatistics(ListOfCh, MinCh, MaxCh, MeanCh, StdDevCh);
Array.getStatistics(ListOfSlices, MinSlice, MaxSlice, MeanSlice, StdDevSlice);
Array.getStatistics(ListOfFrames, MinFrame, MaxFrame, MeanFrame, StdDevFrame);
 
 
// Process all channels using two functions
for(ChI=1; ChI<MaxCh+1;ChI++){
MinAndMax=GetBnCValues(ChI);
MinMin=MinAndMax[0];
MaxMax=MinAndMax[1];
ApplyBnC(ChI,MinMin, MaxMax);
}
 
 
function GetBnCValues(ChI) {
ListOfMin=newArray(nImages);
ListOfMax=newArray(nImages);
 
// Measure Min and Max for all open images
for(ImageI=1; ImageI<nImages+1; ImageI++){
selectImage(ImageI);
Stack.setChannel(ChI);
resetMinAndMax();
//run("Enhance Contrast", "saturated=0.35");
getMinAndMax(min, max);
ListOfMin[ImageI-1]=min;
ListOfMax[ImageI-1]=max;
}// end for ImageI
 
// Get Statistics
Array.getStatistics(ListOfMin, MinMin, MaxMin, MeanMin, StdDevMin);
Array.getStatistics(ListOfMax, MinMax, MaxMax, MeanMax, StdDevMax);
return newArray(MinMin, MaxMax);
}
 
function ApplyBnC(ChI, MinMin, MaxMax) {
for(ImageI=1; ImageI<nImages+1; ImageI++){
selectImage(ImageI);
Stack.setChannel(ChI);
setMinAndMax(MinMin, MaxMax);
}// end for ImageI
}// end of ApplyBnC function

ImageJ Macro Equalize BnC for all images in a folder
//It might be easier to process images from a folder
//You would need to customize this path to your computer
InputDir="/Users/nicolas/Desktop/Input/";
 
// You could also get a prompt to select the InputDir
// InputDir = getDirectory("Choose a Directory ");
 
 
// And to save results in an output folder
// You would need to customize this path to your computer
//OutputPath="/Users/nicolas/Desktop/Output/";
 
//You could also create a new folder based on the name of the input folder
ParentPath=File.getParent(InputDir);
InputDirName=File.getName(InputDir);
OutputDir=ParentPath+File.separator+InputDirName+"_Results";
i=1;
 
while(File.exists(OutputDir)){
OutputDir=ParentPath+File.separator+InputDirName+"_Results"+"-"+i;
i++;
}
File.makeDirectory(OutputDir);
OutputPath=OutputDir+File.separator;
 
 

function GetBnCValues(ChI) {
ListOfMin=newArray(ListFile.length);
ListOfMax=newArray(ListFile.length);
  
// Measure Min and Max for all open images
for(Filei=0; Filei<ListFile.length; Filei++){
FilePath=InputDir+ListFile[Filei];
open(FilePath);
ImageName=getTitle();
//selectWindow(ImageName);
Stack.setChannel(ChI);
resetMinAndMax();
run("Enhance Contrast", "saturated=0.35");
getMinAndMax(min, max);
ListOfMin[Filei]=min;
ListOfMax[Filei]=max;
selectWindow(ImageName); run("Close");
}// end of for FileI
// Get Statistics
Array.getStatistics(ListOfMin, MinMin, MaxMin, MeanMin, StdDevMin);
Array.getStatistics(ListOfMax, MinMax, MaxMax, MeanMax, StdDevMax);
return newArray(MinMin, MaxMax);
}// End of GetBnCValues function



function ApplyBnC(ChI, MinMin, MaxMax) {
for(Filei=0; Filei<ListFile.length; Filei++){
FilePath=InputDir+ListFile[Filei];
open(FilePath);
ImageName=getTitle();
//selectWindow(ImageName);
Stack.setChannel(ChI);
setMinAndMax(MinMin, MaxMax);
saveAs("Tiff", OutputPath+ImageName);
selectWindow(ImageName); run("Close");
}// end for Filei
}//end of function

 
ListFile=getFileList(InputDir);
run("Set Measurements...", "area mean standard modal min centroid center perimeter bounding fit shape feret's integrated median skewness kurtosis area_fraction stack display redirect=None decimal=3");
run("Clear Results");
 
// It might be faster to work in batchmode
setBatchMode(true);
 

// Create Arrays to store image info
ListOfCh=newArray(ListFile.length);
ListOfSlices=newArray(ListFile.length);
ListOfFrames=newArray(ListFile.length);
 
for (Filei=0; Filei<ListFile.length; Filei++){
FilePath=InputDir+ListFile[Filei];
open(FilePath);
ImageName=getTitle();
//selectWindow(ImageName);
getDimensions(width, height, channels, slices, frames);
ListOfCh[Filei]=channels;
ListOfSlices[Filei]=slices;
ListOfFrames[Filei]=frames;
selectWindow(ImageName); run("Close");
}// end for FileI


// Get some statistics
Array.getStatistics(ListOfCh, MinCh, MaxCh, MeanCh, StdDevCh);
Array.getStatistics(ListOfSlices, MinSlice, MaxSlice, MeanSlice, StdDevSlice);
Array.getStatistics(ListOfFrames, MinFrame, MaxFrame, MeanFrame, StdDevFrame);


//for (Filei=0; Filei<ListFile.length; Filei++){
//FilePath=InputDir+ListFile[Filei];
//open(FilePath);
//ImageName=getTitle();
////selectWindow(ImageName);

// Process all channels using two functions
for(ChI=1; ChI<MaxCh+1;ChI++){
MinAndMax=GetBnCValues(ChI);
MinMin=MinAndMax[0];
MaxMax=MinAndMax[1];
ApplyBnC(ChI,MinMin, MaxMax);
}





Then I would look at each channel individually and use the Fire LUT to better appreciate the intensities.


Macro Apply LUT Fire to all opened images
for(ImageI=1; ImageI<nImages+1; ImageI++){
selectImage(ImageI);
getDimensions(width, height, channels, slices, frames);
for(ChI=1; ChI<channels+1;ChI++){
Stack.setChannel(ChI);
Property.set("CompositeProjection", "null");
Stack.setDisplayMode("color");
run("Fire");
}
}


Finally, I would use the Synchronize Windows feature to navigate the two images at the same time.

Analyze>Tools>Synchronize Windows

or in a macro:

run("Synchronize Windows");



                           Control  Ch1                                         Test Ch1                        

                           Control  Ch2                                         Test Ch2                        


To my eye, Ch1 and Ch2 are slightly brighter in the Test condition.

Here we can't compare Ch1 to Ch2 because the display ranges are not the same. It seems that Ch2 is much weaker than Ch1, but that is actually not accurate. The best way to sort this out is to add a calibration bar to each channel: Analyze>Tools>Calibration Bar. The calibration bar is non-destructive, as it is added to the overlay. To "print" it on the image you can flatten it: Image>Overlay>Flatten.
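If you need this on many images, the same two steps can be recorded as a macro; a minimal sketch (the location, number of labels, font and zoom are choices to adjust):

run("Calibration Bar...", "location=[Upper Right] fill=White label=Black number=5 decimal=0 font=12 zoom=1 overlay");
run("Flatten"); // only if you want to burn the bar into the pixels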





                               Control  Ch1                                         Control Ch2       


If you apply the same display to Ch1 and Ch2, you can see that Ch1 is overall more intense, while Ch2 has a few very strong spots.


                         Control  Ch1                       Control Ch2 with the same display as Ch1



Looking more closely

In Ch1 we can see that there is some low-level intensity plus high-intensity circular foci, whereas in Ch2 there is a bean-shaped structure. In the example below the foci seem stronger in the Control than in the Test condition.


                       Control Ch1                                            Test Ch1




                       Control Ch2                                                    Test Ch2


But we obviously need to do some quantification to confirm or refute these first observations.


The first way to address this would be in a bulk fashion: by measuring, for example, the mean intensity of all the images.



Macro ImageJ Collect Global measurements
 

//It might be easier to process images from a folder
//You would need to customize this path to your computer
InputDir="/Users/nicolas/Desktop/Input/";

// You could also get a prompt to select the InputDir
// InputDir = getDirectory("Choose a Directory ");


// And to save results in an output folder
// You would need to customize this path to your computer
//OutputPath="/Users/nicolas/Desktop/Output/";

//You could also create a new folder based on the name of the input folder
ParentPath=File.getParent(InputDir); 
InputDirName=File.getName(InputDir);
OutputDir=ParentPath+File.separator+InputDirName+"_Results";
i=1;

while(File.exists(OutputDir)){
OutputDir=ParentPath+File.separator+InputDirName+"_Results"+"-"+i;
i++;
}
File.makeDirectory(OutputDir);
OutputPath=OutputDir+File.separator;


//Then you can measure all values for all ch and all images

ListFile=getFileList(InputDir); 
run("Set Measurements...", "area mean standard modal min centroid center perimeter bounding fit shape feret's integrated median skewness kurtosis area_fraction stack display redirect=None decimal=3");
run("Clear Results");

// It might be faster to work in batchmode
setBatchMode(true);


for (Filei=0; Filei<ListFile.length; Filei++){
FilePath=InputDir+ListFile[Filei];
open(FilePath);
ImageName=getTitle();
selectWindow(ImageName);
run("Select None");
run("Measure Stack...");
selectWindow(ImageName); run("Close");
}
selectWindow("Results");
saveAs("Text", OutputPath+"Overall_Measurements.csv");
selectWindow("Results"); run("Close");
   


If all works fine you should have a CSV file you can open with your favorite spreadsheet application. This table should give one line per image, with all available measurements for the whole image and for each channel. Of course, some measurements will be identical across images because the images were taken the same way.

What to do with the file? Explore the data and see if there is any relevant information.

My approach would be to use a short script in R to plot all the data and do some basic statistics.


R Plot ImageJ Global Measurements
# Prompt for the input CSV file from ImageJ measurements
Input<-file.choose()

# You must change this path to match your computer
Output<-"/Users/nicolas/Desktop/Output/"
#Output <- "C:\\Users\\stifanin\\OneDrive - Universite de Montreal\\Bureau\\Output\\"


data<-read.csv(Input)

require("ggpubr")

List<-strsplit(as.character(data$Label), ':')

#Add Filename and Group to the data
data$Filename<-as.factor(sapply(List, "[[", 1))
Group<-strsplit(as.character(data$Filename), '_')
data$Group<-as.factor(sapply(Group, "[[", 1))
# Make Ch a factor
data$Ch<-as.factor(data$Ch)

#Create a list of plots
ListofPlots<-list()
i=1;

for(ColI in 3:(ncol(data)-2)){
  if(!is.factor(data[[ColI]])){
    for (ChI in 1:nlevels(data$Ch)){
   Graph<-   ggboxplot(
        data[data$Ch==ChI,], x = "Group", y = colnames(data[ColI]),
        color = "Group", palette = c("#4194fa", "#db51d4"),
        add = "jitter", title=paste0(colnames(data[ColI])," of Channel ", ChI)
      )+stat_compare_means(method = "t.test")
   ListofPlots[[i]]<-Graph
i=i+1
  }
  }
}

#Export the graphs
ggexport(
  plotlist = ListofPlots, filename = paste0(Output,"Graphs.pdf"))

This should give you a PDF file with one plot per page. You can scroll through it and look at the data. The p-values from t-tests are indicated on the graphs. As you can see below, the mean intensity in both Ch1 and Ch2 is higher in the Test than in the Control. What does it mean?



It means that the average pixel intensity is higher in the Test condition than in the Control condition.

Other values that are significantly different:

  • Mean Intensity of Ch1 and Ch2 (Control<Test)
  • Maximum Intensity of Ch1 (Control>Test): the brightest value in the image
  • Integrated Intensity of Ch1 and Ch2 (Control<Test): equal to the Mean x Area
  • Median of Ch1 and Ch2 (Control<Test)
  • Skew of Ch1 (Control>Test): the third-order moment about the mean. It relates to the distribution of intensities: if = 0, intensities are symmetrically distributed around the mean; if < 0, the distribution is asymmetric to the left of the mean (lower intensities); if > 0, it is asymmetric to the right (higher intensities).
  • Raw Integrated Intensity of Ch1 and Ch2 (Control<Test): the sum of all pixel intensities


Now we start to have some results and statistically relevant information about the data. The Test condition has a higher mean intensity (integrated intensity, median and raw integrated intensity all show the same result) for both Ch1 and Ch2. This is surprising because I had the opposite impression while looking at the images with normalized intensities and the Fire LUT applied (see above). Another surprising result is that Control images have a higher maximum intensity than Test images, but only for Ch1. This is clearly seen in the picture above.

One thing that can explain these results is that the number of cells may differ between the Control and the Test images. If there are more cells in one condition, then there are more stained pixels (and less background) and the mean intensity would be higher, not because the signal itself is higher in each cell but because there are more cells...

To solve this we need to count the number of cells per image. This can be done manually or using segmentation based on intensity.

Looking at the image it seems that Ch1 is a good candidate to segment each cell.

ImageJ Macro Segment and Measure
// It might be easier to process all images in a folder
// You can specify this folder. You need to customize this path to your computer
InputDir="/Users/nicolas/Desktop/Input/";
 
// Or you can get a prompt to select the InputDir
// InputDir = getDirectory("Choose a Directory ");
 
 
// To save results in an output folder
// You can specify the OutputPath. You need to customize this path to your computer
//OutputPath="/Users/nicolas/Desktop/Output/";
 
//Or you can create a new folder based on the name of the input folder
//This is the method I prefer
ParentPath=File.getParent(InputDir);
InputDirName=File.getName(InputDir);
OutputDir=ParentPath+File.separator+InputDirName+"_Results";
i=1;
 
while(File.exists(OutputDir)){
OutputDir=ParentPath+File.separator+InputDirName+"_Results"+"-"+i;
i++;
}
File.makeDirectory(OutputDir);
OutputPath=OutputDir+File.separator;
File.makeDirectory(OutputPath+"Cropped Cells");
OutputCellPath=OutputPath+"Cropped Cells"+File.separator;

//// End of creating a new ouput folder

 
// Prepare some measurements settings and clean up the place
run("Set Measurements...", "area mean standard modal min centroid center perimeter bounding fit shape feret's integrated median skewness kurtosis area_fraction stack display redirect=None decimal=3");
run("Clear Results");


//Then you can start to process all images
 
ListFile=getFileList(InputDir);
 run("ROI Manager...");
// It might be faster to work in batchmode
setBatchMode(true);


 
for (Filei=0; Filei<ListFile.length; Filei++){
FilePath=InputDir+ListFile[Filei];
open(FilePath);
ImageName=getTitle();
ImageNameNoExt=File.getNameWithoutExtension(FilePath);
getDimensions(width, height, channels, slices, frames);

//Remove the Background
selectWindow(ImageName);
ImageNameCorrected=ImageNameNoExt+"_Corrected";
run("Duplicate...", "title=&mageNameCorrected duplicate");
rename(ImageNameCorrected);
selectWindow(ImageNameCorrected);
run("Subtract...", "value=500");
run("Subtract Background...", "rolling=22 sliding stack");

//Adjust the display
for(ChI=1; ChI<channels+1; ChI++){
Stack.setChannel(ChI);
resetMinAndMax();
run("Enhance Contrast", "saturated=0.35");
//setMinAndMax(500, 3000);
}
Stack.setChannel(1);
Property.set("CompositeProjection", "Sum");
Stack.setDisplayMode("composite");

//Combine both color for full cell segmentation
run("RGB Color");
rename("RGB");
run("Duplicate...", "title=Mask duplicate");
run("16-bit");
resetThreshold();
setAutoThreshold("Otsu dark no-reset");
setOption("BlackBackground", true);
run("Convert to Mask");
run("Open");
run("Dilate");
run("Fill Holes");
run("Analyze Particles...", "size=2-12 circularity=0.60-1.00 exclude add");
selectWindow("Mask");
saveAs("TIFF", OutputPath+ImageNameNoExt+"_Cell-Mask.tif");
MaskImage=getTitle(); 
selectWindow(MaskImage);run("Close");
selectWindow("RGB");
run("Remove Overlay");
run("From ROI Manager");
saveAs("TIFF", OutputPath+ImageNameNoExt+"_Segmentation-Control.tif");
ControlImage=getTitle(); 
selectWindow(ControlImage);run("Close");

selectWindow(ImageNameCorrected);
count = roiManager("count");
for (i = 0; i < count; i++) {
roiManager("select", i);
run("Measure Stack...", "channels slices frames order=czt(default)");
}
RoiManager.select(0);

selectWindow("Results");
saveAs("Results", OutputPath+ImageNameNoExt+"_Measurements_Cells.csv");
run("Clear Results");

selectWindow(ImageNameCorrected);
saveAs("TIFF", OutputPath+ImageNameNoExt+"_Background-removed.tif");
ImageCorrected =getTitle();
selectWindow(ImageCorrected);

NbROIs=roiManager("size");
AllROIs=Array.getSequence(NbROIs);
roiManager("Select", AllROIs);
OutputCellName=OutputCellPath+ImageNameNoExt+"_";
RoiManager.multiCrop(OutputCellName, " save tif");
roiManager("Save", OutputPath+ImageNameNoExt+"_Cell-ROIs.zip");
roiManager("Deselect");
roiManager("Delete");
selectWindow(ImageCorrected);run("Close");
selectWindow(ImageName);run("Close");

}// end for FileI



This macro is getting a bit long, so to summarize, here are the steps:

  • Open each image
  • Remove the offset from the camera (500 GV) and apply a rolling ball background subtraction
  • Create a RGB composite regrouping both channels
  • Convert this RGB to a 16-bit image
  • Threshold the image using the Otsu algorithm
  • Process the binary image to improve detection (Open, Dilate, Fill Holes)
  • Analyze particles to detect cells with size=2-12 circularity=0.60-1.00
  • Add the results to the ROI manager
  • Save the Mask for the control of segmentation
  • Save an RGB image with the detection overlay as a control for the good detection
  • Save the ROIs
  • Use the ROIs to collect all measurements available and save the result as a csv
  • Save the image with the background removed
  • Crop each ROI from the background-removed image to isolate each cell

A few notes:

The camera offset is a value the camera adds to avoid negative values due to noise. The best way to measure it is to take a dark image and compute its mean intensity. If you don't have that in hand, you can choose a value slightly lower than the lowest pixel value found in your images.
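A minimal sketch of that measurement, assuming you have acquired a dark image (shutter closed, same exposure settings) and it is the active image:

// Estimate the camera offset from a dark image
getStatistics(area, mean, min, max, std);
print("Camera offset estimate (mean of dark image): "+mean);
print("Lowest dark pixel value: "+min);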

The rolling-ball background subtraction is a powerful tool to help with the segmentation.

The values for the Analyze Particles detection are the tricky part here. How do I choose them? I use the thresholded (binary) image and select the average-looking spot: the one that looks like all the others. I use the wand tool to create a selection and then run Analyze>Measure. This gives me a good estimate of the size (area) and circularity. Then I select the tall/skinny and the short/not-so-skinny outliers, again with the wand tool, to get the spots that define my upper and lower limits. This gives me the range of spot sizes (area) and circularities. This is really a critical step that should be performed by hand prior to running the script. You should do this on a few different images to make sure the values are good enough to account for the variability you will encounter.

Now it is time to check that the job was done properly.

Looking at the control images, the detection isn't bad at all.


The only things missing are the individual green foci seen below. Those cells look different, as the foci are very bright but there is not much fluorescence elsewhere (no diffuse green and no red). I might discuss with the scientist whether it is OK to ignore them. If not, I would need to change the threshold values and the detection parameters, but let's say it is fine for now.

So now you should have a list of files (images, ROIs, and CSV files). We will focus on the CSV files, as they contain the number of cells we are looking for and a lot more information we can also use.

We will reuse the previous R script but add a small part to merge all the CSV files from the input folder.


Script R Merge CSV files and plot all data
#Select the Input and the Output folder
Input <- "/Users/nicolas/Desktop/Input_Results-1/"
Output<-"/Users/nicolas/Desktop/Output/"
require("ggpubr")
#Get the list of CSV files
FileList<-list.files(path=Input, full.names=TRUE, pattern=".*csv")

#Merge the file into one big data
for (FileI in 1:length(FileList)){
  data<-read.csv(FileList[FileI])
  if (FileI==1){
    BigData<-data
  }else{
    BigData<-rbind(BigData,data)
  }
  
}
# then the rest is the same as in the previous script
data<-BigData

List<-strsplit(as.character(data$Label), ':')

#Add Filename and Group to the data
data$Filename<-as.factor(sapply(List, "[[", 1))
Group<-strsplit(as.character(data$Filename), '_')
data$Group<-as.factor(sapply(Group, "[[", 1))
# Make Ch as a factor
data$Ch<-as.factor(data$Ch)

write.csv(data, paste0(Output,"Detected-Cells_All-Measurements.csv"), row.names = FALSE)

#Create a list of plots
ListofPlots<-list()
i=1;

for(ColI in 3:(ncol(data)-2)){
  if(!is.factor(data[[ColI]])){
    for (ChI in 1:nlevels(data$Ch)){
   Graph<-   ggboxplot(
        data[data$Ch==ChI,], x = "Group", y = colnames(data[ColI]),
        color = "Group", palette = c("#4194fa", "#db51d4"),
        add = "jitter", title=paste0(colnames(data[ColI])," of Channel ", ChI)
      )+stat_compare_means(method = "t.test")
   ListofPlots[[i]]<-Graph
i=i+1
  }
  }
}

#Export the graphs
ggexport(
  plotlist = ListofPlots, filename = paste0(Output,"Graphs_individual.pdf"), 
  verbose=TRUE
)


This script will save the merged data into a single CSV file. Using your favorite spreadsheet application, you can summarize the data with a pivot table to get the number of cells per group.


Control: 1225 cells
Test: 1360 cells

There are slightly more cells in the Test than in the Control. If we look at the number of detected cells per image, we can confirm that there are more cells in the Test condition than in the Control condition. Two images have fewer cells than the others in the Control; we can go back to the detection to check those.


Looking at the detection images, we can confirm that the two images from the Control indeed have fewer cells, so it is not a detection issue.


Together, these results show that there are not meaningfully more cells in one condition than in the other.

On average, between 60 and 65 cells are detected per image, with a total of about 1,200 cells detected per condition.


Then we can look at the graphs and check what is statistically different; here is the short list:

  • Area Test>Control
  • Mean Ch2 Test>Control
  • Min Ch2 Test>Control
  • Max Ch1 Control>Test
  • Max Ch2 Test>Control
  • Perimeter Test>Control
  • Width and Height Test>Control
  • Major and Minor Axis Test>Control
  • Feret Test>Control
  • Integrated Density Ch1 and Ch2 Test>Control
  • Skew and Kurt Ch1 Control>Test
  • Raw Integrated density of Ch2 Test>Control
  • Roundness Control>Test


Looking at p-values (statistical significance) is a good approach, but it is not enough. We should also look at the biological significance of the numbers. For example, the roundness is statistically different between Control and Test, but the difference is really small. What does it mean biologically that the Test cells are a tiny bit less circular? In this specific case: nothing much. So we can safely set it aside and focus on more important things.


This can easily be done since we have generated a CSV file gathering all the data Detected-Cells_All-Measurements.csv


R Script Create Graph with Descriptive Statistics
# Prompt for the input CSV file from ImageJ measurements
Input<-file.choose()

# You must change this path to match your computer
Output<-"/Users/nicolas/Desktop/Output/"

data<-read.csv(Input)

data$Ch<-as.factor(data$Ch)



require("ggpubr")
#Create a list of plots
ListofPlots<-list()
i=1;

for(ColI in 3:(ncol(data)-2)){
  if(is.numeric(data[[ColI]]) || is.integer(data[[ColI]])){
    for (ChI in 1:nlevels(data$Ch)){
      Graph<-ggsummarystats(
        data[data$Ch==ChI,], x = "Group", y=colnames(data[ColI]),
        ggfunc = ggviolin, digits = 2,
        color = "Group", palette = c("#4194fa", "#db51d4"),
        summaries=c("n", "mean","sd","ci"), add="mean_sd", title=paste0(colnames(data[ColI])," of Channel ", ChI))
      ListofPlots[[i]]<-Graph
      i=i+1
    }
  }
}

#Export the graphs
ggexport(
  plotlist = ListofPlots, filename = paste0(Output,"Graphs_Individual_Descriptive Stats.pdf"), 
  verbose=TRUE
)



Then, for all the variables that are statistically different between the Control and Test groups, we can have a closer look at the data.

  • Area: Control 5.03 um2; Test 5.35 um2; p-value = 1.7e−07
    This is a relatively small increase (6%). Brought back to a diameter, it is even smaller (a 3% increase). Yet the most relevant value in my view is the volume, because cells are spheres in real life: this corresponds to roughly a 10% increase in cell volume. This is relevant information: cells in the Test condition are about 10% bigger than in the Control condition.
  • Mean Ch2: Control 1078 GV; Test 1197 GV; p-value < 2.2e−16
    This means that the bean-shaped structure labelled by Ch2 is about 10% brighter in the Test cells than in the Controls. Yet the segmentation was performed on the full cell, using Ch1 as a proxy. This result prompts for a segmentation based on Ch2: are the bean-shaped structures bigger or brighter?
  • Min Ch2: Control 198 GV; Test 213 GV; p-value = 1e−05
    The difference in the minimum of Ch2 represents about a 7.5% increase. If the minimum is higher, it suggests a global increase of the Ch2 fluorescence rather than a redistribution (i.e. clustering). The segmentation on Ch2 might sort out which option is really occurring.
  • Max Ch1: Control 10111 GV; Test 8393 GV; p-value < 2.2e−16
    This difference represents a 17% decrease of the maximum intensity of Ch1. If we remember, Ch1 has two fluorescence levels: a low level within the nuclei and intense foci. Here the maximum represents the foci only, so we can conclude that the foci are less intense in the Test compared to the Control.
  • Max Ch2: Control 3522 GV; Test 3761 GV; p-value = 0.00035
    Here we have the opposite situation: within the bean-shaped structure, the maximum intensity is higher in the Test condition than in the Control.
  • Perimeter: Control 8.45 um; Test 8.74 um; p-value = 1.7e−08
    Another measurement of the size of the detected cells. From the area increase we would estimate about a 3.1% perimeter increase; here the increase is about 3.5%. This is consistent, meaning the increases in area and perimeter are about the same.
  • Width and Height: Control 2.49 um; Test 2.57 um; p-value = 8e−07
    Again, another measurement of the size of the detected cells, which are larger in the Test than in the Control.
  • Major and Minor axis length: Control 2.73 um; Test 2.84 um; p-value = 2.7e−10
    Similar to the above, except that instead of measuring the width and height of a rectangle around the cell, it uses an ellipse fitted to the cell. In my view this is a more relevant variable than the previous one, but since all the data converge towards slightly larger cells in the Test than in the Control, we can focus on the most interesting ones (area or perimeter).
  • Feret: Control 2.95 um; Test 3.05 um; p-value = 5.1e−10
    This is the maximum distance between two points of the detected cell. It relates to the shape of the cell.
  • Integrated Density Ch1: Control 11296 GV; Test 11886 GV; p-value
    This is the product of the area and the mean intensity. We know that the area is larger but the mean intensity is not. Yet the integrated density of Ch1 is slightly larger (5%) in Test vs Control.
  • Integrated Density Ch2: Control 5570 GV; Test 6548 GV; p-value < 2.2e−16
    Here we have about a 17% increase, which represents the combined increase of cell size and of the Ch2 mean intensity.
  • Skew of Ch1: Control 2.47; Test 1.77; p-value < 2.2e−16
    The skewness refers to the asymmetry of the intensity distribution. Since the values are above 0, the distribution is skewed towards the higher intensities. Since this number is lower in the Test cells, the distribution of Ch1 intensities is less asymmetric there. This can be explained by looking at Ch1, which has a low intensity within the nuclei plus strong foci: the high intensities of the foci produce a higher skew value. Since the skew is closer to 0 in the Test condition, this could mean either that the foci are less intense or that the Ch1 fluorescence is more evenly distributed. Since the mean intensity of Ch1 neither increases nor decreases, it is likely that the fluorescence is more evenly distributed in the Test condition than in the Control. In other words, it seems that in the Control condition the foci are brighter, concentrating some of the low-level fluorescence present in the nuclei, whereas the Test condition cannot make very bright foci and its Ch1 fluorescence is more evenly distributed.
  • Kurt of Ch1: Control 11.3; Test 7.24; p-value < 2.2e−16
    The kurtosis accounts for the flatness of the distribution: if Kurt = 0 we have a normal distribution, if < 0 it is flatter than normal, and if > 0 it is more peaked. In this case the data are very peaked in both Control and Test. This is quite interesting, especially when looking at the violin plots below. In the Control we can see two peaks, one close to 0 and one close to 11. This means that there are two kinds of cells in the Control condition: those with a bright focus and those with only low-level, distributed Ch1 fluorescence. In the Test condition the Kurt values are lower and the two peaks are so close that they almost merge.


  • Raw Integrated Density of Ch2: Control 313299; Test 368339; p-value < 2.2e−16
    This is the sum of the pixel intensities. It is higher in the Test cells than in the Control cells. This can be because the intensity is higher and/or the area is larger; in this case we have seen previously that the area is larger. Interestingly, the raw integrated intensity is higher for Ch2 but the same for Ch1.

  • Roundness: Control 0.84; Test 0.83; p-value = 0.0032
    As said before, even though the difference is statistically significant, it is minimal, and we can safely move on to focus on other measurements.


As we have seen, it seems that Ch1 and Ch2 label defined structures that differ between the Control and Test groups. Our per-cell analysis does not provide enough detail on each structure. More specifically, it cannot discriminate between the low-intensity Ch1 signal and the high-intensity foci. It also looks at the overall Ch2 fluorescence in the cell, while we can clearly identify a bean-shaped structure. The next step would be to segment the images in three ways: high Ch1 intensities would correspond to the foci, high Ch2 intensities would correspond to the bean-shaped structure, and low Ch1 intensities (all Ch1 intensities excluding the high-intensity foci) would relate to the overall nuclei.








All about Objectives

What are the parameters for choosing a good objective?

What is the Strehl ratio?

Screw type

RMS (Royal Microscopical Society objective thread)

M25 (metric 25-millimeter objective thread)

M32 (metric 32-millimeter objective thread).

Less common or older

M27 x0.75

M27x1.0


Working distance


Immersion


Numerical aperture


Optical Correction

Transmittance


Others

Tube length

Adjustment 

I often get questions about how to write the Materials and Methods section of a manuscript that includes microscopy data. The diversity of technologies and applications in the field of microscopy may be complicated for some users, and the help of a microscopy specialist is often required.

Here I would like to present a useful tool called MicCheck, which provides a checklist of what to include in the Materials and Methods section of a manuscript containing microscopy data. MicCheck is a free tool to help you. It is described in detail in the following article.

Montero Llopis, P., Senft, R.A., Ross-Elliott, T.J. et al. Best practices and tools for reporting reproducible fluorescence microscopy methods. Nat Methods 18, 1463–1476 (2021). https://doi.org/10.1038/s41592-021-01156-w


Photo-Toxicity


What is photo-toxicity?

Under appropriate temperature (37°C), humidity (high) and atmosphere (5% CO2), it is possible to grow cells in vitro.

Healthy cells

But looking at these cells under the microscope is not without impact. This is similar to exposing yourself to sunlight: if the sun is too strong and/or you are exposed for too long, you can get a bad sunburn... The same applies to cells under a microscope. While transmitted light (bright-field, phase-contrast, DIC) does not affect cell biology, fluorescence excitation light can be deleterious.

What does photo-toxicity look like?

Strong illumination light will eventually burn your cells. Adherent cells will retract, detach and eventually die. See the example below (click on the image to see the movie). These are cells exposed to 1 second of 70 mW of 550 nm light every 5 minutes; the total movie spans about 3 h. The cells retract, detach and eventually die.

Click on the image to see the movie


Below is another example: cells exposed to 1 second of 59 mW of 395 nm light and 1 second of 70 mW of 550 nm light every 5 minutes. The total duration is about 8 h.


Click on the image to see the movie

How can I tell whether photo-toxicity has occurred?

The best way to do this is to take a larger field-of-view image after your acquisition. Because photo-toxicity occurs only in the illuminated area, taking a picture that includes the adjacent cells provides a very good (but not perfect) control.

Effects of Photo-toxicity

If you step back, you can see that the damaged area is actually limited to the exposed area. Note: the recorded area is smaller than the exposed area.
Importantly, this is not a perfect control. A more ideal control would be another dish with cells maintained in the same conditions but without illumination. In fact, from the image above we can't conclude whether or not the affected circular area impacts the surrounding cells: it is possible that cell death occurring under the illuminated field of view produces and releases molecules that affect the surrounding area. Therefore the ideal control would be a totally separate dish.

What are the important factors when talking about photo-toxicity?

The amount of light, the wavelength, the surface illuminated, the duration of illumination, the repetition of the illumination.


Amount of light: shining a strong light does more damage than a dim light.

The wavelength: the energy carried by the light depends on the wavelength. Shorter wavelengths carry more energy and are more deleterious (photon energy E = h·c/λ, so a 395 nm photon carries about 1.4 times the energy of a 550 nm photon).

The area illuminated: shining the same amount of light onto a smaller area results in stronger damage.

The duration of illumination: shining light for 1 second does more damage than for 10 ms.

The repetition: shining 10 ms of light 20 times per minute does more damage than once per minute.


Empirically, you can determine whether photo-toxicity occurs by looking at your cells: if your cells are not dividing, or if they detach, then you may have photo-toxicity.

It is always good to acquire a larger field of view of the recorded area to make sure no photo-toxicity has occurred.

With a power meter you can also evaluate the amount of energy that your cells are comfortable with.

To do this measure the power at the objective using your regular imaging settings.

You will obtain a value in mW (milliwatts; 1 W = 1 J/s).

Then divide this value by the area of the field of view of your objective. This gives the irradiance, expressed in mW/cm2.
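A purely illustrative calculation (the field diameter below is an assumption, not a measurement from this post): with 70 mW at the objective and an illuminated field 0.5 mm in diameter,

area = π × (0.025 cm)² ≈ 2.0 × 10^-3 cm²
irradiance = 70 mW / 2.0 × 10^-3 cm² ≈ 36 000 mW/cm² (about 36 W/cm²)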

Finding the irradiance that is stress-free for your cells is key. To do this, you can continuously shine a defined irradiance and observe your cells for several hours. If your cells do not display any sign of photo-toxicity, you can increase the irradiance and continue until you find the maximum acceptable continuous irradiance.

This will give you a good idea of how robust your cells are.

Obviously there is some flexibility, since you will likely not expose your cells continuously. So you may go above the maximum acceptable continuous irradiance: this will stress your cells, but eventually they will recover. Only you can determine whether this amount of stress is experimentally acceptable and does not modify the biology of what you want to measure.

Here is another discrete example.

Cells were imaged for 100 ms with 70 mW of 550 nm light every 5 minutes. Cells are dividing faster than the photo-toxic effect that is occurring. Again, the best way to control for this is to take an overview image at the end of the acquisition and compare exposed cells to non-exposed cells.

Click on the image to see the movie






All-in-one software (acquisition, processing, analysis)

Proprietary

  • Zeiss
  • manufactured by for  . It comes in 2 flavours: Basic Research, Advanced Research plus some options Confocal, High-Content and Artificial intelligence (Ai) add-ons.
  • Leica
  • Evident / Olympus
  • PicoQuant
  • MBF BioScience
  • Visitron

Acquisition software

Free and Open-source

  • Micro-Manager, by Nico Stuurman and Mark Tsuchida

Visualization, Processing and Analysis software

Free and Open-source

Proprietary

Utilities

Expansion Microscopy

This is not strictly a microscopy technique. It is rather a sample preparation method in which the sample is embedded in a gel matrix, which is then digested and physically expanded. Conventional microscopy acquisition is then applied to the expanded sample.


See

https://pubmed.ncbi.nlm.nih.gov/25592419/

https://en.wikipedia.org/wiki/Expansion_microscopy