Let's say you have many images taken the same way from two different samples: one Control group and one Test group.
What should you do to analyse them?
The first thing I would do is open a random pair of images (one from the Control and one from the Test) and have a look at them...
[Figure: Control | Test]
Then I would normalize the brightness and contrast of each channel across images, so that the display settings are the same for both images.
// This macro was created by nstifani@gmail.com
// Feel free to reach out for any question or improvement suggestions
// This macro will look for Min and Max values on all open images and then apply the Min and the Max
// to adjust the display brightness of all opened images.

// Create Arrays to store image info
ListOfCh=newArray(nImages);
ListOfSlices=newArray(nImages);
ListOfFrames=newArray(nImages);

// Get the dimensions of the opened Images
for(ImageI=1; ImageI<nImages+1; ImageI++){
    selectImage(ImageI);
    getDimensions(width, height, channels, slices, frames);
    ListOfCh[ImageI-1]=channels;
    ListOfSlices[ImageI-1]=slices;
    ListOfFrames[ImageI-1]=frames;
}// end for ImageI

// Get some statistics
Array.getStatistics(ListOfCh, MinCh, MaxCh, MeanCh, StdDevCh);
Array.getStatistics(ListOfSlices, MinSlice, MaxSlice, MeanSlice, StdDevSlice);
Array.getStatistics(ListOfFrames, MinFrame, MaxFrame, MeanFrame, StdDevFrame);

// Process all channels using two functions
for(ChI=1; ChI<MaxCh+1; ChI++){
    MinAndMax=GetBnCValues(ChI);
    MinMin=MinAndMax[0];
    MaxMax=MinAndMax[1];
    ApplyBnC(ChI, MinMin, MaxMax);
}

function GetBnCValues(ChI){
    ListOfMin=newArray(nImages);
    ListOfMax=newArray(nImages);
    // Measure Min and Max for all open images
    for(ImageI=1; ImageI<nImages+1; ImageI++){
        selectImage(ImageI);
        Stack.setChannel(ChI);
        resetMinAndMax();
        //run("Enhance Contrast", "saturated=0.35");
        getMinAndMax(min, max);
        ListOfMin[ImageI-1]=min;
        ListOfMax[ImageI-1]=max;
    }// end for ImageI
    // Get Statistics
    Array.getStatistics(ListOfMin, MinMin, MaxMin, MeanMin, StdDevMin);
    Array.getStatistics(ListOfMax, MinMax, MaxMax, MeanMax, StdDevMax);
    return newArray(MinMin, MaxMax);
}// end of GetBnCValues function

function ApplyBnC(ChI, MinMin, MaxMax){
    for(ImageI=1; ImageI<nImages+1; ImageI++){
        selectImage(ImageI);
        Stack.setChannel(ChI);
        setMinAndMax(MinMin, MaxMax);
    }// end for ImageI
}// end of ApplyBnC function
//It might be easier to process images from a folder
//You would need to customize this path to your computer
InputDir="/Users/nicolas/Desktop/Input/";
// You could also get a prompt to select the InputDir
// InputDir = getDirectory("Choose a Directory ");

// And to save results in an output folder
// You would need to customize this path to your computer
//OutputPath="/Users/nicolas/Desktop/Output/";
//You could also create a new folder based on the name of the input folder
ParentPath=File.getParent(InputDir);
InputDirName=File.getName(InputDir);
OutputDir=ParentPath+File.separator+InputDirName+"_Results";
i=1;
while(File.exists(OutputDir)){
    OutputDir=ParentPath+File.separator+InputDirName+"_Results"+"-"+i;
    i++;
}
File.makeDirectory(OutputDir);
OutputPath=OutputDir+File.separator;

function GetBnCValues(ChI){
    ListOfMin=newArray(ListFile.length);
    ListOfMax=newArray(ListFile.length);
    // Measure Min and Max for all images of the folder
    for(Filei=0; Filei<ListFile.length; Filei++){
        FilePath=InputDir+ListFile[Filei];
        open(FilePath);
        ImageName=getTitle();
        //selectWindow(ImageName);
        Stack.setChannel(ChI);
        resetMinAndMax();
        run("Enhance Contrast", "saturated=0.35");
        getMinAndMax(min, max);
        ListOfMin[Filei]=min;
        ListOfMax[Filei]=max;
        selectWindow(ImageName);
        run("Close");
    }// end of for Filei
    // Get Statistics
    Array.getStatistics(ListOfMin, MinMin, MaxMin, MeanMin, StdDevMin);
    Array.getStatistics(ListOfMax, MinMax, MaxMax, MeanMax, StdDevMax);
    return newArray(MinMin, MaxMax);
}// End of GetBnCValues function

function ApplyBnC(ChI, MinMin, MaxMax){
    for(Filei=0; Filei<ListFile.length; Filei++){
        FilePath=InputDir+ListFile[Filei];
        open(FilePath);
        ImageName=getTitle();
        //selectWindow(ImageName);
        Stack.setChannel(ChI);
        setMinAndMax(MinMin, MaxMax);
        saveAs("Tiff", OutputPath+ImageName);
        selectWindow(ImageName);
        run("Close");
    }// end for Filei
}// End of ApplyBnC function

ListFile=getFileList(InputDir);
run("Set Measurements...", "area mean standard modal min centroid center perimeter bounding fit shape feret's integrated median skewness kurtosis area_fraction stack display redirect=None decimal=3");
run("Clear Results");

// It might be faster to work in batch mode
setBatchMode(true);

// Create Arrays to store image info
ListOfCh=newArray(ListFile.length);
ListOfSlices=newArray(ListFile.length);
ListOfFrames=newArray(ListFile.length);

for (Filei=0; Filei<ListFile.length; Filei++){
    FilePath=InputDir+ListFile[Filei];
    open(FilePath);
    ImageName=getTitle();
    //selectWindow(ImageName);
    getDimensions(width, height, channels, slices, frames);
    ListOfCh[Filei]=channels;
    ListOfSlices[Filei]=slices;
    ListOfFrames[Filei]=frames;
    selectWindow(ImageName);
    run("Close");
}// end for Filei

// Get some statistics
Array.getStatistics(ListOfCh, MinCh, MaxCh, MeanCh, StdDevCh);
Array.getStatistics(ListOfSlices, MinSlice, MaxSlice, MeanSlice, StdDevSlice);
Array.getStatistics(ListOfFrames, MinFrame, MaxFrame, MeanFrame, StdDevFrame);

// Process all channels using two functions
for(ChI=1; ChI<MaxCh+1; ChI++){
    MinAndMax=GetBnCValues(ChI);
    MinMin=MinAndMax[0];
    MaxMax=MinAndMax[1];
    ApplyBnC(ChI, MinMin, MaxMax);
}
Then I would look at each channel individually and apply the Fire LUT to better appreciate the intensities.
for(ImageI=1; ImageI<nImages+1; ImageI++){
    selectImage(ImageI);
    getDimensions(width, height, channels, slices, frames);
    for(ChI=1; ChI<channels+1; ChI++){
        Stack.setChannel(ChI);
        Property.set("CompositeProjection", "null");
        Stack.setDisplayMode("color");
        run("Fire");
    }
}
Finally, I would use the Synchronize Windows feature to navigate the two images at the same time:
ImageJ>Windows>Synchronize Windows
or in a macro:
run("Synchronize Windows");
[Figure: Control Ch1 | Test Ch1]
[Figure: Control Ch2 | Test Ch2]
To my eyes, Ch1 and Ch2 are slightly brighter in the Test condition.
Here we can't compare Ch1 to Ch2 because the display ranges are not the same. It seems that Ch2 is much weaker than Ch1, but that is not accurate. The best way to sort this out is to add a calibration bar to each channel (ImageJ>Tools>Calibration Bar). The calibration bar is non-destructive as it is added to the overlay. To "print" it on the image you can flatten it (ImageJ>Image>Overlay>Flatten); see the snippet below.
[Figure: Control Ch1 | Control Ch2]
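In a macro this could look like the following sketch; the Calibration Bar parameters are just example values recorded from the dialog, so adjust them to your display:

// Add a calibration bar to the overlay of the active image (example parameters)
run("Calibration Bar...", "location=[Upper Right] fill=White label=Black number=5 decimal=0 font=12 zoom=1 overlay");
// "Print" it on the image by flattening the overlay (Image>Overlay>Flatten)
run("Flatten");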
If you apply the same display range to Ch1 and Ch2, you can see that Ch1 is overall more intense while Ch2 has a few very strong spots.
[Figure: Control Ch1 | Control Ch2 with the same display as Ch1]
Looking more closely
In Ch1 we can see some low-level intensity and high-intensity circular foci, whereas in Ch2 there is a bean-shaped structure. In the example below the foci seem stronger in the Control than in the Test condition.
[Figure: Control Ch1 | Test Ch1]
[Figure: Control Ch2 | Test Ch2]
But obviously we need to do some quantification to confirm or refute these first observations.
The first way to address this is in bulk: by measuring, for example, the mean intensity of every image.
//It might be easier to process images from a folder
//You would need to customize this path to your computer
InputDir="/Users/nicolas/Desktop/Input/";
// You could also get a prompt to select the InputDir
// InputDir = getDirectory("Choose a Directory ");

// And to save results in an output folder
// You would need to customize this path to your computer
//OutputPath="/Users/nicolas/Desktop/Output/";
//You could also create a new folder based on the name of the input folder
ParentPath=File.getParent(InputDir);
InputDirName=File.getName(InputDir);
OutputDir=ParentPath+File.separator+InputDirName+"_Results";
i=1;
while(File.exists(OutputDir)){
    OutputDir=ParentPath+File.separator+InputDirName+"_Results"+"-"+i;
    i++;
}
File.makeDirectory(OutputDir);
OutputPath=OutputDir+File.separator;

//Then you can measure all values for all channels and all images
ListFile=getFileList(InputDir);
run("Set Measurements...", "area mean standard modal min centroid center perimeter bounding fit shape feret's integrated median skewness kurtosis area_fraction stack display redirect=None decimal=3");
run("Clear Results");

// It might be faster to work in batch mode
setBatchMode(true);

for (Filei=0; Filei<ListFile.length; Filei++){
    FilePath=InputDir+ListFile[Filei];
    open(FilePath);
    ImageName=getTitle();
    selectWindow(ImageName);
    run("Select None");
    run("Measure Stack...");
    selectWindow(ImageName);
    run("Close");
}

selectWindow("Results");
saveAs("Text", OutputPath+"Overall_Measurements.csv");
selectWindow("Results");
run("Close");
If all works fine you should have a CSV file you can open with your favorite spreadsheet application. This table gives one line per channel of each image, with all the available measurements computed on the whole image. Of course some measurements will be identical because the images were taken in the same way.
What to do with the file? Explore the data and see if there is any relevant information.
My approach would be to use a short R script to plot all the data and do some basic statistics.
# Prompt for the input CSV file from ImageJ measurements
Input<-file.choose()
# You must change this path to match your computer
Output<-"/Users/nicolas/Desktop/Output/"
#Output <- "C:\\Users\\stifanin\\OneDrive - Universite de Montreal\\Bureau\\Output\\"

data<-read.csv(Input)
require("ggpubr")

List<-strsplit(as.character(data$Label), ':')

#Add Filename and Group to the data
data$Filename<-as.factor(sapply(List, "[[", 1))
Group<-strsplit(as.character(data$Filename), '_')
data$Group<-as.factor(sapply(Group, "[[", 1))

# Make Ch a factor
data$Ch<-as.factor(data$Ch)

#Create a list of plots
ListofPlots<-list()
i=1
for(ColI in 3:(ncol(data)-2)){
  if(!is.factor(data[[ColI]])){
    for (ChI in 1:nlevels(data$Ch)){
      Graph<- ggboxplot(
        data[data$Ch==ChI,], x = "Group", y = colnames(data[ColI]),
        color = "Group", palette = c("#4194fa", "#db51d4"),
        add = "jitter",
        title=paste0(colnames(data[ColI])," of Channel ", ChI)
      )+stat_compare_means(method = "t.test")
      ListofPlots[[i]]<-Graph
      i=i+1
    }
  }
}

#Export the graphs
ggexport(plotlist = ListofPlots, filename = paste0(Output,"Graphs.pdf"))
This should give you a PDF file with one plot per page. You can scroll through it and look at the data; p-values from t-tests are indicated on the graphs. As you can see below, the mean intensity in both Ch1 and Ch2 is higher in the Test than in the Control. What does this mean?
It means that the average pixel intensity is higher in the Test condition than in the Control condition.
Other values that are significantly different:
Now we start to have some results and statistically relevant information about the data. The Test condition has a higher mean intensity for both Ch1 and Ch2 (integrated intensity, median, and raw integrated intensity all show the same result). This is surprising because I had the opposite impression while looking at the images with normalized intensities and the Fire LUT applied (see above). Another surprising result is that Control images have a higher maximum intensity than Test images, but only for Ch1. This is clearly seen in the picture above.
One thing that could explain these results is that the number of cells may differ between the Control and the Test images. If there are more cells in one condition then there are more stained pixels (and less background), and the mean intensity would be higher not because the signal itself is higher in each cell but because there are more cells...
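A toy back-of-the-envelope example (made-up numbers) illustrates this confound, written in R since we already use it for the analysis:

# Same per-cell signal in both images, but a higher cell density raises the whole-image mean
cell_mean <- 1000  # mean intensity inside a cell (arbitrary units)
bg_mean   <- 100   # mean intensity of the background
frac_ctrl <- 0.10  # fraction of pixels covered by cells, control image
frac_test <- 0.15  # fraction of pixels covered by cells, test image
mean_ctrl <- frac_ctrl*cell_mean + (1-frac_ctrl)*bg_mean  # 190
mean_test <- frac_test*cell_mean + (1-frac_test)*bg_mean  # 235
c(mean_ctrl, mean_test)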
To solve this we need to count the number of cells per image. This can be done manually or using intensity-based segmentation.
Looking at the images, Ch1 seems a good candidate for segmenting each cell.
// It might be easier to process all images in a folder
// You can specify this folder. You need to customize this path to your computer
InputDir="/Users/nicolas/Desktop/Input/";
// Or you can get a prompt to select the InputDir
// InputDir = getDirectory("Choose a Directory ");

// To save results in an output folder
// You can specify the OutputPath. You need to customize this path to your computer
//OutputPath="/Users/nicolas/Desktop/Output/";
//Or you can create a new folder based on the name of the input folder
//This is the method I prefer
ParentPath=File.getParent(InputDir);
InputDirName=File.getName(InputDir);
OutputDir=ParentPath+File.separator+InputDirName+"_Results";
i=1;
while(File.exists(OutputDir)){
    OutputDir=ParentPath+File.separator+InputDirName+"_Results"+"-"+i;
    i++;
}
File.makeDirectory(OutputDir);
OutputPath=OutputDir+File.separator;
File.makeDirectory(OutputPath+"Cropped Cells");
OutputCellPath=OutputPath+"Cropped Cells"+File.separator;
//// End of creating a new output folder

// Prepare some measurement settings and clean up the place
run("Set Measurements...", "area mean standard modal min centroid center perimeter bounding fit shape feret's integrated median skewness kurtosis area_fraction stack display redirect=None decimal=3");
run("Clear Results");

//Then you can start to process all images
ListFile=getFileList(InputDir);
run("ROI Manager...");

// It might be faster to work in batch mode
setBatchMode(true);

for (Filei=0; Filei<ListFile.length; Filei++){
    FilePath=InputDir+ListFile[Filei];
    open(FilePath);
    ImageName=getTitle();
    ImageNameNoExt=File.getNameWithoutExtension(FilePath);
    getDimensions(width, height, channels, slices, frames);

    //Remove the background (camera offset, then rolling-ball subtraction)
    selectWindow(ImageName);
    ImageNameCorrected=ImageNameNoExt+"_Corrected";
    run("Duplicate...", "title=&ImageNameCorrected duplicate");
    rename(ImageNameCorrected);
    selectWindow(ImageNameCorrected);
    run("Subtract...", "value=500");
    run("Subtract Background...", "rolling=22 sliding stack");

    //Adjust the display
    for(ChI=1; ChI<channels+1; ChI++){
        Stack.setChannel(ChI);
        resetMinAndMax();
        run("Enhance Contrast", "saturated=0.35");
        //setMinAndMax(500, 3000);
    }
    Stack.setChannel(1);
    Property.set("CompositeProjection", "Sum");
    Stack.setDisplayMode("composite");

    //Combine both colors for full-cell segmentation
    run("RGB Color");
    rename("RGB");
    run("Duplicate...", "title=Mask duplicate");
    run("16-bit");
    resetThreshold();
    setAutoThreshold("Otsu dark no-reset");
    setOption("BlackBackground", true);
    run("Convert to Mask");
    run("Open");
    run("Dilate");
    run("Fill Holes");
    run("Analyze Particles...", "size=2-12 circularity=0.60-1.00 exclude add");
    selectWindow("Mask");
    saveAs("TIFF", OutputPath+ImageNameNoExt+"_Cell-Mask.tif");
    MaskImage=getTitle();
    selectWindow(MaskImage);
    run("Close");

    //Save a control image of the segmentation
    selectWindow("RGB");
    run("Remove Overlay");
    run("From ROI Manager");
    saveAs("TIFF", OutputPath+ImageNameNoExt+"_Segmentation-Control.tif");
    ControlImage=getTitle();
    selectWindow(ControlImage);
    run("Close");

    //Measure each detected cell on the corrected image
    selectWindow(ImageNameCorrected);
    count = roiManager("count");
    for (i = 0; i < count; i++) {
        roiManager("select", i);
        run("Measure Stack...", "channels slices frames order=czt(default)");
    }
    RoiManager.select(0);
    selectWindow("Results");
    saveAs("Results", OutputPath+ImageNameNoExt+"_Measurements_Cells.csv");
    run("Clear Results");

    selectWindow(ImageNameCorrected);
    saveAs("TIFF", OutputPath+ImageNameNoExt+"_Background-removed.tif");
    ImageCorrected=getTitle();
    selectWindow(ImageCorrected);

    //Crop and save each cell, then save the ROIs
    NbROIs=roiManager("size");
    AllROIs=Array.getSequence(NbROIs);
    roiManager("Select", AllROIs);
    OutputCellName=OutputCellPath+ImageNameNoExt+"_";
    RoiManager.multiCrop(OutputCellName, " save tif");
    roiManager("Save", OutputPath+ImageNameNoExt+"_Cell-ROIs.zip");
    roiManager("Deselect");
    roiManager("Delete");
    selectWindow(ImageCorrected);
    run("Close");
    selectWindow(ImageName);
    run("Close");
}// end for Filei
This macro is getting a bit long, but to summarize, here are the steps:
1. Create an output folder next to the input folder.
2. Open each image and duplicate it.
3. Subtract the camera offset (here 500) and apply a rolling-ball background subtraction.
4. Combine the channels into an RGB image and threshold it (Otsu) to get a binary mask.
5. Clean the mask (Open, Dilate, Fill Holes) and detect the cells with Analyze Particles (size and circularity filters), adding them to the ROI Manager.
6. Save the mask and a segmentation-control image, measure each ROI on the corrected image, and save the measurements as a CSV file.
7. Save the background-corrected image, crop and save each cell, and save the ROIs.
A few notes:
The camera offset is a value the camera adds to avoid negative values due to noise. The best way to measure it is to take a dark image and use its mean intensity. If you don't have one in hand, you can choose a value slightly lower than the lowest pixel value found in your images (see the first sketch after these notes).
The rolling-ball background subtraction is a powerful tool to help with the segmentation.
The values for the Analyze Particles detection are the tricky part here. How did I choose them? I use the thresholded (binary) image and select the average guy: the spot that looks like all the others. I use the wand tool to create a selection and then do Analyze>Measure; this gives me a good estimate of the size (area) and the circularity. Then I select the tall/skinny and the short/not-so-skinny guys: I use the wand tool again to select the spots that would be my upper and lower limits. This gives me the range of spot size (area) and circularity (see the second sketch after these notes). This is a critical step that should be performed by hand prior to running the script, and you should do it on a few different images to make sure the values account for the variability you will encounter.
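For the camera offset, a minimal macro sketch, assuming you have a dark (no-light) image available; the path is a placeholder:

// Estimate the camera offset from a dark image: its mean intensity is the offset
open("/path/to/Dark-Image.tif"); // placeholder path
getStatistics(area, mean, min, max, std);
print("Estimated camera offset: "+mean);
// Without a dark image, pick a value slightly below the lowest pixel value of your real images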
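And a sketch of the by-hand calibration of the Analyze Particles limits on the binary mask; the click coordinates are placeholders for a spot you pick yourself:

// Measure a representative spot on the thresholded (binary) image
run("Set Measurements...", "area shape redirect=None decimal=3");
doWand(125, 87); // placeholder coordinates: click inside a typical spot
run("Measure");
print("Area: "+getResult("Area", nResults-1)+"  Circularity: "+getResult("Circ.", nResults-1));
// Repeat on the smallest/largest and least/most circular spots to set the
// size and circularity ranges used in Analyze Particles (here 2-12 and 0.60-1.00)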
Now it is time to check that the job was done properly.
Looking at the Control images, the detection isn't bad at all.
The only things missing are the isolated green foci seen below. Those cells look different: the focus is very strong but there is not much fluorescence elsewhere (no diffuse green and no red). I would discuss with the scientist to see if it is OK to ignore them. If not, I would need to change the threshold values and the detection parameters, but let's say it is fine for now.
So now you should have a list of files (images, ROIs, and CSV files). We will focus on the CSV files as they contain the number of cells we are looking for, plus a lot more information we can also use.
We will reuse the previous R script, adding a small part to merge all the CSV files from the input folder.
#Select the Input and the Output folder
Input <- "/Users/nicolas/Desktop/Input_Results-1/"
Output<-"/Users/nicolas/Desktop/Output/"

require("ggpubr")

#Get the list of CSV files
FileList<-list.files(path=Input, full.names=TRUE, pattern=".*csv")

#Merge the files into one big data frame
for (FileI in 1:length(FileList)){
  data<-read.csv(FileList[FileI])
  if (FileI==1){
    BigData<-data
  }else{
    BigData<-rbind(BigData,data)
  }
}

# Then the rest is the same as the previous script
data<-BigData
List<-strsplit(as.character(data$Label), ':')

#Add Filename and Group to the data
data$Filename<-as.factor(sapply(List, "[[", 1))
Group<-strsplit(as.character(data$Filename), '_')
data$Group<-as.factor(sapply(Group, "[[", 1))

# Make Ch a factor
data$Ch<-as.factor(data$Ch)

write.csv(data, paste0(Output,"Detected-Cells_All-Measurements.csv"), row.names = FALSE)

#Create a list of plots
ListofPlots<-list()
i=1
for(ColI in 3:(ncol(data)-2)){
  if(!is.factor(data[[ColI]])){
    for (ChI in 1:nlevels(data$Ch)){
      Graph<- ggboxplot(
        data[data$Ch==ChI,], x = "Group", y = colnames(data[ColI]),
        color = "Group", palette = c("#4194fa", "#db51d4"),
        add = "jitter",
        title=paste0(colnames(data[ColI])," of Channel ", ChI)
      )+stat_compare_means(method = "t.test")
      ListofPlots[[i]]<-Graph
      i=i+1
    }
  }
}

#Export the graphs
ggexport(
  plotlist = ListofPlots,
  filename = paste0(Output,"Graphs_individual.pdf"),
  verbose=TRUE
)
This script saves the merged data in a single CSV file. Using your favourite spreadsheet application you can summarize the data with a pivot table to get the number of cells per group (or do it directly in R; see the sketch after the table below).
Control | Test |
1225 | 1360 |
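If you prefer to stay in R, here is a minimal sketch of the same counts from the merged CSV, keeping only one channel so each cell is counted once:

# Count detected cells per group and per image from Detected-Cells_All-Measurements.csv
data <- read.csv("/Users/nicolas/Desktop/Output/Detected-Cells_All-Measurements.csv") # adjust the path
Ch1  <- data[data$Ch == 1, ]   # one row per detected cell in channel 1
table(Ch1$Group)               # number of cells per group
table(Ch1$Filename)            # number of cells per image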
There are slightly more cells in the Test than in the Control. If we look at the number of detected cells per image we can confirm that there are more cells in the Test condition than in the Control condition. Two Control images have fewer cells than the others; we can go back to the detection to check those.
Looking at the detection images we can confirm that these two Control images really do have fewer cells, so it is not a detection issue.
Together these results show that there are not substantially more cells in one condition than in the other.
On average, between 60 and 65 cells are detected per image, for a total of about 1200 cells detected per condition.
Then we can look at the graphs for what is statistically different; here is the short list:
Looking at p-values (statistical significance) is a good approach but it is not enough; we should also look at the biological significance of the numbers. For example, the roundness is statistically different between Control and Test, but the difference is really small. What does it mean biologically that the Test cells are a tiny bit less circular? In this specific case: nothing much. So we can safely set it aside and focus on more important things.
This can easily be done since we have generated a CSV file gathering all the data, Detected-Cells_All-Measurements.csv:
# Prompt for the input CSV file from ImageJ measurements
Input<-file.choose()
# You must change this path to match your computer
Output<-"/Users/nicolas/Desktop/Output/"

data<-read.csv(Input)
data$Ch<-as.factor(data$Ch)

require("ggpubr")

#Create a list of plots
ListofPlots<-list()
i=1
for(ColI in 3:(ncol(data)-2)){
  if(is.numeric(data[[ColI]]) || is.integer(data[[ColI]])){
    for (ChI in 1:nlevels(data$Ch)){
      Graph<-ggsummarystats(
        data[data$Ch==ChI,], x = "Group", y=colnames(data[ColI]),
        ggfunc = ggviolin, digits = 2,
        color = "Group", palette = c("#4194fa", "#db51d4"),
        summaries=c("n", "mean","sd","ci"),
        add="mean_sd",
        title=paste0(colnames(data[ColI])," of Channel ", ChI))
      ListofPlots[[i]]<-Graph
      i=i+1
    }
  }
}

#Export the graphs
ggexport(
  plotlist = ListofPlots,
  filename = paste0(Output,"Graphs_Individual_Descriptive Stats.pdf"),
  verbose=TRUE
)
Then, for all the variables that are statistically different between the Control and Test groups, we can have a closer look at the data.
As we have seen, Ch1 and Ch2 show defined structures that differ between the Control and Test groups. Our per-cell analysis does not provide enough detail on each structure. More specifically, it can't discriminate between the low-intensity Ch1 signal and the high-intensity foci. It also looks at the overall Ch2 fluorescence of the cell, while we can clearly identify a bean-shaped structure. The next step would be to segment the images in three ways: high Ch1 intensities would correspond to the foci, high Ch2 intensities would correspond to the bean-shaped structure, and low Ch1 intensities (all Ch1 intensities excluding the high-intensity foci) would relate to the overall nuclei. A rough sketch of this idea is given below.
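Here is a minimal, hedged macro sketch of what this three-way segmentation could look like, assuming a background-corrected two-channel image is already open; the threshold methods are illustrative choices that would need to be validated on real images:

ImageName=getTitle();

// 1) High Ch1 intensities -> foci (a more stringent method, e.g. "MaxEntropy", may work better)
selectWindow(ImageName);
run("Duplicate...", "title=Foci duplicate channels=1");
setAutoThreshold("Otsu dark");
setOption("BlackBackground", true);
run("Convert to Mask");

// 2) High Ch2 intensities -> bean-shaped structure
selectWindow(ImageName);
run("Duplicate...", "title=Bean duplicate channels=2");
setAutoThreshold("Otsu dark");
run("Convert to Mask");

// 3) Low Ch1 intensities -> whole nucleus minus the foci
selectWindow(ImageName);
run("Duplicate...", "title=Nuclei duplicate channels=1");
setAutoThreshold("Default dark"); // permissive threshold to capture the whole nucleus
run("Convert to Mask");
imageCalculator("Subtract", "Nuclei", "Foci"); // keep Ch1 pixels that are not part of the foci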