Let's say you have many images taken the same way from two different samples: one control group and one test group.


What should you do to analyse them?


The first thing I would do is open a random pair of images (one from the control and one from the test) and have a look at them...

Control                                                   Test


Then I would normalize the brightness and contrast for each channel across images so that the display settings are the same for both images.

ImageJ Macro to Equalize Brightness and Contrast for all opened images
// This macro was created by nstifani@gmail.com
// Feel free to reach out for any question or improvement suggestions
// This macro will look for Min and Max values on all open images and then apply the Min and the Max to adjust the display brightness of all opened images.
 
// Create Arrays to store image info
ListOfCh=newArray(nImages);
ListOfSlices=newArray(nImages);
ListOfFrames=newArray(nImages);
 
// Get the dimensions of the opened Images
for(ImageI=1; ImageI<nImages+1; ImageI++){
selectImage(ImageI);
getDimensions(width, height, channels, slices, frames);
ListOfCh[ImageI-1]=channels;
ListOfSlices[ImageI-1]=slices;
ListOfFrames[ImageI-1]=frames;
}// end for ImageI
 
 
// Get some statistics
Array.getStatistics(ListOfCh, MinCh, MaxCh, MeanCh, StdDevCh);
Array.getStatistics(ListOfSlices, MinSlice, MaxSlice, MeanSlice, StdDevSlice);
Array.getStatistics(ListOfFrames, MinFrame, MaxFrame, MeanFrame, StdDevFrame);
 
 
// Process all channels using two functions
for(ChI=1; ChI<MaxCh+1;ChI++){
MinAndMax=GetBnCValues(ChI);
MinMin=MinAndMax[0];
MaxMax=MinAndMax[1];
ApplyBnC(ChI,MinMin, MaxMax);
}
 
 
function GetBnCValues(ChI) {
ListOfMin=newArray(nImages);
ListOfMax=newArray(nImages);
 
// Measure Min and Max for all open images
for(ImageI=1; ImageI<nImages+1; ImageI++){
selectImage(ImageI);
Stack.setChannel(ChI);
resetMinAndMax();
//run("Enhance Contrast", "saturated=0.35");
getMinAndMax(min, max);
ListOfMin[ImageI-1]=min;
ListOfMax[ImageI-1]=max;
}// end for ImageI
 
// Get Statistics
Array.getStatistics(ListOfMin, MinMin, MaxMin, MeanMin, StdDevMin);
Array.getStatistics(ListOfMax, MinMax, MaxMax, MeanMax, StdDevMax);
return newArray(MinMin, MaxMax);
}
 
function ApplyBnC(ChI, MinMin, MaxMax) {
for(ImageI=1; ImageI<nImages+1; ImageI++){
selectImage(ImageI);
Stack.setChannel(ChI);
setMinAndMax(MinMin, MaxMax);
}// end for ImageI
}// end of ApplyBnC function

ImageJ Macro Equalize BnC for all images in a folder
//It might be easier to process images from a folder
// Declare the shared variables with "var" so the two functions defined below can access them
var InputDir; var OutputPath; var ListFile;

// You would need to customize this path to your computer
InputDir="/Users/nicolas/Desktop/Input/";
 
// You could also get a prompt to select the InputDir
// InputDir = getDirectory("Choose a Directory ");
 
 
// And to save results in an output folder
// You would need to customize this path to your computer
//OutputPath="/Users/nicolas/Desktop/Output/";
 
//You could also create a new folder based on the name of the input folder
ParentPath=File.getParent(InputDir);
InputDirName=File.getName(InputDir);
OutputDir=ParentPath+File.separator+InputDirName+"_Results";
i=1;
 
while(File.exists(OutputDir)){
OutputDir=ParentPath+File.separator+InputDirName+"_Results"+"-"+i;
i++;
}
File.makeDirectory(OutputDir);
OutputPath=OutputDir+File.separator;
 
 

function GetBnCValues(ChI) {
ListOfMin=newArray(ListFile.length);
ListOfMax=newArray(ListFile.length);
  
// Measure Min and Max for all open images
for(Filei=0; Filei<ListFile.length; Filei++){
FilePath=InputDir+ListFile[Filei];
open(FilePath);
ImageName=getTitle();
//selectWindow(ImageName);
Stack.setChannel(ChI);
resetMinAndMax();
run("Enhance Contrast", "saturated=0.35");
getMinAndMax(min, max);
ListOfMin[Filei]=min;
ListOfMax[Filei]=max;
selectWindow(ImageName); run("Close");
}// end of for FileI
// Get Statistics
Array.getStatistics(ListOfMin, MinMin, MaxMin, MeanMin, StdDevMin);
Array.getStatistics(ListOfMax, MinMax, MaxMax, MeanMax, StdDevMax);
return newArray(MinMin, MaxMax);
}// End of GetBnCValues function



function ApplyBnC(ChI, MinMin, MaxMax) {
for(Filei=0; Filei<ListFile.length; Filei++){
FilePath=InputDir+ListFile[Filei];
open(FilePath);
ImageName=getTitle();
//selectWindow(ImageName);
Stack.setChannel(ChI);
setMinAndMax(MinMin, MaxMax);
saveAs("Tiff", OutputPath+ImageName);
selectWindow(ImageName); run("Close");
}// end for Filei
}//end of function

 
ListFile=getFileList(InputDir);
run("Set Measurements...", "area mean standard modal min centroid center perimeter bounding fit shape feret's integrated median skewness kurtosis area_fraction stack display redirect=None decimal=3");
run("Clear Results");
 
// It might be faster to work in batch mode
setBatchMode(true);
 

// Create Arrays to store image info
ListOfCh=newArray(ListFile.length);
ListOfSlices=newArray(ListFile.length);
ListOfFrames=newArray(ListFile.length);
 
for (Filei=0; Filei<ListFile.length; Filei++){
FilePath=InputDir+ListFile[Filei];
open(FilePath);
ImageName=getTitle();
//selectWindow(ImageName);
getDimensions(width, height, channels, slices, frames);
ListOfCh[Filei]=channels;
ListOfSlices[Filei]=slices;
ListOfFrames[Filei]=frames;
selectWindow(ImageName); run("Close");
}// end for FileI


// Get some statistics
Array.getStatistics(ListOfCh, MinCh, MaxCh, MeanCh, StdDevCh);
Array.getStatistics(ListOfSlices, MinSlice, MaxSlice, MeanSlice, StdDevSlice);
Array.getStatistics(ListOfFrames, MinFrame, MaxFrame, MeanFrame, StdDevFrame);



// Process all channels using two functions
for(ChI=1; ChI<MaxCh+1;ChI++){
MinAndMax=GetBnCValues(ChI);
MinMin=MinAndMax[0];
MaxMax=MinAndMax[1];
ApplyBnC(ChI,MinMin, MaxMax);
}





Then I would look at each channel individually and apply the Fire LUT to better appreciate the intensities.


Macro Apply LUT Fire to all opened images
for(ImageI=1; ImageI<nImages+1; ImageI++){
selectImage(ImageI);
getDimensions(width, height, channels, slices, frames);
for(ChI=1; ChI<channels+1;ChI++){
Stack.setChannel(ChI);
Property.set("CompositeProjection", "null");
Stack.setDisplayMode("color");
run("Fire");
}
}


Finally I would use the Synchronize Windows feature to navigate the two images at the same time.

ImageJ>Analyze>Tools>Synchronize Windows

or in a macro

run("Synchronize Windows");



                           Control  Ch1                                         Test Ch1                        

                           Control  Ch2                                         Test Ch2                        


To my eye, both Ch1 and Ch2 are slightly brighter in the Test condition.

Here we can't compare Ch1 to Ch2 because the display ranges are not the same. It seems that Ch2 is much weaker than Ch1, but that is not accurate. The best way to sort this out is to add a calibration bar to each channel (ImageJ>Analyze>Tools>Calibration Bar). The calibration bar is non-destructive as it is added to the overlay. To "print" it on the image you can flatten it (ImageJ>Image>Overlay>Flatten).
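If you want to script these two steps, here is a minimal sketch. The option string for the calibration bar is only an example; the easiest way to get the exact parameters you like is to record them with Plugins>Macros>Record.

// Add a calibration bar to the overlay of the active image, then "print" it by flattening
// The options below are an example; record your own with Plugins>Macros>Record
run("Calibration Bar...", "location=[Upper Right] fill=White label=Black number=5 decimal=0 font=12 zoom=1 overlay");
run("Flatten");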





                               Control  Ch1                                         Control Ch2       


If you apply the same display settings to Ch1 and Ch2, you can see that Ch1 is overall more intense while Ch2 has a few very strong spots.


                         Control  Ch1                       Control Ch2 with the same display as Ch1



Looking more closely

In Ch1 we can see some low-level intensity and a bright circular focus, whereas in Ch2 there is a bean-shaped structure. In the example below the foci seem stronger in the control than in the test condition.


                       Control Ch1                                            Test Ch1




                       Control Ch2                                                    Test Ch2


But we obviously need to do some quantification to confirm or refute these first observations.


The first way to address this would be in bulk: for example, by measuring the mean intensity of all the images.



ImageJ Macro Collect Global Measurements
 

//It might be easier to process images from a folder
//You would need to customize this path to your computer
InputDir="/Users/nicolas/Desktop/Input/";

// You could also get a prompt to select the InputDir
// InputDir = getDirectory("Choose a Directory ");


// And to save results in an output folder
// You would need to customize this path to your computer
//OutputPath="/Users/nicolas/Desktop/Output/";

//You could also create a new folder based on the name of the input folder
ParentPath=File.getParent(InputDir); 
InputDirName=File.getName(InputDir);
OutputDir=ParentPath+File.separator+InputDirName+"_Results";
i=1;

while(File.exists(OutputDir)){
OutputDir=ParentPath+File.separator+InputDirName+"_Results"+"-"+i;
i++;
}
File.makeDirectory(OutputDir);
OutputPath=OutputDir+File.separator;


//Then you can measure all values for all ch and all images

ListFile=getFileList(InputDir); 
run("Set Measurements...", "area mean standard modal min centroid center perimeter bounding fit shape feret's integrated median skewness kurtosis area_fraction stack display redirect=None decimal=3");
run("Clear Results");

// It might be faster to work in batch mode
setBatchMode(true);


for (Filei=0; Filei<ListFile.length; Filei++){
FilePath=InputDir+ListFile[Filei];
open(FilePath);
ImageName=getTitle();
selectWindow(ImageName);
run("Select None");
run("Measure Stack...");
selectWindow(ImageName); run("Close");
}
selectWindow("Results");
saveAs("Text", OutputPath+"Overall_Measurements.csv");
selectWindow("Results"); run("Close");
   


If all goes well you should have a CSV file you can open with your favorite spreadsheet application. The table gives one line per channel of each image, with all available measurements computed on the whole image. Of course some measurements will be identical across files because the images were taken in the same way.

What to do with the file? Explore the data and see if there is any relevant information.

My approach would be to use a short R script to plot all the data and do some basic statistics.


R Plot ImageJ Global Measurements
# Prompt for the input CSV file from ImageJ measurements
Input<-file.choose()

# You must change this path to match your computer
Output<-"/Users/nicolas/Desktop/Output/"
#Output <- "C:\\Users\\stifanin\\OneDrive - Universite de Montreal\\Bureau\\Output\\"


data<-read.csv(Input)

require("ggpubr")

List<-strsplit(as.character(data$Label), ':')

#Add Filename and Group to the data
data$Filename<-as.factor(sapply(List, "[[", 1))
Group<-strsplit(as.character(data$Filename), '_')
data$Group<-as.factor(sapply(Group, "[[", 1))
# Make Ch a factor
data$Ch<-as.factor(data$Ch)

#Create a list of plots
ListofPlots<-list()
i=1;

for(ColI in 3:(ncol(data)-2)){
  if(!is.factor(data[[ColI]])){
    for(ChI in 1:nlevels(data$Ch)){
      Graph <- ggboxplot(
        data[data$Ch==ChI,], x = "Group", y = colnames(data[ColI]),
        color = "Group", palette = c("#4194fa", "#db51d4"),
        add = "jitter", title = paste0(colnames(data[ColI]), " of Channel ", ChI)
      ) + stat_compare_means(method = "t.test")
      ListofPlots[[i]] <- Graph
      i <- i+1
    }
  }
}

#Export the graphs
ggexport(
  plotlist = ListofPlots, filename = paste0(Output,"Graphs.pdf"))

This should give you a pdf file with one plot per page. You can scroll through it and look at the data. The p-values from the t-test are indicated on the graphs. As you can see below, the mean intensity in both Ch1 and Ch2 is higher in the test than in the control. What does this mean?



It means that the average pixel intensity is higher in the test condition than in the control condition.

Other values that are significantly different:

  • Mean Intensity of Ch1 and Ch2 (Control<Test)
  • Maximum Intensity of Ch1 (Control>Test): the brightest value in the image
  • Integrated Intensity of Ch1 and Ch2 (Control<Test): equal to Mean x Area
  • Median of Ch1 and Ch2 (Control<Test)
  • Skew of Ch1 (Control>Test): the third-order moment about the mean, which describes the distribution of intensities. If =0, intensities are symmetrically distributed around the mean; if <0, the distribution is asymmetric to the left of the mean (lower intensities); if >0, it is asymmetric to the right (higher intensities).
  • Raw Integrated Intensity of Ch1 and Ch2 (Control<Test): the sum of all pixel intensities


Now we start to have some results and statistically relevant information about the data. The test condition has a higher mean intensity for both Ch1 and Ch2 (integrated intensity, median and raw integrated intensity all show the same result). This is surprising because I had the opposite impression while looking at the images with normalized intensities and the Fire LUT applied (see above). Another surprising result is that control images have a higher maximum intensity than test images, but only for Ch1. This can be clearly seen in the pictures above.

One thing that could explain these results is that the number of cells may differ between the control and the test images. If there are more cells in one condition then more pixels are stained (and there is less background), and the mean intensity would be higher not because the signal itself is higher in each cell but simply because there are more cells...

To solve this we need to count the number of cells per image. This can be done manually or using segmentation based on intensity.

Looking at the image it seems that Ch1 is a good candidate to segment each cell.

ImageJ Macro Segment and Measure
// It might be easier to process all images in a folder
// You can specify this folder. You need to customize this path to your computer
InputDir="/Users/nicolas/Desktop/Input/";
 
// Or you can get a prompt to select the InputDir
// InputDir = getDirectory("Choose a Directory ");
 
 
// To save results in an output folder
// You can specify the OutputPath. You need to customize this path to your computer
//OutputPath="/Users/nicolas/Desktop/Output/";
 
//Or you can create a new folder based on the name of the input folder
//This is the method I prefer
ParentPath=File.getParent(InputDir);
InputDirName=File.getName(InputDir);
OutputDir=ParentPath+File.separator+InputDirName+"_Results";
i=1;
 
while(File.exists(OutputDir)){
OutputDir=ParentPath+File.separator+InputDirName+"_Results"+"-"+i;
i++;
}
File.makeDirectory(OutputDir);
OutputPath=OutputDir+File.separator;
File.makeDirectory(OutputPath+"Cropped Cells");
OutputCellPath=OutputPath+"Cropped Cells"+File.separator;

//// End of creating a new output folder

 
// Prepare some measurements settings and clean up the place
run("Set Measurements...", "area mean standard modal min centroid center perimeter bounding fit shape feret's integrated median skewness kurtosis area_fraction stack display redirect=None decimal=3");
run("Clear Results");


//Then you can start to process all images
 
ListFile=getFileList(InputDir);
 run("ROI Manager...");
// It might be faster to work in batchmode
setBatchMode(true);


 
for (Filei=0; Filei<ListFile.length; Filei++){
FilePath=InputDir+ListFile[Filei];
open(FilePath);
ImageName=getTitle();
ImageNameNoExt=File.getNameWithoutExtension(FilePath);
getDimensions(width, height, channels, slices, frames);

//Remove the Background
selectWindow(ImageName);
ImageNameCorrected=ImageNameNoExt+"_Corrected";
run("Duplicate...", "title=&mageNameCorrected duplicate");
rename(ImageNameCorrected);
selectWindow(ImageNameCorrected);
run("Subtract...", "value=500");
run("Subtract Background...", "rolling=22 sliding stack");

//Adjust the display
for(ChI=1; ChI<channels+1; ChI++){
Stack.setChannel(ChI);
resetMinAndMax();
run("Enhance Contrast", "saturated=0.35");
//setMinAndMax(500, 3000);
}
Stack.setChannel(1);
Property.set("CompositeProjection", "Sum");
Stack.setDisplayMode("composite");

//Combine both color for full cell segmentation
run("RGB Color");
rename("RGB");
run("Duplicate...", "title=Mask duplicate");
run("16-bit");
resetThreshold();
setAutoThreshold("Otsu dark no-reset");
setOption("BlackBackground", true);
run("Convert to Mask");
run("Open");
run("Dilate");
run("Fill Holes");
run("Analyze Particles...", "size=2-12 circularity=0.60-1.00 exclude add");
selectWindow("Mask");
saveAs("TIFF", OutputPath+ImageNameNoExt+"_Cell-Mask.tif");
MaskImage=getTitle(); 
selectWindow(MaskImage);run("Close");
selectWindow("RGB");
run("Remove Overlay");
run("From ROI Manager");
saveAs("TIFF", OutputPath+ImageNameNoExt+"_Segmentation-Control.tif");
ControlImage=getTitle(); 
selectWindow(ControlImage);run("Close");

selectWindow(ImageNameCorrected);
count = roiManager("count");
for (i = 0; i < count; i++) {
roiManager("select", i);
run("Measure Stack...", "channels slices frames order=czt(default)");
}
RoiManager.select(0);

selectWindow("Results");
saveAs("Results", OutputPath+ImageNameNoExt+"_Measurements_Cells.csv");
run("Clear Results");

selectWindow(ImageNameCorrected);
saveAs("TIFF", OutputPath+ImageNameNoExt+"_Background-removed.tif");
ImageCorrected =getTitle();
selectWindow(ImageCorrected);

NbROIs=roiManager("size");
AllROIs=Array.getSequence(NbROIs);
roiManager("Select", AllROIs);
OutputCellName=OutputCellPath+ImageNameNoExt+"_";
RoiManager.multiCrop(OutputCellName, " save tif");
roiManager("Save", OutputPath+ImageNameNoExt+"_Cell-ROIs.zip");
roiManager("Deselect");
roiManager("Delete");
selectWindow(ImageCorrected);run("Close");
selectWindow(ImageName);run("Close");

}// end for FileI



This macro is getting a bit long, so to summarize, here are the steps:

  • Open each image
  • Remove the offset from the camera (500 GV) and apply a rolling ball background subtraction
  • Create an RGB composite combining both channels
  • Convert this RGB image to 16-bit
  • Threshold the RGB image using the Otsu algorithm
  • Process the binary image to improve detection (Open, Dilate, Fill Holes)
  • Analyze particles to detect cells with size=2-12 and circularity=0.60-1.00
  • Add the detected particles to the ROI Manager
  • Save the mask as a control of the segmentation
  • Save an RGB image with the detection overlay as a control of the detection
  • Save the ROIs
  • Use the ROIs to collect all available measurements and save the results as a CSV
  • Save the image with the background removed
  • Crop each ROI from the background-removed image to isolate each cell

A few notes:

The camera offset is a value the camera adds to avoid negative values due to noise. The best way to measure it is to take a dark image and use its mean intensity. If you don't have one in hand, you can choose a value slightly lower than the lowest pixel value found in your images. Both options are sketched just below.
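Here is a minimal sketch of both options; the dark-image path is hypothetical and should be adapted to your computer.

// Option 1: the mean grey value of a dark image gives the camera offset
open("/path/to/dark-image.tif"); // hypothetical path, replace with a real dark frame
getStatistics(area, mean, min, max, std);
print("Estimated camera offset (dark image mean): "+mean);
close();

// Option 2: lowest pixel value across all opened images (measured on the current channel)
GlobalMin=999999999;
for(ImageI=1; ImageI<nImages+1; ImageI++){
selectImage(ImageI);
getStatistics(area, mean, min, max, std);
if(min<GlobalMin){GlobalMin=min;}
}// end for ImageI
print("Lowest pixel value found: "+GlobalMin+" -> choose an offset slightly below this value");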

The rolling-ball background subtraction is a powerful tool to help with the segmentation.

The values for the Analyze Particles detection are the tricky part here. How do I choose them? I use the thresholded (binary) image and I select the average guy: the spot that looks like all the others. I use the wand tool to create a selection and then I do Analyze>Measure. This gives me a good estimate of the size (area) and the circularity. Then I select the tall/skinny and the short/not-so-skinny guys: I use the wand tool again to select the spots that would be my upper and lower limits. This gives me the range of spot size (area) and circularity. This is really a critical step that should be performed by hand prior to running the script. You should do this on a few different images to make sure the values are good enough to account for the variability you will encounter.
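If you want to keep a trace of these hand measurements, here is a tiny helper to run after clicking a spot with the wand tool on the binary image; it simply measures the current selection and reports the last Results row.

// Run after selecting a representative spot with the wand tool on the binary image
run("Set Measurements...", "area shape redirect=None decimal=3");
run("Measure");
// Report the area and circularity of the last measured selection
print("Area: "+getResult("Area", nResults-1)+"  Circularity: "+getResult("Circ.", nResults-1));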

Now it is time to check that the job was done properly.

Looking at the control images, the detection isn't bad at all.


The only things missing are the individual green foci seen below. Those cells look different: the focus is very strong but there is not much fluorescence elsewhere (no diffuse green and no red). I might discuss with the scientist to see if it is OK to ignore them. If not, I would need to change the threshold values and the detection parameters, but let's say it is fine for now.

So now you should have a list of files (images, ROIs, and CSV files). We will focus on the CSV files as they contain the number of cells we are looking for, plus a lot more information we can also use.

We will reuse the previous R script but will add a little part to merge all the CSV files from the input folder.


Script R Merge CSV files and plot all data
#Select the Input and the Output folder
Input <- "/Users/nicolas/Desktop/Input_Results-1/"
Output<-"/Users/nicolas/Desktop/Output/"
require("ggpubr")
#Get the list of CSV files
FileList<-list.files(path=Input, full.names=TRUE, pattern=".*csv")

#Merge the file into one big data
for (FileI in 1:length(FileList)){
  data<-read.csv(FileList[FileI])
  if (FileI==1){
    BigData<-data
  }else{
    BigData<-rbind(BigData,data)
  }
  
}
# the rest is the same as the previous script
data<-BigData

List<-strsplit(as.character(data$Label), ':')

#Add Filename and Group to the data
data$Filename<-as.factor(sapply(List, "[[", 1))
Group<-strsplit(as.character(data$Filename), '_')
data$Group<-as.factor(sapply(Group, "[[", 1))
# Make Ch as a factor
data$Ch<-as.factor(data$Ch)

write.csv(data, paste0(Output,"Detected-Cells_All-Measurements.csv"), row.names = FALSE)

#Create a list of plots
ListofPlots<-list()
i=1;

for(ColI in 3:(ncol(data)-2)){
  if(!is.factor(data[[ColI]])){
    for(ChI in 1:nlevels(data$Ch)){
      Graph <- ggboxplot(
        data[data$Ch==ChI,], x = "Group", y = colnames(data[ColI]),
        color = "Group", palette = c("#4194fa", "#db51d4"),
        add = "jitter", title = paste0(colnames(data[ColI]), " of Channel ", ChI)
      ) + stat_compare_means(method = "t.test")
      ListofPlots[[i]] <- Graph
      i <- i+1
    }
  }
}

#Export the graphs
ggexport(
  plotlist = ListofPlots, filename = paste0(Output,"Graphs_individual.pdf"), 
  verbose=TRUE
)


This script saves the merged data in a single CSV file. Using your favourite spreadsheet application, you can summarize the data with a pivot table to get the number of cells per group.


Group      Number of cells
Control    1225
Test       1360

There are slightly more cells in the test than in the control. If we look at the number of detected cells per image, we can confirm that there are more cells in the test condition than in the control condition. Two control images have fewer cells than the others. We can go back to the detection images to check those.
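If you would rather count the cells directly in ImageJ, here is a minimal sketch that counts the detected cells per image from the "*_Cell-ROIs.zip" files saved by the segmentation macro. The folder path is an assumption; adapt it to your own output folder.

// Count the detected cells per image from the saved ROI zip files
ResultDir="/Users/nicolas/Desktop/Input_Results/"; // assumption: adapt to your output folder
ListFile=getFileList(ResultDir);
for(Filei=0; Filei<ListFile.length; Filei++){
if(endsWith(ListFile[Filei], "_Cell-ROIs.zip")){
roiManager("reset");
roiManager("Open", ResultDir+ListFile[Filei]);
print(ListFile[Filei]+": "+roiManager("count")+" cells");
}// end if
}// end for Filei
roiManager("reset");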


Looking at the detection images we can confirm that these two control images do have fewer cells, so it is not a detection issue.


Together, these results show that there is no substantial difference in the number of cells between the two conditions.

On average, between 60 and 65 cells are detected per image, for a total of more than 1200 cells detected per condition.


Then we can go through the graphs and look for what is statistically different; here is the short list:

  • Area Test>Control
  • Mean Ch2 Test>Control
  • Min Ch2 Test>Control
  • Max Ch1 Control>Test
  • Max Ch2 Test>Control
  • Perimeter Test>Control
  • Width and Height Test>Control
  • Major and Minor Axis Test>Control
  • Feret Test>Control
  • Integrated Density Ch1 and Ch2 Test>Control
  • Skew and Kurt Ch1 Control>Test
  • Raw Integrated density of Ch2 Test>Control
  • Roundness Control>Test


Looking at p-values (statistical significance) is a good approach but it is not enough. We should also look at the biological significance of the numbers. For example, the roundness is statistically different between control and test, but the difference is really small. What does it mean biologically that the test cells are a tiny bit less circular? In this specific case: nothing much. So we can safely set it aside and focus on more important things.


This can easily be done since we have generated a CSV file gathering all the data (Detected-Cells_All-Measurements.csv).


R Script Create Graph with Descriptive Statistics
# Prompt for the input CSV file from ImageJ measurements
Input<-file.choose()

# You must change this path to match your computer
Output<-"/Users/nicolas/Desktop/Output/"

data<-read.csv(Input)

data$Ch<-as.factor(data$Ch)



require("ggpubr")
#Create a list of plots
ListofPlots<-list()
i=1;

for(ColI in 3:(ncol(data)-2)){
  if(is.numeric(data[[ColI]]) || is.integer(data[[ColI]])){
    for (ChI in 1:nlevels(data$Ch)){
      Graph<-ggsummarystats(
        data[data$Ch==ChI,], x = "Group", y=colnames(data[ColI]),
        ggfunc = ggviolin, digits = 2,
        color = "Group", palette = c("#4194fa", "#db51d4"),
        summaries=c("n", "mean","sd","ci"), add="mean_sd", title=paste0(colnames(data[ColI])," of Channel ", ChI))
      ListofPlots[[i]]<-Graph
      i=i+1
    }
  }
}

#Export the graphs
ggexport(
  plotlist = ListofPlots, filename = paste0(Output,"Graphs_Individual_Descriptive Stats.pdf"), 
  verbose=TRUE
)



Then, for all the variables that are statistically different between the control and test groups, we can have a closer look at the data.

  • Area: Control 5.03 µm²; Test 5.35 µm²; p-value = 1.7e−07
    This is a relatively small increase (6%). Brought back to the diameter it is even smaller (a 3% increase). Yet the most relevant value in my view is the volume, because cells are spheres in real life: this corresponds to about a 10% increase in cell volume (see the quick sanity check after this list). This is relevant information: cells in the test condition are about 10% bigger than in the control condition.
  • Mean Ch2: Control 1078 GV; Test 1197 GV; p-value < 2.2e−16
    This means that the bean-shaped structure labelled by Ch2 is about 10% brighter in the test cells than in the controls. Yet the segmentation was performed on the full cell using Ch1 as a proxy. This result calls for a segmentation based on Ch2: are the bean-shaped structures bigger or brighter?
  • Min Ch2: Control 198 GV; Test 213 GV; p-value = 1e−05
    The difference in the minimum of Ch2 represents about a 7.5% increase. A higher minimum suggests a global increase in the Ch2 fluorescence rather than a redistribution (i.e. clustering). The segmentation based on Ch2 should sort out which option is really occurring.
  • Max Ch1: Control 10111 GV; Test 8393 GV; p-value < 2.2e−16
    This difference represents a 17% decrease in the maximum intensity of Ch1. Remember that Ch1 has two fluorescence levels: a low level within the nuclei and an intense focus. Here the maximum represents the focus only, so we can conclude that the foci are less intense in the test compared to the control.
  • Max Ch2: Control 3522 GV; Test 3761 GV; p-value = 0.00035
    Here we have the opposite situation: in the bean-shaped structure the maximum intensity is higher in the test condition than in the control.
  • Perimeter: Control 8.45 µm; Test 8.74 µm; p-value = 1.7e−08
    Here we have another measurement of the size of the detected cells. From the area we would expect about a 3.1% perimeter increase; here the increase is about 3.5%. This is consistent, meaning the increases in area and perimeter agree.
  • Width and Height: Control 2.49 µm; Test 2.57 µm; p-value = 8e−07
    Again, another measurement of the size of the detected cells, which are larger in the test than in the control.
  • Major and Minor axis length: Control 2.73 µm; Test 2.84 µm; p-value = 2.7e−10
    Similar to the above, except that instead of measuring the width and height of a rectangle around the cell, it measures an ellipse fitted to the cell. In my view this is a more relevant variable than the previous one, but since all the data converge to slightly larger cells in the test than in the control, we can focus on the most interesting ones (area or perimeter).
  • Feret: Control 2.95 µm; Test 3.05 µm; p-value = 5.1e−10
    This is the maximum distance between two points of the detected cell. It relates to the shape of the cell.
  • Integrated Density Ch1: Control 11296 GV; Test 11886 GV; p-value
    This is the product of the area and the mean intensity. We know that the area is larger but not the mean intensity. Yet the integrated density of Ch1 is slightly larger (5%) in test vs control.
  • Integrated Density Ch2: Control 5570 GV; Test 6548 GV; p-value < 2.2e−16
    Here we have a 17% increase, which reflects the combined increase in cell size and Ch2 mean intensity.
  • Skew of Ch1: Control 2.47; Test 1.77; p-value < 2.2e−16
    The skewness refers to the asymmetry of the intensity distribution. Since the values are above 0, the distribution is skewed towards the higher intensities. Since this number is lower in the test cells, the distribution of Ch1 intensity is less asymmetric there. This is easily explained by looking at Ch1, which has a low intensity within the nuclei and a strong focus: the high focus intensities produce a higher skew value. Since the skew is closer to 0 in the test condition, either the foci are less intense or the Ch1 fluorescence is more evenly distributed. Since the mean intensity of Ch1 neither increases nor decreases, it is likely that the fluorescence is more evenly distributed in the test condition than in the control. In other words, in the control condition the foci appear brighter by concentrating some of the low-level fluorescence present in the nuclei, whereas the test condition cannot make very bright foci and its Ch1 fluorescence is spread more evenly.
  • Kurt of Ch1: Control 11.3; Test 7.24; p-value < 2.2e−16
    The kurtosis accounts for the flatness of the distribution: if Kurt = 0 the distribution is normal, if <0 it is flatter than normal, and if >0 it is more peaked. In this case the data are very peaked in both control and test. This is quite interesting, especially when looking at the violin plots below. In the control we can see two peaks, one close to 0 and one close to 11. This means that there are two kinds of cells in the control condition: those with a focus and those with only low-level, distributed Ch1 fluorescence. In the test condition the kurtosis values are lower and the two peaks are so close that they almost merge.


  • Raw Integrated Density of Ch2: Control 313299; Test 368339; p-value < 2.2e−16
    This is the sum of the pixel intensities. It is higher in the test cells than in the control cells. This can be because the intensity is higher and/or the area is larger; in this case we have seen previously that the area is larger. Interestingly, the raw integrated density is higher for Ch2 but the same for Ch1.

  • Roundness: Control 0.84; Test 0.83; p-value = 0.0032
    As said before, even though the difference is statistically significant it is minimal, so we can safely set it aside and focus on other measurements.
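As mentioned in the Area item above, here is a quick sanity check of the area-to-diameter-to-volume conversion, written as a tiny ImageJ macro (plain arithmetic, assuming roughly spherical cells).

// Convert the measured area change into diameter and volume changes
AreaControl=5.03; AreaTest=5.35; // mean cell areas in um2 from the list above
DiameterRatio=sqrt(AreaTest/AreaControl); // area scales with diameter^2 -> ~1.031 (~3%)
VolumeRatio=pow(DiameterRatio, 3); // volume scales with diameter^3 -> ~1.10 (~10%)
print("Diameter increase: "+d2s(100*(DiameterRatio-1),1)+"%  Volume increase: "+d2s(100*(VolumeRatio-1),1)+"%");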


As we have seen, it seems that Ch1 and Ch2 label defined structures that differ between the control and test groups. Our per-cell analysis does not provide enough detail on each structure. More specifically, it cannot discriminate between the low-intensity Ch1 signal and the high-intensity foci. It also looks at the overall Ch2 fluorescence in the cell, while we can clearly identify a bean-shaped structure. The next step would be to segment the images in three ways: high Ch1 intensities would correspond to the foci, high Ch2 intensities would correspond to the bean-shaped structure, and low Ch1 intensities (all Ch1 intensities excluding the high intensities from the foci) would relate to the overall nuclei.