Jan 1, 2010 (Vol. 30, No. 1)

Bioinformatic Tools Drive High-Content Analysis

Robust Computational Resources Make It Possible for Scientists to Fully Utilize Experimental Data

  • Multivariate Analysis

    A cluster of human hepatocellular carcinoma cells with a genetically encoded peroxisome-targeting green fluorescent reporter (Biomanufacturing Research Institute and Technology Enterprise, North Carolina Central University)

    In addition to producing a huge volume of data, image analysis for cell-based assays can be highly complex. Phenotypic changes in cells are often subtle and involve an overwhelming number of parameters. Jonathan Sexton, Ph.D., assistant professor at the Biomanufacturing Research Institute and Technology Enterprise at North Carolina Central University, is tackling this problem with multivariate analysis.

    Dr. Sexton’s team has developed a cell-based assay for peroxisome biogenesis, which is of significance in type 2 diabetes. The assay uses a fluorescent reporter to monitor changes in peroxisomes under the influence of drug compounds.

    The resulting data had a low signal-to-noise ratio, and it was difficult to identify phenotypic changes in the peroxisomes. Using multivariate analysis, Dr. Sexton’s team was able to normalize the data for cell size and DNA content in an unbiased way.
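    One common way to carry out this kind of covariate normalization is to regress each measured feature on the nuisance covariates (here, cell size and DNA content) and keep only the residuals. The sketch below uses ordinary least squares on toy data; the function name, feature counts, and data are illustrative assumptions, not the group's actual pipeline.

```python
import numpy as np

def normalize_features(features, covariates):
    """Remove the linear contribution of nuisance covariates
    (e.g., cell size, DNA content) from each feature column
    by ordinary least squares, keeping only the residuals."""
    # Design matrix: covariates plus an intercept column
    X = np.column_stack([covariates, np.ones(len(covariates))])
    # Least-squares fit of every feature on the covariates at once
    beta, *_ = np.linalg.lstsq(X, features, rcond=None)
    # Residuals are the covariate-adjusted feature values
    return features - X @ beta

# Toy data: 100 cells, 5 features driven by 2 covariates (size, DNA content)
rng = np.random.default_rng(0)
cov = rng.normal(size=(100, 2))
feat = cov @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(100, 5))
adjusted = normalize_features(feat, cov)
print(adjusted.std(axis=0))  # residual variation after adjustment
```

    Because the residuals of a least-squares fit are orthogonal to the fitted covariates, the adjusted features carry no remaining linear dependence on cell size or DNA content.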

    “We identified as many parameters about this one object as we could—shape, size, total mass, how many vertices, nearest neighbor distances and on and on until we had 30 or 40 parameters,” Dr. Sexton explains. His group also included data for negative and positive controls, and ran a principal component analysis (PCA), which organizes variables in decreasing order of variability and is often used to create predictive models. “PCA allowed us to capture the relevant signal, reject the natural variability, and not interject any human bias in choosing what these parameters were.”
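    A PCA of this kind can be sketched in a few lines: center the per-cell feature matrix and take its singular value decomposition, so the leading components capture the directions of greatest variance. The feature counts and toy data below are made up for illustration and stand in for the 30-to-40-parameter matrix described above.

```python
import numpy as np

def pca(data, n_components=2):
    """Principal component analysis via SVD: center the data,
    then project onto the directions of greatest variance."""
    centered = data - data.mean(axis=0)
    # SVD of the centered matrix; rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ Vt[:n_components].T    # per-cell component scores
    explained = S**2 / np.sum(S**2)            # fraction of variance per axis
    return scores, explained[:n_components]

# Toy feature matrix: 200 cells x 35 morphological parameters, with most
# variance concentrated in a few latent directions plus small noise
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 3))
features = latent @ rng.normal(size=(3, 35)) + 0.05 * rng.normal(size=(200, 35))
scores, explained = pca(features, n_components=3)
print(explained)  # leading components dominate the variance
```

    Projecting every cell onto a handful of components in this way compresses dozens of correlated measurements into a few axes that separate controls from treated phenotypes, without a human choosing which parameters matter.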

    High-content screening has benefited greatly from a close marriage with bioinformatics. Robust computational resources are allowing scientists to take advantage of the full range of data produced by experiments. In addition, a new breed of programmer-scientist is adapting sophisticated modeling and statistical analysis methods for the life sciences.

    These methods not only process data for completed experiments, but in true systems-biology fashion become a tool for hypothesis development and experimental design. The development of bioassay ontologies and the semantic web helps make the scientific literature smarter for the benefit of the whole community.

