With each passing day, continuous advancements are being made in the studies that cutting-edge instruments can perform and in the complexity of the biological samples that are being examined. The number of characteristics that can be scanned and the volume of information that is available for researchers to gather only seem to grow. Perhaps one of the most helpful uses for pattern-recognition software is in cellular imaging, where microscopy software advances are helping laboratory leaders make better-informed decisions on 3D imaging tools by lowering barriers, improving workflows, and reducing equipment lifecycle costs.
Aside from the time devoted to setup and comparative analysis, many hours may be needed to perform 3D imaging experiments, which commonly generate large volumes of images and data. Live-cell time-lapse studies take even longer, often occurring over multiple days with multiple time points, thereby amplifying data volume as well. These aspects of 3D imaging, along with some technical challenges in 3D cell preparation, have limited wider adoption of 3D assays, especially in high-throughput screening formats. Fortunately, microscopy is quickly evolving to support researchers as they drive discovery in the life sciences.
There are three primary areas in which targeted imaging acquisition software is empowering scientists and improving outcomes. These are: enabling vastly more efficient imaging and scanning; aiding in higher-magnification scanning and the imaging of multiple characteristics; and reducing the overall burden of data storage requirements and computing power during and after experiments.
Faster, more accurate scanning
Time spent on experiments in the laboratory can be roughly divided into phases of preparation, observation, and analysis. With better software, we can significantly expedite the latter two.
Once an experiment is properly prepared, imaging parameters must be set up carefully to capture the phenotypic characteristics of interest, such as size, intensity of fluorescent staining, and expression of specific biomolecules or markers, to name a few. Some of the challenges that arise in 3D imaging stem from sample preparation and treatment. In particular, 3D objects do not always settle in the same position in every well, making it difficult to automate the same acquisition field of view across an entire plate. Furthermore, when vast numbers of cells or objects must be observed, imaging can take several hours.
These challenges have traditionally limited 3D imaging to low-throughput, high-resolution imaging, or to high-throughput, low-magnification imaging to reduce the number of acquired images, to ensure that all of the objects are captured, and to decrease imaging time. Equipped with specialty targeted imaging software, such as our QuickID feature, researchers no longer need to choose between these two imaging modalities. Researchers can now image hundreds of wells at low magnification in a single scan, detect objects quickly with preconfigured or fully customizable analysis options, save the positions of the objects, and then automatically switch to high-magnification image acquisition on only the relevant objects.
In a 96-well plate format, for example, the first phase of this two-phase acquisition is an initial 2× magnification scan across the entire plate in a matter of seconds. Can we quantify the improvement in efficiency? Yes. A quick software-assisted acquisition to pinpoint only objects of interest rather than traditional scanning reduces researcher time spent on this portion of the workflow by as much as 90%.
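As a rough illustration, the two-phase workflow described above can be sketched in Python. Everything here is hypothetical: the scan data, the size threshold, and the function names are stand-ins for the idea of "detect at low magnification, revisit at high magnification," not the actual QuickID API.

```python
# Hypothetical sketch of a two-phase targeted acquisition.
# The instrument interface, object detection, and thresholds are
# all illustrative assumptions, not a real vendor API.

def find_targets(low_mag_scan, min_diameter_um=200):
    """Phase 1: detect objects of interest in a fast low-mag scan.

    low_mag_scan maps well IDs to lists of detected objects, each with
    (x, y) stage coordinates and an estimated diameter in micrometers.
    Only objects above the size threshold are kept as targets.
    """
    targets = []
    for well, objects in low_mag_scan.items():
        for obj in objects:
            if obj["diameter_um"] >= min_diameter_um:
                targets.append((well, obj["x"], obj["y"]))
    return targets

def acquire_high_mag(targets):
    """Phase 2: revisit only the saved positions at high magnification.

    A real system would drive the stage and camera here; this sketch
    just reports which acquisitions would run.
    """
    return [f"20x image at {well} ({x:.0f}, {y:.0f})"
            for well, x, y in targets]

# Example: a 2x pre-scan found three objects in two wells, but only
# two are large enough to be spheroids of interest.
scan = {
    "A1": [{"x": 120.0, "y": 340.0, "diameter_um": 250},
           {"x": 400.0, "y": 90.0, "diameter_um": 40}],   # debris, skipped
    "A2": [{"x": 210.0, "y": 515.0, "diameter_um": 310}],
}
targets = find_targets(scan)
print(acquire_high_mag(targets))
```

The key saving is visible in the example: the debris object never triggers a high-magnification acquisition, so only the two relevant positions are revisited.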
Accuracy, too, can be improved alongside efficiency. Traditional automated imaging applies a user-defined field of view (that is, one or multiple fields of view) across a plate. The challenge is that this approach may not capture all the objects in every well, leading to inconsistencies in data and at times requiring researchers to reimage wells where objects were missed in the initial acquisition. Advanced, flexible targeted imaging software alleviates this risk and expedites deeper scans of only the objects of interest.
Complex characteristic analysis and acquisition
Following the initial 2× objective acquisition, objects of interest are identified, and their X and Y coordinates are used to automatically obtain images at the required wavelengths, Z-stacks, or time points with 20× or higher magnification objectives. Not only does this smart imaging technology spare researchers the tedium of returning to each object of interest, it also expedites imaging. In a drug-screening experiment, this deeper imaging process was 10 times faster than traditional automated imaging of spheroids at high magnification.
In 3D imaging, fine-tuning an instrument to image multiple phenotypic characteristics in high detail poses some challenges, but there is growing interest in doing so. New software innovations are alleviating the issue, closing the complexity gap between 2D and 3D imaging. Researchers can now add more colors and image greater numbers of 3D objects at high resolution simultaneously, enabling a better understanding of the relationships between cell layers and subpopulations of cells and allowing assessment of multiple parameters down to the subcellular level, such as the integrity of the cytoskeleton, mitochondria, and more.
Flexible software such as QuickID targeted imaging lets researchers go even further by combining time-lapse and streaming modes: in live-cell imaging experiments that track cell change or death over time, such as anticancer compound screening, multiple imaging sequences are necessary. With the ability to accurately home in on objects of interest and capture images at rapid frame rates, targeted imaging with streaming lets researchers observe other, more complex attributes, such as ion flow. Calcium oscillations in cardiac spheroids, for example, may indicate whether the spheroids are beating properly.
Reducing data storage and processing power needs
As scientists collect and store vast amounts of data in their search for cures, as well as for new discoveries and deeper insights in the life sciences, data storage needs become more than a talking point—they represent a hard cost with which laboratories must contend. It stands to reason that imaging efficiency is desirable because it speeds work and reduces costs. But it can also have the additional benefit of reducing data volume. For example, preliminary scans at a lower resolution of 2× across all wells can home in on only relevant cells, ceasing any further acquisition of nonrelevant images or objects and drastically reducing overall data volume for an experiment. A review of recent targeted acquisition experiments shows up to an 80% reduction in data volume.
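As a back-of-the-envelope illustration of where those savings come from, the arithmetic below compares tiling every well at high magnification against a targeted run. All of the numbers (fields per well, frame size, hit rate) are assumptions for the example and will vary by instrument and experiment.

```python
# Illustrative data-volume comparison; every figure here is assumed.
wells = 96
fields_per_well_20x = 9   # 3x3 tile to cover a well at 20x (assumed)
image_size_mb = 8         # one camera frame, assumed

# Traditional approach: tile every well at high magnification.
traditional_mb = wells * fields_per_well_20x * image_size_mb

# Targeted approach: one 2x overview frame per well, then a single
# high-mag frame only where an object of interest was found
# (assume roughly 60% of wells contain one).
overview_mb = wells * image_size_mb
targeted_hits = int(wells * 0.6)
targeted_mb = overview_mb + targeted_hits * image_size_mb

savings = 1 - targeted_mb / traditional_mb
print(f"{traditional_mb} MB vs {targeted_mb} MB ({savings:.0%} less)")
# → 6912 MB vs 1224 MB (82% less)
```

With these assumed numbers the reduction lands in the same range as the figure cited above; Z-stacks and time points scale both sides, but the targeted run always skips the empty and irrelevant fields.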
Likewise, the burden on computer processing power is reduced, a major benefit for laboratories performing metadata mining on large experiments or across multiple experiments during data analysis. Practically speaking, this can mean extending the lifetimes of existing computer processors and equipment as well as realizing the savings that go along with deferred upgrades.
Cost of opportunity, or opportunity cost?
Advanced, novel targeted imaging software is reinventing microscopy and enhancing efficiency in several ways: smoother workflows and higher speeds; improvements in capturing complexity and scanning multiple characteristics; and reduced computational burdens. Software innovations in microscopy are “flipping the script” on how laboratory leaders can and should think about the lifecycle costs of their equipment and how they staff and resource experiments.
Matthew Hammer is a cellular imaging scientist at Molecular Devices.