June 15, 2013 (Vol. 33, No. 12)
Josh P. Roberts
Whether it’s dispensing reagents, washing plates, or plating out bacteria, moving things around is at the core of biomedical R&D and production.
The task is increasingly done in high-throughput using robotics and automation, which take care of everything from the individual steps to coordination of the processes.
How all of this can be done better, smarter, and more accurately, and how the individual steps can be assembled into productive workflows, was discussed at Select Biosciences’ “European Lab Automation” conference, held earlier this month in Hamburg. For several speakers, the starting point was a common vocabulary with which to begin the discussion.
A common language helps to facilitate discussion and move a field forward. Stefan Bammesberger, R&D engineer at the laboratory for MEMS applications at IMTEK, University of Freiburg in Germany, felt that in order to compare liquid handlers, he first needed to establish a way of categorizing them, and to find a vocabulary to discuss their performance.
He first separated dispensing technologies into those that use contact to dislodge the dispensed liquid and those that do not. Because contact is a potential source of cross-contamination, a method that instead ejects the liquid may be preferable, especially for biomedical applications. Among the noncontact dispensers, two types of technology predominate in the nanoliter-to-microliter range: valve-based and positive displacement, each with its own advantages and disadvantages.
Having delineated dispensing mode, Bammesberger set out to establish a terminology that could describe performance characteristics. Discussions always come down to precision (how close dispensed volumes are relative to each other) and accuracy (how close the dispensed volumes are relative to the target volume).
Yet, “when you look at how the different manufacturers characterize volumetric precision and accuracy, everybody does it somehow differently from the others, so it’s not really comparable. No standard has prevailed in the industry, and it’s really hard to compare the performance of different dispensing systems,” he said. What is needed are parameters generic enough to apply across many different applications, yet meaningful and objective enough to allow very different liquid handlers to be compared.
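The distinction between the two metrics can be made concrete with a short sketch. The volumes below are hypothetical, and the formulas (coefficient of variation for precision, mean deviation from target for accuracy) are one common convention, not any particular manufacturer’s:

```python
# Illustrative only: precision vs. accuracy for a series of dispensed
# aliquots, using hypothetical volumes in microliters.

import statistics

def precision_cv(volumes):
    """Precision: spread of the dispenses relative to their own mean (CV, %)."""
    return statistics.stdev(volumes) / statistics.mean(volumes) * 100.0

def accuracy_error(volumes, target):
    """Accuracy: mean deviation from the target volume, as % of target."""
    return (statistics.mean(volumes) - target) / target * 100.0

dispenses = [0.98, 1.01, 0.99, 1.02, 1.00]  # hypothetical aliquots, target 1.0 uL
print(f"precision (CV):  {precision_cv(dispenses):.2f}%")
print(f"accuracy (bias): {accuracy_error(dispenses, 1.0):.2f}%")
```

A dispenser can score well on one metric and poorly on the other: tightly clustered aliquots that all miss the target are precise but inaccurate.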
Take filling a 384-well plate with an eight-channel dispenser. Usually manufacturers fill a plate and measure the volumes dispensed, but this doesn’t give insights into where the deviation from the ideal might come from.
“So I try to break it down into the very basic elements you do with a liquid handler, the most basic of which is the intra-run approach, where you have just one channel doing one thing—dispensing aliquot after aliquot,” Bammesberger explained. “Let it do the same thing several times and measure how it deviates.” He terms these “intra-run” measurements.
Building up from there, measurements looking at the reproducibility between runs are “inter-run.” On the other hand, “tip-to-tip” measurements look at deviations made when dispensing from multiple channels.
Bammesberger went on to demonstrate the applicability of his categorizations and terminology, using them as the basis with which to evaluate five commercial noncontact liquid dispensers. His sampling found that intra-run and inter-run CVs tend to be significantly smaller than the corresponding tip-to-tip CVs for a given target volume and liquid handler, prompting him to suggest that for high-precision applications, it may be beneficial to use only a single tip of a liquid handler.
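The three measurement levels can be sketched as follows. The data and the grouping of it are hypothetical; the point is only how the same CV calculation is applied to different slices of the dispensing record:

```python
# A sketch (hypothetical data) of the three measurement levels described
# above. volumes[run][tip] holds the aliquots dispensed by one tip in one
# run; all values in microliters.

import statistics

def cv(values):
    """Coefficient of variation, in percent."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

volumes = {
    0: {0: [1.00, 1.01, 0.99], 1: [1.05, 1.06, 1.04]},
    1: {0: [0.98, 0.99, 1.00], 1: [1.03, 1.05, 1.04]},
}

# Intra-run: one tip, one run, aliquot after aliquot.
intra = cv(volumes[0][0])

# Inter-run: the same tip's mean volume compared across runs.
inter = cv([statistics.mean(volumes[run][0]) for run in volumes])

# Tip-to-tip: mean volumes of different tips within one run.
tip_to_tip = cv([statistics.mean(aliquots) for aliquots in volumes[0].values()])

print(f"intra-run CV:  {intra:.2f}%")
print(f"inter-run CV:  {inter:.2f}%")
print(f"tip-to-tip CV: {tip_to_tip:.2f}%")
```

In this toy data set, as in Bammesberger’s sampling, the tip-to-tip CV dwarfs the intra-run and inter-run CVs, because a small systematic offset between channels contributes variation that repeated dispensing from one channel never sees.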
Learn to Work Together
Joe Liscouski, executive director of the Institute for Laboratory Automation, also promotes the benefits of communication and standardization—such as the introduction of microtiter plates—as means to help fuel growth within the industry. With automation playing an increasingly larger role in the laboratory, researchers need to better communicate with technologies, and the technologies must also communicate and work with each other. It’s important to step back and have a look at how the whole process can be made more productive.
Right now, most lab automation is the result of custom-developed, purpose-driven systems. They’re put together piecemeal from different components—dispensers, washers, readers, and sealers—made by different manufacturers, interacting through made-to-order software. Frequently, the equipment was designed for use by human beings, not by automated methods. It’s a far cry from the USB plug-and-play connectivity enjoyed by the personal computer market today.
Liscouski said we need to start thinking about how the processes in the lab as a whole can be made more productive, and not just component by component. “In some cases that means re-engineering the processes,” he said.
Among his recommendations, “instead of having people act as robots, have robots act as robots. …It’s a matter of making better use of people. Instead of using them as a means of transferring materials from one place to another, it’s allowing the equipment to do that and allowing the people to spend more time doing analysis, doing thinking, doing research.”
These are all components of what Liscouski calls scientific manufacturing, in which the automation process is designed, or redesigned, as an entire system. To make full use of the concept, users, who will need to be involved in the planning and facilitation, must demand that vendors’ hardware and software talk with others’ hardware and software. Those who are designing and building the systems should focus on “understanding how labs work, instead of the science people learning about how to talk in bits and bytes,” he said.
Liscouski was concerned that budding researchers are being trained in the sciences but not necessarily in the automated tools that are being used to do science. “When they come out of undergraduate education, they go into laboratories and there are all these tools that they’ve never seen before, and yet they’re expected to work with them and understand them,” he said. “The gap needs to get filled.”
A large emphasis of the nonprofit Institute for Laboratory Automation is education. The organization provides live, video, and online courses for both lab personnel and IT support.
“We probably have the only life sciences-based course for IT professionals that talks about what labs are, how they work, what goes on in them, what kinds of things they bump into, and then also talks about what’s necessary to provide the day-to-day support in those kinds of environments,” Liscouski noted.
No Time to be Smart
One of the pieces to be integrated into a liquid-handling system is the dispenser itself. Typically, the quantity of fluid released is regulated by the amount of time a valve is allowed to stay open, without any feedback as to how much is actually being dispensed. That’s fine as long as all the relevant parameters remain constant. But, for example, “you will dispense more fluid if the temperature is higher, because the viscosity is lower, when you keep a constant time,” pointed out Laurent Tanguy, Ph.D., an R&D engineer at IMTEK.
Dr. Tanguy and his colleagues wanted to build a smart dispenser, one that could “tell you how much you dispensed and which adapts to changes of pressure or viscosity or rheological properties of your fluid,” he said. And so they designed a relatively simple system based on the perfect gas law.
A T-connector is inserted between a syringe reservoir and a normally closed nozzle. At the end of the T-connector’s third arm is a sensor that can detect the pressure of the gas in the arm. As the piston in the reservoir is depressed, it forces fluid further into the third arm, compressing the gas and causing the sensor to register a change in pressure. The valve is opened to eject the fluid, bringing the system back to equilibrium, and then it is closed.
The pressure integral is used as the regulating value. It is proportional to the volume of fluid being dispensed, and because the relationship is linear, doubling the pressure exerted (by a stepper motor, for example) doubles the dispensed volume as well.
“If you know the volume of gas that is enclosed at the beginning you can use the Boyle-Mariotte Law, and calculate back to estimate how much fluid you dispensed,” Dr. Tanguy said.
Such a dispensing system can equally switch between fluids like water and DMSO “because the rheological properties change but the gas volume change inside the chamber is equal to the volume you dispense, so you can always calculate back. You have to do a first dispense, which is completely free because you don’t yet know the relationship between the integral and the volume,” Dr. Tanguy explained. “But you make the first dispense for the value of the integral and you know how much volume of fluid is out afterwards because you used the perfect gas law.”
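The gas-law step can be sketched in a few lines. The numbers are hypothetical, and the real system integrates the measured pressure over time, but the Boyle-Mariotte arithmetic is the same: at constant temperature, pressure times gas volume stays constant, so the shrinkage of the trapped gas equals the fluid volume displaced.

```python
# A back-of-the-envelope sketch of the Boyle-Mariotte estimate described
# above, with hypothetical values.

def dispensed_volume(v_gas_initial, p_initial, p_measured):
    """Fluid volume that displaced the trapped gas, from its compression.

    Boyle-Mariotte: p_initial * v_gas_initial = p_measured * v_gas_now,
    so the gas volume lost (v_initial - v_now) equals the fluid displaced.
    """
    v_gas_now = p_initial * v_gas_initial / p_measured
    return v_gas_initial - v_gas_now

# Example: 100 uL of trapped gas at 101.3 kPa, compressed to 103.3 kPa.
vol = dispensed_volume(100.0, 101.3, 103.3)
print(f"estimated dispensed volume: {vol:.2f} uL")
```

Because the estimate depends only on the gas, not on the liquid, the same calculation holds whether the fluid is water or DMSO, which is why the first calibration dispense described above is all that is needed.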
The classic structural biology pipeline begins with hypothesizing where protein domains might be found encoded in a larger gene, followed by amplifying that gene fragment by PCR, putting it into a plasmid, expressing it in E. coli, and divining whether the resultant protein fragment is soluble.
“The problem is that a lot of the time this just doesn’t work—despite your best guess the protein fragment is insoluble,” said Darren Hart, Ph.D., team leader at the EMBL in Grenoble, France. “So you have to go ‘round the cycle and re-hypothesize what might be the best construct, make more bits of DNA by PCR, try to express them. There are people who are working on very difficult, poorly understood targets who end up doing this for a year or more.”
There is another way. Dr. Hart’s high-throughput Expression of Soluble Proteins by Random Incremental Truncation (ESPRIT) platform utilizes a directed evolution approach to generate essentially all possible domains found in a single gene, translate them, and screen them for solubility, all in a single, linear experiment.
“We use enzymes to randomly eat away at one or both ends of the piece of DNA,” he said. “The large proportion of this collection is junk—the domain boundaries are wrong, the bit of protein that is produced is nonsense and doesn’t fold up properly, so it’s generally insoluble or proteolyzed.” But the hope is that perhaps 1 in 1,000 of these truncated constructs will actually encode a piece of protein that folds up into a stable domain.
The plasmids are all made in a single droplet of just a few microliters, which is used to transform E. coli, where individual bacteria take up single plasmids. Those cells are plated on agar where, during cell growth, an endogenous bacterial enzyme biotinylates a reporter tag on the target protein if it is soluble, providing a flag for recognition of the rare desired clones. Robotics pick some 28,000 colonies per target and place them into the wells of 72 labeled 384-well plates.
The contents of the plates are then arrayed onto a membrane, and the plates are frozen down. The membrane is probed with fluorescent streptavidin to identify positive clones, which can then be taken out of the freezer to be sequenced.
Academic access to the technology is supported by EU funds, with contract research possible for pharmaceutical and vaccine companies.
“If we think that there are well-behaving domains that might be discoverable and that there is no obvious reason why it shouldn’t express in bacteria, we take the project,” Dr. Hart said. “And of those projects—which usually come with a history of failure and frustration, having been worked on in the classical way for many months—more than half actually yield soluble material. The person who has put the energy into doing this experiment comes away with a smile on their face, with something that they want to continue working on.”