January 1, 2017 (Vol. 37, No. 1)
Classical Gene Therapy Went Through a Difficult Period; New Methodologies Are Maturing More Quickly
Classical gene therapy took several decades to advance from proof of concept to clinical applications. In contrast, newer forms of gene therapy, such as those exploiting the CRISPR-Cas9 gene-editing system, may take less than a single decade to do the same.
It may seem puzzling that the new gene therapy should mature so much more quickly. After all, the new gene therapy promises more advanced capabilities, such as the targeting of specific genomic loci. Additional capabilities often mean additional complexities—more “bugs” to work out. Yet the new can benefit from the experience of the old, enabling it to progress more quickly.
Classical gene therapy went through a difficult developmental period, recalls Theodore Friedmann, M.D., professor of pediatrics, School of Medicine, University of California, San Diego, and director, Center for Genetic Therapies. He points out that during this period, gene therapy struggled with gene delivery, the problem of bringing a therapeutic gene to the right tissue at the right time to correct downstream biochemical, physiological, and metabolic problems.
Gene therapy eventually hit upon a workable solution: the viral vector. Engineered for safety, or “disarmed,” the viral vector is ready to carry therapeutic cargo to a diseased tissue or cell. Although viral vectors were first developed for use with classical gene therapy, they are also capable of advancing the new gene therapy.
In classical gene therapy, a viral vector can carry a gene that, upon insertion into the genome, restores a missing function and thereby corrects a disease phenotype. Rather than attempt to manipulate faulty genes, classical gene therapy inserts new genes.
Despite its limitations, classical gene therapy has shown itself to be a powerful approach, and it has grown into a legitimate clinical field. Responsive diseases include severe combined immunodeficiency (SCID), pediatric retinal degeneration and blindness, hemophilia, lysosomal storage diseases, and neurological diseases such as adrenoleukodystrophy, commonly known as Lorenzo’s oil disease. Another application is the use of engineered T cells in immuno-oncology. T cells equipped with chimeric antigen receptors have shown remarkable promise against cancer.
But in gene therapy, the holy grail is to achieve salutary phenotypic changes more directly and with greater precision. Instead of merely inserting compensatory patches, gene therapy could resolve genetic diseases at the root level by correcting genetic misspellings. This is genome editing. Fortunately, genome-editing components can be loaded into viral vectors, expediting clinical development.
An early genome-editing approach used zinc finger nucleases (ZFNs). ZFNs provided proof of concept of genetic repair, and they have been applied in a number of clinical trials. Even more promising is the emergence of CRISPR-Cas9, which represents a new class of genome-editing tools. It can be used to repair the incorrect spelling of a disease-causing gene, and it can do so without leaving behind footprints.
“For research, the CRISPR methods are most convenient, but they have some limitations in the realm of delivery and possibly in the choice of target,” comments Dana Carroll, Ph.D., professor of biochemistry, University of Utah School of Medicine. “TALENs are the least limited in target selection and have excellent specificity. The ZFNs are the smallest and thus the easiest to incorporate into some viral vectors.”
“In the private sector, companies can choose among platforms based on IP issues. All platforms are capable of high efficiency and specificity. If a complex project involves only one or a few targets that will be attacked repeatedly, development of the cleavage reagent will represent a small part of the time and expense of the project, so any of the platforms could be used.”
With modifications that have been made in the last few years, nearly any sequence can be targeted with adequate specificity, continues Dr. Carroll. Specificity requirements vary among applications, and they need to be evaluated for each new nuclease/target combination.
Delivery of the reagents to the cells or tissues of interest is not a major issue for in vitro and ex vivo applications but remains significant for in vivo applications, both in humans and in model organisms.
Lastly, indel formation (the insertion or deletion of bases) by non-homologous end joining (NHEJ) is currently more efficient in most cells than sequence replacement or insertion by homologous recombination (HR). Because many applications depend on sequence correction, it would be helpful if HR were more efficient. Consistently better HR efficiency, however, has proven difficult to achieve.
“The most obvious clinical applications are ones that can be done ex vivo,” states Dr. Carroll. Elaborating, he notes that two approaches are being taken to treating sickle cell disease by modifying hematopoietic stem cells with CRISPR. One of these approaches is being explored in Dr. Carroll’s own laboratory.
“The approach that we’re studying depends on HR with an oligonucleotide template to correct the offending single-base-pair disease mutation,” Dr. Carroll details. “The other approach uses CRISPR to reverse the repression of fetal beta globin in adults; the persistence of fetal hemoglobin is therapeutic and even curative.
“The approved clinical trials that involve CRISPR are simply using it to inactivate genes in therapeutic CAR-T cells in order to enhance their efficacy,” concludes Dr. Carroll.
Inserting Exogenous DNA
Creation of knockin and conditional knockout animal models requires the insertion of long exogenous DNA cassettes into specific sites in the genome.
The CRISPR-Cas9 system can make a cut at the desired location in the genome, but the cell’s NHEJ machinery typically reseals the break before the homology-directed repair (HDR) pathway can find and insert the exogenous DNA at the cut site.
These mechanisms were evaluated in a project led by Masato Ohtsuka, Ph.D., associate professor of molecular life science, Tokai University, and Channabasavaiah Gurumurthy, MVSC, Ph.D., assistant professor of developmental neuroscience, Munroe Meyer Institute and director, Transgenic Core Facility, University of Nebraska Medical Center. These scientists demonstrated that if the exogenous DNA is supplied as a single strand, instead of a double strand, the HDR process outpaces NHEJ and efficiently slides in the foreign DNA cassette at the cut site.
They termed this strategy Easi-CRISPR (“Easi” refers to efficient additions with ssDNA inserts). Easi-CRISPR addresses a major challenge of animal genome engineering. Whereas dsDNAs serve as poor donors for insertions at cut sites, ssDNA insertions in Easi-CRISPR have demonstrated efficiencies of 25–100%. Over a dozen different models with inserts of 1,000–1,500 base pairs have been generated.
In another project, Drs. Ohtsuka and Gurumurthy worked with Masahiro Sato, Ph.D., professor of molecular biology, Kagoshima University, to develop genome-editing via oviductal nucleic acids delivery (GONAD). In GONAD, DNA and CRISPR reagents are introduced into a female mouse oviduct via electroporation, eliminating the traditional gene-editing steps of isolating fertilized eggs, microinjecting them with DNA, and then transferring the embryo into a new set of females.
As the technology evolves further, the molecular mechanisms of the HDR process need to be better understood to enable large-scale genome engineering, wherein clusters of human genes can be inserted en masse to create humanized animal models. Creating one humanized mouse model using traditional technologies can take 5–10 years or more; CRISPR-based approaches can shrink the time requirement of genome-engineering experiments several-fold.
CRISPR consists of two components: a guide RNA (gRNA) and a nonspecific CRISPR-associated endonuclease (Cas9). The gRNA is a short synthetic RNA composed of a sequence necessary for Cas9 binding and a custom sequence that defines the genomic target to be modified.
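As an illustration of how the custom sequence defines a genomic target, the minimal Python sketch below (the sequence and function name are hypothetical, and only the forward strand is scanned) enumerates 20-nt protospacers sitting immediately 5′ of the NGG PAM that SpCas9 requires.

```python
# Minimal sketch: enumerate candidate SpCas9 target sites in a DNA sequence.
# A target is a 20-nt protospacer immediately 5' of an NGG PAM; only the
# forward strand is scanned here, and the demo sequence is made up.
def find_sgrna_sites(seq, spacer_len=20):
    """Return (position, protospacer, PAM) tuples for every NGG PAM in seq."""
    seq = seq.upper()
    sites = []
    for i in range(spacer_len, len(seq) - 2):
        pam = seq[i:i + 3]
        if pam[1:] == "GG":  # NGG PAM
            sites.append((i - spacer_len, seq[i - spacer_len:i], pam))
    return sites

demo = "ATGCTGACCTTGGACGTTCACGATTGCAGGTACGT"
for pos, spacer, pam in find_sgrna_sites(demo):
    print(pos, spacer, pam)
```

A real design pipeline would also scan the reverse complement and score candidates for predicted off-target activity.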
Standard gel-based assays, such as T7E1, CEL-I, and Surveyor, can be used to determine gRNA activity. These assays, which use the enzyme mismatch cleavage method, are not very accurate; however, they are easy to adopt in any laboratory.
Another qualitative method is PCR amplification of the region containing the modification site followed by Sanger sequencing. Multiple peaks will result if double-strand break repair has occurred by aberrant rejoining, which indicates gRNA activity. These peaks can be deconvoluted if the Sanger traces are subjected to additional analysis. For example, the traces can be evaluated according to the TIDE (Tracking of Indels by DEcomposition) method.
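The decomposition idea behind TIDE can be illustrated with a toy model (this is a conceptual sketch, not the published algorithm): treat the edited Sanger signal as a mixture of the control signal shifted by each candidate indel size, then fit the mixture weights; TIDE itself fits real chromatogram traces with non-negative least squares and significance testing.

```python
import numpy as np

# Conceptual sketch of TIDE-style decomposition (not the published tool, and
# greatly simplified): model the edited signal as a mixture of the control
# signal shifted by each candidate indel size, then fit the mixture weights.
def decompose(control, edited, max_indel=2):
    shifts = list(range(-max_indel, max_indel + 1))
    # Each column is the control trace shifted by one candidate indel size; the
    # wrap-around shift keeps this toy example exact (real traces need truncation).
    A = np.column_stack([np.roll(control, s) for s in shifts])
    weights, *_ = np.linalg.lstsq(A, edited, rcond=None)
    return dict(zip(shifts, weights))

rng = np.random.default_rng(0)
control = rng.random(40)                              # stand-in for a control trace
edited = 0.7 * control + 0.3 * np.roll(control, -1)   # 70% unedited, 30% 1-bp deletion
w = decompose(control, edited)
print(round(w[0], 2), round(w[-1], 2))
```

With an exactly composed mixture like this, the fit recovers the simulated fractions for the zero-shift (unedited) and minus-one-shift (1-bp deletion) components.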
Editing results can also be checked with a method called targeted deep sequencing. It is not typically implemented in individual laboratories, due to instrument and reagent costs, as well as difficulties related to data analysis. It is, however, a sensitive and quantitative approach.
“Targeted deep sequencing has changed our lives,” exclaims Shondra Miller, Ph.D., director, Genome Engineering and iPSC Center (GEiC), Washington University School of Medicine. “We can tell at a very sensitive level how much and what type of editing is occurring. Next-generation sequencing (NGS) gives a true picture of what is happening in a population of cells or even in a single-cell-derived clone.”
NGS provides a lot of information quickly, and it is used for identifying knockouts and for determining knockout or knockin rates. Reagents and donor designs are checked first at the pooled level to determine which ones are the most efficient; editing events occurring in as little as 0.2% of the population can be detected.
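As a rough sketch of how such pooled rates might be tallied from targeted deep sequencing (the function is hypothetical; real pipelines align reads and classify indels explicitly), one can count amplicon reads that deviate from the reference:

```python
# Hypothetical sketch: estimate a pooled editing rate from amplicon reads.
# Assumes reads are already trimmed to the reference window; any read that
# does not exactly match the reference is counted as edited.
def editing_rate(reference, reads):
    edited = sum(1 for r in reads if r.upper() != reference.upper())
    return edited / len(reads) if reads else 0.0

ref = "ACGTACGTAC"
reads = ["ACGTACGTAC"] * 998 + ["ACGTAGTAC", "ACGTTACGTAC"]  # two indel-bearing reads
print(f"{editing_rate(ref, reads):.1%}")  # prints 0.2%, i.e., 2 edited reads in 1,000
```

The depth of sequencing is what makes such low-frequency events visible: at a few thousand reads per amplicon, even a handful of edited molecules stands out from the unedited background.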
GEiC also uses deep sequencing for genotyping animals modified with CRISPR-Cas9. Targeted deep sequencing at the loci of interest indicates what fractions of alleles in a mosaic animal have the desired modification.
NGS will continue to play a role as genome editing evolves. Not only can the technology contribute to better experimental design, it can also play a role in identifying off-target effects. In a therapeutic situation, millions of cells are taken out, edited, and replaced. Whatever combination of gRNA and Cas9 is used, the chance of having an off-target event in a million cells is high. Some combinations, however, may pose a higher risk of off-target events than others.
Whole genome sequencing can provide enough coverage of the genome to help determine if a therapy is safe. Multiple reads of each allele can indicate where off-target effects are occurring.
Work has commenced on the genetic modification of human embryos, first nonviable and, more recently, viable embryos. This work has highlighted the complexity of the attendant social, ethical, and policy decisions.
“This is a difficult area to define and fully comprehend,” maintains Dr. Friedmann. “It involves not only legitimate and ethically acceptable forms of human genetic modification for medical purposes and disease control, but also points to ethically hazardous applications and potential policy disasters.” Of particular concern are potential human enhancement and eugenic applications.
“Nontherapeutic human genetic modification is developing with such speed that it is worrisome that human society may not be prepared for the moral, ethical, and policy dilemmas that it poses,” Dr. Friedmann continues. “What is a legitimate target for genome editing? Who will identify those targets and to what end?”
As human genetic enhancement becomes increasingly feasible, we sometimes hear arguments that we are morally obligated to use any and all of the tools at our disposal to better humanity. Such arguments usually fail to clarify what is meant by better, who is going to be empowered to make such judgments, and how genetic and evolutionary harm may be avoided. If a gene is manipulated without totally understanding its interactions with other genes, then there is a chance of doing serious genetic harm.
Legislators and policy makers are going to be faced with technical and policy issues for which they may not be adequately prepared. Perhaps it is time to resurrect a version of the Office of Technology Assessment (OTA), which served the U.S. Congress from 1972 to 1995. The OTA provided congressional members and committees with objective, nonpartisan, and authoritative analysis of the complex scientific and technical issues of the late 20th century. Without access to updated information in such an explosively developing field, policy makers cannot be expected to make wise and enlightened decisions.