Crack in CRISPR Facade After Unanticipated in Vivo Mutations Arise (genengnews.com)

"Unanticipated?"
CRISPR is believed to be, at its core, a bacterial immune system that 'knocks out' problem viral genes by chopping them up at specific 'remembered' loci, relying on the lossy repair mechanisms to cause mutations that prevent the gene from functioning. It also evolved to work on the fairly small genomes of single-celled organisms.
So we've known for a while that it's going to be quite tricky to get only the exact mutations that we want, at only the exact point that we want them.
It's not magic.
The mutations were unanticipated by the computational tools designed to identify likely off-target effects. This was in the article.
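For anyone curious what those prediction tools do at their core: the basic idea is scanning the genome for near-matches to the guide sequence within some mismatch budget. A toy Python sketch of that idea (real tools also model the PAM site, bulges, and position-dependent mismatch weights; all sequences here are made up):

```python
# Toy sketch of how off-target prediction tools work in principle:
# slide the 20-nt guide along the genome and flag any site within a
# mismatch budget. Real tools additionally require a PAM, allow
# bulges, and weight mismatches by position.

def mismatches(a: str, b: str) -> int:
    """Hamming distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def predicted_off_targets(guide: str, genome: str, max_mm: int = 3):
    """Return (position, site, mismatch_count) for candidate cut sites."""
    k = len(guide)
    hits = []
    for i in range(len(genome) - k + 1):
        site = genome[i:i + k]
        mm = mismatches(guide, site)
        if mm <= max_mm:
            hits.append((i, site, mm))
    return hits

guide = "GATTACAGATTACAGATTAC"
# Made-up genome: a perfect on-target site plus a 1-mismatch off-target.
genome = "CC" + guide + "GGTT" + "GATTACAGATTTCAGATTAC" + "GG"

for pos, site, mm in predicted_off_targets(guide, genome):
    print(pos, site, mm)
```

The punchline of the paper, of course, is that single-nucleotide changes far from any near-match site wouldn't be flagged by this kind of search at all.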
> It's not magic.
"It's not a computer" is probably more apt.
Pre-publication press release. Full text of article not available. How good was their control group? The press release says they used mice that had previously been edited with CRISPR/Cas9 and found off-target mutations, most of which were single nucleotides. Did they have a good control group so as to exclude random point mutations? Who knows? What is the theory on how CRISPR/Cas9 causes single nucleotide off-target mutations? No idea.
This is academic publishing at its absolute worst. Put out a sensational press release and whip up a fervour before the article is available. Those diving into the scrum to comment have not read the article.
Also, it's research by MDs. Just sayin'.
The article is available: https://www.nature.com/nmeth/journal/v14/n6/full/nmeth.4293....
> "Co-housed FVB/NJ mice without CRISPR-mediated correction were used as the functional-deficient control. Briefly, an sgRNAexpressing plasmid had been coinjected, into FVB/NJ zygotes, with the single-stranded oligodeoxynucleotide (ssODN) donor template and Cas9 protein to generate mosaic F0 founders.1"
Following reference 1:
> "The sgRNA plasmid was co-injected with the single-stranded oligodeoxynucleotide (ssODN) donor template and the Cas9 protein into FVB/N zygotes to generate eleven F0 founders.
[...]
Double-strand breaks (DSB) were detected in 7 of 11 mice
[...]
The target region was sequenced, revealing that F0 3 and 5 incorporated the donor template precisely in 35.7% and 18.8% of somatic cells, respectively (Fig. 1c), while F0 7 and 8 incorporated indels in the integration, corroborating the unexpected results in the RFLP data.
[...]
A mixture of 3 ng/mL sgRNA plasmid, 3ng/mL of Cas9 protein (NEB Ipswich, MA), and 1mM ssODN (Integrated DNA Technologies, Iowa) was injected into the pronuclei and cytoplasm of FVB/N inbred zygotes. Zygotes that survived injection were transferred into oviducts of 0.5-day post-coitum, pseudopregnant B6xCBA F1 females and carried to term. The resulting gene-corrected mice were backcrossed, initially into the FVB/N background, to determine the germ-line transmission efficiency of the repair." https://www.ncbi.nlm.nih.gov/pubmed/27203441
This is missing some crucial info, isn't it? How many FVB/N zygotes were injected to generate those 11 original mice?
The more you work in the field, the more you discover how underwhelming the details in biology papers are. They give results without detailing the algorithms, hence destroying reproducibility; they hide datasets behind confidentiality, present large-scale graphs without precise data, and so on.
Generally, reproducibility in biology papers is but a faraway dream.
>They give results without detailing the algorithms, hence destroying reproducibility; they hide datasets behind confidentiality, present large-scale graphs without precise data, and so on.
Almost all science disciplines do this if the prestigious journals won't police it. Peer-reviewers who cared would quash these during referee periods if they wanted to, but they need to publish their data-less work as well.
Only a few journals across all of science care about that kind of thing. A few economics journals force algorithm and full datasets to be published.
It is still awful but far better now than a few decades ago. The absolute worst is Nature/Science from ~2000. For someone trying to figure out what is going on (rather than just believe what the authors claim) most of those papers are not even worth reading.
The problem with top-ranked journals more likely arises from page limits. The amount of data needed for a paper published in such journals, with so few pages and figures allowed, basically means no one has enough space to put sufficient material in the main content. And then who really puts serious effort into the supplementary materials?
As for the methods section, while it is an important part of reproducibility, it is not written in great detail in top journals. The reasons, I think, are that it mostly lives in the supplementary material, that the multiple methods used would make it crazily long, and that commercial kits and highly automated machines make it unnecessary for researchers to spell everything out.
Anyway, I agree that papers from Nature/Science are sometimes hard to read, especially without reading the supplementary.
Any time before the advent of "supplementary materials" pretty much meant that you had nothing more than a highlight of the method. It was terrible indeed.
However, even today I find that roughly half the publications are not worth the paper they are printed on, or the bandwidth to download them.
Check out pre-1940 papers (the year may be later depending on the exact subfield). I've seen that they used to make a point of including all the info (including raw data) to the extent it was practical. Somewhere along the line the attitudes went wrong; I blame NHST personally.
NHST? What's that?
I explained it and provided some references in an earlier post here: https://news.ycombinator.com/item?id=13483055
Here is another reference you could check: http://andrewgelman.com/2016/02/04/the-notorious-n-h-s-t-pre...
...and in the context of an already sensationalized technique ("it's like TextEdit, but for genes!")
Most of the DNA that suffers random mutations is non-coding DNA. It's completely unknown what effects, if any, modifying non-coding DNA might have, since its function is unknown.
I find it interesting that although anyone can experiment with CRISPR in their living room using something like this (http://www.the-odin.com/), no one has just tried modifying the non-coding DNA a lot and then observing any changes (or lack thereof) in the specimen.
It's still fairly hard to do in a living room, if you want to do anything novel.
You need probably low four figures of equipment first: a -20°C freezer for DNA/buffer storage, a small centrifuge, a thermal cycler, some way of getting genes into a target, and incubation space for whatever you're modifying. Preferably you'll also have consistent temperature control and an extremely clean environment. You also need a source of primers, which are short custom sequences of DNA used to (tl;dr) make a lot of copies of a DNA sequence you only have in small quantities.
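To give a flavor of the bookkeeping involved with primers: a common first sanity check on a short primer is the Wallace-rule melting-temperature estimate. A rough sketch (the rule is only reasonable for primers under ~14 nt; real design tools use nearest-neighbor thermodynamics and also check GC content, hairpins, and dimers):

```python
# Wallace rule: Tm ≈ 2°C per A/T base + 4°C per G/C base.
# Only a rough guide, and only for short (<~14 nt) primers.

def wallace_tm(primer: str) -> int:
    """Rough melting-temperature estimate (°C) for a short primer."""
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ACGTGCTAGCTA"))  # prints: 36
```

You'd typically aim for a pair of primers with similar Tm so both anneal well at the same PCR temperature.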
Nothing too difficult, and much of that equipment can be DIY'd for the home lab. But many chemical suppliers also won't ship to residential addresses or onboard individuals as customers, so you'll need to incorporate and shop around a bit. And wet lab protocols can be very finicky; you'll probably need to run through your workflow a lot before you get to the point where you can semi-reliably go from start to finish without fucking up. Often, you won't know exactly how you fucked up, but you can't argue with a lack of results.
It's possible, I think, but it's also time-consuming and difficult.
Anyways, if you did get a reliable setup working with say, micropropagated plant specimens, there are far more interesting prizes than seeing what non-coding DNA does. Plants make all kinds of cool stuff, from scents to flavors to alkaloids. And they've demonstrated an ability to take genes from things like jellyfish for e.g. autoluminescence, too.
There is the bio-maker movement, sometimes called biolabs, that puts fairly elaborate biotech in the hands of unaffiliated researchers and hobbyists. I wouldn't be surprised to see homebrew CRISPR at science fairs in the near future. https://www.meetup.com/denverbiolabs/
Indeed. And the thing I linked to (the odin) does allow you to do CRISPR in your living room for $150, as I said in my original comment.
That's a good point; community labs and bio makerspaces have great potential to help make this more accessible to us lay people!
Could the genes that produce THC be spliced into common lawn grass?
Yes, and they're all characterized. It would be difficult, though; you're looking at about a half-dozen genes for the intermediary steps from something common like acetyl-coa.
But with just THC, it probably wouldn't mimic the effects of cannabis very well. There wouldn't be any CBD, terpenes, etc etc.
Still, you could do it. It'd probably be easiest with something like tobacco which is well-understood and already has a system for sequestering cytotoxins.
Doubt you'd get something with appreciable yield stably transfected before wider legalization hits, though. And that would be some expensive pot.
DMT would probably be the easiest. It's only three steps from tryptophan, and I believe all the enzymes are pretty simple.
Splice the gene directly in to your liver stem cells and be done with the middle-man once and for all.
That is the stuff of urban legends ...
> And they've demonstrated an ability to take genes from things like jellyfish for e.g. autoluminescence, too.
This almost feels like bikeshedding but...to be fair, pretty much the first thing anybody does with a new work organism in biotech is make it glow and it's been that way long before CRISPR! There's just something about it that people find irresistible.
I lost decades of my life chasing the dream of making things glow (more specifically: general germline mutation based on engineering). I agree the glowing tobacco plant was a sexy introduction to gene modification. Unfortunately doing anything non-trivial and actually useful (beyond the 'hello world' of a glowing plant) is really challenging.
Sure, and the autoluminescent plants experiment didn't use CRISPR; I was just pointing out how versatile plants are.
http://journals.plos.org/plosone/article?id=10.1371/journal....
> There's just something about it that people find irresistible.
It's cheap to verify and demonstrate success, so it's a natural first, or at least early, application.
It's like making a todo app in a new language.
> a -20C freezer for DNA/buffer storage,
How about dry ice from the grocery store?
It's been a few years since I left the genetics world behind, and I was just a sysadmin, but even then I'm pretty sure they had determined non-coding regions to play a vital role in protein formation/folding.
How would non-coding DNA help with protein folding when translation happens outside the nucleus?
It's possible that upstream UTRs impact RNA transcription rates, but I'd be surprised if UTRs mediate protein folding.
Non-coding = "not translated into proteins", not "isn't used". Some non-coding DNA produces non-coding RNA, including transfer RNAs (https://en.wikipedia.org/wiki/Non-coding_RNA), which help in protein construction.
This makes sense given we know that some non-translated RNA has biological activity, like tRNA and rRNA. I was under the impression that "junk" DNA referred to DNA that was not transcribed.
Nearly all DNA is transcribed at some low rate (this has been experimentally determined), but it seems probable that most long stretches of non-coding DNA contain little to no functional elements and could be replaced or removed with no observable functional effect.
I wanted to say something about RNA, so maybe you're onto something, but alas, I may simply be wrong.
Because to come to any real conclusions, we'd have to iteratively modify non-coding regions one at a time, and the resources to accomplish that are far beyond the average garage-geneticist's reach. And, sadly, there isn't a lot of funding for academic research into regions that aren't believed to have a significant impact.
Couldn't you just start with the Odin kit (that I linked to above), change loads of the non-coding DNA in the bacteria that comes with the getting-started kit, and see if it survives or has any different characteristics?
Off-target effects from CRISPR are not new. The method leverages DNA repair mechanisms in the cell to do the recombination. All the errors involved in those DNA repair mechanisms (there are several distinct ones) apply here.
Moreover, this is biology. Biology is messy. Mistakes are the reason the system works (evolution). To say that off-target effects were unanticipated is nonsense. The real questions are: how many off-target effects will there be, can they be predicted, and can their effects be predicted?
But if CRISPR cuts at a single DNA sequence, surely the repair mechanisms won't damage a random gene far away. My very limited understanding is that this paper is reporting widely spread damage across the genome.
The original bacterial CRISPR can only cut pre-defined sequences, but the artificial gene-tech CRISPR can add and edit as well. I'm wondering if the latter two modes are the sources of the genome damage.
> but the artificial gene-tech CRISPR can add and edit as well.
I was under the impression that addition and editing capability were essentially done by injecting the sequence that is desired and performing the cut. Then you simply rely on the repair mechanisms to have a chance to put the new segment in instead of the old segment. Is that not the case?
I seem to recall that in prior HN stories about CRISPR, some poster was saying he believed CRISPR didn't work the way people thought it did. He said it was simply killing cells that didn't have the desired mutation, and biologists weren't realising that because of design errors in the experiments (or rather, sometimes mass cell die-offs were being reported but not dwelled upon).
If CRISPR isn't actually editing the DNA but rather just selecting natural mutants that happen to have the desired edit, would that cause what's seen here?
Don't think so. The CRISPR mechanism was originally discovered in bacteria, which consist of only one cell. It doesn't make sense for a bacterium's anti-virus system to kill its only cell.
This is a really interesting point.
I don't know if this experiment has been done, but I actually think that there's a good chance that a bacterial cell might acquire a sequence that it itself contains - CRISPR is known to work at the population level but my understanding is that the mechanism for acquisition of new sequences is unclear.
I'll take a look for any papers on the topic and repost should I find anything.
That's assuming that its selection mechanism works at an individual level, and not a population level.
After all a colony of bacteria is (usually) a set of identical clones. As long as a few survive, the DNA lives on.
See my other post here. They don't report the initial number of zygotes injected (why not?) so we cannot assess the "selection" explanation.
They got 11 surviving mice in the end, but only 7 were "edited", and since we usually see that ~0.1–1% of cells are mutants at any given site, I would expect they needed ~1000 zygotes.
This only really explains the NHEJ results though, not the HDR (when the repaired DNA includes an exogenous template). They report that 2 (out of the 7) mice had the sequence matching the template. However this was only in some of their cells (36% and 19%).
Two other mice had a sequence that was similar but contained mutations...
Anyway, maybe someone can email them and ask how many zygotes were used originally.
It looks like this strain of mice can produce ~10 pups (a lower bound since not all zygotes will make it to birth) every 5 days for 26 cycles. So, one breeding pair could produce at least ~260 zygotes in 4 months. From this, I'd say it is plausible (economically) that the number of zygotes injected for this study was in the thousands:
>"The fecundity of the FVB/N strain was assessed by data from nine breeding pairs, which produced 43 litters. Litter size ranged from 7 to 13, with a mean value of 9.5. (First litters were generally smaller.) This is superior to other commonly used inbred strains; for example 6.7 for C57BL/6J, 6.6 for SJL/J, 5.4 for 129/J, or 5.0 for DBA/2J (15). A typical breeding pair mated at every postpartum estrous cycle and continued breeding for at least half a year, usually longer." http://www.pnas.org/content/88/6/2065.full.pdf
>"Mice have a 4-5 day estrous cycle and ovulate on the third day. Placing the females with a male on the third day of their cycle will result in the maximum number of pregnancies." Also from the same reference (table 1), number of fertile cycles is ~26: https://www.jax.org/strain/001800
What are the chances that a specific mutation arises by chance once you're changing more than 4 or 5 bases of DNA? It's something like 1/4^n for n bases, or thereabouts. The chances are almost nil.
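To put a number on that intuition: under the naive model where each of n positions must independently land on one specific base (probability 1/4 each), the chance of hitting the exact sequence by luck is (1/4)^n. A toy calculation:

```python
# Naive model: each required base is drawn uniformly from {A, C, G, T},
# so the chance of getting one specific n-base sequence is (1/4)**n.

def chance_of_exact_sequence(n_bases: int) -> float:
    return 0.25 ** n_bases

for n in (1, 5, 20):
    print(n, chance_of_exact_sequence(n))
```

Even at n = 5 that's already below one in a thousand, and at guide-length scales (n = 20) it's effectively zero.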
That isn't how NHEJ works. All they see is some random indel at the site, so all you have to explain is the presence of any mutation at the target site.
I believe George Church published some work several years ago now using whole-genome sequencing after CRISPR, saying they had negligible off-site integration or other mutations.
But going back, the findings were much more measured than that: they still saw off-site integration.
It seems to me that this high-profile character often goes on record with things that ultimately turn out to be just wishful thinking and don't hold up.
I'm inclined to agree, except he did invent much of modern sequencing. He's great at technology, but limited in actual science.
The article headline kinda betrays a bias before you even read it.
Title is a bit dramatic. People just need to choose their gRNAs better based on available genomes.
Unanticipated
I'm just a normal person with nothing to do with this, and I expected such "surprises" from the beginning.
I remain convinced mankind should not mess with this.
The rest of us will enjoy leaving you behind =)
Hammond, is that you? Jurassic Park was a cautionary tale about hubris. People with lots of money and smarts were convinced they could control nature.
I couldn't be the only one worried about an "I Am Legend" type problem with this stuff. Obviously, that movie is an extreme, but the underlying problem of unintended mutation exists.
All known CRISPR systems work only under very specific conditions. You could drink a vial of Cas9 and it wouldn't do anything to you because it couldn't enter your cells.
I am sick of all the CRISPR hype; this is the cure being worse than the disease.
Not sure about it being overhyped, but a similar sentiment, stated better, is in this article (from last year):
http://www.sciencemag.org/news/2016/05/gene-editor-crispr-wo...
It's pretty clear there are some huge hurdles before it's useful, even for your average biochem laboratory. But the effort put in to overcome those hurdles is probably worth it.