Quote:
Originally Posted by RyanC
Oh geez, as if we didn't have it bad enough being susceptible to artificially constructed viruses from programs (computer viruses), now we could face physical/biological viruses...
As for the 700TB headline... isn't it pointless without knowing what sort of access times are available for said data? It's my understanding that transcription/translation and other biological processes happen insanely fast, but is it fast enough to be actually useful?
From what I can tell from the paper's abstract and the news article, assuming the data has already been encoded in DNA, accessing it just means sequencing that DNA. Currently, the fastest option is "next-generation sequencing" (NGS), as opposed to standard methods like Sanger sequencing. With NGS, you can apparently sequence an organism's entire genome within hours. It will also cost you a hell of a lot of money.
So it's definitely not fast or cheap enough for regular, widespread use - yet, anyway; who knows what they'll come up with in the future. For the moment, I think the main application would be long-term archival storage, where you wouldn't be encoding and accessing the data regularly (the news article mentions this as well).
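Just to make the basic idea concrete: DNA has four bases, so in principle each nucleotide can hold two bits. Here's a toy sketch of that mapping in Python - to be clear, this is NOT the actual scheme from the paper (real DNA storage codecs add error correction, avoid long runs of the same base, split data across many short strands, etc.), just the simplest possible illustration of "bytes in, bases out":

```python
# Toy 2-bits-per-nucleotide codec. Purely illustrative -- a real DNA
# storage scheme (like the one in the paper) is far more involved.

BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Map each byte to four nucleotides, two bits per base."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # high bits first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand: str) -> bytes:
    """Reverse the mapping: every four bases become one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

strand = encode(b"Hi")
print(strand)           # CAGACGGC
print(decode(strand))   # b'Hi'
```

At two bits per base you can see where headline figures like "700TB per gram" come from - the density is absurd; the catch, as above, is that reading it back means sequencing, which is slow and expensive.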