
Abseq - A New Era of Unlimited Multiplexing in Cytometry?

Just read a very exciting paper in Nature's Scientific Reports journal:

Basically, the authors use DNA sequences to tag antibodies instead of fluorochromes. Since an unlimited number of unique sequences can be generated, one could - theoretically - multiplex as many antibodies as one wished.

We tried this a number of years ago -- planning to read out sequences on the 96-plex Fluidigm BioMark platform instead of a sequencer. We weren't successful because our conjugations weren't clean. Since then, O-link, Nanostring, and the authors of this paper have achieved antibody-oligonucleotide conjugation.

Abseq has (at least) two clever elements. First, if you stain a population of cells with, say, 300 antibodies, you won't be able to resolve single-cell expression if the cells remain mixed together when they are analyzed (i.e. sequenced). After all, cells A and B will both stain with antibody 38, which is conjugated to the same oligonucleotide. You would need to analyze cells one by one, or barcode the individual cells. The authors use microfluidic droplet technology, which they pioneered, to isolate single cells and barcode them; then the cells can be mixed back together. The use of droplets is critical to throughput.

Second, to solve a problem inherent to sequencing - namely the biased amplification of some tags - the Abseq team has smartly added unique molecular identifiers (UMIs) to each DNA tag. This lets them collapse all the reads that derive from the same original template molecule into a single count, so amplification bias doesn't inflate the signal; in principle, it makes the readout more quantitative.
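To illustrate the counting logic (not code from the paper; the read layout and field names here are hypothetical), UMI-collapsed counting boils down to grouping reads by cell barcode and antibody tag, then counting distinct UMIs:

```python
from collections import defaultdict

def umi_collapsed_counts(reads):
    """Collapse reads into molecule counts per (cell, antibody) pair.

    `reads` is an iterable of (cell_barcode, antibody_tag, umi) tuples --
    a deliberately simplified, hypothetical read layout. Reads that share
    a UMI for the same cell and antibody are treated as PCR duplicates of
    one original template and counted once.
    """
    umis_seen = defaultdict(set)
    for cell_barcode, antibody_tag, umi in reads:
        umis_seen[(cell_barcode, antibody_tag)].add(umi)
    # Distinct UMIs approximate the number of antibody molecules bound.
    return {key: len(umis) for key, umis in umis_seen.items()}

# Toy example: three reads for cell_A / antibody_38, two of which share
# a UMI, so the collapsed count is 2 molecules rather than 3 reads.
reads = [
    ("cell_A", "antibody_38", "AACG"),
    ("cell_A", "antibody_38", "AACG"),
    ("cell_A", "antibody_38", "GTTC"),
]
print(umi_collapsed_counts(reads))  # {('cell_A', 'antibody_38'): 2}
```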

Abseq is a potentially exciting and empowering technology. But before we get too googly-eyed, keep in mind the following:

There are severe limits to the system. Its maximum capacity is a function of the number of reads a sequencer can generate (usually about 1 billion), the number of cells, and the number of reads needed per cell (set by the number of antibodies and the dynamic range you want per antibody). The authors provide the following example, admitting that inefficiencies at various steps in the process make this pie-in-the-sky thinking:

Number of antibodies (1,000) × dynamic range (two log decades = 100) × number of cells (10,000) = maximum number of reads (1×10^9).
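To make that arithmetic concrete, here's the same back-of-the-envelope budget as a couple of lines of Python (my framing, not the authors'):

```python
def reads_required(n_antibodies, dynamic_range, n_cells):
    """Crude read budget: every antibody on every cell needs enough
    reads to cover the desired dynamic range."""
    return n_antibodies * dynamic_range * n_cells

# The authors' idealized example: 1,000 antibodies x 2 log decades (100)
# x 10,000 cells uses up the ~1e9 reads a sequencing run provides.
print(reads_required(1_000, 100, 10_000))  # 1,000,000,000
```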

So, let's imagine that you want to study T-cells specific to a viral epitope, for which you really need to analyze, say, 300K cells to find enough of them to quantify and characterize. You have to pay for that 30X increase in cells by reducing the number of antibodies by the same factor -- after all, you don't have any room to give on the dynamic range. So this means you can't do more than about 33 antibodies. That's not much better than our 30-parameter flow cytometry, which has MUCH better dynamic range (5 decades) and isn't limited in the number of cells that can be acquired. Not to mention that flow cytometry instrumentation is more common, as are commercially available reagents. So, while my enthusiasm for this technology is still decent, it's tempered by this realization.
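Flipping the same crude budget around, you can solve for how many antibodies a given cell count leaves room for (a hypothetical helper, assuming a fixed 1e9-read budget and 2-decade dynamic range):

```python
def max_antibodies(n_cells, total_reads=1_000_000_000, dynamic_range=100):
    """Maximum number of antibodies that fits in the read budget for a
    given number of cells at a fixed per-antibody dynamic range."""
    return total_reads // (n_cells * dynamic_range)

print(max_antibodies(10_000))   # 1000 -- the authors' idealized example
print(max_antibodies(300_000))  # 33   -- the rare T-cell scenario above
```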

One of the amazing potential applications of this technology is the simultaneous measurement of protein and gene expression, as the authors tantalizingly tease us with at the end of the Discussion section. But keep in mind that, for a typical single-cell gene expression experiment, you'll read out about 15,000 genes (across ALL the cells analyzed), which pretty much maxes out the system... there's little room left for simultaneous measurement of antibody tags (for protein expression). So, the forecast calls for rain on the parade route. Hopefully, I've miscalculated this.
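Here's the rough arithmetic behind that pessimism. The assumptions are mine (the same 1e9-read budget, 10,000 cells, and a modest 2-decade dynamic range per gene); the real accounting depends heavily on how deeply each cell's transcriptome is sequenced:

```python
def reads_required(n_targets, dynamic_range, n_cells):
    """Same crude budget as above, applied to transcript targets."""
    return n_targets * dynamic_range * n_cells

budget = 1_000_000_000
transcriptome = reads_required(15_000, 100, 10_000)  # 1.5e10 reads
print(transcriptome / budget)  # ~15x over budget before a single antibody tag
```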

I see two other limitations of the technology. First, the relatively high frequency of coincident events. When mixing a pure CD3+ cell line with a pure CD19+ cell line, the authors see about 0.6% double-positive cells, representing - they think - two cells encapsulated into the same droplet. This is pretty substantial background, I think, and will hurt for rare-event applications (rough numbers on this below).

Second, and this is a general pet peeve of mine for most papers introducing a new technology or algorithm, the demonstration application is stupidly easy. Detecting pure cell lines is an important proof-of-principle, but it doesn't correspond to the kind of application a user will employ in real life. In real life, they will want to detect poorly expressed markers with this technology, they'll want to use many of those at once, and they'll want to detect every combination of expression (++, +-, -+, --). Can you resolve distinct populations of cells based on the signal this technology provides? What about markers with broad dynamic range? IFNg can be expressed over 3-4 log decades, and bright vs. dim expression has biological consequences. Will Abseq be able to handle this kind of application? I really wanted to see a memory T-cell panel in this paper (not in the next one, which may never come) to prove that this technology is the real deal.
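On the coincidence point: the 0.6% figure is in line with what you'd expect from random co-encapsulation. As a rough framing (my assumption that cell loading follows Poisson statistics, not an analysis from the paper), the doublet fraction scales with how densely cells are loaded into droplets:

```python
import math

def occupancy(lam, k):
    """Poisson probability that a droplet contains exactly k cells,
    given an average of `lam` cells per droplet."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 0.012  # illustrative loading: roughly 1 cell per 80 droplets
singlets, doublets = occupancy(lam, 1), occupancy(lam, 2)
# Fraction of cell-containing droplets that hold 2+ cells (approximated
# by k = 2) -- about 0.6% at this loading, in the ballpark the authors
# report. Loading more densely to boost throughput makes this worse
# roughly in proportion to the loading.
print(doublets / (singlets + doublets))
```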

So, that's a little reality check. Despite the limitations I see, I'm really impressed and excited by the technology. It is technically very elegant, and potentially very powerful with some optimization and more work.
