
Episode post here. Transcription by Prexie Miranda Abainza Magallanes.

Matt Teichman:
Hello and welcome to Elucidations, an unexpected philosophy podcast. I’m Matt Teichman, and with me today is Gaurav Venkataraman—a co-founder of Trisk Bio in London—and he is here to talk about memory and DNA and RNA. Gaurav Venkataraman, welcome.

Gaurav Venkataraman:
Thank you, Matt. It’s great to be here.

Matt Teichman:
So if you asked me where I thought memory was stored—and I’m not an ancient Greek philosopher, but a contemporary Matt Teichman—I would think that it was stored in the brain. But apparently, you’ve done a little bit of research to suggest that the story is a little bit more complicated than that. Maybe we could just start by talking about the indications that the story might be more complicated than that.

Gaurav Venkataraman:
Sure. The complexity of the story probably starts around the 1950s. That's before what's now known as the synaptic memory hypothesis had really taken hold—the hypothesis that memories are stored in the synaptic weights between neurons. A synaptic weight is a connection strength; it's the kind of thing that neural networks are predicated on.
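
[Editor's note: as a minimal illustration of the "synaptic weight" idea the synaptic memory hypothesis builds on—the numbers and the sigmoid choice below are our own, not anything from the episode—this sketch shows a toy neuron whose response to the same stimulus changes when its connection strengths change.]

```python
# Toy neuron: output is a weighted sum of inputs squashed through a sigmoid.
# On the synaptic memory hypothesis, "learning" is just changing the weights.
import math

def neuron_output(inputs, weights, bias=0.0):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

stimulus = [1.0, 0.5]
print(neuron_output(stimulus, [0.1, 0.1]))  # weak connections: ~0.54
print(neuron_output(stimulus, [2.0, 1.5]))  # strengthened connections: ~0.94
```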

Matt Teichman:
Is storage even the right word for this, or is it more like there is an action happening in the system? Is that the right metaphor?

Gaurav Venkataraman:
That’s a great question. I don’t think anybody yet knows. The term that people use for memory storage is the engram—like, where is the engram—which would be the neural substrate of memory. And so the question is—and this was very hotly debated in the ‘50s—well, is it a molecule? Is it a strand of DNA, a strand of RNA, a certain protein—maybe like a prion—that somehow is long-lived, and therefore the brain can use it as a memory? Or is it something more dynamic—in modern parlance, a firing pattern that is kept in working memory? Is it something intracellular, like calcium waves inside of neurons, that’s serving as the memory?

I think it comes down to almost a philosophy of mind question, or a cognitive science question, to say, well, is the brain something that’s just constantly dynamically responding to the environment, or is it something where you’re accessing abstract representations of things, and those representations are somehow molecularly or neurophysiologically-encoded? You might have seen, several years ago, there was this instance of what people called the Jennifer Aniston neuron, in which some neuroscientists—

Matt Teichman:
—it had the haircut? Is that why they called it that?

Gaurav Venkataraman:
Well, I read this many years ago, so I might be butchering it, but there was this neuroscientist at UCLA called Itzhak Fried, and in collaboration with Christof Koch, who was there at the time (now at the Allen Institute), they were doing some electrophysiology experiments in patients who were undergoing neurosurgery. They would do recordings of neurons and show people pictures of various things: apples, their grandma, and then Jennifer Aniston. What they reported—again, it’s been many years since I read it—was that in certain people, there was a specific neuron that fired every time the person was shown a picture of Jennifer Aniston. And so, the tentative conclusion there is that that neuron was, like, representing Jennifer Aniston in the brain.

Matt Teichman:
Right. It’s at least an indication, maybe, that recall is happening?

Gaurav Venkataraman:
Sure. Something like that.

Matt Teichman:
Yeah.

Gaurav Venkataraman:
So you can interpret it up and down the stack.

Matt Teichman:
Yeah, absolutely.

Gaurav Venkataraman:
On the one hand, you could say yes, that’s the Jennifer Aniston neuron, and your brain is basically a computer that then uses the representation of Jennifer Aniston to compute. Or you could say, “Yeah, well, there’s some phenomenological thing that’s happening, and that neuron was the exhaust fume of the process that was set in motion by looking at the photo of Jennifer Aniston”—

Matt Teichman:
—maybe it has nothing, in fact, to do with the mental experience.

Gaurav Venkataraman:
Exactly.

Matt Teichman:
Yeah.

Gaurav Venkataraman:
I’m sure that in the paper, they did some sort of control, or had some sort of argument about this point. But I think fundamentally, it’s very difficult to draw a distinction between those two interpretations, given how little we know about the brain and how nascent our recording techniques are. One way you might press the question—is it an exhaust fume, or is it a representation?—would be to say, “Well, to figure that out, show me the rest of the brain.” And the answer is that you can’t see the rest of the brain, so you’re kind of taking a leap of faith.

Matt Teichman:
You can’t see the rest of the brain: why is that, exactly? What if I do an MRI; isn’t that the whole brain?

Gaurav Venkataraman:
You can do MRI, but you lose spatial resolution; you can’t see individual neurons, because the brain is densely packed. You can do functional MRI, but you have problems with that: there, you’re measuring deoxygenated blood flow, and so you’re worried about deconvolving the hemodynamic response function from the actual neural activity. So any way you try to get in there, you’re dealing with trade-offs. Unlike looking at your face, it’s difficult to get a convincing picture.
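
[Editor's note: the fMRI worry mentioned here—that the scanner sees neural activity only after it has been smeared through a slow hemodynamic response—can be sketched as a convolution. The gamma-shaped HRF below is a generic toy, not the response function from any particular study.]

```python
# The BOLD signal is (roughly) neural activity convolved with a slow
# hemodynamic response function (HRF); inference requires deconvolving it.
import numpy as np

dt = 0.1                                    # seconds per sample
t = np.arange(0, 20, dt)
hrf = t**5 * np.exp(-t)                     # toy gamma-like HRF, peaks ~5 s
hrf /= hrf.sum()

neural = np.zeros_like(t)
neural[[10, 80]] = 1.0                      # two brief neural bursts

bold = np.convolve(neural, hrf)[:len(t)]    # what the scanner "sees":
# two slow, overlapping humps, peaking seconds after the actual events
print(f"first burst at 1.0 s; BOLD peak near {bold.argmax() * dt:.1f} s")
```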

Matt Teichman:
Right. I mean, to look at somebody’s face, you don’t have to get inside of anything.

Gaurav Venkataraman:
Exactly. Ed Boyden had a great quote about this—I forget it now—but it was something like: the brain is just difficult to study because everything is densely packed inside the brain, difficult to access, and perturbing any part of it changes the other parts. And so, even in mice, if you want to do—

Matt Teichman:
—in the worst case, via death…

Gaurav Venkataraman:
Exactly. But even in mice, if you want to do a deep recording in the brain, you have to cut out part of the cortex to see into it, right? And so then, you have your scientific conclusion and you wonder, “Well, would this have changed had I left that part of the cortex in place?” And so that’s just like—

Matt Teichman:
—it could have an effect on what you’re observing…

Gaurav Venkataraman:
Yeah, exactly. People are aware of this and they try to do controls, but fundamentally, it’s something that you have to deal with just based on the recording techniques. Adam Marblestone and George Church and others are trying to work on nondestructive and noninvasive neural techniques. Maybe we will get there, and then we will figure out the truth about the Jennifer Aniston neuron.

Matt Teichman:
Yeah. It’s especially interesting to me how the metaphors we reach for in this stuff seem like they’re connected to machines.

Gaurav Venkataraman:
Yeah.

Matt Teichman:
When I was asking, well, does it make sense to think of memories as being stored, clearly, I have either flash or magnetic storage in mind as a metaphor.

Gaurav Venkataraman:
Yeah.

Matt Teichman:
So with a flash drive, if you store something on it and then unplug the machine, the thing persists. Whereas if information is loaded into a computer’s memory and you unplug the computer, then the information vanishes.

Gaurav Venkataraman:
Yup.

Matt Teichman:
So maybe that’s what I was getting at with the metaphor. But I feel like whenever I try to unpack one of these metaphors, the thing I always reach for is a robot, or a machine, or a computer, or something like that. It seems like it’s always tempting.

Gaurav Venkataraman:
I definitely thought about memory in those terms for a long time, and then I got convinced not to think about it in those terms. I was thinking about it literally at the level of: okay, well, where is the RAM and where is the ROM in the brain? Thinking maybe DNA is the RAM, RNA is the ROM, or something like that. And then I read this paper by Phil Agre, who was a collaborator of David Chapman back in the AI Lab at MIT in the ‘80s. And he wrote this little thing that was very impactful to me, called Writing and Representation. He and Chapman were working on this idea, trying to create AI agents. As preliminary work, they were trying to figure out: what kind of representations does the brain store, if any?

Matt Teichman:
Those would be computer programs that act like people who are reasoning about what to do?

Gaurav Venkataraman:
Exactly. At the time, I think Agre or Chapman—one of them—was working under Rodney Brooks, or was vaguely associated with Rodney Brooks, who at that time was putting out this paper called Intelligence Without Representation—the kind of ideas which I think led to the Roomba (although some people contest that claim). I think they actually made a robot that was bumbling around the halls. The idea was: we’re not going to implement any sort of planning system in the robot; we’re just going to implement these dynamic response functions. And what we see is that the robot behaves as if it’s planning, in this emergent way, but it fundamentally comes from just this stimulus-response loop. They were really trying to understand whether you needed abstract representations to get intelligence, and also, as a related but not identical issue, whether people actually use abstract representations.
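
[Editor's note: a toy sketch of the Brooks-style "intelligence without representation" point—the hallway, the reflex rules, and all specifics below are invented for illustration. The agent has no map and no planner, just a fixed stimulus-response loop, yet its behavior looks like purposive patrolling.]

```python
# No map, no plan, no representation of the hallway: just two reflexes.
hallway = list("....#....#....")   # '#' = obstacle, '.' = open floor
pos, direction = 0, +1
trace = []

for _ in range(40):
    ahead = hallway[(pos + direction) % len(hallway)]
    if ahead == "#":
        direction = -direction                    # reflex 1: blocked? turn around
    else:
        pos = (pos + direction) % len(hallway)    # reflex 2: clear? keep moving
    trace.append(pos)

print(trace)  # looks like deliberate patrolling between the obstacles,
              # but it emerges entirely from the stimulus-response loop
```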

And so Agre wrote this—

Matt Teichman:
—abstract representations, in this context: how would you think of those? Is it kind of like a mental picture, or a picture in your mind? How would you define that?

Gaurav Venkataraman:
The way Agre argued it in Writing and Representation is that abstract representations of things are what we tend to use with tools. So when you write, you use the word chair: that’s an abstract representation of a chair. But a chair can mean many things to people in different contexts. So the chair that I’m sitting on, the chair that you are sitting on, the chair that I’m throwing at you: those are, in a sense, different objects, and we condense them into the abstract representation chair in order to use the technology of writing.

So going back to your question—this is a very long-winded answer—about this computer metaphor for the brain: Agre made this point, which I thought was extremely thought-provoking, which is that we often look to our tools and try to use those tools as metaphors to make AI progress or to understand the brain. But we actually build tools specifically to do things that our brains are bad at. And so building tools to do things that our brains are bad at, and then using those tools to map back onto the brain to understand it, is putting things almost exactly backwards. That kind of dooms you to never actually understand the brain, because you’re starting with concepts that your brain doesn’t use, almost definitionally.

Matt Teichman:
I couldn’t agree more. I still do it, but I totally agree.

Gaurav Venkataraman:
Yeah, we all do it. And there’s another take on this. Iain McGilchrist’s work is on my mind, because two days ago, he released his new book, The Matter with Things, which I highly recommend, in addition to The Master and His Emissary. He has a point, based on hemisphere differences, that we’re too focused on reductionist, mechanical explanations of the world. But in any case, trying to think about the brain as a computer, or as a Turing machine, is how I thought about it for a long time, and I think it’s probably the wrong way to think about it. I don’t say that flippantly—like, if you think about it that way, you’re wrong, or it’s not going to be generative. But if you choose to think about it that way, you should try to be as cautious as possible about the baggage that your metaphor is going to bring to bear.

Matt Teichman:
Be prepared to encounter some challenges.

Gaurav Venkataraman:
Exactly.

Matt Teichman:
So maybe to return to the question we started with, which is where memories are stored—if they are stored anywhere—what’s the motivation for thinking they might be stored in unexpected places?

Gaurav Venkataraman:
Right. So it goes back to when we were talking about the 1950s, and this is before the synaptic memory hypothesis was firm. At that time, it was very appealing to think that memories could be stored in molecules. Molecules are long-lasting, and if you want to make the mechanical analogy, it’s like—

Matt Teichman:
—magnetic tape…

Gaurav Venkataraman:
Exactly. It’s magnetic tape. And if you look at DNA, you’re like, well, this could be like a tape.

Matt Teichman:
It does kind of look like tape!

Gaurav Venkataraman:
Exactly. And in the modern era, there are synthetic biologists, like Erik Winfree, who have literally made Turing-complete machines out of DNA.

Matt Teichman:
And Turing complete means it can do everything that any computer can do?

Gaurav Venkataraman:
Yeah, in theory. And that work has not put thermodynamic constraints on what the speed of the computation is, etc. They do it, thus far, in test tubes—and I think maybe some in vivo, but mostly in test tubes—to show that these circuits can compute.

So in any case, people were thinking about this, and there was this guy, who initially was a TV producer, and then became a scientist afterwards, called James McConnell. And he was working with this organism called the planarian flatworm. The planarian flatworm is the simplest known organism to have a 2-hemisphere brain. It has a 2-hemisphere brain in its little head, and it’s got little ear-looking things that do the sensory input.

Matt Teichman:
It’s adorable!

Gaurav Venkataraman:
It’s an adorable little worm, and it moves around and slithers kind of cutely. It has this amazing regenerative capacity, so you can cut it up into reportedly like 237 little pieces—

Matt Teichman:
237!

Gaurav Venkataraman:
Yeah, that’s the number, I think. Something like that.

Matt Teichman:
It’s a giant—and random—number.

Gaurav Venkataraman:
Yeah. I wouldn’t necessarily stand by that number. I don’t know, exactly—it’s in the methodology. But basically, I’ve cut those things up quite a lot; you can cut it into pretty small pieces, and it will regrow an organism.

McConnell had this idea while working with these flatworms: what if I taught them something? He was a psychologist, and he was trying to teach them things. He said, okay, well, they seem to be learning. What happens if I cut one in half? Which half will remember? So he started making this claim that both the head and the tail would remember the memory. That would suggest that the memory is at least not stored in the planarian brain. It could still be stored in synaptic connections around the periphery of the worm. But at the very least, you would have to admit that it’s not stored in the brain connections.

Matt Teichman:
Right. That does seem intuitive. Man, it’s almost like a Star Trek transporter blooper experiment, except it’s with different parts of the body, I guess is the difference.

Gaurav Venkataraman:
Well, it got very Star Trek when McConnell did his next experiments, which involved cannibalism. These worms are cannibalistic when hungry. So what he would do is train a group of worms, grind them up, and feed them to naïve worms, and he claimed that the memory transferred over from the worms he had trained to the naïve worms.

Matt Teichman:
Wow.

Gaurav Venkataraman:
Yeah. There were—

Matt Teichman:
—how do you test what that little worm can remember?

Gaurav Venkataraman:
This was the core problem with the work—the reason why it was ultimately discredited—and I would argue that it’s still a problem in neuroscience. You have to have a really good behavior, and you have to believe that your behavior is what you think it is: that it is truly a fear memory, or truly a spatial memory, or what have you. McConnell, at the time, wasn’t that far from Pavlov’s experiments, I think. People were still thinking a lot about these shock and light pairings. And so, what he was doing was literally electrifying—giving an electric shock to—the worms, and he would switch on a light before the shock. The idea was that they would learn to pair the light with the shock, and then—

Matt Teichman:
—they would recoil when they see the light, expecting a shock?

Gaurav Venkataraman:
Exactly: recoil. And he would look at the worm and decide if it had recoiled, so there was this observer situation—which people do in behavioral experiments, and it’s fine, but you have to be honest about it, and careful about it, preferably blinded. That kind of thing.

So that was the behavior, and it’s important to note that at the time—this was in the late ‘50s-ish, maybe even early ‘50s—associative learning was a brand new field, really. It wasn’t well known how to train organisms, in general. And so, McConnell was trying to advance this idea of memory outside the brain—

Matt Teichman:
—is that just a term for conditioning, training?

Gaurav Venkataraman:
Exactly. Things that we think about today, like classical conditioning, as part of the parlance—the appropriate controls to do for those kinds of experiments were worked out in the ‘60s and ‘70s. That was the heyday of behavioral learning experiments in mice, and things like this. And so now, if you go to a neuroscience lab, it’s taken for granted: okay, we’re going to do a fear conditioning task in a mouse. Everyone knows how to do it; everyone accepts that this readout really is fear; and you just jump to what’s considered the interesting thing, which is the neural correlates, etc.

But this wasn’t at all true in the ‘50s. So they were trying to put together how to train organisms in general at the same time they were putting together this radical molecular hypothesis—that memory could be stored outside of the planarian brain. It was an exciting time, but a very dangerous time as well.
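
[Editor's note: the light-shock pairing described above is classical conditioning. A standard way to formalize it—worked out later, in the era of controls the speaker mentions, and not something McConnell had—is the Rescorla-Wagner update rule; the parameter values below are invented.]

```python
# Rescorla-Wagner sketch of light->shock conditioning. V is the learned
# association strength; alpha_beta (salience/learning rate) and lam (the
# maximum association the shock supports) are made-up illustrative values.
alpha_beta = 0.3
lam = 1.0

V = 0.0
for trial in range(1, 11):
    V += alpha_beta * (lam - V)      # each trial: light paired with shock
    print(f"trial {trial:2d}: association V = {V:.3f}")

# V approaches 1.0: the light alone now predicts the shock, which is what
# "recoiling at the light" is supposed to operationalize behaviorally.
```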

Matt Teichman:
So if it’s not stored in the brain, and you cut the worm in half—and I’m just imagining its head is up here and its feet are down here (obviously, it doesn’t literally have feet, but whatever)—the bottom half is down here, and you can cut them into 237 pieces. Is the idea that like the full memories are stored in every single cell? What is the hypothesis about where the memories come from, if not from the brain?

Gaurav Venkataraman:
That’s a great point. What if you cut it in three pieces? What if you cut it in four pieces? Presumably, if you cut it into, like, the minimal possible speck of a piece, and the memory is stored there, that would tell you something really important.

As far as I know, no experiments along those lines were done or have been done. Somebody actually suggested that to me, because there was a professor at the Whitehead Institute at MIT—whose name I’m now blanking on—who’s shown that you can generate a planarian from a few cells. I think what he did was: he had a dead worm, and he somehow seeded it with some tissues from a live worm, and showed that the live tissue could take over the dead tissues, or somehow regenerate itself.

And so, somebody was suggesting exactly that; he said, well, why don’t you train an organism, take a single cell, and then show that the memory can transfer over into your regenerated organism in this way? Then you would really have a strong argument that it was this kind of single-cell memory. I think something like that could be done, and probably should be done, if this claim is to be made in the strongest possible terms. But as far as I know, nobody has tried that yet.

McConnell’s explanation was that the memory was stored in RNA. And the way he tried to demonstrate that was by extracting RNA from the trained planarian flatworms, injecting it into naïve flatworms, and then claiming the memory transferred over. But there’s another really important thing about the behavioral experiments that we should emphasize here, which is that it wasn’t a straight-up memory transfer. His claim was: after I transferred the RNA, you could retrain the worms on the same task faster than you normally could, relative to some similar task.

Matt Teichman:
They dimly recalled what the other dead worm remembered.

Gaurav Venkataraman:
Exactly. They needed this reminder training, but not as much training. And so already, you’re thinking: okay, well, you’re scoring this stuff by eye, the field doesn’t really know how to do behavioral experiments yet, and it’s not even a straight-up memory transfer; it involves this retraining. It seems like there are a lot of ifs going on here.

But the field was accepting it, and for a while, McConnell was riding high. He became, I believe, a tenured professor at the University of Michigan. He was getting a lot of NIH funding. And he was also a media darling. He had a background as a television producer, so he would go on TV and say: oh yeah, in 5 years, we’re going to have professor burgers. Instead of going to college, you’ll just eat this burger, and then you’ll have knowledge—just like the planarian flatworm can eat other worms and have knowledge.

Matt Teichman:
He would have loved The Matrix, if he had ever seen it.

Gaurav Venkataraman:
Yeah, exactly. So this was going on, and there were a lot of other labs that were doing this kind of RNA transfer in goldfish, some in monkeys. There were some important controls that got done that kind of shed doubt on the experiments. Some people said, well, it’s really just the uric acid effect—effectively, in the RNA transfer, you are caffeinating the recipient, so that’s why it can learn faster. It’s not a true memory. And it all came to a head when this Nobel laureate in chemistry got interested in the molecular basis of memory. His lab tried to reproduce the behavior and claimed that they could not reproduce it, and the field kind of scattered—

Matt Teichman:
—I can’t say I’m shocked by that.

Gaurav Venkataraman:
Yeah, the field kind of scattered at that point. McConnell’s experiments were not great, but some of them were pretty good, and some of the other experiments were pretty good. So various neuroscientists and biophysicists have been interested in this idea over time, as we learn more about the brain and the weird role that RNA seems to play in the brain.

Like, a lot of noncoding RNA—RNA that doesn’t seem to encode proteins—gets expressed, which is metabolically expensive. It seems to get very specifically trafficked, so it has very specific cellular locations in the brain. And so, it feels like that noncoding RNA should be doing something. If you squint at it—and again, if you’re willing to excuse the machine metaphor—it kind of looks like a software layer. So there has been this idea that perhaps there was something to these McConnell experiments, something to this kind of RNA memory/RNA computation hypothesis.

Matt Teichman:
That’s what I was going to say. You’re making it sound pretty crazy and implausible, the way you present the research, and yet you’ve worked on it. So I’m interested in how we get from there to: well, maybe we should continue exploring this hypothesis.

Gaurav Venkataraman:
You always have to evaluate the craziness of a hypothesis with respect to its plausibility along all the domains that you can think of.

Matt Teichman:
Like, how does it ramify in every possible thing you can observe?

Gaurav Venkataraman:
Exactly. Part of the reason why I got interested in it was really because of a sociologist of science, Harry Collins. He had a student who wrote this tome of a dissertation—I went and saw the physical copy in Wales, and it’s, like, two volumes, single-spaced, 800 pages each, typewritten—that did an extremely thorough study of all of the RNA memory transfer experiments: why they were disbelieved, and what happened. Because there were a lot of related experiments about various molecules storing memories, etc.

That body of sociological work made me feel like: okay, there are actually some real gold nuggets in here—work that looks really good, that has not been disproved, that actually makes sense in light of all the molecular details we have learned since the ‘50s, and that makes sense in light of what we’ve learned about classical conditioning and how to appropriately design the experiments. So with the benefit of hindsight, these actually look like really convincing experiments.

Matt Teichman:
What’s an example of something we’ve learned about molecular biology, since the original time of the experiment, that this seems to fit with?

Gaurav Venkataraman:
I would say the noncoding RNA expression in the brain is definitely the major thing. John Mattick has been the major proponent of that. He was standing up for noncoding RNA, which was considered junk. The way that ideas work in biology is: people observe something, everyone assumes that it’s junk, and some framework is erected under which it’s just junk, and then later, it’s discovered to be functional.

Matt Teichman:
Noncoding means it doesn’t in any way ramify in how our bodies are shaped, or how we metabolize stuff?

Gaurav Venkataraman:
No, it doesn’t mean that; it just means that it doesn’t make protein. So it’s noncoding in the sense that if you believe that the role of DNA is to make protein—the so-called central dogma of molecular biology—then this is noncoding, because it doesn’t code for proteins.

But what Mattick has argued is that this stuff is very functional—exactly what you said, actually—that it’s extremely important for, for example, body patterning. And there is tons of evidence for that now; there are tons of high-prestige labs studying noncoding RNA and all its wonderful functions. It is now known that this noncoding RNA is very valuable, but initially, it was just considered to be junk.

Matt Teichman:
So it fits in well with the hypothesis that RNA might have another job besides to make proteins.

Gaurav Venkataraman:
Exactly. And noncoding RNA in the brain, in my view, is particularly suspicious, for all the reasons that I articulated: there’s a lot of it, it seems to be differentially expressed, and it seems to be expressed in very specific locations. If you look at the synapse, there are these things called RNA granules sitting there. All of that makes you wonder about, like, what noncoding RNA is doing in the brain, and whether Mattick was onto something.

The other behavioral experiments that had been done by that time were by this guy Mike Levin, at Tufts. He had revisited McConnell’s experiments with a different, better assay, in a fully automated system, so it was much more objective than what McConnell was doing in the ‘50s. He claimed that planarian memories were stored outside of the brain. So for those reasons, it seemed like the work was worth revisiting.

Matt Teichman:
But how would you explore this on an animal that you can’t regenerate? Because it seems like the planarian worms have this very special feature, which is that they grow back when you cut them.

Gaurav Venkataraman:
I guess that’s the third reason why the work seemed worth revisiting. People were learning about epigenetics and transgenerational effects of memory in mouse models. This guy, Kerry Ressler—who was at, I think, Emory at the time, and is now at Harvard—did fear conditioning in a mouse, and claimed in a Nature Neuroscience paper that the fear was passed on to progeny.

Eric Miska’s lab did a nutritional assay in a mouse—or some sort of nutritional modification—and showed that there were methylation marks that were then passed on to progeny. So this idea that there was some experiential stuff getting passed on to progeny, even in mammals, was emerging from the literature. Extremely controversial, but emerging nonetheless.

Matt Teichman:
Yeah—memories, cognitive abilities, these kinds of things would, at best, seem to be the exception, rather than the rule, when it comes to getting passed from generation to generation. Certainly, knowledge of a language does not at all get passed from generation to generation.

Gaurav Venkataraman:
Yeah. And you don’t wake up and you’re 7 years old, and then all of a sudden have your father’s memories, right? So clearly, there is some filtering that goes on, and so the question is—

Matt Teichman:
—but even if anything gets transmitted, that’s quite eyebrow-raising.

Gaurav Venkataraman:
Yeah. So in C. elegans now, and I think even in bird models and other worm models, the idea of transgenerational inheritance of sensitivities and some nutritional states is relatively uncontroversial. There are big fields that study it, and they have very reproducible work, and they’re working on the mechanisms. So maternal transgenerational inheritance—things like this—are known. And in the worm model—in the C. elegans model—this guy in North Carolina whose name now escapes me—

Matt Teichman:
—C. elegans is another worm?

Gaurav Venkataraman:
…is another worm, different worm: 302 neurons, simple nervous system. He actually showed that in some contexts, RNA from the neurons actually passes through to the germ line. This was a clear example of some molecules, at least, kind of moving from the nervous system to this more transgenerational setup. So there is evidence on the margins that something like this might be going on.

But then, you’ve got to understand, there are a lot of question marks here, right? The other thing about, for example, Kerry Ressler’s work is that it wasn’t totally clear that the behavior was a meaningful fear memory. So this is still a question in the field: how do you know that when your mouse freezes, this is really what you would call a higher-order fear memory, versus something that you might think is just a lower-order bit that could conceivably be transmitted via some wacky mechanism?

Matt Teichman:
Have the initial observations about mice been replicated at all, or what exactly have we found, in terms of what mice seem to be able to inherit this way?

Gaurav Venkataraman:
I think the part that’s been replicated thus far is this idea that tRNA fragments—which are these little bits of RNA—are playing some role in intergenerational inheritance. There are these papers from Katharina Gapp and Eric Miska (whose lab I was in for a spell) about experiences showing up in what’s called an ‘F2’ generation—so an older generation’s experience being transmitted down. Different kinds of environmental exposure cause these post-transcriptional modifications.

Matt Teichman:
So the generations downstream could not have been familiar with whatever they were exposed to, but they reacted in a way that suggested they were a little bit familiar with it—stuff like that.

Gaurav Venkataraman:
Yeah. Or even something simple, like a high stress response, causing a propensity to stress, or things like that; very broad strokes.

Matt Teichman:
Where they wouldn’t have any reason at all to actually stress.

Gaurav Venkataraman:
Exactly, or the propensity to stress. So it’s not like there was some pairing of a banana smell in the old generation, and the young generation is sensitive to a banana smell. That is what Kerry Ressler claimed, and that claim—this kind of true associative learning—is very controversial. But this idea that if you generally stress out a mouse, then the subsequent generations are generally fearful, and tRNA fragments are playing some sort of role—I believe that is pretty uncontroversial, or at least quite well-replicated, these days.

Matt Teichman:
Okay. I’m convinced this line of inquiry is well-motivated. What was some of the stuff you looked into and what did you find?

Gaurav Venkataraman:
I guess there are two things that I was able to do. The first was: I revisited Mike Levin’s work (in collaboration with Mike Levin), trying to use an even better learning assay. As I said, your conclusions are really only as strong as your learning assay. You need a behavior that’s robust and that’s interpretable as a real fear memory, and then you can test whether it is or is not stored outside the brain. Then you have your conclusion, in the planarian flatworm.

I was using a fear assay that was more associative than what Mike had been using. And we found some evidence for memory being stored outside the brain, without any retraining necessary. You could just cut the heads off, you could let them grow back, and you would have fearful worms—without having to retrain them and argue, like, oh, they just learn the task faster. So it was a cleaner demonstration of memory outside the brain in the planarian flatworm, with a cleaner, more purely associative learning assay.

That was the basic setup, and that’s at the point at which I got the EV grant to move to Eric Miska’s lab at Cambridge and try to pursue the hypothesis molecularly, with some collaborators of mine.

Matt Teichman:
Okay. So that experiment was at the cutting-the-head-off-a-worm level. What did you look at at the molecular level?

Gaurav Venkataraman:
I think going to the molecular level is important, because biologists want a molecular story, and they want a molecular story for arguably quite a reasonable reason, which is that biology is very messy. Your conclusions are statistical, particularly behavioral conclusions. And so, it’s easier to believe that a behavioral conclusion is reasonable, if you have a really strong mechanistic story behind it.

Matt Teichman:
It’s like you’ve uncovered the causal backstory behind what you observed, maybe.

Gaurav Venkataraman:
Exactly. And also, to find a robust mechanism, you really need a good behavior. You know what I mean? So it’s a little bit of a stress test of the behavior as well, in addition to just feeling like a complete story.

So we went to Cambridge to try and figure it out. The big success that we had there was actually a theoretical one, because what we wanted to do was just do RNA sequencing. The idea was: let’s RNA sequence before and after learning. Let’s find the differences in what the RNA looks like, and then, based on that, try to come to some sort of conclusion. In the back of my mind, I had these Erik Winfree circuits that I was describing earlier. They were essentially these see-saw-like things, where you would have an RNA strand come in, and then you would set off this string of RNA strands hitting each other, and have an RNA strand come out. You could do general purpose computation.

And so, what I set out to do with a collaborator there in Eric Miska’s lab, David Jordan, was figure out: what are the thermodynamic constraints on computing with RNA both fast and accurately? What we discovered is that these linear circuits have a very hard trade-off between going fast and being accurate. It didn’t seem like you could compute (so to speak) fast—meaning on cognitive time scales—with these linear circuits that I was trying to look for.

Matt Teichman:
And fast here means: the experience has a chance to get imprinted on the RNA.

Gaurav Venkataraman:
That was my guess, right. Then you could make arguments like, well how fast do you actually need it? What’s the time scale? And my answer is: I don’t know. All that I know for sure is that there seems to be this really hard trade-off between going fast and accurately with these kind of circuits. And Winfree knows this.

Matt Teichman:
Although memories being inaccurate is also—that’s a well-established thing as well, isn’t it?

Gaurav Venkataraman:
Sure. But what we also discovered was something kind of interesting, which is that if you don’t have these linear circuits—where A causes B causes C causes D—but instead you have reaction topologies that are all-to-all connected, you can compute both fast and accurately. And in fact, the faster you go, the more accurate you get.

Matt Teichman:
So in other words, if the relevant strands of RNA have a lot of cycles in them? Is that the idea, or—?

Gaurav Venkataraman:
It’s that they can talk to all the other strands, basically. There’s a lot of interconnectivity going on.

Matt Teichman:
Okay, yeah.

Gaurav Venkataraman:
Literally what it means is that if you write down the reaction network, it looks like an all-to-all graph instead of a linear graph. And the reasons for that have to do with thermodynamics, and kinetic proofreading, and whether you discriminate via binding energies or activation energies, which I won’t lecture you on here—

Matt Teichman:
—next episode!

Gaurav Venkataraman:
But it was a really interesting conclusion to come to, because we realized that these reaction networks actually existed in the brain. The reaction networks that we seemed to arrive at thermodynamically looked a lot like RNA granules, which are sitting at the synapse. And so, we started thinking about RNA granules, thinking about how RNA granules respond to electrical input, and we came to the hypothesis that these granules were the site of what you might think of as computation in the brain.

Now again, whether you want to say they are the real seat of memory, versus the synapse, or whether they are in dynamic play with the synapse—just translating proteins that then get stuck onto the synapse—all of that is still very much up for debate, and there are people like Erin Schuman who are doing great work along those lines. But we definitely came to the conclusion that the RNA granules were doing something very interesting and kind of suspicious.
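
[Editor's note: the purely topological part of the linear-versus-all-to-all distinction can be made concrete with adjacency matrices, as below. This shows only the shape of the two reaction networks; the thermodynamic argument about speed and accuracy is in the paper itself, and the species names are placeholders.]

```python
# Two reaction-network topologies over the same five species.
import numpy as np

n = 5
# Linear cascade: A -> B -> C -> D -> E. One route through the network.
linear = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    linear[i, i + 1] = 1

# All-to-all: every species can react with every other species.
all_to_all = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)

print("linear cascade edges:", linear.sum())      # 4 edges, single path
print("all-to-all edges:    ", all_to_all.sum())  # 20 edges, many paths
```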

Matt Teichman:
So in other words, you’ve uncovered one potential job for this RNA that doesn’t make proteins in the brain, where previously, that was a question mark.

Gaurav Venkataraman:
Uh, ish.

Matt Teichman:
Okay. [ LAUGHTER ]

Gaurav Venkataraman:
I think a lot of people have done a lot of work showing that noncoding RNA does a lot of important stuff. I wouldn’t say we were the guys who realized that noncoding RNA matters—if there was one guy, so to speak, it was John Mattick, who taught the community that noncoding RNA was really important.

To the extent that our paper is important, I think—hopefully—it will provide a theoretical foundation for the community to understand something they are already understanding experimentally: that RNA granules are extremely important, functionally. We are definitely not the only people to say this; there are many people studying RNA granules. But I think we are the only people so far to have realized that the structure of this reaction network is very special, from a thermodynamic perspective.

Matt Teichman:
Man, I feel like just RNA keeps coming up over and over again in the news, between the COVID pandemic, and—

Gaurav Venkataraman:
Yeah, you know, RNA is pretty crazy. Thinking of RNA as a software layer of the cell—for all the badness of that style of analogy—I think is pretty good. It gets you pretty far. It comes with a lot of very dangerous baggage, but nonetheless, it’s a pretty good way to think about things.

Matt Teichman:
If it were software, would it be the operating system or the file system? I wonder.

Gaurav Venkataraman:
That’s a great point. It’s something like: the file system is the operating system. Like, the strands are the information, and they are also actively acting on each other, very much like this “intelligence without representation,” but at the intracellular level, where things are just happening dynamically. It’s not like there’s some planning system that you’re going to find; that would be much more like these linear reaction networks. It’s going to look like these all-to-all topologies; it’s going to look very chaotic. But the reason it looks chaotic is so that you can compute by activation energy differences, which is how you compute both fast and accurately.

Matt Teichman:
How did you test the ability of these RNA networks to perform computations? When you mentioned performing computations, I’m just thinking, okay, somehow you fed them an input, and somehow you read an output off of them, and then you had some expectation about what the output should be. That’s how you determine whether it was accurate or not. You established this trade-off between speed and accuracy. How did that work?

Gaurav Venkataraman:
So, we didn’t do any experiments. It was purely a theoretical paper.

Matt Teichman:
Ah, okay. Uh-huh.

Gaurav Venkataraman:
Based on the principles of kinetic proofreading.

Matt Teichman:
So you showed that it was a mathematical fact.

Gaurav Venkataraman:
Exactly. There were a lot of complicated mathematical proofs—like a 20-page appendix full of them—to show it in general. Erik Winfree has done the work to show that these things compute, and what he does is use fluorescent tags. So what you see is a fluorescent output, as a function of certain inputs.

Matt Teichman:
I wonder if this is something that could be of interest to people designing either software or hardware. There’s this phrase I sometimes like, biomimetics—various technologies imitating stuff from nature. That’s another place my brain instantly goes: can we use this?

Gaurav Venkataraman:
Yeah, definitely. Microsoft has been interested in DNA and RNA computation for a long time. Some people in Cambridge, at Microsoft Research, build classic Erik Winfree-style synthetic circuits. But also, even more practically, there are companies like Twist Bioscience, a producer of DNA. They have this new DNA synthesis technology, and they have collaborations with Microsoft Research, the idea being that DNA is an amazing storage substrate. It’s very small. You can store a lot of information on it. It lasts a super long time. There’s this idea that there is a data deluge in the world, and we need to move from standard hard drives to DNA storage as the basis of computing. That’s totally independent of how things work in the brain, but it’s just to say that DNA is a great macromolecule.
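
[Editor's note: the "DNA as a storage substrate" idea reduces, at its very simplest, to packing two bits into each base, as sketched below. Real DNA-storage pipelines—including the synthesis-company collaborations mentioned here—add error-correcting codes and sequence constraints that this toy omits.]

```python
# Two bits per base: the simplest possible DNA storage codec.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"memory")        # 6 bytes -> 24 bases
print(strand)
assert decode(strand) == b"memory"
```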

Matt Teichman:
So in terms of the big takeaway for this research, one thing I’ve heard about octopuses is that the activity of their central nervous system is more distributed through their bodies than in people. So is the takeaway that we are a little bit more like octopuses than we thought?

Gaurav Venkataraman:
We’re definitely more like octopuses than we thought. And I’ll tell you an octopus story as a takeaway here. When I graduated college, I went to the National Institutes of Health in Bethesda, and I did a 2-year fellowship there with Miguel Holmgren, a great scientist. At the time, he was working on RNA editing—enzymes that edit bases of RNA, and therefore change protein function. One of his collaborators, Josh Rosenthal, a couple of years after I left, published this big Cell paper showing that octopuses will actually edit their RNA in response to temperature, in order to survive in the cold. (I think that’s right. It’s been a long time since I read the research, but it’s something along those lines.)
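
[Editor's note: the mechanism described here—ADAR-style A-to-I editing—can be sketched in a few lines: the enzyme deaminates adenosine (A) to inosine, which the ribosome reads as guanosine (G), so a single edit can change the encoded amino acid. The codon table entries below are standard genetic code; the example transcript is invented.]

```python
# A-to-I RNA editing: an edited A is read as G, which can recode a protein.
CODON_TABLE = {"AUA": "Ile", "AUG": "Met", "GUA": "Val", "GUG": "Val"}

def edit_a_to_i(codon: str, position: int) -> str:
    """Deaminate the A at `position`; inosine is shown as G, as it is read."""
    assert codon[position] == "A", "can only edit an adenosine"
    return codon[:position] + "G" + codon[position + 1:]

before = "AUA"                        # encodes isoleucine
after = edit_a_to_i(before, 2)        # -> "AUG", which encodes methionine
print(CODON_TABLE[before], "->", CODON_TABLE[after])  # Ile -> Met
```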

And so, in the same way that octopuses are editing their RNA to respond to the environment, I think humans are probably also editing their RNA to respond to the environment, and memory storage is a part of that. RNA-editing enzymes are known to operate in humans, and I think the story of octopuses using RNA as a software layer, if you will, and humans also using RNA as a software layer—distributed or not distributed—probably holds true broadly.

Matt Teichman:
Gaurav Venkataraman, thanks so much for joining us.

Gaurav Venkataraman:
Thanks for having me, Matt.


Elucidations isn't set up for blog comments currently, but if you have any thoughts or questions, please feel free to reach out on Twitter!