
Episode post here. Caroline Wall does it again! Thanks to Caroline for her excellent transcriptionage.


Matt Teichman:
Hello, and welcome to Elucidations, an unexpected philosophy podcast. I’m Matt Teichman.

Dominick Reo:
I’m Dominick Reo.

Matt Teichman:
With us today is James Koppel, PhD student in computer science at the Massachusetts Institute of Technology and professional mentor to experienced software engineers at jameskoppelcoaching.com. You may also have heard of him in connection with the security analysis of the Voatz voting app, which was recently covered in the New York Times. And he is here to discuss counterfactual inference and automated explanation. James Koppel, welcome to Elucidations.

James Koppel:
Thank you. I feel very welcome.

Matt Teichman:
Excellent. Okay, so our listeners may have heard of the topic of counterfactuals from our previous episode, Episode 91 with Paolo Santorio on the logic of counterfactuals. But for people who didn’t listen to that episode, maybe we could just introduce the topic. So what is a counterfactual conditional statement? What would be an example of one?

James Koppel:
An example of a counterfactual—if it had rained today, I would have brought an umbrella. So there are a few features that make a counterfactual a counterfactual. And to really iron out the difference between a counterfactual and other kinds of statements, I have to explain what's called the causal hierarchy. So the simplest kind of statement you can make is something like: based on looking at the sky, it will rain later today. And this is a question that you can answer, and get a statistical estimate on, just by making a giant table of: on how many days was there this kind of cloud, and then, was there rain?

Matt Teichman:
So it’s like you’re predicting the future, in that case, based on prior observations.

James Koppel:
Yes. So it’s like the things that are happening today are drawn from the same distribution as the things that happened yesterday and the day before. So that is prediction—level one of the causal hierarchy. Level two of the causal hierarchy is intervention. So you might not know this, but humanity invented weather control quite a while ago, and my understanding (just from reading news articles) is that we basically know how to do one thing in weather control, which is to shoot silver iodide into the sky. Silver iodide is a nucleating site for clouds. And for some meteorological reason, it can be used both to create and to destroy clouds. So for instance, at the 2008 Beijing Olympics, they had cannons of this stuff situated outside the city because everything had to be perfect. They did not want it to rain.

So let’s intervene on the weather, now. Let’s ask the question: if I shoot silver iodide into the sky, now will there be rain? And this you can no longer answer by looking at the table of what’s happened in the past, because you’re changing the correlations. Maybe yesterday there was naturally a lot of silver iodide because of lightning. This time, there’s silver iodide without the lightning. And so all the other things it’s correlated with are messed up. And there’s a whole field of causal inference which is dedicated to answering this kind of question in the face of intervention without having to run randomized controlled experiments. But the gold standard is still a randomized controlled experiment. So with prediction, you’re just observing some facts and then making inferences about the future—or also about the past. Like: given the sky, did it rain yesterday? There is no time in statistics.
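
To make the gap between level one and level two concrete, here is a minimal Python sketch (not from the episode, using a deliberately simplified toy model in which the silver iodide itself has no effect on rain) of how the observational table and the intervention can disagree:

```python
# Toy structural model (made up for illustration): storms cause lightning,
# lightning deposits silver iodide, and storms cause rain. In this toy world
# the iodide itself does nothing, so observation and intervention disagree.
import random

random.seed(0)

def simulate_day(force_iodide=None):
    storm = random.random() < 0.3
    iodide = storm if force_iodide is None else force_iodide
    rain = storm and random.random() < 0.9
    return iodide, rain

# Level one (prediction): build the giant table from passive observation.
days = [simulate_day() for _ in range(100_000)]
with_iodide = [rain for iodide, rain in days if iodide]
print(sum(with_iodide) / len(with_iodide))   # ~0.9: iodide is a marker of storms

# Level two (intervention): shoot iodide into the sky regardless of the weather.
forced = [simulate_day(force_iodide=True) for _ in range(100_000)]
print(sum(rain for _, rain in forced) / len(forced))   # ~0.27: no extra rain
```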

Matt Teichman:
So it’s more like we’re just observers, but we’re not actually making stuff happen.

James Koppel:
Yeah, like I have a giant table that lists all the things that happen, and I just count them up and see what goes together.

Matt Teichman:
Right. Whereas if you’re doing something, it’s not just a table you’re looking at. It’s as though you’re putting stuff into the table.

James Koppel:
Yes. So then there is the counterfactual. Now, this is a question like: given what I see today, what would the sky look like if I had shot silver iodide into the sky yesterday? So there’s a little more going on here. So now, we’re both making some predictions about various pieces of the state of the world yesterday, then doing an intervention there, and then looking at today. I mean, say you rewind, go back in time—rather, you predict the past based on your observation today. Then, you intervene in the past. And you play time forward and look at the new future.

Matt Teichman:
Yeah. It’s almost as though you’re running the clock forward again to see if the present comes out a certain way—like if you were God, and you could run the simulation of the universe backwards, and then change something, and then run it forwards again, what would happen?

James Koppel:
Yes, exactly.

Matt Teichman:
Cool. So these are three types of statements. And I can see how we’re ascending further and further into what-if scenarios as we move up the hierarchy to the third level, which is counterfactual statements.

Dominick Reo:
So why are computer scientists generally interested in counterfactuals?

James Koppel:
So I can give two answers to that. First, I’ll backpedal and say, when you say that computer scientists are interested in counterfactuals, it’s more that we, my collaborators and I, are interested in counterfactuals. So we do have a little bit of a fight to convince the rest of the world. So I’ll give you two answers as to why we care as computer scientists. One answer is from the AI perspective that my colleagues have, and one answer is from the programming tools builder’s perspective that I have.

So I’ll start with the AI perspective, because that one is more about building systems that affect the real world. The reason that AI builders should be interested in counterfactuals is that there are a lot of things in life that are counterfactual questions. A very basic one: looking at the efficacy of a drug versus a placebo is a counterfactual question. This person got better—is it because of the drug? Similarly, in law, there’s a ton of legal theory about the allocation of blame. It’s murder if you cause someone to die. And so, there was a case where a police officer who was chasing after a guy who was running away fell and died. And the guy he was chasing was charged with murder. So these are counterfactual questions, and we want computers to be able to handle them.

Matt Teichman:
It seems like in all these examples, too, there’s a notion of cause and effect. There’s the sense that if we observe this effect, how can we tell that something was the cause of it? Well, we go back, and we remove the cause, and see if we still got the effect. That’s maybe one heuristic.

James Koppel:
Yeah. So we’re touching on the distinction between general causation and actual causation. General causation is statements such as ‘lightning causes fire’. Actual causation is statements about a specific configuration of events, such as ‘lightning caused the fire last night’. The thing you talked about just now was A caused B in a certain configuration, because if A hadn’t happened, B wouldn’t have happened. That is called Lewis causality. And there is a ton of theory about why that’s actually an unsatisfactory definition of actual causation.

So everything that happens causally in real life also has analogues in programming. So I just talked about allocation of blame. Well, where’s the blame when something goes wrong in the world? Well, there are all the things that go wrong in computers that we care about—where’s the blame there?

Matt Teichman:
Yeah.

James Koppel:
So where’s the bug in the system? Those can be counterfactual questions.

Matt Teichman:
It’s interesting both from a ‘how do we fix it’ point of view and from the ‘who’s responsible’ point of view—so both a practical and a moral point of view.

James Koppel:
Yeah.

Matt Teichman:
Right. So computer programs have a lot of bugs. Bugs are mistakes in the computer code that make the program behave differently from how it’s supposed to behave. So there are all kinds of famous examples of this where you go to a website, and you intend to log in and do whatever, check your social media, and the website won’t let you log in. It doesn’t accept your password even though your password is correct. Or, whatever—Obamacare rolls out, and the website crashes when people try to use it, et cetera, et cetera. Whenever stuff on the computer doesn’t work, it’s usually the result of a bug. And then, the thing we want to do to get it to work again is find the bug. What’s an example of a counterfactual statement involving a bug in computer code?

James Koppel:
So I want to talk a little bit about a project out of Stanford 15 or so years ago called the CBI—Cooperative Bug Isolation—Project, which was done by a guy named Ben Liblit, who was a student of Alex Aiken and is now faculty at Wisconsin. So the idea of cooperative bug isolation is: I’m a programmer at Microsoft working on Word, or I’m at Mozilla working on Firefox. I can spend a lot of time testing and trying to see if I can do stuff that makes it crash or otherwise misbehave. But there are already millions of people in the world who are running this day to day. Can I somehow use the data from them running the app to help me find the bugs?

And so what they did is they would instrument the code. So they basically put a lot of little probes in the code saying: record whether at this point x was greater than 10, or record whether at this point these two things were equal—a ton of stuff like that. Collect all this data about facts that are true at different points when the program’s running. Then, these would get reported back home. And then sometimes, it would crash on someone’s machine. And then, they would try to correlate this information they’ve collected with the existence of these crashes.

And they had decent results. But what they found is that because they’re doing a purely statistical approach, they’re not really able to distinguish which are the useful correlating facts to report. What they actually want to answer is a question about causality. So something they’d find is, like: when I run this program on a thousand-megabyte image, it’s more likely to crash. And this is true because whatever weird artifact is in the image that’s causing the crash is more likely to occur in a bigger image. So there were a lot of things like that that they’d pick up on, like: everything that’s correlated with the input being big is predictive of a bug. And that was not useful. So what they need is a way to distinguish the predicates—the facts that are merely correlated with the thing crashing, versus the things that are actually causal—so that they can better minimize and understand the real problem.
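
As a rough illustration of the statistical side of this (a toy reconstruction, not the actual CBI tooling, with made-up predicate names), each run phones home a set of true/false probe readings plus whether it crashed, and each predicate is scored by how much it raises the crash rate:

```python
from collections import defaultdict

def score_predicates(runs):
    """runs: list of (predicates, crashed), where predicates maps a probe
    label to the True/False value recorded on that run."""
    observed = defaultdict(int)   # runs on which the predicate held
    crashing = defaultdict(int)   # ...and which also crashed
    baseline = sum(crashed for _, crashed in runs) / len(runs)

    for predicates, crashed in runs:
        for label, value in predicates.items():
            if value:
                observed[label] += 1
                crashing[label] += crashed

    # Score: how much more likely a crash is when the predicate holds,
    # compared to the overall crash rate. Purely correlational.
    return {label: crashing[label] / observed[label] - baseline
            for label in observed}

# Hypothetical reports phoned home from users' machines.
reports = [
    ({"len(x) > 10": True,  "image > 1000 MB": True},  True),
    ({"len(x) > 10": False, "image > 1000 MB": True},  False),
    ({"len(x) > 10": True,  "image > 1000 MB": False}, True),
    ({"len(x) > 10": False, "image > 1000 MB": False}, False),
]
print(score_predicates(reports))
```

With more realistic data, a predicate like "the input image is big" can score almost as well as the genuinely causal one, which is exactly the signal-versus-noise problem described above.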

And if we really wanted to do the best job possible, there are a number of counterfactual questions to ask here—actually, even not as a programmer, but as a user, you can ask debugging questions. So I’m over your shoulder. You’re having trouble with your program. And you’re like: help, Jimmy. My thing is freezing. I can’t save the file. And I just look at your screen, and I see what’s on your screen right now, and I’m like, I bet you clicked this wrong button 10 minutes ago. You shouldn’t have clicked it. And then, we’ll be fine. That’s a counterfactual statement.

Matt Teichman:
Yeah. And this kind of careful analysis of what a program does at runtime can be really tricky, because every time you do something with the program in real life, you often do something slightly different, and you don’t necessarily know exactly what you did differently each time. So it can often be difficult to reproduce a bug if you don’t know what caused it. So often, what you’ll do is you’ll begin with a very general statement: okay, well, I know sometime in this time range, the bug happened. Something I did here, probably—because I swear it wasn’t ever doing it before this. And then, we’re going to try to narrow it down to something more specific to get to the cause.

James Koppel:
Yes. So debugging does involve a lot of this detective work—thinking: I see this state; what might’ve happened in the previous state?

Dominick Reo:
So I’m wondering—is the idea behind trying to fix these computer bugs to want to find a better way than just looking at these correlations and doing the detective work?

James Koppel:
So first, to untangle two of the things we were just talking about: there is a normal way that people do debugging—there are a lot of different techniques they use, but they all involve some variant of what you call detective work: looking at stuff, figuring out why it’s happening. When I was talking about the correlations earlier, that is not the detective work. That is this guy at Stanford, for his PhD thesis, trying to write a tool to collect a lot of data to help people with this, and then discovering that there’s this missing piece of causality: with the information they’re finding, it’s hard to tease out the signal from the noise.

Dominick Reo:
So the idea is: instead of looking for correlations, to try and pin down the causality.

James Koppel:
So when I started working on that project in 2015, that was the idea—still being able to collect a lot of data from users that can help in debugging, while also getting the causal information that’s more directly effective at isolating the problem. And long story short, I could not actually get that to work.

Matt Teichman:
So what was it about that project that ended up not working, and what are some of the lessons that you drew from that?

James Koppel:
So a big lesson I drew from it is that the field of causal inference is still pretty immature. And there are a number of basic ways in which the theory of causal inference, as it’s been developed, is a mismatch for the programming context. Let’s talk a bit about these broader issues with causal inference.

So one is the definition of an intervention. So I want to say: this program crashed because this list was too big. It was greater than length 10. That means if I go back in time and set the length of the list to greater than 10, it will crash. Well, what does it mean to set the length of a list? It’s kind of like saying, how heavy would your car be if there were 10 items in it? It depends on what those 10 items are. What the program does depends on what the list of length 10 is.

So you can make an analogy to talking about a different kind of derived property. Temperature is not a primordial property of the universe. It’s a statistical summary of the amount of energy in each molecule in the air. So I want to say, what will happen if I set the temperature of this room to 100 degrees? And a lot of the time, we can work with a model. We can say that. But maybe we need something that’s super precise, that might talk about chemical reactions happening in different places in the room. Then, it actually matters how you impart energy to the molecules in this room. So setting the temperature to 100 degrees corresponds to this astronomical number of possibilities for how you put energy into each molecule. And so for some questions, it matters which one.

Another problem is that a lot of the theory of causal inference breaks down in the presence of determinism—which is kind of a weird thing. Most of the time, when we have something that always runs the same way—it’s not random; it’s deterministic—that makes things easier. But in this setting, it makes them harder. So a lot of the causal inference machinery developed by Judea Pearl relies on every combination of events being observable. So if you want to be able to predict whether creating rain will cause mud, you need to have seen rain and mud together before. So forget the fact that we know a little bit of physics and engineering and can predict what will happen. Suppose I see a one-story straw building, and I want to intervene on it in the world and set the number of stories to 10. Now, I want to look at my table of stuff that’s happened in the past to predict what will happen. And if you’ve never seen a 10-story straw building before, then you get a giant divide-by-zero error.

And a third issue, also about determinism, is that determinism can actually introduce spurious lacks of correlation. In causal inference, there’s something called the faithfulness condition. So suppose I have a deck of cards, I deal half the deck to Dominick, and then he deals half of his half to Matt. So my cards caused Dominick’s cards, which caused Matt’s cards. And so there is going to be a correlation between what I have and what Matt has, which is mediated through Dominick. But there’s a strategy I can use where, say, the bunch of cards I give to Dominick is always going to contain the ace of spades, and he’s just going to give the ace of spades to Matt. And now, so long as we always deal that way, there’s absolutely no correlation between what I have and what Matt has. So that is determinism destroying a correlation. But if you just look at the causal graph of ‘what I have caused what Dominick has, which caused what Matt has’, you’d think: oh, these things should be correlated.
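
Here is a minimal simulation of that card example (not from the episode; card 0 stands in for the ace of spades, and card 17 is just an arbitrary card to track), showing the correlation under random dealing and its disappearance under the deterministic policy:

```python
import random

def one_round(deterministic):
    deck = list(range(52))
    random.shuffle(deck)
    if deterministic:
        rest = [c for c in deck if c != 0]
        dominick = set(rest[:25]) | {0}   # Dominick always gets the ace of spades
        mine = set(rest[25:])             # my remaining half
        matt = {0}                        # ...and he passes exactly the ace to Matt
    else:
        dominick = set(deck[:26])
        mine = set(deck[26:])
        matt = set(random.sample(sorted(dominick), 13))
    return 17 in mine, 17 in matt

def conditional_rates(deterministic, trials=50_000):
    rounds = [one_round(deterministic) for _ in range(trials)]
    have = [m for i, m in rounds if i]       # rounds where I hold card 17
    lack = [m for i, m in rounds if not i]   # rounds where I don't
    return sum(have) / len(have), sum(lack) / len(lack)

print(conditional_rates(False))  # roughly (0.0, 0.5): my hand predicts Matt's
print(conditional_rates(True))   # (0.0, 0.0): the correlation is gone
```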

Matt Teichman:
Hmm. So it’s like if they’re guaranteed to be correlated for some other reason other than what’s causing what—

James Koppel:
So if I were to do this process randomly, then what I have correlates with what Matt has. But if I were to say I’m always going to do this, he’s always going to do this, then I can actually hide information about what I have and destroy this connection. And so that’s something which, in all the settings where people like Pearl were doing causal inference, was not really a thing. But when you’re doing normal programming, everything is deterministic by default.

Matt Teichman:
Yeah. That is a surprising result. Because you would think that if the results of the evolution of a system in a simulation, or just the results of some computer code—like what it’s going to output—are more predictable, that would rein the problem in and make it easier. But this reveals a way in which, because determinism introduces further constraints on the outcomes that have nothing to do with things causing each other, it interferes with our ability to identify the causes.

So one of your research projects has been to develop a computer language that has the ability to describe retroactive interventions into the past. What are some of the design features of that language? How’s it set up? And how do you get this effect where you’re able to describe these hypothetical retroactive reality-changing scenarios to get a better feel for what the causes and effects are?

James Koppel:
Sure. So I’ll give some extra context to this. So I told you about some of the stuff I was doing back in 2015, and failing at, on applying causal inference to bug finding, to analyzing programs and understanding what they do. So fast forward from 2015 to 2018, and my office mate, Zenna Tavares, starts to get interested in causality. And much of his thesis—he recently finished his PhD—much of his thesis is on probabilistic programming. Probabilistic programming is a pretty new, hot thing. So a lot of people are creating these fancy statistical models for describing things. Say I want to model how fraud works. So I say: somewhere out there, there’s some number of people who have some motives, and X number of credit cards get stolen. And then some decisions happen that cause someone to go to a place that’s not too far from them. So I can describe a ton of unknowns and random choices, or unmodeled non-deterministic choices, but that still gives me a very rich picture of how fraud works.

So probabilistic programming gives you a language specifically designed for building these models, where every variable is a statistical random variable, one of many possibilities. And I just write down: let some unknown fraction of people be fraudsters. Let there be some distribution. Let there be some number of places nearby where you might try to buy stuff. So I can play this forward normally. And if you play it forward normally, just like a program with randomness, it just gives you a random state of the world.

But the interesting thing about probabilistic programming is that it also allows for inference. It also allows you to play this backwards. It allows you to say: of all the ways of running this program in which I see someone spend 100 dollars at the Apple store when, shortly before, they’d spent 100 dollars at a different Apple store on the other side of the country, what fraction of those are fraud? So probabilistic programming is a programming language specifically designed for building these models and doing inference on them.
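
Here is a toy sketch of those two modes in plain Python (not real probabilistic-programming syntax; the fraud model and all numbers are invented): run the model forward to sample one random world, or condition on an observation by keeping only the sampled worlds consistent with it:

```python
import random

def fraud_model():
    """One random 'world': is the cardholder a fraudster, and is the card
    used far from home? All probabilities are made up."""
    fraudster = random.random() < 0.01
    far_from_home = random.random() < (0.8 if fraudster else 0.05)
    return {"fraudster": fraudster, "far_from_home": far_from_home}

# Playing it forward: just sample a state of the world.
print(fraud_model())

# Playing it backwards (crude rejection sampling): of all the worlds in which
# we observe a far-from-home purchase, what fraction involve fraud?
worlds = [fraud_model() for _ in range(200_000)]
consistent = [w for w in worlds if w["far_from_home"]]
print(sum(w["fraudster"] for w in consistent) / len(consistent))   # ~0.14
```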

So we realized that this is a much more promising setting for doing causal reasoning, for a number of reasons. One is that the programs are a lot smaller than the real programs I was interested in finding bugs in. And another is that you now do have more randomness and less determinism, although the problem with determinism still applies. You still have some.

So Zenna built a language called Omega. And long story short, I think Omega is built in a more elegant way than most probabilistic languages, directly based on measure theory, which is the standard mathematical foundation of probability. And he discovered that he had a pretty clean way of defining intervention, and once we did this, we actually got counterfactuals for free. So I and my other officemate, Xin Zhang, came in to help him on this project and to give it a rich, well-defined formal semantics.

So I guess I can go into a bit about how we define counterfactuals in a probabilistic programming setting. So the way people were doing counterfactuals and interventions before was in causal graphs. The way it works is called the twin network construction. Take the statement: if it had rained, I would have brought an umbrella. So we have some causal network: these are the factors that cause rain; these are the factors that influence my decision to bring an umbrella. We create two copies of that network—one for the factual world, one for the counterfactual. Then, we link them by saying that the background facts, like how sunny it was, are the same in both the factual and the counterfactual worlds.

Now, for the counterfactual half of the network, we modify it. So we have a node in there—a variable that says whether it rains, and it depends on all these other things. We just change that node: cut everything that influenced it, and just say, did it rain? Yes. Once we’ve done that, we have one big causal graph that contains both the actual world, with facts about how sunny it was, and the counterfactual world. And then we say: here are the statistical facts about how sunny it is. And we do normal probabilistic inference. So that’s the twin network construction.
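
Here is a minimal sketch of the twin construction for the umbrella example, written as straight-line code with made-up numbers (an illustration, not code from the Omega project): the exogenous background noise is shared between the factual and counterfactual copies, and only the counterfactual copy gets the intervention:

```python
import random

def world(noise, force_rain=None):
    """Straight-line structural model: clouds from background humidity,
    rain from clouds (unless intervened on), umbrella from rain."""
    clouds = noise["humidity"] > 0.6
    rain = clouds if force_rain is None else force_rain
    umbrella = rain and noise["remembers"] > 0.2
    return {"rain": rain, "umbrella": umbrella}

pairs = []
for _ in range(100_000):
    noise = {"humidity": random.random(), "remembers": random.random()}
    factual = world(noise)                         # what actually happened
    counterfactual = world(noise, force_rain=True) # same noise, forced rain
    pairs.append((factual, counterfactual))

# "Given that it did not actually rain, would I have brought an umbrella if it had?"
# Condition the factual copy on the observation, read the answer off its twin.
kept = [cf for f, cf in pairs if not f["rain"]]
print(sum(cf["umbrella"] for cf in kept) / len(kept))   # ~0.8 in this toy model
```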

And this breaks down when you start to go into programs, because it assumes that I can make a fixed graph—say I want to talk about whether it rained today, but also whether it rained every day for the last year, or every day for an unbounded amount of time into the past. Now, instead of having one variable for whether it rained, I need n of them for unknown n. So that’s moving from a fixed, static setting to a program setting. But really, when we’re doing these graphs, they can be thought of as programs that are straight-line code, which means they don’t have loops. It’s just: do this, do this, do this, do this. So we’re thinking, how can we adapt this definition to a setting that does have loops?

And so the principle works the same. It’s very elegant. So now we’re saying: I have this variable which is defined in this way, and it depends on that, which depends on that, which depends on that, which depends on that. So to define a counterfactual, I’m going to create another copy of this, reach back in the past, and change something it depends on transitively. Now, I can run that and get a new value. I can do the same kind of statistical inference.

So to make a very concrete example, let’s play a game. I’m going to roll a seven-sided die. You can choose a number between zero and six. Before I show you the result, I want you to guess a number.

Matt Teichman:
Three.

James Koppel:
Okay. So the rule is that if your guess is within one of the value of the die, then you win, else you lose. So say you guess three, and I just tell you you lost.

Matt Teichman:
Oh, man. Okay.

James Koppel:
So given that you played three and lost, suppose that you had played one instead. What’s your probability of winning? I still haven’t told you what the real value was. I’ve just told you that three was not within one of the real answer.

Matt Teichman:
Ah. I dunno.

James Koppel:
So you played three and lost. The value can’t be two, three or four. But it could be five, six, zero, or one. So if you played one instead, then you win if it’s either zero or one, and lose if it’s five or six. So you have a one-half chance of winning.

Matt Teichman:
Ah. Okay.

James Koppel:
So the way that you write this in Omega—or the way that it runs in Omega—is: I define a function, which is my game. First, somewhere else where you haven’t seen it, I set the true value of the die. Now, let’s say I have some program that takes your guess. You gave me three, so I define a variable and set it to three. And I have a subprogram which takes in the true random value of the die and spits out whether you won or lost. So I’m going to condition on this thing evaluating to ‘you lost’. And then, I’m going to create another copy of this and say: in this one, I’m going to set your guess to one, and, conditioned on the real execution having lost, run this forward and tell me how likely you are to win. And conceptually, what it’s doing is: I’ve written all this stuff that depends on what your guess was, and I’m basically just running that code again, but retroactively setting your guess to something else.
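
This is not actual Omega syntax, but the same counterfactual query can be sketched in plain Python with rejection sampling: the hidden die roll is the randomness shared by the factual and counterfactual runs, the factual run (a guess of three) is conditioned on losing, and the game is replayed with the guess retroactively set to one:

```python
import random

def wins(guess, die):
    return abs(guess - die) <= 1

switched_outcomes = []
for _ in range(200_000):
    die = random.randint(0, 6)       # hidden true value, shared by both runs
    if wins(3, die):
        continue                     # condition: the factual run was a loss
    switched_outcomes.append(wins(1, die))   # replay with the guess set to one

print(sum(switched_outcomes) / len(switched_outcomes))   # ~0.5, as computed above
```

Conditioning on the loss leaves the die values zero, one, five, and six equally likely, and switching the guess to one wins on exactly two of them.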

Matt Teichman:
Right. And it’s exploiting the fact that we have some information about the algorithm—namely, that you win if you’re within one.

James Koppel:
It’s exploiting the information that I actually have written down in code exactly what the whole process was—of going from your guess to win or lose.

Matt Teichman:
So one potential future direction for this line of research that people are interested in is: getting computer software to actually furnish us with explanations of why something happens. So how does this twin approach, using the actual history and the counterfactual history, help us with that?

James Koppel:
Yes, thanks for that. So a lot of this is future work we haven’t done yet, so I don’t want to sell it too hard. We haven’t proven that what we want to do is actually going to work. But we’re pretty excited about it. So earlier, I talked about general causation versus actual causation, general causation being ‘lightning causes fire’, actual causation being what caused the fire last night. The reason I bring this up is that the theory of explanation is built on the theory of actual causation.

So there’ve been a number of definitions proposed for actual causation. Again, in the setting of causal graphs, we have a fixed number of events. Some unpublished work that I’ve done is trying to generalize this to a program setting. So I have a working definition, which is—I’m not sure how easy it will be to actually get computers to do this; it might be inefficient—but a definition where I can have two tic-tac-toe AIs play each other, and then say that the cause of the first AI having lost is the first mistake it made.

Matt Teichman:
Okay, right—so strategic causes, in that example.

James Koppel:
Yeah. So there’s another definition from someone else that I looked at. And I thought about it, and I realized that it would say the actual cause of the AI having lost was every single move it made. But I came up with a definition where the actual cause would be the thing where it could have actually done something different, which is the first mistake it made—where it switched from a potentially winning position to a ‘losing if the other AI plays well’ position.

Matt Teichman:
And I guess in certain cases you can mathematically determine: having made this move, it’s impossible to win now.

James Koppel:
Yeah. Assuming the other AI plays optimally, it’s impossible to win now.

Matt Teichman:
Yeah.

James Koppel:
In combinatorial game theory, these are called winning positions versus losing positions. So a position is a winning position if you can make a move that puts the other player in a losing position. It’s a losing position if, no matter what move you make, you give the other player a winning position to play from.
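
Here is a generic sketch of that recursion (an illustration, not James's unpublished definition), together with a 'first mistake' finder in the spirit of the tic-tac-toe example: the first move that hands the opponent a winning position when the mover had a winning position available.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(position, moves):
    """A position is winning for the player to move if some move leads to a
    position that is losing for the opponent; with no moves left, you lose."""
    return any(not winning(nxt, moves) for nxt in moves(position))

def first_mistake(game_positions, moves):
    """game_positions: the positions of a game as actually played, alternating
    between the players. The first mistake is the first move from a winning
    position to a position that is also winning for the (other) player to move."""
    for i in range(len(game_positions) - 1):
        if winning(game_positions[i], moves) and winning(game_positions[i + 1], moves):
            return i
    return None

# Tiny usage example: take one or two stones; whoever cannot move loses.
def stone_moves(n):
    return tuple(n - take for take in (1, 2) if n - take >= 0)

print([n for n in range(10) if not winning(n, stone_moves)])   # [0, 3, 6, 9]
print(first_mistake([5, 4, 3, 1, 0], stone_moves))   # 0: from 5, taking one stone threw away the win
```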

Matt Teichman:
Hmm, okay. So the input to this, then, I guess, is a full game of tic-tac-toe with two players.

James Koppel:
Really, the input is going to be the entire game tree of all possibilities, which is determined by having the code of at least the opposing AI.

Matt Teichman:
Okay.

James Koppel:
So this works well on paper. I’m not sure I can actually program this efficiently.

Matt Teichman:
I see. So we have the whole, basically, headspace of the opponent, essentially, as part of the input.

James Koppel:
Yeah.

Matt Teichman:
We have the rules of the game—

James Koppel:
It’s basically saying the input is now a fixed, deterministic process, whereas you, the player or the AI you’re analyzing, have free will.

Matt Teichman:
I see, I see.

Dominick Reo:
So does that mean that the input would be: if this move is made, then we know that the opponent will make this move?

James Koppel:
Yeah, exactly.

Dominick Reo:
Okay.

James Koppel:
So at that point, it kind of devolves into a one-player game. So for a given event, there are many actual causes. And when you’re talking about explanations, you’re going from actual causes—why did this happen?—to a useful answer to ‘why did this happen?’. If a ball hits you in the face and you ask why, a useless answer is that it was next to your face when it was going 90 miles an hour. A more useful explanation is that I threw it at you.

So what we want to do is based on a theory of explanations given by Joseph Halpern and Judea Pearl in the British Journal for the Philosophy of Science about 15 years ago. So they say you, as the listener, have a subjective model of the world. The ball hits you in the face. You know some things about how balls fly. So you’re fully aware that its being next to you and going towards you will cause it to hit you. And then you have a distribution over many other factors. So maybe it hit you in the back of the head. You know how the world works. You have some idea of what might be behind you. But you don’t know if there are people there. You don’t know if there is, say, a ball-spitting machine there. So you have a distribution over causal models. You only have partial information about the world and how it works.

Matt Teichman:
And the distribution is how likely you think all the different possible outcomes are—how likely each of them is?

James Koppel:
It’s not just outcomes. It’s more about the structure. So it’s less about being uncertain over how likely I am to throw a ball at you, and more about being uncertain over whether there is even someone around who could throw a ball. Maybe you’re also uncertain about gravity, were you to go a step further back. So from this distribution you have about the world, you can talk about information. Information theory is something that basically went from zero to a complete theory in one paper by Claude Shannon in the late 1940s. So if there’s a very low-probability event that happens and I tell you about it, that’s very high information. That’s basically the one-sentence summary.

So if I tell you that there’s a baseball-spitting practice machine right behind you, that’d be a very surprising fact with very high explanatory power. It’s a great explanation. And if I tell you there’s a baseball right behind you going 90 miles an hour, given that it hit you in the head pretty hard, it’d be a very unsurprising fact. So that’s low information. So an explanation is revealing some fact about the causal model that caused the event to occur which greatly reduces your uncertainty.
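
As a tiny numerical illustration of that point (with made-up probabilities, and glossing over the details of the Halpern and Pearl account), the information carried by learning a fact is its surprisal: minus the base-2 logarithm of its probability.

```python
import math

def surprisal_bits(p):
    """Information (in bits) carried by learning a fact you assigned probability p."""
    return -math.log2(p)

# Hypothetical credences, given that a ball just hit you in the back of the head:
p_pitching_machine_behind_you = 0.001    # shocking in a university conference room
p_fast_ball_was_just_behind_you = 0.999  # all but guaranteed by the impact itself

print(surprisal_bits(p_pitching_machine_behind_you))    # ~10 bits: a strong explanation
print(surprisal_bits(p_fast_ball_was_just_behind_you))  # ~0.001 bits: tells you nothing
```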

Matt Teichman:
So maybe we could walk through those two scenarios and think about how one of them is more uncertainty-reducing than the other. So in the case of the baseball-launching machine, that’s going to reduce my uncertainty less than in the case of someone throwing it at me.

James Koppel:
It’s going to reduce your uncertainty more.

Matt Teichman:
Oh, sorry.

James Koppel:
It’s going to give you more information in that given that there’s a baseball-throwing machine right behind you—first, it’s a very surprising fact. We’re in a conference room at a university right now. It’s not a place where you’d expect to find one of these.

Matt Teichman:
Yes.

James Koppel:
Second, not much more needs to be said. Given that it’s there, you might place decent credence on its being on. You know that a pretty likely outcome is being hit in the back of the head, given that it’s there, and given all that we know about how balls fly. Whereas if I just told you there’s a person standing behind you, there’s still a lot more information you’d want to know that would lead to this outcome. It’s like you still have the uncertainty of: okay, is this person angry at me for some reason? And is he a major league pitcher?

Matt Teichman:
Yeah. And the question, ‘How did that person get here?’ is different, too. Right? In the case of the machine, it’s just so weird that the machine would be here in the first place. That’s sort of less salient.

James Koppel:
Yeah. But once you know it’s there, there’s not much more you need to be told about why you were hit in the head.

Matt Teichman:
Yeah. That’s interesting. Right. But it’s a little bit more normal for a person to be here, in a strange way. Because it’s more normal for a person to be here, it’s more in line with our expectations.

James Koppel:
Yes. And given that you were hit in the back of the head pretty hard by a baseball, it’s very normal a split second earlier for it to have been flying towards your head at 90 miles an hour.

Matt Teichman:
Yeah.

James Koppel:
And because it’s so normal, that’s not a useful explanation.

Matt Teichman:
Mm-hmm. So what are some potential applications of this approach to generating explanations for remarkable phenomena?

James Koppel:
So we pitched this giant project to the Air Force about using it in disaster relief scenarios, and trying to get an AI to infer why someone is walking by the side of the road, or using it to answer the question: why did this car crash? But our grant application was rejected. And so I don’t know. But there are still lots of scenarios where you want explanations, and if all you get is a trace of all the things that led up to the event, you’ll be very unhappy.

Matt Teichman:
Hmm. So maybe scenarios where there are too many factors, where it’s difficult for a human to wrap their mind around all the possible factors that could have led to it—like maybe a meteorological explanation is too complicated. There’s too much stuff to take into account. But it might be easier to—

James Koppel:
Well, if a meteorological explanation involves all the movements of water molecules, then no. So a lot of the more near-term applications we’re talking about trying to get this to work in are these small mechanical settings, like billiards or Jenga. Why did my tower crash? The base wasn’t stable, or someone shook the table. But you can imagine that if you can get it to work there, then maybe you have a shot at getting it to work on more interesting questions like, why did the car crash? I’m sure there are plenty of people who would love an AI that can answer that question and give an informative answer.

Matt Teichman:
Right, right. Or if there’s a tragic accident at the factory, or any of this kind of stuff where we’re just trying to—almost like the physical version of what we were talking about earlier with the software bugs.

James Koppel:
Yeah. So coming full circle, I talked about it explaining car crashes, and that’s great. But back to my main specialty, I would also like to be able to explain program crashes.

Matt Teichman:
Right. This is potentially useful for any attempt to retroactively understand a disaster situation, which unfortunately comes up whenever there is a disaster. I think this gets at something that I find really interesting about your work, which is that you’re providing a communication line between really abstract, theoretical computer science and practical, real-world issues. And I guess this is just one example of that.

James Koppel, thanks so much for joining us.

James Koppel:
Thank you.


Elucidations isn't set up for blog comments currently, but if you have any thoughts or questions, please feel free to reach out on Twitter!