Evelyn Lamb: Hello and welcome to My Favorite Theorem. This is a podcast about math where we invite a mathematician in each episode to tell us about their favorite theorem. I’m one of your hosts, Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.
Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. I’m excited about this one.
EL: Yes, I’m very excited. I’m too excited to do any banter. We’re coming up on our one-year anniversary, and we are very honored today to have a special guest. She is a professor at Duke. She has gotten a MacArthur Fellowship, won many prizes. I was just reading her Wikipedia page, and there are too many to list. So we are very happy to have Ingrid Daubechies on the show. Hi, Ingrid. Can you tell us a little bit about yourself?
Ingrid Daubechies: Hi, Evelyn and Kevin. Sure. I have just come back from spending several months in Belgium, in Brussels. I had arranged a sabbatical there to be close to my very elderly parents and to help set up arrangements for them. But I was also involved in a number of fun things, like the annual contest for high school students that encourages them to major in mathematics once they get to college. And this is the year I turn 64, and because 64 is a much more special number than 60 for mathematicians, my students had organized some festivities, which we held in a conference center in my native village.
KK: That’s fantastic.
ID: A lot of fun. We opted to have a family-and-friends activity instead of a conference where we tried to get the biggest possible collection of marquee names. I enjoyed it hugely. We had a big party in Belgium where I invited, via Facebook, everybody who had ever crossed my timeline. There were people I went to high school with, and there was a professor who taught me linear algebra.
KK: Oh, wow.
ID: So it was really a lot of fun.
KK: That’s fantastic.
EL: Yeah, and you have also been president of the International Mathematical Union. I meant to say that at the beginning and forgot. So that is also very exciting. I think while you were president, you probably don’t remember this, but I think we met at a conference, and I was trying to talk to you about something and was very anxious because my grandfather had just gone to the hospital, and I really couldn’t think about anything else. I remember how kind you were to me during that, and just, I think you were talking about your parents as well. And I was just thinking, wow, I’m talking to the president of the International Mathematical Union, and all I can think about is my grandpa, and she is being so nice to me.
ID: Well, of course. This is so important. We are people. We are connected to other people around us, and that is a big part of our life, even if we are mathematicians.
EL: But we have you on the show today to talk about theorems, so what is your favorite theorem?
ID: Well, I of course can’t say that I have one particular favorite theorem. There are so many beautiful theorems. But there is one I learned only very recently, and I am ashamed to confess how recently, because it’s a theorem that many people learn in kindergarten: Tutte’s embedding theorem, about graphs, meshes, in my case a triangular mesh. It says that you can embed it, meaning you can define a map to a polygon in the plane without having any of the edges cross, so really an embedding of the whole graph. The mesh is a disk-type mesh, meaning it has no holes, it has a boundary, lots of triangles, and it can be a very complicated thing, but under certain conditions you can embed it in a convex polygon in the plane, and I really, really, really love that. I visualize it by thinking of the complicated shape as something like Saran wrap and applying a hair dryer to it: the hair dryer will try to flatten it, and it flattens nicely, and I think the fact that you can always do it is great. And we’re using it for something interesting. Actually, we are extending it: the theorem is originally formulated for a convex polygon in the plane, you can always map to a convex polygon in the plane, and we are extending it to the case where you have a non-convex polygon, because that’s what we need, and then we have certain conditions.
KK: Sure. Well, there have to be some conditions, right, because certainly not every graph, every mesh you would draw is planar.
ID: Yeah.
KK: What are those conditions?
ID: It has to be planar and 3-connected, and you define a set of weights on the edges that are all positive. What happens is that once you have it in the polygon, you can write each one of the vertices as a convex combination of its neighbors.
KK: Yeah.
ID: And those define your weights. You have to have a set of weights on the edges on your original graph that will make that possible.
KK: Okay.
ID: So you define weights on the original graph that help you in the embedding: the positive weights are what give you that convexity. You use them to make this embedding, and so it’s a theorem that doesn’t only tell you that the graph is planar; it gives you a mechanism for building that map to the plane. That’s really the power of the theorem. You start with something that you know is planar, and you build that map.
KK: Okay.
ID: It’s really powerful. It’s used a lot by people in computer graphics. They then can reason on that Tutte embedding in the plane to build other things and apply them back to the original mesh they had in 3-space for the complicated object they had. And that’s also what we’re trying to use it for. But we like the idea of going to non-convex polygons because that, for certain of the applications that we have, will give us much less deformation.
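[To make the construction concrete, here is a minimal sketch in Python of the Tutte-style embedding described above. It is not the team’s code: it assumes a disk-type triangle mesh given as an adjacency list with its boundary vertices listed in cyclic order, pins the boundary to a regular convex polygon, uses uniform positive weights, and solves the linear system that places each interior vertex at a convex combination of its neighbors.]

```python
import numpy as np

def tutte_embedding(adjacency, boundary):
    """adjacency: dict vertex -> list of neighboring vertices.
    boundary: boundary vertices in cyclic order.
    Returns dict vertex -> (x, y) position in the plane."""
    verts = sorted(adjacency)
    index = {v: i for i, v in enumerate(verts)}
    pos = np.zeros((len(verts), 2))

    # Pin the boundary vertices to a convex polygon (a regular polygon here).
    for k, v in enumerate(boundary):
        angle = 2 * np.pi * k / len(boundary)
        pos[index[v]] = [np.cos(angle), np.sin(angle)]

    interior = [v for v in verts if v not in set(boundary)]
    row = {v: i for i, v in enumerate(interior)}

    # Each interior vertex must be the (uniformly weighted) average of its
    # neighbors; that is a linear system A x = b for the unknown positions.
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros((len(interior), 2))
    for v in interior:
        A[row[v], row[v]] = len(adjacency[v])
        for u in adjacency[v]:
            if u in row:                      # unknown interior neighbor
                A[row[v], row[u]] -= 1.0
            else:                             # pinned boundary neighbor
                b[row[v]] += pos[index[u]]
    if interior:
        pos[[index[v] for v in interior]] = np.linalg.solve(A, b)
    return {v: tuple(pos[index[v]]) for v in verts}
```

[Any other choice of positive edge weights would work equally well; the uniform weights are just the simplest assumption for the sketch.]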
EL: So, is this related to, I know that you’ve done some work with art reconstruction, and actually in the back of the video here, I think I see some pictures of art that you have helped reconstruct. So is it related to that work?
ID: Actually, it isn’t, although if at some point we go to 3D objects rather than the paintings we are doing now, it might become useful. But right now this collaboration is with biologists. We have been working with them for several years and we’re getting good results: we are quantifying the similarity of morphological surfaces. The people we work with are working on bones and teeth. They’re paleontologists, or rather they’re interested in evolutionary anthropology, but they work a lot with teeth and bones. And they have a lot of domain knowledge because they’ve seen so many, and they remember things. But of course in order to do science with it, they need to quantify how similar or dissimilar things are. They have many methods to do that, and we are working with them to automate some of these methods in ways that they find useful and that they are asking for. We’ve gotten very good results over the many years that we’ve worked with them, and we’re very excited about recent progress we’ve made. For their studies these surfaces already get scanned and triangulated, so they have these 3D triangulations in space. When you work with organs and muscles and all these things in biology, usually you have 3D shapes, and in many instances you have them voxelized, meaning you have the full 3D volume. But because they work with fossils, which often they cannot borrow from the place where the fossil is kept, they work with casts of those in very high-quality resin. As a result, when they bring the cast back, they have the surface very accurately, but they don’t have the 3D interior structure. So we work with the surfaces, and that’s why we work with these 3D meshes of surfaces. And we then have to quantify how close and similar or dissimilar things are, and not just the whole thing, but pieces of it. We have to find ways to segment these in biologically meaningful ways. That is where the embedding theorem comes in useful.
But it’s been very interesting to try to build, mathematically, a structure that will embody a lot of how biologists work. Traditionally, because they know so much about the collection of things they study, what they do is find landmarks. They have this whole collection, and they see that all these things have this particular feature in common, even though it looks a bit different in each one. So this landmark point that we mark digitally on these scanned surfaces is the same point in all of them, and the next point is the same, and so on. They mark maybe 20 landmarks, and then you can use that to define a mapping. But they asked us, “could we possibly do this landmark-free at some point?” And many biologists scoffed at the idea: how could you do this? At the beginning, of course, we couldn’t. We could find distances that were not so different from theirs, but the landmarks were not in the right places. But then we started realizing, look, why do they have this immense knowledge? Because they have seen so many more specimens than just the ones they’re now studying.
So we realized this was something where we should look at large collections, and there, with a student of mine who made a breakthrough, we found the following. You have many surfaces, and you have a first way of mapping one to the other and then defining a similarity, depending on how faithful the mapping is. All these mappings are kind of wrong, not quite right. But because you have a large collection, there are so many little mistakes being made that if you have a way of looking at it all, you can view those mistakes as the errors in a data set, and you can try to cancel them out. You can try to separate the wheat from the chaff to get at the essence of what is in there. A little bit like what students learn when they have a mentor who tells them, no, that point is not really what you think, and so on. So that’s what we do now. We have large collections. We have initial mappings that are not perfect. And we use the fact that we have the large collection to define, from that large collection, using machine learning tools, a much better mapping. The biologists have been really impressed by how much better the mappings are once we do that. The wonderful thing is the framework: of course we use machine learning tools, and we use all these computer graphics techniques for dealing with surfaces efficiently, but we frame it as a fiber bundle, and we learn. If you look at a large collection, every single one differs from the others by little bits. We want to learn the structure of this set of teeth. Every tooth is a 2D surface, and similar teeth can map to each other, so they’re all fibers, and we have a connection, and we learn that connection. We have a very noisy version of the connection. But because we know it’s a connection, and because it’s a connection that should be flat, since everything can be brought back to a common ancestor, it should not matter whether you go from A to B and then B to C or go directly from A to C, because all these mappings can go through the common ancestor, so it should kind of commute. Using that, we can really get things out. We have been able to use it to build correspondences that biologists are now using for their statistical analysis.
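[A toy illustration, not the team’s actual method, of the consistency idea just described: noisy pairwise correspondences between surfaces can be improved by insisting that going from surface i to k and then k to j should roughly agree with going from i to j directly. The soft correspondence matrices and the simple averaging scheme below are assumptions made for the sketch.]

```python
import numpy as np

def improve_correspondences(C):
    """C[i][j]: row-stochastic matrix mapping points of surface i to surface j.
    Returns a dict (i, j) -> consistency-averaged correspondence matrix."""
    m = len(C)
    improved = {}
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            # Average the direct map with all two-step compositions i -> k -> j.
            acc = C[i][j].copy()
            for k in range(m):
                if k != i and k != j:
                    acc += C[i][k] @ C[k][j]
            # Renormalize rows so each result is again a soft correspondence.
            improved[(i, j)] = acc / acc.sum(axis=1, keepdims=True)
    return improved
```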
KK: So differential geometry for biology.
ID: Yes. Discrete differential geometry, which, if ever there was an oxymoron, is one.
KK: Wow.
ID: So we have a team that has a biologist, it has people who are differential geometers, we have a computational geometer, and he was telling me, “you know, for this particular piece of it, it would be really useful if we had a generalization of Tutte’s theorem to non-convex polygons,” and I said, “well, what’s Tutte’s theorem?” And so I learned it last week, and that’s why it’s today my favorite theorem.
EL: Oh wow, that’s really neat.
KK: So we’ll follow up with you next year and see what your favorite theorem is then.
EL: Yeah, it sounds like a really neat collaborative environment there where everybody has their own special knowledge that they’re bringing to the table.
ID: Yes, and actually I have found that to be very, very stimulating throughout my whole career. I like working with other people. I like it when they give you challenges. I like feeling my brain at work together with people who have different expertise. And, well, once you’ve seen a couple of these collaborations at work, you get a feel for how you jump-start that, how you manage to get people talking about the problems they have and brainstorming until a few problems get isolated that we can really sink our teeth into and work on. And that itself is a dynamic you have to learn. I’m sure there are social scientists who know much more about this. In my limited setting, I now have some experience in starting these things up, and my students and postdocs participate, and some of them have become good at propagating that approach. I’m very motivated by the fact that you can do applications of mathematics that are really nontrivial, and you can distill nontrivial problems out of what people think are mundane applications. But it takes some investment to get there. Because usually the people who have the applications, the biologists in my case, didn’t come to us saying, “we have this very particular fiber bundle problem.”
EL: Right.
ID: In fact, it was my student who then realized we really had a fiber bundle, and that helped us define the machine learning problem differently than it had been defined before. That then led to interesting results. So you need all the background, and you need the sense of adventure to try to build tools in that background that might be useful. And I’m convinced that for some of these tools that we build, when more pure mathematicians learn about them, they might distill things in their own world from what we need. And this can lead to more pure mathematics ultimately.
KK: Sure, a big feedback loop.
ID: Yes, absolutely. That’s something I believe in very, very strongly. But part of my life is being open, when I hear about things, to asking: is there a meaningful mathematical way to frame this? Not just for the fun of it, but will it help?
EL: Yeah, well, as I mentioned, I was amazed by the way you’ve used math for this art reconstruction. I think I saw a talk you gave or an article you wrote about it, and it was just fascinating. It used things that I never would have thought would be applicable to that sphere.
ID: Yeah, and again it’s the case that there’s a whole lot of knowledge we have that could be applicable, and in that particular case, I have found that it’s a wonderful way to get undergraduates involved, because at the same time they learn these tools of image processing and small machine learning tools by working on these wonderful images. I mean, how much cooler is it to work on the Ghent Altarpiece, or even less famous artwork, than to work on the standard images of image analysis? So that has been a lot of fun. And actually, while I was in Belgium, the first event of the week of celebration we had was an IP4AI workshop, which stands for Image Processing for Art Investigation. Over the last 10 to 15 years it has really been taking off as a community. We’re trying to have this series of workshops where people who are interested in image processing and the mathematics and engineering of that talk to people who have concrete problems in art conservation or in art history. We try to have these workshops in museums, and we had it at a museum in Ghent, and it again was very, very stimulating and exhilarating.
KK: So another thing we like to do on this podcast is ask our guest to pair their favorite theorem with something. So I’m curious. What do you think pairs well with Tutte’s theorem?
ID: Well, I was already thinking of Saran wrap and the hair dryer, but…
KK: No, that’s perfect. Yeah.
ID: I think also—not for Tutte’s theorem, there I really think of Saran wrap and a hair dryer—but I am also using in some of the work in biology what people call diffusion: manifold learning through diffusion techniques. The idea is that you have a complicated world where you have many instances, and some of them are very similar, and others are similar to those, and so on, but after you’ve moved 100 steps away, things don’t look similar at all anymore, and you’d like to learn the geometry of that whole collection.
KK: Right.
ID: Very often it’s given to you by zillions of parameters. I mean, like images: if you think of each pixel of the image as a variable, then you live in thousands, millions of dimensions. And you know that the whole collection of images is not something that fills that whole space. It’s a very thin, wispy set in there. You’d like to learn its geometry, because if you learn its geometry, you can do much more with it. So one tool that was devised, I mean 10 years ago or so—it’s not deep learning, it’s not as recent as that—is manifold learning, in which you say, well, in every neighborhood, if I look at all the things that are similar to me, then I have a little flat disc; it’s close enough to flat that I can really approximate it as flat. And then I have another one, and so on, and I have two mental images for that. One mental image is this whole kind of crochet thing, where each piece of it you make with crochet. You cover the whole thing with doilies, in a certain sense, and you can knit them together, or crochet them together, and get the more complex geometry. Another image I often have is sequins. Every little sequin is a little disc.
EL: Yeah.
ID: But together they can make something much more complex. So many of my mental images and pairings, if you want, are hands-on, crafty things.
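[A minimal sketch, again in Python and not the actual pipeline from the biology project, of the diffusion-maps flavor of manifold learning described above. It assumes the data points are rows of a matrix X and that a Gaussian kernel width epsilon has been chosen by hand; the low-dimensional coordinates come from the leading nontrivial eigenvectors of the diffusion matrix.]

```python
import numpy as np

def diffusion_map(X, epsilon, n_coords=2):
    """Return low-dimensional diffusion coordinates for the rows of X."""
    # Pairwise squared distances between all data points.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Gaussian affinities: only nearby points are strongly connected.
    K = np.exp(-sq_dists / epsilon)
    # Row-normalize to get a Markov (diffusion) matrix.
    P = K / K.sum(axis=1, keepdims=True)
    # Eigenvectors of the diffusion matrix give the new coordinates;
    # the leading eigenvector is constant, so it is skipped.
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    keep = order[1:n_coords + 1]
    return eigvecs.real[:, keep] * eigvals.real[keep]
```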
KK: Do you knit and crochet yourself?
ID: Yes, I do. I like making things. I use metaphors like that a lot when I teach calculus because it’s kind of obvious. I find I use almost no sports metaphors. Sports metaphors are big in teaching mathematics, but I use much more handicraft metaphors.
KK: So what else should we talk about?
ID: One thing, actually: I had such a lot of fun a couple of weeks ago when there was a celebration. The town in which I was born happens to have a fantastic new administrative building in which they have brought together all the different services that used to be in different buildings in the town. The building was put together by fantastic architects, and it feels very mathematical. And it has beautiful shapes.
It’s in a mining town—I’m from a coal mining town—and so they have two hyperboloid shapes that they used to bring light down to the lower floors, which reminds people of the cooling towers of the coal mine. The building has all these features that feel very mathematical. I told the mayor, I said, “Look, I’ll have this group of mathematicians here, some of whom are very interested in outreach and education. Since there will be a party on Saturday and the conference only starts on Monday, we could on the Sunday have a brainstorming session in which we try to design a clue-finding search through the building. We would design little mathematical things in the building that fit with the whole design of the building, so you should have the interior designers as part of the workshop. I have no idea what will come out, but if something comes out, then we could find a little bit of money to realize it, and that could be something that adds another feature to the building.”
He loved the idea! I thought he was going to be…but he loved the idea. He talked to the person who runs the cafeteria about cooking a special meal for us, so we had a tagine, because he was from Morocco. We wanted just sandwiches, but this man made this fantastic meal. We toured the building in the morning, and in the afternoon we had a brainstorming session with local high school teachers and mathematicians and so on. We split them into three small groups, and they came up with three completely different ideas, which all sound really interesting. And then one of them said, “Why don’t we make it an activity that either a family could do, one part after the other, or a classroom could do? A class would typically have only an hour or an hour and a half, and the class would be too big, so you’d split the class into three groups, and each group does one of the activities. They all find a clue, and by putting the clues together, they find some kind of a treasure.”
KK: Oh, wow.
ID: So the ideas were great, and they involve completely different things. One is more dynamical systems, one actually embodies some group theory and graph theory (although we won’t call it that). And what I like is that one of the goals was to find ideas that would require mathematical thinking but that were not linked to the curriculum, so you’d start thinking, how would I even frame this? And so on, trying to give a stepwise progression in the problems, so that they wouldn’t immediately face the full, complete difficult thing but would have to find ways of building tools that would get them there. They did excellent work. Now each team has a group leader who is working out the details over email. We have committed to working out all the details of the texts and putting the materials together within a year so it can actually be realized. Then there was the designers’ part: can we make something like that not too expensive? They said, oh yeah, with foam and fabric. And I know they will do it.
A year from now I will see whether it all worked out.
EL: So will you come to Salt Lake next and do that in my town?
ID: Do you have a great building in which it would work?
EL: I’m trying to think.
ID: We’re linking it to a building.
EL: I’ll have to think about that.
KK: Well, we have a brand new science museum here in Gainesville. It’s called the Cade Museum. So Dr. Cade is the man who invented Gatorade, you know, the sports drink.
ID: Yes.
KK: And his family got together and built this wonderful new science museum. I haven’t been yet. It just opened a few months ago.
ID: Oh wow.
KK: I’m going to walk in there thinking about this idea.
ID: Yeah, and if you happen to be in Belgium, I can send you the location of this building, and you can have a look there.
KK: Okay. Sounds excellent. Well, this has been great, Ingrid. We really appreciate your taking your time to talk to us today.
ID: Well thank you.
KK: We’re really very honored.
ID: Well it’s great to have this podcast, the whole series.
KK: Yeah, we’re having a good time.
EL: We also want to thank our listeners for listening to us for a year. I’m just going to assume that everyone has listened religiously to every single episode. But yeah, it’s been a lot of fun to put this together for the past year, and we hope there will be many more.
ID: Yes, good luck with that.
KK: Thanks.
ID: Bye.