Select excerpt from video above:
How natural selection operates: What's really going on in the end is that computational irreducibility lets one go from these quite simple rules to these very elaborate kinds of behavior. The fitness functions, the things that determine whether you survive or not, are fairly coarse in biology. But let's imagine you have a coarse fitness function, like what's the overall lifetime of the pattern before it dies out, say, or how wide does the pattern get? You're not saying how it gets wide; you're just asking how wide it gets. It turns out that with those kinds of coarse fitness functions you can successfully achieve high fitness, but you achieve it in this very complicated way: you achieve it by sort of putting together these pieces of irreducible computation. And that means that, in the end, the answer, I think, to why biological evolution works is that it's the same story as what happens in physics and mathematics. It's an interplay between underlying computational irreducibility and the computational boundedness of “observers” of that computation. In the case of physics, the observers are us, doing experiments in the physical world. In the case of mathematics, it's mathematicians looking at the structure of mathematics. In the case of biology, the observer is, in effect, the environment; it's the fitness function. The fitness function is the analog of the observer, and it's saying: you're a success if you achieve this kind of coarse objective. And the reason biological evolution works is that there's so much power in the underlying irreducible computation that you're able to achieve many of these coarse fitness functions.
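A minimal sketch of the kind of experiment being described, offered as an illustration rather than the speaker's actual setup: it assumes the simplest 2-state, range-1 cellular-automaton rules, a single-cell seed, a coarse fitness equal to the pattern's lifetime before it dies out (with patterns that cycle or outlast a step cap treated as non-viable), and a hill-climbing search that flips one rule-table entry at a time and keeps any mutation that does not lower fitness. The grid width, step cap, rule family, and acceptance criterion are all assumptions made for this sketch.

```python
import random

WIDTH, MAX_STEPS = 61, 300  # toy grid width and step cap: assumptions, not from the talk


def step(cells, rule):
    """Apply a 2-state, range-1 cellular-automaton rule table once (periodic boundary)."""
    n = len(cells)
    return [rule[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]


def lifetime(rule):
    """Coarse fitness: how many steps a single-cell seed survives before every cell
    is 0 again. Patterns that enter a cycle or outlast MAX_STEPS are scored 0,
    an assumption made here so the search favours patterns that eventually die out."""
    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1
    seen = set()
    for t in range(MAX_STEPS):
        if not any(cells):
            return t
        state = tuple(cells)
        if state in seen:       # entered a cycle: this pattern will never die out
            return 0
        seen.add(state)
        cells = step(cells, rule)
    return 0                    # still alive at the cap: treat as non-viable


def evolve(generations=2000):
    """Hill climbing over rule tables: flip one rule entry at a time and keep the
    mutant whenever the coarse fitness (lifetime) does not decrease."""
    rule = [0] * 8              # start from the null rule (every neighborhood maps to 0)
    best = lifetime(rule)
    for _ in range(generations):
        mutant = list(rule)
        mutant[random.randrange(8)] ^= 1
        score = lifetime(mutant)
        if score >= best:       # accept neutral or improving mutations
            rule, best = mutant, score
    return rule, best


if __name__ == "__main__":
    rule, best = evolve()
    print("evolved rule table:", rule, "lifetime:", best)
```

The selection step only ever sees the single coarse number returned by lifetime(); it never looks at how the pattern manages to live that long. That is the sense in which the fitness function plays the role of a computationally bounded observer, while the rule's detailed behavior remains irreducible.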
Michael Levin on Irreducible Computations in Platonic Space
Select excerpt from above video:
ML: And so I think what happens in biology is that it is very comfortable with, in fact it depends on, the idea that the substrate will change. You will mutate. Some cells will die. New cells will be born. Material goes in and out. Unlike with our computational devices, you're not committed to the fidelity of information the way we are in our computation; you are committed to the salience of that information. So you will need to take those memory traces and reinterpret them for whatever your future situation is. In the case of the butterfly (metamorphosis), that situation is completely different. In the case of the adult human, it's somewhat different from when your brain was a child's. But even during adulthood, the context, your mental context, your environment, everything changes. And I think you don't really have an allegiance to what those memories meant in the past; you reinterpret them dynamically. And so this gives a kind of process view of the self: what we really are is a continuous, dynamic attempt at storytelling, where what you're constantly doing is interpreting your own memories in a way that makes sense, into a coherent story about what you are and what you believe about the outside world. It's a constant process of self-construction. That, I think, is what's going on with selves.
CJ: If it's the case that we interpret our memories as messages from our past selves to our current selves, then can we reverse that and say that our current actions are messages to our future selves?
ML: Yeah, I think that's exactly right. I think a lot of what we're doing at any given moment is behavior that is going to enable or constrain your future self. You're setting the conditions, the environment, in which your future self is going to be living, including by changing yourself. Anything you undertake as a self-improvement program, or conversely, when people entertain intrusive or depressive thoughts, that changes your brain. That literally changes the way your future self is going to be able to process information. Everything you do radiates out, not only as messages but as a kind of niche construction, where you're changing the environment in which you're going to live and in which everybody else is going to live. We are constantly doing that to ourselves and to others. And that really forces a kind of thinking about your future self as akin to other people's future selves. I think that also has important ethical implications, because once that symmetry becomes apparent, that your future self is not quite you, and others' future selves are also not you, it suggests that the same reasons you do things so that your future self will have a better understanding of what you're doing, you might want to apply to others' future selves. It's about breaking down that boundary. And, I'm certainly not the first person to say this: the larger the cognitive light cone, the better for the organism.
CJ: Well, what does better mean?
ML: I mean, there are certainly extremely successful organisms that do not have a large cognitive light cone. Having said that, the size of an organism's cognitive light cone is not obvious; we are not good at detecting them. It's an important research program to find out what any given agent cares about, because it's not easily inferable from measurements directly; you have to do a lot of experiments. So, assuming we even know what anything's cognitive light cone is, I think lots of organisms do perfectly well, but then what's the definition of success? In terms of the way many people think about evolution, success is copy number: how many of you there are. It's just persistence and expansion into the world. From that perspective, I don't think you need a particularly large cognitive light cone; bacteria do great. But from other perspectives, if we ask ourselves what the point is, why we exert all this effort to exist in the physical world, to persist, and to do all the things we do in our lives, one could make an argument that a larger cognitive light cone is probably better, in the sense that it allows you to generate more meaning, to bring meaning to all the effort and the suffering and the joy and the hard work and everything else. From that perspective, one would want to enlarge one's cognitive light cone. I collaborate with a number of people in this group called CSAS, the Center for the Study of Apparent Selves, and we talk about this notion that, for example, in Buddhism, they have a bodhisattva vow. It's basically a commitment to enlarge one's cognitive light cone so that over time one becomes able to have a wider area of concern or compassion. The idea is that you want to work on changing yourself in a way that enlarges your ability to really care about a wider set of beings.
Ever Creating Your Future Self