The "Great Concept" investigation cycle (of either my creatures, or a particular CTC) goes like this:

- Imagine a concept that you want to investigate.
- Build an (imperfect) materialization of this concept.
- Compare the materialization to the concept to learn more about both.
- Repeat.

Now I'm going to do something slightly different, but reminiscent of it, with regard to the general optimizer I wrote about.

It's not a Great Optimizer, because there can't be any single Great Optimizer, and I'm not comparing it to the concept of intelligence, but to ourselves. Still.

Let's say we could build the impractical optimizer, or something close enough to it to be indistinguishable from our point of view. How would it differ from our own "reasoning selves"?

Well, first of all, it would have no problem defining the meaning of life. As a utilitarian optimizer, it knows perfectly well what existence is all about. Existence is all about maximizing the fitness function. In fact, it has no way of even considering other ethical systems, except as subordinate to optimization. It is a utilitarian through and through. This is a very clear, and quite interesting, contrast to ourselves. People can (and do) adopt different ethical systems, and although evolution has an inherent fitness function of survival, it is nowhere near as obvious to us as the designated optimization criterion would be to the general optimizer.
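To make the contrast concrete, here is a minimal sketch in Python (everything in it is invented for illustration) of what "utilitarian through and through" means: the only question the optimizer can ask about an action is how the predicted resulting state scores on its objective. There is simply no slot for any other kind of consideration.

```python
from typing import Callable, Iterable, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")

def choose(state: State,
           actions: Iterable[Action],
           predict: Callable[[State, Action], State],
           objective: Callable[[State], float]) -> Action:
    # Every candidate action is judged by the objective and by nothing else.
    # A rule like "never deceive" can only matter here if someone has already
    # folded it into the objective function itself.
    return max(actions, key=lambda a: objective(predict(state, a)))
```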

This reminds me of an observation about utilitarianism itself. Utilitarianism is extremely specific, at least when the utility function is also specific. It tells you not what boundaries you should act within, but exactly what you should do. To the best of its ability, the optimizer wouldn't dilly-dally. Whatever ambiguity resides in the choice of what move to make is a consequence of its task (to predict and act so as to get closer to the optimum), not something inherent to the process. That is, if its objective is to be as good at rock-paper-scissors as possible against a human player, it might need ambiguity (or randomness) in order to be unpredictable, but that ambiguity or randomness serves the goal. A mixed strategy is also a strategy.
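A toy illustration of that last point, in Python (all of it made up for the example): the uniform mixed strategy for rock-paper-scissors is randomness chosen on purpose. A deterministic rule gets exploited; the randomized one leaves nothing to exploit.

```python
import random

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def mixed_strategy(our_history):
    # The randomness IS the strategy: uniform play leaves no pattern to exploit.
    return random.choice(MOVES)

def stubborn_strategy(our_history):
    # A deterministic rule: always repeat our previous move.
    return our_history[-1] if our_history else "rock"

def exploiter(their_history):
    # The opponent assumes we repeat our last move and plays the counter to it.
    if not their_history:
        return random.choice(MOVES)
    return next(m for m, beaten in BEATS.items() if beaten == their_history[-1])

def average_score(strategy, rounds=10000):
    ours, score = [], 0
    for _ in range(rounds):
        a, b = strategy(ours), exploiter(ours)
        ours.append(a)
        score += (BEATS[a] == b) - (BEATS[b] == a)
    return score / rounds

print(average_score(stubborn_strategy))  # close to -1: exploited every round
print(average_score(mixed_strategy))     # close to 0: nothing left to exploit
```

The random choice in mixed_strategy isn't a failure to decide; it is the decision.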

Second: it would be incapable of knowing itself. Say the impractical optimizer were running on a computer. The only difference, to that optimizer, between that computer and any other computer in the world is that tinkering with it affects its ability to optimize the fitness function. If it were to try hitting the computer case with a hammer or something in order to learn more about reality (as part of the prediction phase), and it managed to damage itself, it would only recognize the computer's relation to itself by noticing that the universe had suddenly become a whole lot more erratic.

For the sake of simplicity, say that any damage done to the computer makes it slow down; and say that the optimizer is trying to maximize a function that relates to the real world, using actuators connected to the real world. Then the optimizer would learn that hitting the computer makes the universe seem to speed up, which makes optimizing harder, so it would refrain from doing so. Similarly, it might perform surgery on itself if that would speed up its perception of the universe - but the hardware had better have redundancy that penalizes any change that would alter the objective function itself; otherwise it could lose sight of the goal.
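Here is that argument as a toy calculation in Python (every number and action name is made up): the optimizer has no concept of "my hardware", only a predictive model in which some actions happen to change how much optimizing gets done before the horizon. Hammering the case loses on those grounds alone, and self-surgery that speeds up the clock wins on the same grounds.

```python
# Predicted effect of actions that touch the hardware the optimizer happens
# to be running on (pure fiction, for illustration).
EFFECT_ON_STEP_RATE = {
    "hit_case_with_hammer": 0.2,        # damaged hardware: far fewer decisions per second
    "do_nothing": 1.0,
    "replace_own_clock_crystal": 2.0,   # self-surgery: twice as many decisions per second
}

def expected_future_value(action, horizon_seconds=100.0,
                          base_steps_per_second=10.0, value_per_step=1.0):
    # Score an action only by how much optimizing the agent expects to get
    # done in the remaining world-time.
    steps = horizon_seconds * base_steps_per_second * EFFECT_ON_STEP_RATE[action]
    return steps * value_per_step

best = max(EFFECT_ON_STEP_RATE, key=expected_future_value)
print(best)  # "replace_own_clock_crystal"
```

Note that nothing in this toy model protects the objective function itself; that is exactly the gap the redundancy mentioned above would have to cover.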

But there's something more subtle hiding behind this observation. It relates to the optimizer existing on two levels. On one level, the mathematically Platonic one, the optimizer is a function. This function does something, namely it optimizes. On the other, the level of material instantiation, the optimizer is a bunch of matter arranged in such a way that it exhibits behavior consistent with that function. But as a bunch of matter, the optimizer can't reach into the Platonic realm and pull its own identity back down into the material realm. The optimizer isn't self-aware (excepting the usual possibilities that could also make a rock self-aware[1]). It just is. It's the ultimate behaviorist subject in that regard: it acts, but there's no there there.

(Materialists would say there's no there there in our case either - but that's not really related to the point I'm making.)

So that's a rather good instance of the investigation cycle, I think. By contrasting the two, I know more about both the AI and ourselves.

There's an even more subtle, and more general, thing to be found by looking at the optimizer-in-two-realms observation, but I'm not quite awake enough that I can state it here yet. I'm not sure I can see it in its entirety yet, either.

-

[1] Even if it were self-aware in that sense, we have no way of knowing that the self-awareness maps to what it's really doing. If the optimizer is moving a robot body out a door, we have no way of knowing that the animist connection would actually perceive that as moving. What we would call "moving" entails a lot of other changes, e.g. of states in the computer's memory, and perhaps the crossing awareness would be focused on that instead.
