AI, comparison stage
Jan. 15th, 2013 10:39 pm
The "Great Concept" investigation cycle (of either my creatures or a particular CTC) goes like this:
- Imagine a concept that you want to investigate.
- Build an (imperfect) materialization of this concept.
- Compare the materialization to the concept to learn more about both.
- Repeat.
Now I'm going to do something slightly different, but reminiscent of it, with regard to the general optimizer I wrote about.
It's not a Great Optimizer, because there can't be any single Great Optimizer, and I'm not comparing it to the concept of intelligence, but to ourselves. Still.
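(For concreteness, here is a minimal sketch of the kind of loop "a general optimizer" suggests — a plain generate-and-test hill climber. It is not the optimizer from that earlier post, which isn't reproduced here, and the names optimize, score, and neighbors are purely illustrative.)

    def optimize(initial_state, score, neighbors, steps=1000):
        # Greedy hill-climbing: repeatedly move to the best-scoring neighbor.
        state = initial_state
        for _ in range(steps):
            candidates = neighbors(state)
            if not candidates:
                break
            best = max(candidates, key=score)
            if score(best) <= score(state):
                break  # no neighbor improves on the current state
            state = best
        return state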
( Read more... )
So that's a rather good instance of the investigation cycle, I think. By contrasting the two, I know more about both the AI and ourselves.
There's an even more subtle, and more general, thing to be found by looking at the optimizer-in-two-realms observation, but I'm not quite awake enough to state it here yet. I'm not sure I can see it in its entirety, either.
-
[1] Even if it were self-aware in that sense, we have no way of knowing that the self-awareness maps to what it's really doing. If the optimizer is moving a robot body out a door, we have no way of knowing that the animist connection would actually perceive that as moving. What we would call "moving" entails a lot of other changes, e.g. of states in the computer's memory, and perhaps the crossing awareness would be focused on that instead.