Jan. 15th, 2013

davv: (Corvid)
The "Great Concept" investigation cycle (of either my creatures or a particular CTC) goes like this:

- Imagine a concept that you want to investigate.
- Build an (imperfect) materialization of this concept.
- Compare the materialization to the concept to learn more about both.
- Repeat.
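The loop above can be sketched in code. This is only an illustration of the cycle's shape; the function names (`materialize`, `compare`, `refine`) are hypothetical stand-ins I'm introducing here, not anything from the post itself:

```python
def materialize(concept):
    # Placeholder: a real step would build something concrete and imperfect.
    return f"model of {concept}"

def compare(artifact, concept):
    # Placeholder: a real step would note where artifact and concept diverge.
    return f"{artifact} vs. {concept}"

def refine(concept, insight):
    # Placeholder: fold what was learned back into the concept.
    return concept + " (revised)"

def investigate(concept, rounds=3):
    """Run the imagine / build / compare loop a few times,
    collecting what each comparison taught us."""
    insights = []
    for _ in range(rounds):
        artifact = materialize(concept)
        insight = compare(artifact, concept)
        insights.append(insight)
        concept = refine(concept, insight)
    return insights
```

The point of the sketch is just that the output of each comparison feeds the next round's concept, so both the concept and its materializations drift as you learn.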

Now I'm going to do something slightly different, but reminiscent of it, with regard to the general optimizer I wrote about.

It's not a Great Optimizer, because there can't be any single Great Optimizer, and I'm not comparing it to the concept of intelligence, but to ourselves. Still.
So that's a rather good instance of the investigation cycle, I think. By contrasting the two, I know more about both the AI and ourselves.

There's an even more subtle, and more general, thing to be found by looking at the optimizer-in-two-realms observation, but I'm not quite awake enough that I can state it here yet. I'm not sure I can see it in its entirety yet, either.

-

[1] Even if it were self-aware in that sense, we have no way of knowing that the self-awareness maps to what it's really doing. If the optimizer is moving a robot body out a door, we have no way of knowing that the animist connection would actually perceive that as moving. What we would call "moving" entails a lot of other changes, e.g. of states in the computer's memory, and perhaps the crossing awareness would be focused on that instead.
davv: The bluegreen quadruped. (Default)
These are interesting.

First: the Lucas-Penrose argument.

Second: a more tangible argument based on solving the busy beaver puzzle.

Things are coming together!

