AI and consciousness, second half
Jan. 10th, 2013 03:49 pm

About consciousness and computers, I gave three possibilities for the effect of consciousness:
- Consciousness might let the conscious being do things no computer can do,
- Consciousness might be one way of doing something an ordinary computer could also do,
- or consciousness might not have any effect on the material self at all.
Here I'll deal with the first possibility, which has both good and bad implications for general optimizers and artificial intelligences.
If consciousness can do things no computer can do, then we'll say consciousness is hypercomputational. (This would also refute the physical version of the Church-Turing thesis.) This is good for conscious beings because it means there's some utility function that rewards what consciousness is uniquely good at. If we build this utility function into a General Optimizer (or AI), and give said optimizer enough power to actually implement whatever it is that imbues the material self with consciousness, then, in its pursuit of maximizing the function, the Optimizer will construct a conscious module and use it, if it can conceive of it.
Again, I'll use the example of consciousness being used for planning. The general framework is the same as before: the material self pushes and pulls at the conscious self by being a man in the middle (representing reality in a certain manner) and, more bluntly, by coloring certain sensations bad and others good. The difference from the middle example is that no computer can be as good at (this particular brand of) planning as a conscious self, so if the Optimizer fails to implement consciousness while this kind of planning is useful for the fitness function, the Optimizer is doing a poor job of finding out what's actually possible. It's failing to maximize well.
(We'd better hope that an Optimizer that does start integrating consciousness doesn't go overboard on the punishment! Otherwise these conscious selves could find themselves in figurative hell worlds. Some people think nature is actually doing that: that our real baseline could be much better than what we're experiencing now. From other angles, there's also Maya, though I know less about it.)
In this setting, non-conscious beings only exist to the degree that evolution optimizes for something that has no use for consciousness, or to the degree that evolution is blind. So bacteria would likely not be conscious, but more creatures would be conscious than in the middle example. Animals, for example, could quite well be.
So that's the good thing - there's less of a chance for consciousness to just be wiped out, since it is superior to computers in at least one respect. What's the bad thing? Well, because consciousness is hypercomputational, any kind of Optimizer that assumes reality is computational might be really surprised at best and fail to see the option entirely at worst.
Remember the impractical optimizer from the very beginning. Its prediction function is very general, but it carries the inherent assumption that the universe is computational. Thus it can't predict the benefits of building consciousness: at best it will assume that the conscious self is a very effective computer (and then be even more surprised when the conscious self comes into being). At worst, it'll be blind to the configuration needed to actually integrate consciousness in the first place, because it can't conceive of the existence of consciousness itself.
Here we bump into the no-free-lunch theorem. This theorem says that every optimizer is just as good as any other when averaged over the entire set of every possible problem that can be described, so if an optimizer is better in one place, it's worse in another. Creationists use this to argue for creation because "no natural process could do better than chance over every possible problem", but they fail to realize that "every possible problem" is a much larger space than "every possible problem in the universe".
For the universe itself, the metric of shortest computable description length (the universal prior) seems to do well enough. It does well on the kinds of problems that appear in the physical universe, or at least its computable parts. But here's the snag: we can't know how badly it would do on the universe's hypercomputational parts, because we don't know what the hypercomputation would be like. Even if an oracle told us consciousness is hypercomputational, we wouldn't know in what sense. So there's a danger in making an AI that searches under a very strong computational-universe assumption: it could fail to find hypercomputers that evolution did find. The search seems impossible!
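(A toy sketch, not from the post, of what a length-based prior like the universal prior does: each candidate hypothesis gets prior mass 2^(-description length), and observations then reweight the candidates by whether they fit. The hypotheses and bit counts below are made up for illustration.)

# Toy illustration of a length-weighted ("universal-prior"-style) score.
# The candidate "programs" and their description lengths are invented.
hypotheses = {
    "all zeros":        {"length_bits": 5,  "predicts": lambda t: 0},
    "alternating 0101": {"length_bits": 9,  "predicts": lambda t: t % 2},
    "zeros then ones":  {"length_bits": 14, "predicts": lambda t: 0 if t < 4 else 1},
}

observed = [0, 1, 0, 1, 0, 1]   # the data seen so far

def posterior(hypotheses, observed):
    weights = {}
    for name, h in hypotheses.items():
        prior = 2.0 ** -h["length_bits"]        # shorter description => larger prior
        fits = all(h["predicts"](t) == bit for t, bit in enumerate(observed))
        weights[name] = prior if fits else 0.0  # deterministic hypotheses either fit or die
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()} if total else weights

print(posterior(hypotheses, observed))
# {'all zeros': 0.0, 'alternating 0101': 1.0, 'zeros then ones': 0.0}

Any hypercomputational pattern simply has no entry in a table like this, which is exactly the failure mode described above.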
But I pull another rabbit out of my hat. Consider our own existences. We're obviously in a world where something is conscious, otherwise we would not be experiencing it subjectively or intentionally at all. So we already know there is something that is conscious (us) and a seeking mechanism that can find consciousness and integrate it (evolution). What NFL takes away, the anthropic principle returns to us.
So what we have to do in this case is simply (heh) to find out how consciousness is linked to us and what aspect of evolutionary search lets it find consciousness and integrate it. Then, if we want a lifeline, we can pattern the optimizer on this search strategy - or just enough of it that we know consciousness will be used if it's around. But to me it would seem better to take control directly: to determine the link to consciousness directly and then have it be part of what directs the optimizer.
Such an AI would be more than just intelligent. It would be conscious, and it would be consciously directed to a much greater degree than we are. If we knew the nature of consciousness, we could make sure the optimizer was very conscious indeed. We could also ensure it would have a good existence.
And perhaps such an AI, once it has granted control to consciousness, would no longer be utilitarian. Who knows? It depends on the nature of consciousness. (I think there's a Kantian argument for deontology somewhere there, basically going: we don't know the nature of consciousness, so we don't know what is good for consciousness, hence any utilitarian reasoning could fail, so we should instead set rules for what we shouldn't be doing so that the conscious self is free to act within the area given by those rules.)
no subject
Date: 2013-05-09 06:13 am (UTC)

I'll quibble with this. One example of hypercomputation would be the accurate prediction of chaotic systems for arbitrarily long times. The universe can do this (and apparently does so, given how many things will diverge from the best available simulations given enough time), our computers can't, and the difference is quite observable. As for consciousness: people are chaotic systems, but people are generally good at predicting what they'll do next, and to a lesser extent what certain others will do. I can predict with high confidence that a friend will meet me for lunch Friday, chaos notwithstanding. Maybe that's hypercomputation. :P
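(A minimal sketch of that divergence, using the logistic map as a stand-in chaotic system; the starting values are arbitrary.)

def logistic(x, r=4.0):
    # The logistic map at r = 4 is chaotic: nearby trajectories separate exponentially.
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-10          # two almost identical initial conditions
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - y):.3e}")
# The gap keeps growing until it is of order 1, so a fixed-precision
# simulation stops being predictive after enough steps, even though the
# underlying rule is exact.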
Here we bump into the no-free-lunch theorem. This theorem says that every optimizer is just as good as any other when averaged over the entire set of every possible problem that can be described, so if an optimizer is better in one place, it's worse in another.
Er, is that actually a theorem? It seems to me it would be a hypothesis at best.
The main reason I'm cagy about the question of AI is that I don't know what sorts of things will be called "computers" in the future...
no subject
Date: 2013-07-06 07:49 pm (UTC)

Does the universe predict chaos? I only know of the universe making chaos happen rather than predicting it.
Granted about the other examples, though. A very simple example would be some kind of hypercomputational effect that permits one to make a halting oracle. If the halting oracle is correct for everything one throws at it, one can be reasonably certain that hypercomputation is going on.
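(A minimal sketch of that test; the claimed_oracle black box is hypothetical, and of course no Turing-equivalent program can pass such a test for every possible input.)

# Hand a claimed halting oracle programs whose behaviour we already know,
# and check whether it keeps getting them right.
KNOWN_CASES = [
    ("def f(n): return n + 1",                          "0", True),
    ("def f(n):\n    while True: pass",                 "0", False),
    ("def f(n):\n    while int(n) > 0: n = int(n) - 1", "5", True),
]

def score_oracle(claimed_oracle):
    """Fraction of known cases the claimed oracle answers correctly."""
    correct = sum(
        claimed_oracle(source, arg) == halts
        for source, arg, halts in KNOWN_CASES
    )
    return correct / len(KNOWN_CASES)

# A naive guesser that says "everything halts" only scores 2/3; a box that
# never fails, no matter what programs we invent, would be evidence of
# hypercomputation.
print(score_oracle(lambda source, arg: True))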
Er, is that actually a theorem? It seems to me it would be a hypothesis at best.
Quoting from www.no-free-lunch.org:
"The no free lunch theorem for search and optimization (Wolpert and Macready 1997) applies to finite spaces and algorithms that do not resample points. All algorithms that search for an extremum of a cost function perform exactly the same when averaged over all possible cost functions. So, for any search/optimization algorithm, any elevated performance over one class of problems is exactly paid for in performance over another class."
The general idea is that if every cost function is equally likely, then the probability that the optimizer will see any given sequence of output values is independent of the nature of the optimizer.
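(A minimal sketch of that idea on a tiny finite space, not from the paper: enumerate every possible cost function on three points, run two different non-resampling search strategies, and check that they see exactly the same histogram of value sequences.)

from collections import Counter
from itertools import product

X = [0, 1, 2]                   # the whole (tiny) search space
Y = [0, 1]                      # possible cost values
all_functions = [dict(zip(X, ys)) for ys in product(Y, repeat=len(X))]

def run(strategy, f):
    # Apply a non-resampling strategy to cost function f; return the observed values.
    seen, values = [], []
    while len(seen) < len(X):
        x = strategy(seen, values)
        seen.append(x)
        values.append(f[x])
    return tuple(values)

# Strategy 1: plod through the points left to right.
ascending = lambda seen, values: [x for x in X if x not in seen][0]

# Strategy 2: adaptive -- jump to the far end whenever the last value was a 1.
def adaptive(seen, values):
    remaining = [x for x in X if x not in seen]
    return remaining[-1] if values and values[-1] == 1 else remaining[0]

hist1 = Counter(run(ascending, f) for f in all_functions)
hist2 = Counter(run(adaptive, f) for f in all_functions)
print(hist1 == hist2)   # True: averaged over all cost functions, neither strategy wins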
The paper itself is here.
The main reason I'm cagy about the question of AI is that I don't know what sorts of things will be called "computers" in the future...
Yes, and that's a good point when applied to computationalism too. I may not have that much of a problem with a "computationalism" defined over the very general computer I mentioned elsewhere. What does seem to be wrong is a computationalism defined with respect to Turing machines, or to computers of equivalent power.
(Although the mind-body problem might not be one of power; and hypercomputers might not necessarily be sentient in the qualia sense of the word.)
In another way, it's analogous to the concept that any dualist system with communication can be rephrased as a monist system with rules on what the parts can do. So you can be cagy since computers do not need to be limited to Turing-equivalent systems to be called computers; and similarly, it's not the dualist aspect of dualism that's special about dualism, but rather what the mental aspect can do.