davv: (Corvid)
From: [personal profile] davv
The impractical optimizer layout I gave has been discovered before -- that's where I took it from, actually. It's called AIXI. The predictor system is called a "universal prior" (though it does depend on the assumption that the universe is computable - i.e. the Church-Turing thesis - to be truly universal). The attempt at a more practical system is called MC-AIXI (MC for Monte Carlo). I was dimly aware of it (I even linked to it earlier), but I hadn't actually read the paper.
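To give a flavour of what that predictor does, here's a toy sketch (my own illustration, not code from the AIXI papers): weight every short program by 2^-length, keep the ones that reproduce the observations so far, and let them vote on the next bit. The run_program interpreter is just a stand-in.

    # Toy sketch of a Solomonoff-style universal prior over bit-string "programs".

    def run_program(program, n_bits):
        """Stand-in interpreter: read the program as a repeating bit pattern."""
        return [program[i % len(program)] for i in range(n_bits)]

    def universal_predict(observed, max_len=12):
        """Weight every program 2^-length, keep those consistent with the
        observations so far, and let them vote on the next bit."""
        vote = {0: 0.0, 1: 0.0}
        for length in range(1, max_len + 1):
            for code in range(2 ** length):
                program = [(code >> i) & 1 for i in range(length)]
                output = run_program(program, len(observed) + 1)
                if output[:len(observed)] == observed:       # consistent with reality so far
                    vote[output[-1]] += 2.0 ** (-length)      # shorter programs carry more weight
        total = vote[0] + vote[1]
        return vote[1] / total if total else 0.5

    print(universal_predict([1, 0, 1, 0, 1]))   # close to 0: the next bit is very likely 0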

Upon doing just that, amid a lot of mathematics I wasn't really equipped to reason about, I found out that the authors had already thought of my idea. They make use of the clever predictor itself, and say (more or less):

If we have some program that consistently shows up with a high weight in our predictions of reality, then go through this program (and its variants) first.

This is a very nice solution. If reality starts to diverge from the already-stored predictors, they'll drop in weight and disappear. If not, they're handled by the same kind of process as the baseline, so there's never an abrupt jump between the baseline reasoning and "baseline, augmented" reasoning.
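In toy form, the re-weighting looks something like this (the predictor interface here is made up purely for illustration): each stored predictor's weight gets multiplied by the probability it assigned to what actually happened, so a predictor that stops matching reality simply fades out under the same rule as everything else.

    # Toy Bayesian mixture update: a stored predictor that keeps mispredicting
    # sees its weight multiplied down until it is negligible.

    def update_weights(weights, predictors, observation):
        """Multiply each weight by the probability its predictor gave to the
        observation, then renormalise."""
        new = [w * p.prob(observation) for w, p in zip(weights, predictors)]
        total = sum(new)
        return [w / total for w in new]

    class ConstantPredictor:
        """Toy predictor that always expects the same symbol with probability q."""
        def __init__(self, symbol, q=0.9):
            self.symbol, self.q = symbol, q
        def prob(self, observation):
            return self.q if observation == self.symbol else 1.0 - self.q

    predictors = [ConstantPredictor(0), ConstantPredictor(1)]
    weights = [0.5, 0.5]
    for obs in [1, 1, 1, 1, 1]:          # reality keeps producing 1s
        weights = update_weights(weights, predictors, obs)
    print(weights)                        # the 0-predictor's weight is now tiny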

In the plain old impractical optimizer, memoizing like this serves no purpose because the system goes through every possible program anyway. But to be workable in the real world, MC-AIXI samples. That's the Monte Carlo part. And when sampling, one doesn't go through every program, only some of them... so storing what has worked in the past proves useful indeed.
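A rough sketch of that point (purely illustrative -- the real MC-AIXI planner is a Monte Carlo tree search, not this loop, and score() here just stands for "prior weight times how well the program fits the history"): sample a limited batch of candidate programs each step, but seed the batch with programs that scored well before.

    import random

    def score(program, history):
        """Stand-in for 2^-length times goodness of fit to the history."""
        predicted = [program[i % len(program)] for i in range(len(history))]
        fit = sum(p == h for p, h in zip(predicted, history)) / len(history)
        return 2.0 ** (-len(program)) * fit

    def sample_step(cache, history, batch_size=32, fresh=8):
        """Evaluate remembered high scorers plus a few fresh random programs."""
        remembered = sorted(cache, key=cache.get, reverse=True)[:batch_size - fresh]
        random_draws = [tuple(random.getrandbits(1) for _ in range(4)) for _ in range(fresh)]
        for program in remembered + random_draws:
            w = score(program, history)
            if w > 1e-3:
                cache[program] = w            # remember what worked
            else:
                cache.pop(program, None)      # programs that diverged fall out

    cache = {}
    history = [1, 0, 1, 0, 1, 0]
    for _ in range(3):
        sample_step(cache, history)
    print(max(cache, key=cache.get))          # very likely the (1, 0, 1, 0) pattern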

And unlike the impractical optimizer, the Monte-Carlo version isn't uncomputable. It actually works on toy problems with current computers, though it still takes a long time. (Here it plays partially observable Pacman.)

Will this thing take over the world? Probably not. Say the impractical optimizer is like looking at a game and divining optimal play from the entire tree of "if I do this, he does that, then I do this, then he does that" in an instant. The Monte-Carlo variant is like an early computer Go player. More cleverness will probably be needed to make a working General Optimizer... and there's always the question of whether we should be given that kind of power :)

What we really got out of it is:
- From the impractical optimizer, we have an idea of how to do general optimization, even if running it as described is impossible.
- From the Monte-Carlo optimizer, we have something that works, albeit very slowly.

These are footholds. The people who made MC-AIXI haven't climbed the mountain, but they have given some ideas of what kind of climbing could work.

Date: 2013-02-25 03:13 am (UTC)
lhexa: (literate)
From: [personal profile] lhexa
"though it does depend on the assumption that the universe is computable - i.e. the Church-Turing thesis - to be truly universal"

Er, that's not the Church-Turing thesis. It asserts that every sufficiently powerful computing paradigm (for instance, recursive functions, Turing machines, Post machines, lambda calculus) is equivalent in terms of what problems it can solve. "Sufficiently powerful" more or less means having unbounded, random-access memory, from what I can tell.

The universe itself is not computable, what with chaotic dynamical systems showing up all over the place. But approximations suffice in a thankfully large regime.
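A quick illustration with the logistic map x -> r*x*(1-x), a standard chaotic example (my choice of example, with r = 4): two starting points a hair apart stay close for a while and then diverge completely, so short-range approximation is fine while long-range prediction is hopeless.

    # Sensitive dependence on initial conditions in the logistic map at r = 4.
    r = 4.0
    x, y = 0.3, 0.3 + 1e-10          # two initial conditions differing by 1e-10
    for step in range(1, 61):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: difference = {abs(x - y):.2e}")
    # The gap roughly doubles each step: negligible at step 10, order one by step 40.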

Date: 2013-05-09 05:43 am (UTC)
lhexa: (literate)
From: [personal profile] lhexa
"Digital physics?" Sheesh, Wikipedia. :P I think even the term "Strong Church-Turing Thesis" is non-standard...

In some cases, randomness could be compatible with a computable universe. (Heh.) In computability theory, there are a couple of basic machine types, the deterministic and non-deterministic finite automata. The non-deterministic one is free to random-walk through any permissible paths, and a string is in the language if _any_ such path can reach an accepting state. But despite this weirdness, the class of things the non-deterministic ones can compute is identical to the class of things the deterministic ones can. (This is shown by constructing an NDFA to model a DFA, and a DFA to model an NDFA.) So you could have something like that.
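Concretely, the DFA-models-NDFA direction is the subset construction: simulate the non-deterministic machine by tracking the set of states it could currently be in, and that set-tracking is itself a deterministic automaton whose states are sets of the original states. A small sketch (the example automaton is my own; it accepts strings whose second-to-last symbol is 1, from the textbook family where determinizing blows up the state count):

    # Transition relation of a small NDFA over the alphabet {0, 1}.
    NFA = {
        ("q0", "0"): {"q0"},
        ("q0", "1"): {"q0", "q1"},   # non-deterministic: stay, or guess this 1 is second-to-last
        ("q1", "0"): {"q2"},
        ("q1", "1"): {"q2"},
    }
    START, ACCEPTING = "q0", {"q2"}

    def accepts(string):
        current = {START}                 # the set of states the NDFA might be in
        for symbol in string:
            current = set().union(*(NFA.get((q, symbol), set()) for q in current))
        return bool(current & ACCEPTING)  # accept if any path reached an accepting state

    for s in ["0110", "1011", "10", "0100"]:
        print(s, accepts(s))              # True, True, True, False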

Personally, though, I think that the continuity of physical quantities is a bigger hurdle to a computable universe than randomness is. Any observable (like, say, the expectation value of an operator in QM) is some real number, and there's no evidence said numbers are discretized, except where demanded by quantization... and the quantization is needed for convergence of differential equations, not because of discrete physical units. Moreover, even when there is quantization, the quantized values only have rational ratios in special cases: the energy levels of the helium atom's electrons, for instance, are not spaced according to rational ratios. (The hydrogen atom's are: the energies go as 1/n^2.)

Quantum mechanics also makes this hindrance considerably worse, because whereas a classical system can (sometimes) be parametrized by a finite number of real variables, a quantum system requires either a countable (for bound systems) or uncountable (for unbound) number of real values to be specified.
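The hydrogen numbers are easy to check: the bound-state energies go as E_n = -13.6 eV / n^2, so the ratio of any two levels is an exact rational number, which is exactly what helium (with no comparable closed form) fails to give you.

    from fractions import Fraction

    def hydrogen_ratio(n, m):
        """Ratio E_n / E_m for hydrogen bound states, E_n = -13.6 eV / n^2."""
        return Fraction(m * m, n * n)

    for n, m in [(2, 1), (3, 1), (3, 2)]:
        print(f"E_{n}/E_{m} = {hydrogen_ratio(n, m)}")
    # E_2/E_1 = 1/4, E_3/E_1 = 1/9, E_3/E_2 = 4/9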
