Mar. 31st, 2013

BOBW

Mar. 31st, 2013 09:10 pm
There are artificial intelligence algorithms for problems where you face a choice whose outcome you don't fully know, yet you have to pick one of the options anyway. After picking, you learn a little about the option you chose, but nothing about the ones you didn't, and the problem is to find the best succession of choices: neither dwelling too long on what seems best so far, nor going to the other extreme and spreading yourself too thin over everything untried.

In certain situations, these algorithms do better than people.
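This is the multi-armed bandit setting, and one standard algorithm for it is UCB1 (my choice of example; the post doesn't name one). The idea: pull each arm once, then always pull the arm whose average reward plus an uncertainty bonus is highest, so rarely-tried arms keep getting reconsidered. A minimal sketch:

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """UCB1: try each arm once, then pick the arm maximizing
    (mean reward so far) + sqrt(2 * ln t / times pulled)."""
    counts = [0] * n_arms      # how often each arm was pulled
    sums = [0.0] * n_arms      # total reward per arm
    for t in range(horizon):
        if t < n_arms:
            arm = t            # initial round: one pull per arm
        else:
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts, sums

# Toy example: Bernoulli arms with hidden success probabilities.
random.seed(0)
probs = [0.2, 0.5, 0.8]
counts, sums = ucb1(lambda i: 1.0 if random.random() < probs[i] else 0.0,
                    len(probs), 2000)
# After 2000 pulls, the best arm (index 2) dominates the pull counts.
```

The uncertainty bonus shrinks as an arm gets pulled more, which is exactly the stick-with-the-known versus try-something-new trade-off above, made quantitative.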

I wonder if one could use these algorithms to teach people to be better at this kind of choice. It seems a pretty fundamental problem: stick with the known or try something new? And so it might be useful.

In practice, I suspect the hidden assumptions will make it hard. It's a bit like how game-theoretic decision theory says people act suboptimally, yet once you drop some of its assumptions, it becomes doubtful whether what you were optimizing was the right objective in the first place.
(Seemingly) technical contradiction
And here's something I'm curious about, though I probably shouldn't ask here.

Say F is a boolean formula in disjunctive normal form over the input variables x_1 ... x_n, and we're also given some k with 0 < k <= n.

Then, is it possible to express the boolean function

"exactly k of x_1 ... x_n are true" AND F

in DNF without the result blowing up exponentially? (I suspect not.)
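To make the blow-up concrete: the cardinality constraint alone is already bad in DNF. Any term consistent with "exactly k true" must fix all n variables (a term leaving some x_i free would also accept an assignment with a different weight), so the DNF needs one full term per satisfying assignment, i.e. C(n, k) terms, which peaks near 2^n / sqrt(n) at k = n/2. A quick illustration (my own sketch, not from the post):

```python
from itertools import combinations

def exactly_k_dnf(n, k):
    """Naive DNF for 'exactly k of x_1..x_n are true':
    one full conjunct per k-subset of true variables,
    so C(n, k) terms in total."""
    terms = []
    for true_set in combinations(range(1, n + 1), k):
        term = [f"x_{i}" if i in true_set else f"~x_{i}"
                for i in range(1, n + 1)]
        terms.append(" & ".join(term))
    return terms

# C(4, 2) = 6 terms, e.g. 'x_1 & x_2 & ~x_3 & ~x_4'
print(len(exactly_k_dnf(4, 2)))
# C(10, 5) = 252 terms already for n = 10
print(len(exactly_k_dnf(10, 5)))
```

Conjoining this with an arbitrary DNF F can only distribute into more terms, which is why a polynomial-size DNF for the combination seems unlikely.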
