Incidentally
I think the utility function that tells you the least about the person having it is: "act in such a way as to maximize the ways you can change the world later"[1]. Lots of utility functions could incorporate this one as a subordinate goal, because one possible way to get where you want is to acquire enough of whatever lets you change the world, and then use it.
I wonder if there's a turnpike theorem for utility :)
People (critters, AIs) holding this function would act naturally - if single-mindedly - up until the point where they're going to cash in their figurative chips. Then they would be like the movie killer robots who manage to rid the world of mankind: "Okay, we won. Now what?"
[1] Or, formally speaking: "act so as to maximize the number of different states you can have the world assume at some later point".
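Taken literally, that rule is easy to sketch: pick the action whose successor state leaves the most distinct world-states reachable within some horizon. Here's a toy illustration in Python; the little line-world, the `step` function, the `reachable_states` helper and the horizon are all made up for the example, not part of anything above.

```python
def reachable_states(start, transition, actions, horizon):
    """Count distinct states reachable from `start` in at most `horizon` steps."""
    frontier, seen = {start}, {start}
    for _ in range(horizon):
        frontier = {transition(s, a) for s in frontier for a in actions} - seen
        seen |= frontier
    return len(seen)

def best_action(state, transition, actions, horizon):
    """Pick the action whose successor keeps the most future states open."""
    return max(actions,
               key=lambda a: reachable_states(transition(state, a), transition, actions, horizon))

# Hypothetical toy world: positions 0..5 on a line; walking off either end does nothing.
ACTIONS = (-1, +1)
def step(state, action):
    return min(5, max(0, state + action))

print(best_action(0, step, ACTIONS, horizon=3))  # prints 1: from the edge, moving inward keeps more states reachable
```

An agent scored this way never actually cashes in: every choice is made only to keep future choices open, which is exactly the "Okay, we won. Now what?" problem once there's nothing left to acquire.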