First off, if there’s a better place to ask this, I’d appreciate a nudge in that direction.

I’ve seen a lot of chatter on YouTube about Newcomb’s paradox lately (MinutePhysics, Veritasium, Wikipedia), and I’ve been dwelling on it more than I probably should.

To explain the problem briefly for the uninitiated: there is a superintelligent being that knows you to the core and can predict your actions/decisions with very high (99.99+%) accuracy. It presents two boxes, and you may take either just the first box, or both boxes. In the first it puts either $1 million, if it predicts you’ll take just the first box, or $0 if it predicts you’ll take both. In the second it always puts $1,000.

The apparent contradiction is explained in the videos.

The solution I’ve come to is that you should remove your own ability to decide from your “decision” about whether to take the second box.

That is, you walk into the room, flip a coin (or use some similar random chooser), and on heads take both boxes; on tails, just the first.

I think I’m failing to imagine all the consequences of this, and I can’t work out what it would imply about the superintelligence’s choice of whether to put the $1 million into the box.
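One way to poke at this is a toy expected-value calculation. This is just a sketch under an assumption the puzzle doesn’t settle: that the predictor forecasts your *final* action with some accuracy p, where a genuinely unpredictable coin would drag p down toward 0.5. The names `M`, `K`, and `p` are my own labels, not part of the original problem.

```python
M = 1_000_000  # possible contents of the first box
K = 1_000      # guaranteed contents of the second box

def ev_one_box(p):
    # Predictor foresees one-boxing with probability p,
    # so the $1M is present with probability p.
    return p * M

def ev_two_box(p):
    # Predictor foresees two-boxing with probability p and leaves
    # the first box empty; otherwise you collect both sums.
    return p * K + (1 - p) * (M + K)

def ev_coin(p):
    # Fair coin: half the time you one-box, half the time you two-box,
    # with the predictor's per-action accuracy still p in each case.
    return 0.5 * ev_one_box(p) + 0.5 * ev_two_box(p)

for p in (0.5, 0.9, 0.9999):
    print(f"p={p}: one-box={ev_one_box(p):,.0f}  "
          f"two-box={ev_two_box(p):,.0f}  coin={ev_coin(p):,.0f}")
```

Under this assumption, the coin’s expected value sits exactly halfway between the two pure strategies, so randomizing only helps if flipping the coin itself forces p down far enough that two-boxing stops being punished.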

Any thoughts on this?

  • DahGangalang@infosec.pub (OP) · 1 day ago
    Hmmm, that’s a good and interesting follow-on thought. No idea where we’d land for p, nor how we’d begin to calculate it, but it’s an interesting line of thinking.