A discussion at the mathfuture google group ended up with two playful questions:
Throw two dice. I win if the difference is 0, 1, or 2. You win if it is 3, 4, or 5. Wanna play? (Linda Fahlberg-Stojanovska)
Throw two dice. I win if a 1 or a 2 shows on either die. (Not a sum of 1 or 2, just an occurrence of a 1 or a 2.) Otherwise, you win. Wanna play? (Michel Paul)
While simple, the questions provide food for thought (both are unfair, but how can they be adapted to become fair?), and certainly serve as good exercises for a beginning probability class.
The first is massively unfair - the presence of 0, 1, or 2 differences is overwhelming in the sample space of 36 pairs of integers from 1 to 6: the differences 0, 1, 2, 3, 4, 5 occur 6, 10, 8, 6, 4, 2 times, respectively, so the first player wins with probability 24/36 = 2/3.
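These counts are easy to verify by brute-force enumeration of the 36 equally likely rolls; a minimal Python sketch (my own, not from the original discussion):

```python
from collections import Counter

# Tally the absolute difference |a - b| over all 36 rolls of two fair dice.
counts = Counter(abs(a - b) for a in range(1, 7) for b in range(1, 7))
print(dict(sorted(counts.items())))  # {0: 6, 1: 10, 2: 8, 3: 6, 4: 4, 5: 2}

# Rolls won by the player betting on a difference of 0, 1, or 2.
wins = sum(counts[d] for d in (0, 1, 2))
print(wins)  # 24, i.e. a 24/36 = 2/3 chance of winning
```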
The second problem is subtler and sounds rather deceptive: of the 6 possible outcomes of one throw of a single die, a 1 or a 2 comes up with probability 2/6 = 1/3. However, the situation changes drastically when two dice are tossed:
Now the probability of "a 1 or a 2" becomes 5/9! Indeed, the chance that neither die shows a 1 or a 2 is (4/6)² = 4/9, which leaves 1 - 4/9 = 5/9 for the first player.
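The complement computation can be checked exactly with Python's fractions module, alongside a brute-force count of the 36 outcomes (a sketch of my own, not from the original post):

```python
from fractions import Fraction

# A single die avoids {1, 2} with probability 4/6; both dice must avoid it.
p_miss = Fraction(4, 6)
p_win = 1 - p_miss ** 2
print(p_win)  # 5/9

# Brute-force confirmation: count rolls where some die shows a 1 or a 2.
hits = sum(1 for a in range(1, 7) for b in range(1, 7)
           if a in (1, 2) or b in (1, 2))
print(hits)  # 20 out of 36, i.e. 5/9
```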
As an extension, consider an n-sided die. How many 1-toss outcomes m should one pick so as to have more than a fair chance of winning when betting on any of those m outcomes on a toss of two dice? The answer to this question is that any m > n(1 - 1/√2) will do: winning requires 1 - ((n - m)/n)² > 1/2, i.e. (n - m)/n < 1/√2. Since 1 - 1/√2 ≈ 0.293, the minimum for n = 6 is m = 2. For n = 10, m = 3.
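The threshold is easy to turn into a small calculation; the helper name min_faces below is my own, hypothetical choice:

```python
import math

def min_faces(n: int) -> int:
    """Smallest m such that betting on m of the n faces wins with
    probability 1 - ((n - m)/n)**2 > 1/2, i.e. m > n*(1 - 1/sqrt(2))."""
    return math.floor(n * (1 - 1 / math.sqrt(2))) + 1

for n in (6, 10, 20):
    m = min_faces(n)
    # Sanity check: the win probability really exceeds one half.
    assert 1 - ((n - m) / n) ** 2 > 0.5
    print(n, m)  # 6 2, 10 3, 20 6
```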