# Playing Codenames: When Frequentist Statistics Becomes Optimal

## The Game

Codenames is a popular card game by Vlaada Chvátil, published in 2015. Each turn, the clue giver of one of the two competing teams announces a one-word clue together with a number indicating how many words on the table relate to that clue, and the guessers then try to identify as many of their team's words as possible. Of the 25 words on the table, nine belong to the starting team and eight to the other team; seven are neutral, and one is the black assassin, which loses the game for the team that guesses it. Only the clue givers know which words belong to which team. The first team to have all of its words guessed wins.

The following pictures show a game in progress, where every guessed word is covered with the owning team's color (beige is neutral). The color key at the bottom is shown only to the clue givers.

## The Bayes formula

The guessers guess the words in sequence, up to one more than the number specified by the clue giver. For each guess, the guessers need to find the word that has the highest probability of belonging to their team. For each candidate word, this probability is given by the Bayes formula:

$$P(Guessed \, word \in Own \, team\text{'}s \, words \mid Clue) = \frac{P(Clue \mid Guessed \, word \in Own \, team\text{'}s \, words) \cdot P(Guessed \, word \in Own \, team\text{'}s \, words)}{P(Clue)}$$

Since each word is randomly assigned to a team, the prior $P(Guessed \, word \in Own \, team\text{'}s \, words)$ is the same for every word. Additionally, the denominator $P(Clue)$ is fixed for any given clue. Therefore we can simplify the equation to:

$$P(Guessed \, word \in Own \, team\text{'}s \, words \mid Clue) \propto P(Clue \mid Guessed \, word \in Own \, team\text{'}s \, words)$$

That is, given a clue, the probability that a word belongs to the guessers' team is proportional to the probability that the clue giver would choose this clue if that word needed to be guessed. The left-hand side is what we want, but it is extremely hard to compute directly given the strategies of the game. The right-hand side is more tractable. In practice, the guessers should consider the turn's candidate target words as a whole set and plug that set into the formula.
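The proportionality above means ranking words by likelihood is the same as ranking them by posterior probability. As a minimal sketch, suppose we could somehow estimate the likelihood of the clue for each candidate word (the numbers below are invented purely for illustration):

```python
# Hypothetical likelihoods P(clue | this word must be guessed),
# with made-up values for illustration only.
likelihoods = {
    "LEECH": 0.30,
    "DINOSAUR": 0.25,
    "LION": 0.10,
    "COW": 0.05,
}

# With a uniform prior and a fixed clue, the posterior is proportional
# to the likelihood; dividing by the total turns scores into probabilities.
total = sum(likelihoods.values())
posterior = {word: p / total for word, p in likelihoods.items()}

# Guess words in descending order of posterior probability.
ranking = sorted(posterior, key=posterior.get, reverse=True)
print(ranking)  # ['LEECH', 'DINOSAUR', 'LION', 'COW']
```

The normalization step never changes the ordering, which is why the guessers can work with the unnormalized likelihoods directly.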

## An Example

Consider the following example with eight words left, where the clue is "animal: 3". The words are laid out in a semantic map to facilitate thinking. Which three words related to "animal" should you guess?

The first three words that come to mind are probably {COW, PIG, LION}, because they are most similar to our prototype of "animal". But this is probably not the correct set: if it were, the clue giver would have said "mammal: 3" to be clearer. The correct way of thinking evaluates the right-hand side of the formula, $P(Clue \mid Guessed \, word \in Own \, team\text{'}s \, words)$. Next, consider the set {COW, PIG, CHICKEN}. "Animal" is unlikely to be the clue here either, since "domestication: 3" would be much better. What about {COW, CHICKEN, DINOSAUR}? Since there is no simple way to describe this grouping, the clue giver would probably give up and split it across two turns, say "bird: 2" for {DINOSAUR, CHICKEN} and "beef: 1" for {COW}. After evaluating the remaining candidates, we can conclude that {LEECH, DINOSAUR, LION} and {LEECH, DINOSAUR, CHICKEN} are the best sets to guess, because the probability that they would yield the clue "animal: 3" is highest. Since the guessers guess one word at a time, they should guess LEECH and DINOSAUR first, followed by either LION or CHICKEN.

## Conclusion

Our Bayesian inference above turns out to be identical to the frequentist answer. By setting up the game randomly, we make the prior distribution over correct guesses uniform*, which is exactly the case in which Bayesian inference reduces to frequentist inference. This is remarkable, since a truly uniform prior is not common in real-world problems (although Codenames is only a card game). We are maximizing the likelihood function $P(Clue \mid Guessed \, word \in Own \, team\text{'}s \, words)$: finding the word (the parameter) that gives the highest probability of observing the given clue, assuming the word belongs to our team. For frequentists, this is simply maximum likelihood estimation.
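A tiny numerical check of this equivalence, again with made-up likelihoods: under a uniform prior, the word maximizing the posterior (the MAP estimate) is exactly the word maximizing the likelihood, while a skewed prior can change the answer.

```python
def map_estimate(likelihood, prior):
    """Return the word maximizing P(word | clue) ∝ P(clue | word) * P(word)."""
    return max(likelihood, key=lambda w: likelihood[w] * prior[w])

# Made-up likelihoods P(clue | word) for three candidate words.
likelihood = {"LEECH": 0.30, "LION": 0.10, "COW": 0.05}

uniform = {w: 1 / 3 for w in likelihood}            # random word assignment
skewed = {"LEECH": 0.05, "LION": 0.05, "COW": 0.90}  # a non-uniform prior

# With a uniform prior, MAP agrees with maximum likelihood; with a
# skewed prior, the prior can override the likelihood.
print(map_estimate(likelihood, uniform))  # LEECH
print(map_estimate(likelihood, skewed))   # COW
```

This is why the random setup of Codenames matters: it is what licenses dropping the prior from the formula in the first place.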

For most people, the intuitive way to guess is to pick the word(s) most similar to the clue. As the example shows, this fails because the clue giver does not always choose the clue most similar to the target words. We have also assumed the clue giver is highly intelligent and gives the best possible clue; in reality, the clue giver is rarely this optimal, or may have obscure knowledge of animals that links a different set of three. The game thus ultimately comes down to a coordination game: modeling the mind of the clue giver while staying aware of the conditional probabilities.

*The distribution of correct words may no longer be uniform after the first turn of the game. For example, easier words tend to be guessed at the beginning of the game.