Thursday 27 March 2014

Scheduling group presentations using Graph Theory and Sage.

Yesterday (2014-03-26) I spoke at the Embedded Enterprise Exchange about my new first year module. I used a Hangout on air during the talk and you can see it here:

That's not what I'm going to talk about here though. The course I talked about ends with all the 'companies' (groups of 4 students) giving a $\approx 25$ minute talk.

I need to schedule 40ish talks and this needs to fit around the student availability as well as my own. In this post I'll describe how I did this using a combination of Doodle, +Sage Mathematical Software System and Graph Theory.

The beginning of this post will be some of the boring details but towards the end I start talking about the mathematics (so feel free to skip to there...).

First of all I needed to know my students' availability.

For this I simply used Doodle: https://doodle.com. I kind of use Doodle for every meeting I have to schedule (they also offer a cool tool that lets you publish your availability so that students/colleagues have an indication of when you might be able to meet with them).

Here's a screenshot of the responses:


You can't really see anything there as I had to zoom out a lot to grab the whole picture. Doodle allows you to download the information for any given poll in .xls format so I could relatively easily obtain the biadjacency matrix $M$ for my problem, where $M_{ij}$ is $1$ if group $i$ is available for schedule slot $j$ and $0$ otherwise.


The mathematics and code needed.

Once I've got a .csv file (by tweaking the .xls file) of the biadjacency matrix I import that into +Sage Mathematical Software System and convert it to an instance of the `Matrix` class using the following:

import csv

# DATA points at the worksheet's data directory in the Sage notebook
f = open(DATA + 'availabilitymatrix', 'r')
data = [[int(j) for j in row] for row in csv.reader(f)]
f.close()
M = Matrix(data)

I then need to remove any particular scheduled slots that are not picked by any company:

M = matrix([row for row in M.transpose() if max(row) != 0]).transpose()

Once I've done this I can define the bipartite graph (bipartite simply means that the vertices can be separated into two non-adjacent collections):

g = BipartiteGraph(M)

We can then get a picture of this. I do this using a 'partition' (a graph colouring) that will colour the groups (red) and the schedule slots (blue):

p = g.coloring()
g.show(layout='circular', dist=1, vertex_size=250, graph_border=True, figsize=[15, 15], partition=p)

The various options I pass to the `show` command are simply to get the circular arrangement (and other minor things):



The above looks quite messy and what I essentially want is to get as many pairwise matchings between the blue vertices (slots) and red vertices (companies) as possible, so that each schedule slot is attributed at most 1 company and every company has at least 1 schedule slot.

On any given graph $G=(V,E)$ this problem is known as finding a maximum matching and can be written down mathematically:

Max:  $\sum_{e \in E(G)} m_e$
Such that:  $\forall v \in V(G), \quad \sum_{\substack{e \in E(G) \\ v \sim e}} m_e \leq 1$

We are in essence choosing a subset of edges of our original graph (the edges $e$ with $m_e=1$) that is as large as possible, subject to no vertex being incident to more than 1 chosen edge.

This is all explained extremely well at the +Sage Mathematical Software System documentation pages here.

Furthermore, the documentation also gives the code needed to solve the problem:

p = MixedIntegerLinearProgram()
matching = p.new_variable(binary=True)
p.set_objective(sum(matching[e] for e in g.edges(labels=False)))
for v in g:
    p.add_constraint(sum(matching[e]
                         for e in g.edges_incident(v, labels=False)) <= 1)
p.solve()

When I run the above, `p` is now a solved Mixed Integer Linear Program (corresponding to the matching problem described). To obtain the solution:

matching = p.get_values(matching)
schedule = [e for e,b in matching.iteritems() if b == 1]

Calling `schedule` gives a list of edges (denoted by the corresponding vertex numbers):

[(5, 57), (0, 45), (23, 50), (4, 42), (38, 60), (26, 56), (34, 62), (16,
68), (1, 43), (7, 40), (9, 44), (36, 58), (12, 49), (35, 71), (28, 66),
(25, 47), (24, 53), (6, 46), (3, 64), (39, 67), (17, 69), (22, 55), (13,
48), (33, 41), (10, 63), (21, 61), (30, 52), (29, 65), (37, 70), (15,
54), (19, 51), (11, 59)]

It is then really easy to draw another graph:

B = Graph(schedule)
p = B.coloring()
B.show(layout='circular', dist=1, vertex_size=250, graph_border=True, figsize=[15, 15], partition=p)

Which gives:



You can see that the obtained graph has all the required properties and most importantly is a lot less congested.
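As an aside, Sage's graphs also have a built-in `matching` method, so (if I remember the interface correctly) something like the following sketch should give a maximum matching directly, without writing the MILP out by hand:

matching_edges = g.matching()  # should return the edges of a maximum matching
len(matching_edges)            # the size should agree with the MILP solution above

The particular edges chosen might of course differ from the ones the MILP gives.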

Some details.

I'm leaving out some details; for example, I kept track of the names of the companies and also the slots (a rough sketch of that bookkeeping follows the list) so that the final output of all this looked like this:

4InARow: Fri1100
Abacus: Tue0930
Alpha1: Thu1130
Alpha2: Mon0930
AusfallLtd: Fri1230
AxiomEnterprise: Thu1000
Batduck: Thu1430
CharliesAngles: Thu1500
CwmniRhifau: Mon1330
EasyasPi: Fri1130
Evolve: Thu1300
HSPLLtd.: Mon1030
JECT: Tue1200
JJSL: Thu1030
JNTL: Mon1400
JennyCash: Tue1630
KADE: Fri1330
MIAS: Fri1300
MIPE: Thu1100
MLC: Tue1600
Nineties: Mon0900
Promis: Fri1400
R.A.C.H: Tue1530
RYLR: Tue1230
SBTP: Fri1030
Serendipity: Mon1230
UniMath: Tue1300
VectorEnterprises: Tue1330
Venus: Thu1400
codeX: Mon1300
dydx: Wed1630
eduMath: Thu0930

(BatDuck is my favourite company name by far...)
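In case it's useful, here is roughly what that bookkeeping looked like. This is only a sketch: `company_names` and `slot_names` are hypothetical lists that would be built from the rows and columns of the Doodle export (after removing the empty columns), in the same order as the rows and columns of `M`:

# Hypothetical name lists, in the same order as the rows/columns of M
company_names = ['Batduck', 'Evolve', 'KADE']    # ... one entry per row of M
slot_names = ['Mon0900', 'Mon0930', 'Tue1200']   # ... one entry per kept column of M
n = M.nrows()  # in BipartiteGraph(M) the companies are vertices 0, ..., n - 1
               # and the slots are vertices n, n + 1, ...
for company, slot in sorted(schedule):
    print(company_names[company] + ': ' + slot_names[slot - n])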

Why did I do this this way?

There are 3 reasons:

1. I shared the schedule with my students through a published Sage sheet on our server. That way they can see a direct applied piece of mathematics and can also understand some of the code if they wanted to.
2. "Point and click doesn't scale" - I'm sure I could have solved this one instance of my problem with pen and paper and some common sense faster than it took me to write the code to solve the problem. The thing is next year when I need to schedule these talks again it will at most take me 3 minutes as the code is all here and ready to go. (Most readers of this blog won't need that explained but if any of my students find their way here: that's an important message for you).
3. It was fun.

Sunday 23 March 2014

Basketball and cooperative games in class

Here is the state of my class after my students learnt about cooperative game theory (Chapter 16 of my course):


Cooperative game theory is concerned with games in which coalitions are allowed to form. In my class the particular aspect we consider is the Shapley value, which is of interest when sharing a particular commodity amongst a group of players who each contribute to it differently.

I use this in practice when dealing with attributing individual marks to a group project: you can read about that here.

To introduce this I asked 3 students to come down and play a game: $S$, $L$ and $C$ volunteered (or at least gave a lazy nod when I asked if anyone had played basketball before).

They had 20 seconds to get as many crumpled up pieces of paper as possible in the bin above (it was on the table). The paper was stapled and it needed to be separated and crumpled so I suggested that perhaps 1 or 2 of them could spend some of the 20 seconds separating and crumpling.

We played this 3 times and it was all good fun. After $S$, $L$ and $C$ had their go 3 other students thought they could do better (no comment).

After that I asked the class how we could share out the 10 chocolates between $S$, $L$ and $C$: what would be the fair way? After some discussion it was clear that, for example, $L$ had made the biggest contribution to the group, and so this was not a trivial question.

After a little while I started to ask 'how many do you think $L$ and $C$ would have had together?' etc. We wrote down the characteristic function for our game:


$$v(C)=\begin{cases}4, & C=\{S\}\\2, & C=\{C\}\\6, & C=\{L\}\\4, & C=\{S,C\}\\8, & C=\{S,L\}\\10, & C=\{C,L\}\\10, & C=\Omega\end{cases}$$


where $\Omega$ denotes the grand coalition.

At this point one student suggested that we somehow needed to take all this information into account. He couldn't quite articulate how to do that but he's completely correct.

The way to share the chocolates 'fairly' is to consider all permutations of the players and consider marginal contributions for each permutation (take a look at my notes to read about that some more).

Let me try and explain with an example:

For the first permutation $\pi=(S,C,L)$ we calculate the contribution of $S$ first: $4$. We then calculate the marginal contribution of $C$ to the coalition $\{S,C\}$: since that coalition has value $4$ from above, $C$ contributes $0$. Finally $L$ contributes the remaining $6$.

For the second permutation $\pi=(S,L,C)$ we calculate the contribution of $S$ first: $4$ (again). We then calculate the marginal contribution of $L$ to the coalition $\{S,L\}$: since that coalition has value $8$, $L$ contributes $4$. Finally $C$ contributes the remaining $2$.

Here's all the marginal contributions for each permutation:

  • $(S,C,L)$ gives: $(4,0,6)$
  • $(S,L,C)$ gives: $(4,2,4)$
  • $(C,S,L)$ gives: $(2,2,6)$
  • $(C,L,S)$ gives: $(0,2,8)$
  • $(L,S,C)$ gives: $(2,2,6)$
  • $(L,C,S)$ gives: $(0,4,6)$
We calculate the average of these marginal contribution vectors (each indexed by the players $(S,C,L)$) to give the Shapley value:

$$\phi=(2,2,6)$$
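For anyone who wants to check the arithmetic, here is a short sketch (plain Python, nothing Sage-specific) that computes the Shapley value directly from the characteristic function above:

from itertools import permutations

# The characteristic function from the class game (coalitions stored as
# tuples of player names in alphabetical order, values in chocolates)
v = {(): 0,
     ('S',): 4, ('C',): 2, ('L',): 6,
     ('C', 'S'): 4, ('L', 'S'): 8, ('C', 'L'): 10,
     ('C', 'L', 'S'): 10}

def value(coalition):
    return v[tuple(sorted(coalition))]

players = ('S', 'C', 'L')
shapley = dict((player, 0) for player in players)

for permutation in permutations(players):
    coalition = []
    for player in permutation:
        # Marginal contribution of the player joining those already present
        shapley[player] += value(coalition + [player]) - value(coalition)
        coalition.append(player)

for player in players:
    shapley[player] /= 6.0  # average over the 3! = 6 permutations

print(shapley)  # gives S: 2.0, C: 2.0, L: 6.0 (up to dict ordering)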

This was good fun and I'm really happy with the discussion that led us to the calculations; it was also fun to get students to mess around in class.

Here's one (of my most viewed!) +YouTube videos that I put together on it:


Saturday 22 March 2014

Matching games in class

Last week after looking at stochastic games we looked at matching games (corresponding to Chapter 15 of my course). There's an awesome video on +YouTube describing the problem:



I thought about possibly getting 6 students to form two groups (a set of reviewers and a set of suitors), ranking each other, and then getting the class to put together a stable matching. I thought twice about this though as I'm sure it's the kind of thing that could get me fired, so instead I brought 3 toys along to class with me:


Both of those you can see in this picture:


  • The third toy I brought was my favourite nerf gun (the 'Praxis'):



I actually named 'her' Zoe, after my wife (+Zoë Prytherch).

Three students ('A', 'S' and 'R') came down and agreed to put together their ranking of which toy they preferred:

- A: Donatello, Zoe, Deck
- S: Zoe, Donatello, Deck
- R: Donatello, Zoe, Deck

I came up with a pretty random preference for the toys:

- Donatello: R, S, A
- Deck: A, R, S
- Zoe: R, S, A


We then discussed possible matchings and allocations of the toys (if we came to a stable one, A, R and S could stick with their toy for the rest of the lesson). Here is one example that's close to what was suggested (I didn't record it and I don't quite remember it):

- A gets Donatello;
- S gets Zoe;
- R gets Deck

The thing with this is that Donatello prefers R to A and R prefers Donatello to the Deck, so Donatello and R form a blocking pair and the above is not stable.

After some more discussion we came up with the following stable matching:


We see that whilst A is with his 'least preferred' option (the Deck), no toy prefers A to its current match, so A cannot form a blocking pair with anything he would rather have.

The entire matching is stable because we can't find any pair that has an incentive to move.

In class we then went over the Gale-Shapley algorithm, which guarantees a stable matching. A cool thing is that this algorithm gives (out of all possible stable matchings) the one that is optimal from the suitors' point of view and also the worst possible one from the reviewers' point of view.

In our case the matching we found above is also the one given by the Gale-Shapley algorithm, so sadly for 'A' there is no stable way of matching him to anything but his least preferred toy, and 'S' gets my wife no matter what.
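For completeness, here is a rough sketch (my own quick implementation, not from any particular library) of the suitor-proposing Gale-Shapley algorithm run on the preferences above:

# Preferences as above, best first
suitor_prefs = {'A': ['Donatello', 'Zoe', 'Deck'],
                'S': ['Zoe', 'Donatello', 'Deck'],
                'R': ['Donatello', 'Zoe', 'Deck']}
reviewer_prefs = {'Donatello': ['R', 'S', 'A'],
                  'Deck': ['A', 'R', 'S'],
                  'Zoe': ['R', 'S', 'A']}

def gale_shapley(suitor_prefs, reviewer_prefs):
    """Return the suitor-optimal stable matching as a dict suitor -> reviewer."""
    free = list(suitor_prefs)                      # suitors not yet matched
    next_proposal = dict((s, 0) for s in suitor_prefs)
    engaged = {}                                   # reviewer -> suitor
    while free:
        suitor = free.pop()
        reviewer = suitor_prefs[suitor][next_proposal[suitor]]
        next_proposal[suitor] += 1
        if reviewer not in engaged:
            engaged[reviewer] = suitor
        elif reviewer_prefs[reviewer].index(suitor) < reviewer_prefs[reviewer].index(engaged[reviewer]):
            free.append(engaged[reviewer])         # the old suitor is jilted
            engaged[reviewer] = suitor
        else:
            free.append(suitor)                    # the proposal is rejected
    return dict((s, r) for r, s in engaged.items())

print(gale_shapley(suitor_prefs, reviewer_prefs))
# {'A': 'Deck', 'S': 'Zoe', 'R': 'Donatello'}: the stable matching above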


Thursday 20 March 2014

Playing Stochastic/Markov games in class

In class on Monday my students and I looked at Stochastic games (Chapter 14 of my course notes). I decided to play a tournament similar to the iterated games that we played previously (corresponding to Chapters 9 and 10). I've blogged about them here and here.

In short, a stochastic game is a collection of states that themselves correspond to games. After players select strategies at a given state/game the next game they play (or state they are in) is given by a probability distribution.

This is a bit like playing in a casino where the table you play at depends on what happened at the table you played at previously.

*The game we played in class corresponds to a casino with two tables:*

- At the first table we play a simple prisoner's dilemma (players aiming to maximise their utility):

$$\begin{pmatrix}
(2,2)&(0,3)\\
(3,0)&(1,1)
\end{pmatrix}$$

The probability distribution at this table (indicating what game we played next) is given by:

$$\begin{pmatrix}
(.75,.25)&(0,1)\\
(0,1)&(.5,.5)
\end{pmatrix}$$

    - If both players cooperate (getting a utility of $2$ each) then there is a 75% chance that they play at this same table again.
    - If only one player cooperates and the other defects (the defector getting a utility of $3$ and the cooperator a utility of $0$) then the players move to the other game with 100% probability.
    - If both players defect (each getting a utility of $1$) then they will play the same game with 50% chance.

- The second table was a dummy game that basically ended the game:

$$\begin{pmatrix}
(0,0)
\end{pmatrix}$$

The probability distribution for this game is $(0,1)$.

The assumption made in this class is that the game is played over an infinite horizon and as such a discounting factor $\delta$ is used. When playing this in class we interpreted the discounting factor as the probability of the game continuing, so the game ended either when the transition probabilities put us in the second game or when the discounting factor said so (we used $\delta=1/2$).
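To make the states and transitions concrete, here is a rough simulation of the 'casino' (my own sketch: the payoff and transition matrices are the ones above and, as in class, $\delta$ is read as the probability that play carries on after each round):

import random

# State 0 is the prisoner's dilemma table, state 1 is the dummy table that
# ends play. Actions are 'C' (cooperate) or 'D' (defect).
PAYOFFS = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
           ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}
STAY_AT_TABLE_0 = {('C', 'C'): 0.75, ('C', 'D'): 0,
                   ('D', 'C'): 0, ('D', 'D'): 0.5}

def play(strategy_1, strategy_2, delta=0.5):
    """Play one realisation; each strategy maps the current state to an action."""
    state, scores = 0, [0, 0]
    playing = True
    while playing and state == 0:
        actions = (strategy_1(state), strategy_2(state))
        scores[0] += PAYOFFS[actions][0]
        scores[1] += PAYOFFS[actions][1]
        # Move to the absorbing dummy table unless we stay at table 0
        if random.random() >= STAY_AT_TABLE_0[actions]:
            state = 1
        # The discounting factor read as the probability that play continues
        playing = random.random() < delta
    return scores

# Two Markov strategies: always cooperate against always defect at table 0
print(play(lambda state: 'C', lambda state: 'D'))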

The class formed 4 teams and we played a round robin.

The first round had Tom's Team go up against Team Roy and Barmy go up against 4:



We see that Tom's team and Team Roy cooperated both times before the game ended whilst 4 defected from the start against Barmy.

The next round had Barmy go up against Tom's Team and Team Roy go up against 4:



We see that Barmy and Team Roy both cooperated before the game ended and Team Roy and 4 both defected before the game ended.

The last game had Tom's Team go up against 4 and Team Roy go up against Barmy. Below you can also see the final cumulative scores:


As you can see Tom's Team won (the name comes from +Paul Harper's son Tom, who joined the class during which we played the iterated prisoner's dilemma).

To decide who came second (they could choose the second prize to either be apples or party rings) a best of three game of Rock, Paper, Scissors, Lizard, Spock was played with Joe winning the party rings for Team Roy ('Scissors decapitates lizard').




I think this was good fun and hopefully enabled the students to have a sound grasp of what is meant by a state, a transition, etc.

Interestingly all teams happened to play Markov strategies (i.e. their actions depended only on the state they were in, not on the history of play) although this might have been different if the probabilities had allowed the game to last longer. In the course we are restricting ourselves to Markov strategies.

After playing this we went through obtaining 2 equilibria for this game (both players defecting and both players cooperating).

Here are some photos that a student grabbed during the class and also a re-share of the gif of +Paul Harper's son Tom using a Nerf gun on those who defected against his team during the Prisoner's dilemma tournament:





Monday 10 March 2014

Playing a game with incomplete information in class

In class today we took a look at games of incomplete information. Loosely this relates to any kind of game that involves the players not knowing 'everything'.

Next week we'll be looking at stochastic/Markov games (C13 of my course) but this week we took a look at incomplete information in extensive form games (C12 of my course).

We played the following game (a generalization of the matching pennies game that I've blogged about before):

"Player 1 picks Heads or Tails, a coin is flipped and Player 2 is aware of the result of that coin (but not the choice of Player 1), Player 2 then picks Heads or Tails. The is akin to a matching pennies game except that if Player 2 loses whilst picking the same as the random coin then the outcome is $(0,0)$."

Here is a game tree that describes the game:


On a slight tangent our School operates a mid-module feedback system which is not 'official' but allows us to get some feedback from students with regards to how the course is going so far. One piece of feedback I got was something like:

"The chocolates are making me fat bring fruit instead."

I'm pretty sure this was tongue in cheek but I brought in the following as a prize for the winner of a 3 player round robin (using the above game):

(A melon and some fizzy cherries)

As I said, I got 3 students (referred to as S, A, R) to play a round robin tournament where each would play each other twice (swapping who was player 1).

The results are here (thanks to +Jason Young for collecting them):


In that photo we see the following match ups:

- Game 1 and Game 2: A versus S
- Game 3 and Game 4: R versus S
- Game 5 and Game 6: A versus R

What we see is that R actually won with a final cumulative score of 3, A finished with -1 and S with -2. Looking through the details we see that A and S both went "against" the coin both times whereas R went "against" the coin once.

I asked the class if anyone had an idea of how one should play the game. One student (who I think was from the team Roy crowd) yelled out that one should always go with the coin.

This is exactly correct; the way to 'prove it' is to obtain the corresponding normal form game for the extensive form game. We do this by taking expected values over the random nodes:

We have $S_1=\{H,T\}$ and $S_2=\{HH, HT, TH, TT\}$. The strategy set for player 2 indicates what player 2 should do for each possible flip of the random coin: so $HT$ indicates playing $H$ if the coin flips Heads and playing $T$ if the coin flips Tails (so $HT$ corresponds to agreeing with the coin, $TH$ corresponds to going against the coin etc...).
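Here is a quick sketch of how those expected values can be computed. Note that the payoff convention below is my reading of the rules above: player 1 wins $(1,-1)$ when the two choices match, player 2 wins $(-1,1)$ otherwise, except that a losing player 2 who agreed with the coin gives $(0,0)$:

from fractions import Fraction

def outcome(choice_1, choice_2, coin):
    """Payoff pair (player 1, player 2) for a single realised coin flip."""
    if choice_1 == choice_2:                     # player 2 loses
        return (0, 0) if choice_2 == coin else (1, -1)
    return (-1, 1)                               # player 2 wins

half = Fraction(1, 2)
for choice_1 in 'HT':
    row = []
    for strategy in ('HH', 'HT', 'TH', 'TT'):
        # strategy[0] is player 2's response to Heads, strategy[1] to Tails
        expected = [sum(half * outcome(choice_1, strategy[i], coin)[k]
                        for i, coin in enumerate('HT'))
                    for k in (0, 1)]
        row.append(tuple(expected))
    print(row)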

Using the above strategy ordering we get the following normal form game:

$$
\begin{pmatrix}
(1/2,-1/2)&(-1/2,1/2)&(0,0)&(-1,1)\\
(-1,1)&(-1/2,1/2)&(0,0)&(1/2,-1/2)
\end{pmatrix}
$$

I will skip the analysis of this game (which we did in class) but it can be shown that the Nash equilibrium is for player 1 to play Heads with probability between 1/3 and 2/3 and player 2 to agree with the random coin.
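As a quick sanity check of that claim, in a Sage session one can compute player 2's expected utility for each of the four strategies against player 1 playing Heads with probability $p$:

p = var('p')

# Player 2's utilities from the matrix above: rows are player 1's choices
# (H, T), columns are player 2's strategies (HH, HT, TH, TT)
B = [[-1/2, 1/2, 0, 1],
     [1, 1/2, 0, -1/2]]

# Expected utility of each column against the mixed strategy (p, 1 - p)
[(p * B[0][j] + (1 - p) * B[1][j]).expand() for j in range(4)]
# [-3/2*p + 1, 1/2, 0, 3/2*p - 1/2]
# HT (agree with the coin) is a best response exactly when
# 1/2 >= -3/2*p + 1 and 1/2 >= 3/2*p - 1/2, i.e. when 1/3 <= p <= 2/3.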

Importantly, at the end of the game the victor, R, gave the melon to A. I hope he enjoys it and also that my students found this exercise slightly useful and perhaps slightly more memorable.

Sunday 9 March 2014

A (very) brief interactive look at evolutionary game theory in class

In class last week my students and I started looking at evolutionary game theory (Chapters 11 and 12 of my class). One concept of evolutionary game theory that is important to understand is that in a sense the barrier between strategies and players becomes a bit fuzzy.

To try and illustrate this in class I brought in 2 packs of cards. I actually ended up only using the cards as binary markers (whether they were facing 'UP' or 'DOWN'). I then proceeded to describe the following:

"If an UP card interacts with a DOWN card in any given round the DOWN card changes to UP on the next round. Otherwise everything else stays the same."

I used this game to illustrate how a strategy $\sigma$ can induce a population vector $\chi$ and I also touched upon what we would mean by $\sigma$ being stable.

We played the following games:

---------------

The left half of the class was told to play UP and the other half to play DOWN. Thus our initial value was $\chi=(.5,.5)$. This was given by the fact that half the population was playing $\sigma=(1,0)$ and the other half playing $\sigma=(0,1)$.


We played a couple of rounds (which was fairly academic as the outcome is obvious) and arrived at a final population vector of $\chi=(1,0)$ (all the DOWN cards had been changed to UP cards). This is a stable population.

I asked what would happen if I nudged the population by introducing some more DOWN cards into the population, to perhaps $\chi=(.9,.1)$. Everyone realised that the population would swiftly move right back to $\chi=(1,0)$.

We also talked about what would happen if we started with $\chi=(0,1)$ (all cards started as DOWN), everyone realised that $\chi$ would not change over time as we played (since there were no UP cards to force a change). This is also a stable population.

It's obvious though that if we introduce some UP cards (say nudging the population to $\chi=(.1,.9)$) then the population would swiftly move to $\chi=(1,0)$.

The difference between these two stable populations is that only the first (all UP) stays stable when it is perturbed.

That basically leads us to the definition of an evolutionary stable strategy.
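Here's a small sketch of that card dynamic in code (my own quick simulation, not something we used in class): each round the cards are paired up at random and any DOWN card that meets an UP card flips to UP.

import random

def evolve(chi, rounds=5, population_size=50):
    """Simulate the card game starting with a proportion chi of UP cards."""
    number_up = int(chi * population_size)
    cards = ['UP'] * number_up + ['DOWN'] * (population_size - number_up)
    for _ in range(rounds):
        random.shuffle(cards)
        for i in range(0, population_size - 1, 2):
            if {cards[i], cards[i + 1]} == {'UP', 'DOWN'}:
                cards[i] = cards[i + 1] = 'UP'  # the DOWN card flips
        print(cards.count('UP') / float(population_size))

evolve(.5)   # quickly heads to 1.0: the all-UP population is stable
evolve(.9)   # a nudged all-UP population goes straight back to 1.0
evolve(0)    # an all-DOWN population just stays at 0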


---------------

The next game we played was to ask students to randomly (with equal probability) assign themselves a strategy: so everyone was playing a strategy $\sigma=(.5,.5)$.

I won't go into the details of what we did with that (mainly re-confirming the above conclusions) but the important part is to see that any given population vector $\chi$ can be induced by a strategy vector $\sigma$. This leads to the idea of considering whether or not a strategy is evolutionarily stable, which corresponds to whether or not it is stable in the population that it induces.

---------------

With more time I would have liked to do more with the cards and played more games...

---------------

Tomorrow we'll be talking about pairwise contest games and the connection between normal form games and evolutionary games.

Here's a video that I put together a while ago that shows some code that allows us to investigate emergent behaviour: