Friday, 27 June 2014

Pointing at a blog post about Sage and a treasure hunt

It looks like I might be moving... I've been playing with jekyll + github and really love the workflow so it looks like I might be moving this blog over there.

It's not official yet as I still need to think about a couple of things but in the meantime here's a post I just wrote over there about how the team I was in used Sage in a mathematical treasure hunt:

Monday, 19 May 2014

Reflecting On Evidence For The Benefits Of Active Learning

A paper entitled 'Active Learning Increases Student Performance in Science, Engineering and Mathematics' (by Freeman and 6 co-authors) was recently published and got a fair bit of quasi-popular press exposure as the message seemed to be quite simple: Students learn better when they are not simply an audience during contact time (or in other words: lectures aren't the best). I decided that this was a paper that I really wanted to read so I went through the pain of having to google twice (thus making it 'officially' hard to find) and have just finished reading it.

This is a great paper.

Firstly there is a nice definition of what active learning is:
"Active learning engages students in the process of learning through activities and/or discussion in class, as opposed to passively listening to an expert. It emphasizes higher-order thinking and often involves group work."
The definition given in the paper for a traditional lecture is given as:
"[...] student activity is assumed to be limited to taking notes and/or occasional and unprompted questions of the instructor." 
The definition for active learning is not obtained out of thin air but takes into account input from over 300 audience members at departmental seminars on active learning throughout North America.

What I particularly like about the definition is that it specifically makes a point of mentioning the higher-order thinking.

I was recently lucky enough to attend a talk by Dr. Collin Jones who teaches in Tasmania using an active learning approach. Here is a diagram that I've put together trying (humbly) to get across what he said on the subject of active learning (you might want to open the image in a new tab...):

In that diagram I'm trying to say that if we want students to access the higher levels of Bloom's taxonomy across the various topics in a course, with our help through contact time, then a more active learning approach seems logical.

It looks like Freeman's paper confirms this.

It tells us lots of great stuff.

The first section quickly summarises the findings of rigorously analysing (as far as I can tell the rigour in this paper is great) the results of 225 studies that obeyed the following criteria:
  1. Involved a comparison between traditional lecturing and active learning;
  2. Involved a regular class;
  3. Only looked at comparisons where the delivery method was the main change;
  4. Involved a STEM subject;
  5. Included an evaluation of academic performance.
This last point is what has made this paper really stand up and get noticed. In fact this and 2 other papers (here and here) are the only papers that offer a meta-analysis of student performance. This is not a paper reflecting on teaching methodologies but on the actual effects that teaching methodologies have on student learning.

There are a variety of well presented findings and I will try and summarise them very briefly here:
  1. Students perform just under half a standard deviation better in an active learning environment;
  2. Students are 1.5 times more likely to fail in a traditional lecture environment;
  3. 1. and 2. are not affected by the particular STEM subject;
  4. The gains in 1. for concept-inventory-style assessment are lower than for 'instructor-written' assessment. This is argued (and I think I agree) to be in line with my arrows going through Bloom's taxonomy: in an active learning environment students are more likely to perform well on tasks that require deeper learning;
  5. The gain in 1. is more prominent in smaller classes. This also makes sense (taking a class with a minimum of 1 student, it would be idiotic to lecture; similarly for an infinite class perhaps an active learning approach would not be appropriate).
Having a rigorous basis for all of the above is just plain cool. Potential weaknesses of the study are confidently rebutted in the manuscript itself (for example the number of papers showing no improvement whatsoever that would be needed to lower the impact of the work is shown to be high).


+Stan Yoshinobu has written a brief post about Freeman's paper at The IBL blog which I really like:
"The perspective I like to take is one that is used in medicine. When new techniques or treatments are shown through evidence to provide better care for patients, then the medical community adopts those new practices."
This is something alluded to by Freeman et al. They talk about their paper (and others) being the 'first generation' of research into active learning and suggest that, now that there is evidence for the benefits, a second generation of research could start looking at the best types of active learning for given students etc...

One last thing.

The final point I want to address is a potential weakness of this research, pointed out by the authors themselves:
"The instructors who implemented active learning in these studies did so as volunteers. It is an open question whether student performance would increase as much if all faculty were required to implement active learning approaches."
My personal opinion is that this really hits the nail on the head. Instructors who innovate with their teaching so as to move away from the tried and tested lecture that has been in place for hundreds of years are most probably the academics who have the energy and conviction to put a lot of effort into their teaching.

Often in academia, where the pressures to publish and obtain funding are immense, it is sad to say that it's very easy to simply reach for the same set of notes that has always been passed down and lecture. This leaves students to attempt to reach deeper levels of learning on their own.

It must also be said though (as alluded to in the paper) that active learning means a variety of things. For example, I use problem based learning, flipped classrooms and a combination of active game playing (which I would like to think has some element of inquiry based learning) in my own teaching. Active learning in Freeman's paper, however, has a breadth of meanings which includes the simple use of clickers to get some form of student engagement with the learning. So I don't necessarily think that an active learning approach implies a huge distancing from classic lecturing for all instructors (see the earlier definitions of active learning and the traditional lecture).

As the authors state, it's a case of now identifying the best pedagogic approaches for instructors as well as students: getting that pairing right.


I really enjoyed something that Dr Sue Rigby said at the opening of the 2014 HEA STEM conference:
"We have to teach in a way that is fun. We get to choose how we teach"
As educators I think it's very important to remember that we do indeed get to choose how we teach and ultimately active learning is often simply far more fun than traditional lectures (I sure had a smile on my face when trying to teach cooperative game theory by having students play waste paper basketball).

We should always choose a pedagogic approach for a better reason than "that's how it has always been done". Sound research like that in this paper seems as good a reason as any.


Edit: On the 7th and 8th of July (2014) +Paul Harper and I are hosting a workshop on active learning techniques. Please take a look :)

Sunday, 11 May 2014

Wizards, Giants, Linear Programming and Sage

I've been playing a game on my phone for a couple of months now called Clash of Clans. It's one of those freemium games that tells me to do something on my phone every now and then:

In essence during the game you build your base with buildings to obtain resources, create troops and defend against attacks.

Here's a screen shot of my base at the moment:

As well as building your base, you can also put together a raiding party and attack other bases.
In this post, I'll describe the use of a mathematical technique called linear programming so as to get the best combination of Giants, Wizards etc in a raiding party.

To train a warrior costs Elixir (a resource that you mine and/or plunder) but also costs space in your army camps.

Here are the various warriors available to me at the moment (as you build better buildings you get more warriors etc...).

1. Barbarian (Damage per second: 18, Hitpoints: 78, Housing Space: 1, Elixir Cost: 80)

2. Archer (Damage per second: 16, Hitpoints: 33, Housing Space: 1, Elixir Cost: 160)

3. Goblin (Damage per second: 19, Hitpoints: 36, Housing Space: 1, Elixir Cost: 60)

4. Giant (Damage per second: 24, Hitpoints: 520, Housing Space: 5, Elixir Cost: 2000)

5. Wall Breaker (Damage per second: 32, Hitpoints: 35, Housing Space: 2, Elixir Cost: 2500)

6. Balloon (Damage per second: 72, Hitpoints: 280, Housing Space: 5, Elixir Cost: 3500)

7. Wizard (Damage per second: 125, Hitpoints: 130, Housing Space: 4, Elixir Cost: 3000)

8. Healer (Damage per second: NA, Hitpoints: 600, Housing Space: 14, Elixir Cost: 6000)

9. Dragon (Damage per second: 140, Hitpoints: 1900, Housing Space: 20, Elixir Cost: 25000)

To summarise the information about the warriors I'm going to put the costs in a vector \(c\) and the housing space in a vector \(h\) (indexed using the same order as the images above):

\[c=(80, 160, 60, 2000, 2500, 3500, 3000, 6000, 25000)\]

\[h=(1, 1, 1, 5, 2, 5, 4, 14, 20)\]
Currently I have a capacity of 50 housing spaces per camp and I have 4 camps so I have an upper bound on the total amount of housing space of \(K=200\).

If I let the vector \(x\) denote a raiding party so that \(x_i\) is the number of warriors of type \(i\) (using the ordering above) then we have the corresponding housing space constraint:

\[\sum_{i=1}^9h_ix_i\leq K\]
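The constraint above is just a dot product, and it's handy to have it as a function when checking candidate parties by hand. Here's a quick plain-Python sketch (not Sage; the party shown is a made-up example):

```python
# Housing-space vector from the post (order: Barbarian, Archer, Goblin,
# Giant, Wall Breaker, Balloon, Wizard, Healer, Dragon).
h = [1, 1, 1, 5, 2, 5, 4, 14, 20]
K = 200  # total camp capacity


def housing(x):
    """Total housing space used by raiding party x."""
    return sum(xi * hi for xi, hi in zip(x, h))


# A hypothetical party: 10 Giants and 2 Wizards.
party = [0, 0, 0, 10, 0, 0, 2, 0, 0]
print(housing(party))       # 58: 10*5 + 2*4
print(housing(party) <= K)  # True: this party fits in the camps
```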

I can now try and do two things:
  1. Get a full set of camps by minimising the overall cost.
  2. Get the most powerful raiding party that can fit in my camps.
I will solve both of these problems using what is called Integer Linear Programming. The 'integer' part is due to the fact that I must have whole numbers of warriors.

  1. Minimising my costs

The cost of a given raiding party \(x\) is given by:

\[C(x) = \sum_{i=1}^9c_ix_i \]

Thus to minimise this cost and fill the camps we need to solve the following minimisation problem:

\[\text{minimise }C(x) = \sum_{i=1}^9c_ix_i\]

such that:

\[H(x)=\sum_{i=1}^9h_ix_i= K\]

How do we solve this? I've blogged about a similar type of optimisation problem that I used to schedule my class assessments, and for that I used the awesome sagemath. You can read that blog post here. Here I will do the same thing.

The following Sage code is what is needed to create the linear program and also obtain the solution:

# Set up parameters:
K = 200  # Set the upper bound
c = [80, 160, 60, 2000, 2500, 3500, 3000, 6000, 25000]  # Set the cost vector
h = [1, 1, 1, 5, 2, 5, 4, 14, 20]  # Set the 'housing space' vector
n = len(c)  # Get the number of warrior types

C = lambda x: sum([x[i] * c[i] for i in range(n)])  # Define the cost function
H = lambda x: sum([x[i] * h[i] for i in range(n)])  # Define the housing capacity function

# Create and solve the optimisation problem
p = MixedIntegerLinearProgram(maximization=False)  # Create an optimisation problem with minimisation as the goal
x = p.new_variable(integer=True)  # Create a variable with the requirement that it must be integer
p.add_constraint(H(x) == K)  # Set the housing constraint
p.set_objective(C(x))  # Set the objective function
p.solve()  # Solve the optimisation problem

# Show the solution
print 'Has solution:'
for i, v in p.get_values(x).iteritems():
    print 'x_%s = %s' % (i, int(round(v)))
print 'Housing spaces used: %s' % H(p.get_values(x))

The output is (disappointing but expected): \(x=(0,0,200,0,0,0,0,0,0)\) which implies that I should have 200 Goblins.

If you're familiar with the game this is not a very sensible way to go as Goblins are really only good at getting loot from defenceless bases.
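The Goblin answer makes sense: with only the housing equality to satisfy, the solver simply picks the warrior with the lowest Elixir cost per housing space. A quick plain-Python check (not Sage) using the stats above:

```python
# Cost and housing-space vectors from the post, in the usual order.
c = [80, 160, 60, 2000, 2500, 3500, 3000, 6000, 25000]
h = [1, 1, 1, 5, 2, 5, 4, 14, 20]
names = ["Barbarian", "Archer", "Goblin", "Giant", "Wall Breaker",
         "Balloon", "Wizard", "Healer", "Dragon"]

# Elixir cost per housing space for each warrior type.
cost_per_space = {name: ci / hi for name, ci, hi in zip(names, c, h)}
cheapest = min(cost_per_space, key=cost_per_space.get)
print(cheapest, cost_per_space[cheapest])  # Goblin 60.0
```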

We can add a couple more constraints so that we don't have too many goblins (let's limit it to 10) and also so that we have at least one healer:

p = MixedIntegerLinearProgram(maximization=False)  # Create an optimisation problem with minimisation as the goal
x = p.new_variable(integer=True)  # Create a variable with the requirement that it must be integer
p.add_constraint(H(x) == K)  # Set the housing constraint
p.add_constraint(x[2] <= 10)  # Don't want more than 10 goblins
p.add_constraint(x[7] >= 1)  # Want at least 1 healer
p.set_objective(C(x))  # Set the objective function
p.solve()  # Solve the optimisation problem
print 'Has solution:'
for i, v in p.get_values(x).iteritems():
    print 'x_%s = %s' % (i, int(round(v)))
print 'Housing spaces used: %s' % H(p.get_values(x))

This gives us \(x=(176,0,10,0,0,0,0,1,0)\).

This is again not very satisfying as we're basically filling our raiding party with the cheapest options. That was the purpose of this approach though. 

The next thing we'll look at is how to actually get a good raiding party.

  2. Getting the 'best' raiding party.

As you can see on the images, every warrior is different: some fly, some are better against buildings, some have more hitpoints than others etc...

One way of getting the 'best' raiding party could be to maximise the damage per second of the raiding party. To do this, let's define the following vector (which is just the damage per second that can be read off of the images above):

\[d=(18, 16, 19, 24, 32, 72, 125, 0, 140)\]

Now our maximisation problem can be written as:

\[\text{maximise } \sum_{i=1}^9d_ix_i\]

such that:

\[H(x)=\sum_{i=1}^9h_ix_i\leq K\]

Here's the Sage code to do this:

damagepersecond = [18, 16, 19, 24, 32, 72, 125, 0, 140]
C = lambda x: sum([x[i] * damagepersecond[i] for i in range(n)])  # Define the damage function

p = MixedIntegerLinearProgram()  # Create an optimisation problem with maximisation as the goal (the default)
x = p.new_variable(integer=True)  # Create a variable with the requirement that it must be integer
p.add_constraint(H(x) <= K)  # Set the housing constraint
p.add_constraint(x[2] <= 10)  # Don't want more than 10 goblins
p.add_constraint(x[7] >= 1)  # Want at least 1 healer
p.set_objective(C(x))  # Set the objective function
p.solve()  # Solve the optimisation problem
print 'Has solution:'
for i, v in p.get_values(x).iteritems():
    print 'x_%s = %s' % (i, int(round(v)))
print 'Housing spaces used: %s' % H(p.get_values(x))

The result of the above is \(x=(0,0,2,0,0,0,46,1,0)\), i.e. take 2 Goblins, 46 Wizards and 1 Healer.
The problem with this raiding party is that wizards are actually quite fragile (archers towers and cannons can pretty much just pick them off).
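The Wizard-heavy answer is itself easy to sanity check: Wizards offer by far the most damage per housing space. In plain Python (not Sage), using the stats above:

```python
# Damage-per-second and housing-space vectors from the post.
damage = [18, 16, 19, 24, 32, 72, 125, 0, 140]
h = [1, 1, 1, 5, 2, 5, 4, 14, 20]
names = ["Barbarian", "Archer", "Goblin", "Giant", "Wall Breaker",
         "Balloon", "Wizard", "Healer", "Dragon"]

# Damage per second per housing space for each warrior type.
damage_per_space = {name: di / hi for name, di, hi in zip(names, damage, h)}
best = max(damage_per_space, key=damage_per_space.get)
print(best, damage_per_space[best])  # Wizard 31.25
```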

So perhaps instead of maximising the damage per second we could maximise the total hit points of the party.

To do this we need to define the vector of hitpoints:

\[p=(78, 33, 36, 520, 35, 280, 130, 600, 1900)\]

Now our maximisation problem can be written as:

\[\text{maximise } \sum_{i=1}^9p_ix_i\]

such that:

\[H(x)=\sum_{i=1}^9h_ix_i\leq K\]

This gives us \(x=(0,0,0,0,93,0,0,1,0)\): a raiding party of Wall Breakers.

The problem with this is that wall breakers are good for just that: breaking walls. They carry a bomb up to a wall and then they explode. So let's add some more constraints to our optimisation problem so as to get something a bit more useful.
  • Let's make sure that we always have at least 20 housing capacity worth of distance attackers (Archers or Wizards):
\[x_1h_1+x_6h_6 \geq 20 \]
  • Let's ensure that we have at least 1 air attacker (so that once the air defenses are out of the way we can finish off the base):
\[x_5+x_8 \geq 1 \]
  • Let's ensure that we always have some wall breakers to clear the walls for our ground units (1 per 5 giants and 1 per 20 barbarians say):
\[5x_4\geq x_3\]

\[20x_4\geq x_0\]
(Note that the indices in these constraints are 0-based, matching the code.)
The code to solve this is given here:

hitpoints = [78, 33, 36, 520, 35, 280, 130, 600, 1900]
P = lambda x: sum([x[i] * hitpoints[i] for i in range(n)])  # Define the hitpoints function

p = MixedIntegerLinearProgram()  # Create an optimisation problem with maximisation as the goal (the default)
x = p.new_variable(integer=True)  # Create a variable with the requirement that it must be integer
p.add_constraint(H(x) <= K)  # Set the housing constraint
p.add_constraint(x[2] <= 10)  # Don't want more than 10 goblins
p.add_constraint(x[7] >= 1)  # Want at least 1 healer
p.add_constraint(x[1] * h[1] + x[6] * h[6] >= 20)  # Want at least 20 capacity worth of distance attackers
p.add_constraint(x[5] + x[8] >= 1)  # Want at least 1 air attacker
p.add_constraint(5 * x[4] >= x[3])  # Want at least 1 wall breaker per 5 giants
p.add_constraint(20 * x[4] >= x[0])  # Want at least 1 wall breaker per 20 barbarians
p.set_objective(P(x))  # Set the objective function
p.solve()  # Solve the optimisation problem
print 'Has solution:'
for i, v in p.get_values(x).iteritems():
    print 'x_%s = %s' % (i, int(round(v)))
print 'Housing spaces used: %s' % H(p.get_values(x))

This gives \(x=(1,20,0,23,5,0,0,1,2)\): so take 1 Barbarian, 20 Archers, 23 Giants, 5 Wall Breakers, 1 Healer and 2 Dragons.
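As a sanity check, a couple of lines of plain Python (not Sage) confirm that this party exactly fills the 200 housing spaces:

```python
# Housing-space vector and the party reported by the solver.
h = [1, 1, 1, 5, 2, 5, 4, 14, 20]
party = [1, 20, 0, 23, 5, 0, 0, 1, 2]  # Barbarian, Archers, Giants, Wall Breakers, Healer, Dragons
used = sum(xi * hi for xi, hi in zip(party, h))
print(used)  # 200: the camps are exactly full
```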

The above would give us the most hit points but now we're missing out on the damage power of wizards.

Perhaps we could try and balance both hitpoints and damage.

In essence we have what's called a multi-objective optimisation problem. One approach to solving this is to simply weight both objectives.

We're going to use \(\alpha\) to denote the importance of the hitpoints and \(1-\alpha\) the importance of the damage.

Our optimisation function then becomes:

\[\text{maximise } \sum_{i=1}^9\left(\alpha p_i + (1-\alpha)d_i\right)x_i\]
Using the same constraints as before and taking \(\alpha=.5\) (so that both damage and hit points have equal weighting) we get \(x=(11,0,0,25,5,0,5,1,1)\). Thus we should be taking into battle: 11 Barbarians, 25 Giants, 5 Wall Breakers, 5 Wizards, 1 Healer and 1 Dragon.
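For a given \(\alpha\) the blended objective coefficients are straightforward to compute. Here's a quick plain-Python check (not Sage, stats from above) of a Giant's blended worth at \(\alpha=.5\), together with a confirmation that the reported party exactly fills the camps:

```python
alpha = 0.5  # equal weighting of hitpoints and damage
hitpoints = [78, 33, 36, 520, 35, 280, 130, 600, 1900]
damage = [18, 16, 19, 24, 32, 72, 125, 0, 140]
h = [1, 1, 1, 5, 2, 5, 4, 14, 20]

# Blended objective coefficients: alpha * p_i + (1 - alpha) * d_i
blended = [alpha * pi + (1 - alpha) * di for pi, di in zip(hitpoints, damage)]
print(blended[3])  # 272.0: a Giant's blended worth

# The reported party exactly fills the 200 housing spaces:
party = [11, 0, 0, 25, 5, 0, 5, 1, 1]
print(sum(xi * hi for xi, hi in zip(party, h)))  # 200
```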

This feels much more balanced than anything we've had so far and not far off what I've been using as my own raiding party.

Given that everything is nicely coded we can quickly obtain the make up of the optimal raiding party for a range of values of \(\alpha\):

We see that as \(\alpha\) increases (so that we care more about hit points) the number of Wizards we should use slowly decreases (at one point the constraint for ranged attacking units kicks in and we replace the Wizards with Archers).

You could easily tweak the constraints to perhaps ensure your party always has some Goblins or otherwise. I think this is a nice example where a mathematical model can be used to solve a problem but at the same time must not be taken far from the problem itself. There might be some players who only choose to attack bases with a particular type of defence so they can tweak/add constraints to the problem accordingly.

Feel free to play around with the code in the Sage cell below:

The mathematical techniques used in this post rely on Linear Algebra, Higher Dimensional Geometry and obviously the ability to write code :)

(On another note I'm still not in a clan on the game so if anyone has any space in a clan let me know!)

Saturday, 3 May 2014

Debugging Python with pdb at the command line

I've just put up a short video (7 mins) describing one way of debugging in Python using the pdb module (here's the docs):

In this video I show how to step through code, view values of variables and also insert various types of break points.

All this is done interactively at the command line but you can also use the pdb library to include debugging commands in your Python script itself.
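For anyone who prefers reading to watching, here's a minimal sketch (a hypothetical function, not taken from the video) of the in-script approach: dropping `pdb.set_trace()` into your code pauses execution at that line and gives you an interactive prompt, or you can run the whole script under the debugger with `python -m pdb script.py`.

```python
import pdb


def mean(values):
    total = sum(values)
    # pdb.set_trace()  # uncomment to pause here and inspect `total` interactively
    return total / len(values)


print(mean([2, 4, 6]))  # 4.0
```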

Friday, 2 May 2014

Reflecting on Pechu Kucha

In this previous blog post I described my frustration with the fact that the Pechu Kucha presentation style was imposed on all speakers at the #HEASTEM14 conference.

That was 3 days ago and with the conference behind me, I am going to write down my thoughts.
First of all: the whole conference was awesome. I had a great time and learnt a lot through the awesome discussions that arose after a session of talks. Each session involved a triplet of Pechu Kuchas (with no gaps) followed by a lengthy discussion.

These discussion were awesome.

Sadly I don't believe these discussions had anything whatsoever to do with Pechu Kuchas.

I actually enjoyed my own Pechu Kucha but didn't really feel that it was anything different to a normal presentation (apart from there being no interaction with the audience and it being shorter than usual).
I overheard a few speakers say things like 'it really makes you rehearse' but personally rehearsing is something I always do (I probably rehearsed less for this as I was only talking for 3 minutes and 20 seconds).

The reason the discussions were so awesome was simply that there was enough time for them. The reason there was enough time for them was that Pechu Kucha ensured that everyone spoke for no more than 7 minutes.

This could have been achieved by ensuring that sessions were chaired strictly without the need to further constrain the speakers.

One comment on my previous post said: 'Talks overrun because moderators are not doing their job'.

I realise that it's not always easy to be a chair but I think that imposing Pechu Kucha is not the right solution.

One private comment on my previous post said that 'I also think that this format is not helpful for speakers with slight speech impediments'. Indeed, forcing everyone to use a given delivery style is not terribly inclusive.

I mentioned my opinion to a couple of organisers (again: who did a truly awesome job with the conference) and after clarifying that I wasn't worried about my personal experience of Pechu Kucha but more that I didn't think it was great that everyone had to use it, I received responses along the lines of:

- 'Well I suppose people knew that this was the way so they could choose to not give a presentation';
- 'There are other options such as presenting a poster'.

I perhaps misunderstood and/or am taking that out of context (in which case I apologise) but at an education conference attended by people for whom inclusivity is a high priority I can't say I was terribly satisfied with that response.

Finally to return to the great thing about this conference: the post talk discussions. These were truly awesome and it was so great to be in a place where everyone cared and these discussions offered a great opportunity to transfer ideas and opinions.

Sadly (as is to be expected) having a single discussion session at the end of 3 talks often meant that 1 or 2 of the talks received no discussion whatsoever. This could of course be because the talk in question did not instigate enough interest for any questions but I also think that it could be because after the first question is asked and answered, further discussion just snowballed its way through the room (which is awesome). This is a minor point for me, as in a way if the discussion didn't cover a particular talk but was very interesting perhaps it's not a problem (people could always approach the speakers after the talk).

To summarise:

- It was fun to try out the Pechu Kucha style.
- It worked (indeed there is no need for long talks, short talks are awesome)
- I don't think it should be enforced (this is NOT inclusive, what if I did not want to use PowerPoint? 'Not presenting' or 'presenting a poster' are not acceptable alternatives)

I would (humbly) suggest that next time Pechu Kucha is not imposed (perhaps a Pechu Kucha session or 2 would be a good idea) but I certainly think having a 7 minute limit on talks is a GREAT idea.

This would require strict chairs that kept the conference to time. Something similar to the 'Grand Council' I used in my first year class this year. It involved 40 one-minute pitches from my students: they each had a minute and I was quite brutal in my chairing of the session. I had a timer go off and would yell "YOU'RE DONE!" but mostly it was the next student talking that would usher the previous off the stage. In practice I think I actually only cut 5 students short as all the others kept to time brilliantly. +Paul Harper wrote this post about it.

I loved the conference. 

Pechu Kucha certainly didn't make it worse than it would have been otherwise as everyone made the system work.

I just don't feel that it was an inclusive way of doing things (nor that it was necessary). Next year I will hopefully attend and either use one of the Pechu Kucha workarounds (discussed in the comments on my previous post), present using the delivery system of my choice or I suppose: present a poster.

Finally, similarly to the image of Machu Picchu in my other post, here is a picture of Pikachu (which is what I have been calling Pechu Kucha for the past 3 days):

Tuesday, 29 April 2014

My thoughts on the Pechu Kucha presentation style BEFORE the HEA STEM Conference

This week I'm attending the HEA STEM Conference 2014.

+Paul Harper  and I will be presenting some work we undertook with +Louise Orpin and +Noel-Ann Bradshaw (the title of the talk is 'Operational research ambassadors in schools' but that's for another time).

All of the presenters have been given strict constraints to use the 'Pechu Kucha' style.

This involves using 20 slides that are set on a timer so that every slide is shown for 20 seconds with no control from the speaker. (All slides had to be sent to the organisers quite a while ago and will be run from a central computer.)

I am writing this post before the conference so perhaps my opinion will change but at present: I don't like it.

There are obvious benefits to using this style:
  • It's something different;
  • It will be fast paced;
  • It is easy to control (I have heard that this format was chosen because multiple talks overran in past conferences).
I'm sure I'm missing other good points (feel free to add them in the comments).

Here come my negative points:

From a technical point of view it's a shame to not be trusted

We had a virtual meeting yesterday where +Noel-Ann Bradshaw suggested something that really should have been on our slides but we cannot modify them.

Forcing everyone to use a given content delivering format is not appropriate.

This is an education conference with talks entitled: 'Dancing statistics - communicating statistical ideas to non-mathematical students' and 'iLectures - designing and developing interactive lectures using cloud-based broadcasting solutions'. These talks sound like they will be discussing various innovative content delivery formats. There is (in my opinion) no single best content delivery format but usually an ok content-audience-speaker triplet.

Why should everyone have to use the same format? (For some Pechu Kucha might be great but for others it might be terrible.)

This is enforcing a lecture style delivery with a disconnection between audience and speaker

I try my best to make my presentations (and lectures) as interactive as possible. I encourage interruptions and if we (the audience and I) go down a particular lane that isn't what I had planned: that's often just fine!

There is time planned for questions at the end of the session but what if I want to ask the audience a question or if I welcomed a discussion midway through the presentation? Without being able to carefully ensure that this fits in the 20 second per slide constraint this is something that I simply won't be able to use.

Why am I here?

When my frustrations with the format first began I thought that perhaps Paul and I should simply screencast our talk which would enable us to edit it carefully and have the 6 minutes and 40 seconds perfectly cut to a high standard. Thus, when it was our turn to 'talk' we could simply press play and sit back.

Sadly this is not an option as everything is being run from a central computer through PowerPoint. In essence though, there is no point in us being here apart from taking the potential questions towards the end of the entire session (by which time the audience as a whole will have completely disconnected from our talk).

Looking forward to the conference

I suppose this post stems from the fact that I'm a spoilt only child and don't like being told what to do. I think that I'm just unsure about everyone having to use the same thing (whatever that thing may be). We are all different with different presentation skills and styles. We should be 'allowed' to express ourselves.

Ultimately I am very much looking forward to the conference and also am (despite what this post might indicate) optimistic about trying a different delivery format. I will try and write a reflective post after the conference: it would be awesome if I'm completely incorrect and Pechu Kucha is actually a great initiative from the organisers (who I'm sure have worked extremely hard to put on what will be a great conference).

I have spoken to Paul about this but in no way mean to bring my co-authors into this rant: this is my personal opinion :)

Finally, so that there's something nice to look at, here is a picture of Machu Picchu (from Wikipedia) as that's what I've been calling Pechu Kucha for the past month or so, as I've not been able to remember its correct name:

Saturday, 26 April 2014

Thinking about moving my blog...

Over the Christmas break I moved my website that sat on a Google sites page to my own server and did this mainly to learn Django. I'm now really happy with my personal site (modulo css which I'll fix when I have time) and like the workflow involved with updating the underlying database and the templates taking care of everything (a lot of my students in particular have commented on the ease of access to teaching resources on there).

I'm thinking of moving this blog over there too.

The way I see it there are good points and bad points to this.

Here is how I see the bad points:

  • It'll be 'hard': blogger just takes care of everything for me.
  • I have a modest number of readers and it would be a shame to 'lose' them.
  • Comments are great and I'm a huge +Google+ fan: I like the interaction I get on comments with the +Blogger / G+ integration.
Here is how I see the good points:
  • It'll be 'hard': I have a slight addiction to putting myself out of my comfort zone so learning how to set up my own RSS feed (which in practice should take 20 minutes of googling) would be good fun. Apart from that it will actually be very easy.
  • I'll have more control: I've had issues with +MathJax not playing nice with +Blogger so if it's all on my server I won't have these problems.
  • I'll prefer the workflow: at the moment I write my posts in markdown, pandoc them and then copy the html and fix it in +Blogger. With my own site, posting will be a simple git push away...
On reflection I think the only real downside would be the potential loss of interaction / engagement but I also think that by simply sharing on Twitter and G+ not much would be lost.

Perhaps I'm wrong, if anyone has moved their blog and could offer me some advice that would be great. Also, how important is the RSS feed nowadays (personally most of the blogs I read get pushed to me by social networks and/or Google Now - I wonder how many people wouldn't have read this if it hadn't gotten to them via RSS)?

Friday, 4 April 2014

A list of stuff for my student to look at before getting down to some Sage development

+James Campbell will be working with me this Summer on a project aimed at developing game theoretical stuff into +Sage Mathematical Software System. James just emailed me asking for a list of stuff he could/should read up on before he starts. I thought more knowledgeable people than me might be able to contribute so I've lazily copied my email to him here:

------------ email ------------


- git
- sage development (tickets, documentation etc... This is something I don't know much about myself so read about it on the Sage site and watch videos on youtube: there are a bunch of them)
- cython (watch the intro to Sage lecture by +William Stein; that's the first lecture in a class he's currently giving and you could also watch the rest)
- C (to help with cython - you don't necessarily need to be an expert I think)
- Test driven development (watch all this and you will know what I mean)
- ssh and *nix (so that you're comfortable to jump on to one of my machines if necessary - depending on time we might also get you to tweak the Cardiff server)
- matplotlib (the python library that Sage uses to plot stuff, good to know it from a python pov so as to be able to get Sage to make it do what we want - we might or might not use this)
- How Sage plots graphs (graph theory graphs like I used for this: - we might or might not need this)

Game Theory:

We'll talk about this but one of the above (easy to code: 30 minutes of work) will be a gentle appetiser to the 'pièce de résistance': normal form games.

- Normal form games (first 6 chapters of
- The lrs algorithm (there is an implementation of this written in C that we either want to re-write to get working in Sage - so you'll need to understand it - or get Sage to talk to it / use it. I know Sage kind of has this as an optional library but I'm not entirely sure how to 'get at it')
- Polytopes, you want to be comfortable-ish with the vocabulary around polytopes to be able to understand the lrs algorithm a bit. 


- In general I'd say don't spend much money on Python books. Like most OSS stuff there's an awesome amount of stuff online. Take a look at: (a list of free Python books). There are various exceptions to this rule though.

- With regards to Sage I don't think you need a book for this project (as it's about building stuff for Sage so mainly reading about the development process and watching youtube videos is the way to go), I have a kindle copy of, it's not bad but really everything is online. If you haven't already take a look at and join the relevant discussion groups on there.

- With regards to Game Theory there's a reading list on my website (all that follow are linked to on there). Webb's book is a gentle introduction, Algorithmic Game Theory is great and more about what you will be looking at. Finally there's a newish book by Maschler which looks really nice but I've not had time to work through it yet. In general my course site should suffice (reading/working through those books could take you years) with regards to what you need to know for game theory and I am certainly not asking you to spend money on a book. If there's a book (GT or code) that you really think would be useful let's talk.


James is a pretty good coder with a good knowledge of a git based workflow already, as his first year project (during which he actually learnt to code) led his group to develop: which is also on github (if you've made your way this far, please click on that and waste a minute or 2 of your life).

If there's anything missing from this list please add it to the comments :)

I'm looking forward to this project.

Thursday, 27 March 2014

Scheduling group presentations using Graph Theory and Sage.

Yesterday (2014-03-26) I spoke at the Embedded Enterprise Exchange about my new first year module. I used a Hangout on air during the talk and you can see it here:

That's not what I'm going to talk about here though. The course I talked about ends with all the 'companies' (groups of 4 students) giving a $\approx 25$ minute talk.

I need to schedule 40ish talks and this needs to fit around the students' availability as well as my own. In this post I'll describe how I did this using a combination of Doodle, +Sage Mathematical Software System and Graph Theory.

The beginning of this post will be some of the boring details but towards the end I start talking about the mathematics (so feel free to skip to there...).

First of all I needed to know my students' availability.

For this I simply used Doodle: I kind of use Doodle for every meeting I have to schedule (they also offer a cool tool that lets you show your availability so students/colleagues have an indication of when I might be able to meet with them).

Here's a screenshot of the responses:

You can't really see anything there as I had to zoom out a lot to grab the whole picture. Doodle allows you to download the information for any given poll in .xls format so I could relatively easily obtain the biadjacency matrix $M$ for my problem, where $M_{ij}$ is 1 if group $i$ is available for schedule slot $j$ and 0 otherwise.

The mathematics and code needed.

Once I've got a .csv file (by tweaking the .xls file) of the biadjacency matrix I import that into +Sage Mathematical Software System and convert it to an instance of the `Matrix` class using the following:

import csv
f = open('availability.csv')  # the csv file of the biadjacency matrix (filename assumed)
data = [[int(j) for j in row] for row in csv.reader(f)]
M = Matrix(data)

I then need to remove any schedule slots that are not picked by any company:

M = matrix([row for row in M.transpose() if max(row) != 0]).transpose()

Once I've done this I can define the bipartite graph (bipartite simply means that the vertices can be separated into 2 sets such that all edges go between the sets):

g = BipartiteGraph(M)

We can then get a picture of this. I do this using a 'partition' (a graph colouring) that will colour the groups (red) and the schedule slots (blue):

g = BipartiteGraph(M)
p = g.coloring()'circular', dist=1, vertex_size=250, graph_border=True, figsize=[15,15], partition=p)

The various options I pass to the `show` command are simply to get the circular arrangement (and other minor things):

The above looks quite messy and what I essentially want is to get as many pairwise matchings between the blue vertices (slots) and red vertices (companies) as possible, so that each schedule slot is attributed to at most 1 company and every company has at least 1 schedule slot.

On any given graph $G=(V,E)$ this problem is known as looking for a maximum matching and can be written down mathematically:

Maximise:  $\sum_{e \in E(G)} m_e$
Such that:  $\forall v \in V(G), \quad \sum_{e \in E(G) \atop v \sim e} m_e \leq 1$, where $m_e \in \{0,1\}$ indicates whether edge $e$ is in the matching.

We are in essence finding a subset of edges of our original graph in such a way as to maximise the number of edges such that no vertex has more than 1 edge.
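Outside of Sage, the same maximum matching can also be found with a standard augmenting-path search rather than a MILP. Here is a minimal pure-Python sketch of Kuhn's algorithm; the small biadjacency matrix is made up for illustration, not the real schedule data:

```python
# Maximum bipartite matching via augmenting paths (Kuhn's algorithm).
def max_matching(biadjacency):
    n_left, n_right = len(biadjacency), len(biadjacency[0])
    match_right = [None] * n_right  # match_right[j] = row matched to column j

    def try_augment(i, visited):
        # Try to find an augmenting path starting from left vertex i.
        for j in range(n_right):
            if biadjacency[i][j] and j not in visited:
                visited.add(j)
                if match_right[j] is None or try_augment(match_right[j], visited):
                    match_right[j] = i
                    return True
        return False

    for i in range(n_left):
        try_augment(i, set())
    return [(match_right[j], j) for j in range(n_right) if match_right[j] is not None]

# A made-up 3 group x 4 slot availability matrix for illustration:
M = [[1, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 1, 1, 0]]
print(max_matching(M))  # → [(1, 0), (0, 1), (2, 2)]
```

Each group gets a distinct slot whenever the availability matrix allows it, which is exactly what the MILP formulation above guarantees.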

This is all explained extremely well at the +Sage Mathematical Software System documentation pages here.

Furthermore at the documentation the code needed to solve the problem is also given:

p = MixedIntegerLinearProgram()
matching = p.new_variable(binary=True)
p.set_objective(sum(matching[e] for e in g.edges(labels=False)))
for v in g:
    p.add_constraint(sum(matching[e] for e in g.edges_incident(v, labels=False)) <= 1)
p.solve()

When I run the above, `p` is now a solved Mixed Integer Linear Program (corresponding to the matching problem described). To obtain the solution:

matching = p.get_values(matching)
schedule = [e for e,b in matching.iteritems() if b == 1]

Calling `schedule` gives a set of edges (denoted by the corresponding vertex numbers):

[(5, 57), (0, 45), (23, 50), (4, 42), (38, 60), (26, 56), (34, 62), (16,
68), (1, 43), (7, 40), (9, 44), (36, 58), (12, 49), (35, 71), (28, 66),
(25, 47), (24, 53), (6, 46), (3, 64), (39, 67), (17, 69), (22, 55), (13,
48), (33, 41), (10, 63), (21, 61), (30, 52), (29, 65), (37, 70), (15,
54), (19, 51), (11, 59)]

It is then really easy to draw another graph:

B = Graph(schedule)  # the graph on the matched edges only
p = B.coloring()'circular', dist=1, vertex_size=250, graph_border=True, figsize=[15,15], partition=p)

Which gives:

You can see that the obtained graph has all the required properties and most importantly is a lot less congested.

Some details.

I'm leaving out some details, for example I kept track of the names of the companies and also the slots so that the final output of all this looked like this:

4InARow: Fri1100
Abacus: Tue0930
Alpha1: Thu1130
Alpha2: Mon0930
AusfallLtd: Fri1230
AxiomEnterprise: Thu1000
Batduck: Thu1430
CharliesAngles: Thu1500
CwmniRhifau: Mon1330
EasyasPi: Fri1130
Evolve: Thu1300
HSPLLtd.: Mon1030
JECT: Tue1200
JJSL: Thu1030
JNTL: Mon1400
JennyCash: Tue1630
KADE: Fri1330
MIAS: Fri1300
MIPE: Thu1100
MLC: Tue1600
Nineties: Mon0900
Promis: Fri1400
R.A.C.H: Tue1530
RYLR: Tue1230
SBTP: Fri1030
Serendipity: Mon1230
UniMath: Tue1300
VectorEnterprises: Tue1330
Venus: Thu1400
codeX: Mon1300
dydx: Wed1630
eduMath: Thu0930

(BatDuck is my favourite company name by far...)

Why did I do this this way?

There are 3 reasons:

1. I shared the schedule with my students through a published Sage sheet on our server. That way they can see a direct applied piece of mathematics and can also understand some of the code if they wanted to.
2. "Point and click doesn't scale" - I'm sure I could have solved this one instance of my problem with pen and paper and some common sense faster than it took me to write the code to solve the problem. The thing is next year when I need to schedule these talks again it will at most take me 3 minutes as the code is all here and ready to go. (Most readers of this blog won't need that explained but if any of my students find their way here: that's an important message for you).
3. It was fun.

Sunday, 23 March 2014

Basketball and cooperative games in class

Here is the state of my class after my students learnt about cooperative game theory (Chapter 16 of my course):

Cooperative game theory is concerned with situations where coalitions are allowed to form in game theoretic situations. In my class the particular aspect we consider is the Shapley value, which is of interest when sharing a particular commodity amongst a group of players who each contribute to the commodity differently.

I use this in practice when dealing with attributing individual marks to a group project: you can read about that here.

To introduce this I asked 3 students to come down and play a game: $S$, $L$ and $C$ volunteered (or at least gave a lazy nod when I asked if anyone had played basketball before) to come down.

They had 20 seconds to get as many crumpled up pieces of paper as possible in the bin above (it was on the table). The paper was stapled and it needed to be separated and crumpled so I suggested that perhaps 1 or 2 of them could spend some of the 20 seconds separating and crumpling.

We played this 3 times and it was all good fun. After $S$, $L$ and $C$ had their go 3 other students thought they could do better (no comment).

After that I asked the class how we could share out the 10 chocolates between $S$, $L$ and $C$: what would be the fair way? After some discussion it was clear that this was not a trivial question: $L$, for example, had made the biggest contribution to the group.

After a little while I started to ask 'how many do you think $L$ and $C$ would have had together?' etc... We wrote down the characteristic function for our game:

$$v(C)=\begin{cases}4, & C=\{S\}\\2, & C=\{C\}\\6, & C=\{L\}\\4, & C=\{S,C\}\\8, & C=\{S,L\}\\10, & C=\{C,L\}\\10, & C=\Omega\end{cases}$$

where $\Omega$ denotes the grand coalition.

At this point one student suggested that we somehow needed to take all this information into account. He couldn't quite articulate how to do that but he's completely correct.

The way to share the chocolates 'fairly' is to consider all permutations of the players and consider marginal contributions for each permutation (take a look at my notes to read about that some more).

Let me try and explain with an example:

For the first permutation $\pi=(S,C,L)$: $S$ contributes $4$; the coalition $\{S,C\}$ has value $4$, so $C$'s marginal contribution is $0$; finally $L$'s marginal contribution is $10-4=6$.

For the second permutation $\pi=(S,L,C)$: $S$ again contributes $4$; the coalition $\{S,L\}$ has value $8$, so $L$'s marginal contribution is $8-4=4$; finally $C$'s marginal contribution is $10-8=2$.

Here's all the marginal contributions for each permutation:

  • $(S,C,L)$ gives: $(4,0,6)$
  • $(S,L,C)$ gives: $(4,2,4)$
  • $(C,S,L)$ gives: $(2,2,6)$
  • $(C,L,S)$ gives: $(0,2,8)$
  • $(L,S,C)$ gives: $(2,2,6)$
  • $(L,C,S)$ gives: $(0,4,6)$
We calculate the average values of the contribution vectors (indexed by the players $(S,C,L)$) to give the Shapley value:

$$\phi = (2, 2, 6)$$

so $S$ and $C$ each get 2 chocolates and $L$ gets 6.
This was good fun and I'm really happy with the discussion that lead us to the calculations and it was also fun to get students to mess around in class.
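These calculations are easy to automate. Here is a minimal Python sketch (the function and variable names are my own, this isn't Sage code) that computes the Shapley value from the characteristic function above by averaging marginal contributions over all permutations:

```python
from itertools import permutations

# The characteristic function from the class game, with coalitions as frozensets.
v = {frozenset(): 0,
     frozenset('S'): 4, frozenset('C'): 2, frozenset('L'): 6,
     frozenset('SC'): 4, frozenset('SL'): 8, frozenset('CL'): 10,
     frozenset('SCL'): 10}

def shapley(v, players):
    # Average each player's marginal contribution over all permutations.
    contributions = {p: 0 for p in players}
    perms = list(permutations(players))
    for perm in perms:
        coalition = frozenset()
        for p in perm:
            contributions[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: contributions[p] / len(perms) for p in players}

phi = shapley(v, 'SCL')
print(phi)  # → {'S': 2.0, 'C': 2.0, 'L': 6.0}
```

With only 3 players there are just 6 permutations, but the same code works for any characteristic function (the number of permutations grows as $n!$, so it won't scale to large games).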

Here's one (of my most viewed!) +YouTube videos that I put together on it:

Saturday, 22 March 2014

Matching games in class

Last week after looking at stochastic games we looked at matching games (corresponding to Chapter 15 of my course). There's an awesome video on +YouTube describing the problem:

I thought about possibly getting 6 students to form two groups (a set of reviewers and a set of suitors); ranking each other and then getting the class to put together a stable matching. I thought twice about this though as I'm sure it's the kind of thing that could get me fired so instead I brought 3 toys along to class with me:

Both of those you can see in this picture:

The third toy I brought was my favourite nerf gun (the 'Praxis'):

I actually named 'her' Zoe, after my wife (+Zoë Prytherch).

Three students ('A', 'S' and 'R') came down and agreed to put together their ranking of which toy they preferred:

- A: Donatello, Zoe, Deck
- S: Zoe, Donatello, Deck
- R: Donatello, Zoe, Deck

I came up with a pretty random preference for the toys:

- Donatello: R, S, A
- Deck: A, R, S
- Zoe: R, S, A

We then discussed possible matchings and allocations of the toys (if we came to a stable one A, R and S could stick with that toy for the rest of the lesson). Here is one example that's close to what was suggested (I didn't record it and I don't quite remember it):

- A gets Donatello;
- S gets Zoe;
- R gets Deck

The thing with this is that Donatello prefers R and R prefers Donatello to their current match so the above is not stable.

After some more discussion we came up with the following stable matching:

- A gets Deck;
- S gets Zoe;
- R gets Donatello
We see that whilst A is matched with his 'least preferred' option (the Deck), no toy prefers A to its current match, so no pair involving A has an incentive to change.

The entire matching is stable because we can't find any pair that has an incentive to move.

In class we then went over the Gale-Shapley algorithm, which guarantees a stable matching. A cool thing is that this algorithm gives (out of all possible stable matchings) the one that is optimal from the suitors' point of view and also the worst possible one from the reviewers' point of view.

In our case above the matching found is also the matching given by the Gale-Shapley algorithm, so sadly for 'A' there is no stable way of matching him to anything but his least preferred toy, and 'S' gets my wife no matter what.
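For completeness, here is a minimal Python sketch of the suitor-proposing Gale-Shapley algorithm run on the preferences from class (the function and variable names are my own):

```python
def gale_shapley(suitor_prefs, reviewer_prefs):
    # Suitor-proposing deferred acceptance; returns a suitor -> reviewer matching.
    free = list(suitor_prefs)
    next_choice = {s: 0 for s in suitor_prefs}  # index of next reviewer to propose to
    engaged = {}  # reviewer -> suitor
    while free:
        s = free.pop(0)
        r = suitor_prefs[s][next_choice[s]]
        next_choice[s] += 1
        if r not in engaged:
            engaged[r] = s
        elif reviewer_prefs[r].index(s) < reviewer_prefs[r].index(engaged[r]):
            free.append(engaged[r])  # the currently engaged suitor is bumped
            engaged[r] = s
        else:
            free.append(s)  # proposal rejected
    return {s: r for r, s in engaged.items()}

# The preferences from class: students are the suitors, toys the reviewers.
students = {'A': ['Donatello', 'Zoe', 'Deck'],
            'S': ['Zoe', 'Donatello', 'Deck'],
            'R': ['Donatello', 'Zoe', 'Deck']}
toys = {'Donatello': ['R', 'S', 'A'],
        'Deck': ['A', 'R', 'S'],
        'Zoe': ['R', 'S', 'A']}

matching = gale_shapley(students, toys)
print(matching)  # A gets the Deck, S gets Zoe, R gets Donatello
```

Since the students are the proposers here, this is the student-optimal stable matching, and it agrees with the one we found by discussion.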

Thursday, 20 March 2014

Playing Stochastic/Markov games in class

In class on Monday my students and I looked at Stochastic games (Chapter 14 of my course notes). I decided to play a tournament similar to the iterated games that we played previously (corresponding to Chapters 9 and 10). I've blogged about them here and here.

In short, a stochastic game is a collection of states that themselves correspond to games. After players select strategies at a given state/game the next game they play (or state they are in) is given by a probability distribution.

This is a bit like playing in a casino where the table you play at depends on what happened at the previous table you played.

*The game we played in class corresponds to a casino with two tables:*

- At the first table we play a simple prisoner's dilemma (players aiming to maximise their utility):


The probability distribution at this table (indicating what game we played next) is given by:


    - If both players cooperate (getting a utility of $2$ each) then there is a 75% chance that they play at this same table again.
    - If only one player cooperates and the other defects (the defector getting a utility of $3$ and the cooperator a utility of $0$) then the players move to the other game with 100% probability.
    - If both players defect (each getting a utility of $1$) then they will play the same game with 50% chance.

- The second table was a dummy game that basically ended the game:


The probability distribution for this game is $(0,1)$.

The assumption made in this class is that the games are played infinitely and as such a discounting factor $\delta$ is used. When playing this in class we interpreted the discounting factor as the probability of the game continuing. So the game ended when the probabilities indicated that we were in the second game or when the discounting factor said so (we used $\delta=1/2$).
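A quick way to get a feel for this game is to simulate it. Here is a minimal Python sketch (my own, not classroom code) that estimates the expected total payoff when both players always cooperate, using the payoffs and transition probabilities described above and interpreting $\delta$ as the probability of the game continuing:

```python
import random

# Payoffs and 'stay at table 1' probabilities as described in the post.
PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
          ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}
STAY = {('C', 'C'): 0.75, ('C', 'D'): 0.0, ('D', 'C'): 0.0, ('D', 'D'): 0.5}

def simulate(move1, move2, delta=0.5, trials=10000, seed=0):
    # Estimate expected total payoff when both players repeat a fixed move;
    # delta is interpreted as the probability that the game continues.
    rng = random.Random(seed)
    totals = [0.0, 0.0]
    for _ in range(trials):
        while True:
            u = PAYOFF[(move1, move2)]
            totals[0] += u[0]
            totals[1] += u[1]
            # Stop if we move to the absorbing table or the delta-coin says stop.
            if rng.random() >= STAY[(move1, move2)] or rng.random() >= delta:
                break
    return totals[0] / trials, totals[1] / trials

estimate = simulate('C', 'C')
print(estimate)  # analytically 2 / (1 - 0.75 * 0.5) = 3.2 per player
```

The per-round continuation probability for mutual cooperation is $0.75 \times \delta = 0.375$, so each player expects $2/(1-0.375) = 3.2$; the simulation should land close to that.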

The class formed 4 teams and we played a round robin.

The first round had Tom's Team go up against Team Roy and Barmy go up against 4:

We see that Tom's team and Team Roy cooperated both times before the game ended whilst 4 defected from the start against Barmy.

The next round had Barmy go up against Tom's Team and Team Roy go up against 4:

We see that Barmy and Team Roy both cooperated before the game ended and Team Roy and 4 both defected before the game ended.

The last game had Tom's Team go up against 4 and Team Roy go up against Barmy. Below you can also see the final cumulative scores:

As you can see Tom's Team won (the name comes from +Paul Harper's son Tom, who joined the class during which we played the iterated prisoner's dilemma).

To decide who came second (they could choose the second prize to either be apples or party rings) a best of three game of Rock, Paper, Scissors, Lizard, Spock was played with Joe winning the party rings for Team Roy ('Scissors decapitates lizard').

I think this was good fun and hopefully enabled the students to have a sound grasp of what is meant by a state, a transition, etc..

Interestingly all teams happened to play Markov strategies (i.e. in any given state they always did the same thing) although this might have been different if the probabilities allowed the game to last longer. In the course we are restricting ourselves to Markov strategies.

After playing this we went through obtaining 2 equilibria for this game (both players defecting and both players cooperating).

Here are some photos that a student grabbed during the class and also a re-share of the gif of +Paul Harper's son Tom using a Nerf gun on those who defected against his team during the Prisoner's dilemma tournament:

Monday, 10 March 2014

Playing a game with incomplete information in class

In class today we took a look at games of incomplete information. Loosely this relates to any kind of game that involves the players not knowing 'everything'.

Next week we'll be looking at stochastic/Markov games (C13 of my course) but this week we took a look at incomplete information in extensive form games (C12 of my course).

We played the following game (a generalization of the matching pennies game that I've blogged about before):

"Player 1 picks Heads or Tails, a coin is flipped and Player 2 is aware of the result of that coin (but not the choice of Player 1), Player 2 then picks Heads or Tails. This is akin to a matching pennies game except that if Player 2 loses whilst picking the same as the random coin then the outcome is $(0,0)$."

Here is a game tree that describes the game:

On a slight tangent our School operates a mid-module feedback system which is not 'official' but allows us to get some feedback from students with regards to how the course is going so far. One piece of feedback I got was something like:

"The chocolates are making me fat bring fruit instead."

I'm pretty sure this was tongue in cheek but I brought in the following as a prize for the winner of a 3 player round robin (using the above game):

(A melon and some fizzy cherries)

As I said, I got 3 students (referred to as S, A, R) to play a round robin tournament where each would play each other twice (swapping being player 1).

The results are here (thanks to +Jason Young for collecting them):

In that photo we see the following match ups:

- Game 1 and Game 2: A versus S
- Game 3 and Game 4: R versus S
- Game 5 and Game 6: A versus R

What we see is that R actually won with a final cumulative score of 3, A finished with -1 and S with -2. Looking through the details we see that A and S both went "against" the coin both times whereas R went "against" the coin once.

I asked the class if anyone had an idea of how one should play the game. One student (who I think was from the team Roy crowd) yelled out that one should always go with the coin.

This is exactly correct; the way to 'prove it' is to obtain the corresponding normal form game for the extensive form game. We do this by using expected values over the random nodes:

We have $S_1=\{H,T\}$ and $S_2=\{HH, HT, TH, TT\}$. The strategy set for player 2 indicates what player 2 should do for each possible flip of the random coin: so $HT$ indicates playing $H$ if the coin flips Heads and playing $T$ if the coin flips Tails (so $HT$ corresponds to agreeing with the coin, $TH$ corresponds to going against the coin etc...).

Using the above strategy ordering we get the following normal form game:


I will skip the analysis of this game (which we did in class) but it can be shown that the Nash equilibrium is for player 1 to play Heads with probability between 1/3 and 2/3 and player 2 to agree with the random coin.

Importantly, at the end of the game the victor: R gave the melon to A. I hope he enjoys it and also that my students found this exercise slightly useful and perhaps slightly more memorable.

Sunday, 9 March 2014

A (very) brief interactive look at evolutionary game theory in class

In class last week my students and I started looking at evolutionary game theory (Chapters 11 and 12 of my class). One concept of evolutionary game theory that is important to understand is that in a sense the barrier between strategies and players becomes a bit fuzzy.

To try and illustrate this in class I brought in 2 packs of cards. I actually ended up only using the cards as binary markers (whether or not they were facing 'UP' or 'DOWN'). I then proceeded to describe the following:

"If an UP card interacts with a DOWN card in any given round the DOWN card changes to UP on the next round. Otherwise everything else stays the same."

I used this game to illustrate how a strategy $\sigma$ can induce a population vector $\chi$ and I also touched upon what we would mean by $\sigma$ being stable.

We played the following games:


The left half of the class was told to play UP and the other half to play DOWN. Thus our initial value of $\chi=(.5,.5)$. This was given by the fact that half the population was playing $\sigma=(1,0)$ and the other half playing $\sigma=(0,1)$.

We played a couple of rounds (which was fairly academic as the outcome is obvious) and arrived at a final population vector of $\chi=(1,0)$ (all the DOWN cards had been changed to UP cards). This is a stable population.

I asked what would happen if I nudged the population by introducing some more DOWN cards in to the population, to perhaps $\chi=(.9,.1)$. Everyone realised that the population would swiftly move right back to $\chi=(1,0)$.

We also talked about what would happen if we started with $\chi=(0,1)$ (all cards started as DOWN), everyone realised that $\chi$ would not change over time as we played (since there were no UP cards to force a change). This is also a stable population.

It's obvious though that if we introduce some UP cards (say nudging the population to $\chi=(.1,.9)$) then the population would swiftly move to $\chi=(1,0)$.
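The card dynamic is easy to simulate: pair the population up at random each round and flip any DOWN that meets an UP. Here is a minimal Python sketch (the random-pairing scheme is my own simplification of what we did in class):

```python
import random

def step(population, rng):
    # One round: shuffle into random pairs; any DOWN paired with an UP flips to UP.
    rng.shuffle(population)
    for i in range(0, len(population) - 1, 2):
        if {population[i], population[i + 1]} == {'UP', 'DOWN'}:
            population[i] = population[i + 1] = 'UP'
    return population

rng = random.Random(0)
pop = ['UP'] * 10 + ['DOWN'] * 10  # chi = (.5, .5)
for _ in range(20):
    pop = step(pop, rng)

chi = (pop.count('UP') / len(pop), pop.count('DOWN') / len(pop))
print(chi)  # the population is driven to (or very near) chi = (1.0, 0.0)
```

Starting from any mix containing at least one UP card, the population is driven towards $\chi=(1,0)$, while starting from all DOWN it never moves - exactly the two stable populations discussed above.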

The difference between these two stable populations is that one stays stable under evolutionary conditions.

That basically leads us to the definition of an evolutionary stable strategy.


The next game we played was to ask students to randomly (with equal probability) assign themselves a strategy: so everyone was playing a strategy $\sigma=(.5,.5)$.

I won't go into the details of what we did with that (mainly re-confirming the above conclusions) but the important part is to see that any given population vector $\chi$ can be induced by a strategy vector $\sigma$. This leads to the idea of considering whether or not a strategy is evolutionarily stable, which corresponds to whether or not it is stable in the population that it induces.


With more time I would have liked to do more with the cards and played more games...


Tomorrow we'll be talking about pairwise contest games and the connection between normal form games and evolutionary games.

Here's a video that I put together a while ago that shows some code that allows us to investigate emergent behaviour: