Agent-based modelling get-together

Joyce Klu (Statistics Department); Sean O'Heigeartaigh (Future of Humanity Institute); Himanshu Kaul (Institute of Biomedical Engineering, Department of Engineering Science); David Zeitlyn (ISCA, Anthropology); Chris Farmer (Mathematics Institute); Joshua Kahn (Engineering Sciences); Ben Johannes (Anthropology); Justin Lane (Anthropology); Anders Sandberg (Future of Humanity Institute); Andreas Duering (Archaeology); Olaf Bochmann (Maths/Finance); Fabio Caccioli (Maths/Finance); Emre Can (Zoology); Ken Kahn (IT Services); Howard Noble (IT Services); Martin Gould (Mathematics); Christoph Aymanns (Mathematics); Pieter Francois (Anthropology)

First up, Ken gave us an update on his model of the Spanish influenza pandemic. His emphasis for this talk was on how to use BehaviorSpace, a NetLogo tool for sweeping a model's parameter space, to look for tipping points and other relevant features in the data. In particular, Ken is investigating how useful the model could be in discussing the likelihood that the pandemic started in Camp Funston, Kansas or in Etaples, France. Since a much narrower range of model parameters results in a pandemic when the index case is set in Etaples than when it is set in Camp Funston, Ken suggests this is evidence for the predominant theory that the Spanish flu started in Funston. There was quite a bit of healthy debate about this rationale: (1) how do we know the model captures all the important parameters and behaviours? (2) how do we know the range of settings for Etaples is in fact small?
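For readers who haven't used BehaviorSpace, the sketch below shows the kind of sweep it automates, rewritten in plain Python against a toy SIR model. The model, parameter values and the 10% "pandemic" threshold are all invented for illustration; none of this is Ken's actual model.

```python
# Toy illustration of a BehaviorSpace-style parameter sweep (not Ken's model).
# We sweep the transmission rate of a minimal SIR model and record whether
# each setting produces a "pandemic" (here: >10% of the population infected).

def sir_final_size(beta, gamma=0.25, n=10000, i0=5, steps=1000):
    """Deterministic SIR; returns the fraction of the population ever infected."""
    s, i, r = n - i0, i0, 0
    for _ in range(steps):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r / n

if __name__ == "__main__":
    for beta in [0.10 + 0.05 * k for k in range(9)]:  # sweep beta from 0.10 to 0.50
        attack_rate = sir_final_size(beta)
        print(f"beta={beta:.2f}  attack rate={attack_rate:.2%}  "
              f"{'pandemic' if attack_rate > 0.10 else 'fizzles'}")
```

Running this shows a sharp tipping point near beta = gamma (i.e. R0 = 1): below it the outbreak fizzles for every setting, above it almost every setting produces a pandemic. That is the kind of structure a BehaviorSpace sweep is designed to reveal.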

Discussion then moved on to a topic that bubbled away throughout the afternoon: what is different about agent-based modelling? A choice quote that got me thinking at least: "Is NetLogo a user-friendly interface to a tool that is helping people do numerical analysis of a set of differential equations?" (Please use the comments section of this blog post!)
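To make the question concrete, here is a toy Python comparison (all details invented for illustration): an agent-level infection rule run alongside the mean-field differential equation it approximates. For large populations the two track each other closely, which is roughly the intuition behind the quote; the interesting ABM cases are arguably the ones where they don't.

```python
import random

def agent_run(n=2000, beta=0.3, steps=60, seed=0):
    """Each susceptible agent independently risks infection each tick."""
    rng = random.Random(seed)
    infected = 1
    history = []
    for _ in range(steps):
        p = beta * infected / n                  # per-agent infection probability
        new = sum(rng.random() < p for _ in range(n - infected))
        infected += new
        history.append(infected / n)
    return history

def ode_run(n=2000, beta=0.3, steps=60):
    """Mean-field SI equation di/dt = beta * i * (1 - i), Euler-stepped."""
    i = 1 / n
    history = []
    for _ in range(steps):
        i += beta * i * (1 - i)
        history.append(i)
    return history

if __name__ == "__main__":
    a, o = agent_run(), ode_run()
    for t in (10, 30, 59):
        print(f"t={t:2d}  agents: {a[t]:.3f}   ODE: {o[t]:.3f}")
```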

Proofs might be hard to find, but the fact that cake oiled the wheels of collaboration between Andreas and Anders during the last ABM get-together was in evidence during the next talk. Anders presented a set of ideas that builds on the model being built by Andreas about the kinds of populations we can infer from medieval graveyard records. This meeting's morbid tale focused on the more general issue of the survivability of small population groups. One of the main interests in this line of thought is the extent to which human populations are subject to the Allee effect, i.e. whether small populations of humans are necessarily less fit. For example: does our intelligence allow us to cope better with tough years than might otherwise be expected? Do we need a larger population to increase the likelihood of stable technological transfer through generations? Some technologies are easier to transfer than others, e.g. writing is easy: once you've seen it you can make your own system. In relation to Andreas's work, the ABM aspect brought to light the many assumptions that must be built into population life/morbidity tables (and the research that uses them).
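For anyone unfamiliar with the Allee effect, the sketch below shows the basic dynamic in Python: below a critical population size, per-capita growth turns negative and the group collapses. The growth rule and all the numbers are illustrative assumptions, not anything taken from Andreas's or Anders's models.

```python
# Illustrative Allee-effect dynamics (not the graveyard model): below a
# critical population size A, per-capita growth is negative and the group
# collapses; above it, the population recovers toward the carrying capacity K.

def step(n, r=0.1, A=30, K=500):
    """One year of growth with a strong Allee effect."""
    return max(n + r * n * (n / A - 1) * (1 - n / K), 0)

def fate(n0, years=300):
    n = float(n0)
    for _ in range(years):
        n = step(n)
    return n

if __name__ == "__main__":
    for n0 in (10, 25, 35, 100):
        print(f"start={n0:4d} -> after 300 years: {fate(n0):6.1f}")
```

With these numbers, groups starting below 30 individuals go extinct while those starting above recover to around 500, so the fate of a small group hinges on which side of the threshold it sits.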

Another topic that arose throughout the afternoon: how do we know that the code underpinning a model is good? How can we understand the assumptions built into a model without reading the code? (E.g. how does ABM code relate to code that writes code, and to mathematical proofs?) We discussed the need for greater use of ODD protocols (Overview, Design concepts, Details), or something like them, and also Stephen Wolfram's book A New Kind of Science, on how science has a new and distinct tool in computer simulation (compared to equations), i.e. "a quantitative difference in our ability to solve problems (with simulation) is an important qualitative difference."

Olaf then gave us a short introduction to the EU Crisis Project. This is a large collaboration, but the eventual goal is to standardise on a methodology and toolset that will allow other groups, e.g. the Bank of England, to continue investigating new questions. The project team in Oxford will use Matlab, Java and Mason. The project is about to release an online game for the general public to help people understand the mysterious world of finance (the game will also gather behavioural data, e.g. about non-rationality and decision heuristics).

We then moved on to the topic of cell division, apoptosis and chemotaxis in a bio-reactor that maintains a glucose gradient. Himanshu is modelling experimental results using FLAME, a toolkit especially suitable for biology. Himanshu is particularly interested in the methodological issue of parameter selection, i.e. how do you decide which of the very large number of possible agent (cell) behaviours and attributes to include in a model? We discussed two contrasting approaches: (1) add them all and use sensitivity analysis to remove as many as possible (a sketch of this appears below); (2) the theory of constitutive equations, i.e. start with the simplest model and add the core theories we know about the world (e.g. gravity, conservation of energy) one by one, building up.
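Here is a minimal sketch of approach (1), one-at-a-time sensitivity analysis, in Python. FLAME models are specified quite differently (as X-machines in XML and C); the toy cell-growth model and all parameter names below are invented purely to show the selection logic.

```python
# One-at-a-time sensitivity analysis, sketching approach (1) on a toy
# cell-count model (the model and parameter names are invented for illustration).

BASELINE = {"division_rate": 0.05, "death_rate": 0.02, "glucose_uptake": 0.30}

def run_model(params, steps=100, n0=100.0):
    """Toy stand-in for an agent-based run: net growth scaled by glucose uptake."""
    n = n0
    for _ in range(steps):
        n += n * (params["division_rate"] * params["glucose_uptake"]
                  - params["death_rate"])
    return n

if __name__ == "__main__":
    base = run_model(BASELINE)
    for name in BASELINE:
        for factor in (0.9, 1.1):                 # perturb each parameter by +/-10%
            perturbed = dict(BASELINE, **{name: BASELINE[name] * factor})
            change = (run_model(perturbed) - base) / base
            print(f"{name:15s} x{factor:.1f} -> output change {change:+.1%}")
    # Parameters whose perturbation barely moves the output are candidates
    # for removal from the model.
```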

Next we moved up the biological stack to whole humans making complicated decisions about their lives within their complex environment, specifically peasant farmers in Cameroon. David outlined a John Fell project, about to start, in which he will work with Ken and Howard to prototype the use of models with game-like elements built into them as a tool for discussing the future. David mentioned research where something similar has been done with SimCity, work by Lucy Suchman on how people talk about systems (e.g. around the photocopier), and work by Bruno Latour, all of which will inform how we capture the way people interact with ABMs in Cameroon, i.e. in-game logging (presumably as will be done for the Crisis Project).

Finally, to keep us on our toes, we moved back to modelling the movement of money between people/agencies, but this time specifically within the foreign exchange spot markets. Martin's work investigates whether it is possible to reconstruct the social graph of transactions between traders based simply on public information about the trades. It addresses perhaps the main question that we all ended up focusing on: how do we know a model captures the 'essence' of a system rather than just reproducing a data set? Martin referred us to the method of simulated moments and to The Knowledge Engineering Review, 27(S2), June 2012, pp. 187-219; DOI: http://dx.doi.org/10.1017/S0269888912000136
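For the curious, here is a minimal method-of-simulated-moments sketch in Python: choose the parameter whose simulated moments (here just the mean and variance) best match those of the observed data. The one-parameter return model and the grid search are illustrative assumptions, not Martin's actual model.

```python
# Minimal method-of-simulated-moments sketch (illustrative, not Martin's model):
# choose the model parameter whose simulated moments best match the data's.

import random

def simulate_returns(vol, n=5000, seed=1):
    """Toy price-return model with a single volatility parameter."""
    rng = random.Random(seed)
    return [rng.gauss(0, vol) for _ in range(n)]

def moments(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return (m, var)

def msm_distance(theta, target):
    """Squared distance between simulated and target moments."""
    sim = moments(simulate_returns(theta))
    return sum((s - t) ** 2 for s, t in zip(sim, target))

if __name__ == "__main__":
    data = simulate_returns(0.02, seed=42)         # stand-in for observed trades
    target = moments(data)
    grid = [0.005 + 0.005 * k for k in range(10)]  # candidate volatilities
    best = min(grid, key=lambda th: msm_distance(th, target))
    print(f"estimated volatility: {best:.3f} (true value used: 0.020)")
```

Of course, matching a handful of moments is exactly the worry raised above: a model can reproduce the statistics of a data set without capturing the mechanism that generated it.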

With our heads primed, we (those of us who didn't have a class at 4pm) shuffled down the corridor for some well-deserved cake.
