Frequently one needs to create agents based upon data. There is a new micro-behaviour where one simply lists the attributes and pastes in the data (for example, copied directly from a spreadsheet), and one agent is created and initialised from each row of data. The behaviour can be found in the ‘Adding agents’ section of the main library.
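The idea can be illustrated outside the Behaviour Composer. Here is a minimal Python sketch (not the micro-behaviour's actual implementation; the attribute names are hypothetical) that turns tab-separated rows, as pasted from a spreadsheet, into one initialised agent per row:

```python
import csv
import io

# Tab-separated data as it might be pasted from a spreadsheet;
# the attribute names (age, wealth) are made-up examples.
pasted = """age\twealth
34\t120.5
51\t98.0
27\t143.2
"""

def create_agents(text):
    """Create one agent (represented here as a plain dict) per data row,
    initialised from the column values in that row."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return [{key: float(value) for key, value in row.items()} for row in reader]

agents = create_agents(pasted)
print(len(agents))        # one agent per data row
print(agents[0]["age"])   # 34.0
```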
We also improved the support for automatically applying edits made in BC2NetLogo back to the Behaviour Composer model.
And, as usual, bugs were fixed.
Here are the full details of the changes.
This release includes several improvements to BC2NetLogo (which connects the Behaviour Composer directly to NetLogo). It adds support for translation, the Epidemic Game Maker, and the NetLogo view update policy (and changes the default to be tick-based). For those not using BC2NetLogo, the Download tab was much improved, and warnings were added to the Run tab about problems running Java applets. The release also includes bug fixes, including fixes for problems that caused BC2NetLogo to fail on some versions of Linux and Mac OS. Full details at https://code.google.com/p/modelling4all/source/list
On Tuesday I attended the conference on systemic risk at the Oxford Martin School.
Doyne Farmer gave a good talk about his group’s work on modelling financial systems. He presented two ABMs. The first models price crashes in markets where some investors (hedge funds) use leverage; it shows why there are long periods of calm growth and profits followed by a crash and then chaos before the cycle starts again. He also talked about work-in-progress on a larger financial-system model that includes different kinds of banks, including the central bank.
Didier Sornette gave a talk about super-exponential growth caused by positive feedback, leading to bubbles that burst. In response to an audience question (from me) about what effect investors who used his models to foresee crashes before they happen might themselves have on the market, he said they are building an ABM to explore that question, in which a fraction of the investors use his predictive model.
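The super-exponential idea can be sketched with a toy example (my own illustration, not Sornette's actual model): if positive feedback makes the growth rate itself grow, so that dp/dt = r·pᵐ with m > 1, the price reaches a finite-time singularity, whereas plain exponential growth (m = 1) stays finite at any finite time.

```python
# Toy illustration (not Sornette's model): Euler-integrate dp/dt = r * p**m.
# With m > 1 the solution blows up in finite time; with m = 1 it grows
# exponentially and remains finite.
def grow(m, r=0.5, p0=1.0, dt=1e-4, t_max=3.0, cap=1e9):
    """Return (time, price) when the price first exceeds `cap`,
    or at t_max if it never does."""
    p, t = p0, 0.0
    while t < t_max:
        p += r * p**m * dt
        t += dt
        if p > cap:
            return t, p
    return t, p

t_exp, p_exp = grow(m=1.0)   # ordinary exponential growth
t_sup, p_sup = grow(m=2.0)   # super-exponential (positive feedback)

# For m = 2 the analytic blow-up time is 1/(r*p0) = 2.0, so the simulated
# price explodes before t_max; for m = 1 it stays modest (about e^1.5).
print(t_sup < 3.0, p_exp < 1e9)
```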
ABM came up in other talks and conversations.
I just returned from an interesting talk in the Mathematical Institute by Hannah Fry from UCL’s CASA centre. There was lots of heavy mathematics and some interesting applications. I especially liked the model of the 2011 London riots. It nicely combined a contagion model (SIR, where ‘I’ means becoming a rioter and ‘R’ means being arrested), a retail model (where shoppers trade off the distance to travel against the size of retail outlets), and Epstein’s civil violence model, in which the probability of being arrested depends upon the ratio of police to rioters. They even did some participatory modelling: a model of riots breaking out in London was displayed on a table, and the police moved toy police cars and vans around in response. A Kinect camera above the table updated the model as the toys were moved around.
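A very stripped-down sketch (my own toy, not Fry's model) shows how the contagion and arrest pieces can fit together: each susceptible may start rioting through contact with rioters, and each rioter's arrest probability rises with the police-to-rioter ratio, in the spirit of Epstein's model. All parameter values here are arbitrary.

```python
import random

def step(S, I, R, police, beta=0.8, k=0.5):
    """One tick of a toy riot-contagion model.
    S: susceptibles, I: active rioters, R: arrested (removed).
    beta scales contagion; k scales how strongly police suppress the riot."""
    n = S + I + R
    # Contagion: each susceptible may start rioting, with probability
    # increasing in the current fraction of active rioters.
    new_rioters = sum(random.random() < beta * I / n for _ in range(S))
    # Arrests: probability rises with police per rioter (capped at 1),
    # so a large riot overwhelms a fixed police force.
    p_arrest = min(1.0, k * police / max(I, 1))
    arrests = sum(random.random() < p_arrest for _ in range(I))
    return S - new_rioters, I + new_rioters - arrests, R + arrests

random.seed(1)
S, I, R = 990, 10, 0
for _ in range(50):
    S, I, R = step(S, I, R, police=5)
print(S, I, R)  # population is conserved: S + I + R == 1000
```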
During cake after the talk Hannah, Chris Farmer, and I had an interesting discussion of the relationship between ABM, model output, discrete mathematics, and continuous mathematics.
Two-scale high-resolution map based on a satellite image. Key: dark green = forest; blue = river; brown = road/path, etc.
10 minute GIMP sketch
Schematic constructed entirely in NetLogo
Using GIMP to pick out features based on grey-scale
Schematic village and farmland in 3D
Some will prefer a sketch (perhaps one they have drawn themselves?), others might be happy to use prototypical fields, village and forest and play around with the space parameters at will. It depends on the questions the model is being used to think about and a person’s literacy in reading the representation. (A constructionist might say the best representation is one that a person has built themselves.)
The satellite map approach might be confusing if you’ve never thought about scanning your terrain from space or used a map. I currently favour this approach though for a number of reasons:
- I don’t think it is much of a leap of imagination to key into this kind of representation, and I think there are a number of techniques I can put into the model to help people imagine themselves within it. Tricks that might work: ask the gamer to place their house in the right location in the village with a mouse click; show farmer agents going about activities in a recognizable way, e.g. walking to the fields, clearing weeds, harvesting; change the size of a farmer agent depending on which map scale it is in (smaller in the fields)
- One patch is equivalent to about 2m x 2m or a couple of average dinner tables (or desks) – easy to imagine but nevertheless quite amazing that it is possible to make a combined GIS/ABM at this level of detail.
- 2m x 2m is easy to imagine when thinking about farming tasks, i.e. answering a question – how long does it take to clear the weeds?
- 2m x 2m is also on the scale of a few handheld quadrat samples i.e. if we’re to measure the distribution of fauna and flora, and take soil chemistry readings then there can be a more or less direct relationship between where the data is measured and a patch in the model.
- I like the idea of using an even higher resolution in the village so that I can visualize farming agents doing different activities e.g. repairing a roof, digging a new latrine, selling at the market, going to school, recovering from illness etc.
Point 3 needs thinking through. What I’d like to do is update patch and agent attributes at a frequency that doesn’t affect the performance of the model. Perhaps there are predictive equations I can use that take into account the main factors that influence soil and crop growth, e.g. temperature, rainfall and how the farmers treat the soil. Then it will be interesting to make it easy to update the model with empirical data and see how these compare to predictions: how fast is the maize growing, how long did it take to mature, what was the actual rainfall/temperature, how are key soil chemistry readings changing?
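As a placeholder for what such a predictive equation might look like, here is a simple logistic growth update driven by temperature and rainfall multipliers. Everything here is hypothetical and uncalibrated; the point is only the shape of a per-patch daily update that empirical readings could later be compared against.

```python
def grow_maize(biomass, temperature, rainfall,
               r=0.08, k=1.0, t_opt=25.0, rain_opt=5.0):
    """One daily update of maize biomass on a patch (toy model; all
    parameters are illustrative, not calibrated). Growth is logistic,
    scaled down when temperature or rainfall is far from an optimum."""
    temp_factor = max(0.0, 1.0 - abs(temperature - t_opt) / 15.0)
    rain_factor = min(1.0, rainfall / rain_opt)
    growth = r * temp_factor * rain_factor * biomass * (1.0 - biomass / k)
    return biomass + growth

# Simulate a growing season; the result could be compared with a field
# observation of how far along the maize actually is.
b = 0.05
for day in range(120):
    b = grow_maize(b, temperature=26.0, rainfall=4.0)
print(round(b, 3))  # predicted biomass as a fraction of maximum (0..1)
```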
This approach fits the idea of ABM as dashboard i.e. a tool that can be used to repeatedly explore and update local farming strategies and communicate these strategies to others.
Ken Kahn (IT Services), Tamas David-Barrett (Social Evolutionary Neuroscience Research Group), Howard Noble (IT Services), Richard Taylor (Stockholm Environment Institute), David Zeitlyn (Institute of Social and Cultural Anthropology), Anders Sandberg (Future of Humanity Institute), Doyne Farmer (iNET), Andrew Snyder-Beattie (Future of Humanity Institute), Ivvet Modinou (co-founder of Sparkd.com), Citt Williams (Oxford Internet Institute), Mark Gilbert (Mathematical Institute), Maja Zaloznik (Oxford Institute of Population Ageing), Kevin Burrage (Computer Science), Stuart Armstrong (Future of Humanity Institute), Rob Axtell (iNET), Wybo Wiersma (Oxford Internet Institute)
- Ken Kahn talked about recent discussions with medical researchers at Oxford interested in ABM.
- Howard Noble gave an update on the game he is making for participatory research in Cameroon.
- Richard Taylor introduced KnETs and participatory research at the Stockholm Environment Institute.
- Tamas David-Barrett introduced us to computational evidence of the cognitive cost of sociality and how this might limit monkey and ape group sizes.
- Anders Sandberg then asked us for data sets he could use to model the effect of a perfect epidemic, i.e. the spread of a (man-made) pathogen, and which isolated (?) populations might survive.
- Wybo Wiersma gave us an overview of his plans to use various modeling techniques to study the role social media networks might play in revolutions (such as the Arab Spring).
The UK National Curriculum (Key Stage 3 – 11 to 14 year olds) now requires that students
design, use and evaluate computational abstractions that model the state and behaviour of real-world problems and physical systems
8 to 11 year olds are expected to
design, write and debug programs that accomplish specific goals, including controlling or simulating physical systems; solve problems by decomposing them into smaller parts
And 5 to 7 year olds are supposed to
create and debug simple programs
use logical reasoning to predict the behaviour of simple programs
We are co-hosting another ABM get-together with the Future of Humanity Institute to:
- See who’s working on what sorts of problems
- Find out what software and packages are being used
- See who has what expertise
- Create some new collaborations
- Assist those who are new to the technique
- Reinvigorate the network of existing informal contacts
Everyone will be welcome whether you’re an undergraduate, postgraduate, research fellow, or academic staff. It doesn’t matter which department you are in, what topics you’re interested in, or what level of knowledge or experience you have.
We are offering 5-minute presentation slots to anyone who wants to discuss thoughts on ABM – particularly problems that might be well-suited for ABM or work-in-progress. Please send email to email@example.com
Date: Monday 3 February
Time: 2:00pm – 4:00pm (though in previous meetings some stayed past 5pm)
Future of Humanity Institute, Suite 1, Littlegate House (1st floor, on the left), 16/17 St Ebbe’s Street, Oxford, OX1 1PT
Cake and refreshments will be provided! Let your colleagues know about this. If you’re unable to come along but would like to keep in touch, please send an email to firstname.lastname@example.org
We hope to see you on the third of February.
Ken Kahn and Howard Noble (Academic IT, IT Services)
Seán Ó hÉigeartaigh (Oxford Martin Programme on the Impacts of Future Technology)
Anders Sandberg (The Future of Humanity Institute)
I read http://aeon.co/magazine/world-views/should-we-trust-scientific-models-to-tell-us-what-to-do/
and I added this comment to the blog:
To the extent feasible, computer models should be open and transparent. Source code and relevant data should be free and accessible. Models should be built to be understood as well as executed by computers. If a society’s members had ‘modelling literacy’, they would be better prepared to understand the strengths and weaknesses of models. The fact that “ubiquitous computing power allows ordinary people to get hands-on experience of how models actually work” is a great first step. But to really learn ‘how models actually work’ one needs models that are transparent. Yes, many models are so complex that only specialists can understand them, but maybe researchers should be obligated to provide simplified models as part of their public engagement efforts.
Modelling4All software upgraded to use NetLogo 5.0.5 instead of 5.0.4.