Chronis Kynigos talked about constructionist e-books. Technically, the idea is e-books with live constructionist tools embedded in them. Authoring tools, student tracking, and learning analytics are all part of this large EU-funded project called MCSquared.
Seventeen participants came from Thailand, and they gave a presentation on constructionism in Thailand: in schools (including a special school designed around constructionism), in rural communities, and in learning in industry, with nice examples of each. Meditation is part of the story — it seems to enhance the introspective phase of constructionist learning.
A team from the University of Cyprus presented ‘cognitive processes enacted by learners during co-construction of scientific models’ which led to a very interesting discussion about student-designed models versus the ‘correct’ model. This issue is acute in physics. Physics teachers worry that students building wrong models is counter-productive. Many pointed out how different biology or social science is in this regard. And some argued that even in physics there is a learning trajectory where students build models that are correct only in special situations and then refine and improve them iteratively.
Lots of interesting talks about programming systems for young children.
Mitchel Resnick began with a talk about the four P’s of creative learning: projects, peers, passion, and play. He argued that students should be encouraged to choose their own projects that they are passionate about. He is worried that too much emphasis is placed on puzzle solving and too little on being creative and expressive. His Scratch project is succeeding exceedingly well in the ‘peer’ dimension (6 million projects uploaded, 1.5 million comments per month). The web site has areas where the more advanced children help others (and are trained to support others instead of simply giving them complete answers). He argued against gamification except where it is peer-to-peer (e.g. ‘likes’ and ‘favourites’). He praised a book called Drive about the short-term benefits and long-term costs of gamification.
Edith Ackermann talked about the role of humour in creativity and argued that the playfulness that is important in learning and creativity (and doing science?) is more than the process of tinkering — it includes fantasy, whimsy, and humour.
Karen Brennan talked about constructionism in the classroom and how to avoid technocentrism. Many teachers worry that they need to be experts on every aspect of a programming language before they can begin teaching it.
In the panel discussion there was a good deal of concern that the educational establishment is taking constructionist tools (e.g. Scratch or NetLogo) and using them in instructionalist teaching, though several people thought this was OK so long as there was a balance between instruction and construction.
This release updates the documentation and sample models, and fixes a few minor bugs.
There is a new version of BC2NetLogo for use with the latest NetLogo (version 5.1.0).
Minor bug fixes were made to the Behaviour Composer and it was upgraded to use the latest versions of Google App Engine and GWT.
Earlier I was envisaging a god-game-like model, which I foolishly called a godel:
- A play on words combining ‘model’ and ‘god-game’, though one that seems to confuse mathmo-compsci folk because of Gödel and Gödel Machines
- A questionnaire combined with a visual representation of a model of an ecosystem where answers calibrate the model
- A tool to stimulate group discussion by affording complexity i.e. the non-linear dynamics of human and natural systems
- Is it an expert system? Not quite – we don’t know enough to tell people what to do – though a collective configuration could be didactic: in some sense a plan or community dashboard
- The recording of a player’s interaction with a godel is hopefully an expression of their mental model or understanding of the world they live in (social + ecosystem).
- A tool to explore dissonance, and to reflect on misunderstandings
- A tool to understand when individual behaviours conflict (the moments when selfish behaviour can lead to a tragedy of the commons)
- A godel in this research context is about authentic play i.e. the aim is to learn lessons about how to behave within a specific ecosystem
- A tool to find out the moment when the scripts / everyday behaviour of human actors needs to change to avoid ecosystem collapse
- A tool to support collective decision-making e.g. to decide how to adapt at such critical moments
I met Richard Law at YCCSA, and Adrienne Tecza recently. They have given me some new ideas about how to build a fishing godel:
- Load NetLogo HubNet patches with fishing data relating to the size and species of fish at each location on a map. The distribution would need to change over time, i.e. month by month, seasonally, or over years (I don’t know enough about fish movement).
- The interaction design would ask players (a number of fishing stakeholders, each at their own laptop) to step through the following together:
- Read scenario information: market prices, weather, historical fishing catch…
- Point on a map to say where player wants to start fishing
- Boat moves to that location and some time passes
- Players can see (for free) some information about the fishing activity of other players:
- the fish that are being caught
- who (their name) is making the catch
- the gear being used
- Players can ask for this information from other players within a wider range
- The players being asked for information can decide what to say i.e. truth, subset of the truth, nothing
- Some players are out of range (this may not be worth modelling now that mobiles are commonplace?)
- The players can choose another fishing location but they are constrained by time and petrol
- Once the fishing day is over there’s a scene in the bar where global information about the catch is shared
- To avoid discord/punch ups/ethical issues, each player can register with a pseudonym, but this would take away the chance to record how real fishermen play the game according to their knowledge of the ecosystem. (It may be that this problem would go away if the game could be played iteratively and players could learn which pseudonyms to trust. It may also be that real fishermen can guess who is who by the way they play).
- The model design takes into account:
- How the act of fishing depletes fish stocks (hardest, perhaps impossible part)
- The effect of weather on fishing (makes it dangerous) and fish stocks
- Market prices of different fish species and sizes (visualized perhaps as prices on the menus of fancy hotels)
- The rate of fish stock decline depends on gear type
- Gear also has an effect on the environment i.e. damages coral
- Fish stocks replenish
- The model is loaded with map data and fish stock variation over time that resembles the place where real fishers extract ecosystem services (the link between game and model i.e. authentic play)
- An additional step in the game could be to ask fishers to crowd source the fish count topology map
- Such a godel would give us interaction data relating to:
- Fisher knowledge of their ecosystem
- Game-theory like data relating to sharing of information
- The number of people who use catch information, and the number of people who risk finding new bounties (explore the ecosystem) i.e. the balanced harvest approach to sustainable fishing
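The stock dynamics in the model-design bullets above could be sketched as follows — a minimal model assuming logistic replenishment and a gear-dependent catch fraction. All names, gear types, and numbers here are illustrative assumptions, not values from any real fishery or from the eventual NetLogo model.

```python
# Minimal sketch of per-location fish stock dynamics: logistic
# replenishment minus a gear-dependent catch. Parameter values are
# illustrative assumptions only.

GEAR_CATCH_RATE = {"line": 0.02, "net": 0.10, "trawl": 0.25}  # fraction of stock caught per day
GEAR_DAMAGE = {"line": 0.0, "net": 0.01, "trawl": 0.05}       # habitat (coral) damage per day

def step_stock(stock, capacity, growth_rate, boats):
    """Advance one day. `boats` is a list of gear types fishing at this location."""
    catch = sum(GEAR_CATCH_RATE[g] for g in boats) * stock
    catch = min(catch, stock)                                   # cannot catch more than exists
    replenished = growth_rate * stock * (1 - stock / capacity)  # logistic replenishment
    new_stock = max(stock - catch + replenished, 0.0)
    return new_stock, catch

# One trawler depletes a patch far faster than line fishers would.
stock, caught = step_stock(stock=1000, capacity=2000, growth_rate=0.05, boats=["trawl"])
```

This captures the depletion-versus-replenishment trade-off but not fish movement between patches, which (as noted above) is the hard, perhaps impossible, part.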
In terms of implementing this, I worry about using NetLogo because of all the issues I had with extensions when building the Somie farming model: the time extension not working with the Goo extension (user-interface customisation), Replay not recording efficiently, and very slow performance on high-resolution maps (probably my poor coding skills). Some of these issues should be solved in the next release. It would be ideal if this model ran on the web and worked well on mobile phones; hopefully this will be possible in the web-native version of NetLogo that Northwestern and partners have been working on for several years. I’d also need to create a game experience, but I cannot see immediately how to do this in NetLogo, i.e. how to show some information for a certain amount of real time, and then let the multiplayer step-wise fishing simulation play out, with real time mapped to simulation time while still making it possible to see things happening — fishing boats moving around, fish being caught — at a realistic pace. Scientists tend to abstract this information away, but I think it is important to visualise the complexity and details of the system, not least because real fishermen (the players) can highlight important erroneous assumptions.
Having said that – I don’t know of a better tool in terms of ease of programming whilst not compromising on complexity.
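The real-time-to-simulation-time mapping could amount to little more than a paced stepping loop — sketched here in Python rather than NetLogo, with illustrative function names and pacing values:

```python
import time

def run_fishing_day(step_model, sim_hours=10, real_seconds_per_hour=0.5):
    """Step through one simulated fishing day, pausing between simulated
    hours so that boat movements and catches play out at a watchable pace.
    `step_model` is assumed to advance the underlying model by one
    simulated hour (and redraw the view)."""
    for hour in range(sim_hours):
        step_model(hour)                   # advance the model one simulated hour
        time.sleep(real_seconds_per_hour)  # pacing is for the viewers, not the model
```

Tuning `real_seconds_per_hour` up would let players watch events unfold at a realistic pace; setting it to 0 would give a batch run for experiments.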
Here’s a sketch of the game made by Adrienne:
First sketch of the multiplayer fishing game
This release includes better error handling, minor improvements, updated documentation and links, and bug fixes.
Full details at https://code.google.com/p/modelling4all/source/list
Also includes an improved introductory video.
We had five fascinating “5 minute” talks that stimulated discussion on the theory, methodology and application of agent-based models.
Conceptual description of a multi-level system, with the high-dimensional micro-level at the bottom and the reduced-dimension macro-level at the top. In the social sciences we imagine the bottom level is the agent level while the upper plane is the aggregate or macroeconomic level, although in reality there are many intermediate levels (e.g. firms): the bottom level could be agents and the next level firms, or the bottom could be firms and the upper level the macroeconomy.
Rob Axtell started off the proceedings by making a general point that ABMs are well suited to social scientists who want to make full use of the capabilities of a modern laptop, i.e. processor, RAM, graphics card, and high-resolution display. (Ken highlighted that researchers already have access to high-performance computing facilities, e.g. the ARC service at Oxford.) This led on to the topic: if ABMs are the answer, what are the questions? Do we build models to look for emergence (the computer scientist’s preference?) or to make predictions by removing details until the macro-behaviour is deterministic (the engineer’s preference?). Rob pointed us to the work of Michael Wooldridge and the following reference: Jennings, N. R., K. Sycara and M. Wooldridge (1998). “A Roadmap of Agent Research and Development.” Autonomous Agents and Multi-Agent Systems 1(1): 7-38. Rob also sent through a paper he is working on at the moment which discusses the micro vs macro, or agent vs culture, problem we discussed at the meeting: Beyond the Nash Program: Aggregate Steady-States Without Agent-Level Equilibria. Rob has just submitted ‘Foundations of Agent Computing in Economics and Finance’ to the journal editor.
Wybo Wiersma then gave us an overview of his work on The Internet as a Catalyst for Social Movements: An Analysis and Simulation of Social Media Mechanisms in the Context of the Arab Spring, Indignados and Occupy Movements (link to download document). Wybo is currently improving the AgentScript code library to enable him to create a properly web-based agent-based model. (The NetLogo team at Northwestern University are also working on a web-based version of NetLogo called Tortoise).
Justin Lane’s NetLogo ABM interface
Justin Lane showed us his agent-based model about the spread of ideas through religious groups (building on work by Ken Kahn and Harvey Whitehouse). Justin will use his model to think about a large data set he is gathering with churches in Singapore and Oxford. Inevitably the discussion led back to Rob’s point about how we model the cognition of an agent in a complex social network…debate ensued. Read more about Justin’s work in his Method, Theory, and Multi-Agent Artificial Intelligence: Creating computer models of complex social interaction paper. Justin also mentioned work by Ron Sun on cognitive architectures.
Andrew Snyder-Beattie presented his efforts towards a meta-model of the insurance industry — i.e. an agent-based model of how insurance companies rely upon computer models of catastrophes. He presented data on hurricanes and discussed the effects of different kinds of modelling errors. This is a joint project with the insurer Amlin.
Finally Anders Sandberg presented his agent-based model of how civilization might recover from a disaster where knowledge is preserved but there are very few people left to learn and apply that knowledge.
After more than 4 years the old tutorial has been replaced by this: http://youtu.be/WBCwwZZ0Vn8
And there is a new video demonstrating how to use the tutorial guides: http://youtu.be/SSj1lpV5dLw
Watch this: http://youtu.be/HALiXCTaoMk
It is pretty rough — comments welcome.
Besides fixing several bugs, this release provides better support for micro-behaviour web pages hosted on the Modelling4All server. The fundamental problem is that these pages have no owner, so if one releases a model using them then anyone can edit the pages. In this release one can declare a page read-only so that it can no longer be edited; one can still create new pages that are edited versions of read-only pages.
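The read-only rule amounts to a simple permission check plus copy-on-edit. A minimal sketch of the idea — the class and method names here are illustrative, not the Behaviour Composer’s actual API:

```python
# Sketch of read-only micro-behaviour pages on a server where pages
# have no owner: anyone may read, no one may edit a frozen page, but
# anyone may create an editable copy of one. Names are illustrative.

class Page:
    def __init__(self, contents, read_only=False):
        self.contents = contents
        self.read_only = read_only

    def edit(self, new_contents):
        if self.read_only:
            raise PermissionError("page is read-only; make a copy instead")
        self.contents = new_contents

    def copy_for_editing(self):
        # An edited version of a read-only page starts life as a new,
        # editable page; the original stays frozen.
        return Page(self.contents, read_only=False)
```

Freezing the pages a released model depends on protects the model, while copy-for-editing preserves the open, anyone-can-remix spirit of the site.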