Three dozen years at OUCS – vol two: 1986-1995

This decade began as the age of the BBC Micro, the Amstrad word processor, and the Computers in Teaching Initiative. At OUCS, the ICL 1906A had been switched off in 1981 and replaced by an unloved but bright orange 2900 series mainframe, which we had the difficult task of persuading a sceptical university to like better than the really boring but much more efficient VAX/VMS system which complemented it. But the days of the mainframe, whatever its colour, were clearly numbered. In my office, an Olivetti microcomputer running something called MS-DOS appeared (catch me using an IBM like everyone else); a bit later on I installed a Macintosh with an A4 monitor and started preparing overheads for my talks on that instead of on Charles Curran’s ICL PERQ. Computing stopped being something you did in batches or at a terminal, and became something you did on your desk. The phrases “information technology”, “desktop publishing” and “word processing” were heard in the land. Amongst other seminal events, in November 1986 I attended a conference at the University of Waterloo on the possibilities offered by the forthcoming first ever digitized edition of the Oxford English Dictionary. In April 1987, Sebastian Rahtz organised a conference on Computers and Teaching in the Humanities at Southampton University. And in November 1987, I attended an international conference at Vassar College in Poughkeepsie, New York, from which was born the Text Encoding Initiative.

The Internet started its insidious transformation of everyday life. From 1989 onwards, earnest intellectual discussion on the newly founded Humanist mailing list led to new acquaintances and new social networks (only we didn’t call them that). In 1993 I went to a Network Services Conference, organised by something called the European Academic Research Network. Here a man called Robert Cailliau from CERN gave a live demonstration of a program called Mosaic, which could display data from sites on three different continents. That was almost as amazing as the sight of a room full of people from Eastern and Central Europe taking advantage of Poland’s recent accession to EARN (the European end of BITNET), and its consequent unwonted connectivity, by sending email messages back home in dozens of funny languages. To say nothing of the crazy notion, which I first heard voiced there, that one day people would actually use this World Wide Web thing as a means of making money. In Oxford, of course, we had at this time, after much deliberation, just installed something called Gopher to run our information services.

I did a huge amount of travelling in the nineties, much of it on behalf of the TEI, which I joined as European editor in 1989. Between 1990 and 1994, when the TEI Guidelines were finally published as two big green books, I must have made more than a dozen trips to the US, and as many to various places in Europe, to attend the umpteen committee and workgroup meetings whose deliberations formed the basis of the TEI Guidelines, to argue with the TEI’s North American editor — one Michael Sperberg-McQueen — about how those deliberations should best be represented in SGML, and to make common cause with him in defending our decisions to the TEI’s occasionally infuriating steering committee. Michael has described the first of these processes as resembling the herding of cats, but Charles Goldfarb, self-styled inventor of SGML, called it “an outstanding exercise in electro-political audacity”, which I like better. My participation in the TEI as “European Editor” was financed partly through a series of European grants obtained by the ingenious, charismatic, and sadly missed Antonio Zampolli, at that time one of the first people to identify and successfully tap the rich sources of research funding headquartered in Luxembourg.

My other major activity of the nineties was the creation of the British National Corpus: another attempt to create a REALLY BIG collection of language data, this time for the use of lexicographers. This project got some serious funding thanks to an unusual coincidence of interest amongst commercial dictionary publishers, computational linguists, and the UK Government, which was at that time keen to develop something called the “language engineering industries”. At OUCS, it led to the installation of some massive Sun workstations in what is now a rather soggy meeting room called Turing but was then just a rather soggy basement, along with three people to run them, one of whom (not me) actually knew how to design and implement a workflow for the production of several thousand miscellaneous texts and text samples with detailed SGML markup. It seems astonishing, but both the TEI and the BNC are still alive and well, despite occasional reports of their obsolescence. The history of the TEI has yet to be written, if only because it’s far from over; I have, however, written an article about the history of the BNC called “Where did we go wrong?”, a title which regularly confuses the non-native English speakers who keep buying copies of the corpus year after year.

Almost every project or institution mentioned in this blog entry now has an entry in Wikipedia. I don’t know what to make of that, but it is certainly sobering to reflect that when the decade I’m talking about began, we were somehow muddling along without mobile phones, Wikipedia, Facebook, or the Channel Tunnel, all of which have since become unremarkable parts of everyday life. That sort of thing makes a chap feel old.
