DiXiT supervisory meeting: Borås

Requirements for a publication infrastructure – Project Description

Magdalena Turska ER3

Borås

Introduction

The original DiXiT project bid described my work briefly as ‘a requirements study for a publication architecture targeting multiple media, not only web and paper, but also mobile devices (EPUB). Progress in this field is especially important to projects without access to large supporting technical staff.’ My primary tasks were to develop a model of reusable components for a publication infrastructure – surveying the existing tools and frameworks, requesting improvements and implementing new components.

Objectives

• Creation of an index of tools for all stages of the production of digital editions to be complete by M18

• The submission of several feature requests for improvements to the TEI ODD meta-schema language where appropriate by M15

• The creation of a proof-of-concept digital edition that evinces the use of re-usable components for edition production and publication by M24

• Documentation of improvements to the oXygen-TEI framework completed by M27

Tasks and methodology:

• Survey the community for existing tools and publication frameworks, coupled with user requirements gathering

• Develop a model of reusable components for a publication infrastructure

• Where feasible implement proof-of-concept components for use in publishing digital editions

• Document and openly request improvements in the TEI ODD meta-schema language

• Further develop the oXygen-TEI framework

• Create a proof-of-concept digital edition based on the defined reusable components (in collaboration with SyncRO)

• Document and disseminate conclusions through knowledge exchange activities

Table 1. ER3 Objectives and Tasks

The specific objectives and tasks listed above strongly stress the need for appropriate software tools, but between the lines there is an even stronger assumption that software plays only a partial role within an editorial workflow. Obviously the actual methods of creating a digital edition differ as much as the editions themselves and are guided by multiple factors: source material, type of edition, available human resources and infrastructure, to name just a few. But any workflow presumably consists of distinguishable stages or components, even if quite often they do not follow the linear succession of a waterfall production model but are undertaken concurrently or backtracked between iterations. Still, it is not hard to enumerate at least some conceptual tasks that form part of creating and publishing a scholarly digital edition. Each of these tasks should correspond to at least one tool or agent capable of performing it. Modelling the editorial workflow as a pipeline of such tasks, with a built-in possibility for iteration, provides a framework for the further pairing of tasks with appropriate software tools.

Framework

The following diagram illustrates the top-level approximation of the stages of creating a digital edition that I have identified. Stages in the diagram are color-coded to suggest the core activity: green symbolizes the incorporation of knowledge in an explicit form; yellow, visualization; blue, conversion between formats. On this reading, publication is both yellow, as it re-presents the data in an ultimately visual way, whether printed or online, and blue, as on the data level it converts between specific formats such as TEI and HTML or PDF. The crucial steps are the ‘green’ phases, as this is where the injection of knowledge occurs; yet without the complementary conversion and visualization steps that effectively translate and decode the plethora of formats, the data itself – however beautifully modelled – will remain unusable.

[Diagram: top-level model of the stages of creating a digital edition]

The aim here was not to include every tiny distinguishable procedure or step but rather to construct a very general pipeline schema that could be applied to all, or at least most, digital editorial workflows. Obviously the linear sequence of the diagram may not represent real-life practice, which often includes backtracking after trial-and-error attempts at specific tasks. Some phases may be absent for specific projects: a digitisation phase, for example, would be irrelevant for projects building upon born-digital data or the outcomes of previous projects. As mentioned earlier, each of the stages needs to be further divided into smaller tasks, commonly with intermediary conversions between the output and input formats of the applicable tools, and only at this more detailed level can actual pairings between tasks and tools be made.
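
To make the idea of task–tool pairing more concrete, below is a purely hypothetical sketch – the element and attribute names are invented for this post and are not part of any existing schema – of how a fragment of such a detailed pipeline might be written down, with each task paired with a candidate tool and its input and output formats.

<!-- hypothetical notation, invented for illustration only -->
<workflow>
  <stage name="transcription">
    <!-- each task names a candidate tool and the formats it consumes and produces -->
    <task name="encode-text" tool="oXygen XML editor" input="facsimile images" output="TEI XML"/>
  </stage>
  <stage name="publication">
    <task name="render-web" tool="XSLT stylesheets" input="TEI XML" output="HTML"/>
    <task name="render-print" tool="XSL-FO processor" input="TEI XML" output="PDF"/>
  </stage>
</workflow>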

I believe that such a framework could not only present editors with a map of the options they have when undertaking a scholarly edition, thus cutting down on headaches and facilitating adherence to good practices in the field (or, in fact, identifying what the common practices are), but also has the potential to identify the critical gaps that will require new software to fill them. Moreover, facilitating the automated generation of the required outputs, thus freeing very substantial human and financial resources, is probably the only way to shift the focus onto the completeness, consistency, quality and depth of the underlying data.

Because of this latter concern, I am investigating not only existing publication platforms and components, but also the possibility of defining the intended outputs and publishing requirements in a formal manner and attaching this information to the encoded document or schema, so as to allow its automated conversion into the desired publication form.

Outcomes

My work now is twofold. One aspect is trying to break down the general production stages outlined in the diagram above into the smallest possible tasks and to create pairings between those tasks and the tools that can perform them, dealing with the necessary conversion issues along the way. This activity will result in updates to the relevant TEI wiki pages, a series of blog articles and possibly a bigger publication. The other part is more hands-on development of particular tools that can deal with one or more of the identified gaps. My main project is the implementation of the processing model for TEI Simple in XSLT, with output to HTML, Markdown and possibly LaTeX. Smaller projects are conversions between TEI and Markdown and customizations of the oXygen framework that facilitate the editing and proofing of TEI documents.
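
To give a flavour of what the HTML side of that processing model implementation involves, here is a minimal, purely illustrative XSLT sketch – not an excerpt from the actual TEI Simple stylesheets – showing the general shape of the mappings that have to be provided for every element of the schema.

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tei="http://www.tei-c.org/ns/1.0">
  <!-- render TEI paragraphs as HTML paragraphs -->
  <xsl:template match="tei:p">
    <p><xsl:apply-templates/></p>
  </xsl:template>
  <!-- render italic highlighting as a styled span -->
  <xsl:template match="tei:hi[@rend='italic']">
    <span style="font-style:italic;"><xsl:apply-templates/></span>
  </xsl:template>
</xsl:stylesheet>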

Poster

The poster for the presentation is available here.


DiXiT Camp 3 Borås

In Oxford snowdrops and early daffodils are blooming already, but in Borås, Sweden, the lakes are still frozen and the ground is sprinkled with snow. What brought me there was the third – and last – DiXiT Camp. These half-yearly gatherings are not only occasions to bring fellows together for our supervisory meetings; above all, they are densely packed with lectures, presentations and hands-on sessions on various topics. The keywords for this week were: academia, cultural heritage, society.


I have to admit I gave barely a glance to our schedule, so on Monday morning I was quite surprised when, after a brief lecture on the public of scholarly editions and the consequences this should have as we create one, we were asked by Elena Pierazzo to design a user interface for a web app. In 30 minutes we barely got our act together to decide what we wanted our app to actually do, let alone thoroughly design the interface for it, but we had some interesting ideas and definitely enjoyed the experience.

But the best was yet to come. After lunch José Miguel Vieira from the Department of Digital Humanities at King’s College London greeted us with detailed tutorials that led us through the initial stages of creating an app for Android devices with MIT’s App Inventor. For quite a while our group of supposedly mature professionals behaved like children delighted with a new toy. The next step – obviously – was to use our newly gained technological superpowers to play pranks and shout mild abuse at our peers before we finally moved on to the business of enhancing our shiny new digital edition app with adequate functionality. The time flew by and the workshop ended way before we wanted it to, leaving most of us with a strong resolution to try something on our own.

The next two days were dedicated to the practical aspects of the digitisation process – above all the mass-scale creation of digital facsimile images – and were led by a team from the National Library of Sweden. We had a number of introductory talks on selection, information capture and processing on Tuesday, but on Wednesday morning we were again on our own with a goodie-bag containing a rather unusual collection of items: from a Hello Kitty magazine with stickers and a necklace, through maps and contemporary newspapers, to a historical daguerreotype in a handsome engraved case. We tried to put our librarian-archivist hats on and think about how to proceed when digitizing not only this relatively small though extremely mixed set, but also thousands of objects like these. Again, the discussion was pretty interesting and we even decided to create some video footage of opening the Hello Kitty bag for posterity, but only the afternoon session put our vague and abstract divagations to the real test. We were given a digital camera, a fancy table to mount it on and a very interesting but necessarily brief introduction to the practical aspects of digital photography – and then it was time to shoot! For those of us who, like me, were barely skilled at taking pictures with a mobile phone, it was both challenging and illuminating to hear about concepts like depth of field, exposure, scene lighting or white balance (especially as it turned out to be grey balance in the end). Our tutor, Andrea Davis Kronlund, did a perfect job of explaining it all very clearly and showing everything in action straight away. Thus armed, we were quite keen to apply all this in practice – keeping the light even, avoiding vibrations, carefully calibrating and adjusting the tiny details to ensure the best consistency across the shots and minimize the need for manual post-processing of our images. Our afternoon ended with a general discussion that nicely wrapped up the two days we had dedicated to digitization problems and techniques.

That was the end of the DiXiT part of the workshop for me, with the small exception of the ultimate Swedish cultural experience – the ice bath. In the 6 a.m. winter darkness we ventured forth to warm ourselves in the sauna and then rapidly cool off in the freezing waters of a lake covered with ice, bar an opening only about 10 metres across. I rather recommend it to anyone tough enough, as getting back to the safety of the bank after a very brisk swim in the lake definitely makes one feel very much alive.


My colleagues went on to hear about digital asset management while I had to be back in time for a two-day workshop on TEI and related XML technologies for the international master’s students of the Library and Information Science School, organized by Mikael Gunnarsson. In this short time we covered rather a lot of ground – starting with a general introduction to what a scholarly edition is and what a digital scholarly edition should be, through the basics of XML, the idea of TEI, its core elements, popular modules and the principles of dealing with metadata, to finish with XPath, XSLT and ways of using that data not only in a textual manner but as input for all sorts of visualisations. All these concepts are a real challenge, especially at first sight, and not easy to teach either, so it was a great pleasure to see the Swedish students eager to get their foundations straight and solid and build from there towards our grand finale with XSLT. That’s quite a journey in two days – well done! All teaching materials from the master’s course are available here.


The Borås camp was indeed special – interesting training was to be expected, but the wow factor came from the city itself: the unique setting of the Textile Fashion Center, numerous sculptures found unexpectedly in the streets, the harsh climate and hot fika… We owe many thanks to the camp organizers, Mats Dahlström and above all our own Merisa Martinez, for their warm and generous hospitality from pre-dawn darkness to the wee hours. This is a lady who delivers, and I’m looking forward to her own perspective on the Borås camp, hoping it will include her brilliant video coverage. Hejdå!


TCP

The Text Creation Partnership (TCP) creates standardized, accurate XML/SGML-encoded electronic text editions of early printed books. This work, and the resulting text files, are jointly funded and owned by more than 150 libraries worldwide. All of the TCP’s work will be released into the public domain for anyone to use.

On January 1, 2015 the TCP arrived at a major milestone: all restrictions were lifted from EEBO-TCP Phase I, which consists of the first 25,000 texts transcribed and encoded by the TCP. These texts are now freely available to anyone wishing to use them, and there are no longer any restrictions on sharing these files, which are now licensed under the Creative Commons Public Domain Dedication (CC0 1.0 Universal).

To make this tranche of texts available not only in law and theory but also in practice, the team at the University of Oxford had to provide the means for accessing the HTML, ePUB and TEI P5 XML versions via the Oxford Text Archive.

[Screenshot: the TCP catalogue page]

TCP is full of all sorts of advice, so why go to the self-help section of the bookstore when there’s this huge collective wisdom to enjoy?

Sebastian Rahtz and James Cummings of Oxford were mainly responsible for the launch, while my part in these efforts was to create the script that extracts the catalogue data from a PostgreSQL relational database and presents it to the world as the searchable table that can now be enjoyed on the TCP catalogue page. We wanted the complete catalogue of TCP texts, both freely available and restricted, to be displayed in a manner that enables simple and quick searching, filtering and browsing of the resources.

The DataTables jQuery plugin is perfect for such an application, but I had to make sure that performance on a set of more than 60,000 records would be satisfactory. Luckily, DataTables comes with a server-side processing option, which means that all the paging, searching and ordering actions that DataTables typically performs in the browser are handed off to a server, where an SQL engine (or similar) can perform them on the large data set much more efficiently. The DataTables website hosts example implementations with PHP and various database engines. The only obstacle was that the TCP catalogue records were hosted in a Postgres database, and the PHP/PostgreSQL script was definitely not in the mood to work on our setup. Eventually I ended up porting one of the PHP/MySQL examples to Postgres.

This involved changing all the MySQL-specific dialect into something that PostgreSQL can grok.

Plugging it back into the HTML catalogue page requires just this bit of JavaScript:

$(document).ready(function() {
   // initialise DataTables on the catalogue table and delegate paging,
   // searching and ordering to the server-side script
   $('#example').dataTable( {
       "processing": true,   // show a 'processing' indicator while waiting
       "serverSide": true,
       "ajax": "scripts/server_processing.php"
   } );
} );

even though we finally ended up with something a bit more elaborate to allow for filtering on individual columns and some automatically generated content, including links to the HTML version and the XML sources.


Excerpt from the source summary

Each text has its own repository on GitHub, comprising the TEI P5 XML source plus a Markdown readme file gathering some information extracted from the TEI source. The scripts that generate the latter are again my doing and can be found in the MDown subdirectory of a special TCPTools repository: https://github.com/textcreationpartnership/TCPTools. Source files of TCP texts can now be forked from GitHub to do with as one pleases. If one should want more of them, there’s yet another interesting repository, https://github.com/textcreationpartnership/Texts, which lists all TCP repositories in CSV and JSON formats and provides scripts to clone everything at once. It will be interesting to see what people do with all this bounty!

Cheers.


Seeking the SIMPLicity…

The Text Encoding Initiative (TEI) has developed over 20 years into a key technology in text-centric humanities disciplines. It has been able to achieve its range of use by adopting a descriptive rather than prescriptive approach and by eschewing any attempt to dictate how digital texts should be rendered. However, this flexibility has come at the cost of rather limited interoperability and the virtual absence of tools that can publish TEI documents out of the box in a sensible way. While TEI’s power, scope and flexibility are essential for many research projects, there is a distinct set of more conventional uses, especially in the area of digitized ‘European’-style books, that would benefit from a prescriptive recipe for digital text that comes with a ‘cradle to grave’ processing model associating the schema with explicit and standardized options for displaying texts. These are the premises of the Mellon-funded TEI Simple project. TEI Simple both restricts the TEI tagset to a limited subset of elements and aims to provide default prescriptions for processing TEI Simple documents. It will also provide a way to customize and extend these default prescriptions, and an implementation of a processor that will generate transformations from users’ customizations.

TEI Simple started officially in September, when the project PIs (Sebastian Rahtz, Brian Pytlik-Zillig and Martin Mueller) gathered in Oxford, bringing in me and James Cummings for a week of intensive work, starting with an analysis of existing large corpora, mostly consisting of ‘books by dead white men’, like the Oxford Text Archive or EEBO-TCP. After some vivid discussions and cross-searching of the reference corpora for evidence of actual usage of TEI elements, we were able to cut the list down to about a hundred elements that can still carry all the information present in the original TEI P5 encoding. As we discussed, I was writing the migration tool* that translates elements not available in the Simple schema into their Simple equivalents wherever possible.
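
At its core the migration tool is an identity transform with a set of element-by-element rewrites. The sketch below shows the general pattern only; the particular substitution used here is illustrative and not necessarily one of the mappings the project actually adopted.

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tei="http://www.tei-c.org/ns/1.0"
    xmlns="http://www.tei-c.org/ns/1.0">
  <!-- identity transform: copy everything unchanged by default -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- illustrative rewrite: replace an element missing from the restricted schema
       with a permitted near-equivalent, preserving the distinction in @type -->
  <xsl:template match="tei:orgName">
    <name type="org"><xsl:apply-templates select="@*|node()"/></name>
  </xsl:template>
</xsl:stylesheet>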

The next step was to decide how to tackle the other TEI Simple goal: the processing model. In other words, we needed to supply directions for how TEI Simple documents are to be processed into a range of output formats like HTML, PDF and so on. An additional goal was not to confine users to our ideas about the required outputs, but to allow them to override the default Simple rendering to achieve different results.


As the question ‘what is it going to look like?’ always weighs heavily on the minds of editors, this is probably the greatest challenge of the TEI Simple project. Specifying how to deal with all TEI elements (in all the specific contexts in which they might occur in a collection of documents) can be a nightmare even in one’s native language; doing it in computer-speak is definitely not everyone’s cup of tea. Yet editors have to make these decisions at some point and either write their own programs or communicate them clearly to their tech-savvy collaborators. Both options require the editors to state explicitly what is expected to happen. Assuming that editors already know and understand TEI/XML, it is a relatively small leap of faith to hope that they both can and will add small bits of XML to their schema that describe, in a formal way, the rules for the intended processing. Obviously ‘relatively small bits of XML’ cannot be expected to carry the same power of expression as a full-fledged programming language, yet I hope that at least for some this will be powerful enough that the benefits justify the trade-offs.
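
A declaration of this kind attached to the schema might look roughly like the sketch below. It is loosely modelled on the processing model drafts the project was discussing at the time, so the element and attribute names should be read as indicative rather than final: the rule says that a hi element asking for italics is rendered inline in an italic style, with plain inline rendering as the fallback.

<elementSpec xmlns="http://www.tei-c.org/ns/1.0" ident="hi" mode="change">
  <!-- if the element asks for italics, render it inline in an italic style -->
  <model predicate="@rend='italic'" behaviour="inline">
    <outputRendition>font-style: italic;</outputRendition>
  </model>
  <!-- otherwise fall back to plain inline rendering -->
  <model behaviour="inline"/>
</elementSpec>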



Sneak peek at the survey results

My publication infrastructure survey is now closed and I expect to report on the data collected just in time to make it a Christmas gift for the community. The raw anonymized data will be available for download as well. Many thanks to all who took the time to respond.

For now, let’s just have a sneak peek at TEI usage. It seems that the majority of the surveyed projects (and hopefully this reflects the state of things at least in Europe and North America) do use the TEI: 89.1%, or 49 out of the 55 projects that answered. Of those, 78.8% believe they are fully conformant with the TEI Guidelines.


When it comes to the beauty contest between the TEI’s specialized modules, there is no clear winner, though modelling references to names, dates, people and places is perhaps the most common across the projects, coming in at 73.5%; manuscript description, the representation of primary sources and the critical apparatus also score quite high, followed by verse, performance texts and language corpora. As always, non-standard characters are both an interest and a headache, so the corresponding module is used in roughly a third of the projects.

More to come, so if you can’t wait to know what the most popular editor is, stay tuned!


Breath of pure oXygen

One of my DiXiT objectives is improving the oXygen-TEI framework. oXygen is a very popular XML editor that offers built-in support for TEI, and the plan is to make this support even better. I will be staying with the oXygen team over the summer, but I thought some reconnaissance beforehand would not go amiss.

Early November weather in the Romanian city of Craiova (headquarters of SyncRO, creators of the most popular editor in the TEI community) is quite chilly, but the warm welcome I got from George, Alex, Octavian and the rest of the oXygen crew made it easy not to notice the cold.

Romanescu Park. Not Dracula’s Castle, but still very picturesque.

The idea for this week was to turn me into an oXygen power-user, with the plan of developing enhancements for the oXygen TEI framework. I went through the process of customizing various aspects of oXygen’s behaviour – especially defining new Actions, available in Author mode, to perform specific tasks.

The first thing I tackled was the incorporation of Dr Marjorie Burghart’s TEI Critical Edition Toolbox. It is based on TEI Boilerplate but delivers custom stylesheets and additional JavaScript operations that allow users to check the consistency of their encoding of textual variation with the parallel segmentation method. The Toolbox, available from http://ciham-digital.huma-num.fr/teitoolbox/, requires the user to upload a file, which is then converted and presented as an HTML page. What I wanted to do was to pack it into the TEI oXygen framework, so that all the user needs to do is hit a button while editing the file and get the results immediately – no need to upload the file after every change.

The necessary steps were to isolate the relevant bits from the PHP-based TEI Critical Edition Toolbox: the XSLTs used to convert the original TEI source to feed into the Boilerplate, the CSS files and the JS libraries. I then had to add an ANT transformation scenario in oXygen and enable it in the TEI framework to actually perform the operation.
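
The ANT build file behind that scenario boils down to a single XSLT step. A simplified sketch is shown below; the property names and file paths are placeholders rather than the actual Toolbox files, and in practice oXygen supplies the values through its editor variables.

<!-- simplified sketch of the ANT build behind the transformation scenario -->
<project name="critical-edition-toolbox" default="render">
  <target name="render">
    <!-- run the Toolbox stylesheet over the currently edited TEI file -->
    <xslt in="${input.file}" out="${output.file}" style="${toolbox.xsl}"/>
  </target>
</project>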

The next step was to create a custom button to trigger the transformation. So far oXygen allows this in Author mode only. Every custom button or menu option can invoke one of the built-in or custom-made oXygen operations. The operation that can invoke transformations is called ro.sync.ecss.extensions.commons.operations.ExecuteTransformationScenariosOperation. This is one of the standard oXygen operations and it does exactly what its name says – it executes a transformation scenario. As a parameter it takes the name of a transformation scenario, and I used the ANT-based scenario I had added to the framework earlier. With an oXygen action ready, it is then just a matter of adding a button that triggers it.

A detailed description of the above process, which can be repeated to create other custom buttons, can be found on GitHub. Once done, it’s not that scary! Cheers!



TEI Workshop: Coming soon to the University near you…

This is a very busy autumn for me and the main theme seems to be teaching and learning. Some say (and I very much agree) that the former is the best way to do the latter, so in precisely this spirit I taught three sessions, each comprising a talk followed by a practical exercise, during the second DiXiT Camp in Graz:

  1. Transcription and Editorial Interventions – exercise
  2. An introduction to Critical Apparatus in the TEI – exercise
  3. Encoding names and named entities – exercise

It was a real pleasure to see the Camp students work so hard to grasp the principles of XML, XPath, the quirks of the oXygen editor and the intricacies of TEI encoding – and an even greater one to see all this immediately applied to the participants’ own projects, constantly scrutinized, questioned and evaluated for its feasibility for research. I really believe the Camp served to lower both the real technical barriers that prevent scholars from adopting XML encoding, and the TEI Guidelines in particular, and the perceived, mental barrier that can sometimes be even harder to cross. Hopefully such outreach events provide a solid base for participants to build upon.

In this spirit I am organizing a three-day workshop in Warsaw, Poland (November 10–13, 2014), hosted jointly by the Faculty of Artes Liberales of the University of Warsaw and the Centre for Digital Humanities of the Polish Academy of Sciences as a pre-conference event before Respublica Litteraria in Action 3: New Sources – New Paths of Research. The workshop, led by me and James Cummings, aims to give a “foot in the door” of the TEI world, covering the territory from an introduction to XML and XPath to the actual publishing of TEI-encoded documents on the Web. Our goal is to keep it simple while really following the cradle-to-grave path: from the first <TEI> tag to a functioning website.

The workshop is free to attend and there are still a few places available; the programme and further information can be found here.

Main gate of the University of Warsaw


Pottery class 1: Loading the kiln

This is the first installment of what is meant to be a case-study, learn-by-example, step-by-step tutorial on the Kiln publication framework, developed and maintained by a team at the Department of Digital Humanities (DDH), King’s College London. An introductory post on this subject is available here.

For the sake of familiarizing myself and the audience of this post (hello, Mum!) with the Kiln framework, let’s embark on the quest of publishing a body of 16th-century letters from the correspondence of Ioannes Dantiscus – taken from a project that I have been working on for the past few years at the University of Warsaw.



Publication infrastructure survey

The requirements study for a publication architecture targeting multiple media is one of my research priorities for the DiXiT Network.

My task in this regard is to create an index of tools for all stages of the production of digital editions, to survey the community for existing tools and publication frameworks, to gather user requirements and eventually to develop a model of reusable components for a publication infrastructure.

To this end I have created a survey that will hopefully help to assess the software and technologies used for the creation and publishing of digital scholarly editions. Here it is!

We are inviting scholars, young researchers, teachers and students involved in any part of the process of creating and publishing editions to participate in this study. If you have doubts about whether your project is an edition or an archive, please complete the survey anyway. If you feel you lack the technical expertise to answer all the questions, do it anyway as best you can and consider asking someone else to complete the survey as well.

We are primarily interested in the tools and workflows associated with the processes of creating and publishing digital scholarly resources. We are therefore especially interested in descriptions of the bespoke tools and pipelines employed in your project, so we would appreciate it if you answered the open questions as fully as possible. The results of the study will be used to help define requirements and develop tools for digital scholarly editions in particular and digital humanities in general.

Your responses to the survey are strictly anonymous and your participation is entirely voluntary.
It should take approximately 20 minutes to complete.

If you have any questions or require more information about this study, please contact me using the following details:

Magdalena Turska – University of Oxford
Researcher for Digital Scholarly Editions
magdalena.turska@it.ox.ac.uk

This study is funded by the European Commission through DiXiT Marie Curie Actions research programme.



What prevents people from firing their own Kiln?

“Kiln is an open source multi-platform framework that integrates various software components (Apache Cocoon, Solr and Sesame) for creating websites whose source content is primarily in XML. Kiln is developed and maintained by a team at the Department of Digital Humanities (DDH), King’s College London. Over the past years and versions, Kiln has been used to generate more than 50 websites which have very different source materials and functionality.” – this short description from its authors gives all the important information about what Kiln is and what it can do.

Kiln seems a very robust piece of software, beautifully designed to meet the need of publishing a corpus of XML files, especially TEI ones. It has been actively developed through numerous versions (counting its predecessor xMod) and is in constant use at King’s College. Still, despite the general lack of publishing tools – a gap that has only recently begun to close – it has not seen much use outside the Digital Humanities Department at King’s. Does it lack functionality, appeal or advertising, or is it simply too scary for the general breed of textual editors?

Upon a short investigation we find that Kiln is indeed easily obtainable from GitHub. It is (unfortunately) essential that the prospective user reads the concise documentation available, or at least the short tutorial, to stand any chance of a successful installation. Conciseness may be seen as a virtue, as it doesn’t take long to read, but it is also quite off-putting, at least for less technical users.

Installation is pretty straightforward once the system requirements for Kiln are met (that is, Java 1.7 is running on your system). What may cause a bit of a headache is the fact that Kiln runs by default on the rather exotic port number 9999, which may lead to it being blocked by the local network setup. Changing this is again pretty simple and well described in the tutorial, but it requires the user to manually edit a somewhat obscure configuration file.

After that, it is actually quite impressive that all that is needed to run a ‘vanilla’ Kiln service is to copy your TEI files (plus the corresponding images) into the prescribed locations in the Kiln filesystem. From there it is just a click of the indexing button and the service runs like magic.

Kiln basic screen

Admittedly, out of the box it is a rather dull kind of magic, which might even be called one-size-doesn’t-actually-fit-anyone by someone less awed by Kiln’s potential than I am. What we are presented with is a list of the uploaded files that we can browse and search textually. Kiln also offers a couple of facets, like document title, author and so on, everything grabbed from the documents’ teiHeader.

From then on the user is basically on her own if she wants to bend Kiln to her will. This process usually involves a fair amount of head-banging and asking around for help. First contact with the Cocoon pipelines, Solr query system and XSLTs that together form the Kiln framework can be not only scary but, I believe, in fact prohibitive for the non-developer.
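
To give an idea of what that first contact looks like, a single URL in a Cocoon-based setup is typically handled by a sitemap rule along the lines of the sketch below. This is a generic, hand-written example with made-up paths, not an excerpt from Kiln’s own sitemaps, which are considerably more involved.

<map:match pattern="letters/*.html" xmlns:map="http://apache.org/cocoon/sitemap/1.0">
  <!-- read the TEI source corresponding to the requested URL -->
  <map:generate src="content/xml/tei/{1}.xml"/>
  <!-- transform it with a project-specific stylesheet -->
  <map:transform src="stylesheets/tei-to-html.xsl"/>
  <!-- serialise the result as HTML -->
  <map:serialize type="html"/>
</map:match>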

Yet the power and merit of Kiln are, in my opinion, indisputable.

What can be done, then, to make it easier to start on the Kiln adventure? The general idea is to fit Kiln with a GUI that leads the user through the steps of creating customizations for several types of publications (e.g. a diplomatic edition of a document or set of documents, or a print-like reading edition), where the user would benefit from being guided through the process of uploading files, choosing some basic aspects of the website design, choosing or uploading the desired transformation stylesheets and configuring the necessary search facets.

This would not free the user with higher or more specific expectations from plunging into the exhausting-yet-rewarding journey with Cocoon/Solr/XSLT, but it should be sufficient for a lot of quite standard projects, especially at the prototyping stage. And hopefully, with a growing user base, the open pool of domain-specific but still reusable customizations would grow as well.

The other thing is to start an online knowledge base to answer questions for those who don’t have a Kiln expert at hand to pester directly. To this end, in my next posts I will try to describe the process of customizing Kiln for the publication of an edition of 16th-century letters (as seen in the teaser screenshot above).
