On a solid ground. Building software for a 120-year-old research project applying modern engineering practices

Session 6A

Authors: Christian Sonder (University of St. Gallen), Bastian Politycki (University of St. Gallen)

Published: September 13, 2024

Modified: November 15, 2024

DOI: 10.5281/zenodo.14851557

Abstract

There is no doubt that the increasing use of digital methods and tools in the humanities opens up an almost infinite number of new possibilities. At the same time, it is becoming more and more clear that this creates new problems for the humanities. Many software solutions are often ‘quick hacks’—changes to them are time-consuming, lead to errors, and the sustainability of the solution itself is overall questionable. Digital editing projects—which are mostly based on TEI-XML—face this challenge from the beginning: The ‘TEI standard’ is rather a loose collection of recommendations, which necessitates the development of a customized schema (a TEI subset) for project-specific data, so that the edition or encoding guidelines can be enforced and their compliance checked. These machine-readable rules must be supplemented by human-readable guidelines which document the fundamental philological decisions and can be used as a manual for the editors.

The development of such a schema—and the associated workflows—becomes particularly relevant in the context of long-term projects, such as the Collection of Swiss Legal Sources (SLS). Changes to the schema require a continuous conversion of existing datasets. The contribution addresses how practices of modern software development, such as versioning or test-driven development (TDD), can be profitably used for humanities projects. It presents the entire workflow beginning with the creation of a modularized schema for a complex text corpus, which includes texts in German, French, Latin, Italian and Romansh from the 6th to the 18th century, up to the automated verification and publication of the documentation/schema.

Keywords

software engineering, TEI-XML, digital edition, project organization

For this paper, slides are available on Zenodo (PDF).

Introduction

General Problem Description

Nowadays, software is a central component of every research project. Since the advent of personal computers, digital tools have been used for a wide range of tasks, from simple text processing to machine-assisted recognition in all sorts of historical documents. Research projects, however, in particular those that produce digital scholarly editions, rarely rely only on existing tools; they often create new ones. From the development or customization of their own data formats to the implementation of often complex web applications for presentation, it is not uncommon for the tools developed in this context to be ‘quick hacks’ rather than well-designed software projects.1 In many cases, this is not a problem at all, because the duration of research projects in the humanities is often rather short (e.g. between three and six years). Software developed in such a short amount of time must first and foremost achieve the project’s goals; adaptation to other subjects or subsequent use is therefore usually not intended. However, this becomes a problem if the corresponding research project is scheduled for a longer term, or if it is part of a series of projects that depend on each other. In this case, quick solutions often become serious issues and are not really FAIR for either internal or external subsequent use. Not least for this reason, this phenomenon is discussed in the digital humanities community under the heading of research software engineering.2 This paper describes practical experiences from the perspective of a long-term editorial project and explores opportunities for sustainable development practices by utilizing modern methods that have long been established outside the academic world.

The Swiss Law Sources

The Collection of Swiss Law Sources (SLS) is a 120-year-old research project that publishes Swiss legal texts in German, French, Latin, Italian and Romansh from the 6th to the 18th century. The edited texts are published in a printed reference publication and in digital form.3 At the time of writing, ten edition projects are being carried out by 23 researchers in three languages throughout Switzerland: in French, volumes are to be published in the cantons of Geneva (1 vol.), Vaud (2 vols.), Neuchâtel (1 vol.) and Fribourg (1 vol.); in German, Valais (1 vol.), Lucerne (2 vols.), Schaffhausen (2 vols.), St. Gallen (1 vol.) and Graubünden (1 vol.); and in Italian, Ticino (1 vol.). Further edition projects are planned or applied for, while the overall project is scheduled to run for another ~50 years. The entire technical infrastructure is provided and developed by the SLS core team, which consists of the project manager and two members of staff specializing in DH (the authors of this paper). This team is also responsible for coordinating the projects, processing the data, typesetting the printed volumes and digitally publishing the edited texts.

In this context, developing new software and improving existing software is not only a technical challenge, but also an organizational one. Existing applications must run continuously to provide the researchers with the tools they need for their daily work (and to grant the users of the digital edition access to all information), while new requirements must be met on an ongoing basis, as each project deals with unique documents.

Sidenote on the evolution of the technical infrastructure of the SLS

About 15 years ago, the Swiss Law Foundation, which stands behind the SLS, decided to retro-digitize the more than one hundred volumes published up to that point. Since then, the results of these initial digitization efforts have been presented in a web application which, as a ‘browsing machine’, makes the results of many years of editing work, previously locked between two book covers, available to a broad public. This also marked the start of the project’s transition to a predominantly digital editing and working method. In these 15 years, numerous (web) applications have been created: these include databases that collate information on historical entities (people, organizations, places and terms), a digital application that presents the transcriptions, now encoded in TEI-XML, in both a web and a print view, and a number of other tools used for the various tasks at hand. The ongoing nature of the project was one of the reasons why many of these applications were ‘ad hoc solutions’ or proofs of concept that were designed neither for long-term operation nor for integration—i.e. collaboration—with other tools. As a result, a rather diverse ecosystem of different technologies has developed, both on the data side and on the processing and presentation side.4

Data as a solid ground: developing an XML Schema for a scholarly edition

The foundation of a digital scholarly edition is undoubtedly the transcribed and annotated data, which is usually encoded in an XML format.5 All our newly edited texts are encoded in XML, and, as time permits, all previously edited texts will be converted to this format. Therefore, all further application layers, such as the web presentation or the printed output, have to be based on these XML files according to the single-source principle. Over the last two decades, the guidelines of the Text Encoding Initiative (TEI)6 have established themselves as the de facto standard for this markup work. These guidelines are primarily a broad collection of suggestions rather than a clear set of rules, necessitating a precise formulation of philological concepts into a logical data model, specifically the creation of a TEI subset as an XML schema. The TEI itself offers a format called ODD (One Document Does it all) for creating an XML schema in a literate-programming fashion7, which itself is TEI-XML.8

A schema’s main use case is validation, i.e. checking whether the XML data corresponds to certain structures and constraints. As a TEI subset it defines which components and elements provided by the TEI guidelines are used and how they are used, making it an important part of the editing concept itself. The validation against a schema ensures the consistency of the resulting data sets in an ongoing project and is necessary to continuously support and check the researchers during the transcription and annotation process. Therefore we regard an XML schema as a key software component, although the development of a schema is typically not understood as software development in the true sense of the word. This is probably one of the reasons why most of the modern engineering practices we want to demonstrate are not yet applied in this field (at least as far as we know).
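
In practice, this validation step can be scripted in a few lines and run over all transcription files, for example in a build pipeline. The following is only a minimal sketch (the file names and the use of the lxml library are assumptions; the paper does not prescribe a particular toolchain): it checks every edition file against the RELAX NG schema compiled from the ODD and prints any violations.

from pathlib import Path
from lxml import etree

# Load the RELAX NG schema compiled from the ODD (file name is hypothetical).
schema = etree.RelaxNG(etree.parse("sls-schema.rng"))

# Validate every transcription file and report schema violations.
for xml_file in sorted(Path("transcriptions").glob("*.xml")):
    document = etree.parse(str(xml_file))
    if not schema.validate(document):
        for error in schema.error_log:
            print(f"{xml_file.name}:{error.line}: {error.message}")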

Four modern engineering practices and their application

In order to deal with a complex situation such as the one described above, the authors of this paper propose to make use of the following software engineering practices9:

  • modular software development
  • test-driven development
  • semiautomatic documentation
  • semantic versioning

The development of the XML schema used in our project serves as an example of how these practices can be utilized for digital humanities projects. In the context of the ongoing reworking of the SLS application landscape, we developed a test-based and modular workflow (see Figure 1) for the creation of a new schema, based on ODD files as input.10

Figure 1: Test and build pipeline of a modern schema development workflow

Modular software development

If you download a sample ODD file from the TEI homepage11 which contains all elements and components, such a file may amount to some 70,000 lines of code. Our ODD file—which is just a limited subset—still contains well over 20,000 lines of code. The first step in handling such a large and complex object is to split it into manageable pieces. For each TEI element we need, we created a separate file containing the element’s specification. Common parts such as attribute classes, data types or custom definitions that are used by multiple elements each went into their own file.

A rather simple specification for the element <pc/> may look like this:

<elementSpec
  xmlns="http://www.tei-c.org/ns/1.0" xmlns:rng="http://relaxng.org/ns/structure/1.0" ident="pc" module="analysis" mode="change">
  <desc xml:lang="en" versionDate="2024-04-30">
    Contains a punctuation mark, which is processed specially
    considering linguistic regulations (for example, by adding a space).
  </desc>
  <classes mode="replace"/>
  <content>
    <rng:data type="string">
      <rng:param name="pattern">[;:?!]</rng:param>
    </rng:data>
  </content>
  <attList>
    <attDef ident="force" mode="delete"/>
    <attDef ident="unit" mode="delete"/>
    <attDef ident="pre" mode="delete"/>
  </attList>
</elementSpec>

This principle of atomicity enforces a clear structure, improves maintainability, and makes the files much easier to grasp and modify. It also reduces redundancy, because shared parts were refactored so that they are defined in one place and reused throughout the schema. The downside, of course, is the need to compile all those files into a single ODD in a separate step. But this is a small price to pay for the benefits.
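
The compilation step itself can stay small. A minimal sketch of such a build script, assuming one specification file per element and a skeleton ODD with an empty <schemaSpec> (file names and repository layout are illustrative, not the actual SLS setup), might look like this:

from pathlib import Path
from lxml import etree

TEI_NS = "http://www.tei-c.org/ns/1.0"

# Parse the skeleton ODD that contains the project metadata and an empty <schemaSpec>.
odd = etree.parse("base.odd")
schema_spec = odd.getroot().find(f".//{{{TEI_NS}}}schemaSpec")

# Append every element, class and datatype specification kept in its own file.
for spec_file in sorted(Path("specs").glob("*.xml")):
    schema_spec.append(etree.parse(str(spec_file)).getroot())

# Write the compiled ODD, which can then be turned into RELAX NG by the TEI stylesheets.
odd.write("sls.odd", encoding="utf-8", xml_declaration=True, pretty_print=True)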

Test-driven development (TDD)12

The second step was to define a set of tests for all element, attribute and datatype definitions.13 Each test set describes the expected behavior of a piece of the schema and consists of three components: a title for the test set, the markup being tested, and the expected result, which can either be valid (True) or invalid (False). Each test set is executed and evaluated by a Python function which invokes an XML parser.

The following tests describe the contents and attributes of the element <pc/>.

@pytest.mark.parametrize(
    "name, markup, result",
    [
        (
            "valid-pc",
            "<pc>;</pc>",
            True,
        ),
        (
            "invalid-pc-with-wrong-char",
            "<pc>-</pc>",
            False,
        ),
        (
            "invalid-pc-with-attribute",
            "<pc unit='c'>;</pc>",
            False,
        ),
    ],
)
def test_pc(
    test_element_with_rng: RNG_test_function,
    name: str,
    markup: str,
    result: bool,
):
    test_element_with_rng("pc", name, markup, result, False)

If each specification is coupled with one or more tests, it is ensured that individual changes to the schema do not compromise the overall functionality, and possible side effects can be detected early on. Such test cases are abstract enough to enable representative testing of the software components to be developed, but at the same time concrete enough to be readable for employees specializing in philology; they can thus be used as a means of communication between the digital humanities team and the philological or historical team. We can simply ask: should this piece of XML be True or False?
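
The fixture test_element_with_rng used above is not reproduced in this paper. One possible, purely illustrative implementation is sketched below; the schema file name, the wrapping of the snippet in a minimal TEI document and the meaning of the final flag are assumptions.

import pytest
from lxml import etree

# Hypothetical file name for the RELAX NG schema compiled from the ODD.
RNG_SCHEMA = etree.RelaxNG(etree.parse("sls-schema.rng"))
TEI_TEMPLATE = (
    "<TEI xmlns='http://www.tei-c.org/ns/1.0'>"
    "<teiHeader/><text><body><p>{markup}</p></body></text></TEI>"
)

@pytest.fixture
def test_element_with_rng():
    def _check(element: str, name: str, markup: str, expected: bool, standalone: bool):
        # Wrap the snippet in a minimal TEI document unless it is tested standalone.
        xml = markup if standalone else TEI_TEMPLATE.format(markup=markup)
        is_valid = RNG_SCHEMA.validate(etree.fromstring(xml))
        assert is_valid == expected, f"{name}: expected {expected}, got {is_valid}"
    return _check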

Semiautomatic documentation

The schema has to be documented both for those who use it to encode the files and for those who use the files for any other purpose. We decided to generate as much of this documentation as possible automatically, using Markdown as the language and a site generator called MkDocs14. Our documentation website15 is constructed as follows: a self-written Python program reads all parts of the schema, converts them to Markdown files and hands those to the MkDocs processor, which returns a static HTML website that can easily be accessed and searched.
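
Stripped down to its core, such a generator can be sketched as follows (the file layout and the Markdown structure are illustrative assumptions; the actual program is more elaborate):

from pathlib import Path
from lxml import etree

NS = {"tei": "http://www.tei-c.org/ns/1.0"}
out_dir = Path("docs/elements")
out_dir.mkdir(parents=True, exist_ok=True)

# Turn every element specification into one Markdown page for MkDocs.
for spec_file in sorted(Path("specs").glob("*.xml")):
    spec = etree.parse(str(spec_file)).getroot()
    ident = spec.get("ident")
    desc = spec.findtext("tei:desc", default="", namespaces=NS).strip()
    attributes = [a.get("ident") for a in spec.findall(".//tei:attDef", namespaces=NS)]
    lines = [f"# `<{ident}>`", "", desc, "", "## Attributes", ""]
    lines += [f"- @{a}" for a in attributes] if attributes else ["(none)"]
    (out_dir / f"{ident}.md").write_text("\n".join(lines) + "\n", encoding="utf-8")

# 'mkdocs build' then renders these pages into a searchable static website.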

Semantic versioning (SemVer and git)

It is obvious that each change to the schema not only affects the XML files to be validated16, but also changes the documentation. For this reason, every release of the schema is versioned with git and is reflected in a new corresponding build of the documentation site. All versions of the schema are named in accordance with the principles of semantic versioning17, so a user of any XML file that has to be validated against our schema can see which versions are available and is able to read the specific documentation for any schema version.
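
The decision about which part of the version number to increase can be stated as a simple rule. The following sketch illustrates this; the change categories are our own illustrative naming, not a fixed vocabulary of the project.

# Map a schema change to the required semantic version bump.
def bump(version: str, change: str) -> str:
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "breaking":   # e.g. an element or attribute is removed; existing files may become invalid
        return f"{major + 1}.0.0"
    if change == "feature":    # e.g. a new optional element is allowed; existing files stay valid
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # e.g. a description or an example is corrected

assert bump("2.3.1", "breaking") == "3.0.0"
assert bump("2.3.1", "feature") == "2.4.0"
assert bump("2.3.1", "fix") == "2.3.2"

In such a scheme, a schema change that forces existing files to be converted (see footnote 16) corresponds to a new major version.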

A brief outlook

Although our journey of refactoring has just begun, we are already seeing the benefits of the principles we have applied. If the ground you are standing on is solid, you can build on it. Currently, we are working on a multilingual translation of our schema from German as the main language into English, French and Italian, and we hope to enrich the schema with extensive examples from actual XML files. Furthermore, we are rewriting the existing rendering mechanisms (e.g. TEI to HTML), applying the same rules as described above. All in all, the work done and the cost we had to pay is already paying off.

References

Alsaqqa, Samar, Samer Sawalha, and Heba Abdel-Nabi. 2020. “Agile Software Development: Methodologies and Trends.” International Journal of Interactive Mobile Technologies (iJIM) 14 (11): 246–70. https://doi.org/10.3991/ijim.v14i11.13269.
Burghardt, Manuel, and Claudia Müller-Birn. 2019. “Software Engineering in Den Digital Humanities 2. Workshop Der Fachgruppe Informatik Und Digital Humanities (InfDH).” In 50 Jahre Gesellschaft Für Informatik - Informatik Für Gesellschaft Workshopbeiträge Der 49. Jahrestagung Der Gesellschaft Für Informatik: 23.-26.9.2019, Kassel, Deutschland, 75. Proceedings, volume 295. Bonn Gesellschaft für Informatik e.V. [2019].
Carver, Jeffrey C., Nic Weber, Karthik Ram, Sandra Gesing, and Daniel S. Katz. 2022. “A Survey of the State of the Practice for Research Software in the United States.” PeerJ Computer Science 8 (May): e963. https://doi.org/10.7717/peerj-cs.963.
Christie, Tom. 2024. “MkDocs. Project Documentation with Markdown.” 2024. https://www.mkdocs.org.
Haaf, Susanne, and Christian Thomas. 2016. “Enabling the Encoding of Manuscripts Within the DTABf: Extension and Modularization of the Format.” Journal of the Text Encoding Initiative, December. https://doi.org/10.4000/jtei.1650.
Knuth, Donald Ervin. 1992. Literate Programming. CSLI Lecture Notes, no. 27. Stanford, Calif.: Center for the Study of Language; Information.
Law Sources Foundation of the Swiss Lawyers Society. 2024a. “Collection of Swiss Law Sources Online. Editio.” 2024. https://editio.sls-online.ch.
———. 2024b. “Transkriptionsrichtlinien Und Dokumentation. SSRQ Dokumentation.” 2024. https://schema.ssrq-sds-fds.ch/latest/.
Martin, Robert C., ed. 2009. Clean Code: A Handbook of Agile Software Craftsmanship. Upper Saddle River, NJ: Prentice Hall.
Neuber, Frederike. 2023. “Der Digitale Editionstext. Technologische Schichten, ‚Editorischer Kerntext‘ Und Datenzentrierte Rezeption.” In Der Text Und Seine (Re)produktion, edited by Niklas Fröhlich, Bastian Politycki, Dirk Schäfer, and Annkathrin Sonder, 55:69–84. Beihefte Zu Editio. Berlin/Boston.
Politycki, Bastian, Christian Sonder, and Pascale Sutter. 2024. “TEI-XML Schema Der Sammlung Schweizerischer Rechtsquellen.” https://doi.org/10.5281/zenodo.10625840.
Porter, Dot. 2024. “What Is an Edition Anyway? My Keynote for the Digital Scholarly Editions as Interfaces Conference, University of Graz.” July 25, 2024. http://www.dotporterdigital.org/what-is-an-edition-anyway-my-keynote-for-the-digital-scholarly-editions-as-interfaces-conference-university-of-graz/.
Preston-Werner, Tom. 2023. “Semantic Versioning 2.0.0.” 2023. https://semver.org.
Text Encoding Initiative. 2024a. “Guidelines. TEI: Text Encoding Initiative.” 2024. https://tei-c.org/release/doc/tei-p5-doc/en/html/index.html.
———. 2024b. “Roma. TEI: Text Encoding Initiative.” 2024. https://roma.tei-c.org.
Zundert, Joris van, and Tara Andrews. 2018. “What Are You Trying to Say? The Interface as an Integral Element of Argument.” In Digital Scholarly Editions as Interfaces, 3–33. Norderstedt.

Footnotes

  1. Carver et al. recently demonstrated this with a survey, which shows that many researchers developing software have never received training in software development, and best practices are often ignored. See Carver et al. (2022).↩︎

  2. Manuel Burghardt and Claudia Müller-Birn organised a workshop specifically on this topic at the 50th Annual Conference of the German Informatics Society; see Burghardt and Müller-Birn (2019).↩︎

  3. See Law Sources Foundation of the Swiss Lawyers Society (2024a) for the web presentation.↩︎

  4. The edited texts themselves are available as PDF (the retro-digitized collection), TeX and FileMaker (transition phase) and TEI-XML (current projects). These are processed by scripts and applications in the programming languages Perl, OCaml, Python, JavaScript and XQuery. Relational as well as graph-based and document-orientated databases are used to store the entity data.↩︎

  5. There have been various discussions about what the key value of a digital scholarly edition is. Maybe it is the data (see Porter (2024)), or maybe it is the interface (see Zundert and Andrews (2018)). Recently it has become increasingly clear that it could be both. Therefore, models have been developed which understand scholarly editions as a stack consisting of the data, the processing applied to it and the resulting presentation (see Neuber (2023), p. 71).↩︎

  6. For details see Text Encoding Initiative (2024a).↩︎

  7. The term literate programming usually refers to a programming paradigm introduced by Donald E. Knuth. It describes an approach in which a program is written first and foremost for human readers. See Knuth (1992).↩︎

  8. The ODD format is used in various contexts; e.g. the Deutsches Textarchiv (DTA) uses ODD files as the source for its TEI subset DTABf. See Haaf and Thomas (2016).↩︎

  9. These principles have been described in various books by many authors; one of the most famous is Clean Code by Robert C. Martin (2009).↩︎

  10. The source code of this pipeline as well as the ODD sources are open source and can be found in the corresponding GitHub repository as well as on Zenodo. See Politycki, Sonder, and Sutter (2024).↩︎

  11. The starting point for the creation of ODD files is usually a tool called Roma. See Text Encoding Initiative (2024b).↩︎

  12. The term TDD usually refers to Kent Beck, who reintroduced this idea in the early 2000s. It describes a programming paradigm where tests are written before the actual code. See Alsaqqa, Sawalha, and Abdel-Nabi (2020), p. 255.↩︎

  13. These tests would normally be set up before the concrete description in the ODD module is created, but we started with an already existing schema and decided to add the tests later on.↩︎

  14. See Christie (2024).↩︎

  15. See Law Sources Foundation of the Swiss Lawyers Society (2024b).↩︎

  16. It may sometimes be necessary to convert them with XSLT to be valid against the newer version of the schema.↩︎

  17. See Preston-Werner (2023).↩︎

Reuse

CC BY-SA 4.0

Citation

BibTeX citation:
@misc{sonder2024,
  author = {Sonder, Christian and Politycki, Bastian},
  editor = {Baudry, Jérôme and Burkart, Lucas and Joyeux-Prunel,
    Béatrice and Kurmann, Eliane and Mähr, Moritz and Natale, Enrico and
    Sibille, Christiane and Twente, Moritz},
  title = {On a Solid Ground. {Building} Software for a 120-Year-Old
    Research Project Applying Modern Engineering Practices},
  date = {2024-09-13},
  url = {https://digihistch24.github.io/submissions/427/},
  doi = {10.5281/zenodo.14851557},
  langid = {en},
  abstract = {There is no doubt that the increasing use of digital
    methods and tools in the humanities opens up an almost infinite
    number of new possibilities. At the same time, it is becoming more
    and more clear that this creates new problems for the humanities.
    Many software solutions are often “quick hacks”—changes to them are
    time-consuming, lead to errors, and the sustainability of the
    solution itself is overall questionable. Digital editing
    projects—which are mostly based on TEI-XML—face this challenge from
    the beginning: The “TEI standard” is rather a loose collection of
    recommendations, which necessitates the development of a customized
    schema (a TEI subset) for project-specific data, so that the edition
    or encoding guidelines can be enforced and their compliance checked.
    These machine-readable rules must be supplemented by human-readable
    guidelines which document the fundamental philological decisions and
    can be used as a manual for the editors. The development of such a
    schema—and the associated workflows—becomes particularly relevant in
    the context of long-term projects, such as the Collection of Swiss
    Legal Sources (SLS). Changes to the schema require a continuous
    conversion of existing datasets. The contribution addresses how
    practices of modern software development, such as versioning or
    test-driven development (TDD), can be profitably used for humanities
    projects. It presents the entire workflow beginning with the
    creation of a modularized schema for a complex text corpus, which
    includes texts in German, French, Latin, Italian and Romansh from
    the 6th to the 18th century, up to the automated verification and
    publication of the documentation/schema.}
}
For attribution, please cite this work as:
Sonder, Christian, and Bastian Politycki. 2024. “On a Solid Ground. Building Software for a 120-Year-Old Research Project Applying Modern Engineering Practices.” Edited by Jérôme Baudry, Lucas Burkart, Béatrice Joyeux-Prunel, Eliane Kurmann, Moritz Mähr, Enrico Natale, Christiane Sibille, and Moritz Twente. Digital History Switzerland 2024: Book of Abstracts. https://doi.org/10.5281/zenodo.14851557.