
Products for grant HAA-263850-19

HAA-263850-19
Reading the Invisible Library: Rescuing the Hidden Texts of Herculaneum
William Seales, University of Kentucky Research Foundation

Grant details: https://securegrants.neh.gov/publicquery/main.aspx?f=1&gn=HAA-263850-19

Reading the Herculaneum Papyri: Yesterday, Today, and Tomorrow (Public Lecture or Presentation)
Title: Reading the Herculaneum Papyri: Yesterday, Today, and Tomorrow
Abstract: Professor Seales presents the latest technology for virtually unwrapping and reading the Herculaneum scrolls.
Author: W. Brent Seales
Date: 10/19/2019
Location: The Getty Villa, Malibu, California
Primary URL: https://youtu.be/g-7-Xg75CCI
Primary URL Description: YouTube channel for the Getty Center
Secondary URL: https://drive.google.com/drive/u/1/folders/1OyHOC0VIG2bnTWbj689Dq1fqMW3njUih
Secondary URL Description: Google Drive folder for sharing files; an edited version that contains only Professor Seales's lecture.

Reading the Herculaneum Papyri: Yesterday, Today, and Tomorrow (Public Lecture or Presentation)
Title: Reading the Herculaneum Papyri: Yesterday, Today, and Tomorrow
Abstract: Hundreds of papyrus scrolls carbonized by the eruption of Mt. Vesuvius in AD 79 were recovered from the ancient Roman residence known today as the Villa dei Papiri. Containing mostly Greek philosophical texts, these fragile scrolls and their contents have fascinated scholars since their rediscovery in 1752. Classicists David Blank of UCLA and Richard Janko of the University of Michigan discuss early and current attempts to open the fragile layers and decipher their texts, and computer scientist W. Brent Seales of the University of Kentucky shares how advances in technology and machine learning might allow the still unopened ancient book rolls to be "virtually unwrapped" and read. Reception to follow and galleries will be open until 8:00 p.m. Support provided by the American Friends of Herculaneum Society.
Author: W.B.Seales
Date: 10/19/2019
Location: The Getty Villa
Primary URL: https://www.getty.edu/visit/cal/events/ev_2831.html
Primary URL Description: Getty Villa announcement of event

The Promise of Virtual Unwrapping: Reading the Invisible Library (Public Lecture or Presentation)
Title: The Promise of Virtual Unwrapping: Reading the Invisible Library
Abstract: The UCLA/Getty Conservation Program presents “The man who can read the unreadable,” computer scientist and professor W. Brent Seales, the first speaker in the 50th Anniversary Lecture Series. Currently a Getty Conservation Institute Scholar, Seales and his team have been key to revealing texts on papyri that are too fragile to unroll, such as Homer's “Iliad” and the Dead Sea Scrolls. The recipient of a $2 million grant from The Andrew W. Mellon Foundation, Seales will discuss how technological progress over the past ten years has led to the promise of “virtual unwrapping” for reading the “invisible library” of scrolls found at Herculaneum: papyri that were buried and burned in the eruption of Mount Vesuvius in 79 CE. Reservations requested; RSVP by January 8. For more information call 310-825-4004. Friends of the Cotsen Institute are invited to a private reception with Dr. Seales at 6 pm. To learn more about the Friends, visit their page or contact Michelle Jacobson at mjacobson@ioa.ucla.edu. Seales is Professor and Chairman of the Department of Computer Science at the University of Kentucky. His research applies data science and computer vision to challenges in the digital restoration and visualization of antiquities. In 2012-13, he was a Google Visiting Scientist in Paris, where he continued work on the “virtual unwrapping” of the Herculaneum scrolls. In 2015, Seales and his research team identified the oldest known Hebrew copy of the book of Leviticus (other than the Dead Sea Scrolls), carbon dated to the third century C.E. The reading of the text from within the damaged scroll has been hailed as one of the most significant discoveries in biblical archaeology of the past decade.
Author: W.B.Seales
Date: 01/14/2020
Location: The campus of UCLA in Los Angeles, CA
Primary URL: https://ioa.ucla.edu/content/promise-virtual-unwrapping-reading-invisible-library

Reading the Invisible Library (Public Lecture or Presentation)
Title: Reading the Invisible Library
Abstract: Progress over the past decade in the digitization and analysis of text found in cultural objects (inscriptions, manuscripts, scrolls) has led to new methods for reading the “invisible library”. This talk explains the development of non-invasive methods, showing results from restoration projects on Homeric manuscripts, Herculaneum material, and Dead Sea scrolls. Premised on “virtual unwrapping” as an engine for discovery, the presentation culminates in a new approach – Reference-Amplified Computed Tomography (RACT) – where machine learning and cloud computing become a crucial part of the imaging pipeline. This talk will explore the promise of virtual unwrapping and related applications, and will make the case that RACT may indeed be the pathway for rescuing still-readable text from some of the most stubbornly damaged materials, like the enigmatic Herculaneum scrolls. BIO: W. Brent Seales, Professor and Chairman of the Department of Computer Science at the University of Kentucky, is currently a Getty Conservation Institute Scholar. Seales’ research applies data science and computer vision to challenges in the digital restoration and visualization of antiquities. In 2012-13, he was a Google Visiting Scientist in Paris, where he continued work on the “virtual unwrapping” of the Herculaneum scrolls. In 2015, Seales and his research team identified the oldest known Hebrew copy of the book of Leviticus (other than the Dead Sea Scrolls), carbon dated to the third century C.E. The reading of the text from within the damaged scroll has been hailed as one of the most significant discoveries in biblical archaeology of the past decade. Hosted by Professor Richard Korf.
Author: W.B.Seales
Date: 02/11/2020
Location: The campus of UCLA in Los Angeles, CA
Primary URL: https://www.cs.ucla.edu/upcoming-events/reading-the-invisible-library-w-brent-seales-university-of-kentucky/

The Promise of Virtual Unwrapping: Reading the Lost Library of the Ancients (Public Lecture or Presentation)
Title: The Promise of Virtual Unwrapping: Reading the Lost Library of the Ancients
Abstract: Recent progress.
Author: W. Brent Seales
Date: 07/09/2020
Location: Zoom lecture to 500 participants from Memoria Press and classical schools

Keeping Current: Recovering the Ink of Herculaneum (Public Lecture or Presentation)
Title: Keeping Current: Recovering the Ink of Herculaneum
Abstract: The noninvasive digital restoration of ancient texts written in carbon black ink and hidden inside artifacts has proven elusive, even with advanced imaging techniques like X-ray-based micro-computed tomography (micro-CT). This work identifies a crucial mistaken assumption: that micro-CT data fails to capture any information representing the presence of carbon ink. We demonstrate a new computational approach that captures, enhances, and makes visible the characteristic signature created by carbon ink in micro-CT. This previously “unseen” evidence of carbon inks, which can now successfully be made visible, can lead to the noninvasive digital recovery of the lost texts of Herculaneum.
Author: Stephen Parsons
Date: 03/04/2020
Location: University of Kentucky, Lexington, KY
Primary URL: http://www.cs.uky.edu/~raphael/grad/keepingCurrent.html
Primary URL Description: Archive of talk abstracts and recordings for the seminar series
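
The signal-enhancement idea in the abstract above can be illustrated with a minimal sketch. This is a hypothetical example (the function name and the simple z-score stretch are assumptions, not the published method): the faint density difference carbon ink creates in micro-CT intensities is made easier to see by normalizing and clipping each slice.

```python
import numpy as np

def enhance_ink_signature(ct_slice, k=2.0):
    """Hypothetical contrast stretch for a 2D micro-CT slice:
    z-score the intensities, then clip to +/- k standard deviations
    so a faint carbon-ink signature stands out from the substrate."""
    z = (ct_slice - ct_slice.mean()) / (ct_slice.std() + 1e-9)
    return np.clip(z, -k, k)
```

The clipping step is the key assumption here: it compresses the bright, dominant structures so the weak ink signal is not lost in the display range.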

Using METS to Express Digital Provenance for Complex Digital Objects (Book Section) [show prizes]
Title: Using METS to Express Digital Provenance for Complex Digital Objects
Author: Christy Chapman
Author: C. S. Parker
Author: W. Brent Seales
Author: Stephen Parsons
Editor: Manolis Garoufallou
Editor: María-Antonia Ovalle-Perandones
Abstract: Today’s digital libraries consist of much more than simple 2D images of manuscript pages or paintings. Advanced imaging techniques – 3D modeling, spectral photography, and volumetric x-ray, for example – can be applied to all types of cultural objects and can be combined to create complex digital representations comprising many disparate parts. In addition, emergent technologies like virtual unwrapping and artificial intelligence (AI) make it possible to create “born digital” versions of unseen features, such as text and brush strokes, that are “hidden” by damage and therefore lack verifiable analog counterparts. Thus, the need for transparent metadata that describes and depicts the set of algorithmic steps and file combinations used to create such complicated digital representations is crucial. At EduceLab, we create various types of complex digital objects, from virtually unwrapped manuscripts that rely on machine learning tools to create born-digital versions of unseen text, to 3D models that consist of 2D photos, multi- and hyperspectral images, drawings, and 3D meshes. In exploring ways to document the digital provenance chain for these complicated digital representations and then support the dissemination of the metadata in a clear, concise, and organized way, we settled on the use of the Metadata Encoding and Transmission Standard (METS). This paper outlines our design to exploit the flexibility and comprehensiveness of METS, particularly its behaviorSec, to meet emerging digital provenance metadata needs.
Year: 2020
Publisher: Springer Nature
Book Title: Metadata and Semantic Research: 14th Research Conference, MTSR 2020, Madrid, Spain, December 2–4, 2020, Revised Selected Papers
ISBN: 978-3-030-7190
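
As a sketch of the design the abstract describes, the snippet below builds a minimal METS behaviorSec recording one processing step in a digital provenance chain. The element and attribute names come from the METS schema; the helper function and the specific step are illustrative assumptions, not the EduceLab implementation.

```python
import xml.etree.ElementTree as ET

METS_NS = "http://www.loc.gov/METS/"
XLINK_NS = "http://www.w3.org/1999/xlink"
ET.register_namespace("mets", METS_NS)
ET.register_namespace("xlink", XLINK_NS)

def behavior_sec(steps):
    """Build a minimal METS behaviorSec listing the processing steps
    (label, tool URL) applied to a digital object. Element names follow
    the METS schema; the step list itself is illustrative."""
    sec = ET.Element(f"{{{METS_NS}}}behaviorSec")
    for i, (label, href) in enumerate(steps, 1):
        b = ET.SubElement(sec, f"{{{METS_NS}}}behavior",
                          {"ID": f"step{i}", "LABEL": label})
        # mechanism points at the executable that performed the step
        ET.SubElement(b, f"{{{METS_NS}}}mechanism",
                      {"LABEL": label, f"{{{XLINK_NS}}}href": href})
    return sec

xml = ET.tostring(behavior_sec(
    [("segmentation", "https://gitlab.com/educelab/volume-cartographer")]),
    encoding="unicode")
```

Each behavior element thus pairs a human-readable label with a machine-resolvable link, which is what makes the provenance chain auditable.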

Using METS to Express Digital Provenance for Complex Digital Objects (Conference Paper/Presentation) [show prizes]
Title: Using METS to Express Digital Provenance for Complex Digital Objects
Author: Christy Chapman
Author: C. S. Parker
Author: W. Brent Seales
Author: Stephen Parsons
Abstract: Today’s digital libraries consist of much more than simple 2D images of manuscript pages or paintings. Advanced imaging techniques – 3D modeling, spectral photography, and volumetric x-ray, for example – can be applied to all types of cultural objects and can be combined to create complex digital representations comprising many disparate parts. In addition, emergent technologies like virtual unwrapping and artificial intelligence (AI) make it possible to create “born digital” versions of unseen features, such as text and brush strokes, that are “hidden” by damage and therefore lack verifiable analog counterparts. Thus, the need for transparent metadata that describes and depicts the set of algorithmic steps and file combinations used to create such complicated digital representations is crucial. At EduceLab, we create various types of complex digital objects, from virtually unwrapped manuscripts that rely on machine learning tools to create born-digital versions of unseen text, to 3D models that consist of 2D photos, multi- and hyperspectral images, drawings, and 3D meshes. In exploring ways to document the digital provenance chain for these complicated digital representations and then support the dissemination of the metadata in a clear, concise, and organized way, we settled on the use of the Metadata Encoding and Transmission Standard (METS). This paper outlines our design to exploit the flexibility and comprehensiveness of METS, particularly its behaviorSec, to meet emerging digital provenance metadata needs.
Date: 12/03/2020
Conference Name: Metadata and Semantics Research Conference 2020

Towards Automating Volumetric Segmentation for Virtual Unwrapping: Supporting Deep Learning Through Volumetric Page Instance Segmentation (Conference Paper/Presentation) [show prizes]
Title: Towards Automating Volumetric Segmentation for Virtual Unwrapping: Supporting Deep Learning Through Volumetric Page Instance Segmentation
Author: Kristina Gessel
Author: Stephen Parsons
Author: C. S. Parker
Author: W. Brent Seales
Abstract: A new algorithm for volumetric segmentation that produces high-quality segmentations of the pages of a manuscript is presented. This approach replaces the current segmentation method of the virtual unwrapping pipeline and greatly reduces the effort and time required of a user. The algorithm is applied to extract pages with clear ink signal from micro-CT scans of the M.910 manuscript. Future applications of this algorithm, particularly its ability to generate training data for a supervised machine learning algorithm that fully performs segmentation automatically, are also discussed.
Date: 11/06/2020
Primary URL: https://drive.google.com/file/d/1h8C3OR_kO85xIps1x5LTV3lDcEth2Pff/view?usp=sharing
Primary URL Description: Recording of presentation
Conference Name: 25th Cultural Heritage and New Technologies Conference
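
To make the segmentation task concrete, here is a deliberately naive sketch (an assumption for illustration, not the paper's algorithm): treating the brightest voxel in each column of a CT slab as a single page surface. The real difficulty, and the reason a dedicated algorithm is needed, is that a manuscript volume contains many tightly packed layers.

```python
import numpy as np

def crude_page_surface(slab):
    """Naive illustration: the depth of maximum intensity in each
    (y, x) column of a CT slab, taken as one page surface.
    Real volumetric segmentation must untangle many such layers."""
    return np.argmax(slab, axis=0)

slab = np.zeros((5, 4, 4))
slab[2] = 1.0                       # a flat, bright "page" at depth 2
surface = crude_page_surface(slab)  # every column reports depth 2
```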

Deep Learning for More Expressive Virtual Unwrapping (Conference Paper/Presentation)
Title: Deep Learning for More Expressive Virtual Unwrapping
Author: Stephen Parsons
Author: Kristina Gessel
Author: C. S. Parker
Author: W. Brent Seales
Abstract: This paper presents the general use of deep learning for texturing within the virtual unwrapping model. Virtual unwrapping is a software pipeline for the noninvasive recovery of texts inside damaged manuscripts or scrolls via the analysis of tomography and consists of three stages. Segmentation isolates pages or layers as surface meshes, texturing paints these surfaces based on the local neighborhood in the tomography, and flattening produces legible images from the folded, rolled or warped meshes. The pipeline allows for the recovery and restoration of a variety of otherwise lost or hidden heritage objects. This work further expands the generalization of the texturing stage from that of Parker et al. (2019) which trains a neural network to act as the texturing component. Neural networks can be trained to detect not only carbon inks, but any desired signal in tomography data. Additionally, they can output not only color images, but any modality or form desired by the scholar. The contributions of this paper are as follows: 1) A conceptualization of texture mapping as a general function, perhaps learned, mapping tomography to any modality is established. 2) Several technical improvements to the previous neural network approach are discussed. 3) This framework is applied to new manuscript scan data, yielding state-of-the-art results.
Date: 11/06/2020
Primary URL: https://drive.google.com/file/d/1q21CxpRYCXItXqyp0ZjzpHhqur4NpjDt/view
Primary URL Description: Recorded video of conference presentation.
Conference Name: 25th Cultural Heritage and New Technologies Conference
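
The texturing stage described above, a function mapping a local tomographic neighborhood to any output modality, can be sketched as follows. The function and the stand-in "model" are hypothetical; in the pipeline the abstract describes, the model would be a trained neural network.

```python
import numpy as np

def texture_surface(volume, points, model, r=3):
    """Hypothetical texturing stage: for each 3D surface point produced
    by segmentation, extract a cubic neighborhood from the tomography
    and let a (possibly learned) model map it to an output value, such
    as an ink probability or a color intensity."""
    out = np.empty(len(points))
    for i, (z, y, x) in enumerate(points):
        patch = volume[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
        out[i] = model(patch)
    return out

vol = np.random.default_rng(1).random((20, 20, 20))
pts = [(10, 10, 10), (12, 8, 9)]
vals = texture_surface(vol, pts, lambda p: p.mean())  # stand-in "model"
```

Framing texturing as a generic function is what lets the same pipeline emit carbon-ink detections, color images, or any other modality a scholar needs.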

Keeping Current: The Herculaneum Project (Public Lecture or Presentation)
Title: Keeping Current: The Herculaneum Project
Abstract: For more than 20 years, Professor Brent Seales has worked to digitally restore the most damaged, inaccessible, and venerated cultural heritage objects in the world. In this talk, he and his team will present an overview of their current project to virtually unwrap and read the Herculaneum scrolls and other unopenable texts. The team will cover (1) the important role of data science, (2) the use of photogrammetry to create 3D models of open papyri fragments, (3) the development of a machine learning tool to identify the ink used in the Herculaneum scrolls, and (4) the computer scientist's role in establishing the integrity of digital objects.
Author: W. Brent Seales
Author: C. S. Parker
Author: Stephen Parsons
Author: Christy Chapman
Date: 11/18/2020
Location: University of Kentucky, Lexington, KY
Primary URL: https://youtu.be/sIQBuEHK4Lc
Primary URL Description: Video of presentation.

Keeping Current: Metadata-enabled Computational Graphs (Public Lecture or Presentation)
Title: Keeping Current: Metadata-enabled Computational Graphs
Abstract: Keeping track of computational metadata is a pain. Sure, it’s easy enough for small processes, but as the complexity of pipelines grow, so do the headaches. I will discuss a work-in-progress C++ library I’ve been working on for my research group that turns existing C++ classes and functions into reproducible dataflow pipelines with automatically tracked metadata.
Author: C. S. Parker
Date: 10/07/2020
Location: University of Kentucky, Lexington, KY
Primary URL: https://youtu.be/nyEGQsIx9Aw
Primary URL Description: Video of presentation

OLLI: Reading the Invisible Library-How We Are Using Technology to Recover Lost Texts (Public Lecture or Presentation)
Title: OLLI: Reading the Invisible Library-How We Are Using Technology to Recover Lost Texts
Abstract: The team presents an overview of the history and future of their "virtual unwrapping" work. The talk includes: a description of early experiments and the success with reading the scroll from En-Gedi; a discussion of the different kinds of material damage and the need to address open, closed, and other form factors; recent work to incorporate photogrammetry into the process; how open and closed fragments plus ML/AI are used to read inside damaged scrolls; the importance of metadata and a digital provenance chain; and a broader view of the Invisible Library with a look ahead to current and future projects (Morgan M.910, Dead Sea Scroll fragments, and ancient book bindings).
Author: W. Brent Seales
Author: C. S. Parker
Author: Stephen Parsons
Author: Christy Chapman
Date: 09/29/2020
Location: Osher Lifelong Learning Institute at the University of Kentucky
Primary URL: https://www.uky.edu/olli/
Primary URL Description: Home page for the Osher Lifelong Learning Institute

Keeping Current: Towards Automating Volumetric Segmentation for Virtual Unwrapping (Public Lecture or Presentation)
Title: Keeping Current: Towards Automating Volumetric Segmentation for Virtual Unwrapping
Abstract: I will present a new approach to volumetric page instance segmentation — the process of extracting pages of a book or layers of a scroll from volumetric data. I will show some of the interesting results we have obtained by using this approach to segment micro-CT scans of an unopenable 6th century Coptic manuscript and discuss our goal of fully automating the segmentation process.
Author: Kristina Gessel
Date: 11/20/2020
Location: Davis Marksbury Theatre, Computer Science Department, University of Kentucky
Primary URL: https://youtu.be/An4cAW29EGo
Primary URL Description: Video of presentation
Secondary URL: https://www.engr.uky.edu/research-faculty/departments/computer-science/research/keeping-current-department-colloquia
Secondary URL Description: Keeping Current Colloquia Home Page

Reading the Invisible Library: Virtual Unwrapping and the Scroll from En-Gedi (Public Lecture or Presentation)
Title: Reading the Invisible Library: Virtual Unwrapping and the Scroll from En-Gedi
Abstract: Progress over the past decade in the digitization and analysis of text found in cultural objects (inscriptions, manuscripts, scrolls) has led to new methods for reading the “invisible library”. This talk explains the development of non-invasive methods, showing results from restoration projects on Homeric manuscripts, Herculaneum material, and Dead Sea scrolls. Premised on “virtual unwrapping” as an engine for discovery, the presentation culminates in a new approach that may indeed be the pathway for rescuing still-readable text from some of the most stubbornly damaged materials, like the enigmatic Herculaneum scrolls.
Author: W. Brent Seales
Date: 12/08/2020
Location: UM-St. Louis Digital Humanities Series in St. Louis, MO; presented via Zoom from Kentucky

Virtual Unwrapping of Herculaneum Fragments and Scrolls: Recent Results (Conference/Institute/Seminar)
Title: Virtual Unwrapping of Herculaneum Fragments and Scrolls: Recent Results
Author: W. Brent Seales
Abstract: x
Date Range: 11/07/2020
Location: Recent Research in Imaging and Archaeological Science: Herculaneum and Beyond, a joint event between the Institute of Classical Studies in London and the British Friends of Herculaneum. Held via Zoom.

The Digital Restoration Initiative: Reading the Invisible Library (Public Lecture or Presentation)
Title: The Digital Restoration Initiative: Reading the Invisible Library
Abstract: Damaged artifacts that contain text make up an “invisible library” of written material that is incredibly difficult to read. But progress over the past decade using new computer techniques for the digitization and analysis of text found in cultural objects (inscriptions, manuscripts, scrolls) has led to workable, non-invasive methods for reading this invisible library. This talk shows results over the past two decades from digital restoration projects on Homeric manuscripts, Herculaneum material, and Dead Sea scrolls, culminating in the reading of the text from within a damaged scroll unearthed at En-Gedi, which has been hailed as one of the most significant discoveries in biblical archaeology of the past decade. Premised on “virtual unwrapping” as an engine for discovery, this presentation explains the complete process developed for reading the scroll from En-Gedi, and the broader significance of the discovery. The talk concludes by unveiling a new approach – Reference-Amplified Computed Tomography (RACT) – where machine learning becomes a crucial part of the digital restoration pipeline. You will leave this talk considering that RACT may indeed be the pathway for rescuing still-readable text from some of the most stubbornly damaged materials, like the enigmatic Herculaneum scrolls.
Author: W. Brent Seales
Date: 02/21/2021
Location: Long Island, Archaeological Institute of America, via Zoom
Primary URL: https://www.archaeological.org/lecturer/w-brent-seales/
Primary URL Description: AIA lecturer profile page

Computed Tomography and Virtual Unwrapping of Scrolls (Conference Paper/Presentation)
Title: Computed Tomography and Virtual Unwrapping of Scrolls
Author: W. Brent Seales
Abstract: Progress over the past decade in the digitization and analysis of text found in cultural objects (inscriptions, manuscripts, scrolls) has led to new methods for reading the “invisible library”. This talk explains the development of non-invasive methods, showing results from restoration projects on Homeric manuscripts, Herculaneum material, and Dead Sea scrolls. Premised on “virtual unwrapping” as an engine for discovery, the presentation culminates in a new approach that may indeed be the pathway for rescuing still-readable text from some of the most stubbornly damaged materials, like the enigmatic Herculaneum scrolls.
Date: 07/30/2021
Primary URL: https://docs.google.com/presentation/d/1hquZoFIL2WLVmWhS7XHzK0T-q6Ac4Qop/edit?usp=sharing&ouid=113415078631637261934&rtpof=true&sd=true
Primary URL Description: Link to slide deck in Google drive
Conference Name: Digital Imaging for Non-Destructive Testing 2021, an ASNT event

EduceLab: An Ecosystem for Advancing the Scientific Foundations of Heritage Science (Public Lecture or Presentation)
Title: EduceLab: An Ecosystem for Advancing the Scientific Foundations of Heritage Science
Abstract: This talk will report progress on several NCHS projects at the University of Kentucky in the context of EduceLab, an emerging programmatic and instrumentation ecosystem. The infrastructure platform, named after the verb “to educe” (meaning to bring out from data, or to develop something that is latent but is not, on its own, explicit) is organized into deployment clusters, which broaden its use and facilitate training for future innovations. The primary deployment clusters are tied together by equipment and expertise in data science, recognizing that we are shifting to a “fourth paradigm” in scientific inquiry revolving around data analysis at scale. Progress in areas such as "virtual unwrapping" and AI-inspired solutions to NCHS problems will make the case for continued interdisciplinary collaboration and sustained investment in EduceLab infrastructure.
Author: W. Brent Seales
Date: 07/01/2021
Location: Via Zoom
Primary URL: https://docs.google.com/presentation/d/1u3FmU9oOyc4qbyKpbASTxAuvhEKjDVIv/edit?usp=sharing&ouid=113415078631637261934&rtpof=true&sd=true
Primary URL Description: Link to slide deck on Google Drive

Micro-Computed Tomography (CT) (Book Section)
Title: 4.1.1.7 Micro-Computed Tomography (CT), in Part 4 Science and Technology – 4.1 Scientific Methodologies and Technology – 4.1.1 Imaging Technologies
Author: Paul C. Dilley
Author: Christy Chapman
Author: C. Seth Parker
Author: W. Brent Seales
Editor: Bas van der Mije
Abstract: The application of new technologies to the study of cultural objects is inexorable, driven by intense human curiosity and the profound mysteries of the past. The long-term diffusion of technology into cultural studies and the humanities has matured in this technical age, becoming much less invasive and remarkably precise, even revelatory. For example, one of the first non-medical applications of computed tomography (CT), achieved within two years of CT’s commercial development in the 1970s, was the visualization of the internal structure of ancient Egyptian mummies. Similarly, conservation of and scholarship with manuscripts have become increasingly technology-oriented. With the advent of technical approaches that are more and more precise and at the same time less and less invasive, research is accelerating and diffusing into new fields, spawning novel techniques that reveal new information. The rapid technological advances over the last decade have led to exciting, innovative approaches to the study of cultural heritage objects. One of the most promising and increasingly used technologies is that of micro-computed tomography (micro-CT). Like medical CT scans, micro-CT is an “X-ray transmission image technique” that combines the penetrating clarity of X-rays with the speed, precision, and accuracy of computers. While X-ray technology offers a two-dimensional glimpse through an object, micro-computed tomography generates a highly detailed 3D image of the entire internal structure of an object, revealing its inner features. Just as X-ray and computed tomography revolutionized medicine when they were discovered (the responsible scientists received Nobel prizes for their work, German physicist Wilhelm Roentgen in 1901 and Allan MacLeod Cormack and Godfrey Hounsfield jointly in 1979), today’s micro-CT capabilities are similarly transforming the study of cultural heritage.
Year: 2021
Publisher: Brill
Book Title: Textual History of the Bible vol. 3D (Science and Technology)

The X-Ray Micro-CT of a Full Parchment Codex to Recover Hidden Text: Morgan Library M.910, An Early Coptic Acts of the Apostles Manuscript (Article)
Title: The X-Ray Micro-CT of a Full Parchment Codex to Recover Hidden Text: Morgan Library M.910, An Early Coptic Acts of the Apostles Manuscript
Author: Paul C. Dilley
Author: Christy Chapman
Author: C. Seth Parker
Author: W. Brent Seales
Abstract: One of the oldest near-complete copies of the Acts of the Apostles is found in codex M.910, a damaged Coptic manuscript copied sometime in the fifth or the sixth century, which is now held by the Morgan Library & Museum in New York, where it has eluded scholarly eyes for decades because of its fragile state. The Morgan (then known as the Pierpont Morgan Library) purchased the manuscript in 1962, right before it acquired another Coptic Acts of the Apostles codex, the Codex Glazier. While Hans-Martin Schenke edited this text, in the Middle Egyptian dialect of Coptic, in the 1980s, M.910 still awaits publication, which will soon be possible, thanks to collaboration between a scholar of early Christianity and Coptic (Paul Dilley), a computer scientist with a research focus on the digital recovery of ancient texts (Brent Seales and his team), and the head of conservation at the Morgan Library & Museum (Maria Fredericks). These specialists convened at the Morgan from December 11-18, 2017, to image the damaged manuscript with a Skyscan micro-CT scanner, donated for use in this project by Micro Photonics, the US distributor, and operated by Raj Manoharan. A second round of imaging focusing on a smaller set of fragments and separated pages was carried out in November 18-22, 2019 at Micro Photonics’ headquarters in Allentown, using the Skyscan 1272 and 1273 scanners, operated by Seth Hogg. Ben Ache of Micro Photonics coordinated on both occasions. Work on processing the images, and reading the resulting text, has been ongoing. This chapter will provide an overview of the manuscript and the imaging process, with a focus on the practical aspects of simultaneously addressing philological, conservation, and engineering challenges. To our knowledge, this work represents the first attempt to use x-ray imaging to read inaccessible writing on both sides of a surface (as opposed to scrolls with writing on one side), within a codex of substantial size.
Year: 2021
Format: Journal
Periodical Title: Manuscript Studies
Publisher: University of Pennsylvania Libraries and Press

vc-deps (Computer Program)
Title: vc-deps
Author: Seth Parker
Author: Stephen Parsons
Author: W. Brent Seales
Abstract: CMake project for building Volume Cartographer dependencies from source
Year: 2021
Primary URL: https://gitlab.com/educelab/vc-deps
Primary URL Description: Code repository
Access Model: open access
Programming Language/Platform: C++
Source Available?: Yes

ink-id (Computer Program)
Title: ink-id
Author: Seth Parker
Author: Stephen Parsons
Author: W. Brent Seales
Abstract: inkid is a Python package and collection of scripts for identifying ink in volumetric CT data using machine learning.
Year: 2021
Primary URL: https://gitlab.com/educelab/ink-id
Primary URL Description: code repository
Access Model: open access
Programming Language/Platform: Python
Source Available?: Yes

smeagol (Computer Program)
Title: smeagol
Author: Seth Parker
Author: W. Brent Seales
Abstract: smeagol is a C++14 library for creating custom dataflow pipelines that are instrumented for serialization. It was designed to make it easy to convert existing processing workflows into repeatable and observable pipelines for the purposes of experimental reporting, reliability, and validation.
Year: 2021
Primary URL: https://gitlab.com/educelab/smeagol
Primary URL Description: code repository
Access Model: open access
Programming Language/Platform: C++
Source Available?: Yes
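
The idea behind smeagol, turning a processing workflow into a pipeline whose every stage is automatically recorded, can be sketched in a few lines of Python. The class below is a hypothetical analogue, not the C++ library's API: each stage's name and an output digest are logged so the run can be serialized for reporting and validation.

```python
import hashlib
import json

class Pipeline:
    """Hypothetical Python analogue of an instrumented dataflow
    pipeline: every stage is recorded with its name and a short hash
    of its output, so the whole run can be serialized and audited."""
    def __init__(self):
        self.stages, self.log = [], []

    def add(self, name, fn):
        self.stages.append((name, fn))
        return self  # allow chaining

    def run(self, data):
        for name, fn in self.stages:
            data = fn(data)
            self.log.append({
                "stage": name,
                "digest": hashlib.sha256(repr(data).encode()).hexdigest()[:12],
            })
        return data

    def provenance(self):
        return json.dumps(self.log)

p = Pipeline().add("double", lambda xs: [2 * x for x in xs]).add("total", sum)
result = p.run([1, 2, 3])  # -> 12, with two logged stages
```

Recording a digest per stage is the design choice that makes runs comparable: two executions with identical provenance logs processed the same data the same way.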

Volume Cartographer (Computer Program)
Title: Volume Cartographer
Author: Seth Parker
Author: W. Brent Seales
Author: Stephen Parsons
Abstract: Volume Cartographer is a cross-platform C++ library and toolkit for virtually unwrapping volumetric datasets. It was designed to recover text from CT scans of ancient, badly damaged manuscripts, but can be applied in many volumetric analysis applications.
Year: 2021
Primary URL: https://gitlab.com/educelab/volume-cartographer
Primary URL Description: code repository
Access Model: open access
Programming Language/Platform: C++
Source Available?: Yes


Permalink: https://securegrants.neh.gov/publicquery/products.aspx?gn=HAA-263850-19