There is a fundamental difference between the representation and the interpretation of data, most certainly of historical ones. Technologies to implement both are usually discussed as systems of annotations and markup.
That both technologies are today usually applied without conceptual attention to this difference is the reason why, in [Thaller 2018] as well as in [Thaller 2020], we were highly critical of embedded markup, which has a tendency to obscure this difference and, in my opinion, leads into a dense epistemic labyrinth. The markup embedded into a document is expected to: (a) represent characters which do not exist in the available fonts, or which are non-alphabetic, like punctuation; (b) allow the representation of abstract texts resulting from the evaluation of various witnesses in a critical edition; (c) annotate a text with interpretations.
While here the focus on embedded markup blurs an important difference, in another case it creates one which does not exist conceptually: text, image, sound and other data are not inherently different. All data are raw material for an interpretative process leading to information. Markup, however, can only be embedded into a text; neither into an image, nor into a sound stream, nor into any of the other more complex basic data types. I currently find it difficult to recognize a unified concept of “annotation” which stresses the common problem of providing an interpretation of some data while abstracting from the data type of those data.
Purpose of this paper
The previous papers in this series have derived requirements for new technological concepts to overcome this confusing situation, notably:
- The representation of a historical source and its interpretation should be kept separate as clearly as possible in all stages of processing.
- For interpretative purposes we need technological support for annotation schemes which are independent of data types and are based on the concept of standoff markup.
- For representative purposes we need a way to handle strings which goes beyond the concept of a character string and can smoothly mix characters from the traditional character sets with visual snippets of “non-characters”.
- And any solution for interpretations must support a concept of ambiguity.
While these are principles derived from methodological / epistemic requirements, this paper does not intend to reflect on or develop these principles further, but is written from a decidedly technological point of view. While some of the methodological reasons behind the technical requirements are briefly discussed below to explain the need for individual features, I mainly try to develop extensions to well understood technologies to show that the implementation of both a generalized system for interpretative annotations and standoff markup for representative problems is realistic and technically achievable.
This is done in four sections: Section 1 discusses a generalized concept for sustainable annotations, standoff markup for all data types, based on a new concept of links; section 2 extends this to bidirectional links with a stricter model of administration. Both are understood to be solutions for basic problems of interpretative markup. Section 3 presents a concept for strings which allows a generalized solution for a large segment of what is currently handled by representative markup. Section 4 extends this towards ambiguous representations.
Style of the Technical Proposals
The later parts of this paper consist mainly of declarations of program libraries, which shall explore how great the effort of implementing the theoretically required capabilities actually is. Actual implementations would in all cases include additional functionality. When, e.g., we discuss a new type of string in the following, a library supporting it would certainly include functions to convert such a string to print or to another, more conventional, string representation. We have restricted ourselves to those parts of such libraries which are dedicated to the problems raised in this paper, ignoring all details which can be solved by the application of quite well understood programming patterns. Error handling has not been covered in detail. All very low-level details have been ignored as well, e.g., the handling of the endianness problem.
One aspect is more conceptual than strictly technical, but quite fundamental: I try not to describe systems of annotation and markup. I try to describe fairly low-level changes in the technology stack which would make it easy to include the capabilities achieved with such systems into all future software: not only into tools dedicated to text processing, but also into database systems, property graphs and all other types of software, enabling them routinely to handle more expressive data representations. Therefore, I do not discuss existing applications.
There is one aspect where the descriptions do not reflect a model for implementation: different blocks of functionality are described as more independent of each other than they would be in an implementation. So the concepts of token strings, enhanced token strings and ambiguous token strings in their crisp and weighted varieties would not be implemented as four different function libraries, but rather as one integrated one. The same holds for basic, enhanced, linked and bidirectional multi stream files.
With these restrictions, the libraries are assumed to be functionally complete. As, in accordance with the argumentation of [Thaller 2018], such solutions should be implemented at a low level of the technology stack, the declarations of all functions have been formulated in the C programming language, as many other programming languages are themselves based on software layers implemented in C. Most of the time this is unproblematic. Certain features offered, e.g., by C++ or other object-oriented languages, such as function overloading, would have made some declarations more compact, though.
In one case this has led to a convention which needs to be introduced explicitly. As C has only very restricted support for variadic parameter lists, a specific notation has been used. Whenever a parameter list of the type function( some parameters, …, int NofPar, int parA1, char *parB1, …); appears, this indicates a function where the part of the parameter list “parA, parB” is to be repeated NofPar times, referred to in the discussion of the function as parA1 – parAn, parB1 – parBn.
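For illustration, a purely hypothetical declaration following this convention; the function and all parameter names are invented for this example only:

int addEntries(int handle, int NofPar, int weightA1, char *labelB1, ...);
// A call with NofPar = 2 supplies the pair (weight, label) twice:
addEntries(myHandle, 2, 10, "first", 20, "second");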
Last remark: To keep things simple, all pointers and integers are assumed to have a length of 4 bytes.
There exists no implementation currently; to restrict flights of fantasy and inconsistencies a bit, all proposed functions and data structures have been expressed in header files which can be and have been compiled by gcc.
I have no illusions: during implementation some inconsistencies and unexpected dependencies would certainly be detected. If you detect any, please let me know.
1. Generalizing relationships between files as a base for generalized annotations
1.1 The current concept of a file in software technology
If you are familiar with the system-call level of I/O, skip this section and continue with 1.2.
If you have no technical background and want to engage with these problems nevertheless: Section 1.1 tries to introduce the technologies concerned at a low level of complexity. “Low level of complexity” being a notoriously relative term, though.
One of the reasons for the ubiquity of UNIX and the operating systems derived from it is its I/O model, which has also been partially copied by the other serious contender, Windows. A reader interested in the complexities of earlier solutions may consult [Claybrook 1983]. At least for the purpose of this paper we can say that today all I/O operations are based upon the notion of a data stream on which a small number of operating system functions operate. These functions are invoked by the I/O libraries of the various programming languages to provide I/O systems which are more comfortable than the raw services offered by system calls, as the low-level functions of the operating system are usually called. As we will propose a slight generalization of this model, it may be wise to recapitulate it, even if all readers with a technical background should be familiar with it.
The basic model of a file provides three concepts.
- A stream is a series of individual bytes, which are numbered starting with 0.
- Every stream has a current position stored in a file pointer, which at the start of processing is conceptually at the beginning of the first byte of the stream.
- If for any reason the processing of the file has led to a situation where the file pointer is positioned after the last byte of the file, the file pointer has the value end of file or EOF.
Figure 1 shows the file when the file pointer is at its very beginning, or at address “0”[i]; figure 2 shows the file pointer after the last byte, at the end of file position.
To simplify matters we ignore a few special cases – handling data coming from a URL as a pseudo file stream etc. – and some specialized functions which provide shortcuts to the more basic functionalities. Furthermore, we tacitly include a function from a higher level in the technology stack and abstract a bit from implementation details. We can then say that all transfers between a persistent medium (disk, USB stick, tape library) and the volatile memory of the computer are handled by six functions:
fileObj = open(pathName, flags, mode);
status = close(fileObj);
actualCount = write(fileObj, buffer, count);
actualCount = read(fileObj, buffer, count);
filePosition = seek(fileObj, position, whence);
filePosition = tell(fileObj);
1. open()
A fileObj is the data structure which represents the file in persistent storage within a running program. It contains the active file pointer and administers various buffers which take care of the actual transfer of data from the persistent medium into the memory blocks used by the running program.
The pathName in open() is a character string which ends in the filename as visible to the user of the program, potentially prefixed by character strings representing the directories of the hierarchy in which the file is positioned on the persistent media of the computer, e.g., its hard disk.
Flags represents a number of optional keywords which regulate what to do when the file addressed by pathName does not exist, whether writing to the file is allowed and similar properties.
Mode is mainly concerned with the question of which users have the right to access and / or modify the content of the file.
So, the statement myData = open("myProject/Archive/mySource", O_RDWR, 0); would try to access the file "mySource" in the subdirectory "Archive" of the directory "myProject" and prepare it for both reading and writing; as it is supposed to exist already, no mode is specified. If the file exists, its content can henceforth be accessed in the program via "myData".
2. close()
After processing the various actions the user demands, close(myData) must be called to make sure that the file on the persistent storage device contains all the changes made during processing in memory and is accessible to other users.
3. read()
To read from the file you use statements like bytesRead = read(myData, myText, 1024);
This instructs the program to read, starting at the current position of the file pointer, up to 1024 bytes from the file represented by “myData” into a variable “myText”, presumably a character string. “bytesRead” informs the programmer how many bytes have actually been read. If, e.g., there had been only 500 bytes between the position of the file pointer and the end of the file, only those 500 would have been transferred. After the operation, the file pointer points to the byte after the last one read.
Graphically, if so far 1000 bytes have been processed, the operation read(myData, myText, 1024); would produce the following effect:
4. write()
To write to the file you use statements like bytesWritten = write(myData, myText, 1024); which works exactly like the read operation but exchanges the direction of the transfer between volatile and persistent storage.
5. seek()
So much for the process of reading / writing a file from its beginning to its end. As every read shifts the file pointer, two successive reads will read byte sequences – strings of characters – which are adjacent. If in our example you wanted to skip the first one thousand bytes, you would have to position the file pointer explicitly at the beginning of the sequence you want to read. To do so, you would enter the two commands: position = seek(myData, 1000, SEEK_SET) and read(myData, myText, 1024).
Graphically, this would result in:
The keyword SEEK_SET can be ignored for now, except that, to understand the following section, it is important to know that by specifying another keyword there (SEEK_END) we could position the file pointer explicitly at the end of the file.
6. tell()
Explaining what you would want to use tell() for requires a slightly more complex example.
Assume you have the following graph consisting of three nodes:
Any program handling this graph would do so by means of two types of data structures, approximately like this:
nodeObj
{
int nodeId;
string name;
string first;
int NofEdges;
edgeObj myEdges[];
} ;
and
edgeObj
{
int edgeId;
string type;
nodeObj source;
nodeObj target;
} ;
If we try to express the graph in figure 8 with these structures, we get:
nodeObj{0, "Smith", "John", 1, {0} }
nodeObj{1, "Doe", "Jane", 1, {2} }
nodeObj{2, "Smith", "Jill", 2, {1, 3} }
and
edgeObj{0, "daughter", 0, 2}
edgeObj{1, "father", 2, 0}
edgeObj{2, "mother", 1, 2}
edgeObj{3, "daughter", 2, 1}
While professional graph libraries add many refinements, the most basic way to administer such an approach to represent a graph uses two arrays which keep the nodes and edges, respectively.
nodeObj myNodes[3];
myNodes[0] = nodeObj{0, "Smith", "John", 1, {0} }
myNodes[1] = nodeObj{1, "Doe", "Jane", 1, {2} }
myNodes[2] = nodeObj{2, "Smith", "Jill", 2, {1, 3} }
edgeObj myEdges[4];
myEdges[0] = edgeObj{0, "daughter", 0, 2}
myEdges[1] = edgeObj{1, "father", 2, 0}
myEdges[2] = edgeObj{2, "mother", 1, 2}
myEdges[3] = edgeObj{3, "daughter", 2, 1}
If the data are in memory, it is quite easy to see that Jill is connected to her parents by the edges in myEdges[1] and myEdges[2]. But how would you save such a graph in a file with the system calls described above?
You would first write the nodes and edges into the file, leaving a bit of free space at the beginning of the file. Simplifying slightly, you would do so with a chunk of code as follows, where tell() returns the current position of the file pointer, that is, the offset between the start of the file and the byte at which the content of the node or edge object will be saved in the file. We assume that sizeOfObj() tells us how many bytes are needed to save a specific object in a file, and that we are using a system where an integer uses 4 bytes and a file position can be stored in an integer.
int nodeAddr[3];
int edgeAddr[4];
int setAddr[2];
int setCount[2] = { 3, 4 }; // Define number of nodes and edges
int position;
int length;
seek(myData, 16, SEEK_SET); // Leave the first sixteen bytes empty.
setAddr[0] = 16; // Data on nodes will start in byte sixteen[ii].
for (int i = 0; i < 3; i++) // Write data on nodes to the file
{
length = sizeOfObj(myNodes[i]);
nodeAddr[i] = tell(myData);
write(myData, &length, 4);
write(myData, &myNodes[i], length);
}
setAddr[1] = tell(myData);
for (int i = 0; i < 4; i++) // Write data on edges to the file
{
length = sizeOfObj(myEdges[i]);
edgeAddr[i] = tell(myData);
write(myData, &length, 4);
write(myData, &myEdges[i], length);
}
seek(myData, 0, SEEK_SET); // Position the file pointer at the start
write(myData, setAddr, 8); // Starting addresses of nodes and edges: first 8 bytes
write(myData, setCount, 8); // Numbers of nodes and edges: next 8 bytes
The nice thing about this solution is that we do not have to know anything about the graph to be able to read it back from the file. The number of nodes and edges and their positions within the file – usually called addresses – are kept within the first 16 bytes of the file; it is therefore unimportant for the following chunk of simplified code whether the graph has 3 nodes or 40,000. Before we show the code, let us have a look at the content of the file:
Assuming we have the same variables as before, you could reactivate the graph by reading it from the file as follows:
seek(myData, 0, SEEK_SET); // File pointer at the start of the file
read(myData, setAddr, 8); // Read starting addresses of nodes and edges
read(myData, setCount, 8); // Read numbers of nodes and edges
seek(myData, setAddr[0], SEEK_SET); // Position file to the nodes
for (int i = 0; i < setCount[0]; i++) // Read nodes from the file
{
read(myData, &length, 4);
read(myData, &myNodes[i], length);
}
seek(myData, setAddr[1], SEEK_SET); // Position file to the edges
for (int i = 0; i < setCount[1]; i++) // Read edges from the file
{
read(myData, &length, 4);
read(myData, &myEdges[i], length);
}
—*—
If this is the very first time you have encountered an I/O system call, you probably found the mechanism complicated. You may be relieved to know that, if you have understood the mechanism of storing blocks of data intermixed with the starting positions of those blocks, in order to record how the blocks are interrelated, you have understood quite a bit: it is (almost) everything you have to know to understand a relatively complex format like TIFF, for example. I doubt that there is any not completely trivial file format which does not rely heavily on this mechanism.
1.2 Shortcomings of the Current Concept of a File in Software Technology
(Yes, this paper discusses a model for standoff markup. As one of the enabling technologies needed for that model is more easily explained using another example, I beg your patience as I turn to this example next.)
Storing blocks of data and their addresses within the file to show interconnections between such blocks is an extremely powerful mechanism. As it is content agnostic it can be used to save every data structure to persistent devices and read it back again. This absolute disinterest in what the stored data mean has just one shortcoming: if there are structures of data which are relevant to different types of file, it is impossible to define a solution common to all of them. Image and sound data are notorious for requiring metadata. For the technical properties of images there has, therefore, existed for some time a highly detailed standard, the Exchangeable image file format for digital still cameras: Exif Version 2.32 [Exif 2019], which despite its name is also applicable to sound files. Conceptually this metadata standard applies to all image and sound files. As every file format must be built up from scratch with the system calls discussed here (even if they are hidden in one of the more abstract I/O models of programming languages), however, there currently exist provisions for these metadata in the TIFF and JPEG formats, but not in the JPEG2000 or GIF formats. Each of the communities responsible for the latter two and the many other existing formats must find its own solution for embedding EXIF into the format it is responsible for.
It is of course possible to declare within a project that two sets of data which are closely interrelated, but operated upon by different tools, are saved in a pair of files independent of each other, administered by a superordinated system aware of the two being related to each other. But that means that the relationship between these two files is only guaranteed within this superordinated system. What we want is a situation where we can say: I am defining a completely new image format, which takes care of a new class of algorithms and is completely different from all existing ones. At the same time, however, I want this file format to include EXIF metadata by simply calling saveExifData() / loadExifData() functions from an appropriate library developed by somebody else. And I really do not want to have to do anything else to accomplish that. Furthermore, the same should work with saveXMPData() / loadXMPData() for content-oriented metadata. And if in ten years a new metadata standard comes up which is supported by a sufficiently strong community to create an appropriate support library, I want to be able to support it with my next version without doing more than including a saveBestMetadataEverData() / loadBestMetadataEverData(). And, by the way: if somebody should not be able to process my image format, they should nevertheless be able to extract any of the metadata contained in a file stored according to my format.
1.3 Multi Stream Files (MSF)
1.3.1 Structure of an MSF
This can be achieved relatively easily if we give up the notion that a file stores exactly one binary stream. As it would obviously be absurd to assume that we could completely change the structure of the current system calls, which are rooted as deeply in the operating system as is possible, the following notes assume that a new set of extended system calls should be able to process files which are as compatible with the currently used set as possible. The idea of a Multi Stream File or MSF simply assumes that one file on a persistent medium contains a sequence of addressing spaces, graphically:
This obviously solves one problem: as the different types of data and metadata are stored in one persistent chunk of storage, they cannot accidentally be separated. And the designer and maintainer of the TIFF standard does not have to think of EXIF or XMP properties and vice versa.
The one complication is that, while a TIFF reader could simply open this file and process it starting with byte 0 as usual, an EXIF reader would have no idea where to start.
Therefore, we must add a table of streams to the design of the file, which informs the user which stream starts where. This is quite usual – many file formats start with a table informing a reader of that file format which segment starts where.
struct streamSet
{
int nOfStreams;
streamInfo set[];
} ;
where
struct streamInfo
{
string streamName;
int streamStart;
int streamEnd;
} ;
In our case:
struct streamSet tableOfStreams =
{
3,
{ "TIFF", ???, n },
{ "EXIF", n+1, m },
{ "XMP", m+1, l }
} ;
That raises one problem though, indicated by the three question marks above: if we create the possibility to add image metadata in the form of a secondary stream to all existing image formats, but require that the information about it is at the start of the file, we would at the same time invalidate the image files for all existing readers, as they traditionally assume that they control the file starting with byte 0. We propose, therefore, to add the table of streams at the very end of the existing content. Now we can replace the question marks above by 0, and a TIFF reader could happily process the file, ignoring the additional data, while an EXIF or XMP reader aware of the concept of multi stream files could process the data it is appropriate for – and the same holds for all other image formats, past or future.
This solution would be highly viable: a very preliminary test, where I added a character string at the end of a file and tried to open the modified file with the standard tools installed on Linux and Windows 10, brought the result that for JPG, PNG, TIFF, MP4 and PDF files the tools opened the files without problems, ignoring the appended data. LibreOffice did the same for DOCX; the most recent version of Word complained about the additional characters, but was quite able to process the document afterwards.
Remaining problem: the table of streams – or TOS – is of variable length, so we must add its starting address to the very end of the file. As it is always wise to signal to a program opening a file whether it is in the expected format, this should be combined with a unique character string, say “MSF_TOS_”. Expanding figure 10, this results in the following file structure:
So far on the structure of an MSF. How are such files processed?
We have already mentioned that part of the purpose of this design is to make it as easy as possible to introduce such files into current information technology. So any software unaware of MSFs, which has no problems with bytes being in a file it does not know how to process, should be able to handle the first stream of an MSF as if it were a classical file.
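For MSF aware software, on the other hand, locating the table of streams is a matter of inspecting the end of the file. A minimal sketch, assuming the trailer consists of the eight byte signature "MSF_TOS_" followed by a four byte starting address of the TOS; the exact trailer layout and the function name are assumptions of this sketch, not part of the proposed library:

#include <stdio.h>
#include <string.h>

/* Return the starting address of the table of streams of an MSF,
   or -1 if the file does not end in the assumed MSF trailer. */
long locateTOS(FILE *fp)
{
    char magic[8];
    int  tosStart;

    if (fseek(fp, -12L, SEEK_END) != 0)      /* 8 bytes signature + 4 bytes address */
        return -1L;
    if (fread(magic, 1, 8, fp) != 8)
        return -1L;
    if (memcmp(magic, "MSF_TOS_", 8) != 0)   /* no MSF trailer: treat as a classical file */
        return -1L;
    if (fread(&tosStart, 4, 1, fp) != 1)
        return -1L;
    return (long)tosStart;                   /* byte offset of the table of streams */
}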
MSF aware processing can follow one of two scenarios. To keep matters simple, we look only at such cases where the first stream contains some content and the other streams add interpretative information to that object, be it metadata, annotations or any other data which are meaningful only in the presence of the content they refer to. To differentiate between them, we call the first stream the native stream of the file and the other ones additional streams.
We speak of basic MSF processing as long as all streams are functionally independent and there are no cross-references between them. To fully support the MSF concept, the actual system calls would have to be modified. As this is utopian at the moment, we discuss a layer at the same position in the technology stack as the standard I/O library of the C programming language. As quite a few other programming languages, notably quite a few implementations of Java, are written in C, this is in perspective less restrictive than it may seem.
Functions which are described below with “MSF” as a prefix replacing the “f” of the standard library – MSFopen() instead of fopen() – are supposed to be so close to the standard library functions that the difference is irrelevant for the programmer. Functions covering operations which are not meaningful with traditional streams have names starting with the same prefix, but without an equivalent in the standard library. The MSF I/O library should at first be implemented as a layer on top of the standard I/O library, mapping the MSF operations onto the functionality of the standard library. Specifically, an MSFSTREAM is administered by data structures which describe the relationships between the individual MSF streams and the underlying standard I/O file. This library is called the MSF library henceforth.
The following descriptions are intended to sketch the feasibility of an implementation. They are complete as far as described, except for all aspects of error handling. An actual implementation would almost certainly include additional functions which have been excluded here to remain concise.
1.3.2 Basic Processing of an MSF
MSFSTREAM *MSFopen(const char *pathname, const char *mode, const char *streamname);
MSFopen searches first in a table MSFset maintained by the MSF library whether an MSFSTREAM has already been connected to pathname. If this is not the case, a new internal structure MSFstreamTable is initialized, which is connected to the FILE* resulting from calling fopen() with pathname and mode. Into this MSFstreamTable the table of streams from the end of the opened file is loaded. This table contains an MSFstream structure for each stream in the file and a variable indicating which of these streams has been accessed last.
If the pathname in the MSFopen call has already been connected to another MSFSTREAM earlier, the existing MSFstreamTable for this file is modified as described below.
If no streamname is given in the MSFopen call, the function proceeds to process the native stream. Otherwise, it is checked whether a stream with the given streamname exists; if it does not, the function signals an error.
For each MSFSTREAM opened, the appropriate MSFstream structure contains a FILE*, the position of the file pointer and a status flag indicating whether the stream has been opened.
If all streams in the MSFstreamTable before the one currently being opened have either not yet been opened or have been opened in read-only mode, the FILE* of the MSFstream is the FILE* resulting from the fopen() executed for pathname, and the file pointer is initialized with the starting address of the respective stream. The status flag indicates that the stream is now open for access.
If a stream is to be opened write-enabled, all successive streams in the MSF file are copied into temporary scratch files and the FILE*s in their respective MSFstream entries point to their respective scratch files. The file pointer for such a stream points to the beginning of its scratch file. That a stream is connected to a scratch file is indicated by a flag.
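To make the preceding description more concrete, a minimal sketch of the internal bookkeeping just described; all type layouts and field names are assumptions of this sketch, not a normative part of the proposal:

#include <stdio.h>

typedef struct MSFstream
{
    FILE *fp;            /* the original file or, if write-enabled, a scratch file */
    long  position;      /* last recorded file pointer position of this stream     */
    int   isOpen;        /* status flag: has this stream been opened?              */
    int   isScratch;     /* flag: is this stream currently held in a scratch file? */
    char *streamName;    /* name taken from the table of streams                   */
    long  streamStart;   /* starting address of the stream within the file         */
    long  streamEnd;     /* end address of the stream within the file              */
} MSFstream;

typedef struct MSFstreamTable
{
    FILE      *file;         /* FILE* returned by fopen() for the whole MSF     */
    char      *pathname;     /* path under which the file has been opened       */
    int        nOfStreams;   /* number of streams in the table of streams       */
    int        lastAccessed; /* index of the stream that used the FILE last     */
    MSFstream *streams;      /* one entry per stream                            */
} MSFstreamTable;

typedef struct MSFSTREAM
{
    MSFstreamTable *table;   /* file wide bookkeeping this stream belongs to    */
    int             index;   /* index of this stream within table->streams      */
} MSFSTREAM;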
MSFstatus MSFclose(MSFSTREAM *msfstream);
If msfstream points to a scratch file, fflush() is called for that scratch file. The status flag marks the stream as inactive.
If the MSFstreamTable containing the msfstream contains any link streams (see below, section 1.3.3) which have been modified, these link streams are opened write-enabled, i.e., the appropriate temporary scratch files for them and all succeeding streams are opened.
If the msfstream to be closed is the last open one in the MSFstreamTable, the contents of all streams that have been transferred into temporary scratch files are appended to the end of the first stream opened write-enabled, and the temporary files are closed and deleted. All link streams that have been marked as modified are copied back from the local tables in their MSFstream structures. Then the file containing the streams is closed and its entry in the MSFset table is deleted.
MSFstatus MSFcloseall(MSFSTREAM *msfstream);
All streams in the file containing msfstream will be closed.
size_t MSFread(void *ptr, size_t size, size_t nmemb, MSFSTREAM *msfstream);
msfstream is checked to see whether the last I/O operation on the FILE underlying this stream has been performed by this stream. If not, the FILE is repositioned by fseek() to the last file pointer position recorded for this stream.
The arguments ptr, size and nmemb, together with the FILE* contained in msfstream, are handed over to fread().
The file pointer position for this stream is acquired via ftell() and stored in the MSFSTREAM structure of msfstream as the last recorded position of this stream.
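Following these three steps, and using the hypothetical bookkeeping structures sketched above, MSFread() might be mapped onto the standard library roughly as follows; a sketch, not a normative implementation, with error handling omitted:

size_t MSFread(void *ptr, size_t size, size_t nmemb, MSFSTREAM *msfstream)
{
    MSFstreamTable *t = msfstream->table;
    MSFstream      *s = &t->streams[msfstream->index];

    if (t->lastAccessed != msfstream->index)     /* another stream used the FILE last    */
        fseek(s->fp, s->position, SEEK_SET);     /* reposition to this stream's pointer  */

    size_t transferred = fread(ptr, size, nmemb, s->fp);

    s->position = ftell(s->fp);                  /* record the new position              */
    t->lastAccessed = msfstream->index;
    return transferred;
}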
size_t MSFwrite(const void *ptr, size_t size, size_t nmemb, MSFSTREAM *msfstream);
msfstream is checked to see whether the last I/O operation on the FILE underlying this stream has been performed by this stream. If not, the FILE is repositioned by fseek() to the last file pointer position recorded for this stream.
The arguments ptr, size and nmemb, together with the FILE* contained in msfstream, are handed over to fwrite().
The file pointer position for this stream is acquired via ftell() and stored in the MSFSTREAM structure of msfstream as the last recorded position of this stream.
MSFstatus MSFseek(MSFSTREAM *msfstream, MSFposition offset, int whence);
If msfstream is currently administered as part of the FILE underlying the MSFstreamTable, that is, if it is not connected to a temporary scratch file, offset is modified by adding the starting position of the stream within the FILE.
The FILE* connected to the msfstream, offset, and whence are handed over to fseek().
MSFposition MSFtell(MSFSTREAM *msfstream);
The FILE* connected to msfstream is handed over to ftell(), resulting in a position.
If the stream is currently administered as part of the FILE underlying the MSFstreamTable, that is, if it is not connected to a temporary scratch file, the starting position of the stream within the FILE is added to the position returned by the call to ftell().
This position is returned.
MSFstatus MSFeof(MSFSTREAM *msfstream);
If the physical file underlying msfstream is at its end of file, or msfstream is positioned after its last byte, true is returned. Otherwise, false is returned.
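To illustrate basic MSF processing, a short usage sketch: a metadata tool extracts the EXIF stream from an MSF whose native stream it cannot interpret. The stream name "EXIF", the mode string and the buffer handling are assumptions of this sketch:

unsigned char buffer[4096];
size_t        got;

MSFSTREAM *exif = MSFopen("myProject/Archive/myImage", "r", "EXIF");
if (exif != NULL)
{
    while ((got = MSFread(buffer, 1, sizeof(buffer), exif)) > 0)
    {
        /* hand the EXIF bytes over to whatever metadata parser is in use */
    }
    MSFclose(exif);
}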
1.3.3 Advanced Processing of an MSF: linked multi stream files
Conceptually the native stream of an MSF represents some information in a data stream. The additional streams are interpretations of this stream. In basic MSF processing, an interpretation always relates to the whole of the representation: So, a Dublin Core stream might describe the PDF of a book contained in the native stream.
When the granularity of an interpretation is finer, when, e.g., a standoff markup system wants to address bytes {n – m} in a textual stream, there arises the big problem which has held back standoff systems: a change in the textual stream must be reliably reflected in the addresses by which a markup stream refers to the text it interprets.
Changes in texts are notoriously frequent, as people working with them will always detect at least one more error to be corrected, so this is a serious problem. The principle of representation vs. interpretation does apply to images as well, however. An annotation which interprets the rectangle { n, m – o, p } in an image as representing an x is as vulnerable to the decision of the maintainer of an information system that the image is more useful if it is clipped, zoomed or rotated, as a markup system is to the decision to delete a redundant space character. The only difference is that in real life only few images which are integrated into a system to be annotated are changed afterwards, while texts, as mentioned, are notoriously prone to it. But in principle all annotation systems are effectively standoff markup systems referring to an object to be annotated which happens to be a data stream of a higher dimensionality than a one-dimensional linear string. As we see no difference between annotations and interpretative markup, the two terms “annotation” and “markup” are treated as synonymous within the remainder of this section. (On uses of markup for representational purposes see sections 3 and 4 below.)
In the current understanding of information technology, a reference to an object to be interpreted must be maintained within a system which connects a specific representation to a specific interpretation. This is quite sensible when this kind of connection occurs relatively rarely, for well circumscribed small sets of files. When we accept that the need for interpretations of data objects is, as in history, the rule rather than the exception, and consider, furthermore, that we always have to provide for the possibility to connect different and contradictory interpretations to one data representation, it seems at least very difficult, if not outright impossible, to ensure that a change in a representation is reflected consistently in ten or more different standoff interpretations addressing a byte sequence in that representation. More concretely: if two different historians interpret one source in two different standoff systems, that source being administered by an editor or curator who decides to correct the “last oversight”, both must be notified of that fact and both must have access to a mechanism by which they can modify the addressing used for the annotations within their own standoff systems.
We propose to redistribute the effort required between the technology components implementing such a system. An application should have the possibility to register a link to a multi stream file. Such links are integrated into a multi stream file as a privileged stream, a link stream of the MSF. For this mechanism to work, it must be supported by programs using that link stream according to the following conditions:
(a) All changes to the native stream of a linked MSF (LMSF) must be performed by a link-aware editor.
(b) All programs administering a standoff annotation for a LMSF must register their links to the LMSF with it.
It is not necessary that the annotations themselves form a stream within the MSF containing the native stream to be annotated. Without the additional provisions discussed in section 2 below, it may be quite easy, however, to forget that what one does with the file one is primarily working on also has consequences for another one. It may be easier to avoid this when one keeps both the annotated native stream and the stream(s) containing annotations together in one MSF.
The communication between an MSF containing a native stream to be annotated – called supplier in the following paragraphs – and a file containing the annotations – called consumer in the following paragraphs – follows the protocol below, where LMSFid is the id of a link stream and LMSFLid the id of one specific link from the annotations to a segment of the native stream.
(a) Registration:
(1) Consumer sends supplier a request for a linked stream and stores the LMSFid of that stream it receives.
(2) Supplier creates such a stream and its LMSFid which it returns to consumer.
(b) Creation:
(1) Consumer sends supplier, for an LMSFid, a set of coordinates in the native stream of supplier and requests an LMSFLid for it, which it stores at the block of data that is to point to that spot in the native stream. It does not store the coordinates in its own data structures, but the LMSFLid received for them.
(2) Supplier creates a new entry in its array of links and returns its LMSFLid to consumer. Whenever a program modifying the native stream performs an operation that invalidates the coordinates – by inserting / deleting characters, cropping or rotating an image – supplier changes the coordinates in the appropriate array of links.
(c) Use:
(1) Consumer asks supplier, which coordinates are currently stored for an LMSFLid.
(2) Supplier provides the current coordinates.
The interface for the administration of the links of an LMSF needed for this protocol is defined by the following functions:
MSFstatus LMSFsetDimensions(MSFSTREAM *msfstream, int dimension, MSFtype type);
This function inserts a dimension for the native stream into the MSFstreamTable of the MSFSTREAM structure of msfstream.
dimension is 1 for a linear string and 2 for a two-dimensional image. type for a stream with dimension 1 is either MStype_oneByte or MStype_tokenStream (see section 3 below), with dimension 2 it is MStype_image. The use of dimension and type for audio, video and other data contained in the native stream is currently undefined.
LMSFid LMSFregister(MSFSTREAM *stream, int dimension, MSFtype type);
A link stream is inserted into the MSFstreamTable with an internally created name. Its id, a positive number, is returned. If the dimension specified is different from the dimension previously registered for stream, an error is diagnosed and -1 is returned.
MSFstatus LMSFunregister(MSFSTREAM *msfstream, LMSFid id);
The link stream id is marked for deletion in the MSFstreamTable of msfstream. Actual deletion occurs the next time when the additional streams occurring after this link stream in the MSFstreamTable are updated from temporary scratch files.
MSFstatus LMSloadLinkStream(MSFSTREAM *msfstream, LMSFid id);
The link stream id is read from msfstream and a table of links is created in memory. Its address is stored in the msfstream.
LMSFLid LMSsetlink(MSFSTREAM *msfstream, LMSFid id, int* coordinates);
If the link stream id has not yet been opened for access LMSloadLinkStream() is called and the link stream is flagged as changed.
A reference to a subset of the data in the native stream is inserted into the table of links for the link stream id of msfstream. If it contains one-dimensional data, primarily text, an upper and a lower limit for a substring are inserted. If it contains two-dimensional data, primarily an image, the starting corner, width and height of a rectangle in the image are inserted. No provisions for non-rectangular areas are currently planned.
int *LMSgetlinkI(MSFSTREAM *stream, LMSFid id, LMSFLid link);
If the link stream id has not yet been opened for access LMSloadLinkStream() is called.
The coordinates of the link link in the table of links for the link stream id of stream are returned.
float *LMSgetlinkF(MSFSTREAM *stream, LMSFid id, LMSFLid link);
This function behaves exactly like LMSgetlinkI(), but returns floating-point coordinates.
MSFstatus LMSunlink(MSFSTREAM *stream, LMSFid id, LMSFLid link);
If the link stream id has not yet been opened for access LMSloadLinkStream() is called and the link stream is flagged as changed.
The link link in the table of links for link stream id of stream is deleted.
MSFstatus LMScleanLinks(MSFSTREAM *stream, LMSFid id);
All links in the link stream id of stream are deleted. The link stream remains active, however.
LMSFop *LMSupdateI(MSFSTREAM *msfstream, LMSFid id, int* coordinates, MSFstatus (*transformer)(int * in, int *out, int *coordinates));
A structure representing a necessary transformation of the coordinates of the links in id is put on a queue in the msfstream. A variable number of integer coordinates is stored, as well as the address of a transformation function transformer. A transformation function operating on a character string might, e.g., receive two coordinates indicating that, due to a textual insertion, all coordinates in links which fall after the insertion point must be incremented by a specific number.
LMSFop *LMSupdateF(MSFSTREAM *msfstream, LMSFid id, float* coordinates, MSFstatus (*transformer)(int * in, int *out, float *coordinates));
A structure representing a necessary transformation of the coordinates of the links in id is put on a queue in the msfstream. A variable number of floating-point coordinates is stored, as well as the address of a transformation function transformer. This function is currently provided mainly for the processing of rotation operations on image files.
MSFstatus LMSFundo(MSFSTREAM *msfstream, LMSFid id);
The most recent operation is deleted from the queue of pending transformations of the link stream id in msfstream.
MSFstatus LMSFcommit(MSFSTREAM *msfstream, LMSFid id);
All operations on the queue are applied to the local set of coordinates in order of insertion.
MSFstatus LMSFsaveLinkStream(MSFSTREAM *msfstream, LMSFid id);
The link stream id in msfstream is replaced by the copy currently in memory.
MSFstatus LMSFignoreTransformations(MSFSTREAM *msfstream, LMSFid id);
The local set of links is dropped, and the link stream is flagged as unchanged.
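To show how these functions interact, a sketch of the registration / creation / use protocol for a one-dimensional native stream, including a transformation function of the kind described for LMSupdateI(). The success value MSF_OK, the mode string and the concrete coordinate values are assumptions of this sketch; for brevity, both the supplier's and the consumer's side operate here on the same MSFSTREAM, since all link functions address the link stream inside the supplier's MSF:

/* Transformer for a textual insertion: coordinates[0] is the insertion point,
   coordinates[1] the number of bytes inserted. Link coordinates (start / end
   of an annotated substring) lying after the insertion point are shifted. */
MSFstatus shiftAfterInsert(int *in, int *out, int *coordinates)
{
    out[0] = (in[0] >= coordinates[0]) ? in[0] + coordinates[1] : in[0];
    out[1] = (in[1] >= coordinates[0]) ? in[1] + coordinates[1] : in[1];
    return MSF_OK;
}

/* (a) Registration: the supplier's MSF gets a link stream for one annotation tool. */
MSFSTREAM *text = MSFopen("myProject/Archive/mySource", "r+", NULL);
LMSFsetDimensions(text, 1, MStype_oneByte);
LMSFid links = LMSFregister(text, 1, MStype_oneByte);

/* (b) Creation: the consumer annotates bytes 100 - 120 of the native stream and
   stores the LMSFLid, not the coordinates, in its own data structures. */
int span[2] = { 100, 120 };
LMSFLid note = LMSsetlink(text, links, span);

/* An editor inserts 5 characters at byte 50 of the native stream and
   propagates the change to all links of the link stream. */
int insertion[2] = { 50, 5 };
LMSupdateI(text, links, insertion, shiftAfterInsert);
LMSFcommit(text, links);
LMSFsaveLinkStream(text, links);

/* (c) Use: the consumer asks for the current coordinates, now { 105, 125 }. */
int *current = LMSgetlinkI(text, links, note);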
2. Bidirectional Links: Making interpretative markup more sustainable
In section 1 we tried to keep changes to traditional habits to a minimum, so files were aware of each other only in a very general way. This could easily lead to a situation where a change needed in one file, triggered by a change in another file, is forgotten. That is highly risky if we assume that the general focus on complex and consistent contexts for data asked for in this series of papers implies many links at all levels of granularity, as described in [Thaller 2020]. This strongly calls for bidirectional links.
The bidirectionality of links is deeply rooted in the history of Hypertext, already appearing in the experimental systems of the eighties [Barnet 2013, 109]. Ted Nelson – one of the fathers of the notion of hypertext – always assumed that links must be bidirectional to be useful.[iii] The very first attempt of Tim Berners-Lee at a hypertext system was based on bidirectional links [W3C 1995]; for reasons of ease of implementation, however, he considered bidirectionality only in the design of the WWW [Berners Lee 1990] but dropped it in the implementation. The recent interest in annotations has led to a rise of interest in bidirectional links, which appear in several experimental systems. As I find it extremely hard right now to get fully transparent technical specifications for these, I avoid commenting on them.
To connect the data contained in a native stream and the interpretations of such data by the annotations in a robust and sustainable way, the links between them must be bidirectional. Conceptually: the link stream of an MSF containing data to be annotated must be informed about changes in the way in which the links contained in it are used. Technically this is quite simple, as described further below in this section. There exists an organizational problem, however, for which I see no easy solution.
We have tacitly assumed so far that the system of annotations would address the file with the native stream to be annotated by its plain file name. If we expect two files to be connected in a robust way – robust even if their administrators are only vaguely, or not at all, aware of each other – we must require that the files connected have more persistent identities. This means in this context that a link between two files should remain stable even if one or both files are moved somewhere else in the infosphere, whether on the same computer or somewhere in the WWW.
There are various possibilities to organize that. They would touch many other aspects of the way in which files can be used, beyond the ones interesting us here, so we remain very sketchy.
Obviously a register of files would be needed, which assigns a unique identity to each file within a well-defined scope. The scope of such a register easiest to understand is a single computer, where it could be created, as far as a specific operating system does not provide it anyway, by a small extension to the structure of the file system. Even in such a restricted scope small, but real, changes to the operating system would be required. In UNIX, e.g., one would have to extend the attributes stored for an inode by a flag indicating whether a file contains references to another one, and modify the rm command to enforce for such a file that, before the file is removed, all files it refers to are informed about the removal and can modify their own side of a link – or possibly, in the case of a very intimately interconnected system, prohibit the removal of the file.
This would be a change in the behavior of the file system which would not really be more radical than other changes in the behavior of file systems, but it would violate the assumption of the other sections of this paper that everything proposed here could be implemented in otherwise unchanged existing system environments.
We ignore this problem for the remainder of this section and assume that a test implementation would provide only a very shallow implementation of such a register, administering the fact that two files are interconnected by links, and would otherwise rely on the responsible behavior of the users – which is of course totally unrealistic for a production system.
The yet undefined identifier for a file administered by such a file register is in the following called a BMSFhandle.
In section 1 above we have not yet discussed the nature of an LMSFLid, though it is obviously a number referencing an array of ints or floats with n elements, n being the dimension of the native stream.
Our assumptions now:
(1) The annotation tool uses a BMSF as described below.
(2) The annotation tool follows the protocol described below.
A link in the MSF containing the native stream being annotated is defined as:
struct BMSFoutLink
{
LMSFid id;
MSFcoordinates *coordinates;
int NofReferences;
BMSFhandle *consumers;
LMSFid *cstream;
int *counts;
} ;
A link in the MSF containing the annotations is defined as:
struct BMSFinLink
{
LMSFLid id;
BMSFhandle supplier;
LMSFid sstream;
int count;
MSFcoordinates **coordinates;
} ;
A BMSF with an annotated native stream contains a BMSF stream, which is a serialized version of an array of BMSFoutLinks. The id is an index into that array. The coordinates are a set of integers or floats as discussed in section 1. NofReferences indicates how many BMSFs use this link; consumers[i] contains the BMSFhandle of exactly one BMSF which does so; cstream[i] contains the relevant LMSFid in that MSF; counts[i] indicates how many times this link is used in consumers[i].
A BMSF with annotations contains a BMSF stream, which is a serialized version of an array of BMSFinLinks. The id is an index into that array. The supplier is the BMSFhandle of the BMSF to which these links point; sstream contains the relevant LMSFid in that MSF; count indicates how often this link is used within the file holding the annotations.
The connections are administered as follows, where “consumer” is the file containing annotations for “supplier’s” native stream.
(a) Registration:
(1) Consumer sends supplier a request for a bidirectional linked stream and stores the LMSFid of that stream it receives.
(2) Supplier creates such a stream and its LMSFid which it returns to consumer.
(b) Creation:
(1) Consumer sends supplier, for an LMSFid, a set of coordinates in the native stream of supplier and requests an LMSFLid for it, which it stores in its own BMSFinLink structure and uses at the spot where the link to the stream to be annotated shall be stored. It does not store the coordinates in its own data structures, but the LMSFLid received for them.
(2) Supplier creates a new entry in its BMSFoutLink structure and returns its LMSFLid to consumer. Whenever a program modifying the native stream performs an operation that invalidates the coordinates – by inserting / deleting characters, cropping or rotating an image – supplier changes the coordinates in its BMSFoutLink structure and in the link stream, which is the serialized version of that structure.
(c) Use:
(1) Consumer asks supplier which coordinates are currently stored for an LMSFLid.
(2) Supplier provides the current coordinates.
(d) Reapplication:
(1) Consumer notifies supplier that it intends to apply a LMSFLid already received for another annotation.
(2) Supplier checks the LMSFLid and increases the count if valid. If the consumer notifying its intention to reapply is not identical with the consumer that has created the link, the BMSFoutLink structure of supplier is modified accordingly.
(e) Deletion by consumer:
(1) If a link is to be removed, consumer checks in its own BMSFinLink structure how frequently it has used this LMSFLid. If only once, it deletes the LMSFLid; otherwise it decrements its count.
In both cases it sends a notification about the deletion to supplier.
(2) Supplier checks in its own BMSFoutLink structure how frequently this LMSFLid has been used by consumer. If only once, it deletes the LMSFLid; otherwise it decrements its count.
(f) Deletion by command of supplier:
This may seem a surprising operation; it is crucial, however, to keep the interrelationships consistent and to avoid the equivalent of dead links. A typical case might be that part of the native stream is changed in a way where the segment addressed by the link ceases to exist – e.g., when the cropping of an image in a repository deletes an annotated area. The standard operation would be:
(1) Supplier informs consumer about the deletion and deletes the LMSFLid from its BMSFoutLink structure.
(2) Consumer uses its BMSFinLink structure to remove all references to the link within its native stream containing the annotations (usually presumably deleting the orphaned annotations).
(g) Consensual deletion on request of supplier:
This would be the more mature behavior between MSFs more intricately connected.
(1) Supplier informs consumer about its intention to delete a specific LMSFLid (preferably before it executes whatsoever makes it invalid).
(2) Consumer decides whether the LMSFLid can be erased. If so, it behaves as in (f) step (2) above and acknowledges the deletion.
(3) If consumer has agreed, supplier executes step (1) of (f) above.
(4) Otherwise, it avoids the operation that would have made the LMSFLid invalid.
To allow this, the following interface is provided.
The following functions operate exactly as their MSF equivalents, except that (a) they access the file not via a filename, but via a handle into the not yet defined register of files and (b) use the BMSFSTREAM structure supporting bidirectional linking, rather than the simpler MSFSTREAM structure.
BMSFSTREAM *BMSFopen(BMSFhandle file, const char *mode, const char *streamname);
MSFstatus BMSFclose(BMSFSTREAM *bmsfstream);
MSFstatus BMSFcloseall(BMSFSTREAM *bmsfstream);
size_t BMSFread(void *ptr, size_t size, size_t nmemb, BMSFSTREAM *bmsfstream);
size_t BMSFwrite(const void *ptr, size_t size, size_t nmemb, BMSFSTREAM *bmsfstream);
MSFstatus BMSFseek(BMSFSTREAM *bmsfstream, MSFposition offset, int whence);
MSFposition BMSFtell(BMSFSTREAM *bmsfstream);
MSFstatus BMSFeof(BMSFSTREAM *bmsfstream);
The following functions operate exactly as their LMSF equivalents, except that (a) they access the file not via a filename, but via a handle into the not yet defined register of files and (b) use the BMSFSTREAM structure supporting bidirectional linking, rather than the simpler LMSFSTREAM structure.
LMSFop *BMSupdateI(BMSFSTREAM *bmsfstream, LMSFid id, int* coordinates, MSFstatus (*transformer)(int * in, int *out, int *coordinates));
LMSFop *BMSupdateF(BMSFSTREAM *bmsfstream, LMSFid id, float* coordinates, MSFstatus (*transformer)(int * in, int *out, float *coordinates));
MSFstatus BMSFundo(BMSFSTREAM *bmsfstream, LMSFid id);
MSFstatus BMSFcommit(BMSFSTREAM *bmsfstream, LMSFid id);
MSFstatus BMSFsetDimensions(BMSFSTREAM *bmsfstream, int dimension, MSFtype type);
LMSFid BMSFregister(BMSFSTREAM *bmsfstream, int dimension, MSFtype type);
MSFstatus BMSFunregister(BMSFSTREAM *bmsfstream, LMSFid id, BMSFhandle consumer);
If all links in the BMSFoutLink structure of the link stream id of bmsfstream have only been used by consumer, the link stream id is marked for deletion in the BMSFstreamTable of bmsfstream. Actual deletion occurs the next time the additional streams occurring after this link stream in the BMSFstreamTable are updated from temporary scratch files.
If the BMSFoutLink structure contains links which have also been used by other files, all links connecting to consumer are removed from it; the stream is preserved with the remaining links, however.
MSFstatus BMSloadInLinkStream(BMSFSTREAM *bmsfstream, LMSFid id);
The link stream id is read from bmsfstream and a BMSFinLink table is created in memory. Its address is stored in the bmsfstream.
MSFstatus BMSloadOutLinkStream(BMSFSTREAM *bmsfstream, LMSFid id, BMSFhandle consumer);
The link stream id is read from bmsfstream and a BMSFoutLink table is created in memory. Its address is stored in the bmsfstream. This table contains only such links as have been used by consumer.
It is checked during loading whether the table in the stream also contains links used by other consumers. If so, a flag is set in the BMSFSTREAM.
LMSFLid BMSFsetlinkI(BMSFSTREAM *supplier, BMSFSTREAM *consumer, LMSFid inId, LMSFid outId, int* coordinates);
The arguments supplier and outId refer to the BMSF file, which contains the native stream to be annotated. The arguments consumer and inId refer to the BMSF file, which contains the annotations.
If the link stream inId has not yet been opened for access, BMSloadInLinkStream() is called and the link stream of consumer is flagged as changed.
If the link stream outId has not yet been opened for access, BMSloadOutLinkStream() is called and the link stream of supplier is flagged as changed.
A reference to a subset of the data in the native stream of supplier is inserted into the BMSFoutLink table for the link stream outId of supplier. If it contains one dimensional data, i.e., text, an upper and lower limit for a substring is inserted. If it contains two-dimensional data, i.e., an image, the starting corner, width, and height of a rectangle in the image are inserted. The consumer is stored as user of this link.
LMSFLid BMSFsetlinkF(BMSFSTREAM *supplier, BMSFSTREAM *consumer, LMSFid inId, LMSFid outId, float *coordinates);
This function behaves exactly like BMSFsetlinkI(), but expects floating-point coordinates.
int *BMSFgetlinkI(BMSFSTREAM *bmsfstream, LMSFid id, LMSFLid link, BMSFhandle consumer);
If the link stream id has not yet been opened for access, BMSloadOutLinkStream() is called and the link stream is flagged as changed.
The coordinates of the link link in the BMSFoutLink table of the link stream id of bmsfstream are returned. The consumer is stored as user of this link in that table.
float *BMSFgetlinkF(BMSFSTREAM *bmsfstream, LMSFid id, LMSFLid link, BMSFhandle consumer);
This function behaves exactly like BMSFgetlinkI(), but returns floating-point coordinates.
MSFstatus BMSunlinkOut(BMSFSTREAM *stream, LMSFid id, LMSFLid link, BMSFhandle consumer, BMSFunlinkMode mode);
If the link stream id has not yet been opened for access BMSloadOutLinkStream() is called and the link stream is flagged as changed.
If mode is all:
If the link has only been used by consumer, it is deleted from the BMSFoutLink structure of link stream id of stream. Otherwise, only the references to that link from consumer are removed.
If mode is single:
The count of references by consumer in the BMSFoutLink structure of link stream id of stream is decremented. If this results in the link not being used by consumer anymore, the process described for mode all is executed.
MSFstatus BMSunlinkIn(BMSFSTREAM *stream, LMSFid id, LMSFLid link, BMSFunlinkMode mode);
If the link stream id has not yet been opened for access BMSloadInLinkStream() is called and the link stream is flagged as changed.
If mode is all:
All references to that link are removed.
If mode is single:
The count of references in the BMSFinLink structure of link stream id of stream is decremented. If this results in the link not being used anymore, it is removed.
MSFstatus BMScleanLinks(BMSFSTREAM *bmsfstream, LMSFid id, BMSFhandle consumer);
All references from consumer to links in the BMSFoutLink structure of the link stream id of bmsfstream are removed. Links which are then no longer used by any consumer are deleted afterwards. The link stream itself remains active, however.
MSFstatus BMSFsaveInLinks (BMSFSTREAM *bmsfstream, LMSFid id);
The BMSFinLink structure of the link stream id in bmsfstream is replaced by the copy currently in memory.
MSFstatus BMSFsaveOutLinks (BMSFSTREAM *bmsfstream, LMSFid id);
The BMSFoutLink structure of the link stream id in bmsfstream is replaced by the copy currently in memory.
MSFstatus BMSFignoreTransformations(BMSFSTREAM *bmsfstream, LMSFid id);
The in-memory copy of the BMSFoutLink structure of the link stream id is dropped, and the link stream is flagged as unchanged.
MSFstatus BMSFrequestPermission(void);
No concrete plans for this function exist yet. It should be used to allow a program which intends to modify a file administering a native stream to ask the files administering annotations to it for their permission to do so. This function can only be specified in more detail once plans for the protocol for the communication between such programs have been made. It is a placeholder in the library, only reminding the implementer of the need for such communication. For the time being it always returns true.
3. Token Strings / Token Streams
So far, our argument has dealt with the connection between straightforward data types: character strings as one-dimensional arrays of bytes or fixed-length groups of two or four bytes, each representing exactly one character; images as two-dimensional arrays of n-byte groups, each describing exactly one pixel. In this environment addressing any of them is quite straightforward. The substring starting at byte n with a length l is extremely easy to understand, as is the rectangle starting at {x, y} and extending {w, h} bytes in the direction the coordinates grow.
The underlying assumption that all texts can be transcribed as well understood standardized characters I have called unrealistic in [Thaller 2020] and indeed already in [Thaller 1993, 268-269], unless we assume that historians totally and completely understand the sources they encounter. Such a complete understanding may seem natural to linguists or computer scientists, or even to philologists who live in the clean world of standardized printed editions; for a historian such an assumption is simply unacceptable in my opinion.
This is the more serious, the more committed we are to our starting principle that there is a difference between representation and interpretation. As [Coombs 1987, 935] has argued, there is a level of handling texts where interpunctuation and markup are simply the same. I agree, but I must point out that there is a difference between an interpunctuation mark which is in the transmitted text (where the punctuation mark is a device by the original author to express a meaning we may or may not know about) and an interpunctuation mark which is added by an editor (by which the editors indicate their interpretation of the intentions of the author). When we want to clearly separate the traces left by the author and our interpretation of those traces, we can therefore not simply replace a strange wiggle of the pen by a standardized punctuation mark. Nor should we replace a spurious semi-alphabetic abbreviation by a transcription during the representation stage. A clean separation of representation and interpretation therefore requires the possibility to transfer a source for processing into strings which are mixtures of standard characters, graphics of strange wiggles of the pen, and names of glyphs which are recognizably standardized.
That for every character to be represented there is a residue of doubt whether it is, e.g., an “f” or an “s” is of course true (as every reader of older German fonts is very aware). The fact that we cannot provide a perfectly neutral representation without any influence of interpretative opinions should not prevent us from trying our best, however. And, most of all, we should not let ourselves be prevented from doing our best by allegedly unchangeable implications of information technology. To quote Clifford Geertz: “I have never been impressed by the argument that, as complete objectivity is impossible in these matters (as, of course, it is), one might as well let one’s sentiment run loose. As Robert Solow has remarked, that is like saying that as a perfectly aseptic environment is impossible, one might as well conduct surgery in a sewer.” [Geertz 1973, 30]
3.1. Basic Token Strings
At first, we accept that a historian must decide, whether a character is an “s” or an “f”; more abstractly, that a character, which is recognizable as such, must be coded as such. We only relax the assumption, that a string must consist of standardized characters only. A token string is defined at its most basic as an arbitrary mixture of three components: standardized characters, symbols, and images.
A standardized character is a Unicode or an ASCII character[iv].
A symbol is a character string that represents a well understood abbreviation. The string representation is not defined by the token string library, but will frequently follow a convention like the predefined entity “&name;” of XML.
An image is a bitmapped image which is stored as a bit string following a standard image file format.
A simple token string is a byte sequence, where these components may follow in arbitrary sequence. To represent a text fragment “Sepultus ad <choice> <abbr>mon&twiggle;</abbr> <expan>monasterium</expan> </choice> <note>unreadable place name</note>” we would use a byte sequence: “Sepultus ad &mon; [byte sequence representing scanned place name]”.
Such a simple token string or STS is represented by the data structure
struct STS
{
int length;
void *string;
int *Tstart;
int *Tlength;
TS_types *Ttype;
};
The element string points to the byte sequence which represents length tokens. The elements Tstart, Tlength and Ttype are arrays of equal length, each triplet { Tstart[i], Tlength[i], Ttype[i] } describing one token in the token string. In our case – assuming one-byte ASCII character codes for the initial text, the abbreviation as a character entity, and the unreadable name represented by a TIF file – it would look like this:
string = {'S', 'e', 'p', 'u', 'l', 't', 'u', 's', ' ', 'a', 'd', ' ', '&', 'm', 'o', 'n', ';', ' ', <binary string> }
Tstart = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 18 }
Tlength = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 6, 300 }
Ttype = {TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_SYMBOL, TS_BINARY }
As the types of string components can be easily extended, there is considerable space for an extension of this model. The following would be an alternative model to represent the same token string:
string = {"Sepultus ad ", "&mon; ", <binary string> }
Tstart = {0, 12, 18 }
Tlength = {12, 6, 300 }
Ttype = {TS_ASCII_STRING, TS_SYMBOL, TS_BINARY }
The differences between these two representations will become apparent a bit further below. For the time being, let us just explain which data structures are supposed to support this representation.
There is an enumeration
typedef enum {TS_ASCII, TS_SYMBOL, TS_BINARY, TS_ASCII_STRING, TS_MAX_TYPE} TS_type;
which is prepared to be extended by the user by adding additional codes TS_MAX_TYPE+0, TS_MAX_TYPE+1, … and an array of comparison functions
int TScompareASCII(char *a, char *b, int length);
int TScompareSYMBOL(char *a, char *b, int length);
int TScompareBINARY(void *a, void *b, int length);
int TScompareASCIIString(char *a, char *b, int length);
int (*TScomparator[TS_MAX_TYPE])(void *a, void *b, int length) =
{
TScompareASCII, TScompareSYMBOL, TScompareBINARY,
TScompareASCIIString
} ;
All comparison functions provided by the TS library return the difference between the first byte of two arguments that differ or 0 if the arguments are equal.
The call to the comparator will be wrapped into a comparison shell function, which returns INT_MIN if two tokens of different types are to be compared.
These component specific comparison functions are called internally whenever STScompare() is called, which is part of the functions provided for the handling of simple token strings. These are currently:
STS *STScreate(void);
Creates an empty simple token string.
TSstatus STSdelete(STS *string);
Frees all memory allocated for string. An empty token string is left.
TSstatus STSdestroy(STS *string);
Frees all memory allocated for string and deletes the STS structure.
TSstatus STSappendASCII(STS *string, char c);
Appends the character c as a TS_ASCII token at the end of string.
TSstatus STSappendSYMBOL(STS *string, char *symbol);
Appends the character string symbol as a TS_SYMBOL token at the end of string.
TSstatus STSappendBINARY(STS *string, void *binary, int length);
Appends length bytes of binary data from address binary as a TS_BINARY token at the end of string.
TSstatus STSappendASCII_STRING(STS *string, char *appstring);
Appends the character string appstring as a TS_ASCII_STRING token at the end of string.
int STSlength(STS *string);
Returns the number of tokens in string.
int STSsize(STS *string);
Returns the number of bytes in string.
int STScompare(STS *a, STS *b);
If only the comparison functions are used which have been discussed above, this function returns the difference between the first two bytes in a and b which differ, or 0 if the two token strings are identical. If a user has supplied additional types of components and comparison functions which deviate from this convention, the results will be defined by the user.
If two tokens of different types are compared, INT_MIN will be returned.
int STSfind(STS *a, STS *b);
If the string b occurs in a, the function returns the start position in a. Otherwise -1 is returned.
STS *STSsubstring(STS *string, int start, int length);
This function returns a copy of the token string starting at token start in string up to length tokens.
TSstatus STScopy(STS *a, STS *b);
This function copies the token string a to the token string b.
TSstatus STScopyN(STS *a, STS *b, int start, int length);
This function copies the token string starting at token start of token string a of up to length tokens to token string b.
TSstatus STSreplace(STS *oldstring, int start, int length, STS *newstring);
This function replaces the token string starting at token position start with the length length in the token string oldstring with the token string newstring. The function returns a status code reporting on the success of the operation.
TSstatus STSinsert(STS *oldstring, int start, STS *newstring);
This function inserts the token string newstring at token position start into the token string oldstring. The function returns a status code reporting on the success of the operation.
TSstatus STSerase(STS *string, int start, int length);
This function deletes length tokens starting at token position start in the token string string.
void *STSextractToken(STS *string, int start, TS_types type);
This function extracts a pointer to the token of type type at start of the token string string. If that token is not of type type, NULL is returned.
void *STSextractUnknownToken(STS *string, int start, TS_types *type);
This function extracts a pointer to the token at start of the token string string. The type of the token is written into the variable pointed to by type.
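As an illustration of how these functions work together, the following sketch rebuilds the token string of the example above in its second form (one TS_ASCII_STRING token, one TS_SYMBOL, one TS_BINARY). The variable names are assumptions; tifBuffer and tifSize are taken to hold the scanned snippet, loaded elsewhere.
STS *ts = STScreate();

STSappendASCII_STRING(ts, "Sepultus ad ");   /* the transcribed text as one token      */
STSappendSYMBOL(ts, "&mon; ");               /* the abbreviation as a TS_SYMBOL token  */
STSappendBINARY(ts, tifBuffer, tifSize);     /* the scanned place name as a TS_BINARY  */

/* STSlength(ts) now returns 3 tokens, STSsize(ts) the total number of bytes */

STSdestroy(ts);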
3.2. Storing Token Strings in Multi Stream Files and Linking Token Streams
A token stream is defined by a call to LMSsetDimensions() with dimension 1 and type MStype_tokenStream.
To make the handling of token streams easier, two convenience functions are added to the MSF library:
MSFstatus MSFsaveTokenStream(MSFSTREAM *msfstream, STS *string);
This function stores the serialized token string string as the native stream in msfstream.
STS *MSFloadTokenString(MSFSTREAM *msfstream);
This function loads the native stream as a serialized token string from msfstream and returns a pointer to an STS structure holding it.
Both functions expect a serialization of the STS structure in the order length, Tstart, Tlength, Ttype, string to make it easier for specialized applications to load only parts of the structure at a time.
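A sketch of the round trip through a multi stream file might then look as follows; msfstream is assumed to have been declared as a token stream via LMSsetDimensions() as described above, and ts to be an STS built as in the previous sketch.
MSFstatus rc = MSFsaveTokenStream(msfstream, ts);   /* serialize ts as the native stream    */

/* ... possibly in another program run ...                                                  */
STS *loaded = MSFloadTokenString(msfstream);        /* re-create the STS structure from it  */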
3.3. Representational Standoff Markup in Token Streams
The linkage mechanism described as the linked multi stream file mechanism above has been defined as generally as possible. The intention behind it was to provide a mechanism which allows the interconnection between arbitrary blocks of data which would be used to express interpretations of some historical source as flexibly as possible. A text can provide an interpretation of another text, as in the classical factual comment or editorial comment in an edition; a text can interpret an image, as in one of the most frequent usages of annotation systems; an image can interpret a text, as in a classical illustration. This we have described as interpretative markup.
Representational markup, contrastingly, tries to represent properties of tokens in a source which are additional to the character codes, symbols for glyphs or bitstrings which make up token strings; properties which are more easily open to negotiated intersubjective consensus than, e.g., the question whether a writer intends a sentence as sarcastic. We assume it is easier to reach consensus that a term in a text is in another font, than whether the reason for that is adherence to a contemporary habit, an indication that it was not yet familiar to the author, or a result of the fact that the printer had insufficiently many letters available for the main font to be used for the text. A change in the color of the ink a given person uses in an official correspondence of the 19th century could be an indication of the original supply of ink having dried up; or of a considerable rise of the author within the bureaucratic ranks. Let us just emphasize for non-historians that the second example is anything but artificial: indeed, the different colors of comments to drafts for diplomatic documents are in the 19th century quite often the only mark identifying which diplomatic agent added which opinion.[v]
Such formal properties of text can easily be derived from simply looking at a text. It is absolutely clear what “font size” means. For the sake of this clarity, we will introduce the properties to be supported by such conveniently intuitive examples. I would like to emphasize, however, that behind the obvious property of “font size” there hides the abstract principle of an “integer scaled property”. And whatever mechanism we develop for the example of a font’s size can also be applied at the more general level.
We propose to support the following intuitive textual properties, their more abstract definitions appearing in brackets:
- String mode (–> binary properties)
- String style (–> nominally scaled properties)
- String color (–> ordinally scaled properties)
- String size (–> rationally scaled properties)
Support for these properties is provided by the enhanced token string library or ETS library.
This library assumes that each string has a default property set that will be applied to all tokens in a token string by default.
3.3.1 Administration of the Table of Available Properties
A token stream is enhanced, that is, enabled to use this mechanism, by a call to:
TSstatus ETSenhance(STS *string);
This function creates a set of structures defining which properties are applicable to the extended token string and which are currently active. Together they are described by the structure:
struct ETSpropertyTable
{
ETSmodeSet modes;
ETSmodeSubset activeModes;
ETSstyleSet styles;
ETSstyleSubset activeStyles;
ETScolorSet colors;
ETScolorSubset activeColors;
ETSsizeSet sizes;
ETSsizeSubset activeSizes;
} ;
Where for modes:
struct ETSmodeSet
{
int NofModes;
char **modeNames;
} ;
This structure is initialized with {1, “ETS_MO_default”}.
struct ETSmodeSubset
{
int NofActiveModes;
int *activeModes;
} ;
This structure is initialized with {0, NULL}.
These initial values of ETSmodeSet and ETSmodeSubset indicate that so far, no mode has been activated.
For styles:
struct ETSstyleSet
{
int NofStyles;
ETSstyleTable *styles;
} ;
The initial value for NofStyles is 1.
struct ETSstyleTable
{
int NofStyleTypes;
char *styleName;
ETSstyleTypeSet styleType;
} ;
The initial values for ETSstyleTable are {1, “ETS_ST_default” }.
struct ETSstyleTypeSet
{
int NofProperties;
char *property;
} ;
The initial values for ETSstyleTypeSet are {1, “default”}.
struct ETSstyleSubset
{
int NofActiveStyles;
int *activeStyles;
int *activeProperty;
} ;
This structure is initialized with {0, NULL, NULL}.
These initial values of ETSstyleSet, ETSstyleSubset and their substructures indicate that so far, no style has been activated for the token string.
For colors:
struct ETScolorSet
{
int NofColors;
ETScolorTable *colors;
} ;
The initial value for NofColors is 1.
struct ETScolorTable
{
int NofColorTypes;
char *colorName;
ETScolorTypeSet colorType;
} ;
The initial values for ETScolorTable are {1, “ETS_CO_default” }.
struct ETScolorTypeSet
{
int NofProperties;
char *property;
int *value;
} ;
The initial values for ETScolorTypeSet are {1, “default”, 0}.
The difference between style and color is almost indiscernible from the structures administrating them. It becomes apparent in their application. The style of two specific tokens is the same or it is not. The colors of two tokens are compared via the values of their properties. So, one token can have more or less of it than another token. Assuming the user-defined color “naturalColor” has a property “red” with the value 665 and a property “blue” with the value 470, it would compare the two tokens according to the wavelength of the natural colors. A color is not conceptualized as rationally scaled, however.
struct ETScolorSubset
{
int NofActiveColors;
int *activeColors;
int *activeProperties;
};
This structure is initialized with {0, NULL, NULL}.
These initial values of ETScolorSet, ETScolorSubset and their substructures indicate that so far, no color has been activated for this token string.
For sizes:
struct ETSsizeSet
{
int NofSizes;
ETSsizeTable *sizes;
} ;
The initial value for NofSizes is 1.
struct ETSsizeTable
{
int NofSizeTypes;
char *sizeName;
} ;
The initial values for ETSsizeTable are {1, “ETS_SI_default” }.
struct ETSsizeSubset
{
int NofActiveSizes;
int *activeSizes;
float *activeValues;
} ;
This structure is initialized with {0, NULL, NULL}.
These initial values of ETSsizeSet and ETSsizeSubset indicate that so far, no sizes have been activated for the token string.
To administer this property table obviously a rather large number of fortunately quite simple utility functions is necessary. To keep things short, they will be handled only very briefly below. Generally, the interface is built around the concept that (a) every ETS has a complete property table, (b) a set of token strings can share that property table and (c) a multi stream file with an enhanced token string as native stream will save / load the property table automatically to and from persistent devices. The serialization of the ETS structure is implemented in the order length, Tstart, Tlength, Ttype, ETSpropertyTable, string to preserve the possibility for specialized applications to load only parts of the structure at a time. (Cf. section 3.2 above)
These three rules result in the following interface:
TSstatus ETSshare(STS *owner, int NofPar, STS *new1, …);
The token strings new1 – newn share henceforth the ETSpropertyTable of the token string owner. Any modification of the property table of one token string applies to all token strings sharing it.
TSstatus ETSwithdraw(int NofPar, STS *string1, …);
The token strings string1 – stringn henceforth share one ETSpropertyTable of their own. They no longer share it with any other token strings they shared it with before.
If two strings are stored in different MSF files, they will receive separate copies of the property table. Deleting properties from one of the two or more property tables which have originally been shared will not be reflected by the property tables in the other files.
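A minimal sketch of sharing and withdrawing; the token strings owner, stringA and stringB are hypothetical.
ETSshare(owner, 2, stringA, stringB);   /* stringA and stringB now use owner's property table     */
/* ... */
ETSwithdraw(2, stringA, stringB);       /* stringA and stringB now share a table of their own and
                                           no longer share owner's                                 */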
The table of available properties is administered by the following interface:
TSstatus ETSAddMode(STS *string, char *modename);
A new mode with the name modename is added to the property table of string.
TSstatus ETSDeleteMode(STS *string, char *modename);
The mode modename is deleted from the property table of string. All occurrences of the mode modename in the link table of the token string string are deleted. As this also applies to all other token strings which share this property table, this function should be used with caution.
TSstatus ETSAddStyle(STS *string, char *stylename);
A new style with the name stylename is added to the property table of string.
TSstatus ETSAddStyleProperty(STS *string, char *stylename, char *property);
A new property is added to the style stylename of the property table of string.
Note: “style” is a concept, not a name. If you, e.g., want to introduce a style “italic”, you need two calls:
ETSAddStyle(myString, “style”);
ETSAddStyleProperty(myString, “style”, “italic”);
TSstatus ETSDeleteStyle(STS *string, char *stylename);
The style stylename is deleted from the property table of string. All occurrences of the style stylename in the link table of the token string string are deleted. As this also applies to all other token strings which share this property table, this function should be used with caution.
TSstatus ETSDeleteStyleProperty(STS *string, char *stylename, char *property);
The property property is deleted from the style stylename in the property table of string. All occurrences of the property property of the style stylename in the link table of the token string string are deleted. As this also applies to all other token strings which share this property table, this function should be used with caution.
TSstatus ETSAddColor(STS *string, char *colorname);
A new color with the name colorname is added to the property table of string.
TSstatus ETSAddColorProperty(STS *string, char *colorname, char *property, int value);
A new property is added to the color colorname of the property table of string.
Note: “color” is a concept, not a name. If you, e.g., want to introduce a color “red”, you need two calls:
TSstatus ETSAddColor(myString, “color”);
TSstatus ETSAddColorProperty(myString, “color”, “red”, 665);
TSstatus ETSDeleteColor(STS *string, char *colorname);
The color colorname is deleted from the property table of string. All occurrences of the color colorname in the link table of the token string string are deleted. As this also applies to all other token strings which share this property table, this function should be used with caution.
TSstatus ETSDeleteColorProperty(STS *string, char *colorname, char *property);
The property property is deleted from the color colorname in the property table of string. All occurrences of the property property of the color colorname in the link table of the token string string are deleted. As this also applies to all other token strings which share this property table, this function should be used with caution.
TSstatus ETSAddSize(STS *string, char *sizename);
A new size with the name sizename is added to the property table of string.
TSstatus ETSDeleteSize(STS *string, char *sizename);
The size sizename is deleted from the property table of string. All occurrences of the size sizename in the link table of the token string string are deleted. As this also applies to all other token strings which share this property table, this function should be used with caution.
Note: “size” is a concept, not a name. If you, e.g., want to introduce a “size”, you must do so explicitly:
ETSAddSize(myString, “size”);
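Taken together, setting up a property table might look like the following sketch; the names used are the ones from the notes above and are illustrative only.
ETSenhance(myString);                                /* create the (default) property table     */

ETSAddStyle(myString, "style");
ETSAddStyleProperty(myString, "style", "italic");    /* the style "italic"                      */

ETSAddColor(myString, "color");
ETSAddColorProperty(myString, "color", "red", 665);  /* the color "red" with its value          */

ETSAddSize(myString, "size");                        /* sizes have no properties, only values   */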
3.3.2 Application of Properties to a Token String
A token string with properties is represented by an STS structure, which is expanded by a pointer to an additional structure
struct STS
{
int length;
void *string;
int *Tstart;
int *Tlength;
TS_type *Ttype;
ETS *properties;
};
Where
struct ETS
{
int **modes;
int **styles;
int **colors;
int **sizes;
} ;
Which are interpreted as follows:
- modes[n][m] ::= token n is in activemode[m] if modes[n][m] is 1 (default: -1).
- styles[n][m] ::= token n has for activestyle[m] the value styles[n][m] (default: -1).
- colors[n][m] ::= token n has for activecolor[m] the value colors[n][m] (default: -1).
- sizes[n][m] ::= token n has for activesize[m] the value sizes[n][m] (default: -1).
A substring of a token string can acquire a property in one of two ways.
If a token is added with the functions STSappendASCII(), STSappendSYMBOL(), STSappendBINARY() or STSappendASCII_STRING(), all properties which are currently defined as “active” in the ETSpropertyTable are applied to the added tokens.
If a token string is inserted with one of the functions STSreplace() or STSinsert(), it may optionally acquire properties of the token string into which it is being inserted. How exactly that is handled can be influenced by the function
TSstatus ETSinherit(STS *string, ETSinheritanceMode mode);
ETSinheritanceMode can take one of the values ETS_InheritNone, ETS_InheritAny, ETS_InheritLeft, ETS_InheritRight or ETS_InheritBoth.
Assume that insertion of the substring newstring occurs at token n of oldstring. In the case of a replacement, token n+1 is assumed to be the token after deletion of the substring to be replaced. Then:
Mode ETS_InheritNone results in insertion of newstring with all properties unchanged. This is the default.
For all other modes, any properties of newstring are first deleted. Then:
In mode ETS_InheritAny newstring acquires any property that is active for token n or n+1 in oldstring if the modes of these two tokens are not contradictory.
In mode ETS_InheritLeft newstring acquires any property that is active for token n in oldstring.
In mode ETS_InheritRight newstring acquires any property that is active for token n+1 in oldstring.
In mode ETS_InheritBoth newstring acquires any property that is active for token n as well as n+1 in oldstring.
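A brief sketch of how this might be used; it assumes that the inheritance mode is set on the string into which the insertion occurs, and that oldstring, newstring and the insertion position are defined elsewhere.
ETSinherit(oldstring, ETS_InheritLeft);   /* inherit from token n, the token to the left  */
STSinsert(oldstring, 5, newstring);       /* newstring drops its own properties and
                                             acquires those active for token 5            */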
Alternatively, a property can explicitly be assigned to any substring. The properties of a given token can be examined by a set of utility functions.
For the activation of subsets in the ETSpropertyTable of a token string the following interface is provided.
TSstatus ETSActivateMode(STS *string, char *modename, ETStoggle toggle);
In the property table of the string string the mode modename is set according to toggle. An ETStoggle can take the values ETS_on or ETS_off.
TSstatus ETSActivateStyle(STS *string, char *stylename, char *property);
If property is NULL, the style stylename is deactivated in the property table of string. Otherwise, it is activated and set to property.
TSstatus ETSActivateColorByName(STS *string, char *colorname, char *property);
If property is NULL, the color colorname is deactivated in the property table of string. Otherwise, it is activated and set to property.
TSstatus ETSActivateColorByValue(STS *string, char *colorname, int value);
The color colorname is activated in the property table of string and set to value.
TSstatus ETSActivateSize(STS *string, char *sizename, float value, ETStoggle toggle);
If toggle is ETS_off the size sizename is deactivated. Otherwise, it is activated and the size sizename is set to value.
For the application of properties to a substring of a token string the following interface is provided.
When in the following functions the token string has fewer tokens than are specified by the length argument, the function stops at the last existing token and returns a diagnostic code. If length is -1, the property is assigned until the end of the token string. The combination: start = 0, length = -1 assigns a property to the token string as a whole, therefore.
TSstatus ETSApplyMode(STS *string, int start, int length, char *modename);
Apply the mode modename to at most length tokens in string, starting at token start.
TSstatus ETSApplyStyle(STS *string, int start, int length, char *stylename, char *property);
Apply the style stylename with the value property to at most length tokens in string, starting at token start.
TSstatus ETSApplyColorByName(STS *string, int start, int length, char *colorname, char *property);
Apply the color colorname with the value property to at most length tokens in string, starting at token start.
TSstatus ETSApplyColorByValue(STS *string, int start, int length, char *colorname, int value);
Apply the color colorname with the value value to at most length tokens in string, starting at token start.
TSstatus ETSApplySize(STS *string, int start, int length, char *sizename, float value);
Apply the size sizename with the value value to at most length tokens in string, starting at token start.
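A sketch of applying properties to a substring; it assumes that “style”, “color” and “size” have been added to the property table as described in section 3.3.1, and that the token positions are illustrative.
ETSApplyStyle(myString, 10, 10, "style", "italic");    /* tokens 10..19 become italic        */
ETSApplyColorByValue(myString, 10, 10, "color", 665);  /* and get the color value 665        */
ETSApplySize(myString, 0, -1, "size", 12.0f);          /* the whole string gets size 12.0    */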
For the removal of properties from a substring of a token string the following interface is provided.
When in the following functions the token string has fewer tokens than are specified by the length argument, the function stops at the last existing token and returns a diagnostic code. When some of the length tokens starting at start do not carry the property to be removed, a diagnostic code is returned; the substring is nevertheless processed to the end. If length is -1, the property is removed until the end of the token string. The combination start = 0, length = -1 therefore removes a property from the token string as a whole.
TSstatus ETSCleanSubstring(STS *string, int start, int length);
Remove all properties from at most length tokens in string, starting at token start.
TSstatus ETSRemoveMode(STS *string, int start, int length, char *modename);
Remove the mode modename from at most length tokens in string, starting at token start. If modename is NULL, all modes are removed.
TSstatus ETSRemoveStyle(STS *string, int start, int length, char *stylename);
Remove the style stylename from at most length tokens in string, starting at token start. If stylename is NULL, all styles are removed.
TSstatus ETSRemoveColor(STS *string, int start, int length, char *colorname);
Remove the color colorname from at most length tokens in string, starting at token start. If colorname is NULL, all colors are removed.
TSstatus ETSRemoveSize(STS *string, int start, int length, char *sizename);
Remove the size sizename from at most length tokens in string, starting at token start. If sizename is NULL, all sizes are removed.
3.3.3 Evaluation of Properties of a Token String
To examine the properties of a token a structure can be filled by a call to the function
ETScurrentProperties *ETSexamineToken(STS *string, int n);
This function returns a pointer to a structure of the following type, which informs about the properties of the token n in string.
struct ETScurrentProperties
{
int NofCurrentModes;
char **currentMode;
int *modeStart;
int *modeLength;
int NofCurrentStyles;
char **currentStyle;
char **styleValue;
int *styleStart;
int *styleLength;
int NofCurrentColors;
char **currentColor;
char **colorName;
int *colorValue;
int *colorStart;
int *colorLength;
int NofCurrentSizes;
char **currentSize;
float *sizeValue;
int *sizeStart;
int *sizeLength;
} ;
For modes, styles, colors, and sizes the NofCurrent-x give the number of properties of the respective class assigned to the token. For all four the current-x array contains the names of the properties assigned, x-start contains the token number in string where this property starts, and x-length contains the length of the token substring with the same property. styleValue[i] contains the name of the style assigned within the currentStyle[i], colorName[i] contains the name of the color assigned within the currentColor[i], colorValue[i] contains the numeric value assigned within the currentColor[i] and sizeValue[i] contains the numeric value assigned within the currentSize[i].
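A sketch of evaluating the result, assuming <stdio.h> has been included; token 42 and the output format are illustrative only.
ETScurrentProperties *props = ETSexamineToken(myString, 42);
int i;

for (i = 0; i < props->NofCurrentStyles; i++)
    printf("style %s = %s, tokens %d to %d\n",
           props->currentStyle[i], props->styleValue[i],
           props->styleStart[i],
           props->styleStart[i] + props->styleLength[i] - 1);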
All comparison functions check whether a comparison must consider the properties of the tokens compared. After comparing the tokens, they will check whether all properties for which sensitivity has been enabled are equal[vi] for all the tokens in the strings to be compared. If they are not equal, a value of INT_MIN will be returned.
To control the sensitivity of comparisons the following interface is provided, where an ETStoggle can take the values ETS_on or ETS_off. By default, all comparisons are insensitive for all property differences.
TSstatus ETSsetModeSensitivity(ETStoggle toggle);
ETS_on makes all comparisons sensitive for differences in modes, ETS_off makes them insensitive.
TSstatus ETSsetStyleSensitivity(ETStoggle toggle);
ETS_on makes all comparisons sensitive for differences in styles, ETS_off makes them insensitive.
TSstatus ETSsetColorSensitivity(ETStoggle toggle);
ETS_on makes all comparisons sensitive for differences in colors, ETS_off makes them insensitive.
TSstatus ETSsetSizeSensitivity(ETStoggle toggle);
ETS_on makes all comparisons sensitive for differences in sizes, ETS_off makes them insensitive.
4. Ambiguous Tokens
The purpose of our token string approach has been to provide a coordinate system for the addressing of positions within a text, which accepts a text composed not only of standardized characters, but also of other entities, fixed atomic labels on the one hand, small graphics on the other. These strings are dedicated to representation, that is, they shall provide a way to represent the appearance of a source in an information system with as little human semantic intervention as possible. In the year 2021 this usually means to ask for disciplined humans trying to keep speculation under control during the act of transcription; and in the year 2041, hopefully, advanced handwritten text recognition algorithms, which transcribe what they recognize – most of the stuff, by then – and represent the remainder not by nonsense characters, but by graphical snippets. There is also the intriguing idea to convert the image of a text into a two-dimensional representation, a token matrix essentially, for which the addressing system of our linked multi-streams would be prepared, which we leave out of consideration at this stage, however.
We must take care of an addressing problem which appears already at the lowest level of representation: the situation where some analog data could be transcribed in different ways. Let us look at the following examples[vii] and provide proposals for the following criteria:
(Criterion 1) If in the snippet “Mar*us” it is hard to decide whether the asterisk should be read as “c” or “i”, we need a way to address this token in such a way, that an annotation can be targeted at (a) both readings, (b) only “c” or (c) only “i”.
(Criterion 2) In the example above we need a solution how to represent – and handle in a comparison and a find operation – a situation, where the different readings are assigned different probabilities (weights between 0.0 and 1.0 being guaranteed to add up to 1.0).
(Criterion 3) In the example above we need a solution how to represent – and handle in a comparison and a find operation – a situation, where the different readings are assigned different possibilities (weights between 0.0 and 1.0 being not restricted by each other).
(Criterion 4) If in the snippet “Herma*us” it is hard to decide whether the asterisk should be read as “n” or “nn”, we need, besides the solutions needed for the previous examples, a way to address the tokens “u” and “s”, which is valid independent of the reading of the intervening part of the string. (A generic solution. The obvious simple solution, representing “n” as TS_ASCII and “nn” as TS_SYMBOL, is insufficient.)
It should be emphasized that, as far as variance is concerned, we are discussing here only the variability in the identification of a textual feature in one transmitted text, i.e., the problem of representing a concrete document which has an incarnation in the physical world. We are not discussing here the problems of representing an apparatus criticus, which deals with the representation of a hypothetical text and for which we will present another model in a later document.
4.1 Ambiguous Token Strings
In the preceding sections we have assumed, for both the addresses used for textual properties by the token string library and the mono-dimensional links used within the linked multi-stream library, that links are 2-tuples (start, length), where both start and length are expressed as integers, as offsets from token 0.
Using this assumption, we arrived at the structure for a basic token string in section 3 above:
struct STS
{
int length;
void *string;
int *Tstart;
int *Tlength;
TS_types *Ttype;
ETS *properties;
};
We recapitulate: The element string points to the byte sequence which represents length tokens. The elements Tstart, Tlength and Ttype are arrays of equal length, each triplet { Tstart[i], Tlength[i], Ttype[i] } describing one token in the token string. In our case “Mar*us” would result in:
string = {'M', 'a', 'r', '*', 'u', 's'}
Tstart = {0, 1, 2, 3, 4, 5 }
Tlength = {1, 1, 1, 1, 1, 1}
Ttype = {TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII}
To represent the ambiguity indicated by ‘*’ we add two more elements to the STS structure (turning it into the ATS structure below), another TS_type TS_OVERLAY, and another TScomparator function TScompareOVERLAY:
typedef enum { TS_ASCII, TS_SYMBOL, TS_BINARY, TS_ASCII_STRING, TS_OVERLAY, TS_MAX_TYPE } TS_type;
int TScompareOVERLAY(ATSoverlay *a, ATSoverlay *b, int length);
While length is required by TScomparator, it remains unused for this comparison function as pointers have a fixed length.
struct ATS
{
int length;
void *string;
int *Tstart;
int *Tlength;
TS_types *Ttype;
ETS *properties;
int firstAmbiguity;
ATSoverlay *overlay;
};
resulting in
string = {'M', 'a', 'r', *, 'u', 's' }
Tstart = {0, 1, 2, 3, 7, 8 }
Tlength = {1, 1, 1, 4, 1, 1 }
Ttype = {TS_ASCII, TS_ASCII, TS_ASCII, TS_OVERLAY, TS_ASCII, TS_ASCII }
That is, at any point a token of type TS_OVERLAY can be inserted. This overlay consists of an array of token strings, not of individual tokens. So, if in our second example the two readings – “n” and “nn” – are to be handled, it would still be:
string = {'H', 'e', 'r', 'm', 'a', *, 'u', 's' }
Tstart = {0, 1, 2, 3, 4, 5, 9, 10 }
Tlength = {1, 1, 1, 1, 1, 4, 1, 1 }
Ttype = { TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_ASCII, TS_OVERLAY, TS_ASCII, TS_ASCII }
To keep things simple, we will develop and discuss the data structure ATSoverlay and its relationships to the STS (plus ETS) and LMSF (plus BMSF) libraries criterion by criterion.
In all levels firstAmbiguity contains the start of the first ambiguous token.
4.1.1 Ambiguous Token Strings – Crisp Ambiguity
To represent a set of alternatives as described in criterion 1 and criterion 4, we need
struct ATSoverlay
{
int NofOverlays;
STS **overlays;
} ;
If an ambiguous token string is accessed by any STS function:
- overlay 0 is used.
- all tokens which are contained in overlay 0 have as starting address within the token string as a whole the starting address of the ambiguous token plus the start within the overlay.
- all tokens after an overlay have as starting address the starting address of the overlay (if applicable, augmented by the length of previous overlays) plus the starting address within the overlay.
If an ambiguous token is explicitly accessed by any ETS function:
- all operations apply to the stack as a whole, that is to all overlays of ATSoverlay.
- As each overlay is a token string, the ETS properties it has when added to the stack belong to the individual overlay, not to the overlay token as a whole.
That is, both the STS and the ETS library operate in principle upon the ATS structure; they ignore all but the first overlay – overlay 0 – however.
To manage the overlays in ambiguous token strings the int argument start in all STS and ETS functions is replaced by an argument start of data type ATSref:
struct ATSref
{
int overlay;
int start;
} ;
To understand the relationship between a token string and this overlay structure let us look at the “Mar*us” token string. If it is stored as indicated above in an ambiguous token string, the individual characters would be addressed by the following ATSreferences:
{ 0, 0 } == 'M'
{ 0, 1 } == 'a'
{ 0, 2 } == 'r'
{ 0, 3 } --> { 0, 0 } == 'i'
{ 1, 0 } == 'c'
{ 0, 4 } == 'u'
{ 0, 5 } == 's'
To handle the overlays the following interface is provided. Note: As an ATSoverlay consists of token strings, which can contain ATSoverlays in turn, the structure is in principle recursive. As I currently cannot see a case for recursive overlays for the purpose of purely representative markup, there is no support for such recursive overlays, however. A remark on terminology: The individual strings – { ‘n’ } vs. { ‘n’, ‘n’ } – each represent one overlay of the ambiguous token; the data structure by which the overlays are bound together is in the following descriptions referred to as an overlay stack, or, for brevity’s sake, usually as stack.
Within the stack the overlays can be referred to by the index they have within the arrays representing the stack. This integer argument is usually also referred to as overlay. A stack may contain empty overlays. We will further encounter functions which allow access to a specific overlay by a name. A stack may contain a mixture of overlays with and without a name.
STSextractToken() and STSextractUnknownToken() are extended to support the extraction of TS_OVERLAY tokens, without changing their signatures.
STS *ATSextractOverlayByNumber(ATSoverlay *stack, int overlay);
This function extracts a simple token string from the ATSoverlay stack. It selects the token string stored as overlay overlay.
STS *ATSextractOverlaysByNumber(ATS *baseString, int NofPar, int overlay1, …);
This function extracts from the ambiguous token string baseString a simple token string. It selects the overlay overlay1 from the first token of type TS_OVERLAY encountered. If further tokens of type TS_OVERLAY are encountered in baseString, overlay1 is selected again, unless an additional argument overlay2 has been supplied. Generally: If there are fewer parameters overlay than there are tokens of type TS_OVERLAY in baseString, overlayn is applied as often as required.
STS *ATSextractOverlayByName(ATSoverlay *stack, char *overlay);
This function extracts a simple token string from the ATSoverlay stack. It selects the token string referred to by the name overlay. Such names can be created by
TSstatus ATScreateOverlayName(char *name, int overlay);
This function modifies an internal table of numeric equivalents for character strings, which can be used instead of ordinal numbers of overlays.
This table of numeric equivalents is maintained via an additional element ATSoverlayNames of the
struct ETSpropertyTable
{
ETSmodeSet modes;
ETSmodeSubset activeModes;
ETSstyleSet styles;
ETSstyleSubset activeStyles;
ETScolorSet colors;
ETScolorSubset activeColors;
ETSsizeSet sizes;
ETSsizeSubset activeSizes;
ATSoverlayName *ATSoverlayNames;
} ;
where ATSoverlayName is defined as
struct ATSoverlayName
{
int id;
char *name;
} ;
If the second argument – overlay – in the call to ATScreateOverlayName() is -1, the new name is assigned the highest id encountered so far, incremented by 1.
This could obviously be used to administer, with ambiguous token strings, the variance between witnesses identified by their sigla. It is not intended for this purpose, as mentioned above.
STS *ATSextractOverlaysByName(ATS *baseString, int NofPar, char *name1, …);
This function works exactly as ATSextractOverlaysByNumber(), except that the overlay in question is referred to by names supplied as character strings.
ATSoverlay *ATScreateOverlay(int NofPar, STS *overlay1, …);
This function creates an overlay structure out of the token strings, overlay1 to overlayn. The strings are assigned to overlay 1 – n in the order in which they are specified.
ATSoverlay *ATScreateNamedOverlay(int NofPar, STS *overlay1, char *name1, …);
This function creates an overlay structure out of the token strings overlay1 – overlayn. The strings are assigned to the overlays specified by the names name1 – namen.
TSstatus ATSappendOVERLAY(STS *string, ATSoverlay *overlay);
Appends the ATSoverlay overlay as a TS_OVERLAY token at the end of string.
TSstatus ATSreplace(STS *oldstring, int start, ATSoverlay *newtoken);
This function replaces the token at token position start in token string oldstring with the ATSoverlay newtoken. The function returns a status code reporting on the success of the operation.
TSstatus ATSinsert(STS *oldstring, int start, ATSoverlay *newtoken);
This function inserts the ATSoverlay newtoken at token position start into the token string oldstring. The function returns a status code reporting on the success of the operation.
TSstatus ATSreplacePartByNumber(ATSoverlay *stack, int overlay, STS *newstring);
This function replaces the token string in the overlay overlay of the ATSoverlay stack with the token string newstring.
TSstatus ATSreplacePartByName(ATSoverlay *stack, char *name, STS *newstring);
This function replaces the token string in the overlay referred to by name of the ATSoverlay stack with the token string newstring.
TSstatus ATSaddPart(ATSoverlay *stack, STS *newstring);
This function adds the token string newstring to the end of the ATSoverlay stack. It is assigned the overlay number after the highest one in stack.
TSstatus ATSaddPartByNumber(ATSoverlay *stack, STS *newstring, int overlay, TSstatus replace);
This function adds the token string newstring to the ATSoverlay stack. It is assigned the overlay number overlay.
If the overlay overlay already exists and replace is true, the function acts as ATSreplacePartByNumber(). If in this situation replace is false, stack remains unchanged and an error is returned.
TSstatus ATSaddPartByName(ATSoverlay *stack, STS *newstring, char * name, TSstatus replace);
This function adds the token string newstring to the ATSoverlay stack. It is assigned the overlay name name.
If the overlay name already exists and replace is true, the function acts as ATSreplacePartByName(). If in this situation replace is false, stack remains unchanged and an error is returned.
TSstatus ATSremovePartByNumber(ATSoverlay *stack, int overlay);
This function removes the overlay overlay from stack.
TSstatus ATSremovePartByName(ATSoverlay *stack, char *name);
This function removes the overlay referred to by name from stack.
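To close this subsection, a sketch of how the “Mar*us” example might be assembled with these functions; the variable names are illustrative, and the order of the arguments is taken here to determine the order of the overlays, with the reading given first acting as the default one.
STS *readingI = STScreate();
STSappendASCII(readingI, 'i');

STS *readingC = STScreate();
STSappendASCII(readingC, 'c');

/* stack of the two readings; the first argument is taken to become the default overlay */
ATSoverlay *stack = ATScreateOverlay(2, readingI, readingC);

STS *mar = STScreate();
STSappendASCII(mar, 'M');
STSappendASCII(mar, 'a');
STSappendASCII(mar, 'r');
ATSappendOVERLAY(mar, stack);    /* the ambiguous token */
STSappendASCII(mar, 'u');
STSappendASCII(mar, 's');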
4.1.2 Ambiguous Token Strings – Weighted Ambiguity
If we say that a character can be read as “i” or “c” and say nothing beyond that, we implicitly consider both readings as equiprobable. This may frequently be true, but in other cases it is quite likely that one of the two readings is much more probable than the other. Additionally, there will frequently exist situations where two or more readings are equiprobable and both also rather probable, a spurious third one is marginally possible, but not likely. An example for such a situation would, e.g., be the output of an OCR program trying to decide between three possible readings of a character in a German Fraktur font, where the character (f) is extremely hard to differentiate from (s) at the best of times, with a marginally possible reading as the letter i when the character is broken.
Implementing support for such a situation should relate to a more general support for ambiguity, as discussed in [Thaller 2020]. The following proposals are particularly tentative, therefore – any implementation should be influenced by considerations of how it could fit into a more systematic handling of ambiguity. To serve the purpose of this paper, exploring what it would take to implement a solution for the problem discussed here, we must try to be very concrete, however, so discussions of alternative approaches are omitted.
To support all sorts of weighting schemes, we first modify two of the data structures introduced beforehand:
struct ATSoverlay
{
int NofOverlays;
STS **overlays;
float *weight;
int status;
} ;
weight stores numeric weights which the ATS library guarantees to be between 0.0 and 1.0.
status is used as a flag, indicating some properties of the weighting system which are expressed by the following values of an enumeration ATSweights.
- ATSProbabilistic ::= The weights of these overlays sum to 1.0.
- ATSPossibilistic ::= The weights of these overlays have no known sum.
struct ATSoverlayName
{
int id;
char *name;
float defaultWeight;
int status;
} ;
If defaultWeight contains a value greater than or equal to 0.0 and a function adds a token string to an overlay structure under this name, this token string is assigned that weight in the target overlay structure.
If the token string assigned by a reference to this name is the first overlay in the overlay structure, status is copied to the ATSoverlay.
As internally there is no difference in the data structure used for an ATSoverlay between crisp and weighted ambiguous overlay stacks, the functions ATSappendOVERLAY(), ATSreplace() and ATSinsert() work the same in both cases.
A major additional problem arises in probabilistically ambiguous stacks, as it must be guaranteed that the weights are balanced. Therefore, many of the following functions will “balance” the weights of the stack before creating or modifying it, unless a user explicitly suppresses that functionality for better control over the weighting process. Balancing of probabilistically ambiguous stacks is done according to the following rule:
It is checked whether weight1 to weightn sum to 1.0.
If the sum of weights is larger than 1.0, the difference is divided by the number of overlays with weights greater than 0.0. This value is subtracted from each of the weights greater than 0.0.
If the sum of weights is smaller than 1.0, the difference is divided by the number of overlays with weights smaller than 1.0. This value is added to each of the weights smaller than 1.0.
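As the rule is purely arithmetical, a small sketch may make it concrete. ATSbalanceSketch is a hypothetical helper written only for this illustration, not part of the declared interface, and it follows the rule above literally (it does not, e.g., guard against individual weights being pushed below 0.0).
/* Hypothetical helper illustrating the balancing rule for probabilistic stacks. */
static void ATSbalanceSketch(ATSoverlay *stack)
{
    float sum = 0.0f;
    int   i, n = 0;

    for (i = 0; i < stack->NofOverlays; i++)
        sum += stack->weight[i];

    if (sum > 1.0f) {                        /* distribute the surplus            */
        for (i = 0; i < stack->NofOverlays; i++)
            if (stack->weight[i] > 0.0f) n++;
        for (i = 0; i < stack->NofOverlays; i++)
            if (stack->weight[i] > 0.0f)
                stack->weight[i] -= (sum - 1.0f) / n;
    }
    else if (sum < 1.0f) {                   /* distribute the deficit            */
        for (i = 0; i < stack->NofOverlays; i++)
            if (stack->weight[i] < 1.0f) n++;
        for (i = 0; i < stack->NofOverlays; i++)
            if (stack->weight[i] < 1.0f)
                stack->weight[i] += (1.0f - sum) / n;
    }
}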
To manage weighted overlays, the following functions are added to the ATS library:
STS *ATSextractOverlayByWeight(ATSoverlay *stack, ATSweightRelation criterion, float weight);
This function extracts a simple token string from the ATSoverlay stack.
ATSweightRelation is an enumeration which describes how the weight – between 0.0 and 1.0 – is used to select an overlay token string.
Its values are interpreted as follows:
- Equal ::= select an overlay with exactly the weight given.
- Maximum ::= select an overlay with the maximum weight occurring in this ATSoverlay.
- Minimum ::= select an overlay with the minimum weight occurring in this ATSoverlay.
- Greater ::= select an overlay with the smallest weight greater than the given one in this ATSoverlay.
- Less ::= select an overlay with the largest weight smaller than the given one in this ATSoverlay.
- GreaterEqual ::= select an overlay with the smallest weight greater than or equal to the given one in this ATSoverlay.
- LessEqual ::= select an overlay with the largest weight smaller than or equal to the given one in this ATSoverlay.
“Select an” rather than “select the” is no mistake: In probabilistic weighting schemes with equiprobable weights all overlays fit or do not fit all these selection criteria. In possibilistic weighting schemes, there may always be more than one overlay fitting each of these criteria in an ATSoverlay.
To solve this problem the following two functions are provided.
STS *ATSextractOverlayByWeightAndNumber(ATSoverlay *stack, ATSweightRelation criterion, float weight, int overlay);
If an ambiguous situation arises, as described in ATSextractOverlayByWeight(), and the set of qualifying overlays contains the overlay overlay, this is selected. If that overlay is not in the set of qualifying overlays, NULL is returned.
STS *ATSextractOverlayByWeightAndName(ATSoverlay *stack, ATSweightRelation criterion, float weight, char *name);
If an ambiguous situation arises, as described in ATSextractOverlayByWeight(), and the set of qualifying overlays contains the overlay referred to by name, this is selected. If that overlay is not in the set of qualifying overlays, NULL is returned.
STS *ATSextractOverlaysByWeight(ATS *baseString, int NofPar, ATSweightRelation criterion1, float weight1, …);
This function extracts from the ambiguous token string baseString a simple token string. The selection mechanism for overlays from individual TS_OVERLAY tokens is the same as described in ATSextractOverlaysByNumber(), the successive pairs ( criterioni, weighti) are used as described in ATSextractOverlayByWeight().
STS *ATSextractOverlaysByWeightAndNumber(ATS *baseString, int NofPar, ATSweightRelation criterion1, float weight1, int overlay1, …);
This function extracts from the ambiguous token string baseString a simple token string. The selection mechanism for overlays from individual TS_OVERLAY tokens is the same as described in ATSextractOverlaysByNumber(), the successive triplets ( criterioni, weighti, overlayi ) are used as described in ATSextractOverlayByWeightAndNumber().
STS *ATSextractOverlaysByWeightAndName(ATS *baseString, int NofPar, ATSweightRelation criterion1, float weight1, char *name1, …);
This function extracts a simple token string from the ATSoverlay stack. The selection mechanism for overlays from individual TS_OVERLAY tokens is the same as described in ATSextractOverlaysByNumber(), the successive triplets ( criterioni, weighti, namei ) are used as described in ATSextractOverlayByWeightAndName().
TSstatus ATScreateWeightedOverlayName(char *name, int overlay, float weight, ATSweights type);
This function creates an overlay name name, representing the overlay number overlay, with the weight weight and the type type.
ATSoverlay *ATScreateWeightedOverlay(ATSweights type, TSstatus balance, int NofPar, STS *overlay1, float weight1, …);
This function creates an overlay structure of type type out of the token strings overlay1 – overlayn. The strings are assigned to overlay 1 – n in the order in which they are specified.
The overlays are assigned the weights weight1 – weightn.
If the sum of weights is not 1.0 and the stack is probabilistic it is balanced if balance is true. Otherwise, an error is diagnosed.
ATSoverlay *ATScreateWeightedNamedOverlay(ATSweights type, TSstatus balance, int NofPar, STS *overlay1, char *name1, float weight1, …);
This function creates an overlay structure out of the token strings, overlay1 to overlayn. The strings are assigned to the overlays specified by the names name1 – namen.
The overlays are assigned the weights weight1 – weightn.
If the sum of weights is not 1.0 and the stack is probabilistic it is balanced if balance is true. Otherwise, an error is diagnosed.
TSstatus ATSweightTokenByNumber(ATSoverlay *token, ATSweights type, TSstatus balance, TSstatus partial, int NofPar, int overlay1, float weight1, …);
The crisp ambiguous token token is weighted by the weights weight1 – weightn, which are assigned to the overlays overlay1 – overlayn, with weighting type type; the weights are balanced, unless balance is false.
If weights are provided for non-existing overlays an error is diagnosed. If for some overlays no weights are specified and partial is true, the missing ones are calculated during balancing if balance is true. If in such a situation partial or balance are false an error is diagnosed.
TSstatus ATSweightTokenByName(ATSoverlay *token, ATSweights type, TSstatus balance, TSstatus partial, int NofPar, char *name1, float weight1, …);
The crisp ambiguous token token is weighted by the weights weight1 – weightn, which are assigned to the overlays referred to by name1 – namen, with weighting type type; the weights are balanced, unless balance is false.
If weights are provided for non-existing overlays an error is diagnosed. If for some overlays no weights are specified and partial is true, the missing ones are calculated during balancing if balance is true. If in such a situation partial or balance are false an error is diagnosed.
TSstatus ATSunweightToken(ATSoverlay *token);
Remove all weights from token.
TSstatus ATSreplaceWeightedPartByNumber(ATSoverlay *stack, int overlay, STS *newstring, TSstatus balance, float weight);
This function replaces the token string in the overlay overlay of the ATSoverlay stack with the token string newstring, assigning it the weight weight.
If the stack is probabilistic, the weights in the stack are rebalanced unless balance is false.
TSstatus ATSreplaceWeightedPartByName(ATSoverlay *stack, char *name, STS *newstring, TSstatus balance, float weight);
This function replaces the token string in the overlay referred to by name of the ATSoverlay stack with the token string newstring, assigning it the weight weight. Afterwards, stack is rebalanced if it is probabilistic and balance is true.
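For example, replacing the reading stored under a hypothetical overlay name "witnessB" in the stack witnesses from the sketch above, and letting the stack rebalance:
/* Sketch only: newReading is assumed to be an STS built earlier;
   "witnessB" is a hypothetical overlay name.                            */
TSstatus rc = ATSreplaceWeightedPartByName(witnesses, "witnessB",
                                           newReading, TS_TRUE, 0.4f);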
TSstatus ATSaddWeightedPart(ATSoverlay *stack, STS *newstring, TSstatus balance, float weight);
This function adds the token string newstring to the end of the ATSoverlay stack, assigning it the weight weight. It is assigned the overlay number following the highest one currently in stack.
Afterwards the stack is rebalanced if it is probabilistic and balance is true.
TSstatus ATSaddWeightedPartByNumber(ATSoverlay *stack, STS *newstring, int overlay, TSstatus replace, TSstatus balance, float weight);
This function adds the token string newstring to the ATSoverlay stack assigning the weight weight. It is assigned the overlay number overlay.
If the overlay overlay already exists and replace is true, the function acts as ATSreplaceWeightedPartByNumber(). If in this situation replace is false, stack remains unchanged and an error is returned.
Otherwise, the weights in the stack are rebalanced if it is probabilistic and balance is true.
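The interplay of replace and balance might then be used as in the following sketch, which again relies on the placeholder values introduced above; the concrete error values of TSstatus are not specified in this paper, so the check is only indicated.
/* Sketch only: extraReading is assumed to be an STS built earlier.      */
TSstatus rc = ATSaddWeightedPartByNumber(
        witnesses, extraReading,
        3,         /* overlay number to be used                          */
        TS_TRUE,   /* replace: overwrite overlay 3 if it already exists  */
        TS_TRUE,   /* balance: rebalance the probabilistic stack         */
        0.25f);    /* weight assigned to the new token string            */
/* An actual implementation would test rc for the error conditions
   described above.                                                      */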
TSstatus ATSaddWeightedPartByName(ATSoverlay *stack, STS *newstring, char *name, TSstatus replace, TSstatus balance, float weight);
This function adds the token string newstring to the ATSoverlay stack assigning the weight weight. It is assigned the overlay name name.
If the overlay name already exists and replace is true, the function acts as ATSreplaceWeightedPartByName(). If in this situation replace is false, stack remains unchanged and an error is returned.
Otherwise, the weights in the overlay stack are rebalanced if it is probabilistic and balance is true.
TSstatus ATSremoveWeightedPartByNumber(ATSoverlay *stack, int overlay, TSstatus balance);
This function tries to remove the overlay overlay from stack.
If the stack is probabilistic and balance is true, the weights in the stack are rebalanced: the weight of the removed overlay is divided by the number of remaining overlays in the stack and the resulting value is added to each of the remaining weights.
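To give a simple numerical illustration: if a probabilistic stack holds three overlays with the weights 0.5, 0.3 and 0.2 and the overlay carrying 0.2 is removed, 0.2 / 2 = 0.1 is added to each of the two remaining weights, which therefore become 0.6 and 0.4, again summing to 1.0.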
TSstatus ATSremoveWeightedPartByName(ATSoverlay *stack, char *name, TSstatus balance);
This function tries to remove the overlay referred to by name from stack.
The remaining stack is rebalanced if it is probabilistic and balance is true.
5. Linking Token Streams
As token strings are conceptualized emphatically for representational purposes, the possibility of embedding links into them has not been emphasized. There may nevertheless be good reasons to do so: it would be quite rational, e.g., to represent a feature of a source medium for which no standard glyph exists, but which can be clearly identified during transcription, not by a symbol or a binary token, but by a pointer to an image file. And obviously this may lead to situations where not an image file as a whole, but only a section of it, should be included. For this purpose, two more token types and two more comparator functions are needed.
TS_LMSF represents a token which starts with an LMSFLid (assumed to be of a length of four bytes), immediately followed by an ASCII string representing a valid file name.
TS_BMSF represents a token which is meaningful only within a BMSFSTREAM; it consists of an LMSFLid referring to the BMSFinLink structure of the stream.
So TS_type has to be extended to:
enum { TS_ASCII, TS_SYMBOL, TS_BINARY, TS_ASCII_STRING, TS_MAX_TYPE, TS_OVERLAY, TS_LMSF, TS_BMSF} TS_type;
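How the content of a TS_LMSF token described above might be laid out is sketched below; this is only one layout consistent with that description, and the name LMSFtokenContent is introduced here purely for illustration.
/* Illustrative layout only: a four byte LMSFLid immediately followed
   by an ASCII string with a valid file name.                            */
typedef struct {
    LMSFLid id;         /* link id, assumed to occupy four bytes         */
    char    fileName[]; /* ASCII file name, length varies                */
} LMSFtokenContent;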
int TScompareLMSF(void *a, void *b, int length);
Expects the first four bytes of each of the two arguments to contain an integer. If that integer is negative, the two file names starting at byte four are compared. Otherwise, the two numbers are interpreted as LMSFLids and the remainder as the names of the files to which they point. Whatever they point to is extracted and compared by the appropriate comparator function.
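A skeleton of this comparator, under the assumption that an int occupies the first four bytes, might look as follows; the resolution and comparison of the linked content depends on machinery not declared in this paper and is therefore only indicated by the hypothetical helper compareLinkedContent().
#include <string.h>
/* Hypothetical helper: resolves each LMSFLid within the named file and
   compares the extracted content with the appropriate comparator.       */
extern int compareLinkedContent(int idA, const char *fileA,
                                int idB, const char *fileB);
int TScompareLMSF(void *a, void *b, int length)
{
    int idA, idB;
    (void)length;                        /* not needed for this token type */
    memcpy(&idA, a, sizeof idA);         /* first four bytes: an integer   */
    memcpy(&idB, b, sizeof idB);
    if (idA < 0 && idB < 0)              /* no link ids: compare the file  */
        return strcmp((char *)a + 4,     /* names starting at byte four    */
                      (char *)b + 4);
    /* otherwise the ids are LMSFLids and the remainders are file names;
       the mixed case (only one id negative) is left open in this sketch.  */
    return compareLinkedContent(idA, (char *)a + 4, idB, (char *)b + 4);
}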
int TScompareBMSF(LMSFLid *a, LMSFLid *b, int length);
The two LMSFLids are looked up in the BMSFinLink structure. Whatever they point to is extracted and compared by the appropriate comparator function. The third argument remains unused.
References
[Barnet 2013] Belinda Barnet: Memory Machines. The Evolution of Hypertext, Anthem Press, 2013.
[Berners Lee 1990] Tim Berners-Lee: Topology. https://www.w3.org/DesignIssues/Topology.html (accessed December 14th 2020)
[Claybrook 1983] Billy G. Claybrook: File Management Techniques, John Wiley, 1983.
[Coombs 1987] James H. Coombs et al.: “Markup Systems and the Future of Scholarly Text Processing”, in: Communications of the ACM 30(1987) 933-947.
[Exif 2019] CIPA Standardization Committee: Exchangeable image file format for digital still cameras: Exif Version 2.32, CIPA, 2019. http://cipa.jp/std/documents/download_e.html?DC-008-Translation-2019-E (accessed December 17th 2020)
[Geertz 1973] Clifford Geertz: “Thick Description. Toward an Interpretive Theory of Culture”, in Clifford Geertz: The Interpretation of Cultures, Basic Books, 1973.
[Thaller 1993] Manfred Thaller: “Historical Information Science: Is there such a Thing? New Comments on an Old Idea.”, In: Seminario discipline umanistiche e informatica. Il problema dell’ integrazione, ed. Tito Orlandi, 51-86 (= Contributi del Centro Linceo interdisciplinare ‘Beniamino Segre’ 87), Rome, 1993. Reprinted under the same title in: Historical Social Research Supplement 29 (2017), 260-286.
[Thaller 2018] Manfred Thaller: “On Information in Historical Sources”, blog entry: https://ivorytower.hypotheses.org/56 (accessed December 14th 2020); page numbers refer to the pdf version: https://www.academia.edu/43660932/On_Information_in_Historical_Sources.
[Thaller 2020] Manfred Thaller: “On Vagueness and Uncertainty in Historical Data”, blog entry: https://ivorytower.hypotheses.org/88 (accessed December 14th 2020); page numbers refer to the pdf version: https://www.academia.edu/43660950/On_vagueness_and_uncertainty_in_historical_data.
[W3C 1995] A Little History of the World Wide Web, https://www.w3.org/People/Berners-Lee/History.html (accessed December 14th 2020)
[i] For reasons which are not of interest here, computing systems always start counting at zero, not one.
[ii] Remember: we start counting with 0.
[iii] I avoid a specific reference here, as Ted Nelson's unconventional publishing habits make it extremely difficult to find a quotation which is stable and reliably traceable. Any attempt to read up on his hypertext model / project Xanadu, which goes beyond ridiculing the latter for never delivering software, will prove the point.
[iv] To keep things simple, we always use char in the signatures of the functions in this paper; wchar varieties of them would not change anything fundamental.
[v] That example is repeated from [Thaller 1993, 268]. This whole section draws heavily on [Thaller 1993, 274-278]. The main difference is that in 1993 these considerations were used for solutions which ultimately lead to embedded markup, while the current presentation uses standoff solutions throughout.
[vi] For many properties it would make sense to specify the sensitivity of the comparison, e.g., accepting two sizes as equal if they differ only by a numeric value of less than 3. To keep this document simple, this has not been covered yet.
[vii] For brevity's sake we use only token strings consisting of characters as examples. The solutions of course also have to apply to situations involving the other types of tokens.