Non-Linear Fully Two-Dimensional Writing System Design 2005-01-05

Note: This was never completed, and I've lost the original accompanying diagrams.

Introduction

This project is an offshoot of an earlier one, "On the design of an ideal language". While I intend to finish that eventually, my interest for now is drawn to this particular aspect as it holds what I think is the most potential for real innovation.

For some years now, I have been mulling over the idea of a non-linear "fully" two-dimensional writing system (NLF2DWS for short). The idea is a hard one to grasp, and as of yet I am still not clear on all the details of how it would play out in practice, but I will try to explain as clearly as I can the various necessary and optional characteristics, features, and potential implementations that I do understand. I will also try to include a review of existing systems that I know of, to see how they compare to these specs; and to address the various arguments about the cognitive, pragmatic, and future consequences of it.

Before starting, though, I want to make a few disclaimers. I am neither the first nor the only person to work on a 2DWS, and I claim no especial right to the idea. Others have proposed other 2DWSs, which share and differ in certain properties. I do not wish to say anything bad about them, as themselves; I will only be talking about other systems from the perspective of my concept, developed here. I will reserve the term NLF2DWS to describe it (until I can come up with something more elegant). I encourage those who would like to develop related but different ideas, or to argue against mine on some sort of systematic grounds, to give different names to their concepts, that we may better distinguish an argument that agrees on axioms/specs (that is, the major specifications as outlined in this essay) from one that does not (and perhaps claims its specs will result in a somehow preferable result).

NLF2DWS does not refer to any particular implementation. It is a specification of conlang technology, not of an actual language. All examples I give here are intended solely as examples, and very simple ones at that; I am not particularly attached to them, and in fact believe that better (more elegant, more aesthetic, simpler, more integrative, more powerful, etc) implementations can be devised to fulfill the primary goals. I apologize in advance for my lack of artistic skill.

I will try to distinguish ‘necessary’ features – that is, ones that are absolutely critical to having a recognizable NLF2DWS – from those that are optional or probable (i.e. consequences of the only plausible implementations I can think of), or that fit my own aesthetic preferences. There are also several ideas discussed here that are not integral to a NLF2DWS per se, but that I feel would work very synergistically with it.

If you understand what I am trying to do here, please feel free to suggest other ways to describe it that might get better at that gestalt; I’m not afraid of criticism. Please do be careful to distinguish whether you are talking about a better way of getting at the same idea, or disagreeing with the idea itself.

For the most part, the ideas I outline here are my own (though come up with independently by others as well), and I take the responsibility for them and for any errors. I’d like to express my thanks to my friend Neil Herriot and to the many people on the Brown University CONLANG mailing list for their help in fleshing it out. I would also like to point you to Ted Chiang’s short story Story of Your Life, as the language "Heptapod B" sketched therein is very much in line with what I am looking for.

What I describe here is, I believe, a very dramatically different thing from (nearly) any existing system of writing, note-taking, or displaying information. It is intended first and foremost to be a written language. Its ability to be rendered into speech is a very low priority to me; no sacrifices will be made for the benefit of speech, if they impede a more powerful or elegant writing system. In fact, I don’t consider its ability to render speech, or to be spoken, to be particularly critical; I flatly reject the concept that writing systems need be mere codes for speech, though this may indeed be true for the vast majority of natural-language writing systems.

I also reject the notion that all humans always "think" in language – that is, that the original form of thought is of the same form and structure as speech. I know people whose "thought" is experienced as speech, or text, or moving images, or more exotic stuff. I personally do not think in any of the languages I know, except when rehearsing a conversation or when fixing something in memory – and the latter only because I have no other viable method available for symbolizing thought. My normal thought process is much more abstract, occurring as a sort of web of cascading ideas; sadly, there is at present no way for me to properly express or encode this. A NLF2DWS would address this need, as discussed towards the end of this paper.

I feel that NLF2DWSs have some extremely interesting implications and applications, as I will describe later. I trust that this paper adequately conveys both the idea itself, and why I consider it so significant.

As a footnote that I will not elaborate on for now, I would like to point out that, if successful, a NLF2DWS would present a potentially serious attack on Chomskian ‘embedded [linear] syntax’ – or at least require a major revision of it – and to some extent support a more cognitive linguistics style, neural theory of language.

Theory & high-level concepts

Non-Linearity (NL)

Non-linearity is the defining feature of a NLF2DWS – but in some ways, it is easier to describe what this is not, than what it is.

It is not "non-linear" in the sense used on the Wikipedia entry titled "non-linear writing systems", or in the dictionary.

It is not the sort of quote-unquote "non-linear" arrangement as in Hangul (the Korean writing system), as Hangul could for all practical purposes be rephrased in terms of a purely arbitrary set of symbols, with completely linear syntax and semantics, with a large but easily derived symbol set (i.e. one for every possible syllable). Not to mention that Hangul could just be rearranged as if it were a normal alphabet composed of its jamo (letters).

It is not a "tree" format, in the computer science sense (it is a multigraph). Nor is it a simple re-traversal of a linear writing system – neither the sort of "sentence structure diagrams" taught to elementary school kids, nor the parse trees and ‘surface vs. deep structure’ trees of grammarians.

Nor is it a "grid" format in any sense (though a grid design could be called "two-dimensional"), because a grid creates severe constraints on the range and potential connections of elements – in addition to being, in my opinion, very inelegant. This is not about elements being placed in particular ‘slots’ on the paper, but about them having particular kinds of interconnections amongst each other.

In fact, I believe I can say that it is not possible, short of crippled or very simple specialty cases, to directly convert a linear writing system to a non-linear one without either losing a lot of meaning (NL→L), being extremely inelegant by virtue of failing to take advantage of better design (L→NL), or becoming functionally incomprehensible (e.g. the flattened list format in which an N-dimensional array is stored in the C programming language).

So, what is non-linearity?

At its core, NL has to do with how concepts are arranged, both on physical paper and in their more abstract form. A NL system is a multigraph; its components are, or can be, extremely interconnected. There is no single traversal method, though there may be some conventional ones. There may not be a ‘traversal’ method at all, as such; I’ll deal with that under ‘psychological ramifications’ below. A road map is a form of non-linear writing.

Non-linearity is a completely suffusive feature of a NLF2DWS. If what you are looking at is only trivially NL, then it’s probably not what I want. This means that at minimum it affects the syntax and semantics of the language, and ideally would affect the morphology, concepts used, and the very nature of certain ‘speech acts’ such as jokes and story-telling. (I’ll elaborate on that below.)

There are essentially two major forms of NL that I can think of as being plausible implementations: node-and-connection (N&C), and massively fusional (MF). I will describe them separately, but I take it for granted that any real, useful implementation will likely be a combination of both.

Under N&C, certain kinds of concepts would be ‘nodes’ – analogous to ‘roots’ in a polysynthetic language. These would then be connected to each other in various ways; these connections would likely vary in form and function, much like a kind of syntactic ‘morphemes’. Visually, a good first approximation is the standard image of a neural network – a bunch of words connected by lines.
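To make the N&C picture concrete, here is a minimal sketch (in Python, with all node and relation names invented for illustration) of a writing modelled as an edge-labelled multigraph – highly interconnected, with no privileged traversal order:

```python
from collections import defaultdict

class Writing:
    """A toy model of a node-and-connection (N&C) writing:
    an edge-labelled multigraph with no single traversal order."""

    def __init__(self):
        self.nodes = set()
        # (a, b) -> list of relation labels; parallel edges with
        # different labels are what make this a multigraph.
        self.edges = defaultdict(list)

    def connect(self, a, b, relation):
        self.nodes.update((a, b))
        self.edges[(a, b)].append(relation)

    def neighbours(self, node):
        """Every node one hop away, in no particular order --
        there is no single 'next word' as in linear text."""
        return {b for (a, b) in self.edges if a == node} | \
               {a for (a, b) in self.edges if b == node}

# Example: "the (black) cat ate the fish" as nodes and typed links
w = Writing()
w.connect("eat", "cat", "agent")
w.connect("eat", "fish", "patient")
w.connect("cat", "black", "attribution")
```

Reading can begin at any node; `neighbours` deliberately returns an unordered set rather than a sequence.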

Under MF, while there might still be ‘roots’, the morphological changes would be more integral (e.g. changes in thickness, orientation, color, shape, etc). Related nodes might be fused together in various ways – creating, in larger cases, a very large entity that at first glance appears to be a single (albeit complex) symbol, whose subparts are only discernible on closer examination. Single strokes would be part of multiple subparts; it would be difficult (if at all possible) to draw firm dividing lines between where one ‘character’ ends and another begins.

MF, in my estimation, would be a more elegant but more difficult form to implement. (MF is also the kind described as Heptapod B.)

Another key feature of this version of NL is that, in principle, any element should be able to connect to any other element, so long as it is a semantically plausible / meaningful connection – and for certain kinds of connections, this may mean multiple simultaneous connections. E.g., when describing what you ate last night, there are multiple patients of that verb – all of those would connect directly to the verb.

Note that it is entirely possible to have recursive loops clearly laid out with this – in fact, it’s one of the tests for non-linearity. For example, "event A causes event B causes event C causes event A". In a linear system (like the preceding sentence), you have a sort of conceptual connection between head and tail of that list that you fill in mentally; in a NL system, it forms a simple triangle (or more complicated structure) with causal links and immediately obvious circuits.
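The causal triangle can be checked mechanically once a writing is modelled as a directed graph; in a NL layout the circuit is simply visible, whereas a linear reader must reconstruct it. A sketch (the depth-first search is a standard technique, not part of any proposed system):

```python
def find_cycle(edges, start):
    """Depth-first search for a circuit through `start` in a directed
    graph given as {node: [successors]}. Returns the cycle as a list
    of nodes, or None if there is no loop through `start`."""
    def dfs(node, path):
        for nxt in edges.get(node, []):
            if nxt == start:
                return path + [nxt]      # closed the loop
            if nxt not in path:
                found = dfs(nxt, path + [nxt])
                if found:
                    return found
        return None
    return dfs(start, [start])

# "event A causes event B causes event C causes event A"
causes = {"A": ["B"], "B": ["C"], "C": ["A"]}
print(find_cycle(causes, "A"))  # -> ['A', 'B', 'C', 'A']
```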

All other features I am about to describe are, in my opinion, optional. That is, they do not define a NLF2DWS. However, I do believe that they would be present in any good NLF2DWS. Unless mentioned otherwise, I will be describing them in the context of a N&C framework. They apply equally to one that is more fusional; this is a convenience for the sake of easier comprehension.

Sub-symbolic nonlinearity & component exposure

In all current writing systems (with some small exceptions), individual symbols are entirely arbitrary. Not solely in the sense that words are arbitrary symbols compared to the concepts they represent – I have no issue with that, and believe it a necessary fact of human cognition – but that you could in principle replace every symbol with a serial number, or random geometric shape, and have no worse a system. This would be perhaps somewhat inelegant, but it would not in the least impair syntax, semantics, etc – and only marginally affect comprehension once acclimatized to.

Hangul is one exception. Its characters are made up of sub-symbols, which are in turn completely standard alphabetic symbols. They are combined in a set way to make a syllabic symbol. This could be easily reconfigured as a string – simply disregard the stacking rule – and thus does not count as non-linear in any sense discussed here.

Chinese characters (and Mayan, and Egyptian, etc) are another. They are derived from pictographs, but regularized and made highly iconic. Many characters are combinations of other characters – e.g. "forest" being three "tree" symbols, or "love" having the character for "heart" at its base. While this is not ‘nonlinear’, it is compositional in a somewhat better sense, and their arrangement does matter (e.g. some characters flipped upside down do exist as completely different characters). Its composition, however, is entirely unexposed. Every character could effectively be replaced by its serial number, with the comprehension none the worse; this is in fact exactly what a letter in Chinese looks like, if you read its Unicode version – just a long series of serial numbers.

My concept of characters in a N&C-paradigm NLF2DWS would have each character be not just visually different, but have part of its semantic function encoded in its actual form. For example, a relatively simple verb like "eat" (assuming for the moment that its ‘semantic roles’ are structured as they are in English) has two main related concepts – an entity doing the eating, and the entity eaten. Therefore, the symbol for it should likewise have "attachment points" (APs) that are symbolically related to their roles – assigned either on a character-by-character basis, via some sort of systematic method (e.g. a ‘standard’ way to designate the ‘patient’ attachment point), or (again, most likely) a combination of the two.

Why is this important? First, in a simple sense, it is analogous to kerning in linear fonts. Without it, all characters take up as much ‘space’ as the largest character, even if they do not use it.

Second, it helps the ease of comprehension of the system. Though symbols are still, well, symbolic, the subparts easily designate the various roles, changes within the frame structure, etc., that are important for understanding the idea. The symbols need not be composed of subcomponents that synthesize to create the overall meaning (as in Chinese), though this would be a good thing, for the sake of mnemonics.

Having the roles ‘exposed’ in this manner makes easily clear why a sentence such as "he said to her" would be ungrammatical – it is missing the required element of what he said. Changes in role requirements – e.g. those created by ‘middle voice’ and ‘passive voice’ – would be visually represented as a simple presence or absence of those attachment-points, or as some modification to them to have them be obviously optional, or obviously implicit / unattachable.
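As a rough sketch of exposed attachment points – with the role inventory entirely invented for illustration – a symbol can be modelled as a set of required and optional slots, so that both the ill-formedness of "he said to her" and voice changes fall out as simple slot arithmetic:

```python
# Hypothetical AP inventory; role names are invented. 'say' requires
# a speaker and content, so "he said to her" (speaker and addressee
# filled, content not) comes out ill-formed.
SYMBOLS = {
    "say": {"required": {"speaker", "content"}, "optional": {"addressee"}},
    "eat": {"required": {"agent", "patient"}, "optional": set()},
}

def well_formed(symbol, filled):
    spec = SYMBOLS[symbol]
    missing = spec["required"] - filled          # unfilled required APs
    unknown = filled - spec["required"] - spec["optional"]
    return not missing and not unknown

def passivize(symbol):
    """Voice change as AP surgery: the agent AP becomes optional."""
    spec = SYMBOLS[symbol]
    return {"required": spec["required"] - {"agent"},
            "optional": spec["optional"] | {"agent"}}
```

Under this toy model, `well_formed("say", {"speaker", "addressee"})` is false precisely because the content slot is empty, mirroring the visual absence of a required attachment.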

A further elaboration of this concept would have individual symbols represent, not ‘words’ in the traditional sense, but frames themselves. For example, one can represent the ‘commercial transaction frame’ – in which a seller and a buyer exchange goods for money – and many related terms quickly become simple alterations of it. Buy, sell, exchange, sale, lease, lend, borrow, rent, cost, store, salesperson, haggle, etc. as well as aspectual differences (be in the middle of a transaction, have completed it, start it, etc), are all fairly obvious and simple morphological changes to particular nodes, or to the position / orientation / shape of APs within the structure (e.g. putting the ‘goods’ AP closer to the ‘buyer’ or the ‘seller’ to indicate its present ownership, or another change to indicate possession). This has the potential to reduce the required lexicon by about one order of magnitude, while simultaneously making it more analyzable, intuitive, and easily ‘glanced’ (see below for that).
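A minimal sketch of the frame idea, with all names invented: one shared structure, and ‘lexemes’ derived as small alterations (which role is profiled, which aspect is marked) of it:

```python
# A toy 'commercial transaction' frame; derived 'lexemes' are small
# alterations of the one shared structure. All names are invented.
FRAME_ROLES = ("buyer", "seller", "goods", "money")

def derive(profile, aspect="neutral"):
    """A derived term = the frame, plus which role is profiled and
    which aspect is marked on the event node."""
    assert profile in FRAME_ROLES or profile == "event"
    return {"frame": "commercial_transaction",
            "profile": profile, "aspect": aspect}

buy  = derive("buyer")                      # the transaction, from the buyer's side
sell = derive("seller")                     # ... from the seller's side
sale = derive("event", aspect="completed")  # the completed event itself
```

The point of the sketch: `buy`, `sell`, and `sale` share one frame, so the lexicon stores one structure plus a few parameters instead of many unrelated entries.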

Levels of detail (LOD) / "Zoom"

When you look at a picture, a map, or any visual scene, you first become aware of a sort of broad-stroke version of it. That is, you can see where large objects are in relation to each other, where they are in relation to boundaries, what objects are connected, etc. Look more, and more closely, and you begin to make out more detail about all of these, and perhaps to see smaller connections also that may not have been immediately apparent.

A NLF2DWS has the potential to implement this.

The key criterion is that, as you look at a large writing from different ‘zooms’, you easily make out different structures. Zoom all the way out, and you should see the flow of major arguments, of major figures interconnected by (at this level of detail) certain gestalts of connection. Step in more, and you see what exactly those connections are; perhaps some description of the major figures involved; a footnote or tangent here or there; etc.

This would also correspond to different ways of thinking about a problem, or a question, or a story – the fully-zoomed-out version is the "executive summary"; the fully zoomed-in one is some specialist’s version of how some particular sub-detail is implemented.

A further elaboration – one that would likely require a rather ingenious designer – would have the zoomed-out, low-fi versions of a cluster of happenings "look like" its overall meaning. E.g., a detailed description of a war could look like the character for ‘war’, or symbolize the results, or perhaps be organized in a different level of symbols – meta-symbols, as it were, whose individual ‘strokes’ are whole ‘sentences’. (This is somewhat analogous to the effect of pointillist art, if it were made such that the component pieces themselves related to their little part of the overall picture.)
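One naive way to implement LOD – assuming each element carries some importance weight, which is an assumption of this sketch rather than a feature of the spec – is to render only what passes a zoom-dependent threshold:

```python
# Sketch: each node carries an 'importance' weight in [0, 1];
# rendering at a given zoom simply hides everything below a
# threshold, so zooming out leaves only the major figures and
# their connections. All content here is invented.
nodes = {"protagonist": 0.9, "war": 1.0,
         "minor_skirmish": 0.3, "footnote": 0.1}
edges = [("protagonist", "war", 0.9), ("war", "minor_skirmish", 0.3)]

def render(zoom):
    """zoom in [0, 1]: 0 = executive summary, 1 = full detail."""
    threshold = 1.0 - zoom
    visible = {n for n, w in nodes.items() if w >= threshold}
    kept = [(a, b) for a, b, w in edges
            if w >= threshold and a in visible and b in visible]
    return visible, kept

print(render(0.2))  # zoomed far out: only the major figures survive
```

A real implementation would presumably derive the weights from the structure itself (as the meta-symbol idea above suggests) rather than store them by hand.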

Implementation Details

Types of connections in a N&C structure

A nodes & connections structured NLF2DWS would need several different kinds of connections. These take on the function of copulas, of course, but also of more "meta" semantic connections – the kind that in normal writing systems would be exemplified by a concept outline, a "flow of argument", speaker interaction, etc.

"Copular" connections

(Taken from Describing Morphosyntax, by Thomas E. Payne)

Six basic forms of copulas are:

  • equation
  • attribution / description
  • proper inclusion / subset
  • locational
  • possessive
  • existential (unary operator)

Each of these can be, in the most obvious case, a line connecting A and B, with different squiggles on the line for the different types of connection.

Of course, most languages do not distinguish between all of these different forms, and as a result have a certain amount of ambiguity – e.g. "John is a teacher" could be an equation, description, or subset. A NLF2DWS need not necessarily make as many distinctions as are possible to make.

Connecting two nodes A and B (or in the existential case, just marking A) with those would create analogues of the sentences:

  • A is the same thing as B
  • A is B-like / described by B
  • A is a B
  • A is located near / with / in / on / etc B
  • A is B’s
  • A exists / there is (an?) A

Less obviously, one could write these copular connections in visually symbolic / intuitive ways – for example, inclusion/subset relationship could be identical to a Container schema, and have the set literally "contain" the subset (as in, e.g., Fig. 5).

Of course, just as with normal languages, there is a lot of variation even within this – e.g. different types of possession (alienability, for instance), and that would be up to the individual NLF2DWS to decide on.
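The six copula types and their sentence analogues can be tabulated directly. A gloss sketch – the linear templates are merely the English renderings listed above, not part of the writing itself, where each type would instead be a differently-marked line:

```python
# Hypothetical mapping from copular connection type to a linear gloss.
COPULAS = {
    "equation":    "{a} is the same thing as {b}",
    "attribution": "{a} is described by {b}",
    "inclusion":   "{a} is a {b}",
    "locational":  "{a} is located at {b}",
    "possessive":  "{a} is {b}'s",
    "existential": "there is {a}",        # unary: marks A alone
}

def gloss(kind, a, b=None):
    """Render one connection as an English sentence analogue."""
    return COPULAS[kind].format(a=a, b=b)

print(gloss("inclusion", "John", "teacher"))  # -> John is a teacher
```

Note that the ambiguity mentioned above runs the other way too: glossing collapses distinctions, since "John is a teacher" could come from the equation, attribution, or inclusion line types.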

Conceptual connections

  • Causation (A ‘causes’ B)
  • Theory / data (A ‘supports’ B)
  • Argument structure (A is the main idea; B, C, D are supporting points; etc)
  • Source attribution (A is according to / seen in / etc B)
  • Emotional / experiential association (A ‘brings up’ B)
  • Generic association (A has some unspecified relation to B)

Primary image schemas

  • Container
  • Source-path-goal
  • …

Meta-connection changes

  • Metaphoricity – this connection is being used metaphorically / metonymically (could be dropped for implicit / poetic use)
  • Evidentiality – this connection is believed to be true because I perceived it / B said so / it’s tautological / it was always true before and probably still is / etc.
Long-distance connections

If all connections are essentially lines directly connecting point A to point B, then you have some potential problems.

First, you can have ‘collisions’ – that is, connecting lines being forced to cross each other because there are too many of them. While there are various ways this could be made to be relatively easy to make out (circuitry diagrams do so), with enough of them, this would get to be fairly messy.

Second, having point-to-point connections would somewhat constrain the potential distance of any particular connection. If, for example, there is one central item to which many others refer (e.g. the main character of a novel, or the main point of an argument), the density around that central item can become excessive; and physically far-away references will need connecting lines that wend through a large number of symbols – which is both confusing in terms of what connects to what, and could cause further collision problems.

Thus, it would probably be wise to have some sort of "remote" connection method. I can think of three: pronouns, hashing, and pointing.

Pronouns are the closest to normal language. Instead of connecting A to B, you would connect A to one of some closed set of symbols that stands for B. E.g., it could be a symbol that stands for a person, or a concept, or some other grammatical class of the language.

Hashing is a more advanced form of pronouns. Rather than being closed-class, a ‘hash’ pronoun would look like its target, but in a systematically simplified form – stripped down so that it is recognizable, but does not necessarily encode as much as the original.

Pointing is visual – it could be as simple as an arrow pointing in the direction of B. It could have some additional modifications to make the target easier to spot – e.g. the physical distance to target, or some info about the target (like in the pronouns).
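A sketch of the hashing idea, with the simplification rule invented purely for illustration: a node is reduced to its head plus a fixed amount of detail – recognizable, but cheaper to draw and read than the original:

```python
def hash_pronoun(node, keep=2):
    """A 'hash' reference: a systematically stripped-down copy of a
    node. Here a node is a (head, [details]) pair, and the rule --
    purely illustrative -- keeps the head plus the first few details."""
    head, details = node
    return (head, details[:keep])

# Full node somewhere in the writing...
target = ("protagonist", ["tall", "scarred", "left-handed", "exiled"])
# ...and its lightweight stand-in used at the far end of a connection.
ref = hash_pronoun(target)
print(ref)  # -> ('protagonist', ['tall', 'scarred'])
```

In an actual script the "details" would be strokes or sub-symbols, and the stripping rule would have to be systematic enough that readers can recover the target at a glance.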

Types of connections / modifications / fusions in a MF structure

  • Orientation
  • Shape change (= meta-symbols?)
  • Fusion

Problems & ramifications

But I like my linearity!

This is the first thing most people will go to when reacting to the idea of a NL language: how are you going to tell a story? Teach? Describe a sequence of events? Make an effective argument?

It can be easy to be dismissive of this issue, in both directions. I’ll break it down into a few subcomponents.

Time is linear

Barring some quibbling about it, yes it is. And indeed, most of our experiences happen within time – stories, for example, tend to be of the format "A happened, then B, then C and D and E". (Viz.: "boy meets girl, boy gets girl, boy loses girl")

However, there is a bit of a conflation happening here, between the fact (which I do not dispute) that certain sequences are linear, and the idea that an effective story must therefore be told in the same manner as it is experienced. That is, our paradigm of storytelling – movies, oral histories, novels, etc – has always been one of essentially leading the listener through the events, as if they were experiencing them for themselves (or as if watching them).

To write a sequence of events, you will need some sort of (linear) connection. This is true.

To write a causal chain, you may not. Indeed, many causes are circular – viz. Greek drama, or most psychological problems – or at least multithreaded. The latter could result in something that, while progressing more-or-less linearly, would still be very interwoven over its course – and thus would benefit from a NLF2DWS.

Stories / arguments are linear

In the linear language we are used to, if you want to make a point, or tell a story, or teach a skill, you need to control the sequence in which the target receives information.

For example, in teaching, you ensure that a student has learned a basic concept before teaching something that "builds upon it"; definitions before arguments; etc. In storytelling, you have a "story arc" – setup, tension building, climax, release, resolution. In rhetoric, you handhold the target through a series of logical steps, intended to ensure that the argument is sound and inescapably true. To tell a joke, you need the setup and then the (correctly timed) punch line.

Some of these, in fact, can be ‘spoiled’ by being told out of order – ruining a joke or a novel.

I don’t wish to say that there is no merit to this. In fact, fine control of how to present information to the best effect has become somewhat of an art form. It is well and good within its own context. A NL writing system, however, is not its context.

A logical argument, nonlinearly, is a rather straightforward thing. You lay out your asserted causal links directly, add links to the data you claim supports those assertions, or subdivide particular assertions into sub-parts that are themselves miniature versions of the whole. It would all fit together quite nicely and explicitly, and be potentially much easier to understand (and see the holes in) than when presented linearly.
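A sketch of such an argument graph, with invented content: causal claims and support links are plain edges, and the ‘holes’ – claims with no supporting data attached – fall out of a one-line scan:

```python
# A toy nonlinear argument: 'causes' edges are the asserted causal
# links, 'supports' edges attach data to whole claims. All content
# here is invented for illustration.
links = [
    ("smoking", "causes", "cancer"),
    ("study_1", "supports", ("smoking", "causes", "cancer")),
    ("cancer", "causes", "death"),
]

def unsupported(links):
    """Return the causal claims that have no supporting link --
    the 'holes' that a nonlinear layout makes visible."""
    claims = [l for l in links if l[1] == "causes"]
    backed = {l[2] for l in links if l[1] == "supports"}
    return [c for c in claims if c not in backed]

print(unsupported(links))  # -> [('cancer', 'causes', 'death')]
```

In the linear rendering of the same argument, the missing support is buried in prose; here it is a queryable property of the structure.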

A story, on the other hand, would likely become something very different from the sort we know now. One way to describe a truly effective linear story, poem, or joke is that its quality lies in an appreciation of the skill of its sequencing, in addition to the effectiveness of the imagery used.

A nonlinear story would derive its cleverness from the skill of its arrangement. Realizing that the reader can start anywhere they like, and traverse (or random-walk) the writing in any manner they like, the equivalent of a ‘punch line’ would be a sort of gestalt appreciation that comes from understanding how the various parts are interconnected, or how the microcosm fractally reduplicates the macrocosm, or as yet unforeseeable other aesthetics that may develop.

[TODO: elaborate. Perhaps an example?]

How would you fit it into a book?

You wouldn’t, preferably. At least, not one of the kind we have now.

Books work for linear languages because they do not care how they are divided up. A nonlinear language would suffer from being chopped up into 8.5"x11" (or otherwise-shaped) chunks.

While we can, of course, devise methods to make this chopping less traumatic, a better method would be to have a more dynamic system – e.g. a computer-interactive one. These already exist, for other things; see for example the "visual thesaurus". To navigate, you can zoom in or out, follow links, jump around in the net, set certain levels of zoom to "fuzz out" so as to lower clutter, etc.

Smaller examples, of course, might well fit on one page – or one wall – where the lack of a dynamic reading interface would not pose a problem. Or one could work with a static ‘zoom’ of the sort seen in "pointillist" photo-collages, and vary the reader’s distance, use a magnifying lens and small printing, etc.

TODO

  • Features
    • suffusive nonlinearity
    • Principle of Iconicity at subsymbolic (but still abstract!) levels
    • Other elements from ODIL?
    • Frame foregrounding
  • Psychological ramifications
    • traversal method (or lack thereof)
    • cognitive maps
    • activation-spread analogy
    • Chunking limits – 7 +/- 2
  • Applications
    • why bother?
    • poetry, aesthetics
    • better way to encode thoughts / notes
    • thought experiment
      • will it change the mode people think in? (eg convert away from thinking in speech/text -> …?)
      • processing delay linear w/ rotation away from "canonical" if no particular canonical orientation?
        • Would people force one upon it?
      • Spotlights vs. single-speck experience of thought
    • Multithreading
      • Conversations
      • Multiple ‘storylines’ intertwining
      • IM format?
    • way to encode nonlinear thought
    • laying out arguments semi-flowchart-style
    • easy skimming, easy ‘at a glance’ understanding
    • potential research
    • spoken-language agnostic
      • potential for a true auxiliary language / interlingua?
    • Glance-ability
    • Art (embedded in normal art; use in scenery; use AS scenery; …?)
      • Nonlinear poetry (= zen / contemplative?)
  • Future use
    • dynamic systems
      • user-responsive
        • change area looked at
        • change area not looked at
      • sequence-controlled elaboration
      • interactive stories / GUIs
    • 3d+
    • Simultaneous / cooperative writing
    • HUDs
  • Problems
    • time is linear
      • temporal, ergo linear, processing
    • sounds (e.g. aural or linear-written names)
    • density of space usage
    • Highlighting symbols – size, thickness, color, etc
      • To create better outline-based glance recognition (e.g. word shape)
    • Getting processing speed / method / experience similar to that of normal visual world
    • Windowing (aka pagination)
    • "hooks" (start/end/etc meta)
    • Writer vs reader ease of use – usability with a simple pen, vs high-tech layout, vs machine-generated, vs…
      • Layout prediction – one stroke being used in multiple characters
    • ‘space-filling’ connectivity constraints
    • Serialization for reading aloud
    • Difficult for people who think linearly / verbally
    • Orientation – is there an absolute (page-relative) up/down/left/right? If not, are all symbols symmetric, or arbitrarily rotateable?
    • Editing – difficulty of inserting / deleting / etc
      • Computer aided?
      • Analogous to search/edit/insert/delete/iterate problem in database design?
  • Representing a gestalt
    • = a hash????
    • Metaphors, "pointing to meaning" (frames, etc; pain/death/marriage/…)
      • Absolutely agree with words being abstract / iconic
  • Other systems
    • glyphica arcana
    • ouwiyaru
    • ithkuil
    • Heptapod b
    • Pinuyo
    • Zaum
    • Inca - quipu
    • rikchik
  • Related topics
    • Lacan – French psych, re unconscious being linguistic
    • Semantic network
    • Frames (framenet, cognet)
    • Information-presentation theory (graphs, etc)
    • Circuit design theory