On the design of an ideal language 2002-03-12 - 2006-06-08

If you are wondering what exactly I mean by "ideal", and my other goals or meta issues like that, take a look at this post on CONLANG, which goes into some detail about that. Eventually I'll integrate it into this essay, but I haven't yet.

Also, to be clear, these principles are intentionally in conflict with each other. One cannot fulfill all of them; one can only choose a preferred balance, and optimize for that.

In trying to make a language, I keep learning that I can't make a decision because I haven't yet decided on some higher-order feature of the language. So I decided to compile a top-down view of what I want from it; a language should not be designed in a bottom-up, piecemeal fashion if one wants the top-down principles to hold.

Therefore, I'll try to describe here exactly what the desirable qualities of an ideal language are (or should be), and how exactly one could go about putting those ideals into a more concrete form.

First, let's define "language". I'm going to use "a system for transmitting or recording ideas". Is that ambiguous? Damn right. But as you'll see, there's a reason for that.

Guiding Principles:

  1. Principle of Good Representation

    All forms of language use should be as representative as possible of the actual thinking of the target population.

    That is, as much as possible, all rules should be designed to match e.g. human neuropsychology, ways of thinking, etc. If the intent is to change these, then of course this need not be taken to mean "be the same as natural languages" - in fact, there may be methods of expression that are *closer* to "native" thought processes than currently available.

    Some possible examples:

    • basic color terms - based on biologically determined focal colors, i.e. the opponent pairs red-green, blue-yellow, and black-white
    • non-classical categories / words being defined as closely as possible to their "real" structure - e.g. using prototypes, graded fit, etc
    • classical categories can be defined by e.g. their functions - e.g. "sit-thing" (where "thing" is a morpheme) instead of "chair"

  2. Principle of Least Effort

    Slang, as well as general "language evolution", has generally resulted from some more-difficult form being "corrupted" into an easier one. (e.g.: "thee" being removed, "whom" -> "who", "television" -> "TV", vowel shifts, etc.)

    Therefore, the language should *start* with simplicity in mind. This means that things should be "regular" (linguistic term, meaning "hopefully the rules don't have many exceptions") as much as possible, that vocabulary should be as dense as possible (long words for oft-used concepts, especially when shorter words are not "taken", *will* be broken down with natural use), etc.

    An example from ASL is that most signs that are physically difficult to make - palm out around chest, below waist, hands together above shoulder, etc. - tend to become simplified into ones that don't involve any strain.

  3. Principle of Semantic Density

    Any medium used - e.g., speech, 2d static visuals ("writing"), 3d static visuals ("sculpture"), 2d moving visuals ("movies"), 3d moving visuals ("live performance" [maybe eventually "movies", when tech evolves]), touch, etc. (I'll have more on this later) - should be used optimally.


    This means that

    1. everything that *can* be done (bounded by the PLE), is done - in speech, for example, use of all available phonemes, tones, etc.
    2. simpler things are done first. For example, the nonsense word "aijmapnargath" should be much later on in the vocabulary than "jaf". Or another example would be ICQ numbers: start from 1 and work up. Why assign #9143018 when #1402 isn't taken?
    3. simpler forms are reserved for simpler concepts. A word for the rotational axis of a particular molecule of some new-age plastic should be implicitly more difficult than a word for "good".
    4. all available mediums are used to their fullest potential. This is bounded by a few things:
      1. The capacity of the receiver(s) to interpret - e.g., radio & deaf people don't work well. (Clause: sometimes this is desirable, in that a multi-channel communication is interpreted at different levels by those able to receive it on different levels [e.g., signing "this is a lie" while talking, for the respective benefits of the person in front of you and the person listening to the room's mic].)
      2. the capacity of the sender and the medium to *encode* the data in the first place. Does it have to be static, as in a written document or movie? Is it interactive? Do you have the benefit of three dimensions, or four? Can you *produce* it? (e.g.: singing tones, or writing ideographs, or using color, or instruments [music carries data, damn it!])
      3. The density of the medium as used. How many WPM of English can native users do? What about ASL? Manual alphabets? Etc...
    5. yes, I said *all* available mediums.

    That means that if you're communicating with someone in front of you, and both of you are ordinary non-impaired humans, you should be using your full body movement (bounded by the PLE), full vocal capacity, etc. If it's dark, or someone's blind, you should be using touch instead. Etc.
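The "simpler things first" idea above - ICQ-style assignment of the shortest unclaimed forms to the most-used concepts - can be sketched in Python. This is only an illustration; the three-letter alphabet and the concept list are made-up placeholders, not a proposed phonology:

```python
from itertools import count, product

def shortlex_words(alphabet):
    """Yield every string over `alphabet` in shortlex order:
    all 1-letter forms first, then all 2-letter forms, and so on."""
    for length in count(1):
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)

# Hypothetical concepts, ranked most-frequently-used first.
concepts = ["good", "thing", "person", "rotational-axis-of-a-molecule"]

# Pair each concept with the next shortest available form:
# "good" -> "a", "thing" -> "b", "person" -> "c",
# and the rare technical concept gets the longer "aa".
lexicon = dict(zip(concepts, shortlex_words("abc")))
print(lexicon)
```

Nothing is ever "skipped": no form like #9143018 gets assigned while #1402 is still free, because the generator hands out forms strictly in order of increasing length.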


    HOWEVER... as Axiem points out (and I forgot to mention on first revision), there comes a point at which you must trade semantic space for redundancy. Such is the case with the armed forces' alpha/bravo/charlie alphabets, and with .rar format "recovery" space (an added 1% or so of space can protect against a surprising amount of corruption).

    Thus, there should be a means of doing this - adding "buffer space" to the data - in whatever mode presented. However, it should *not* be a rigid thing; after all, I said "optimal". That means different things in different conditions - clear or foggy, quiet or noisy, etc. Ignoring this means, on one side, having to repeat (or simply losing the message, or losing precision [as is the case in many examples of humorously misplaced/missing commas]), and on the other, losing precious semantic space and thereby conveying less information.
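A minimal sketch of such adjustable "buffer space", using a simple repetition code with majority-vote decoding. The repetition factor is a stand-in for however the real language would scale its redundancy - crank it up in noisy conditions, down in quiet ones:

```python
from collections import Counter

def encode(message, factor=3):
    # Repeat each character `factor` times.
    # A higher factor spends more "semantic space" on redundancy.
    return "".join(ch * factor for ch in message)

def decode(signal, factor=3):
    # Majority vote within each block recovers the original character
    # even if some copies were corrupted in transit.
    out = []
    for i in range(0, len(signal), factor):
        block = signal[i:i + factor]
        out.append(Counter(block).most_common(1)[0][0])
    return "".join(out)

# Corrupt one copy of the first character; the message survives.
noisy = list(encode("rain", factor=5))
noisy[2] = "x"
assert decode("".join(noisy), factor=5) == "rain"
```

This is the same tradeoff as the .rar recovery record: the encoded signal is `factor` times longer, but it tolerates roughly `(factor - 1) // 2` corrupted copies per character.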

  4. Principle of Desired Clarity

    Every statement (though "statement" may well be an inaccurate word for a gesture or other "unusual" mode) should be exactly as semantically precise as the sender wishes.

    It should be no less - if you want to specify "table" over "some sort of furniture designed for things to be placed upon" (like shelves, chairs, desks, etc.), you should be able to do so.

    Neither should it be any *more* precise. First, if you want to know where somebody's conveyance is, you should not need to first know what method of conveyance they used (car, train, motorcycle, horse, feet...). Second, if you want to be ambiguous, you should be able to... and, as an important sub-principle, ambiguity should always be *implicit*. If the gender-neutral pronoun is more difficult to produce than "he" or "she", it will be received as a *deliberate* ambiguity. Of course, that too should be expressible, but it should be different from *implicit* ambiguity, in that the former is inclusive and the latter exclusive.

  5. Principle of Default Simplicity

    The simplest concepts should be the easiest to render. E.g., gender-neutral pronouns should be slightly simpler / easier than gender- or quantity-specific ones. The more complex the idea, the more correspondingly complex its expression.

  6. Principle of Iconicity

    As much as possible, the medium used should represent the thing expressed. This is hard to explain, but it's an intuitive principle.

    If you're making a sign for "rain", for example, wiggling your fingers in a downward sweep is more "natural" than, say, making a circling motion with your fists. The same works with other mediums also; harsh concepts should *sound* harsh when heard, whereas gentle ones should be more mellifluous.

    There are two cautionary notes to this principle, however.

    First, there is the danger of culture bias. Onomatopoeia in spoken languages is a good example; I doubt most English speakers would recognize the Japanese equivalent of "woof woof" or "hee haw", and vice versa. Also, a sign for "money" that symbolizes a sack of coins could well be outdated in fifty years when everybody uses plastic (or other yet-to-be-devised means of exchange). So if there's any question as to the Platonic nature of the representation, it should be completely arbitrary.

    Second, there is the implication of this principle: that entities unfamiliar with the rules of expression (i.e., people who don't know the language) will have an easier time understanding it, because it is as "intuitive" / "natural" as possible. The problem is that sometimes, this is *not* a desirable feature - like when one is trying to be secretive. However, I believe that some form of encryption should be devisable, and the base nature (by the PDS) of the language should be intuitive.

  7. Principle of Cross-Modality

    Anything should be expressible in any/all available means.

    There should be absolutely *nothing* lost in "mode shift" - e.g., the written transcript of a radio talk show. This includes all subtleties and other "meta" features that one normally ignores in English, like vocal intonation, pitch, speed, sarcasm, etc.

    However, there are two clauses to this.

    First, it may be desirable (*optionally*) to drop meaning (like the fact that someone used a word in a derogatory fashion) in favor of brevity, simply because some modes (like those available in communicating with the deaf-blind) are so limited in "bandwidth". I stress however that this is an OPTIONAL and (if relevant) explicit drop; if you want a full mode shift, so be it; it'll just take longer.

    Secondly, some mediums may not allow for quite the degree of implicit or other meta-contextual meanings - how, for example, would you indicate that someone had a sarcastic voice when mode-shifting to touch signing? Pressure of the fingers? So, if need be, a shift from implicit to explicit is allowable, following the PDS: it's dropped unless you add it explicitly.

  8. Principle of Semantic Conservation

    Simply put, there should be no such thing as a "nonsense" or "incorrect" phrase. This overlaps with the PSD.

    In English, for example, the phrase "man got job now" is ungrammatical, though composed of acceptable parts - though one could guess at its "proper" translation. However, why not have this *mean* something? I call this "wasted space". Another example: the non-existent, yet short and easily-pronounced word "bock" (unless I'm missing some extremely rare jargon...). Why? Yet we have words like "inexperienced".

    There are (again) a few warning clauses to this.

    First, one must leave "space" for new, yet-unformed vocabulary, and an "official" means of its creation. I find English's way - make up a word that isn't yet taken - rather haphazard. How much space to leave, and how "valuable" (i.e., short words are more "desirable"), is an open question.

    Second, similar to the previous mention of clarity vs. density, the first things to go (if there is some sort of "static" or "corruption") should be the higher-end ones; if a message is garbled, its basic meaning should remain intact; oh well if you lose the speaker's emotion.

    Third, there is the (open) question of overlap. The word "blue" in English means several things - a color, a mood (depressed), a type of media (soft-pornographic), a blue *thing* ("the blue"), etc. Or "rehd" (when spoken) - a color, the past tense of the verb "read", etc.

    What to do about it? Should there be a one-to-one correlation of meaning and form? I think perhaps not. If a form can "hold" several meanings, like English words, let it, so long as a) those meanings would not, in most cases, be confused with each other (contextual clarification) and b) those meanings can easily be distinguished (by the PDC) with slightly more effort (e.g., "get", meaning #4, but less obtuse).

    Finally, there is the question of how to deal with the fact that, in a fully conserved system, "noise" would have meaning. Literally speaking, everything you hear, see, smell, etc., should (in principle) carry some meaning. How do you choose which are and are not relevant? A hard (and open) question. (Another example: somebody speaking in sign language - every incidental movement would, in principle, carry meaning.)

  9. Principle of Noise Resistance

    Communications should be comprehensible despite whatever noise occurs during their production, transmission, or reception.

    Ideally, this would be a variable, or at least multi-setting thing, so that you can scale your noise resistance (and by extension, other sacrifices made for its sake) to the needs of the situation. However, it could also just be pegged at some decided-upon middle ground that covers a majority of relevant situations.

    [Thanks to And Rosta for pointing this out.]

  10. Principle of Entropy a.k.a. Principle of High Signal:Noise Ratio (SNR)

    The language should have as high an entropy as possible, as a weighted average over all likely contexts, conversations, and soliloquies.

    Entropy is a measure of how unpredictable any given chunk of data is. That is, how much real *information* does it carry? The idea here is to maximize the amount of information you receive, and minimize the repetition of unnecessary, expected, default, or otherwise excess data.
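For the information-theoretic sense of "entropy" used here, a quick Python sketch of Shannon entropy per symbol (the sample strings are arbitrary; a real measurement would use a large corpus, not single words):

```python
from collections import Counter
from math import log2

def entropy_per_symbol(text):
    """Shannon entropy in bits per symbol: how unpredictable,
    on average, each character of `text` is."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A fully repetitive message carries no information per character;
# a maximally varied one of the same length carries the most.
assert entropy_per_symbol("aaaaaaaa") == 0.0   # totally predictable
assert entropy_per_symbol("abcdefgh") == 3.0   # 8 equally likely symbols
print(entropy_per_symbol("the the the the"))   # low: heavy repetition
```

Maximizing the weighted average of this quantity over likely conversations is exactly the "high SNR" goal: expected, default, or repeated material contributes little, so the language should spend its short forms where unpredictability (information) actually lives.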