On Human Language

SpaceMonkey
3 min read · Nov 24, 2020

Efficiency:

  • Observation: Humans are capable of exchanging very complicated ideas about the world using a short sequence of sounds.
  • Imagine two humans having a conversation as two agents exchanging data.
  • It looks like the two agents are exchanging huge amounts of information, but the actual bandwidth is very low, which means the data must be compressed.
  • This form of compression relies heavily on the assumption that the two agents share mostly the same model of reality/universe.
  • Imagine two agents that hold the same 1 TB file (call it the “state”), and one of them discovers 1 MB of new important data, processes it, and incorporates it into its state file, producing a new state whose diff from the old state is 100 MB. The agent with the newer state only needs to transmit the 1 MB of raw data to the other agent instead of the 100 MB diff. It may seem like this assumes the second agent will process and incorporate that 1 MB of new data the same way. However, that “processing algorithm” can itself be part of the same 1 TB state file, meaning the shared state file is all we need.
  • In other words, the agents can exchange complicated ideas in an extremely compressed format because they rely on a pre-existing shared model of reality that describes not only the environment but also the agent itself, i.e. how it processes new data.
  • Two humans from the same era with a similar background can have extremely “efficient data transfers”, i.e. fluid conversations.
  • The greater the difference between the two agents’ models of reality, i.e. their understanding of life, the less efficient language becomes. Imagine trying to explain “web development” to a person from 500 years ago; you’d have to explain a long list of concepts first.
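The state-and-diff idea above can be sketched as a toy simulation. Everything here (the `process` function, the state contents, the hashing) is illustrative rather than anything from the post: the point is only that when the processing rule is itself part of the shared state, transmitting the small raw input is enough for both agents to converge on the same large new state.

```python
import hashlib

def process(state: dict, new_data: str) -> dict:
    """Deterministically incorporate new data into a state.
    This stands in for the 'processing algorithm' that is itself
    part of the shared model of reality."""
    updated = dict(state)
    updated["facts"] = state.get("facts", ()) + (new_data.upper(),)
    updated["version"] = state.get("version", 0) + 1
    return updated

def digest(state: dict) -> str:
    """Fingerprint a state so we can compare two agents' copies."""
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

# Both agents start from an identical shared state (the "1 TB file").
shared = {"facts": ("WATER IS WET",), "version": 1}
agent_a = dict(shared)
agent_b = dict(shared)

# Agent A discovers a small piece of raw data and incorporates it.
message = "fire is hot"   # the small payload, analogous to the 1 MB
agent_a = process(agent_a, message)

# Agent A transmits only the raw message, not the resulting state diff.
agent_b = process(agent_b, message)

# Because the processing rule is shared, both agents end up identical.
assert digest(agent_a) == digest(agent_b)
```

The compression comes from the receiver reconstructing the large diff locally, exactly because sender and receiver share both the data and the rule for incorporating new data.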

Arguing:

  • Arguing rarely results in a common conclusion.
  • That is partly because each person assumes the other has the same “model of reality”, as explained above. Remember that the model contains information about the world as well as about how new information is processed and incorporated into that same model.
  • Since language relies on a shared model of reality, most words do not have an exact definition; we assume the other person roughly understands what we are talking about. This is particularly true for words describing “subjective” concepts like emotions, as opposed to words for concrete objects. Imagine explaining colours to a blind person.
  • Another example: most people who haven’t experienced poverty can understand it conceptually but cannot fully “realise” what it means. The difference between conceptual understanding and realisation is whether you have experienced something personally. That is, a word’s meaning is tied to the emotion it evokes in a person, which is part of their model of reality that we don’t have access to.
  • The result is that people mostly talk past each other when arguing. The misunderstanding reaches all the way down to word definitions, since a word’s meaning goes beyond conceptual understanding (what you get from a dictionary).
  • Additionally, most people are in a defensive state when arguing, seeing the discussion as a fight they need to win. The need to win comes partly from an aggressive state of mind and partly from avoiding the effort it takes to process the new information and update one’s internal model/state, i.e. avoiding cognitive dissonance by simply rejecting the new idea. This is not a conscious decision.
  • We work around this limitation of human language by creating formal languages like mathematics. A formal language avoids the reliance on an assumed shared model of the world by explicitly defining the initial state of the world in the form of axioms.
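As a concrete instance of “explicitly defining the initial state in the form of axioms”, here is a standard formulation of the Peano axioms for arithmetic (a textbook example, not taken from the post). Nothing about the natural numbers is left to an assumed shared model; everything follows from these statements:

```latex
% Peano axioms: an explicit "initial state" for arithmetic,
% with S(n) denoting the successor of n.
\begin{align*}
  &\text{(P1)}\quad 0 \in \mathbb{N} \\
  &\text{(P2)}\quad \forall n \in \mathbb{N},\ S(n) \in \mathbb{N} \\
  &\text{(P3)}\quad \forall n \in \mathbb{N},\ S(n) \neq 0 \\
  &\text{(P4)}\quad \forall m, n \in \mathbb{N},\ S(m) = S(n) \Rightarrow m = n \\
  &\text{(P5)}\quad \bigl(P(0) \wedge \forall n\,(P(n) \Rightarrow P(S(n)))\bigr) \Rightarrow \forall n\, P(n)
\end{align*}
```

Two people who accept these axioms can argue about arithmetic without ever talking past each other at the level of definitions, which is exactly the failure mode described above for natural language.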
