Is anything inherently difficult?

Is multiplying 43 by 17 difficult? Well, it’s a little bit annoying to do in your head, but if you have a pen and paper, it’s trivial. It wasn’t always like this, though:

back in the days of Roman numerals, basic multiplication was considered this incredibly technical concept that only official mathematicians could handle … But then once Arabic numerals came around, you could actually do arithmetic on paper, and we found that 7-year-olds can understand multiplication. It’s not that multiplication itself was difficult. It was just that the representation of numbers — the interface — was wrong.
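The representation point can be made concrete: positional notation lets you split one hard product into a few trivial partial products, which is exactly what the pen-and-paper algorithm does. A minimal sketch (my example, not from the source quoted above):

```python
# Place-value decomposition: the step Arabic numerals make visible.
# 43 * 17 = (40 + 3) * (10 + 7), i.e. four easy partial products.
partials = [40 * 10, 40 * 7, 3 * 10, 3 * 7]  # 400, 280, 30, 21
assert sum(partials) == 43 * 17 == 731
```

In Roman numerals (XLIII times XVII) there is no such decomposition to exploit, which is why the same operation once required a specialist.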

Whenever I study math, I try to hold several examples of the concept being introduced in my head and check its expected properties against them to build intuition. Textbooks frequently introduce definitions without any motivation or examples, and in such cases, unless I can come up with an anchoring example myself, I find it impossible to really grasp the idea being communicated.

It seems the key to understanding a mathematical concept is simply learning to view it from a certain angle, one usually communicated by a teacher rather than by a textbook.

Michael Nielsen has similar thoughts, though about quantum field theory:

In fact, what I’d really like to do is work together with a great designer and a great programmer, to explain a subject such as quantum mechanics or quantum computing or quantum field theory. I think you could do something truly special. And what I’d really, really like to do is to work on explaining all of physics or all of science in this way. Ideally, you’d have the best designers in the world, and the best explainers in the world, together in a room as equal creative partners, figuring out what is possible.

So, I’m left with a suspicion that we’re still in the Roman numeral time for everything more complex than multiplication. Are there things that are truly difficult? Or do we just lack good vocabulary to think about them?

Further reading

Bounded cognition (h/t JS Denain):

The Pirahã language and culture seem to lack not only the words but also the concepts for numbers, using instead less precise terms like “small size”, “large size” and “collection”. And the Pirahã people themselves seem to be surprisingly uninterested in learning about numbers, and even actively resistant to doing so, despite the fact that in their frequent dealings with traders they have a practical need to evaluate and compare numerical expressions.

Notes on notation and thought (h/t Misha Yagudin):

I’m collecting quotes on interesting notations—both powerful ones and bad ones—and how they influence thought.

This list focuses on notation as “a series or system of written symbols used to represent numbers, amounts, or elements in something such as music or mathematics.” This is distinct from a language (computer or natural), interface, diagram, visualization, or tool, but may overlap with them. This list also focuses on examples from math, physics, computer science, and writing systems, though I’m looking to expand it to interesting examples of dance and music notation. General suggestions and pull requests are welcome.

Terry Tao’s answer to “Thinking and Explaining”:

For evolutionary PDEs in particular, I find there is a rich zoo of colourful physical analogies that one can use to get a grip on a problem. I’ve used the metaphor of an egg yolk frying in a pool of oil, or a jetski riding ocean waves, to understand the behaviour of a fine-scaled or high-frequency component of a wave when under the influence of a lower frequency field, and how it exchanges mass, energy, or momentum with its environment. In one extreme case, I ended up rolling around on the floor with my eyes closed in order to understand the effect of a gauge transformation that was based on this type of interaction between different frequencies. (Incidentally, that particular gauge transformation won me a Bôcher prize, once I understood how it worked.) I guess this last example is one that I would have difficulty communicating to even my closest collaborators. Needless to say, none of these analogies show up in my published papers, although I did try to convey some of them in my PDE book eventually.

Thought as Technology:

I believe this is what made MacPaint so exciting to 11 year-old me: it expanded the range of thoughts I could think. As a practical matter, this expressed itself as an expansion in the range of visual images I could create. In general, what makes an interface transformational is when it introduces new elements of cognition that enable new modes of thought. More concretely, such an interface makes it easy to have insights or make discoveries that were formerly difficult or impossible. At the highest level, it will enable discoveries (or other forms of creativity) that go beyond all previous human achievement. Alan Kay has asked: “what is the carrying capacity for ideas of the computer?” (Alan Kay, What is a Dynabook?, 2013). Similarly, we may ask: what is the carrying capacity for discovery of the computer?

Eli Parra:

How visual our minds are: Trying to count my breaths up to 10 & failing to, I tried visualizing the count somehow. When I found a shape that worked, it became effortless.

A GUI for counting! Maybe we could design UIs for consciousness, like how memory palaces aid remembering.

Patterns in confusing explanations

Quantum Computing Since Democritus by Scott Aaronson pg. 19:

First, though, let’s see how the Incompleteness Theorem is proved. People always say “the proof of the Incompleteness Theorem was a technical tour de force, it took 30 pages, it requires an elaborate construction involving prime numbers,” etc. Unbelievably, 80 years after Gödel, that’s still how the proof is presented in math classes!

Alright, should I let you in on a secret? The proof of the Incompleteness Theorem is about two lines. It’s almost a triviality. The caveat is that, to give the two-line proof, you first need the concept of a computer.
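For readers who haven’t seen it, the computer-based argument Aaronson goes on to give in the book runs roughly like this (my compressed paraphrase, with F an assumed sound formal system that can talk about programs):

```
define P (a program with access to its own source code):
    enumerate all strings s in order:
        if s is a valid F-proof of the statement "P never halts":
            halt

If P halts, then F proved "P never halts", a false statement — so F is unsound.
Hence P never halts; but then "P never halts" is a true statement that F
never proves — so F is incomplete.
```

The heavy lifting (self-reference, encoding proofs as strings) is exactly what the concept of a computer packages up, which is Aaronson’s point about representation.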
