Great mathematicians speak in plain English; insecure ones speak in symbols.

Posted on: November 2025

Mathematical notation (\(\forall, \exists, \therefore, \implies\), and so on) is something that deeply fascinated me the first time I found out about it as a kid. It felt like I was turning my high school equations into some sort of arcane scripture that I could understand, but my peers could not. That gave me a feeling of intellectual power that hooked me, as I’m sure it hooked many people who, just like me, ended up studying Mathematics in earnest at university. The secret satisfaction that notation gave me was almost a morbid fascination: the more insane the symbols, the harder the equations were to read, the more integrals, sums, intersections, sets, you name it, the better I felt, the smarter I felt, the more I liked it. I felt like it separated us “enlightened mathematicians” from the remaining “mortals”.

Of course, looking back on all of this in hindsight, I realise what a whole load of horseshit it was, and how stupid I was to believe that mathematical notation gave my writing any kind of mathematical superpower. The change started at some point during my last year of undergraduate.

I started to realise that a simple sentence in plain English has much more value than a sentence full of notational symbols. The obvious objection is that “human language is imprecise”, which is why symbols such as \(\forall\) and \(\exists\) come in handy to avoid ambiguity. I agree, but plain English, written carefully, can achieve exactly the level of unambiguity you would want from purely mathematical notation. And in fact, something more is the case.

What I ended up realising is that mathematical notation is often used as an excuse to hide a lack of understanding behind a curtain of notational and symbolic precision. As time went on, it became clear to me that it is only when you forget about the symbols and express things in plain, unadulterated, unpretentious English that you really understand something.

For example: when you are a fresher in your first year of undergraduate and you learn about, say, the convergence of a sequence \(a_n\), you might immediately, robotically write \( \forall \epsilon >0\ \exists N(\epsilon)>0:\ M\ge N(\epsilon)\implies |a_M-a|<\epsilon \). This is, of course, an incredibly precise statement, but one that risks masking understanding behind the symbols.

It was only closer to my second year of undergraduate that I realised all this was saying: “Give me an error tolerance \(\epsilon\); then a sequence \(a_n\) converges to \(a\) if there is some point in the sequence, say \(N\), which may depend on \(\epsilon\), beyond which the error falls below \(\epsilon\).” Try to set aside for a second the fact that you already think this is obvious, and think back to your 18-year-old self trying to understand this for the first time. Don’t you think that the plain English explanation is not only equally precise, but actually explains exactly what is going on behind the scenes?
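If I had to write it down again today, one way of setting it out (just a sketch, and certainly not the only way) would be half in words, half in symbols:

\[
a_n \to a \quad \text{means: for every tolerance } \epsilon > 0 \text{ there is a stage } N \text{ (depending on } \epsilon\text{) such that } |a_n - a| < \epsilon \text{ for all } n \ge N.
\]

Same content, same precision, but the words carry the idea and the symbols only appear where they genuinely earn their keep.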

My issue, of course, is not with the fact that mathematicians use symbols. My problem is when we resort to these symbols because we have lost the understanding, the idea, the intuition behind what we are talking about. Above all, my problem is when this use (or abuse) of notation turns into a psychological defense mechanism: anyone can write down the quantifier-heavy \(\epsilon\)-\(\delta\) definition of continuity without really getting what’s happening behind the scenes, but if you can’t put it into plain words, if you can’t express the intuition behind the definition without shielding yourself in the correctness of the symbols, do you really understand it? Behind symbols, you cannot be questioned about intuition.
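For the record, the definition I have in mind is the standard one. Written out in full symbolic regalia, continuity of \(f\) at a point \(x_0\) reads something like

\[
\forall \epsilon > 0\ \exists \delta > 0:\ |x - x_0| < \delta \implies |f(x) - f(x_0)| < \epsilon,
\]

which in plain words just says: you can force \(f(x)\) to stay as close to \(f(x_0)\) as you like by keeping \(x\) close enough to \(x_0\). Being able to produce the first version but not the second is exactly the defense mechanism I am talking about.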

The sad part is when this habit doesn’t die in your undergraduate (or master’s) years. It is tragic how often I’ve seen such notational atrocities committed by PhD students and lecturers alike, who perhaps do have the intuitive understanding, but choose to express themselves in a notation-heavy way, as if to say “hey, I really know what I’m doing, look at all this notation”.

Let me share some examples. In a problem sheet I was assigned at some point, I saw: “Assume \(v_k\neq 0, v_i=0 \forall i\neq k\)”. Upon reading this, I think to myself: why? What is the need to write this? In fact, behind this desire to be so precise with mathematical notation, things are still not fully clear: what exactly is \(k\)? Does it depend on \(v\)? And no, the question did not explain it.

After some pondering about what the question really asks, I ask myself: was it that difficult to just write “Assume \(v\) has only one non-zero entry, \(v_k\)”? No unnecessary symbols, no pretentious nonsense, just straight to the point of what I want to say. I don’t need to confuse students with clunky notation and turn it into an exercise in not only solving the question but also deciphering my writing.

Let me give yet another example, from a course I took at some point in my life. I will not reveal which course this came from, out of respect for the lecturer, but quoting a fragment of the lecture notes: “[…] belongs to some probability space \( (\Omega, \mathcal F, \mathbb P) \). For instance, […] consists of i.i.d samples from the distribution \( X(\omega), Y(\omega) \sim \mathbb P \).” I’m not here to pick this nonsense apart, but if you know probability you will know just how bad this is, and how loudly it is screaming “look at me, I know measure-theoretic probability”.

The truth is that nowhere afterwards did we use the language of a probability space. In fact, the only things we ever used were the fact that \(X\) and \(Y\) are random variables, and the linearity of expectation. There was no need whatsoever to even mention probability spaces, or to write plainly incorrect nonsense like \(X(\omega)\sim \mathbb P\). (If you are curious, this is saying something like “the number \(5\) has law equal to Lebesgue measure”. What it really meant to say is \(X\sim \mathbb P\). And even this is saying that the law of \(X\) is \(\mathbb P\), which is not correct. \(\mathbb P\) is a probability measure on the probability space (the one that was introduced without reason) on which \(X\) is defined. The law of \(X\), if you really had a need to write it down, is something called the pushforward measure, sometimes denoted by \(X_\star \mathbb P\), and defined by \(X_\star \mathbb P (A) = \mathbb P(X\in A)\); it is not (necessarily) a measure on \( (\Omega, \mathcal F) \). But again, this was never used, never needed, and completely irrelevant to the discussion that followed.)
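If the distinction sounds pedantic, here is a toy example (my own, not from the notes) that makes it concrete. Take \(\Omega = \{H, T\}\) with \(\mathbb P(\{H\}) = \mathbb P(\{T\}) = 1/2\), and let \(X(H) = 1\) and \(X(T) = 0\). Then \(\mathbb P\) is a measure on \(\{H, T\}\), while the law of \(X\) is the pushforward

\[
X_\star \mathbb P(\{1\}) = \mathbb P(X = 1) = \tfrac{1}{2}, \qquad X_\star \mathbb P(\{0\}) = \mathbb P(X = 0) = \tfrac{1}{2},
\]

which is a measure on \(\{0,1\}\) (or on \(\mathbb R\), if you like), not on \(\Omega\). Writing \(X \sim \mathbb P\) conflates the two, and \(X(\omega) \sim \mathbb P\) is worse still, because \(X(\omega)\) is just a number.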

If you want a less probabilistic analogy, this is like going to a second-year PDE Methods class and saying in lecture one: “Let \(f\in C^{\infty}(M;N)\) be a smooth map between differentiable manifolds \(M\) and \(N\)”, and then proceeding to use high-school calculus for the rest of the semester. What was the point of that? To assert intellectual superiority over your students? Doing this is completely pointless, confusing and, even worse, pretentious. Of course, I myself have been guilty of this at length, but now I see the nonsense more clearly than ever.

The example above was a stellar case of sprinkling in mathsy terms even when they are not understood, not relevant, not needed or, even worse, wrong. This happens more often than you would believe, and it’s because people with a lack of understanding shield themselves behind the notation, thinking “this is how Real Mathematicians write.” They copy the notation (sometimes even incorrectly, as in the example above), the students in turn copy it, and the meaning gets completely diluted.

To conclude: I’ve met real world-class, top-tier experts. They use mathematical notation and formalism only when needed. They know that conveying the idea is more important than anything else. They know that regular-ass human language can be made, if you really want, as precise as mathematical language. They are not afraid to admit when an argument is not fully rigorous; words such as “morally speaking” are okay. They don’t try to shield a lack of rigor or understanding behind heavy notation to distract the reader, because they understand the ideas and don’t need to hide behind symbols.

So follow my advice, dear reader. When you learn something new in maths, ask yourself: “Can I write this in regular, plain, unpretentious English?” It is very likely that at some point the ideas will become so complicated that some mathematical notation must be introduced, and that’s okay. But can you still convey the idea behind the proof in plain words? Or do you need to shield yourself behind the algebra? Do you know why the calculation works, or does it just work? Is it you who is speaking, or are you letting the symbols speak for you?