Normalisation: A True Workflow for Mathematical Physics

I’m pleased to share that my new paper, “Normalization in Computational Optics”, has just been published. This work is more than just another technical note: it is a distillation of years of experience with the numerical modelling of physical systems, especially in optics. The paper covers a set of techniques that have repeatedly proven their worth, not only in computational efficiency but also in providing a more physically intuitive understanding of the systems being modelled.

While this particular publication focuses on wave equations and optical systems, the philosophy behind it applies broadly across mathematical physics. I would like to thank my co-authors for making it possible to get this paper published; without them, these ideas would still be sitting in a LaTeX file. The paper is a call to respect your units, appreciate your scales, and embrace dimensionless formulations, not just as a technical trick but as part of a deeper workflow that sits at the interface of physics and computation.

Why Write About Normalisation?

Let’s start with the obvious question—why spend time writing about something as seemingly mundane as normalisation? The answer is twofold: first, because it’s more powerful than many realise, and second, because it’s too often overlooked.

Computational modelling is now a cornerstone of modern science and engineering. Whether we are looking at gravitational waves, quantum states, or imaging systems, the differential equations we use are only half the story. The other half lies in how we prepare those equations for computation.

One of the recurring challenges in simulating physical processes is numerical noise: those pesky artefacts that creep into your simulation and leave you questioning whether you’re seeing physics or floating-point rounding errors. A primary culprit is the mixing of scales, where one part of your model works in nanometres and femtoseconds while another deals in metres and milliseconds.

This mismatch of scales can wreak havoc in numerical solvers. It affects stability, convergence, accuracy, and—perhaps most importantly—interpretability.
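As a deliberately tiny illustration of the rounding side of this (my own toy example in single precision, not taken from the paper), consider what happens when a nanometre-scale correction meets a metre-scale coordinate:

```python
import numpy as np

# Single precision carries roughly 7 significant digits. If one part
# of a model works in metres while another produces nanometre-scale
# corrections, the corrections can be rounded away entirely:
x = np.float32(0.1)                 # 0.1 m, stored in metres
dx = np.float32(1e-9)               # a 1 nm correction
print(x + dx == x)                  # True: the correction vanishes

# Rescale so the quantity of interest is O(1) -- here, the offset from
# a local origin, measured in nanometres -- and it is fully resolved:
x_tilde = np.float32(0.0)           # offset from 0.1 m, in units of nm
x_tilde += np.float32(1.0)          # the same 1 nm correction
print(x_tilde == 1.0)               # True: nothing was lost
```

The cure is not more precision for its own sake; it is choosing variables so that the numbers your solver actually manipulates are of order one.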

The Case for Normalisation

Normalisation is a systematic process of rescaling the variables and parameters in your model so that they become dimensionless. Done correctly, this doesn’t just eliminate units; it exposes the true structure of the problem. It clarifies what matters, what doesn’t, and which relationships are intrinsic to the physics rather than artefacts of the units we’ve chosen to measure it in.

In “Normalization in Computational Optics”, we demonstrate this process using a number of canonical examples in wave propagation. We focus on the wave equation, one of the fundamental workhorses of physics, and show how applying appropriate scaling yields not just cleaner numerics, but deeper insights.

From the Wave Equation to a Dimensionless Form

The wave equation is deceptively simple in its unnormalised form:

\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u

But when you normalise it—say, by defining a characteristic length scale L, time scale T, and amplitude A—you move from a raw equation into a dimensionless representation that reveals the interplay of physical parameters:

\frac{\partial^2 \tilde{u}}{\partial \tilde{t}^2} = \nabla^2 \tilde{u}
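For completeness, here is the intermediate step, using the scales L, T, and A defined above. Writing x = L\tilde{x}, t = T\tilde{t}, and u = A\tilde{u}, the chain rule gives

\frac{A}{T^2} \frac{\partial^2 \tilde{u}}{\partial \tilde{t}^2} = \frac{c^2 A}{L^2} \nabla^2 \tilde{u} \quad \Longrightarrow \quad \frac{\partial^2 \tilde{u}}{\partial \tilde{t}^2} = \left( \frac{cT}{L} \right)^2 \nabla^2 \tilde{u}

so the clean form above corresponds to choosing T = L/c: time is measured in units of how long a wave takes to cross one characteristic length.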

What you gain here isn’t just tidiness. You decouple the mathematical behaviour from the numerical stiffness caused by clashing scales. You can then choose your spatial and temporal resolution to suit the shape of the problem rather than being enslaved by arbitrary unit choices.
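To make that concrete, here is a minimal sketch (my own illustration, not code from the paper) of a 1D finite-difference solver for the dimensionless wave equation. After normalisation, the only stability parameter left is the dimensionless Courant number, which you pick directly:

```python
import numpy as np

# Minimal 1D solver for the dimensionless wave equation
#     d^2 u / dt~^2 = d^2 u / dx~^2
# The only stability parameter left after normalisation is the Courant
# number C = dt/dx, which must satisfy C <= 1 regardless of whether the
# original problem lived in nanometres or metres.

nx = 401
x = np.linspace(0.0, 1.0, nx)             # dimensionless domain [0, 1]
dx = x[1] - x[0]
C = 0.9                                   # Courant number, a pure number
dt = C * dx                               # time step follows from C

u_prev = np.exp(-200.0 * (x - 0.5) ** 2)  # initial Gaussian pulse
u = u_prev.copy()                         # zero initial velocity

for _ in range(1000):
    u_next = np.empty_like(u)
    # second-order leapfrog update in the interior
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + C**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0          # fixed (Dirichlet) ends
    u_prev, u = u, u_next
```

The point is that C is chosen once, as a pure number suited to the scheme; converting the results back to physical units is a single multiplication at the end.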

Diffraction Integrals and the Benefit of Rescaling

The paper goes further than just the wave equation. We also explore diffraction integrals—tools often used in computational optics to model how light propagates, bends, and interacts with systems such as lenses and apertures.

Here too, normalisation brings tangible benefits. Many of these integrals are computationally expensive and numerically fragile, especially when used to model near-field or far-field propagation. But through appropriate scaling, based on the aperture size, wavelength, and propagation distance, we show that these integrals become far more stable and considerably less fragile to implement.
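To give a flavour of the idea (a sketch of the general approach, not the paper’s implementation; the function name and parameters are mine), here is a 1D Fresnel diffraction integral written directly in dimensionless variables, where the aperture half-width, wavelength, and distance collapse into a single Fresnel number:

```python
import numpy as np

# 1D Fresnel diffraction from a uniformly illuminated slit, written
# entirely in dimensionless variables. Transverse coordinates are
# measured in units of the aperture half-width a, so the kernel depends
# on a single pure number, the Fresnel number
#     N_F = a**2 / (wavelength * z)
# rather than on a, wavelength, and z separately.

def fresnel_slit(xi_obs, n_fresnel, n_quad=2000):
    """Field at dimensionless observation points xi_obs behind a slit
    spanning xi' in [-1, 1], via direct quadrature of
    integral exp(i * pi * N_F * (xi - xi')**2) dxi'."""
    xi_src = np.linspace(-1.0, 1.0, n_quad)
    dxi = xi_src[1] - xi_src[0]
    kernel = np.exp(1j * np.pi * n_fresnel
                    * (xi_obs[:, None] - xi_src[None, :]) ** 2)
    return kernel.sum(axis=1) * dxi

xi = np.linspace(-3.0, 3.0, 600)
near = np.abs(fresnel_slit(xi, n_fresnel=10.0)) ** 2   # near field
far  = np.abs(fresnel_slit(xi, n_fresnel=0.1)) ** 2    # far field
```

Every physical configuration with the same Fresnel number produces the same dimensionless pattern, so one computation covers an entire family of set-ups.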

This is particularly important in optical engineering, where such integrals are used to simulate the behaviour of systems from microscopes to telescopes.

Physical Intuition Through Dimensionless Groups

Perhaps my favourite benefit of this approach is how it enhances physical intuition. When you move to dimensionless variables, you’re forced to think in terms of ratios and relationships. What matters is no longer “how many nanometres long is this aperture” but “how many wavelengths span the aperture, and how wide is it relative to the beam”.

This reframing often leads to new insights. For instance, the Fresnel number, which appears naturally in the normalised diffraction formulation, becomes a guiding quantity for understanding the type of diffraction regime you are in.
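In the usual convention, for an aperture of radius a, wavelength \lambda, and propagation distance z,

N_F = \frac{a^2}{\lambda z}

with N_F \gg 1 indicating the near-field (Fresnel) regime and N_F \ll 1 the far-field (Fraunhofer) regime.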

These dimensionless numbers, akin to the Reynolds number in fluid dynamics or the Mach number in aerodynamics, give you a compact summary of the problem’s behaviour. They serve as design parameters, diagnostics, and sanity checks all in one.

From PhD Notebook to Published Paper

This work has been simmering since my PhD days, where I first began wrestling with the gap between clean mathematical theory and messy computational implementation. Back then, I often found myself bogged down by seemingly inexplicable simulation errors, only to realise—after much hair-pulling and with the help of my mentor and co-author Dr Sabino Chávez-Cerda—that the culprit was poor scaling.

Over the years, I began to formalise the process. I wrote routines to rescale problems automatically, sketched diagrams to track units, and developed a mental library of characteristic scales for different physical systems. Eventually, this practice became second nature, a quiet backbone to many successful modelling projects.

So, this paper is a kind of personal milestone. It’s a chance to codify and share those techniques that have quietly served me and my collaborators over the years. And if it saves another modeller from an all-nighter of debugging a noisy simulation, it’ll have done its job.

A Call to Respect the Normal Form

In closing, I’d like to encourage a mindset shift for anyone working at the intersection of computation and physical modelling. Respect the scales. Respect the units. And take the time to normalise your equations before you throw them into a solver.

It’s not a luxury or an academic exercise—it’s a practical and often necessary step toward stable, interpretable, and physically meaningful simulations. If you’re interested in the details, the full paper is available now. It includes concrete examples, worked-through derivations, and implementation notes that I hope will be useful to physicists, engineers, and computational scientists alike.

Thanks for reading—and if you’ve got a favourite normalisation trick or horror story of numerical instability, I’d love to hear it.