Nature and Perturbation

In nature, lots of things change continuously. That is Physics 101. In fact, it’s Cosmology 101 or Astronomy 101 or Whatever 101 as well.
An example is a hot system that cools down, changing its temperature over time: depending on the moment you observe the system, you see a different temperature.
These systems can be described mathematically by differential equations, like “Each small change in temperature T per unit of time t, say \tfrac{dT}{dt}, starting from an initial value, is proportional to the difference between the current temperature and the temperature of the surroundings, say -k(T - T_{env}).”

There’s a problem with this kind of equation, however: there is no formal, strict way to get to the exact solution in all circumstances. First of all, the answer to a differential equation is not a particular value like y=23, but a family of functions with unknown constants. To pin down those constants we need initial values, like “the temperature T at time t=0 is 100 degrees Celsius, and the temperature of the room around the boiling water is 20 degrees Celsius.”

In the above case it’s easy to find those conditions, because we can test it out in real life and model our equations based on the observations. But as soon as things become more complicated, those simple solutions start to fail. If you have an equation like “six times the function is equal to the sum of the first derivative and the second derivative”, or

    \[6y=y'+y''\]

you have to resort to preconceived wisdom and guesswork, like “might that be e^{2x}? Then we get 6e^{2x}=2e^{2x}+4e^{2x}.”
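We can sanity-check that guess numerically: if y = e^{2x}, then y' = 2e^{2x} and y'' = 4e^{2x}, so 6y = y' + y'' should hold at every x. A quick sketch in Python:

```python
import math

def y(x):         # the guessed solution e^{2x}
    return math.exp(2 * x)

def y_prime(x):   # first derivative: 2e^{2x}
    return 2 * math.exp(2 * x)

def y_double(x):  # second derivative: 4e^{2x}
    return 4 * math.exp(2 * x)

# Check 6y = y' + y'' at a few sample points
for x in (-1.0, 0.0, 0.5, 2.0):
    assert abs(6 * y(x) - (y_prime(x) + y_double(x))) < 1e-9
```

In fact the guess e^{rx} turns the equation into the characteristic equation r^2 + r = 6, whose roots are 2 and -3, so e^{-3x} works too and the general solution is C_1 e^{2x} + C_2 e^{-3x}.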

The weirder and more complicated the systems (as in general relativity, quantum mechanics, or theories like superstring theory and loop quantum gravity, which work with higher-dimensional geometry such as 6-dimensional Calabi-Yau manifolds), the more inaccessible the differential equations become.

How do scientists cope with that problem? They often start from a simple, well-known system they can describe, hand it to a computer (in most cases) and tell it to look for ever closer solutions.
Say you want to renovate your bathroom. You ask a plumber what it will cost. The plumber is familiar with this kind of work, so he tells you that the price will be around €10,000. You ask him to make a detailed quotation. He comes back with a price of €10,175. After you give him the go-ahead he finishes the work and sends the final bill. Now he knows exactly how many hours, how many nuts and bolts and other materials he has used. So the bill ends up at €10,253.10. You give him €10,254 and tell him to keep the change.

The way he arrives at the final sum is the same way you (or the computer) can find a best approximation of your differential equation. This method is described by perturbation theory. An example is solving an equation with a power series, adding term after term and getting closer and closer to the exact solution.
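A toy version of that idea, applied to the cooling example from earlier: truncate the power series e^{-kt} = \sum_{n=0}^{\infty} \tfrac{(-kt)^n}{n!} after more and more terms and watch the approximation close in on the exact value. As before, k = 0.1 and the temperatures 100 and 20 degrees are assumed values for illustration:

```python
import math

def cooling_series(t, terms, T0=100.0, T_env=20.0, k=0.1):
    """Approximate T(t) = T_env + (T0 - T_env)*e^{-kt} by truncating
    the power series for e^{-kt} after the given number of terms."""
    series = sum((-k * t) ** n / math.factorial(n) for n in range(terms))
    return T_env + (T0 - T_env) * series

exact = 20.0 + 80.0 * math.exp(-0.1 * 10)  # the true temperature at t = 10

for terms in (1, 2, 4, 8):
    approx = cooling_series(10, terms)
    print(terms, round(approx, 4), round(abs(approx - exact), 4))
```

Like the plumber's rough estimate, quotation, and final bill, each extra term shrinks the error, and eight terms already land within a fraction of a degree.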

In a sense it is weird to realize that a discipline like math is not always able to produce an exact solution within its own formal framework. No doubt math will develop new methods to better approximate or even directly solve these equations. Remember Newton, Mercator and Leibniz. When they realized they had no mathematical method to solve the problems they wanted to solve, they just invented the new math they needed (infinitesimals, integrals and differentials). That is cool. Even in the 17th century that was considered cool. And nowadays it still is.