The Power of One (and Zero as well)

The area of a square, by integration. Why do it the simple way if it can be done in a more complicated way?

If someone tells us how to perform a certain calculation, we will calculate until time’s end without looking back. Before that stage, however, we have to prove that our calculation makes sense.
In math we often see proofs where the numbers 0 and 1 play an important role. By that I mean that we manipulate an inordinate number of our proofs by using the numbers 0 and 1 in a number (no pun intended) of creative ways: multiplying by one, dividing by one, or adding something and immediately subtracting the same amount, effectively adding 0.

In a small minority of these cases, 0 or 1 is not simply 0 or 1, but appears in as complicated a form as possible. Like in this example, where we want to get rid of the square roots in the numerator:

    \[\lim_{a\rightarrow 0}\frac{\sqrt{(x+a)}-\sqrt{x}}{a}\]

Now let’s multiply by 1 in the form of a special polynomial product:

    \[\lim_{a\rightarrow 0}\frac{\sqrt{(x+a)}-\sqrt{x}}{a}\cdot\frac{\sqrt{(x+a)}+\sqrt{x}}{\sqrt{(x+a)}+\sqrt{x}}\]

and we get

    \[\lim_{a\rightarrow 0}\frac{x+a-x}{a(\sqrt{(x+a)}+\sqrt{x})}=\lim_{a\rightarrow 0}\frac{1}{\sqrt{(x+a)}+\sqrt{x}}=\frac{1}{2\sqrt{x}}\]
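We can watch the difference quotient approach that limit numerically. A minimal sketch in Python; the test point x = 4 is my own choice, not from the text:

```python
from math import sqrt

def difference_quotient(x, a):
    """(sqrt(x + a) - sqrt(x)) / a, the expression inside the limit."""
    return (sqrt(x + a) - sqrt(x)) / a

x = 4.0
# As a shrinks, the quotient approaches 1 / (2 * sqrt(x)) = 0.25.
for a in (0.1, 0.001, 0.00001):
    print(a, difference_quotient(x, a))
print("limit:", 1 / (2 * sqrt(x)))
```

The printed values creep toward 0.25, in agreement with the algebraic result.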

which, by the way, is the derivative of \sqrt{x}.

Another example. If we try to deduce the product rule for the derivative \tfrac{d}{dx}[f(x)\cdot g(x)] we can use this construction based on the definition of the derivative:

    \[(f\cdot g)'=\lim_{h\rightarrow 0}\frac{f(x+h)g(x+h)-f(x)g(x)}{h}\]

If we add f(x)g(x+h) and immediately subtract it again (effectively adding 0),
we get

    \[\lim_{h\rightarrow 0}\frac{f(x+h)g(x+h)-f(x)g(x)+f(x)g(x+h)-f(x)g(x+h)}{h}\]

and reordering gives you

    \[\lim_{h\rightarrow 0}\frac{[f(x+h)g(x+h)-f(x)g(x+h)]+[f(x)g(x+h)-f(x)g(x)]}{h}\]

factoring out:

    \[\lim_{h\rightarrow 0}g(x+h)\frac{f(x+h)-f(x)}{h}+\lim_{h\rightarrow 0}f(x)\frac{g(x+h)-g(x)}{h}\]

which of course (since g(x+h)\rightarrow g(x) as h\rightarrow 0) is

    \[(f\cdot g)'=f'(x)g(x)+f(x)g'(x)\]

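A quick numerical sanity check of the product rule; the choice f = sin and g = exp is my own example, not from the text:

```python
from math import sin, cos, exp

def deriv(func, x, h=1e-6):
    """Symmetric difference quotient approximating func'(x)."""
    return (func(x + h) - func(x - h)) / (2 * h)

def product(t):
    return sin(t) * exp(t)

x = 0.7
lhs = deriv(product, x)                    # (f * g)'(x), numerically
rhs = cos(x) * exp(x) + sin(x) * exp(x)    # f'(x)g(x) + f(x)g'(x)
print(lhs, rhs)
```

The two printed values agree to many decimal places, as the rule promises.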
Integrating this gives you the rule for integration by parts (also called partial integration).

    \[\int(f(x)\cdot g(x))'=\int f'(x)g(x)+\int f(x)g'(x)\]

and rearranging gives

    \[\int f'(x)g(x)=f(x)\cdot g(x)-\int f(x)g'(x)\]

If (say when integrating \ln(x)) you decide to choose x as f(x), you’ll take f'(x) as 1, and there you have another appearance of the power of 1 (because \ln(x)=1\cdot\ln(x)).
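Worked out with that choice (f(x)=x, f'(x)=1, g(x)=\ln(x)), the rule above gives

    \[\int 1\cdot\ln(x)=x\ln(x)-\int x\cdot\frac{1}{x}=x\ln(x)-x+C\]

so the invisible factor 1 is exactly what makes the integral tractable.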

Another example is the derivative of the area hyperbolic tangent (the inverse of \tanh). We use implicit differentiation and division by a complicated form of 1.

Let y be the area hyperbolic tangent of x, so that \tanh(y)=x. Differentiating both sides with respect to x gives

    \[\frac{1}{\cosh^2(y)}\cdot\frac{dy}{dx}=1\]

so

    \[\frac{dy}{dx}=\cosh^2(y)\]

Now we can write

    \[\frac{dy}{dx}=\cosh^2(y)=\frac{\cosh^2(y)}{\cosh^2(y)-\sinh^2(y)}\]

because \cosh^2(y)-\sinh^2(y)\equiv 1. Simplifying by dividing numerator and denominator by \cosh^2(y), we get

    \[\frac{dy}{dx}=\frac{1}{1-\tanh^2(y)}\]

Because \tanh(y)=x now we can conclude

    \[\frac{dy}{dx}=\frac{1}{1-x^2}\]

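This result is easy to verify numerically; the test point x = 0.5 is my own choice:

```python
from math import atanh

def deriv(func, x, h=1e-6):
    """Symmetric difference quotient approximating func'(x)."""
    return (func(x + h) - func(x - h)) / (2 * h)

x = 0.5
numeric = deriv(atanh, x)   # slope of the area hyperbolic tangent at x
exact = 1 / (1 - x ** 2)    # the result of the derivation above
print(numeric, exact)
```
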
The 0 or the 1 is, so to speak, enclosed in the equation yet invisible at the same time. I always call it “manipulating with a complicated form of 0 or 1”. Thank goodness math gives us that possibility: to make things more complicated in order to solve them nevertheless.