Suppose you want to calculate 5^65537. Instead of multiplying 65537 times, the recommended approach is repeated squaring: since 65537 = 2^16 + 1, you compute ((((5^2)^2)^2)...)^2 * 5, squaring 16 times in a row and then doing one final multiplication.
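For concreteness, here is a minimal sketch of the square-and-multiply routine I mean (plain Python; `pow_by_squaring` is just a name I made up for this illustration):

```python
def pow_by_squaring(base, exponent):
    """Compute base**exponent by binary (square-and-multiply) exponentiation."""
    result = 1
    while exponent > 0:
        if exponent & 1:   # low bit of the exponent is set: fold in the current power
            result *= base
        base *= base       # square: base now holds base^(2^k) for the next bit
        exponent >>= 1     # move to the next bit of the exponent
    return result

# 65537 is 17 bits long with only two bits set, so the loop is
# dominated by ~16 squarings plus one real multiplication.
assert pow_by_squaring(5, 65537) == 5 ** 65537
```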
But my question is: aren't you offsetting the smaller number of operations with the cost of squaring very large numbers? How is this faster when you go down to the basic bit-level multiplication the computer actually performs?
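To put a rough number on my worry (my own back-of-envelope estimate): 5^65537 has about 65537 * log2(5) ≈ 152,000 bits, so the very last squaring already multiplies two numbers of roughly 76,000 bits each, which with schoolbook multiplication alone is on the order of 76,000^2 ≈ 6 * 10^9 bit operations. So the individual operations clearly get much more expensive as the squarings proceed; what I don't see is why the total still comes out ahead.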