In some sense the idea of approximation is central to all of modern mathematics (say you wanted to find the boundary between the region with waves and the region without in the image above). It is actually quite remarkable how quickly we can write down equations that we cannot solve in closed form, say sin(x)+5cos(ln(x))=exp(x). For centuries this inability to solve the equations that arise from rational formulations of the laws of Nature (such as Newton's laws) kept mathematics from becoming a truly universal tool. Methods of approximation did exist, and some, like Newton's method, were very useful, but it is only with the advent of computation during and after World War II that approximate methods hit their stride.
Computers excel at some of the things we struggle with, such as mistake-free, repetitive calculation, and this makes it possible to obtain approximate solutions to mathematical problems to a great degree of accuracy. Sometimes these arrive in minutes (see these Java applets of Newton's method), though some mathematical problems are so complex that they continue to stretch the limits of the largest computers in the world.
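As an illustration (this sketch is our own, not part of the original discussion), Newton's method can be applied to the example equation above by rewriting it as f(x) = sin(x) + 5cos(ln(x)) - exp(x) = 0 and repeatedly following the tangent line toward a root:

```python
import math

def f(x):
    # f(x) = 0 form of the equation sin(x) + 5*cos(ln(x)) = exp(x)
    return math.sin(x) + 5 * math.cos(math.log(x)) - math.exp(x)

def df(x):
    # Derivative of f, worked out by hand
    return math.cos(x) - 5 * math.sin(math.log(x)) / x - math.exp(x)

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# f is positive at x=1 and negative at x=2, so a root lies between them
root = newton(f, df, x0=1.5)
print(root, f(root))
```

Starting from x0 = 1.5, the iteration converges to a root between 1 and 2 in only a handful of steps, which is exactly the kind of rapid, mistake-free repetition computers are good at.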
The idea of approximation often goes beyond simple mathematics, since the equations we try to solve come from a set of assumptions used to define a model. An interesting historical example is that of numerical weather prediction, which was attempted without computers by Lewis Fry Richardson, and failed essentially because the data was not smoothed as it is in all modern weather prediction models. To put this into a simple context, imagine trying to use Newton's second law to predict the speed of two objects falling from rest, a marble and a toy skydiver. For the marble the familiar F=ma would give accurate predictions, but for the skydiver, treating the parachute as a point particle would be questionable (see the wiki on Drag for more).
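To make the marble-versus-skydiver contrast concrete, here is a small sketch of our own (the mass and drag coefficient are invented for the example) that steps both models forward with Euler's method: a pure point particle obeying dv/dt = g, and the same body with a quadratic air-resistance term, m dv/dt = mg - cv²:

```python
import math

g = 9.81   # gravitational acceleration, m/s^2
m = 0.1    # assumed mass of the toy skydiver, kg (illustrative value)
c = 0.5    # assumed quadratic drag coefficient, kg/m (illustrative value)

t, dt = 0.0, 0.001
v_free = 0.0  # point-particle model: dv/dt = g
v_drag = 0.0  # with air resistance: m*dv/dt = m*g - c*v^2
while t < 5.0:
    v_free += dt * g
    v_drag += dt * (g - (c / m) * v_drag ** 2)
    t += dt

# Speed at which drag exactly balances gravity
v_terminal = math.sqrt(m * g / c)
print(v_free, v_drag, v_terminal)
```

After five seconds the point-particle model predicts a speed near 49 m/s, while the drag model has long since settled at its terminal velocity of about 1.4 m/s: the same law of motion, but wildly different predictions depending on which assumptions go into the model.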