# Feasible Computations

We want to investigate how many steps a computer can execute in a reasonable amount of time. This will give us a rough, but interesting, upper bound on how much we can expect out of an algorithm.

The basic measure of a computational step for a CPU is derived from the frequency of the crystal that provides the clock for the CPU, e.g., a 100 MHz frequency allows 100 million basic steps per second. A simple CPU instruction can be executed in such a step. Commercial workstations at present operate at around 200 MHz, and supercomputers at less than 400 MHz.

• So one-billionth of a second, called a nanosecond and abbreviated ns, is shorter than the time needed to execute a basic cycle on current computers, and we will use it as a lower bound on the duration of a basic step.
• Computers can have large numbers of CPUs working in parallel; at present one million CPUs seems to be a safe upper bound on what is technically feasible.
• Finally, let us say that a project which takes a hundred years to complete is "hopeless".

With these numbers we can give an upper bound for the number of basic steps a computer can carry out on a particular project. (The particular numbers are not critical to our discussion; if you want to be more certain, multiply each one by a thousand.)
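The upper bound is just the product of the three quantities above. Here is a small Python sketch of the arithmetic, using the assumptions from the text (one basic step per nanosecond per CPU, a million CPUs, a hundred years):

```python
# Upper bound on the number of basic steps available to a project
# before it counts as "hopeless", using the assumptions in the text:
#   - one basic step per nanosecond, i.e. 10^9 steps per second per CPU
#   - at most 10^6 CPUs working in parallel
#   - at most 100 years of running time

STEPS_PER_SECOND = 10**9               # one step per nanosecond
CPUS = 10**6
SECONDS_PER_YEAR = 60 * 60 * 24 * 365  # about 3.15 * 10^7
YEARS = 100

budget = STEPS_PER_SECOND * CPUS * SECONDS_PER_YEAR * YEARS
print(f"step budget ~ {budget:.2e}")   # about 3 * 10^24
```

So the budget comes out at roughly 3 × 10^24 basic steps, comfortably below 10^25.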

Now, from the simple fact that 1 year ≈ 3 × 10^7 seconds, we have the following approximate conversion of years to nanoseconds: 1 year ≈ 3 × 10^16 ns. Thus a million CPUs running for a hundred years execute at most about 10^6 × 10^2 × 3 × 10^16 ≈ 3 × 10^24 basic steps, and a project requiring 10^25 basic steps is hopeless. Actually one could probably claim that even considerably fewer basic steps is hopeless for today's computers. In terms of powers of 2 we have 10^25 ≈ 2^83 (as 2^83 ≈ 9.7 × 10^24). In particular this says that trying to compute a truth table with 85 variables, which has 2^85 > 10^25 rows, is just impossible.
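The truth-table claim is easy to check numerically. The following Python sketch recomputes the step budget from the assumptions above and compares it against 2^n row counts:

```python
import math

# Step budget from the assumptions in the text:
# 10^9 steps/s per CPU, 10^6 CPUs, 100 years.
BUDGET = 10**9 * 10**6 * (60 * 60 * 24 * 365) * 100  # ~3.15 * 10^24

# A truth table on n variables has 2^n rows; 85 variables already
# exceeds the budget by an order of magnitude.
print(2**85 / BUDGET)        # roughly 12

# Largest n whose truth table still fits within the budget:
n = int(math.log2(BUDGET))
print(n)                     # 81, since 2^81 <= BUDGET < 2^82
```

So even under these generous assumptions, a truth table on more than roughly 81 variables cannot be enumerated within a hundred years.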

While we are looking at upper bounds, let us take a moment to consider the sizes of the popular bounds used in complexity theory, namely n, n log n, n^2, 2^n, n!, etc.
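To get a feel for how these bounds compare, here is a rough Python sketch that tabulates each growth rate at a few input sizes and flags which values exceed the ~3 × 10^24 step budget derived from the earlier assumptions (one step per nanosecond, a million CPUs, a hundred years):

```python
import math

# Hopeless-project step budget implied by the assumptions in the text.
BUDGET = 3 * 10**24

for n in (20, 40, 60, 80):
    bounds = [
        ("n", n),
        ("n log n", n * math.log2(n)),
        ("n^2", n**2),
        ("2^n", 2**n),
        ("n!", math.factorial(n)),
    ]
    for name, value in bounds:
        mark = "hopeless" if value > BUDGET else "feasible"
        print(f"n={n:3d}  {name:8s} {value:.3e}  {mark}")
```

The polynomial bounds stay far below the budget at every size shown, 2^n crosses it somewhere in the 80s (as the truth-table example showed), and n! is already hopeless by n = 40.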

Stan Burris
Fri Jan 31 11:15:02 EST 1997