Peter Seebach writes that while computers are getting faster all the time, the user experience of performance hasn't improved much over the past 15 years. (Via kurzweilai.net) About 10 years ago, people complained that Microsoft Word was too slow on the Mac: you could type faster than the processor could handle input on such a large application. The same thing still holds true. Similarly, 15 years ago a computer with a hard drive loaded a small command-line utility in under a second and a large graphics program in perhaps half a minute; those times haven't changed much since. So the question is, where is all the CPU power going? How is it possible that a machine with a full gigabyte of memory can run out of room to run applications just as quickly as a machine with six megabytes of memory did 15 years ago?
Modern CPUs do a lot of things differently from older CPUs: for instance, they can execute multiple instructions simultaneously. What's fascinating is that, for most users, performance isn't noticeably better today than it was 15 years ago. Computers are, in fact, doing more than they used to. A lot of what they do is fairly subtle, happening beneath the radar of a user's perception, and much of it is automatic. Many modern systems use antialiasing to render text. This makes text easier to read and can substantially improve the usability of low-resolution monitors and LCD displays. On the downside, antialiasing consumes a lot of processing power. Visual effects like drop shadows behind windows and menus, transparent menus, and real-time effects also take their share of the processor. Older systems used bitmap fonts, which rendered quickly at the provided size but looked ugly at any other size. Most modern systems render outline fonts in real time, which users have come to expect. Even with some caching involved, font rendering adds one more layer of processor overhead - but no vendor would dare release an interface with bitmap fonts today. The current Mac operating system gets around some of this graphical overhead by handing additional work to the rendering hardware on the video card. The video card essentially becomes a second processor, which cuts down on the time the CPU spends on graphics.
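To make the cost of antialiasing concrete, here is a minimal sketch (not any operating system's actual renderer) of the supersampling approach: each output pixel averages an N×N grid of subsamples, so a pixel the edge crosses gets an intermediate gray value instead of hard black or white - at the price of N×N times the sampling work per pixel.

```python
# Illustrative sketch: antialiasing a diagonal edge (the line y = x)
# by supersampling. Pixels fully on one side come out 0.0 or 1.0;
# the pixel the edge crosses gets a fractional coverage value.

def coverage(px, py, samples=4):
    """Fraction of the pixel at (px, py) lying below the line y = x."""
    hits = 0
    for sy in range(samples):
        for sx in range(samples):
            # Sample at the center of each sub-cell of the pixel.
            x = px + (sx + 0.5) / samples
            y = py + (sy + 0.5) / samples
            if y < x:
                hits += 1
    return hits / (samples * samples)

def render_row(width, py, samples=4):
    """One row of pixels: 0.0 = background, 1.0 = full ink."""
    return [coverage(px, py, samples) for px in range(width)]

row = render_row(6, 2)
# Pixels well left of the edge stay empty, pixels well right are full,
# and the pixel the edge passes through lands in between - smooth to
# the eye, but 16 samples of work where a bitmap font needed one.
```

A real renderer uses far more sophisticated coverage computation and caching, but the trade-off is the same: smoother text in exchange for more arithmetic per pixel.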
Most users feel a little put out when they can type faster than a word processor can process words. The worst days of this trend seem to be behind us now: most word processing programs started to keep up with even good typists somewhere around the 1 GHz clock-speed mark. These days, it's the automatic features of these programs that can slow down your system. Automatic spelling and grammar checking is the default behavior in most word processing applications. Some simply underline misspelled words and questionable grammar, while others automatically correct as you type. Not only are these corrections occasionally inaccurate (many writers turn this feature off), but the behavior also requires a lot of additional processing. A certain amount of a system's processing power also goes to improved safety and security features for your applications. Many of these features come in the form of critical security patches, necessary because the original code was written without enough attention to sanity checking. The problem with patches is that they add up over time: each one affects performance only marginally, but taken together they can amount to a decent time sink. Virus scanners are a more serious power hog than patches. Most virus scanners update themselves regularly, which makes for a small, but noticeable, amount of background activity. They also scan a lot more files than they once did.
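A hypothetical sketch of why as-you-type checking costs what it does: the editor re-checks the word buffer on every single keystroke. Each individual dictionary lookup is cheap; the expense is that it happens for every character typed, on top of rendering and everything else. (The tiny dictionary and function names here are invented for illustration.)

```python
# Toy model of as-you-type spell checking: every keystroke appends a
# character and re-runs the check over the buffer.

DICTIONARY = {"the", "quick", "brown", "fox", "jumps"}

def misspelled_words(text):
    """Return the words in `text` not found in the dictionary."""
    return [w for w in text.lower().split() if w not in DICTIONARY]

def on_keystroke(buffer, char):
    """Append one character, then re-check - once per keystroke."""
    buffer += char
    return buffer, misspelled_words(buffer)

buf = ""
for ch in "the quikc fox":        # 13 keystrokes -> 13 full re-checks
    buf, flagged = on_keystroke(buf, ch)
# flagged == ["quikc"]
```

Real implementations are smarter - they check only the word near the cursor and cache results - but the per-keystroke pattern is the same, which is why the feature is noticeable on large documents.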
All this scanning chews up a lot of processing time, which affects every program running on a system. For instance, a video game that uses a lot of graphical files and loads them on the fly could require the same 20 MB file to be scanned a dozen times during an hour's play. Security is a worthy and necessary use of processing power, and the alternatives are worse: spyware and viruses can consume incredible amounts of time. Another common cause of slow computers, at least for Windows users, is an accumulation of any number of programs that snoop on traffic, pop up advertisements, or otherwise make themselves indispensable to a marketer somewhere.
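The repeated-scan problem described above has a well-known mitigation, sketched here: cache scan verdicts keyed by a hash of the file's contents, so an unchanged file is scanned only once however many times it is loaded. The signature database and function names are made up for illustration, not taken from any real scanner.

```python
# Sketch of verdict caching for an on-access scanner: hash the content,
# and only run the (expensive) full scan on contents we haven't seen.

import hashlib

SIGNATURES = {b"EVIL_PAYLOAD"}   # stand-in for a real signature database

_scan_cache = {}   # content hash -> verdict
scan_count = 0     # how many full scans actually ran

def scan(data: bytes) -> bool:
    """Full scan: True if the data looks clean."""
    global scan_count
    scan_count += 1
    return not any(sig in data for sig in SIGNATURES)

def cached_scan(data: bytes) -> bool:
    """Scan only if this exact content hasn't been scanned before."""
    key = hashlib.sha256(data).hexdigest()
    if key not in _scan_cache:
        _scan_cache[key] = scan(data)
    return _scan_cache[key]

asset = b"game texture data" * 1000
for _ in range(12):              # the same asset loaded a dozen times
    assert cached_scan(asset)
# Only the first load triggered a real scan; scan_count == 1.
```

Hashing isn't free either, but it is far cheaper than re-running full signature matching over the same 20 MB file a dozen times an hour.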
Program complexity is probably the biggest culprit when a speedy processor still runs slow. As applications become more complex, a certain amount of their code (and thus your processing power) goes into making them more manageable. This code, which I'll call support code, makes programs easier for developers to write. A very large program might incorporate nested layers of support code. For instance, a Linux build of Mozilla might link to 30 or so different pieces of support code - including support code for the support code that Mozilla uses directly. Each piece is typically very efficient at its own task, and it does make the job of developing large-scale applications much easier. But the glue that enables all these small pieces of code to interact in a predictable manner adds a small runtime cost. Once again, a small cost repeated many times adds up to a significant performance hit.
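As a toy illustration of that layering (the layer count and bookkeeping here are invented, not measured from Mozilla): each layer of support code wraps the one below it with a little generic work - argument checking, logging hooks, and so on. The answer is identical, but every call now crosses several extra frames, a small per-call cost that compounds across millions of calls.

```python
# Toy model of nested support code: wrap a core routine in generic
# layers, each adding a little bookkeeping per call.

calls_through_layers = 0

def core_strlen(s):
    """The 'real' work the application actually wanted."""
    return len(s)

def make_layer(inner):
    """Wrap `inner` in one layer of generic support code."""
    def layer(s):
        global calls_through_layers
        calls_through_layers += 1        # stand-in for real bookkeeping
        if not isinstance(s, str):       # defensive check a library might add
            raise TypeError("expected str")
        return inner(s)
    return layer

# Build a 5-deep stack of support layers around the core routine.
api = core_strlen
for _ in range(5):
    api = make_layer(api)

result = api("hello")
# result == 5, but the call crossed 5 extra layers to get there.
```

Five layers around one string function is harmless; thirty shared libraries, each adding checks and indirection to every call an application makes, is where the "death by a thousand cuts" performance profile comes from.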
Some programs on Windows also install helpers that run at system startup. Each of these pre-loads its application's shared libraries, which in turn allows that application to launch more quickly later - while everyone else pays for it in memory and boot time. Worse yet, the bloat introduced by a second-system design is often preserved in future revisions to preserve compatibility.
Luckily, the worst is probably over. Around the time 800 MHz processors came out, users stopped feeling a driving need to upgrade constantly. Most users today can complete their work without waiting hours for the computer to perform its tasks.