Segments: Slices from the Macintosh Life
What’s Your Mac’s Subjective Speed?
It’s been an interesting few months since I last sat at the keyboard to hammer out an article for ATPM. Most of my time was spent doing the ordinary tasks we do every day—some were computer-related, and some were not. Since basic word processing doesn’t tax the system very much, the fact that I only had 512 MB of memory wasn’t often an issue. I did put off some graphics-intensive work because I knew it would slow things down more than I was willing to tolerate. Otherwise, things were pretty good in my corner of the Mac universe.
Right now you are probably screaming that I needed more memory no matter how happy I was, and you would be right. Mac OS X performs better when it has more than the minimum amount of memory. Besides, I didn’t like putting off some memory-intensive tasks simply because I knew they were going to slow the system down and produce the “spinning beach ball of eternal waiting.”
When I finally upgraded the memory on my G5 to 2 GB, I noticed a difference almost immediately. Word processing wasn’t any faster, but switching among multiple programs was noticeably quicker. Having multiple programs open had always been possible; now it was also practical. Then I had a brilliant idea: “Why not write an article that illustrates the speed benefits of having extra memory?” I could even include “real world” examples rather than the benchmark scores that are so often misunderstood.
So Where’s the Brilliant Article?
Now that I had a brilliant idea, it was time for some testing. I gathered speed data for routine tasks such as copying a single large file, copying multiple files, applying a filter to a large graphic, and manipulating word processing files. The problem is that, with the possible exception of applying filters to a graphic file, most of these tasks can be heavily influenced by other factors such as the speed of your hard disk. With the extra memory installed, for example, a large word processing document containing several graphics opened only one second faster than a smaller document with the same graphics.
Some graphics-related tasks did complete faster with the extra memory installed. For that test I used Preview to apply a sepia effect to an uncompressed 300 MB scanned photograph. I scanned that photo several years ago when I didn’t know what I know now about scanning resolution. Boy, do I wish Lee’s Photoshop series had been available then. My original thought was that every current Mac has Preview, so readers could duplicate this test. Unfortunately, this test is not very practical—how many people routinely apply filters to 300 MB graphics files?
Ultimately, I decided that the tests I had the equipment to run wouldn’t isolate the effects of memory. In modern multitasking operating systems there are just too many variables to consider. It would have been difficult at best, for example, to rule out the effect of the hard drive since OS X does such a seamless job of caching and managing virtual memory.
Why Not Use Benchmarking Software?
By now some of you are probably wondering why I didn’t just use software specifically designed to benchmark my system’s performance. That’s a legitimate question. In fact, I used to do that regularly. When a new Mac model was introduced, I wanted to know how my system fared against the new gear. Since I was running an LC II at the time, it usually didn’t fare very well. Those experiences have somewhat colored the way I think about benchmarking software. The remainder of my comments should not be construed as criticism of any specific benchmarking program.
Older benchmarking programs did their magic by taking a series of tasks and isolating them as much as possible. These tasks were then run on whatever system was chosen as the “reference system.” Users could then run the same tasks on their own systems and compare performance to the reference system. The problem with this approach is that the more your system differed from the reference system, the harder it was to determine where the bottlenecks were occurring.
There was an additional problem with this method: some of the results did not translate easily into the real world. Does a three or four millisecond difference in seek time produce a noticeable performance improvement in the “real world”? If you attempted to answer that question by running real world applications, then other variables were introduced. Are the two systems using the same operating system and the same version of the “real world” program? Does the test procedure reflect the way that you work? And by that point, I am sure there are other variables I haven’t even touched upon.
What About Modern Benchmarking Software?
Those concerns came to my mind in an era of single processor computers that measured speed in megahertz rather than gigahertz. What about more modern benchmarking software? Why not just use that?
I actually have CINEBENCH on my system, and I have run it exactly once. I’ll probably use it more often, simply because I like playing with that kind of software. So why didn’t I use it to benchmark my system? A bit of explanation is in order.
As modern computers have become more powerful, benchmarking tasks have, out of necessity, become more complex as well. CINEBENCH renders a photorealistic 3D scene and compares your computer’s performance under varying conditions. I understand why these tasks were chosen: the objective is to stress the system in order to see which components are causing a bottleneck. I am not sure, though, that this is representative of my typical tasks, and I suspect the same is true for many other users.
It’s my belief that benchmarking information is only truly useful if it was gathered in ways that match your typical tasks and working style. If most of what you do is word processing, you are probably not terribly concerned with how well your graphics card renders 3D scenes. Even when “real world” tasks are the standard, do they match how you work? In the days when most people launched one program at a time and closed it before beginning the next task, launch time was often used as a real-world benchmark. It’s not such a good benchmark now, since many users launch programs and leave them open until they either leave the computer for the day or shut down.
In my case there is an additional problem. The Intel Macs have been out for a while now. I expect my G5 to be a bit slower than most, and perhaps all, of the current Intel Macs. That’s not a big issue for me. I’d like to have an Intel Mac, but I have always kept a Mac well beyond the point where it was considered top of the line. I also keep reminding myself that I am not racing against other users with similar systems. Although comparing my numbers to similar systems might indicate which components need to be upgraded, by that point I am usually ready to upgrade my entire system rather than individual components.
Upgrading Based Upon Subjective Speed
In thinking back upon most of my Mac purchases, benchmark data has never really been an issue. Budget constraints have always argued against purchasing the fastest consumer system out there. Hard drive purchases have more often been made on the basis of a need for additional storage space rather than a need for additional speed for that component. I’ve usually upgraded memory because I needed additional memory to run specific pieces of software. This has led me to the conclusion that many Mac users make their upgrade decisions based upon “subjective speed.”
I suppose I really should define what I mean. It’s really quite simple: “How much longer does a given task take than you are willing to wait?” The larger the gap is between your expectations and your system’s performance, the more likely it is that you will explore the prospect of upgrading all or part of the system. The upgrade process will only be completed when the subjective performance gap is large enough to outweigh the total cost of the upgrade—including additional software upgrades and other costs. In my case, the total cost of upgrading to an Intel Mac has to include the cost of upgrading some of my software from PowerPC to Intel.
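The decision rule above can be sketched as a toy calculation. All of the names and numbers below are my own hypothetical illustration, not anything from measured data:

```python
# Toy sketch of the "subjective speed" upgrade rule described above.
# All function names and figures are hypothetical illustrations.

def should_upgrade(actual_seconds, acceptable_seconds,
                   annoyance_per_second, total_upgrade_cost):
    """Upgrade only when the gap between how long a task takes and how
    long you are willing to wait, weighted by how much the wait bothers
    you, outweighs the total cost of the upgrade (hardware plus any
    software upgrades it drags along)."""
    gap = max(0.0, actual_seconds - acceptable_seconds)
    perceived_pain = gap * annoyance_per_second
    return perceived_pain > total_upgrade_cost

# A filter that takes 45 seconds when you'd tolerate 5, valued at $10
# of annoyance per second, weighed against a $300 total upgrade cost:
print(should_upgrade(45, 5, 10, 300))   # gap of 40 s outweighs the cost
print(should_upgrade(6, 5, 10, 300))    # a 1 s gap does not
```

The point of the sketch is simply that the threshold is personal: the same measured gap can justify an upgrade for one user and not for another, depending on how they weigh the wait against the cost.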
If you keep your systems as long as I tend to keep mine and you upgrade based upon subjective speed, benchmark programs are of limited value. By the time you look at upgrading it’s probably time to upgrade the entire system rather than individual components. Benchmarking programs can be helpful in these circumstances, but I would bet that most users rely as much or more on recommendations from their fellow Mac users.
While you ponder your Mac’s subjective speed, I’ll start work on next month’s column. I’ll probably look at setting up iTunes presets for the Internet radio stations you listen to frequently.