This post was originally written by Glenn Paulley and published in May of 2008 on sybase.com. While old, this question still comes up and the same answer still applies today.
I don't keep detailed statistics about this, but I reckon that I receive a request from a customer for capacity planning advice somewhere between two and four times a month. Their questions are typically ones like these:
- Right now my largest system handles X users; can it handle double that? Do I need a faster machine?
- As my load increases, I'm starting to suffer performance problems. Should I purchase more memory for my system?
- I'm developing a brand-new application on SQL Anywhere. What will the performance be like?
My usual response to these inquiries is "I don't know", which, unsurprisingly, is a disappointing answer for the customer. But capacity planning is highly workload-dependent, so making any recommendations without thorough testing and analysis is ill-advised. In this I am guided by two principles, learned from two people for whom I have a great deal of respect:
- There are no right answers, only tradeoffs - William Cowan, Professor, University of Waterloo.
- All CPUs wait at the same speed - Gord Steindel, Technical Services Manager, Great-West Life Assurance Company.
I think the most important thing customers can do to get a handle on performance evaluation and capacity planning is to approach it systematically. Ivan Bowman and I recently wrote a white paper that describes the most important performance factors for SQL Anywhere applications and lays out a systematic approach to performance evaluation, one that permits a thorough analysis of performance and uses the conclusions of that analysis to drive planning decisions.