Sometimes I wonder how many folks find this subject intensely interesting and how many are totally bored by the idea of comparing benchmarks. Why even have benchmarks? After all, most vendors seem to think that companies are not interested in them so long as the product will "do the job in the real world." OK, let's pursue that line of thought for just a moment.
Let's say that you are buying a car for your company. Some of the constraints are the mileage, longevity, repair frequency (time and costs), ROI of the auto, etc. All of these are benchmarks. Now let's say that you are buying not for yourself or your company but for the military, and you want an armored vehicle. Then the constraints would be how effective the bulletproofing is, what firepower is mounted standard, what the performance in sand and mud is, what the carrying capacity is, etc. Again, all of this is benchmarking.
Now, let's say that you pick up any car magazine in your local bookstore. Check the charts and you will find, under performance, the times for 0 - 60 mph, standing 1/4 mile performance and top speed. Again, benchmarks. And, agreed, none of these are real-world benchmarks, since very few people take their Corvette or Camaro from 0 to 60 more than once or twice and almost none of them would ever take their auto over 150 mph. BUT these are benchmarks for automobiles. This is why we have such things as the Le Mans 24 Hour race - another benchmark to determine the best of breed.
Benchmarks for databases have been here since their advent and no one has ever complained about them. OK, not very much. Standard benchmarks - TPC benchmarks for databases themselves, SPECint and SPECfp for the processors underneath them - tell the potential customer how well the product being considered will perform under certain standard, non-real-world conditions. From this the customer can extrapolate whether a particular product has improved its performance compared to previous versions, or against other products, or from one date to another, and can try to determine whether the product will or will not perform as expected in their particular application(s).
So why, oh why, can't the rulebase / BRMS vendors agree on at least one benchmark for the performance of their products? Mostly it is fear that this will become the only basis for comparison of products.
This belief is based on past experience, when a potential customer would use the published benchmarks of KBSC or other independent consultants to beat the vendor about the head and shoulders until the vendor lowered their price to get an order. (Actually, the vendor didn't have to lower the price, but salesmen are notoriously weak-willed creatures who find it easier to sell on the basis of price alone, without any consideration of customer needs and their ability to fulfill those needs.)
After all, one does not purchase a Corvette out of a need for transportation; one purchases a Corvette because of its looks and its performance. The same thing can be said for a Mercedes, Cadillac or pickup truck - they are purchased NOT on performance alone but also on a myriad of little factors that make up the overall buying experience.
All of that to say this: I think I'll have the new benchmarks for most rulebase / BRMS systems ready for publication before September 15th this year. Right now we're going to use Waltz-50, WaltzDB-200 and Manners128-2009, running on Unix (Mac OS X), Windows XP (32 bit) and Windows 64 bit. That's a lot of work between now and then, but with the help of some friends maybe I'll make it.
Watch this space for more news on benchmarks. BTW, the BEST benchmark (after reviewing the standard benchmarks) is to put the product on your system, use your data, use your process, use your rules and run your tests. Only then will you know for sure whether the potential product is suitable for your application(s).
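The "run your own tests" advice above can be sketched as a tiny timing harness. This is a minimal illustration, not any vendor's tool: `run_rules` is a hypothetical stand-in for invoking your rule engine on your own rules and data, and the warm-up/repeat counts are arbitrary assumptions.

```python
import statistics
import time

def benchmark(run, warmups=2, repeats=5):
    """Time run() several times; return per-run wall-clock seconds.

    Warm-up runs are discarded so caching and JIT effects
    don't skew the measured runs.
    """
    for _ in range(warmups):
        run()
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        run()
        timings.append(time.perf_counter() - start)
    return timings

# Hypothetical stand-in: replace with a call into the product
# under evaluation, loaded with YOUR rules and YOUR facts.
def run_rules():
    sum(i * i for i in range(10_000))

timings = benchmark(run_rules)
print(f"best {min(timings):.4f}s  median {statistics.median(timings):.4f}s")
```

Reporting the best and median of several runs, rather than a single run, keeps one unlucky garbage-collection pause or OS hiccup from deciding the comparison.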
Looking forward to seeing all of you at the October Rules Fest. Check out http://www.OctoberRulesFest.org for more details on the world class lineup of speakers and activities.