Greetings:
As is my wont (don't you just love Old English?), I will be running my annual "First Quarter - No Quarter Given" benchmarks beginning in January of 2009. Right now I'm limited to the existing Waltz-50 and WaltzDB-16 benchmarks (which almost all Rete-engine vendors can run), plus, for some of them, WaltzDB-200, a new version of WaltzDB-16 that uses 200 variations rather than 16 - that should prove interesting.
Also, the platforms will be limited to Mac OS X running on a dual G5 with 4GB of RAM, and a Core2Duo running with 3GB of RAM. I have beefed up the Windows 32-bit XP machine (dual-threaded, single CPU) to 3GB of RAM just to be able to run certain software that is incompatible with either the Mac (Unix) or the Windows Vista 64-bit OS on an i7 CPU.
IF, and let me emphasize the IF, I can get the time, I will have benchmarks for Decision Table (only?) engines with a 10K and 100K Telecom Benchmark that will do nothing more than show off the processing power of single-row data validation. So far, the DT vendors have not been very helpful in coming up with a benchmark of their own this past year, so I pretty much label all of them a 5 in terms of performance - meaning that it's neither good nor bad; pretty much an unknown. Mostly because my editor won't allow me to give them a zero. :-)
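To make "single-row data validation" concrete, here's a rough sketch in plain Java of the kind of check each row would face. The CallRecord fields and the thresholds are just my own hypothetical stand-ins, not the actual benchmark spec; a Decision Table engine would express the same checks as rows in a table rather than as if-statements.

    // A rough sketch of the kind of single-row check the Telecom
    // benchmark would exercise. The CallRecord fields and thresholds
    // are hypothetical stand-ins, not the actual benchmark spec.
    public class CallRecordValidator {

        // One row of call-detail data (hypothetical layout).
        public static class CallRecord {
            String accountId;
            int durationSeconds;
            double charge;
        }

        // Each row is validated in isolation - no joins across rows -
        // so the test mostly measures raw per-rule throughput.
        public static boolean isValid(CallRecord r) {
            if (r.accountId == null || r.accountId.isEmpty()) return false;
            if (r.durationSeconds < 0 || r.durationSeconds > 86_400) return false;
            return r.charge >= 0.0;
        }

        public static void main(String[] args) {
            CallRecord r = new CallRecord();
            r.accountId = "A-1001";
            r.durationSeconds = 360;
            r.charge = 2.75;
            System.out.println("valid? " + isValid(r)); // valid? true
        }
    }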
Also, we might do some of what Peter Lin and/or Mark Proctor suggested in the way of "micro-benchmarks" that would remove any level of cheating. If we throw in Gary Riley's Sudoku and/or the 64-Queens problem, we'll have something else that is not actually business related but will give some indication of engine performance.
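I won't pin down the exact protocol for the micro-benchmarks here, but the general shape would be something like the sketch below: warm the JVM up first, then time several runs and keep the best, so nobody's numbers hinge on JIT or startup artifacts. This harness is my own illustration of the idea, not Peter's or Mark's actual proposal.

    // A minimal harness for the micro-benchmark idea: warm up first so
    // the JIT compiler has done its work, then time repeated runs and
    // report the best. The Runnable stands in for one engine run (load
    // rules, assert facts, fire); all names here are hypothetical.
    public class MicroBench {

        public static void run(String label, Runnable engineRun) {
            // Warm-up passes: measure compiled code, not startup cost.
            for (int i = 0; i < 5; i++) engineRun.run();

            long best = Long.MAX_VALUE;
            for (int i = 0; i < 10; i++) {
                long t0 = System.nanoTime();
                engineRun.run();
                best = Math.min(best, System.nanoTime() - t0);
            }
            System.out.printf("%s: best of 10 runs = %.3f ms%n", label, best / 1e6);
        }

        public static void main(String[] args) {
            // Placeholder workload; a real run would fire a rulebase.
            run("noop-baseline", () -> { /* assert facts, fire rules */ });
        }
    }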
The benchmarks will be:
1. Waltz-50
2. WaltzDB-16
3. WaltzDB-200
4. 10K Telecom
5. 100K Telecom
6. MicroBenchmarks
7. Sudoku
8. 64-Queens
The classes of vendors will be:
1. Rete-based engines, internal objects (CLIPS (?), JRules, Advisor, Jess, Drools, etc.)
2. Rete-based engines, external objects (CLIPS, JRules, Advisor, Jess, Drools, etc.) - the internal/external distinction is sketched just after this list
3. Compiled Java Engines (Visual Rules, OPSJ, JRules, Advisor, Drools, etc.)
4. Sequential Compiled Java Engines (Visual Rules, JRules, Advisor, Drools (?), et al.)
5. Decision Table Vendors (Corticon, Visual Rules, Haley Office System, VisiRules, etc., but could include JRules, Advisor and Drools)
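For those wondering about the internal- versus external-object split in classes 1 and 2, here is a rough illustration. The WorkingMemory interface below is a hypothetical composite for illustration only, not any particular vendor's API: the internal style hands the engine a textual fact that it parses into its own representation, while the external style lets the engine pattern-match directly against your own Java objects.

    // Rough illustration of internal vs. external objects. The
    // WorkingMemory interface is a hypothetical composite, NOT any
    // particular vendor's API.
    import java.util.ArrayList;
    import java.util.List;

    public class ObjectModes {

        // Your own domain class, living outside the engine.
        public static class Customer {
            final String name;
            Customer(String name) { this.name = name; }
        }

        interface WorkingMemory {
            // Internal-object style: the engine parses the fact text
            // into its own internal representation.
            void assertFact(String fact);

            // External-object style: the engine pattern-matches directly
            // against your Java bean (via reflection or generated code).
            void insert(Object javaBean);
        }

        // Toy in-memory implementation just so the sketch runs.
        static class ToyMemory implements WorkingMemory {
            final List<Object> facts = new ArrayList<>();
            public void assertFact(String fact) { facts.add(fact); }
            public void insert(Object bean)     { facts.add(bean); }
        }

        public static void main(String[] args) {
            ToyMemory wm = new ToyMemory();
            wm.assertFact("(customer (name \"Alice\"))"); // internal style
            wm.insert(new Customer("Bob"));               // external style
            System.out.println(wm.facts.size() + " facts asserted");
        }
    }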
Folks, that's a LOT of work for one little old Texas boy unless I can find someone independent to help, AND unless I can get some help from the vendors in writing these benchmarks, which would then be checked by myself and any independent help I can get. If you want to help (and thereby ensure your name is placed with the other immortals of rulebase benchmarking), send me your name and we'll get you started.
Remember, to help with the overall project, you MUST be independent and NOT working for any of the vendors being tested. (You can be working on a vendor's project as long as you are being paid by the client and NOT by the vendor.) To help with the project from a vendor point of view, all I need is the code for all of the tests in the appropriate syntax for that vendor. I (we) still have to read it and verify that nobody cheated, but that would be really helpful and will be duly noted in the tables that will be published.
Maybe, just maybe (no agreement yet), InfoWorld or some other equally high-visibility journal will be willing to publish these benchmarks as an article of some kind. Otherwise, it will be just another blog on benchmarks. :-)
SDG
jco