Thursday, November 29, 2007

Senior?

OK, what does it mean to be a Senior anything? In the normal use of the word, a Senior Citizen is someone over 50. That's the only qualification - age. In the technical world, however, it usually means 10+ (sometimes 15 or 20) years of experience AND being at the top of your profession. In the Java world, a Senior Consultant is "normally" someone who knows OO design and most of the Java API, is usually (not always) certified at one level or another, and has been a team lead on at least one enterprise project using Java and/or J2EE and/or Spring, etc.

But what about the rulebase world? Where do we draw the line between a "Senior Consultant" and an "Architect" - or is there any difference? What follows here are just my thoughts, not referenced anywhere - so feel free to reference this blog as authoritative. I'm over 40 (OK, over 50), have 23 years of experience in IT (yes, since 1984), and another 10+ years before that in Electrical Engineering - so I have the longevity if nothing else. Does that alone make me a "Senior" anything? Nope. Not even the 15 or 20 published articles and white papers would serve to "officially" make me a Senior Consultant without considering their context, audience, etc.

General Patton once commented that he had a mule that had been through two world wars and many battles, but the mule was still a mule. Time alone is not sufficient. The "Senior" rulebase consultant must have been recognized by his/her peers (in the industry, not just within the company) as someone with exceptional expertise in something to do with rules and/or a rulebased system. And that person should have experience with MORE than just one particular rulebase tool or one vendor. In addition, that person should have experience with presentations, demonstrations, talks, and, most of all, at least ONE SUCCESSFUL enterprise design that has held up for several years. Installing a rulebase system is easy - any Java monkey can mimic what has been learned in a one-week class and install a system. BUT that system should still be in place, albeit with modifications, two or three years later.

So, what can we, as the industry "leaders," do to correct this all-too-obvious problem? We can demand (probably without results, but we don't know until we try) a vendor-agnostic certification board, spanning all vendors, that would verify and validate a candidate. A written exam is a rather silly way to do this since we have proven over and over (Microsoft and Sun are the latest victims of this craze) that written exams mean nothing. I have seen non-programmers study for and take the Sun Java Programmer certification and pass the first time. I have also seen experienced Java programmers, who did not study for the exam, fail.

Some of the best consultants in the rulebase industry have not one single written exam proclamation on their wall to proclaim their proclivity and productivity in a given field. Except, maybe, a Masters Degree or a Ph.D. - but even that is not usually on the wall. No, what we need is agreement among the techies of all of the vendors (something close to Nirvana) that would be sufficient to satisfy all. I'm thinking of something like a Master's-thesis or Ph.D.-thesis level white paper submitted to the panel - not some project paper but something that requires thinking and thought application.

So, here's my suggestion: Take this idea to the President or CEO of your company and explain that a "Senior" position means nothing today. But if we could all get agreement on the meaning of that title and validate a person's claims to being a "Senior Consultant," then the company would be able to charge basically twice what it has been getting in the past for the services of that person. Customers would benefit because they KNOW that the person on their project has been examined (not tested - examined) and found to be one of the best in the world. This would mean that a potential rulebase project would have a 90% or better chance of success rather than the 40% or less that we are getting today. Now, how much is a 50-point jump in the probability of success worth to a client or to a vendor? Let's have some fun and find out.

Rulebase Conflict Resolution

In the (rather old now) book "Pattern-Directed Inference Systems," D.A. Waterman and F. Hayes-Roth comment in the "Overview" that, "Linear growth in computation time with the size of the data base (sic) or the number of production rules is avoided by only considering data in the active part of memory when rules are applied and by associating with each node a list of rules that reference the node. When the node is activated the associated rules are selected for consideration. Thus the number of rules being tested against memory at any time depends on the number of currently active nodes rather than the size of the production system."

John McDermott and Charles Forgy continue this line of thought on page 177 with a section called "Production System Conflict Resolution Strategies" where they explain how a good CR strategy supports the rulebase interpreter by processing the rules in a manner such that only certain ones are selected. This is an excellent example of using meta-rules. In this short, article-length section they discuss CR, sensitivity, stability, special case rules, recency rules, evaluation, etc.

Although they don't discuss this in detail in the limited space made available to them, most vendors seem to feel that there are two major approaches to selecting the next rule - depth or breadth. Not true. The major approach should be either LEX - Lexicographic (as in ILOG JRules) - or Means-Ends Analysis, MEA - as in OPSJ and others. MEA is basically an extended and enhanced version of LEX. In the first implementation of the Rete Algorithm (OPS), the inventors used LEX, but in later versions this was improved with MEA. There is a good discussion of MEA and LEX in Cooper and Wogrin's book "Rule-Based (sic) Programming With OPS5." This is something more for vendors than for programmers, unless you are using a rulebase that allows you to select your own method of CR.

Without a good CR approach, you will find that the engine spends way too much time thrashing around. Once the Agenda Table is built, then, depending on the CR, selecting the next rule is a piece of cake for the engine. In addition to CR there are a lot of other topics that we, the rulebase community, should be considering, and I'll put all of that in another post later.
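To make the recency-then-specificity idea concrete, here is a minimal sketch of a LEX-style selection over a toy agenda. All class, field, and method names here are my own invention for illustration - this is not OPS5 or any vendor's API, and a real engine tracks much more than two numbers per activation.

```java
import java.util.*;

// Minimal sketch of a LEX-style conflict-resolution strategy:
// prefer activations whose matched facts are most recent, and break
// ties by specificity (number of conditions in the rule).
// All names are hypothetical, not taken from OPS5 or any vendor API.
public class LexAgenda {
    public static class Activation {
        final String rule;
        final int newestFactTime;   // recency: timestamp of most recent matched fact
        final int conditionCount;   // specificity: number of conditions in the rule
        public Activation(String rule, int newestFactTime, int conditionCount) {
            this.rule = rule;
            this.newestFactTime = newestFactTime;
            this.conditionCount = conditionCount;
        }
    }

    // Returns the name of the rule a LEX-like strategy would fire next.
    public static String selectNext(List<Activation> agenda) {
        return agenda.stream()
            .max(Comparator
                .comparingInt((Activation a) -> a.newestFactTime)   // recency first
                .thenComparingInt(a -> a.conditionCount))           // then specificity
            .map(a -> a.rule)
            .orElseThrow();
    }

    public static void main(String[] args) {
        List<Activation> agenda = List.of(
            new Activation("general-case", 5, 1),
            new Activation("special-case", 5, 3),   // same recency, more specific
            new Activation("stale-rule",   2, 4));  // most specific, but stale
        System.out.println(selectNext(agenda));     // prints special-case
    }
}
```

MEA, roughly speaking, would add an earlier comparison step - the recency of the fact matching the rule's first condition - before falling back to the LEX ordering above.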

SDG
Yaakov

Tuesday, November 20, 2007

Freddie Mac loses $2B - AGAIN

See http://biz.yahoo.com/ap/071120/earns_freddie_mac.html for reference.

The last time it was a $2B (OK, maybe $1B) error in bookkeeping. This time, they once again have made some "errors in judgement." So, to correct the problem they aren't going to restructure anything - that would mean admitting that what they had put in place was really bad. Instead they are going to hire Goldman Sachs and/or Lehman Brothers "... to help it examine possible new ways of raising capital in the near future." Fannie Mae, their sister company, is having the same problems for the same reasons.

This is what happens when you put a rulebased system (extensively) into a company with cheap labor, no thought process other than "Gitter Done," and just dumping rules into a rulebase because they look good at the time. I'm really familiar with this system only because they tried to get me to work on the project (I refused), later tried to get me to test the project (I refused), and later still tried to get me to debug the project (and I refused the third and last time) when they refused to allow me to re-write it properly. This is what I mean when I say that rules are declarative and should not be applied in a procedural manner, as was being done there. Decision Table on top of Decision Table on top of Decision Table. There are times when you have to walk away from a project to protect your own fragile reputation as a rulebase Architect.

They tried to hire some first class people a long time ago to help but only paid $40/hr including expenses. In McLean, Virginia. About the same expense scale as London, Berlin, or NYC. They tried to hire some of my friends - they refused as well. The ones that I know who did go there soon left with horror tales of mis-management and abuse of the BRMS tool that they were using on various projects. Unfortunately, the vendor had little control over what they did since they would not hire the vendor's professional services either.

Monday, November 19, 2007

Blogs on top of Blogs

Well, my buddy Mark Proctor has been really busy blogging about ruleflow and (with a slightly oblique reference to the Drools documentation on that subject) work flow, and how the two things make it easier for business guys to state their problem. I agree with the ruleflow documentation in that a ruleflow is only PART of the overall solution. So, where's the catch?

If you give a man a fish, he will eat today. If you teach him how to fish, he'll never go back to work. OK, bad analogy. How about this one: when all you have is a hammer, everything looks like a nail. If you give business people spreadsheets - what we call decision tables, but it's still a spreadsheet to them - or a ruleflow - what looks like a typical workflow engine to them - then THAT'S what they will do (and only that) because it is very familiar and it just "feels right". What can I say? We gave them the gun, they shot themselves in the foot with it, and then we blame them for doing it. And we, the rulebase community, claim that we told them the right way to do it and that they just messed it up.

OK, all of the above is procedural programming. You can't get around it. It's just another way to do Java or C++ or COBOL or VB or C# or whatever other procedural language is being used. And that is NOT a rulebase, because a rulebase is, primarily, declarative programming. Why, oh WHY are we, the guys who are supposed to know better, still pushing decision tables, decision trees and ruleflow as answers to complex problems? Any one of them probably IS an answer to a specific business problem, but not all problems can be solved that way - which is why we invented the rulebase in the first place: to solve complex problems that defy conventional, procedural programming.
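To show the declarative/procedural difference in the smallest possible way, here is a toy match-fire loop - purely a sketch of my own, not any vendor's engine (no Rete, no conflict resolution): the rules only declare WHEN they apply and WHAT to assert; a generic loop, not the programmer, decides what runs and in what order.

```java
import java.util.*;
import java.util.function.Predicate;

// Toy illustration of "declarative": each rule states a condition over
// working memory and a fact to assert; a generic match-fire loop runs
// them to a fixed point. Entirely hypothetical, not any vendor's API.
public class TinyEngine {
    public record Rule(String name, Predicate<Set<String>> when, String thenAssert) {}

    // Fires every rule whose condition holds, repeating until no rule
    // adds a new fact (a fixed point is reached).
    public static Set<String> run(List<Rule> rules, Set<String> facts) {
        Set<String> memory = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : rules) {
                if (r.when().test(memory) && memory.add(r.thenAssert())) {
                    changed = true;   // a new fact may enable other rules
                }
            }
        }
        return memory;
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule("grandparent",
                     f -> f.contains("parent(a,b)") && f.contains("parent(b,c)"),
                     "grandparent(a,c)"),
            new Rule("ancestor",
                     f -> f.contains("grandparent(a,c)"),
                     "ancestor(a,c)"));
        System.out.println(run(rules, Set.of("parent(a,b)", "parent(b,c)")));
    }
}
```

The point is that adding a new rule never means editing a call sequence; the loop discovers it. A real engine layers Rete matching and conflict resolution on top of this skeleton - which is exactly what a ruleflow or stacked decision tables throw away when they hard-code the order of evaluation.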

Bottom line? Don't use a hammer when a screwdriver is better suited for the purpose. Sure, the hammer will work to drive a screw into a wall, but it won't hold up very long. A screwdriver will work better. (Don't get snarky on me - sometimes a screw is just a screw and not anything sexual.) And PLEASE don't say that I'm against ruleflow, decision tables, decision trees, work flow, etc. I'm not. But a rulebase project just HAS to have its boundaries and applications, just like any other tool. Use it properly or it will backfire and kill you.

OK, maybe this is off the subject of what Mark was originally trying to write about. But we just HAVE to draw the line somewhere, and it's up to me, Mark, Peter, Dr. Forgy, Haley, ILOG, Fair Isaac, etc., etc., to start drawing it and try to write "quality" business - projects that have a chance of being extensible, expandable, and all of the other "-able" handles. BTW, here are the links to Mark's stuff so you can read it for yourself. You might start with the first two. The others are along the same lines.

The blog link
http://blog.athico.com/2007/11/vision-for-unified-rules-and-processes.html

The Drools documentation
http://docs.jboss.com/jbpm/pvm/

Another blog on same subject
http://www.dzone.com/links/a_vision_for_unified_rules_and_processes.html

And yet one more - pretty much the same thing in a different wrapper
http://digg.com/software/A_Vision_for_Unified_Rules_and_Processes

SDG
Yaakov

Sunday, November 18, 2007

Obama - TUCC - NOI

OK, this isn't a normal blog post about rules, but I only have the one blog space. Normally, I don't write about politics - those guys are WAY too obvious in what they say and, mostly, in what they don't say. But Obama seems to be working both ends against the middle. He attends Trinity United Church of Christ, TUCC, in Chicago, where he is very active in the church and the pastor is a close confidant - all of this taken from his press releases. I came across this while trying to de-bunk some stuff that was coming from my right-wing Christian friends, but when I went to Snopes I found

http://www.snopes.com/politics/obama/muslim.asp

Sorry, but I find it really, really hard to even THINK about a candidate, whether white, black, yellow, pink or purple, whose own church would be so racially biased as this one. The Black Value System is just one step removed from that espoused by the Black Muslims and their leader, Louis Farrakhan.

http://www.tucc.org/black_value_system.html

A VERY flattering bio of Louis Farrakhan by the Nation of Islam, NOI, is available on-line at
http://www.noi.org/mlfbio.htm

A Not-Quite-So-Flattering, and fairly long, documentary about Louis Farrakhan is found in a "New Politics" issue from Southern Illinois University-Edwardsville, a publication on the Rise and Decline of Louis Farrakhan:
http://www.wpunj.edu/~newpol/issue22/chajua22.htm

Finally, there is a rather neutral entry on Wikipedia that deals with the issue of Farrakhan without calling names at
http://en.wikipedia.org/wiki/Louis_Farrakhan

Now, am I saying that the TUCC is the same as the NOI? Short answer, no. Long answer, No Way. But it looks like TUCC is walking a mighty fine line between being mainline Christian and sliding down the slippery slope of extreme racism like the NOI.

But, back to the main issue: According to his own press release, Senator Barack Obama has a personal relationship with Christ. Unfortunately, he also steadfastly maintains that he not only attends the Trinity United Church of Christ in Chicago, but also subscribes to their social and religious values. Read this carefully, and if you don't agree with me, fine. If you do, then pass this along to whomever might benefit from it. If you don't have access to a high-speed connection, the text is printed below.

Me? If Obama makes it to the White House, I'm thinking seriously about emigrating to Denmark. It would be worth having to learn Danish and Danish history just to get away from all of the drain bamaged librals here in the USA.

Wait a minute! I'm from Texas now. We can secede from the Union - legally. Sounds good to me. And then we (Texas) could make the remaining 49 states our trading partners along with the Federal Government in Washington. Hey - and maybe get some of that foreign aid money from Washington. You know, the billions that they pour into other countries they could give to us. It should be a whole lot more than we're getting right now. :-)

SDG
Yaakov

Friday, November 9, 2007

Weekly Blogs

It's been said by those who have lots to say that if you have a blog then you should blog daily. Some do it several times a day. I can't do that. But I shall try (note the word "try") to do this weekly. And this one technically counts as this week's blog. :-)

Benchmarks: So, what has happened this week in the wonderful world of Rulebased System benchmarks? Well, for one thing, the past week has been one of rapid-fire comments about benchmarks from Ming Lin (Travelocity), Mark Proctor (Drools), Peter Lin (Independent Consultant), and Daniel Selman (ILOG). Not many others weighing in, so I will see if I can get some time to run some of those benchmarks as well as fire up the old Telecom benchmarks. That should stoke the fires considerably.

New things out and about in rulebased systems: Versata has a new release, 6.3.0. Visual Rules now runs with Eclipse. Paul Haley is still MIA - meaning we cannot seem to find out where he finally settled down outside of Haley Systems. Mark Proctor et cie are at a Drools conference. Willie Hall (formerly Neuron Data, formerly Blaze Software, formerly Mind Box) has followed Johnathan Halprin and Marty Saulenas in the Mind Box exodus to Fair Isaac's green pastures. James Taylor (formerly at Fair Isaac) is now promoting Open Source - after a fashion. Jess has a new version, 7.0p2, out now. Have not heard much from ILOG on the official front recently except for news blurbs where they got this deal or signed that contract.

Open Source: Lots of different definitions here but, personally, I think that the Apache license is probably the most open of all and the one to which I, personally, subscribe. For example, I "could" (not that I would) take the source for Drools, rename everything and re-write the screens, and then call it "Texas Rules" - and it would be quite legal from what I have heard about the Apache license. Mark Proctor would know more about this than I would. Anyway, it seems quite neat, and many companies are finding out that if they want to be in the software business rather than the banking business (for example), they can have their own proprietary version of most any Apache software - and re-sell it to their industry.

BTW, if any of you are in the insurance market, it seems that ACORD, http://www.acord.org , is THE company for de facto global e-commerce standards for insurance. This little company, pretty much unknown outside of insurance folks, was called to my attention by Daniel Brookshier who works for No Magic, an OO design company. He seems to think that they really are straight up and above board. Maybe we could have something like that for rulebased systems? But the customers would have to lead the charge on this one - not the vendors. Anyway, check it out and see what you think.

Next Week: We'll be talking about parallel rulebase systems and (maybe) statistical analysis of applications for the insurance industry that is based on the MYCIN work done back in the late 80's. Both interesting topics. And more on Benchmarks. I promise.

Meanwhile, remember, "To thine own self be true" is a nice thought - but being true to God is even more important. :-)

SDG
Yaakov

Saturday, November 3, 2007

Benchmarks - Final Bleat

OK, here's my final bleat on rulebase engine benchmarks. (Right - YOU believe me, don't you?) Anyway, benchmarks cannot, and should not, be done by the vendor but rather by some independent party; a firm that does not have any vested interest in any of the outcomes other than the truth and fairness of play. If a vendor produces the benchmark, then the vendor would have to allow anyone (and by that I mean someone outside of the company, including the competitors of said vendor) to double-check those figures, even if it means supplying a time-bombed version of the software to the company or person double-checking the facts.

If the benchmarks are produced by a company other than the vendor, then the company should be totally independent of that vendor, someone at "arm's length." This means that a company that is a "partner" of the vendor could not produce those benchmarks. However, if the company simply "used" those products in the normal performance of its business, then that would not only be OK but preferred. (Someone like a consulting company that is NOT a partner with any of the rulebase vendors, for example.) However, a company that did NOT use a rulebase in the pursuit of its normal business normally would not be eligible either, since that company probably would not have the expertise on board to discriminate between tests and understand if and when one vendor tries to "cheat" on the tests. And vendors do cheat.

On the subject of cheating, all vendors should be held accountable to the same standards. For example, if one vendor is allowed to use compiled code (in any form except compiling the Java or C or C++ classes themselves), then the other vendors should be allowed to do the same thing. An example of this is the compiling of the rules into Java that is run with a JIT compiler. OPSJ has ONLY this method of running the OPSJ rules. With JRules, Blaze Advisor and others it is an option. Drools and Jess can be run in interpreted mode only and so have put themselves in an unfair position. Corticon and others are NOT based on the OPS format of most benchmark rules, and they, also, sometimes have an advantage and sometimes a disadvantage. The company doing the benchmarks has to do everything possible to make sure that the playing field is level and, where it isn't, point that out to the readers. (Ergo, the reason for the pure independence of the benchmarking company.)

Also, the company producing the benchmarks should not only make the code and data readily available, but should also produce meaningful benchmarks; meaning that Miss Manners has about run its course and probably should be dropped from contention. It's nice to see the results, but the test itself is way too easy to cheat on, whether in the code itself or "under the covers" with optimization aimed strictly at that benchmark. (Yes, some companies have been doing that for years and we, the poor schmucks in the industry, have just now caught on.)

OK, let's assume that my company (KBSC) ran the benchmarks. We're independent. We aren't partners with anyone, much to our financial loss. We do NOT produce any kind of variant of the rulebase in question. We showed our code. What else can we do to ensure the integrity of our results? Here are some variables that should be posted with the benchmark:

Machine Name
Machine Model
Machine Speed
Machine RAM Total
Machine RAM available (did something else have the machine busy at the time?)
The command line used to run the tests
(the -Xmx and -Xms and -server mean something to the machine)
Operating System and version
Java, C++ or C# version used
Number of "warm up" cycles
J2EE or EJB used
(if so, which one and which version and show setup)
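Most of the items on that checklist can be captured mechanically from inside the JVM. Here is a sketch of a harness that records them next to a timed run. The workload itself is a placeholder, and the report format is just my own invention - only the JVM calls (System.getProperty, Runtime, ManagementFactory) are standard.

```java
import java.lang.management.ManagementFactory;

// Sketch of recording the environment details from the checklist above
// alongside a timed run, using only standard JVM APIs. The benchmark
// body is a placeholder; a real run would invoke the rule engine here.
public class BenchReport {
    // Times one run of the workload in milliseconds after the given
    // number of warm-up cycles (to let the JIT compile hot paths).
    public static long timedRun(Runnable workload, int warmupCycles) {
        for (int i = 0; i < warmupCycles; i++) workload.run();
        long start = System.nanoTime();
        workload.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Captures OS, Java version, heap settings, and the actual JVM
    // arguments (-Xmx, -Xms, -server, etc.) used for the run.
    public static String environmentReport() {
        Runtime rt = Runtime.getRuntime();
        return "OS: " + System.getProperty("os.name") + " "
                      + System.getProperty("os.version") + "\n"
             + "Java: " + System.getProperty("java.version") + "\n"
             + "Max heap (bytes): " + rt.maxMemory() + "\n"
             + "Free memory (bytes): " + rt.freeMemory() + "\n"
             + "JVM args: " + ManagementFactory.getRuntimeMXBean().getInputArguments();
    }

    public static void main(String[] args) {
        Runnable workload = () -> { /* run the rule engine benchmark here */ };
        long ms = timedRun(workload, 3);   // 3 warm-up cycles, then one timed run
        System.out.println(environmentReport());
        System.out.println("Run time: " + ms + " ms");
    }
}
```

Machine name, model, and raw CPU speed still have to be recorded by hand (or from OS-specific tools), but publishing even this much alongside every result would remove most of the "it ran faster on my box" arguments.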

There are lots of variables in running a benchmark. Every one of them must be shown, because one vendor runs better on one OS and another vendor runs better on another OS. The same thing applies to the Java version used, J2EE, etc. Unfortunately, since I (personally) left the independent arena, lots of folks have jumped into the gap, but few can step up to the plate and meet all of those requirements on all of the benchmarks that they produce. (OK, Steve Nunez is trying really hard, but Illation is, after all, not only a partner with ILOG but produces its own knowledge software as well.)

How do we fix the independent benchmarking problem? Well, we need an independent agency. And it must be like UL or ACORD (the de facto global e-commerce standards body for insurance) but designed for testing a rulebase and evaluating a BRMS; and we need it now! So, here's a thought: What IF we (the industry) did the following? We, the industry and customers, form an independent laboratory. Companies (vendors and/or customers) with 1,000 or more employees would contribute $50K annually toward such an organization. Companies with 100 - 999 employees would contribute $20K annually. Individuals would contribute $100 per year for membership. This would entitle them to the results of any and all tests being run, input to the company for fairness (but they would have to agree that the sole arbiter of any dispute would be the benchmarking company), and monthly reports on what's happening. Stockholders, if you will, but non-voting stockholders.

And the company itself would deal with anything to do with BRMS, including evaluation of products, benchmarks, and an annual ranking of products based on various criteria such as speed, integration with other products, technical support, initial costs, professional services effectiveness and costs, and anything else that might impact the customer. Vendors, of course, would be expected to contribute on the same scale as the companies subscribing to the service. (OK, I took a page out of Forrester's book!!)

[Personal grousing time] The reason that I had to leave the pure independent route was that I could not survive financially and do pure research - and nobody in the BRMS market would step up to the plate and provide the wherewithal to be independent and still survive. I even tried (in cahoots with InfoWorld) to form a company that would do what I described above, but IW and the rest of the BRMS vendors couldn't see an advantage for themselves or the readers. Well, maybe the readers of this blog can see what such an outfit could do to help the poor sod customers by showing them who's who and what's what BEFORE they try to do it themselves.

SDG
jco