Tuesday, March 24, 2009

In Memory of Bubba the Rules Cat

Greetings:

For those who knew him, Bubba the Rules Cat (aka Richard Halsey) was one of the more interesting folks in the rulebase space. I worked on a couple of projects with him; Ericsson for 18 months and GMAC Insurance for another five or six. Whatever the subject of conversation during the day, Bubba rarely watched TV and would spend the evening hours researching arcane and little-known facts and technology. The next morning he would have several sites for our consideration. He was one of the very best researchers that I have had the privilege of knowing. And he had a low tolerance for fools and bad products.

Bubba was 66 years old. He was born June 14, 1942, and died "suddenly" on January 17, 2009. He loved his mother and his wife, Wanda, with a deep affection. He spent six years in the Air Force with tours in Germany and South Africa and attended the University of Central Florida.

In his local community he was called "The Mayor of Vizcaya" - probably due to his efforts in organizing a homeowners association; he was later elected president of that group. The HOA put a plaque in his honor at the base of a pear tree that he had planted there.

I remember when we were working in St. Louis and staying at the same motel: he got up early every morning, ate breakfast, and went outside to smoke and feed the birds. He told me that his mother always wanted to come back as a bird, so if she made it he was going to be sure to bring her breakfast every morning.

Bubba wanted to be cremated and buried next to his mother, so when he died Wanda did just that. They had a memorial service in Pensacola and then his brother, Jim, took his ashes to New Jersey for another service and burial there.

Quite a guy. He lived a full life. He and I used to chat at least once a month about various and sundry things, and I will sorely miss those calls. I do know that the last few years (he retired, for all practical purposes, a few years ago) were probably the happiest that I have known him to be. He quit smoking about the time he retired, but he still had his two (or sometimes three) Cuba Libres every day - one at lunch, a nap, a long walk on the beach, and then a couple more. He had a specific method for making them using only the best rum and only smooth-skinned limes. They really were quite good.

So - Bon Voyage Richard.  We'll miss you.  But I'm looking forward to seeing you again one day.  :-)

SDG
jco

Sunday, March 22, 2009

IBM/ILOG in Health Care - NY Times

Greetings:

Well, it had to happen. "The Doctor Will B.R.M.S. You Now" is an article that appeared in the NY Times. It gives a good shot in the arm to the industry as a whole and to ILOG in particular. There's also an article on Google in the health industry that makes interesting reading. I only hope that someone there actually read the MYCIN book before starting out on this project. What's new is old and what's old is new.

MYCIN was done in the late '70s and early '80s by the uber-geeks at Stanford. And it did a good job. Back then nobody trusted computers, so it wasn't used in diagnosis even though it did a better job than the doctors did. Today, maybe it will have a better chance.

SDG
jco

Rulebase Benchmarks 2009

Greetings:

In concert with certain other "technical" persons in this field (Dr. Forgy, Gary Riley, et al.) I would like to propose a new-and-improved benchmark for rulebased systems. Waltz and WaltzDB are still valid benchmarks but, unfortunately, most folks (read: programmers) have trouble visualizing the line-labeling logic that infers a 3D interpretation from a 2D drawing. This is elementary for a "normal" engineer because they have had to endure Mechanical Engineering 101, aka Drafting Class.

I was "privileged" to see some recent emails from a "Benchmark" consortium and the vendors (surprise of surprises) did not like the Waltz nor the WaltzDB benchmarks because they were not "real world." First, the vendors have not defined what is a "real world" and second, since when was a benchmark "real world"?? The "real world" of most vendors these days is composed of financial problems that, while they are sometimes quite large, are never complex. (Think Abstract Algebra or Partial Differential Equation level of "complexity.")

WHY would we need a rulebase that can handle both massive numbers of rules and objects and very complex rules? Think about Homeland Security, where there are thousands of Ports of Entry (POE), perhaps a million travelers every day, and millions of cargo boxes being shipped into the USA. The database query alone could take days when you have only a few minutes to determine (1) does a threat exist and (2) what is the level of that threat? An automobile of a specific nature parked too long in one place (think OKC) could be a "clue" to the threat level. A new object of a certain type on a roadway (think IED) is a potential threat in some areas but not in others.

One person from Saudi Arabia might not be (most likely is not) a threat. But knowing that he or she is related to another person from the same village entering at another port, combined with a third person from the same village either already here or entering from yet another port, might quite possibly raise the threat level. (Meaning, what are the odds of three persons from the same small village in Saudi Arabia entering the USA over a one-, two- or three-day period?) These decisions sometimes have to be made within seconds, while a person is standing at the counter - not hours later, when that same person has been passed through and has disappeared into the local population.
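
For the skeptics, here is roughly what that correlation looks like in code. This is a minimal Java sketch - the Entry class, the three-day window and the threshold of three are my own illustrative assumptions, not anyone's actual watch-list logic:

import java.util.*;

public class VillageCorrelation {
    // Hypothetical port-of-entry record; "day" is just a day number.
    static class Entry {
        final String traveler, village, port;
        final int day;
        Entry(String traveler, String village, String port, int day) {
            this.traveler = traveler; this.village = village;
            this.port = port; this.day = day;
        }
    }

    // Flag any village with 'threshold' or more entries, at any ports,
    // within a rolling window of 'windowDays' days.
    static Set<String> flaggedVillages(List<Entry> entries, int threshold, int windowDays) {
        Map<String, List<Integer>> daysByVillage = new HashMap<>();
        for (Entry e : entries) {
            daysByVillage.computeIfAbsent(e.village, k -> new ArrayList<>()).add(e.day);
        }
        Set<String> flagged = new TreeSet<>();
        for (Map.Entry<String, List<Integer>> v : daysByVillage.entrySet()) {
            List<Integer> days = v.getValue();
            Collections.sort(days);
            for (int i = 0; i + threshold - 1 < days.size(); i++) {
                if (days.get(i + threshold - 1) - days.get(i) <= windowDays) {
                    flagged.add(v.getKey());    // e.g., three entries within three days
                    break;
                }
            }
        }
        return flagged;
    }

    public static void main(String[] args) {
        List<Entry> entries = Arrays.asList(
            new Entry("A", "SmallVillage", "JFK", 1),
            new Entry("B", "SmallVillage", "LAX", 2),
            new Entry("C", "SmallVillage", "MIA", 3));
        System.out.println(flaggedVillages(entries, 3, 3));   // [SmallVillage]
    }
}

The point of the benchmark question is that a Rete-based engine does this kind of join incrementally, as each fact arrives, instead of re-running a query over the whole database every time someone steps up to the counter.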

Think of what happens in health underwriting, where the things that must be considered are many and related. For example, a bad back (or knee or foot or whatever) could lead to declining health and a possible heart attack, depending on the severity of the injury. A heart attack could lead to even more declining health and death. Family history can and does play a huge part in underwriting. For example, being overweight means a potentially increased risk of diabetes. If nearly everyone in the family has had diabetes (of either type), then the risks escalate. Having a family history of heart problems as well makes the problem even riskier. This is a large and complex problem that needs fast resolution - assuming, of course, that the data are available in the first place. The reasoning process here is (or can be) extremely complex, and most times the human underwriter is the only one trusted to make that kind of determination, when a rulebase would be a much better approach.
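
Underwriters will recognize the pattern: each factor is a rule whose conclusion feeds the conditions of other rules. A trivial, hedged Java sketch (the factors, weights and cut-offs below are invented for illustration, not actuarial data):

public class Underwriting {
    // Toy risk scoring: note how the factors compound rather than merely add.
    static String riskClass(boolean overweight, boolean familyDiabetes,
                            boolean familyHeartDisease, boolean priorHeartAttack) {
        int risk = 0;
        if (overweight) risk += 1;
        if (overweight && familyDiabetes) risk += 2;      // escalation, per the text
        if (familyHeartDisease) risk += 2;
        if (priorHeartAttack) risk += 4;
        if (priorHeartAttack && familyHeartDisease) risk += 2;
        return risk >= 7 ? "decline" : risk >= 4 ? "rated" : "standard";
    }

    public static void main(String[] args) {
        System.out.println(riskClass(true, true, true, false));   // rated
    }
}

Even this toy version shows why a flat decision table struggles here: the conditions are not independent columns; they chain.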

Fraud detection is a complex issue that is normally addressed from a superficial viewpoint rather than something "in depth" that might be reasonably accurate. Some of the issues of fraud detection (or homeland security, or underwriting) could be handled with a Rule-Based Forecasting (RBF) system, as well as by linking the rulebase with neural networks to help predict what will happen. It was shown back in 1989 that a neural net was much better at forecasting a time-dependent series than even the far more popular Box-Jenkins method of analysis and forecasting. It isn't much good at non-temporal situations, but that's another story.

Let us return to our primary discussion of what should compose a rulebase benchmark. It should be composed of several tests (a toy sketch follows the list):

Forward Chaining
Backward Chaining
Non-Monotonicity
Complex Rules
Rules with a high level of Specificity
Lots of (maybe 100 or more) "simple" rules that chain between themselves
Stress the conflict resolution strategy
Stress pattern matching
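
To make the "chaining" and non-monotonicity items concrete, here is the toy sketch promised above - a crude forward-chainer in Java with three invented rules, no Rete, and first-match-wins standing in for a real conflict resolution strategy:

import java.util.*;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class ToyChainer {
    // A rule: if working memory satisfies the condition, run the action.
    static class Rule {
        final String name;
        final Predicate<Set<String>> condition;
        final Consumer<Set<String>> action;
        Rule(String name, Predicate<Set<String>> c, Consumer<Set<String>> a) {
            this.name = name; this.condition = c; this.action = a;
        }
    }

    public static void main(String[] args) {
        Set<String> wm = new TreeSet<>(Collections.singleton("order-received"));
        List<Rule> rules = Arrays.asList(
            new Rule("validate", m -> m.contains("order-received"),
                     m -> m.add("order-valid")),
            new Rule("price",    m -> m.contains("order-valid"),
                     m -> m.add("order-priced")),
            new Rule("ship",     m -> m.contains("order-priced"),
                     m -> { m.remove("order-received");    // non-monotonic retract
                            m.add("order-shipped"); }));
        Set<String> fired = new HashSet<>();   // refraction: fire each rule once
        boolean changed = true;
        while (changed) {                      // the match-resolve-act cycle
            changed = false;
            for (Rule r : rules) {
                if (!fired.contains(r.name) && r.condition.test(wm)) {
                    r.action.accept(wm);
                    fired.add(r.name);
                    changed = true;
                    break;                     // "conflict resolution": first match
                }
            }
        }
        System.out.println(wm);   // [order-priced, order-shipped, order-valid]
    }
}

A real benchmark would stress exactly the parts this toy waves away: the pattern matcher, the agenda, and what happens when a retraction invalidates rules already queued to fire.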

Just having an overwhelming amount of data is not sufficient for a rulebase benchmark - that would be more in line with a test of database efficiency and/or available memory. Further, it has been "proven" over time that compiling rules into Java or C++ code (something that vendors call "sequential rules") is much faster than using the inference engine. True, and it should be. After all, most inference engines are written in Java or C++, and the rules are merely an extension - another layer of abstraction, if you will. But sequential rules do not have the flexibility of the engine and, in most cases, have to be "manually" arranged so that they fire in the correct order. An inference engine, being non-monotonic, does not have that restriction.
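
And for contrast, the "sequential rules" the vendors sell are essentially the same three rules from the sketch above, compiled flat, where the programmer - not the engine - is on the hook for the firing order (again, an invented illustration):

import java.util.*;

public class SequentialRules {
    // Hand-ordered "sequential rules": swap any two if-blocks and one pass
    // no longer produces the same result - the ordering IS the program.
    static void run(Set<String> wm) {
        if (wm.contains("order-received")) wm.add("order-valid");
        if (wm.contains("order-valid"))    wm.add("order-priced");
        if (wm.contains("order-priced")) { wm.remove("order-received");
                                           wm.add("order-shipped"); }
    }

    public static void main(String[] args) {
        Set<String> wm = new TreeSet<>(Collections.singleton("order-received"));
        run(wm);
        System.out.println(wm);   // [order-priced, order-shipped, order-valid]
    }
}

Fast, yes - and brittle in exactly the way the paragraph above describes.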

Simply put, most rulebased systems cannot pass muster on the simple WaltzDB-16 benchmark. We now have a WaltzDB-200 test should they want to try something more massive.

New Benchmarks: Perhaps we should try some of the NP-hard problems - that would eliminate most of the "also-ran" tools. Also, perhaps we should be checking the "flexibility" of a rulebase by processing on multiple platforms (not just Windows) as well as checking performance and scalability on multiple processors; perhaps 4-, 8- or 16-CPU (or more) machines. An 8-core/16-thread Mac is now available at a reasonable price, as is the Intel i7 (basically 4 cores/8 threads). But these are 64-bit CPUs, and some rule engines are not supported on 64-bit platforms. Sad, but true. Some won't even run on Unix but only on Linux. Again, sad, but true.

So, any ideas? I'm thinking that someone, somewhere has a better suggestion than 64-Queens or Sudoku. Hopefully...

Anyway, the next blog on ORF 2009 will be about Miss Manners - the new version where we don't tell you HOW to solve the problem (we don't give you the rules) but you have to get the right answer. Probably, in order to get the "right" answer, we will have to provide the data. Unless, of course, some kind vendor would find a college intern to do that. :-)


SDG
jco

Monday, March 16, 2009

A Rare Retraction

Greetings:

OK. I was wrong. Visual Rules is NOT a spreadsheet-looking development GUI but rather more of a modeling environment. My upcoming article in InfoWorld (hopefully next week or the week after) will more than demonstrate that. Their GUI is one of the coolest in the industry, eclipsed only by (maybe) Haley Expert Rules (not Haley Office Rules). There! I reviewed the product in July of 2008 and again in December 2008, but somehow (old age? Nahhh - I've always been like this) I lumped it in with Corticon and other spreadsheet-looking GUI environments.

OK, I blogged the error. Now I've blogged the retraction. Sorry, David K.

SDG
jco

Sunday, March 15, 2009

Ladder Logic Relays, Computers and MYCIN

Greetings:

If you have a sufficient mass of grey hair (or should that be "gray" hair?) you might remember "Ladder Logic," which was used for relays - those electro-mechanical monstrosities that controlled machines long, long ago, in the dark ages before solid state. Ladder logic lives on in the PLC (Programmable Logic Controller), popularized by both the Square D and Allen-Bradley companies. A good view of this kind of logic is available at http://en.wikipedia.org/wiki/File:Ladder_diagram.png if you would like to see it.

Today's spreadsheets and decision tables are about as archaic as that particular concept and serve basically the same function.  

IF this is true AND this is true AND this is true 
THEN do that

How simple.  Just like Ladder Logic.  Yet, when many of these IF-THEN rules are strung together, how complex they swiftly seem to become.  (Not really - but the human brain cannot usually do more than four or five things at once.)  So, we bunch them up and put them in groups to make it simpler for the simple minds of humans.   And we are left with simple solutions for simple minds.

But think about it... What is happening is not very complex. To move just ONE STEP toward complexity, consider what the MYCIN group did: they implemented the concept of "measures of belief and disbelief" in the rules, along with some other statistical concepts, and nobody has EVER gone back to that concept since - it is too complicated for the simple minds of most "computer scientists," those minions to whom we entrust our programming today. If you can obtain the book by Buchanan and Shortliffe on the MYCIN project, GET IT, READ IT, UNDERSTAND IT!!
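
For those who want a taste before the book arrives, the heart of MYCIN's certainty-factor arithmetic fits in a few lines of Java. The combining formulas below are the ones Buchanan and Shortliffe describe; the evidence values in main() are invented:

public class CertaintyFactors {
    // Combine two certainty factors in [-1, 1]. A CF is the measure of
    // belief minus the measure of disbelief (CF = MB - MD).
    static double combine(double cf1, double cf2) {
        if (cf1 >= 0 && cf2 >= 0) return cf1 + cf2 * (1 - cf1);
        if (cf1 < 0 && cf2 < 0)   return cf1 + cf2 * (1 + cf1);
        return (cf1 + cf2) / (1 - Math.min(Math.abs(cf1), Math.abs(cf2)));
    }

    public static void main(String[] args) {
        double cf = combine(0.6, 0.6);   // two rules each lend 0.6 belief -> 0.84
        cf = combine(cf, -0.4);          // a third piece of evidence disagrees
        System.out.println(cf);          // ~0.73 - belief, tempered by doubt
    }
}

Notice that no spreadsheet-style decision table can even express the third piece of evidence firing against the first two; that is the ONE STEP I am talking about.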

Then, after you read it and understand it, let's move beyond MYCIN. What if we integrated rulebased systems and neural nets? What if we integrated digital computers and analog computers? (Typical response to that one: "Tried it. Failed. Not possible.") Cyc (Doug Lenat's huge, monstrous, immensely complicated "real" knowledge-based system, to be completed in 2025) may or may not be the answer, but surely we can move toward something in between the brain-dead, incredibly simplistic BRMS products that are out there and the immensely complex Cyc Project. Any ideas?

SDG
jco

Saturday, March 14, 2009

Why Can't The English Teach Their Children How to Speak?

Greetings:

English is the language spoken (usually very poorly) around the world. But there are some countries where English should be spoken and written properly (England, Scotland, Wales, Ireland, the USA, Australia, New Zealand and Eastern Canada). It's an official language of India and Pakistan, but you could not tell this from the language spoken or written there. While in London and Southampton for long periods I was determined to develop a true British accent. The problem was that no one person there spoke like the next. Finally, I gave up and decided that speaking properly with a decent mid-western-USA accent was far better than ANY of those terrible dialects. Even if it does have a slight Texas twang. :-)

What set me off this time was the phrase from a fairly popular blog, "Is the data flat?" Data is plural; datum is singular. So the sentence should read, "Are the data flat?" It could read, "Is the data structure flat?" or "Is the structure of the data flat?" Likewise, agenda is the plural of agendum.

The other one that drives me nuts is when someone says, "Me and Bob are going to lunch. Want to go?" It should be, "Bob and I are going to lunch." And what about when someone writes "were" for "we're"? Or ends a sentence with a preposition, as in "Where are you going to?" when "Where are you going?" is sufficient. Or the infamous, "What did you do that for?" The most egregious of all: "You need to get aholt of the data." OUCH! "Get a hold on the data"? Even that is terrible.

What about the spoken variety? Such as using "git" rather than "get." Or using "caint" (rhymes with paint) for "can't," which rhymes with pant. Or saying "O'tel" rather than "Hotel." I had the ignominious displeasure of listening to a linguist preach today - but she used the most terrible English when I KNOW that she fully knows how to speak properly. She was either heavily influenced by her congregation or by her surroundings growing up, and just never got out of improper usage of the English language. (She speaks and writes some 10 or 12 languages fairly fluently.)

I'm reminded every day of the musical "My Fair Lady," in which Professor Higgins asks the eternal question, "Why can't the English teach their children how to speak?" If you haven't seen it, check it out from your local DVD store and at least watch the opening song by Rex Harrison: "The way an Englishman speaks absolutely classifies him. The moment an Englishman speaks he makes some other Englishman despise him." (Also from the movie and that song.)

So brush up on your English by reading something of quality. Shakespeare comes to mind. Or the Bible. Get out of the code books for a while. STOP reading all of the infernal, terrible documentation produced by someone in another country - or even in our own English-speaking countries, when it was written by some dim-witted high-school dropout!

SDG
jco

Corticon and Tibco

Greetings:

There is a blog called "The Orange Mile" (a takeoff on "The Green Mile"?) that published a "review" of Corticon as used with Tibco. They were even less impressed than I was when I reviewed Corticon several years ago - about 2006 or so. The link to the Orange Mile article is http://orangemile.blogspot.com/2008/06/corticon-rule-engine-review.html so you can read it for yourself. I could not find an author's name or home-base link, so it must be anonymous or something. The link to my article is http://www.infoworld.com/article/06/07/07/28TCcorticon_1.html if you would like to read the original brouhaha.

Corticon calls what they do DETI, or Design Time Inferencing. The math that they claim is part of the optimization process is doubtful, since they don't show you what they do. ("Pay no attention to the man behind the curtain." - Wizard of Oz) That being said, what they do is basically static and, to my way of thinking, not as flexible as a true rulebased system.

On the other hand, you must consider that ILOG, FICO, and others now have sequential rules that are nothing more than what Corticon is doing. Even the "Decision Tables" from FICO, ILOG and Drools are extremely similar to Corticon in that each row is a static rule as well. True, the row (rule) is processed by the rule engine, BUT it's still pretty much a static process. FICO has just recently introduced a "gap analysis" tool that Corticon has had for years.

Visual Rules and VisiRules are code generators: Visual Rules from a spreadsheet and VisiRules from a model-driven process. They, too, are static processes. Visual Rules generates plain-Jane Java code, while VisiRules generates a high-level processor for Prolog.

Being a purist, I prefer a "real" inference engine that can deal with anomalies and incomplete rules and does have some kind of Conflict Resolution Strategy - as has been mentioned and discussed in other blogs at this link.  This is the BEST way to handle complex logic, incomplete rules and anything that might require thinking.

On the other hand, if ALL that you are doing is processing straight "out of the procedure book" business logic, why would you NEED an inference engine? The answer is obvious: you don't. You may as well use a spreadsheet and get it over with as easily as possible. And, so long as you don't have over a few hundred rules in each set, it should work quite well.

All of that being said, why be critical of a company that is doing the same thing to the rulebase industry as the others, except that they don't have the fall-back position of a real inference engine should they need it? They (ILOG, FICO et al.) gave their "stamp of approval" to the bastardization of the rulebase industry when they started down the Decision Table, Decision Tree, Compiled Sequential route a long time ago. Now that somebody is giving them the come-uppance they deserve, they begin to whine like a mule. It makes you want to gag at the gall and hypocrisy of the "Big Four" vendors.

SDG
jco

Thursday, March 12, 2009

Are Business Users "Dummies?"

Greetings:

In almost every enterprise organization to which I am called for consulting, there is a HUGE rift between the business analysts (who firmly believe that IT is a closed society keeping everything a secret, AND who want control over their own logic, meaning the rules) and IT (who are just as firmly convinced that the business analysts are idiots and cannot be trusted with writing rules). Cooperation seems to be something that only 98-pound weaklings engage in while whining about how the world mistreats them. Simply put, all of this garbage HAS to stop.

All of this mistrust is, of course, built on decades of mainframe culture in which "users" were not programmers but pretty much the folks who had to pay the bills for whatever IT decided to do - and IT sure as heck wasn't telling anyone the inner secrets of their domain.  It was all about "empire building" and control.  

And I have seen situations where the business users actually got their grubby little paws on a rule engine (Advisor, JRules, whatever) and were told by the marketeers (i.e., salespersons of dubious parentage) that they did not need IT. And they believed it.

Bottom line: the company suffers and gets wrapped up in the internal "politics" of who gets to do what. The whole mess is totally disgusting!! IT needs to let go of the rules of the business and let the business folks take control of (and therefore take on the verification of, and responsibility for) the business rules. Doing so would remove the "impedance mismatch" between the two organizations. After all, IT has enough on its plate just getting the proper architecture for the rules, setting up the J2EE or EJB part of the project, getting the data straightened out and verified, writing new GUI screens, etc.

I actually went through the MBA program (a long time ago, admittedly), but even then the business guys knew how to do partial differential equations, abstract algebra and constraint-based programming, and they were really, really good at doing spreadsheets and documentation - FAR better than the IT guys, who usually had completed an introduction to basic calculus and, if pressed, could use a spreadsheet and/or word processor almost as well as their departmental secretary - meaning, rather poorly.

Before you accuse me of being a business "geek," remember that my degree was in Electrical/Electronic Engineering with a focus on microwaves and computers. Before that I spent 10 years in the trenches as an ET doing long-range radar, S-band radar, VHF radar, etc., etc. So, I was trained as a "Dilbert" and grew into the MBA stuff.

I know both sides of the fence, and I know them well, and I am saying that both IT and the business departments need to grow up, quit acting like spoiled brats and learn how to share. Until that happens, neither side wins, and both (meaning the company itself) will probably lose considerable resources to their incessant squabbling and whining. Times are hard, and we don't have the luxury of fighting among ourselves any longer.

So, like any marriage, IT must learn how to cooperate with the other side, even when the answers to the "why" questions don't make sense. Accept that the business guys HAVE to think positively and that they really believe everything will turn out wonderfully well. And the business guys must understand the need for the structure upon which IT will insist - IT has been burned many times by not considering the worst thing that could happen.

Now, go back to work and do something for the company BEFORE you think about your own department or group!!!!

SDG
jco

Monday, March 9, 2009

Greetings:

I have added a new signature to my "normal" quotes from the Bard of Avon.  This one is especially applicable to today.  

"NOW do you believe?"  (Morpheus to Trinity in "The Matrix")
Buy USA FIRST! Then the manufacturers and Congress will follow the lead of those whom they were supposed to be leading!

Maybe, just maybe, this myth of a "World Economy" will be proven to be as false as those who propagated it for their own economic gain.  There is an old expression:  "Charity begins at home."  Well, so does buying and selling.  You can start by frequenting those businesses in your OWN home town.  If you can't get it there, then check out neighboring cities.  The best deal for you, for me, for all of us, is to buy from our neighbors and friends rather than from some unknown in another town or country - even if it is a few dollars (or cents) cheaper.

If WE start, then the manufacturers will follow. Once they are in line, our own whores, the politicians, will follow like blind sheep. It has always been thus: the leaders will follow those whom they are supposed to lead.

SDG
jco

Drools Adds"Command Design Pattern" Calls to Version 5

Greetings:

Well, version 5 isn't even out the door before Mark begins adding things. Check out http://rafb.net/p/DTK56i23.html for his latest example code. What he's doing is trying to have the same code work for stateful or stateless sessions for system calls to Java. Also see http://en.wikipedia.org/wiki/Command_pattern for more info. Checking out the lines with Mark:

l. 61: This is for doing an insert. It marshals the XML into a Batch Execution Command.

l. 88: This means that it will return the BatchExecutionResults, which supports full marshalling of the results back to XML.

l. 519: Shows how a pipeline works. It is a unit test, and the String makes it more readable. I prefer StringBuffer, but Mark says that I'm old and should be using StringBuilder now. Oh, well.

l. 113: Shows how you would insert a set of elements.

l. 293: What Mark calls "the money part" of the dual state machine.

l. 307: Where they create a batch command to insert the four kinds of cheese. Also, notice that they specify two queries, which are viewable on lines 297 and 301.

So, what does that mean for ordinary Joe the Plumber? Well, the queries have an identifier under which their results can be added to the BatchExecutionResults. Also, when executing a batch (a shell script, to us Unix geeks) you can get back specific facts or globals via an "out" attribute. And you can execute queries and have the results come back as part of the same BatchExecutionResults.

With a stateless session (because the session is tossed after the execution) you now get a single-shot execute that will work with stateful or stateless sessions.
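
To make that concrete, here is roughly what a batch call looks like against the drools-api classes in the javadoc links below. Treat it as a sketch assembled from the docs, not tested code: the Cheese fact, the identifiers and the session setup are placeholders, and the exact signatures should be checked against the current drools-api:

import java.util.ArrayList;
import java.util.List;

import org.drools.command.Command;
import org.drools.command.CommandFactory;
import org.drools.runtime.ExecutionResults;
import org.drools.runtime.StatelessKnowledgeSession;

public class BatchExample {
    // "ksession" would come from a KnowledgeBase built elsewhere.
    public static void run(StatelessKnowledgeSession ksession) {
        List<Command> commands = new ArrayList<Command>();
        // Insert facts; the "out" identifier makes each fact retrievable
        // from the results afterwards.
        commands.add(CommandFactory.newInsert(new Cheese("stilton"), "stilton"));
        commands.add(CommandFactory.newInsert(new Cheese("brie"), "brie"));
        commands.add(CommandFactory.newFireAllRules());
        // Queries are registered under an identifier as well.
        commands.add(CommandFactory.newQuery("cheeses", "all cheeses"));

        // The single-shot execute: the same batch works on a stateful
        // or a stateless session.
        ExecutionResults results = (ExecutionResults)
            ksession.execute(CommandFactory.newBatchExecution(commands));
        System.out.println(results.getValue("stilton"));
    }

    public static class Cheese {       // placeholder fact class
        private final String type;
        public Cheese(String type) { this.type = type; }
        public String getType() { return type; }
        public String toString() { return "Cheese(" + type + ")"; }
    }
}

Per the BatchExecutionHelper link, the same batch and its results can be round-tripped to and from XML via its XStream marshaller, which is the marshalling Mark is showing at l. 61 and l. 88.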

Anyway, most of this is Mark's words via iChat late last night so I hope it's all correct.  But, you might want to check it out anyway.

The link above will probably disappear shortly but here are some more along the same lines that should be more permanent:

https://hudson.jboss.org/hudson/job/drools/lastSuccessfulBuild/artifact/trunk/target/javadocs/stable/drools-api/org/drools/runtime/help/BatchExecutionHelper.html

https://hudson.jboss.org/hudson/job/drools/lastSuccessfulBuild/artifact/trunk/target/javadocs/stable/drools-api/org/drools/runtime/pipeline/PipelineFactory.html

https://hudson.jboss.org/hudson/job/drools/lastSuccessfulBuild/artifact/trunk/target/javadocs/stable/drools-api/org/drools/command/CommandFactory.html

SDG
jco

Saturday, March 7, 2009

Free PDF of Red Book on WebSphere

Greetings:

A good friend of mine, Colin Renouf (senior geek/techie at Lloyds Bank in London), has written a few Red Books (IBM stuff) on WebSphere. You can download them for free at http://www.redbooks.ibm.com/redpieces/abstracts/sg247347.html?Open if you're into that kind of thing. Personally, I think Colin is one of the bright spots in the UK, and I highly recommend his work to anyone interested. Enjoy...

SDG
jco