Tuesday, June 30, 2009

Agile Programming

Greetings:

Recently I've been thinking that I need another blog - one that I would use just to vent and blow off steam at the stupidity of some of the things that we do as programmers. One of those is something sold under the banner of "Agile Programming" - what the Agile folks call pair programming - where two programmers work at one computer at the same time: one writing, one watching and commenting. This was all hyped up a couple of years ago and, in my own opinion, it was meant to help the "less developed" programmers ramp up quicker by being teamed with a more experienced programmer. (Surely you wouldn't put two of the newbies together, would you?)

Anyway, the sheer thought of having to explain everything that I do to a newbie - one who hasn't taken the time to read the first book on rulebase theory, has forgotten everything that he or she might ever have known about statistics, has never had an accounting course and reads only blogs like this for information (OK, they don't even read blogs any more; they just listen to "tweets" and such) - sends me into tremors of sheer terror. Don't get me wrong; there is a place for training. But that place is NOT when you are behind schedule (aren't you always?) and you need to get something "done" by the end of the day/week/month.

So, when do the newbies get training from the old timers? When training sessions are scheduled for (maybe) an hour or two each day or each week. When the trainer has time to properly present the complete thought process behind each precept of a programming paradigm. When the logic of that thought process is presented in a proper methodology, rather than as a series of "Why?" questions followed by answers intended to shut the other person up rather than teach - which is what happens when you are trying to actually THINK through a laborious process. When done in this manner, sometimes learning takes place. Hopefully.

At our last Java MUG meeting the speaker was pointing out how long it takes to build an idea or a process in your head and, if you are interrupted, how long it takes to get back on track. It was actually very revealing. Without intending to do so, he proved my point: once you are interrupted you cannot just turn around and pick up where you left off. Depending on how long the interruption lasted and why, it could easily take 15 or 20 minutes to get back to that level of concentration and/or that step in the workflow.

Which is why I take the phone off the hook, shut the door to my office and put my headphones on when I am trying to concentrate on something that requires that level of focus and thought. BTW, during the meeting I did ask whether anyone had ever done Agile Programming. Only two guys raised their hands, and only one of them thought that it was a good way to write code. BUT, he did qualify that by saying it was effective for only about four hours a day. According to him, productivity and quality of programming went (way?) up when Agile Programming was used. That tells me that the problem "might" have been way too many newbies who needed to improve their programming skills. But that's probably just my prejudices talking.

When did I personally run into Agile Programming? Only two clients have ever tried to force it on me. One was a major insurance company in the USA and the other was in the UK. At the USA company it came down to this: either I would have to leave, or they would have to lower their expectations of what I was doing. After my rebellion, all of the other senior consultants did the same thing, and the whole idea was thrown out.

The client in the UK kept sending me really low-paid technicians hired from 4th-world countries who didn't have the foggiest idea of how to use a rulebase, nor of the thought process behind one. I was working as an employee at that time for a major BRMS vendor and, at first, they resisted. But when the 4th-world vendor sent two or three of their techies to school and became a partner, I was asked to reconsider. I told them the same thing: I could do it, but production would suffer if I had to stop and explain every little thing to a newbie. Besides, I was not being paid to train personnel outside of my company. If they needed more training, they could go to the vendor-provided school for more advanced courses. They didn't, and neither did I.

So, did I ever try it and have it fail? Nope. My clients wanted me working on rulebase problems 100% of the time and were not willing to pay me my normal rate to train a newbie; not even one of their own employees.

Comments to this blog are always welcome - even when you believe in something as silly as Agile Programming and its power to improve the quality and quantity of programming code. I would ask only this: if you do think Agile Programming is a good thing, tell us of your personal experience and how it helped the quality and quantity of the code. And be specific; not just that the programming was "better" but how this "better" was measured with and without Agile Programming. What benchmarks did you or your company use that would help guide the rest of us in measuring the "quality and quantity" of the programming code? And if you believe, as I do, that Agile Programming is for really strange people, then tell us why in very specific terms and with reference to personal experience. Thanks,

SDG
jco

Thursday, June 18, 2009

More on Benchmarks for 2009

Greetings:

Just another post to TRY and get some discussion started on benchmarks. All I seem to get from the vendors is, "Those old things? Those are not 'real-world' benchmarks!" OK, no argument here... So, does anyone have a "real-world" benchmark? No? Fine, then we will use what we have and hope for the best, since they seem to work fine for checking rule processing performance (using almost "real-world" rules) on most engines.

Checking performance of a modeling tool, such as Corticon, Visual Rules, RuleBurst, VisiRule, or any Decision Table or Decision Tree tool, is absolutely NOT what checking rule processing power is all about. [I'll probably catch some flak for that remark, but those are just my personal feelings.] If you already have the rules hard-coded, either in straight-up Java or via some kind of modeling tool that produces Java code (such as sequential rules or something else along those lines), then that SHOULD run faster than a real rulebase engine that is designed from the ground up to be an inferencing engine based on whatever algorithm you like.
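Just to make that distinction concrete, here is a minimal sketch in plain Java of what "sequential rules" boil down to. The Applicant and Rule types (and the numbers) are invented for illustration and are not any vendor's API:

import java.util.List;
import java.util.function.Predicate;

public class SequentialRules {

    record Applicant(int age, double income) {}

    // Each "rule" is just a predicate plus an action; nothing infers anything.
    record Rule(String name, Predicate<Applicant> when, Runnable then) {}

    public static void main(String[] args) {
        Applicant a = new Applicant(22, 18_000.0);

        List<Rule> rules = List.of(
            new Rule("young-driver", x -> x.age() < 25, () -> System.out.println("add surcharge")),
            new Rule("low-income",   x -> x.income() < 20_000, () -> System.out.println("flag for review"))
        );

        // One pass, top to bottom. A real inferencing engine would instead
        // match all rules against all facts in working memory and chain conclusions.
        for (Rule r : rules) {
            if (r.when().test(a)) {
                r.then().run();
            }
        }
    }
}

A fixed list of tests run top to bottom like this carries almost no overhead, but it also re-evaluates nothing when the data changes - the inferencing engine is doing far more work for its extra milliseconds.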

However, I would like this year to allow a couple of things that I have not done in the past:

1. Allow a "warm-up" of three or four passes through all of the rules, using different data each time, and then run the rules for 10 consecutive passes using 10 different sets of data, taking the average as the benchmark time. In years past, rules did not run under EJB/J2EE or similar environments (we had Java for several years before we had J2EE/EJB) and we did not allow such things. However, with the increased overhead of having all that in the core part of the engine, I think it should be allowed. (A sketch of such a timing harness appears after the hardware list below.)
2. I'm going to drop the old versions of Miss Manners (8, 16, 32, 64, 128 and 256) and substitute Miss Manners 2009, which is the ORF example for this year.
3. The other two benchmarks from the old days are still good: Waltz-50 and WaltzDB-16.
4. However, we are introducing a new WaltzDB-200 this year just to get some really long run times.
5. We will run these all on the following systems:

a1. Mac Core2Duo, 3GB RAM, OS X Leopard, 64-bit (which is FreeBSD Unix with a pretty face)
a2. Mac Dual-Quad Core, 8GB RAM, OS X Snow Leopard, 64-bit [maybe...]
b. HP Intel, 3GB RAM, Dual Threaded, Windows XP, 32-bit
c. Dell Intel i7, 4-core, 8-threads, 6GB RAM, Windows Vista 64-bit
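
As promised above, here is a bare-bones, hypothetical sketch of the warm-up-and-average harness from item 1. The Runnable passes are stand-ins for whatever loads the rules and asserts one data set in the engine under test; this is not any vendor's API:

import java.util.Collections;
import java.util.List;

public class BenchmarkHarness {

    /** Warm-up passes run untimed; timed passes are averaged (milliseconds). */
    static double averageMillis(List<Runnable> warmupPasses, List<Runnable> timedPasses) {
        for (Runnable pass : warmupPasses) {
            pass.run(); // 3-4 passes over different data, results discarded
        }
        long totalNanos = 0;
        for (Runnable pass : timedPasses) {
            long start = System.nanoTime();
            pass.run(); // one of 10 consecutive passes over 10 different data sets
            totalNanos += System.nanoTime() - start;
        }
        return totalNanos / (double) timedPasses.size() / 1_000_000.0;
    }

    public static void main(String[] args) {
        // Dummy stand-in for one full pass of an engine over one data set.
        Runnable pass = () -> { /* load rules, assert data, fire, reset */ };
        System.out.printf("average: %.3f ms%n",
                averageMillis(Collections.nCopies(4, pass), Collections.nCopies(10, pass)));
    }
}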

I might try to work in some Linux if there seems to be any significant speed difference between an Intel box running Linux and the same box running Windows - but experience teaches that the Windows version usually runs faster. I will check it anyway just to be sure. The engines that I am hoping to test are (in alphabetical order):

a. Blaze Advisor 6.7
b. Drools Version 5.x
c. CLIPS Version 6.x
d. Jess 7.0-p2
e. JRules Version 6.7.3 or 7.x (depends...)
f. OPSJ Version 6.0
g. OPSJ-NT Version 1.0

I will probably publish the results here, along with the previous years' performance benchmarks, as well as on the KBSC home page. The comparisons of 32-bit and 64-bit should tell us something about scalability. The comparisons of different OSes should tell us something about scalability and transportability.

One more thing: if any of the other vendors can demonstrate a suitable version of the benchmarks, I will include them - but NOT the same thing that I did a few years ago, when I allowed a "similar" version of the benchmarks from a vendor that could not code straight-up IF - THEN - ELSE rules with a NOT statement in there somewhere.

I do expect cheating on the part of the vendors. Somehow I must find a benchmark that will not allow that, so I'll probably throw in one that has lots of NOT statements in it, or something really rude like that. I know that the vendors don't really pay attention to benchmarks any more, so I'm hoping that the customers of these and other vendors will stress performance benchmarks to their suppliers as another check of good engineering. Layering GUI after GUI after Model after Model is cool EXCEPT when you forget how to perform under the pressure of millions of transactions per day that need complex, real rulebase-type analysis.
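
For readers who haven't run into it: a NOT (negation) condition lets a rule fire only when no fact matching a pattern exists in working memory, and a real engine must re-check that condition every time the facts change. Here is a rough, hypothetical sketch of the idea in plain Java (the Order type and the numbers are invented for illustration):

import java.util.List;

public class NotConditionSketch {

    record Order(String customer, double amount) {}

    // Roughly (not (Order (amount > 10000))) in CLIPS/Jess terms:
    // true only when no matching fact exists in working memory.
    static boolean noLargeOrderExists(List<Order> workingMemory) {
        return workingMemory.stream().noneMatch(o -> o.amount() > 10_000);
    }

    public static void main(String[] args) {
        List<Order> wm = List.of(new Order("acme", 2_500.0));
        if (noLargeOrderExists(wm)) {
            System.out.println("rule fires: no large order on file");
        }
    }
}

Hard-coded or generated sequential code has no natural place to hang that constant re-checking, which is exactly why NOT-heavy benchmarks are hard to fake.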

SDG
jco

Thursday, June 11, 2009

2009 Benchmarks - Again

Greetings:

Sometimes I wonder how many folks find this subject of intense interest and how many are totally bored with the whole idea of comparing benchmarks. Why even have benchmarks? After all, most vendors seem to think that companies are not interested in them so long as the product will "do the job in the real world." OK, let's pursue that line of thought for just a moment.

Let's say that you are buying a car for your company. Some of the constraints are the mileage, longevity, repair frequency (time and costs), ROI of the auto, etc. All of these are benchmarks. Now suppose that you are buying not for yourself nor for your company but for the military, and you want an armored vehicle. Then the constraints would be how effective the bullet-proofing is, what firepower is mounted standard, what the performance is in sand and mud, what the carrying capacity is, etc. Again, all of this is benchmarking.

Now, let's say that you pick up any car magazine in your local bookstore. Check the charts and under performance you will find the times for 0-60 mph, the standing 1/4-mile and the top speed. Again, benchmarks. And, agreed, none of these are real-world benchmarks, since very few people take their Corvette or Camaro from 0 to 60 more than once or twice and almost none of them would ever take their auto over 150 mph. BUT these are the benchmarks for automobiles. This is why we have such things as the Le Mans 24 Hour race - another benchmark to determine the best of breed.

Benchmarks for databases have been around since their advent, and no one has ever complained about them. OK, not very much. Standard benchmarks (the TPC suite for databases, SPECint and SPECfp for the processors underneath them) tell the potential customer how well the product being considered will perform under certain standard, non-real-world conditions. From these numbers the customer can extrapolate whether a particular product has improved as compared to previous versions or how it stacks up against other products, and can try to determine whether the product will or will not perform as expected in their particular application(s).

So why, oh why, can't the rulebase / BRMS vendors agree on at least one benchmark for the performance of their products? Mostly it is fear that this will become the only basis for comparison of products.

This belief is based on past experience, when a potential customer would use the published benchmarks of KBSC or other independent consultants to beat a vendor about the head and shoulders until the vendor lowered its price to get an order. (Actually, the vendor didn't have to lower the price, but salesmen are notoriously weak-willed creatures who find it easier to sell on the basis of price alone, without any consideration of customer needs and their ability to fulfill those needs.)

After all, one does not purchase a Corvette out of a need for transportation; one purchases a Corvette for its looks and its performance. The same thing can be said for a Mercedes, a Cadillac or a pickup truck - they are purchased NOT on performance alone but also on a myriad of little factors that make up the overall buying experience.

All of that to say this: I think I'll have the new benchmarks for most rulebase / BRMS systems ready for publication before September 15th this year. Right now we're going to use Waltz-50, WaltzDB-200 and Manners128-2009, loaded on Unix (Mac OS X), Windows XP (32-bit) and Windows 64-bit. That's a lot of work between now and then, but with the help of some friends maybe I'll make it.

Watch this space for more news on benchmarks.   BTW, the BEST benchmark (after reviewing the standard benchmarks) is to put the product on your system, use your data, use your process, use your rules and run your tests.  Only then will you know for sure whether the potential product is suitable for your application(s).

Looking forward to seeing all of you at the October Rules Fest.  Check out http://www.OctoberRulesFest.org for more details on the world class lineup of speakers and activities.

SDG
jco

Wednesday, June 3, 2009

Open Source Leeches

Greetings:

InfoWorld ran an article this week on "Open Source Leeches" and it got some responses, one from Daniel Selman of IBM/ILOG. While the article is spot-on about some companies using Open Source software and keeping their own solutions secret in order to "get the jump" on their competition, it is decidedly UNFAIR to paint all companies with the same really wide brush.

For example, IBM/ILOG contributes back to Eclipse. And Southwest Airlines, mentioned in the article, continually contributes back to the Drools project. Mark Proctor (Drools) is always commenting that "There is no such thing as a free lunch." His way of handling "leeches" is simply to stop responding to their questions after a while.

My own thoughts here are that many companies either (1) don't NEED to make changes to the product and can use it right out of the box, or (2) don't feel comfortable making changes to a vendor product. The second case is normally the result of decades of using vendor products "as is" and only making suggestions.

So maybe Dave Rosenberg (and Bill Snyder) need to research the "problem" more thoroughly and give us some facts and statistics. For example, how many companies are using Drools, Eclipse, JBoss, etc. for free? And, out of those companies, how many have made NO contributions to improve the product? I have a feeling that the "NO contributor" companies will be a really small percentage.

After all, the "spirit of community software" is NOT that everyone contributes to the product, but that a few really solid professionals lead the way for the newbies. Personally, I use lots of "free software" (Mark hates that expression) and rarely contribute to the effort - not because I don't want to, but because I would have to learn a whole new product each time, and I only have time for one or two "projects" in my life at any one time. So I use a lot of them and contribute to only a couple of them.

Again, folks need to get the facts straight before they leap to unsupportable conclusions.

SDG
jco


Tuesday, June 2, 2009

ORF 2009 Super Early Bird Discount Extension

Greetings:

October Rules Fest 2009 has extended the Super Early Bird discount of 20% off of the normal registration fee to midnight, June 6th, 2009. Regular registration is $500, so that represents a $100 savings. If you have any questions, check out http://www.OctoberRulesFest.org for more info on speakers, agenda, etc. The hotel should be determined no later than Friday of this week, June 5th. Anyway, be SURE to register THIS WEEK to get the best savings. :-)

With parking at ANY hotel in downtown Dallas being astronomical (though cheaper than New York, San Francisco, London or Paris) it might be more economical to park-and-ride. Directions on how to do that will be posted on the ORF web page at a later date.

SDG
jco