Sunday, May 17, 2015

Real Programmers Don't Eat Quiche


For those who regularly visit my almost hidden-from-view postings, I thought that we might revisit Bernstein's now-famous (infamous) take-off on the book "Real Men Don't Eat Quiche."


Real programmers don't eat quiche. They like Twinkies, Coke, and palate-scorching Szechwan food.

Real programmers don't write application programs. They program right down to the base-metal. Application programming is for dullards who can't do systems programming.

Real programmers don't write specs. Users should be grateful for whatever they get; they are lucky to get programs at all.

Real programmers don't comment their code. If it was hard to write, it should be even harder to understand and modify.

Real programmers don't document. Documentation is for simpletons who can't read listings or the object code from the dump.

Real programmers don't draw flowcharts. Flowcharts are, after all, the illiterate's form of documentation. Cavemen drew flowcharts; look how much good it did them.

Real programmers don't read manuals. Reliance on a reference is the hallmark of the novice and the coward.

Real programmers don't write in RPG. RPG is for the gum-chewing dimwits who maintain ancient payroll programs.

Real programmers don't write in COBOL. COBOL is for COmmon Business Oriented Laymen who can run neither a business nor a real program.

Real programmers don't write in FORTRAN. FORTRAN is for wimp engineers who wear white socks. They get excited over the finite state analysis and nuclear reactor simulation.

Real programmers don't write in PL/I. PL/I is for insecure anal retentives who can't choose between COBOL and FORTRAN.

Real programmers don't write in BASIC. Actually, no programmers program in BASIC after reaching puberty.

Real programmers don't write in APL unless the whole program can be written on one line.

Real programmers don't write in LISP. Only sissy programs contain more parentheses than actual code.

Real programmers don't write in PASCAL, ADA, BLISS, or any of those other sissy computer science languages. Strong typing is a crutch for people with weak memories.

Real programmers' programs never work right the first time. But if you throw them on the machine they can be patched into working order in a few 30 hour debugging sessions.

Real programmers don't work 9 to 5. If any real programmers are around at 9 A.M., it is because they were up all night. 

Real programmers don't play tennis or any other sport which requires a change of clothes. Mountain climbing is OK, and real programmers wear climbing boots to work in case a mountain should spring up in the middle of the machine room.

Real programmers disdain structured programming. Structured programming is for compulsive neurotics who were prematurely toilet trained. They wear neckties and carefully line up sharp pencils on an otherwise clear desk.

Real programmers don't like the team programming concept. Unless, of course, they are the chief programmer.

Real programmers never write memos on paper. They send memos via mail.

Real programmers have no use for managers. Managers are a necessary evil. They exist only to deal with personnel bozos, bean counters, senior planners and other mental midgets.

Real programmers scorn floating point arithmetic. The decimal point was invented for pansy bedwetters who are unable to think big.

Real programmers don't believe in schedules. Planners make schedules. Managers firm up schedules. Frightened coders strive to meet schedules. Real programmers ignore schedules.

Real programmers don't bring brown-bag lunches. If the vending machine sells it, they eat it. If the vending machine doesn't sell it, they don't eat it. Vending machines don't sell quiche. 

I added the emphasis, but thanks for the "real deal" quote.  Loved it.


Monday, November 24, 2014

Apple Stretches the Truth - Just a Bit...


OK, the new OS X has been out for a while now, so I thinks to m'self, think I, that I shall upgrade my MacBook Pro to the latest and greatest.  But I did stop to read the comments.  Apple said that the ratings were running about 4.2 or something out of a possible 5.0.  Cool, thinks I to m'self.  So I went to read the glowing comments.  Here is how 17 of them lined up.

Comments on upgrades to Apple Yosemite

Score               1    2    3    4    5   Total
Number of Ratings   13   0    0    1    3   17
Number * Score      13   0    0    4    15  32
Average Score       32 / 17 = 1.88

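The weighted-average arithmetic can be double-checked with a short Python sketch; the rating counts are read directly off the table above:

```python
# Visible Yosemite ratings: 13 ones, no twos or threes, one 4, three 5s.
counts = {1: 13, 2: 0, 3: 0, 4: 1, 5: 3}

total_ratings = sum(counts.values())                          # 17 ratings
weighted_sum = sum(score * n for score, n in counts.items())  # 13 + 4 + 15 = 32
average = weighted_sum / total_ratings

print(total_ratings, weighted_sum, round(average, 2))  # 17 32 1.88
```

So the 17 visible comments average out to roughly 1.88 out of 5, a long way from the advertised 4.2.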

2 hours to install

So far so good

Amazing

Chrome does not work

Great job

Missing applications

Runs great on my PC  ???

Three hours to download and will not install

Error in installing

Won't download for Mac Mini

WiFi disconnects every 4-5 min

Cannot get it to download

Won't download

Bad design.  And I can't UN-install

Won't download

Won't download

The comments above are how the 13 ratings with a score of 1 lined up.  Not very flattering.  I think the one where the person said that they had missing applications was the most disturbing.  The others were mostly that it took so long to install.  Now, admittedly, that person or persons might have had a really slow internet connection.  At any rate, what I saw (and only the 17 most recent comments were viewable) was that Apple only scored about a 1.88 / 5.00.  That is a D+ or C-, depending on the curve.  (I figure that anything below a 1.0 is an F and anything between 1.00 and 1.99 is a D, so that would be a D+, right?)

What I did see was that the persons with a good experience gave three 5s and one 4.  No 2s.  No 3s.  Only one 4.  And only three 5s.  Well, anyway, just to prove that I do not have a lot of common sense, I updated my iPhone to 8.1.1 - so far, so good.  It took over two hours to download and install.  I am running at 15 Mbps from my ISP, so that might have something to do with it, but everything else downloads rather quickly - even very large 250MB files take only a few minutes.  Maybe it is the iPhone itself that is so blinking slow...


Sunday, November 16, 2014

Benchmarks 2015


Yes, we will be doing benchmarks for 2015 with a presentation (maybe) at either DecisionCAMP 2015 and/or the Business Rules and Decisions Forum 2015.  In either case, I would like to have a panel discussion of some kind after the one-hour (or less) presentation, much like we did back at BRF 2006.  That year it was a two-hour afternoon presentation, with Dr. Forgy (PST OPSJ), Dr. Friedman-Hill (Sandia Labs Jess), Mark Proctor (Red Hat Rules and Drools), Daniel Selman (ILOG JRules, now IBM ODM) and Pedram Abrari (Corticon) all on the panel.  My part should be far briefer since all I want to do is introduce the concept of BRMS/rulebase benchmarks, show what we have done over the years and where we are today, and then moderate the panel discussion.

Maybe this year we could expand to 10 representatives: the above plus Gary Riley (NASA CLIPS), Dr. Jacob Feldman (Open Rules), D/M/M X (Visual Rules), Carlos Serrano-Morales (Sparkling Logic SMARTS) and D/M/M X (FICO Blaze Advisor).  The more participants, the more confusion, but I will do my best to moderate - neither letting someone "hog" the microphone nor letting anyone interrupt others before they finish - without being heavy-handed.

I will probably send out invitations to all of them and ask that they be represented.  Likely, only those blatantly opposed to benchmarks of any kind or those champions of the past years will be there so it should be a lively discussion. 

Another thing that I have done in the past was to allow only the "interpreted" version of the rulebase to run.  This year the vendors will be allowed to use the compiled Java version as well as the "interpreted" version.  I will try to run both versions if available and will report the differences in two separate tables, one compiled and the other interpreted.  The interpreted tables will probably be a bit smaller since most modern rulebases do not run an interpreted version, only the compiled Java or compiled C/C++ versions.

Me?  I really like the idea of "standardized" benchmark(s) where everyone can compete with whatever rules they like so long as they solve the problem.  I am not a fan of micro-benchmarks since they do not measure the overall performance of the engine.  The old OPS-type benchmarks were, seemingly, deliberately designed wrong just to tax the engine by loading and unloading the Agenda Table with as much junk as possible.  Probably I will still run the Waltz-50, WaltzDB-16 and WaltzDB-200 and add the Cliques and Vertex Cover benchmarks.  Both of the latter are NP-hard problems and are, for all practical purposes, impossible to cheat.  And the vendors will have a free hand with those last two.

The first three are standard and will be dependent only on the Conflict Resolution Process (CRP), as so adroitly pointed out by Gary Riley at .  As Gary pointed out there, the speed of the benchmark is highly dependent on the CRP, but it is also dependent on the number and type of CPUs used, the hard drive type and speed, the amount of RAM used, and the command line used in either Java or CLIPS to give the process the "correct" amount of RAM and swap space.  The original code for all of the OPS benchmarks is at if you want to start at ground zero.

Oh, one other thing: later (probably in June or July) I will publish what I have so far on the benchmarks and try to get some suggestions on improving them, discarding them or adding other benchmarks that are more "real world" (as some have suggested).  If you want the full nitty-gritty, please request it in the comments section and I will send you a copy.  I do not plan on publishing the results until the conference.