[rescue] product quality (was: ham gear)
Jonathan C. Patschke
jp at celestrion.net
Sun Dec 4 03:49:38 CST 2005
...moving to geeks at ...
On Sun, 4 Dec 2005, Peter Porter wrote:
> Settling for lousy performance and a lousy product is... well...
> lousy. I thought our country was built on people determined to make
> the most of technology, to push it to new limits, and to work as hard
> as they could. Not bottom of the barrel, unmotivated bums. Where's
> the dedication to our work?
It's a whole lot of "nobody ever got fired for buying $vendor" and a
little bit of "computers are just -like that-; they crash a lot"
sprinkled atop a deep-dish crust of choosing immediate "profits" or
"results" over long-term prosperity. There's also a lot of back-
scratching and conslutants realizing that there's more money in
prolonging the problem than actually solving it. Combine that with
suits' propensity to believe glossy magazines and other suits rather
than the people that manage their data and data-processing equipment,
and you have modern business computation.
> I apologize for my ranting, I'll give it a rest now. (and I agree
> with custom vertical applications, managed per paragraph 2)
Oh, the stories I could tell. I've witnessed one of these massive
trainwrecks that took place over about three years:
Scope of the problem:
Enumerate @professionals eligible for $process and maintain a
contact database of their addresses, specialties, and
certifications. Then, let individual people file complaints and
view information about open actions on those professionals online.
How long would it take a competent PHP programmer to bang out something
like that? That's not a hard problem. It's an address book with lots
of tags that can be applied to the professionals, a bunch of ACLs to
control who edits what, and personal logins that can have a bunch of
filters applied (since it was a government agency, they received -all-
the action data about each professional; individual citizens need merely
prove access to certain cases). Importing the existing data could be
difficult, as it was all stored in a custom MVS application, but there
was a standardized record export/interchange format for the data, as
dictated by ANSI.
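(For scale, here's a rough sketch of the core data model in Python with
sqlite3. The table and column names are my guesses at an equivalent
schema, not the agency's actual one; the point is only how little there
is to it.)

    import sqlite3

    db = sqlite3.connect("registry.db")
    db.executescript("""
        CREATE TABLE professional (
            id        INTEGER PRIMARY KEY,
            name      TEXT NOT NULL,
            address   TEXT,
            specialty TEXT
        );
        CREATE TABLE certification (
            professional_id INTEGER REFERENCES professional(id),
            name            TEXT,
            expires         DATE
        );
        -- complaints and open actions against a professional
        CREATE TABLE action (
            id              INTEGER PRIMARY KEY,
            professional_id INTEGER REFERENCES professional(id),
            case_number     TEXT,
            status          TEXT,
            filed           DATE
        );
        -- which cases an individual citizen has proven access to
        CREATE TABLE user_case_access (
            user_id     INTEGER,
            case_number TEXT
        );
    """)
    db.commit()
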
How was this problem "solved"?
* IBM WebSphere Application Server to host the application. We had to
run three separate versions because the different pieces of the
application were written such that they weren't upwards-compatible
with the J2EE specification.
* Oracle to hold the address/action database.
* 14 multiprocessor 64-bit IBM servers.
* 4 different COTS applications to handle different bits of "moving
  data to and from the legacy mainframe", since apparently DBA Barbie
  says "managing EBCDIC flat files is -hard-!" (see the first sketch
  after this list).
* A COTS data-massager (essentially perl/awk with a GUI and lots of
regexp recipes) that's so bloated it requires a dual-processor IBM
p615 with 2048MB memory all to itself.
* Crystal Reports to generate... FORM LETTERS, which would be emailed
  anyway. Any moron could do that with sed and fmt, or, better yet,
  LaTeX (see the second sketch after this list).
* RoboHelp to handle the static-HTML portions of the site because God
forbid anyone learn HTML or how to use Dreamweaver.
* Microsoft SQL Server and IIS for RoboHelp, because RoboHelp's
definition of "static HTML" isn't; it requires ISAPI garbage.
* IBM MQ Series and MQ Workflow to handle the movement of data -within
the application- since they burned through so many contractors, none
of whom documented the code, that none of the APIs were remotely
compatible without switching to a more generic data representation
mid-flight.
* IBM Content Manager[0] to...SERVE UP STATIC IMAGE FILES.
* IBM DB/2 to hold Content Manager's data (because we can't use the
filesystem for holding files--we need to bloat them into BLOBs and
stuff them all in the same opaque container so that when it blows
up, it ALL blows up)
* IBM WebSphere Business Integration Server Foundation. I had a MEGO
episode from suitspeak overload while installing it, so I can't even
tell you what the damn thing does. It was expensive, though, so
presumably it did -something-.
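(About that EBCDIC point: reading a fixed-width EBCDIC export really is
a few lines of any scripting language. First sketch, in Python; the
cp037 code page, the 80-byte record length, and the field offsets are
assumptions for illustration, not the actual ANSI layout:)

    RECLEN = 80  # assumed fixed record length

    with open("export.dat", "rb") as f:
        while (record := f.read(RECLEN)):
            text = record.decode("cp037")     # EBCDIC code page -> Unicode
            name = text[:40].rstrip()         # assumed field offsets
            license_no = text[40:52].rstrip()
            print(name, license_no)
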
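(And the form letters, second sketch: it's a template fill, and handing
the result to an MTA is one more call. The letter text and recipient
data here are made up:)

    from string import Template

    LETTER = Template("""Dear $name,

    A complaint (case $case_number) has been filed regarding your license.
    Please contact the board within 30 days.
    """)

    print(LETTER.substitute(name="Jane Doe", case_number="2005-1234"))
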
There's a lot more to it, but I'm still recovering and have repressed a
lot of memories of that project from hell. And, no, the project isn't
fully operational after all that. I mean, just take the Content Manager
portion of the problem on its own for a moment. You have $bignum images,
each of which has a serial number (no filenames). It's a two-banana
problem to find an efficient way to store and retrieve those images
using a single SQL table, a hash function, and an assload of
directories. Integrating with Content Manager took eight MONTHS.
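(The two-banana version, for the record: hash the serial number, fan
the files out across directories on the first few hex digits, and keep
one table mapping serial to path. A sketch; the two-level fan-out and
the names are my choices:)

    import hashlib, os, sqlite3

    db = sqlite3.connect("images.db")
    db.execute("CREATE TABLE IF NOT EXISTS image (serial TEXT PRIMARY KEY, path TEXT)")

    def store(serial: str, data: bytes, root: str = "images") -> str:
        h = hashlib.sha1(serial.encode()).hexdigest()
        subdir = os.path.join(root, h[:2], h[2:4])   # e.g. images/ab/cd/
        os.makedirs(subdir, exist_ok=True)
        path = os.path.join(subdir, serial)
        with open(path, "wb") as f:
            f.write(data)
        db.execute("INSERT OR REPLACE INTO image VALUES (?, ?)", (serial, path))
        db.commit()
        return path

    def fetch(serial: str) -> bytes:
        (path,) = db.execute("SELECT path FROM image WHERE serial = ?",
                             (serial,)).fetchone()
        with open(path, "rb") as f:
            return f.read()
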
Seriously, I've considered putting together a short book called "Good
Enough For Government Work: IT Boondoggles in the Public Sector". I have
enough stories[1] to fill a chapter or two. I'm sure other folks do, as
well.
Now, where did I put that Scotch....
[0] This wins my vote for "shittiest software product on the planet".
Yes, it beats out Windows ME, ancient versions of dBase, SCO
OpenDesktop, Outlook Express, and Microsoft Exchange. Anything it
can get wrong, it gets wrong.
[1] Like when an IBM contractor spent two weeks rewriting[2] the spooler
in TCL because he was -convinced- that AIX's print subsystem
wouldn't DTRT if the printer or the server was downed while jobs
were in the spool. Or the Price-Waterhouse contractor that
reimplemented cron in Java because "IBM would never approve of using
a platform-specific tool like cron".
[2] Inside of which he called the system spooler!!
--
Jonathan Patschke ) "Buy the best there is, because it's sorry enough."
Elgin, TX ( --Henry Zuehlke