[rescue] Parallel ports [was Re: Slightly OT: ?Bad Cap Saga]

Curious George jorge234q at yahoo.com
Fri Aug 22 02:26:43 CDT 2008


--- On Fri, 8/22/08, der Mouse <mouse at Rodents-Montreal.ORG> wrote:

> > But, my point is, you can move to a different interface *if* you
> > rethink what you expect *of* that interface.
> 
> True.  It's, perhaps, analogous to moving software to a different
> language by rewriting rather than translating.

Good analogy.  But, I am suggesting you *pick* (taking a risk
that you may pick wrong!  :> ) an interface that you think may
stay around for a while (USB, FireWire, etc.) and then come up
with a protocol to talk to <whatever> hardware you might
*eventually* want to put on the other side of that.

> >> I've moved small fractions of my applications into the kernel,
> >> inside the lpt driver.  This was feasible because I was talking
> >> to, y'know, a parallel port: [...]
> > You would have had to move it to user-land and use the established
> > device interfaces.
> 
> Right.  At a severe (crippling, I suspect - hm, I should measure this)
> speed cost: at least two syscalls - user/kernel crossings - per byte,
> instead of one for the whole string.

Why can't you use an existing "printer interface"?
Or open /dev/lpt and write() 500 bytes at a time --
one trap per call.
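
Something like this, say (a minimal sketch -- the device name and
buffer contents are placeholders):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        unsigned char buf[500];               /* whatever you want to ship out */
        int fd = open("/dev/lpt0", O_WRONLY); /* or /dev/lp0, etc. */

        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* ... fill buf[] ... */
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf)
            perror("write");    /* 500 bytes, ONE user/kernel crossing */
        close(fd);
        return 0;
    }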

> > You may find that you end up stuck with an old kernel/OS as new
> > kernels move to embrace features that older machines won't support.
> > Then you end up having to back-port changes, etc.
> 
> I know.  I've been doing that for years already.

Why spend your time/talent "mowing the lawn"?  Move on
to something new...

> > So, you're already bastardizing any approach you come up with to fit
> > those constraints.  Why not pick a different set of constraints?
> > And, use that to give you another set of *capabilities*??
> 
> Well, I think "bastardizing" is an unfair slant.

Well, the existing "parallel port" obviously severely limits
what you can talk to -- and *how* you can talk to it!  E.g., if
you want to have 64 outputs, you end up having to hack together
an even bigger kludge (e.g., treat 6 output bits as a "bit
selector" and another output bit as the "data", etc.).  But, what
if you want all 64 outputs to update "simultaneously"?
Then you need to double-buffer the whole thing and build something
to transfer the first-stage latch to the final "output" latch, etc.
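
E.g., something like this (hypothetical port layout; port_write()
and pulse_transfer() stand in for whatever outb()-style access you
have):

    /* 64 outputs through an 8-bit port:  bits 0-5 select one of 64
     * first-stage latch bits, bit 6 is the data, bit 7 strobes it in.
     * A separate "transfer" pulse then copies all 64 first-stage bits
     * to the output latch at once. */
    #define SEL_MASK  0x3f
    #define DATA_BIT  0x40
    #define STROBE    0x80

    extern void port_write(unsigned char v);  /* e.g., outb() to the port */
    extern void pulse_transfer(void);         /* stage -> output latch */

    void
    update64(const unsigned char state[64])
    {
        unsigned char v;
        int i;

        for (i = 0; i < 64; i++) {
            v = (unsigned char)((i & SEL_MASK) | (state[i] ? DATA_BIT : 0));
            port_write(v);            /* settle selector + data */
            port_write(v | STROBE);   /* clock into first-stage latch */
            port_write(v);            /* drop strobe */
        }
        pulse_transfer();             /* all 64 outputs change together */
    }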

My point is, the software (and hardware) that you would build for
this case are totally different from the software/hardware that
you are using for your 12 (?) relays...

Why not come up with a protocol that lets you generalize to the
future "things" that you might want to talk to?  (USB tries to
do this.)

> But, that aside, there is no reason.  Trying different things -
> having different things available - is good.  I'm not arguing
> _against_ the availability of serial ports, Ethernet, USB,
> whatever.  I'm arguing _for_ the availability of parallel ports -
> for more choice, not less.

But The Industry has decided against this.  So, you can either
hold onto old hardware *or* come up with a new approach that
becomes your new "parallel port" (geek port) -- hopefully
a bit more future-safe!
 
> Yes, this is motivated in part by my having done some things that
> fit that particular choice.  So?  I've also done some stuff that
> used a serial port instead of a parallel port as its interface to
> the host.
> 
> > Spend your time on something more constructive and useful -- design a
> > parallel port replacement technology, etc.
> 
> Hm?  "Replacement"?  As in, design a PCI (or whatever) parallel port?
> It may come to that.

No.  As in "geek port" based on <whatever> hardware "transport"
interface.

> But would that really be "more constructive and useful"?  You just
> finished telling me all about how I didn't really _want_ parallel
> ports, so I'm not sure how that'd be constructive or useful to design
> parallel ports for current buses.  Or, if you mean something _not_ a
> parallel port, then I don't know what you do mean....

"An interface that allows you to portably move data in and out
of a (wide variety of) computer for <whatever> needs"

> > *Really*, look at some of the little MCU's out there.
> 
> I may.  Indeed, in a few cases I already have.  But that's not
> really the style of design I enjoy when I want to hack digital
> logic; it's just another computer, and a severely hampered one at
> that - its only advantage, really, is the kind of direct
> interfacing to the world that (surprise!) a parallel port gives.
> (Well, okay, not _only_.  There are things like size and power
> consumption.  But when paired with a "real computer" to drive it,
> it loses those anyway.)

The challenge I am issuing is to come up with something more
general purpose (on the "application hardware" side) than a
"parallel port" and more *portable* and future-safe on the
"host" (PC) side.

*Think* about what types of things you are likely to need
in a <unknown_future_hardware> application.  And, what
you would like a "foundation" to provide for you.

For example, when I design computer controlled peripherals
that interface to mechanisms, I almost always include some
hardware that forces the "drives" (outputs) to some "safe"
state if I haven't talked to the interface recently
(i.e., something similar to a watchdog except this "resets"
the interface logic -- the "field" -- instead of the host
CPU).
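
In firmware terms, something like this (all names invented;
millis() is whatever free-running millisecond count you have):

    #define HOST_TIMEOUT_MS  250UL     /* "recently" = within 250 ms */

    extern unsigned long millis(void); /* free-running ms counter */
    extern void outputs_safe(void);    /* de-energize drives, etc. */

    static volatile unsigned long last_host_ms;

    void
    on_host_message(void)              /* called for each valid host packet */
    {
        last_host_ms = millis();
    }

    void
    tick_1ms(void)                     /* periodic timer interrupt */
    {
        if (millis() - last_host_ms > HOST_TIMEOUT_MS)
            outputs_safe();            /* host went quiet: fail safe */
    }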

Folks who design MCU's spend a lot of resources trying to
figure out what those "features" should be for the MCU itself.
E.g., timers, high current outputs (vs. "regular" outputs),
edge triggered inputs, A/D converters (vs. "comparators"),
etc.

And, there are lots of variants of each of these.  E.g., some
"timers" are just "counters" with a timebase (sometimes you
can select which timebase is used -- some subharmonic of the
operating clock frequency).  Some let you change the counter
modulus (so, you can count N cycles of timebase X).  Some
let you count external events (costing you an I/O pin).
Some route the "Terminal Count" (carry) out of the timer
to an external pin so you can use it to trigger some external
logic.  Some will let you use an external input as a "gate"
for the counter (i.e., count only while this input is hi/low).
Some will *capture* the count value when an external input
pin changes state.

(counter/timers are *very* important  :> )
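
Abstractly, the per-channel feature matrix looks something like
this (purely illustrative, no particular chip):

    enum tmr_clock { TMR_SYSCLK, TMR_SYSCLK_DIV16, TMR_EXT_PIN };
    enum tmr_gate  { GATE_NONE, GATE_WHILE_HI, GATE_WHILE_LO };

    struct tmr_config {
        enum tmr_clock clock;     /* timebase -- or count external events */
        enum tmr_gate  gate;      /* count only while gate input hi/low */
        unsigned       modulus;   /* reload value: count N of timebase X */
        int            tc_to_pin; /* route Terminal Count out to a pin? */
        int            capture;   /* latch count when an input changes? */
    };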

E.g., I designed a barcode reader ~30 years ago on a Z80
platform.  The counter timer chips that I used had all sorts
of "modes" that you could operate them in.

My "barcode wand" produced a single digital output:  hi when
it saw black, low when it saw white.

When idle, I would initialize the counter timer to a preload
value of "1" (i.e., once armed it would start counting from
1 down to 0.  When it got to 0 it would be reloaded with the
"start value").  I then set the preload value to "256".
(Note this is a separate latch in the chip -- so the counter
would know what value to reload when the timer "expired".)

Then, I programmed the counter to *trigger* (arm) when the
signal on its control input "went hi".

So, when the wand eventually "saw" black -- the start of the
first bar -- the timer would start.  The timebase for the
timer/counter was "system clock / 16".  So, 16 system clock
cycles later, the timer would count down to 0 (from 1) and
reload the new start value -- 256 (or maybe it was 255?).

At the instant the timer "expired" (hit 0), it was programmed to
generate an interrupt.
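
In outline, the setup looked something like this (invented register
interface -- the real CTC is programmed by control-word and
time-constant writes to a single port):

    extern void ctc_set_count(unsigned v);   /* current count */
    extern void ctc_set_reload(unsigned v);  /* reload ("preload") latch */
    extern void ctc_arm_on_edge(int rising); /* start on CLK/TRG edge */
    extern void ctc_irq_on_zero(void);       /* interrupt when count == 0 */

    void
    arm_for_edge(int rising)
    {
        ctc_set_count(1);         /* expire one timebase tick after the edge */
        ctc_set_reload(256);      /* ...then reload 256 and keep counting */
        ctc_arm_on_edge(rising);  /* wand transition starts the counter */
        ctc_irq_on_zero();
    }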

My ISR did several things -- quickly.

First, it took a snapshot of the "system timer" -- a 16-bit
free-running counter that was roughly the equivalent of the
jiffy -- except I am now reading a high-resolution timer
and not just "hundredths of seconds".

Then, it looked at the current value in the "barcode counter" that
had been reloaded with 256.

Then, it reprogrammed the counter/timer like before -- except,
it had the timer arm when the control input (from the wand)
went *low* (i.e., back to "white" from "black").

All the timer snapshots were just pushed into a FIFO.
Another system task would watch this FIFO and pull out
these groups of counter values.
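
So the ISR boils down to something like this (again, invented
names; arm_for_edge() is the routine sketched above):

    extern unsigned short systimer_read(void); /* free-running 16-bit timer */
    extern unsigned ctc_read_count(void);      /* barcode counter (reloads at 256) */
    extern void fifo_push(unsigned short stamp, unsigned count);
    extern void arm_for_edge(int rising);

    static int next_rising = 0;  /* first re-arm: black->white, i.e. low */

    void
    barcode_isr(void)
    {
        unsigned short now = systimer_read();  /* 1: snapshot system timer */
        unsigned count     = ctc_read_count(); /* 2: where the countdown got to */

        arm_for_edge(next_rising);             /* 3: re-arm for opposite edge */
        next_rising = !next_rising;

        fifo_push(now, count);                 /* hand off to the reader task */
    }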

Note that I can now tell *exactly* when the white-to-black
(or black-to-white) transition was detected.  To a resolution
of a few microseconds!  REGARDLESS OF MY INTERRUPT LATENCY!!

<grin>

I knew *when* (system timer) I serviced the interrupt.
And, the value in the barcode counter/timer tells me
"how late" I was in servicing the interrupt!  Recall,
it was initialized to 256 when the actual transition
occurred (actually, 16 clock cycles *after* that
transition but this is a fixed constant!).  So, if I
subtract the value recorded from this counter/timer
when I serviced the IRQ from 256, I know how many
clock cycles have elapsed between the IRQ being signalled
and my actually servicing it!
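
In numbers (continuing the sketch above):

    /* The counter was loaded with 256 one timebase tick (16 sysclks)
     * after the transition, so the count read in the ISR gives the
     * latency directly: */
    unsigned
    latency_sysclks(unsigned count_at_irq)
    {
        unsigned ticks = 256 - count_at_irq;  /* timebase ticks since IRQ */
        return ticks * 16 + 16;  /* x16 clks/tick, +16 for the initial "1" */
    }
    /* True edge time = systimer snapshot - latency; the fixed +16
     * drops out when you only look at deltas between successive edges. */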

As the Lectroid leader said at the end of Buckaroo Banzai:
"So what?  Big deal!"

It actually is *crucial*!  If you are scanning bars that
are 0.007" wide and the wand is moving at 100 in/sec, the
"width" (time between transitions) of each bar is ~70us.
If you have variable IRQ latency of many microseconds,
then you are distorting the width of those bars (spaces)
by that latency.

Try doing anything similar on a desktop machine running
a "modern OS". :-/

The point is this was possible due to the characteristics
of the counter/timers used.  Other implementations wouldn't
work this way.  Some might work *better* (e.g., if they
could "capture" the value of the system timer at the time
of the transition -- like an Am9513... expensive!).


