The following thread was obtained from the Internet newsgroup on Virtual
Reality as of 12/12/92, edited by Dan D. Gutierrez (The AMULET BBS).

----------------------------------------------------------------------------

From: diego@minerva.st.dsi.unimi.it (Diego Montefusco)
Subject: Re: SCI: Direct Neural-Electronic Interfacing
Date: Sun, 13 Dec 92 00:13:40 CET

In <9212100952.AA29789@ghost.dsi.unimi.it>, on Dec 9, you wrote:

> From: sheng@mings4.cc.monash.edu.au (Ms X. Sheng)
> Subject: Re: SCI: Direct Neural-Electronic Interfacing
> Date: Thu, 10 Dec 1992 00:23:22 GMT
> Organization: Monash University, Melb., Australia.
>
>
> mauah@csv.warwick.ac.uk (Mr I D Bygrave) writes:
>
>
> I heard somewhere, Beyond 2000 maybe ?, that a group of scientists have
> developed a device for blind people.  What they do is directly connect
> a video camera to their visual nerves (forgive my lack of bio knowledge)
> so the blind people can actually 'see' thru the camera.  So, hook the camera
> up to a pc, and there you have it, a neural-interfaced HMD.

Sounds EASY... it makes me wonder why there aren't such devices in toy
stores!

It is a pity you didn't read that they have only just got to the point of
letting the blind see flashes of light or colours and, for example, a big
stripe...  One of the worst things in today's HMDs is poor resolution: this
way we could make it even worse!!

Diego

-----------------------------------------------------------------------------
 Diego Montefusco                        B E W A R E ! !
  Via Pirano, 4                        PLEASE REPLY ME TO:
  20127 Milano                    montefus@ghost.dsi.unimi.it
    ITALIA                           (not to the account
phone:  +2  27001467                 I'm e-mailing from!)
-----------------------------------------------------------------------------

Subject: Re: SCI: Direct Neural-Electronic Interfacing
Date: 12 Dec 1992 20:44:47 GMT
Organization: Space Sciences laboratory, UC Berkeley


I read somewhere recently that a Stanford professor has been researching
the use of EEG signals to control a 2-d cursor (mouse). It uses a
conventional EEG-style sensor array (skin electrodes -- no implants)
and he claims to get over 60% accuracy. That is, "think" of where you
want the cursor to go, and it goes there. No moving parts.

This story sounds quite wild, and I'm going to try to dig up the
source and post it here. (I think I read it. Hmmm.....maybe I dreamed it..)

Dave

-----------------------------------------------------------------------

From: chadwell@utkvx3.utk.edu (Chadwell, Leonard)
Subject: Re: TECH: Neural Interfacing
Date: Sat, 12 Dec 1992 15:45:00 GMT
Organization: University of Tennessee Computing Center

In article <1992Dec12.041658.16932@u.washington.edu>, dstampe@psych.toronto.edu
 (Dave Stampe) writes...

>There seems to be a lot of misinformation about neural connectivity
>going around, so let me add mine (B-{))
>
>First, let's survey the techniques used so far: the up/down computers with
>pattern recognition and scalp electrodes, multi-electrode EEG, direct
>cortical contacts, nerve interface chips, and high-resolution NMR.
>
>The EEG and control computers use measurements from electrodes on the scalp,
>thus average millions (nearly billions!) of neurons' outputs.  This is
>equivalent to debugging software by putting an AM radio next to your
>computer keyboard and listening to the noise: it gives you clues but
>nothing really detailed.  Even arrays of 40 or more electrodes do not
>help too much: they can indicate some underlying mental activity that
>involves large groups of neurons (alerting responses, preparation for
>movement, and so on) but don't show anything really fine.  Basically, the
>"control" systems that try to get simple commands from EEG rely on the
>subject to do something unnatural internally to produce a large enough
>EEG response for the computer to pick up.  A simple one: blink your eyes.
>You can measure a pulse of alpha waves (10 Hz) in the occipital scalp area.
>But then this is just 1 bit of information, and physically blinking your
>eyes is much easier to detect.
>
[stuff deleted]

Sorry, Dave, but L. Pinneo at SRI DID use EEG systems to do cortical wave
pattern matching to give qualified thought-pattern detection.  Running on a
PDP in 1974, with a skull cap of electrodes (no shaving or such required),
the on-screen cursor could be issued 7 commands by thought: UP, DOWN, LEFT,
RIGHT, SLOW, FAST, STOP.  The main limitations on the system were the RAM
(around 32K) and the speed of the processor.  The commands were merely
thought, and the system could accurately recognize them from 60% of the
people tested on it; with modifications to the pattern-matching code, it
could be tuned to match the others as well.  With the progress made in raw
computing power and memory capacity, more commands could be recognized with
greater accuracy.
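
In modern terms, per-command pattern matching of this sort might look like
the sketch below.  The real Pinneo system's signals, features, and algorithm
aren't documented in this thread, so the command set's waveforms, the
correlation measure, and every parameter here are invented for illustration:

```python
import numpy as np

# Hypothetical sketch of EEG command classification by template matching.
# We correlate an incoming EEG epoch against stored per-command average
# waveforms and pick the best match.  Nothing here is the actual 1974
# algorithm -- it is only the general idea.

COMMANDS = ["UP", "DOWN", "LEFT", "RIGHT", "SLOW", "FAST", "STOP"]

def build_templates(training_epochs):
    """training_epochs: dict command -> array (n_trials, n_samples).
    Returns dict command -> averaged, zero-mean template."""
    templates = {}
    for cmd, trials in training_epochs.items():
        avg = trials.mean(axis=0)
        templates[cmd] = avg - avg.mean()
    return templates

def classify(epoch, templates):
    """Return the command whose template best correlates with the epoch."""
    x = epoch - epoch.mean()
    best_cmd, best_r = None, -np.inf
    for cmd, t in templates.items():
        # normalized cross-correlation at zero lag
        r = np.dot(x, t) / (np.linalg.norm(x) * np.linalg.norm(t) + 1e-12)
        if r > best_r:
            best_cmd, best_r = cmd, r
    return best_cmd
```

Per-subject calibration (the "60% of people" limitation above) would amount
to rebuilding the templates from that subject's own training trials.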

If ANYONE has contacts with or in Stanford Research Institute, please try to
get this project's information to me.  I have been unable to locate anything
else on this project.  It would be very useful.

CHADWELL@UTKVX.UTK.EDU

----------------------------------------------------------------------------

From: chadwell@utkvx2.utk.edu (Chadwell, Leonard)
Subject: Re: TECH: Neural Interfacing
Date: Fri, 11 Dec 1992 18:05:00 GMT
Organization: University of Tennessee Computing Center

In article <1992Dec11.053410.29965@u.washington.edu>, kahn@Csli.Stanford.EDU
 (Ken Kahn) writes...
>
>I recall seeing a film in the 70s (this was before video tape) of an SRI demo
>which could distinguish about a dozen mental states (that people could be
>trained to put themselves into).  Someone with a helmet was thinking to control
>a cursor in a maze (up, down, left, right).  It was a neat demo.  Low
>bandwidth though.  About 2 or 3 bits per second. Maybe 15 years later Fujitsu
>will go somewhere with this.  I've been surprised that I haven't heard any
>followups, at least for the severely handicapped.
>
> -ken kahn
Stanford Research Institute (SRI) DID have this.  Lawrence Pinneo did the
research, and it was even reported in Time magazine in 1974 (I think).  If
you consider the advances made not only in computers, but also in
electronics, since 1974... a new version of this should be able to do an
awful lot more than the 7 commands the old version did.  If anyone can get
SRI to release the data on this experiment, get it to me pronto.....

CHADWELL@UTKVX.UTK.EDU

----------------------------------------------------------------------------

From: jnewman@hamp.hampshire.edu
Subject: Re: SCI: Direct Neural-Electronic Interfacing
Date: Fri, 11 Dec 1992 19:25:43 GMT
Organization: Hampshire College

 (Marcus Brodeur) writes:
>> bold new frontiers to be conquered, certainly, but many (such as
>> direct-neural-interfacing) are certainly NOT going to be conquered in
>> the next century or so.  This is not pessimism on my part (indeed, I
>> am an optimist), but rather a realistic evaluation of progress in that
>> field particularly, also gleaned from discussions with several
>> individuals in the field of neurobiology and neuroscience.
>
>> Marcus Stefan Brodeur.
>
> You are overlooking the fact that there are two directions to every computer
> interface.  I will concede that I've not seen much about direct
> computer->human neuron interfacing, but the other direction is
> extremely plausible.
>
> The fundamental research into neuron->computer interfacing is already
> 10-20 years old.  The bibliography posted here recently by The Garth
> shows that clearly.
>
> For about $1000 any schmoe can now go into EEG
> ("brainwave") research.  If you're content with electromyography
> (minute muscle signals), you can build most of what you need from
> stock Radio Shack parts for $20, and easily buy the rest for another
> $200.

  In fact, just last year, a Japanese scientist actually managed to synthesize
neurons and get them to line up in an electronically useful manner.  He cites
uses being primarily medical/cybernetic.  Still think it won't happen in a
century?  Hah.  We're talking about the next 5-10 years!

				The opinions expressed are, of course, my own.
				Why would I care about yours?

				-Grendel 2.0

----------------------------------------------------------------------------

From: mauah@csv.warwick.ac.uk (Mr I D Bygrave)
Subject: Re: SCI: Direct Neural-Electronic Interfacing
Date: 11 Dec 1992 10:29:37 -0000
Organization: Computing Services, University of Warwick, UK

In article <1992Dec11.053831.1167@u.washington.edu> AKOSSOWSKY@TURBO.kean.edu
 (Andy Kossowsky) writes:
>
>You give technology too much credit. I can certainly see
>how nanotech could facilitate the BUILDING of a Direct Neural Interface,
>but only after someone DESIGNS one.  Researchers know so little
>about the actual workings of the brain's neural network, so even
>if nanotech sprang into existence fully developed tomorrow, we
>wouldn't know what the heck to tell the little 'nanites' to do in there!
>
>ApK
Certainly if nanotech were to magically 'appear' overnight we couldn't make
full use of it. But it isn't going to do that, and in the time it takes to
develop these capabilities, we can start thinking about some good ways to
use them once they do arrive.
For a better discussion of these ideas, take a look at sci.nanotech.

-Ian D. Bygrave Undergraduate Computer Science at University of Warwick, UK
	mauah@csv.warwick.ac.uk
	ibygrave@dcs.warwick.ac.uk
"We, alone on earth, can rebel against the tyranny
 of the selfish replicators"-Richard Dawkins. The Selfish Gene.

----------------------------------------------------------------------------

From: dstampe@psych.toronto.edu (Dave Stampe)
Subject: Re: TECH: Neural Interfacing
Date: Fri, 11 Dec 1992 15:34:28 GMT
Organization: Department of Psychology, University of Toronto

kahn@Csli.Stanford.EDU (Ken Kahn) writes:
>
>I recall seeing a film in the 70s (this was before video tape) of an SRI demo
>which could distinguish about a dozen mental states (that people could be
>trained to put themselves into).  Someone with a helmet was thinking to control
>a cursor in a maze (up, down, left, right).  It was a neat demo.  Low
>bandwidth though.  About 2 or 3 bits per second. Maybe 15 years later Fujitsu
>will go somewhere with this.  I've been surprised that I haven't heard any
>followups, at least for the severely handicapped.
>
There seems to be a lot of misinformation about neural connectivity
going around, so let me add mine (B-{))

First, let's survey the techniques used so far: the up/down computers with
pattern recognition and scalp electrodes, multi-electrode EEG, direct
cortical contacts, nerve interface chips, and high-resolution NMR.

The EEG and control computers use measurements from electrodes on the scalp,
thus average millions (nearly billions!) of neurons' outputs.  This is
equivalent to debugging software by putting an AM radio next to your
computer keyboard and listening to the noise: it gives you clues but
nothing really detailed.  Even arrays of 40 or more electrodes do not
help too much: they can indicate some underlying mental activity that
involves large groups of neurons (alerting responses, preparation for
movement, and so on) but don't show anything really fine.  Basically, the
"control" systems that try to get simple commands from EEG rely on the
subject to do something unnatural internally to produce a large enough
EEG response for the computer to pick up.  A simple one: blink your eyes.
You can measure a pulse of alpha waves (10 Hz) in the occipital scalp area.
But then this is just 1 bit of information, and physically blinking your
eyes is much easier to detect.
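
The eye-blink example above (a pulse of ~10 Hz alpha in the occipital EEG)
amounts to a 1-bit band-power detector.  A minimal sketch of that idea,
where the sampling rate, window length, and threshold factor are all
illustrative assumptions:

```python
import numpy as np

# 1-bit "alpha burst" EEG detector: flag windows where power in the
# 8-12 Hz band jumps well above a resting baseline.  Parameters are
# invented for illustration, not taken from any real system.

FS = 256             # samples per second (assumed)
ALPHA = (8.0, 12.0)  # alpha band, Hz

def alpha_power(window):
    """Mean spectral power of `window` inside the alpha band."""
    spectrum = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
    return spectrum[band].mean()

def blink_detected(window, baseline_power, factor=4.0):
    """True if alpha power jumps well above the resting baseline."""
    return alpha_power(window) > factor * baseline_power
```

As the post notes, this yields just one bit of information per event, which
is exactly why such schemes are low-bandwidth.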

High-resolution NMR (nuclear magnetic resonance imaging) can detect
smaller areas of neural activity: nearly down to 1 millimeter square.
There are two problems with this (at least at present).  First, the
brain state must be held for at least a second to be picked up.  Even
then, the noise level is so high that in brain research 20 or more
sessions must be averaged to get a "significant" result (1 bit error
in 20).  And NMR machines are huge-- they have to be to create the
enormous and highly controlled magnetic fields they use.  I'm not
saying that future technology won't fix this, but nothing's on the
25-year horizon.

The underlying problem is that neural activities in the brain seem to be
divided into two levels (at least for our purposes).  First, activity is
clustered into cortical "columns", about 1 millimeter square and 4 to 8
mm thick.  The activity in these can give simple clues (for example, area
of attention), but not much more.  Second, within each column (the jury's
still out on whether this occurs between columns as well) data is
represented by the activity of millions of individual neurons.  The form
of this data is diffuse and cannot be seen without direct cortical probes
or some other method-- even electrodes directly on the brain surface
will not be able to get all of it.

So how about direct brain contact (chips or electrode arrays)?  First,
it's still surface contact, so is not specific enough.  Second, penetrating the
cortex with pointed electrodes is definitely antisocial, since nerves,
neurons, and possibly blood vessels are being damaged.  Not something you
want to do more than (or even) once.  Even given nanotech (and so far most
of that is glorified science fiction speculation-- remember how SF picked
up on stuff like Tipler machines for FTL or time travel?) you still face
other problems.

The main problem with direct neural contact is interpretation.  On a
local level, neural patterns are nothing like a map of the world or
even a map of stimulus characteristics.  Usually you have a lot of
"noise" in the position of elements in the map (after all, the brain isn't
printed-- it "just grew").  Also, several types of information processing
are usually intermixed in stripes, columns and cortical layers.  So what
you need is a computer with 10^8 inputs (or more) to learn the current
connectivity of the brain area of interest, and translate data to/from
the brain representation.  I leave the training of this as an exercise to
future generations. (B-{))
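
The "learn the connectivity, then translate" step can at least be
illustrated at toy scale (tens of units rather than 10^8) with a
least-squares linear decoder.  This is purely a sketch of the idea under
invented assumptions, not a claim about how cortical codes actually work:

```python
import numpy as np

# Toy decoder: fit a linear map from recorded unit activity to a known
# stimulus representation by least squares, then use it to decode new
# activity.  Real neural codes are neither linear nor this clean.

def fit_decoder(activity, stimuli):
    """activity: (n_trials, n_units); stimuli: (n_trials, n_features).
    Returns W such that activity @ W approximates stimuli."""
    W, *_ = np.linalg.lstsq(activity, stimuli, rcond=None)
    return W

def decode(activity, W):
    """Translate activity vectors into estimated stimulus features."""
    return activity @ W
```

The training data problem the post points at is real: at brain scale you
would need vast amounts of paired (activity, stimulus) data before any such
mapping could be fit.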

Now for the real problem with brain connections, and the reason why I
feel that the current external stimulation method of VR will be the
only real method.  Everyone seems to assume you can push signals
to or from the cortex of the brain and everything will work.  Nothing
could be more wrong!

Every sense (except smell) goes through extensive neural processing by,
or is processed in parallel by, important subcortical structures.  These also
happen to be the most inaccessible parts of the brain.  They are
responsible for much of our "fluency" in viewing and interacting with
the world, and just inserting input at the cortex excludes them.  The
results from direct cortical stimulation will be unnatural and may
be closer to hallucinations than sensory input.  So you won't get real
VR this way-- another way to interface, perhaps, but not as useful.
For example, let's say you're controlling a machine, and something
unexpected happens in some corner of the area.  Without subcortical
control, your reaction time will be very slow, and you may not even
notice it occurring!

As a real-life example, some researchers tried disconnecting part of the
somatosensory (skin sensation) cortex from input to control pain in
terminally ill subjects.  Not only did this not work, but the pain became
harder to localize and thus harder to ignore!  Why?  Because the limbic
system (a deeply buried section of cortex, highly involved with emotion
and control) also gets connections to pain input through subcortical
structures.  (Damaging the limbic system isn't an option, for obvious
reasons.)  The same type of warning goes for
vision, where subcortical structures are very important for the control of
attention, noticing changes in the scene, spatial localization, and so
on.  Audition is even worse-- what gets to the cortex is so heavily
preprocessed that cortical stimulation probably would be difficult (at
least if you're looking to get 100% "veridical" sound sensation).

So-- what IS possible?  I think that neural connections to the cortex are
interesting for research (in fact, essential to understand the brain).  But
for VR, you want to get realistic interactions and stimulation.  The
last place you can do this is where nerves enter the brain (even the
spinal column is a bit iffy, because we don't know all the processing
that goes on in it).  Because nerves aren't well matched spatially,
you'd need a computer to learn the connectivity and map your input
and output to the nerve.  How to connect in without damage?  Somebody
else's problem, I suspect.

--------------------------------------------------------------------------
| My life is Hardware,                    |         Dave Stampe          |
| my destiny is Software,                 | dstampe@psych.toronto.edu    |
| my CPU is Wetware...                    | dstampe@sunee.uwaterloo.ca   |
| Am I a techno-psychologist, or just an engineer dabbling in psychology?|
---------------------------------------------------------------------------

From: M.E.Morris@bnr.co.uk (Michele Morris)
Subject: Re: SCI: Direct Neural-Electronic Interfacing
Date: 11 Dec 1992 11:34:34 GMT
Organization: BNR Europe Limited, Harlow, GB

In article <=8j1Hr*h9b@atlantis.psu.edu> vincent@wilbur.psu.edu (James Vincent)
 writes:
>
>A little more food for thought on the subject: there exist today electrodes
>that can measure single neurons firing. Ultra small electrodes are placed
>directly over the synapse and detect firing. Given enough of these and some
>sort of neural net to figure out what all the firings meant ......

I hate to tell you this, but if you're thinking of the very fine
electrodes used, for example on cat brains, to investigate the
functional mapping of the visual cortex, then you're asking to addle
your brains. Not only do these electrodes, fine though they are,
destroy neurons on the way in, I believe that prolonged use also kills
the neurons whose action potentials they are measuring. Rather than
expanding your sensory horizons you'd be destroying your brain. I
think I'll wait for a non-invasive, or at least less-destructive
method.  Something more akin to a cochlear implant, but with much,
much, higher resolution.

Cheers M'dears ... Michele

*************************************************************************
email: M.E.Morris@bnr.co.uk    phone: +44 279 429531   fax: +44 279 441551
BNR Europe Limited, London Road, Harlow, Essex, CM17 9NA, England.

	I think it's kind of interesting the way things get to be.
	The way the people work with their machines.
**************************************************************************

From: vincent@wilbur.psu.edu (James Vincent)
Subject: Re: SCI: Direct Neural-Electronic Interfacing
Date: Thu, 10 Dec 92 15:16:27 GMT
Organization: Penn State Center for Academic Computing

A little more food for thought on the subject: there exist today electrodes
that can measure single neurons firing. Ultra small electrodes are placed
directly over the synapse and detect firing. Given enough of these and some
sort of neural net to figure out what all the firings meant ......
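
One loose sketch of "some sort of neural net to figure out what all the
firings meant": a small softmax classifier mapping a vector of per-electrode
firing rates to one of a few interpreted states.  The electrode counts,
states, and training data are all invented for illustration:

```python
import numpy as np

# Tiny softmax (multinomial logistic) classifier over firing-rate vectors.
# This stands in for "a neural net" in the simplest possible way; a real
# system would need far richer features and temporal structure.

def train(rates, labels, n_classes, lr=0.5, epochs=300):
    """rates: (n_samples, n_electrodes) firing-rate vectors;
    labels: (n_samples,) integer class ids.  Returns weight matrix."""
    n, d = rates.shape
    W = np.zeros((d, n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = rates @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # gradient ascent on the log-likelihood
        W += lr * rates.T @ (onehot - p) / n
    return W

def predict(rates, W):
    """Most likely interpreted state for each firing-rate vector."""
    return np.argmax(rates @ W, axis=1)
```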

-----------------------------------------------------------------------------

From: mauah@csv.warwick.ac.uk (Mr I D Bygrave)
Subject: Re: SCI: Direct Neural-Electronic Interfacing
Date: 10 Dec 1992 18:09:18 -0000
Organization: Computing Services, University of Warwick, UK

In article <1992Dec10.071733.27060@u.washington.edu>
 snoswell@wattle.itd.adelaide.edu.au (Michael Snoswell) writes:
>
>mauah@csv.warwick.ac.uk (Mr I D Bygrave) writes:
>>Direct neural interfacing would be a simple problem to solve with
>>nanotechnology, and could probably be achieved with pre-assembler
>>nanotechnology.
>
>This was treated rather well in "Queen of Angels" by Greg Bear.
>
>It certainly made me think at the time about how to go about producing a direct
>neural tap.
Something similar to the cell repair machines suggested by Drexler could be
adapted to make recording and active change of the structure and activity of
the brain, at the neuron/microcircuit level, quite possible.
Some of this data would be quite easy to interpret in terms of the sensory
environment, i.e. motor control and the front end of the sensory systems.
I would not like to make a guess upon how long it would take to have a sound
basis of science that would enable us to interpret other data in terms of what
the subject is thinking, i.e. thought detection/control.

IMHO, with some help from other great minds :^)

Ian Bygrave-undergraduate at University of Warwick,UK
	mauah@csv.warwick.ac.uk
	ibygrave@dcs.warwick.ac.uk

-----------------------------------------------------------------------------

From: "Dan D. Gutierrez" <73317.646@CompuServe.COM>
Subject: Re: SCI: Direct Neural-Electronic Interfacing
Date: 10 Dec 92 12:05:25 EST

I'm not sure exactly what you have in mind here, but I agree that nanotech
yields many possibilities in the NEAR future too!  I had the rare
opportunity to discuss biotech issues, specifically nanotech, with a
gentleman who is on the forefront of investing in such ventures.  Through
his business contacts and activity in the field, he predicts that real
nanotech products will emerge within 12-15 years!  I was shocked to hear
him say that, given the time frames in "Engines of Creation."  Oh, by the
way, the gentleman was William H. Gates...

Also, Drexler's new book NANOSYSTEMS is an excellent beginning in
establishing an engineering framework for nanotechnology.  I'm reading it
now and am excited to say the least.

BTW, you mention sci.nanotech?  Is this a newsgroup on Internet?  How can
I gain access (coming in here from CompuServe...).

Thanks in advance.

_ddg

-----------------------------------------------------------------------------

From: aw1@Ra.MsState.Edu (Andrew Wargo)
Subject: Re: SCI: Direct Neural-Electronic Interfacing
Date: Thu, 10 Dec 1992 22:17:36 GMT
Organization: Mississippi State University

It seems likely that one could have one's head drilled and electrodes
implanted; then one can 'plug in' whenever one wants to interface, and
'unplug' when one wants to appear unmodified.
Personally, I LOVE the idea of direct neural interface, and dearly PRAY and
HOPE and pine away that I'll be able to have one that'll allow me access to
comm functions, higher level engineering and mathematical functions, etc.
instantaneously.

--
Regnat Populus!  | aw1@ra.msstate.edu | kith-kanan@rhostshyl.cit.cornell.edu
The People Rule! | awargo@nyx.cs.edu  | K'than @Soucon
		 |                    | Weyrling to Bronze Kydarth

-----------------------------------------------------------------------------

From: kahn@Csli.Stanford.EDU (Ken Kahn)
Subject: Re: TECH: Neural Interfacing
Date: Thu, 10 Dec 1992 16:57:34 GMT
Organization: Stanford University CSLI

Re:
>>The first article is from the front page of the Wall Street Journal, Dec
>>3rd 1992.  According to the article, Fujitsu Ltd. is working on a
>>mind-reading computer.  Though limited to YES & NO thoughts, the initial
>>step will be to develop a helmet that can monitor brain waves without using
>>electrodes.

I recall seeing a film in the 70s (this was before video tape) of an SRI demo
which could distinguish about a dozen mental states (that people could be
trained to put themselves into).  Someone with a helmet was thinking to control
a cursor in a maze (up, down, left, right).  It was a neat demo.  Low
bandwidth though.  About 2 or 3 bits per second. Maybe 15 years later Fujitsu
will go somewhere with this.  I've been surprised that I haven't heard any
followups, at least for the severely handicapped.

 -ken kahn

-----------------------------------------------------------------------------

From: Andy Kossowsky           <AKOSSOWSKY@TURBO.kean.edu>
Subject: Re: SCI: Direct Neural-Electronic Interfacing
Date: Wed, 09 Dec 1992 21:41:28 EST

RE: Direct Neural Interfacing.

In article <1992Dec6.075414.18229@u.washington.edu>
 dream!Marcus_Brodeur@bikini.cis.ufl.edu (Marcus Brodeur) writes:
>
>In a message dated Sun 29 Nov 92 3:26,
>U55533%uicvm.uic.edu@ohstvma.acs.oh wrote:
>
>>  I predict that full direct neural immersion will not be available for
>>200 years,
>
>  There are many
>bold new frontiers to be conquered, certainly, but many (such as
>direct-neural-interfacing) are certainly NOT going to be conquered in
>the next century or so.

Direct neural interfacing would be a simple problem to solve with
nanotechnology, and could probably be achieved with pre-assembler
nanotechnology.
Given that nanotechnology will become a reality within the next century
[read Engines of Creation, by K. Eric Drexler for a very convincing
argument] or even 50 years, then I would put the timescale for this ability
very much closer. Say 20-50 years, and certainly within my lifetime (which
will be considerably longer thanks to nanotech.).
For a better discussion of these ideas read the news group

	sci.nanotech

Ian Bygrave:undergraduate at University of Warwick,UK
	mauah@csv.warwick.ac.uk

--- Reply by Andy Kossowsky :

You give technology too much credit. I can certainly see
how nanotech could facilitate the BUILDING of a Direct Neural Interface,
but only after someone DESIGNS one.  Researchers know so little
about the actual workings of the brain's neural network, so even
if nanotech sprang into existence fully developed tomorrow, we
wouldn't know what the heck to tell the little 'nanites' to do in there!

ApK

-----------------------------------------------------------------------------

From: keithley@apple.com (Craig Keithley)
Subject: TECH: Neural Interfacing
Date: Wed, 9 Dec 1992 02:17:34 GMT
Organization: Apple Computer, Inc.

In the last few weeks I've seen two articles about neural interfacing.

The first article is from the front page of the Wall Street Journal, Dec
3rd 1992.  According to the article, Fujitsu Ltd. is working on a
mind-reading computer.  Though limited to YES & NO thoughts, the initial
step will be to develop a helmet that can monitor brain waves without using
electrodes.

The second article referenced the Hot Chips IV conference.  According to
the article, a talk was given by Gregory Kovacs of Stanford University.
Kovacs is working on a passive neural interface that connects electrically
with the nervous system, allowing both control of, and feedback from an
artificial limb.  The neural interface consists of a 32x32 array of holes
that the nerves pass through.  The article on the Hot Chips IV conference
can be found in Circuit Cellar Ink, The Computer Applications Journal, issue
#30, December 92.

Neural interfacing is still in its infancy, despite the somewhat successful
research done at UCLA in the early/mid 70s.  While those experiments
focused on 'evoked potentials', they were also faced with very limited real
time signal processing.  Perhaps modern day DSPs, room temperature
superconductors, and a little magic could provide better results.  For some
very limited details on the UCLA research, look up Ronald Olch's
dissertation titled: Computer system architecture for a brain computer
interface.

And I've seen an occasional mention (on PBS, etc.) of Air Force research
into advanced fighters and neurally controlled flight systems.  Nothing
tangible, just an interesting sign that the Air Force is doing research
into neural interfaces.

Unfortunately, conversation with Ronald Olch (and others) leads me to
believe that research is still ongoing, and, so far, has failed to produce
significant results.  Perhaps Kovacs and Fujitsu will make the necessary
breakthrough(s).

Craig Keithley
Apple Computer, Inc.
keithley@apple.com

-----------------------------------------------------------------------------
