
Physioinformatics

A systems-based, physiologically robust reference architecture for designing and refining interactive human-computer interface systems in ways that increase the operational throughput of information.

The term “physio-informatics” will be used in this dissertation to denote informatic systems which are biologically/physiologically based (primarily neurologic, i.e., neuro-informatic) information systems, and/or informatic systems which are designed to support interaction (the dynamic exchange of information) with such systems.

The intent of this work is to develop a systems-based, physiologically robust reference architecture for designing and refining interactive human-computer interface systems in ways that increase the operational throughput of information. This increased throughput is achieved by extending the perceptual dimensionality of information presented to the human and by enhancing the expressional capacity of the human to convey intent to the informatic system.

Interactive Human-Computer Interface Systems

An interface system to match the human nervous system's ability to transduce, transmit, and render to consciousness the information necessary to interact intelligently with information.

"The physiologic basis of a reference architecture for designing interactive human-computer interface systems"

The development of a physiologically based reference architecture for designing and developing interactive human-computer interface systems

In the various models of human-computer interaction it is customary to think in terms of inputs and outputs:

input from the computer to the human and output from the human to the computer, or input from the human to the computer and output from the computer to the human.

The purpose of this dissertation is to develop a systems model which is thought to be more representative of reality than traditional models, in that it is consistent with the phenomenological aspects of the interaction.

 

 

Context And Initial Motivation

The capacity of computers to receive, process, and transmit massive amounts of information is continually increasing. Current attempts to develop new human-computer interface technologies have given us devices such as gloves, motion trackers, and 3-D sound and graphics. Such devices greatly enhance our ability to interact with this increasing flow of information. Interactive interface technologies emerging from the next paradigm of human-computer interaction are directly sensing bio-electric signals (from eye, muscle and brain activity) as inputs and rendering information in ways that take advantage of the psycho-physiologic signal processing of the human nervous system (perceptual psychophysics). The next paradigm of human-computer interface will optimize the technology to the physiology -- a biologically responsive interactive interface.

 

INTERACTIVE INFORMATION TECHNOLOGY

Interactive information technology is any technology which augments our ability to create / express / retrieve / analyze / process / communicate / experience information in an interactive mode. Biocybernetics optimizes the interactive interface, promising a technology that can profoundly improve the quality of life of real people today. The next paradigm of interface technology is based on new theories of human-computer interaction which are physiologically and cognitively oriented. This emerging paradigm of human-computer interaction incorporates multi-sense rendering technologies, giving sustained perceptual effects, and natural user interface devices which measure multiple physiological parameters simultaneously and use them as inputs. Biologically optimized interactive information technology has the potential to facilitate effective communication. This increase in effectiveness will impact both human-computer and human-human communication, yielding “enhanced expressivity”.

"BIOCYBERNETIC CONTROLLER"

Interactive interface technology renders content-specific information onto multiple human sensory systems, giving a sustained perceptual effect, while monitoring human response in the form of physiometric gestures, speech, eye movements, and various other inputs. Such quantitative measurement of activity during purposeful tasks allows us to quantitatively characterize individual cognitive styles. This capability promises to be a powerful tool for characterizing the complex nature of normal and impaired human performance. The systems of the future will monitor a user's actions, learn from them, and adapt by varying aspects of the system's configuration to optimize performance. By immersing the external senses and through iterative interaction with biosignal-triggered events, complex tasks are more readily achieved.
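
A minimal sketch of such a biosignal-triggered event chain (Python; the smoothing constant, thresholds, and event names here are illustrative assumptions, not the actual BioMuse or NeatTools processing):

def smooth_rectify(samples, alpha=0.1):
    # Exponentially smooth the rectified (absolute) EMG signal.
    level = 0.0
    for s in samples:
        level = (1 - alpha) * level + alpha * abs(s)
        yield level

def gesture_events(levels, on_threshold=0.6, off_threshold=0.4):
    # Emit discrete events when the smoothed level crosses a hysteresis
    # band; two thresholds prevent chatter near a single cutoff.
    active = False
    for level in levels:
        if not active and level > on_threshold:
            active = True
            yield "gesture_on"
        elif active and level < off_threshold:
            active = False
            yield "gesture_off"

In a working system the sample stream would come from the acquisition hardware, and each event would be bound to an application action (a selection, a switch closure, a navigation step).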

This paradigm shift in mass communication and information technologies is providing an exciting opportunity to facilitate the rapid exchange of relevant information, thereby increasing the individual productivity of persons involved in the information industry. Areas such as computer-supported cooperative work, knowledge engineering, expert systems, interactive attentional training, and adaptive task analysis will be changed fundamentally by this increase in informatic ability. The psycho-social implications of this technologically mediated human-computer and human-human communication are quite profound. Providing the knowledge and technology required to empower people to make a positive difference with information technology could foster the development of an attitude of social responsibility towards the usage of this technology, and may be a profound step forward in modern social development. Applications which are intended to improve quality of life, such as applications in medicine, education, recreation, and communication, must become a social priority.

An overview of the field of interactive human-computer interface systems

From the early days of computer programming, when each logical connection was “hard wired” by an army of technicians, to the era of punch cards, the concept of interaction with a computer had a very limited and specific interpretation. A great advance was made when the computer gained a “typewriter”-like mechanism which allowed computers to be programmed through “terminals”. Between those early days and the late 1970s, the ability of the computer to respond to a task given it by a human was, for the most part, limited by computational power. As the speed of the computer increased along with its capacity to respond to the human, it became apparent that the human’s ability to convey intent to the computer, and the computer’s capacity to display the results of its calculations to the human, would become the limiting factors in the “interaction” between humans and computers. Innovators at Xerox PARC, MIT, NASA, DARPA and other computer research facilities began to rethink how humans and computers might communicate more effectively (in this context the term “communicate” is used to mean the ability to intentionally exchange meaningful information). The first significant breakthrough to make it out of the lab was the “WIMP” (Windows, Icons, Mouse, Pointer) graphical user interface, referred to in the nerd zone as the GUI (pronounced gooey). The GUI reached the general public with the commercial release of Apple’s “personal” computers in the mid-1980s.

 

In the mid 80’s, computer systems used by industry were becoming fast enough, and display technology sophisticated enough, to “render” graphic images for engineers in computer-aided drafting and design jobs, who could begin to manipulate these rendered images with ever increasing speed and resolution. At the same time, new techniques were being developed by scientists to enable them to “visualize” a graphic image that was the result of a very complex set of calculations. These new areas of CAD/CAM and scientific visualization continued to evolve with faster and faster computation and ever increasing quality of graphic images. In the late 80’s it was recognized that this compute power and these graphic display techniques could be applied to the broader problem of human-computer interaction.

While it is true that much work was done on the human factors of “man-machine” interfaces throughout the late 70’s and the 80’s, this work dealt with the physicality of the environment and of information displays, and much of it was done in the context of very specialized tasks. Tasks for specific kinds of work, such as piloting fighter airplanes or spacecraft, or controlling complex industrial processes such as nuclear power plants or chemical processing plants, were well studied and refined. However, these tasks differ from interactive human-computer systems in that the humans were controlling some machine or physical system and were not primarily interacting with information as represented by computer systems (one exception being interaction with a computer simulation of a complex physical system). The primary efforts in researching and refining these systems were in the field of ergonomics, dealing with the energetics of the human interacting with the environment, and cognitive science, the mental computation required to perform effectively in the environment.

In the late 80’s a new concept began to take hold in the field of human-computer interaction: “immersive systems”, in which the computer system encompasses the human’s senses and tracks the movements and position of the human, in an effort to create a synthetic environment within which the human can interact in a more natural way. These virtual reality systems sparked a brief but important revolution in the thinking and gadgetry of human-computer interaction.

From an evolutionary neuro-information-processing perspective, this technology creates a new potentiality for response to perceptual awareness: it canalizes not a single response to a single stimulus, but rather multiple responses to multiple stimuli born of a single, though multi-dimensional, sensorial perceptual state. It also permits the combination of these different rendering modalities with somatotopic placement, in order to achieve and demonstrate spatial coding of the rendered information.

Optimizing the human-computer interface will rely on the knowledge base of physiology and neuroscience; that is, the more we know about the way we acquire information physiologically, the more we know about the optimum way for a human to interact with intelligent information systems. The next paradigm will see the “thinning” of the human-computer interface to a biological sheerness, as the interface comes to map very closely onto the human body.

 

PHYSIOLOGICALLY ORIENTED INTERFACE DESIGN

Knowledge of sensory physiology and perceptual psychophysics is being used to optimize our future interactions with the computer. By increasing the number and variation of simultaneous sensory inputs, we can make the body an integral part of the information system, “a sensorial combinetric integrator”. We can then identify the optimal perceptual state space parameters in which information can best be rendered, that is, what types of information are best rendered to each specific sense modality, “a sense-specific optimization of rendered information”. Research in human sensory physiology, specifically sensory transduction mechanisms, shows us that there are designs in our nervous systems optimized for feature extraction of spatially rendered data, temporally rendered data, and textures. Models of information processing based on the capacity of these neurophysiological structures to process information will help our efforts to enhance perception of complex relationships by integrating visual, binaural, and tactile modalities. Then, by using natural bioelectric energy as a signal source for input -- electroencephalography, electrooculography, and electromyography (brain, eye and muscle) -- we can generate highly interactive systems in which these biological signals initiate specific events. Such real-time analysis enables multi-modal feedback and closed-loop interactions.
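
A schematic sketch of one step of such a closed loop (Python; the proportional rule, gain, and parameter names are assumptions of this illustration):

def closed_loop_step(measured, target, gain, render_param, lo=0.0, hi=1.0):
    # Nudge a rendering parameter toward the level that keeps the
    # measured physiological response near the desired target, then
    # clamp it to its valid range.
    error = target - measured
    render_param += gain * error
    return min(hi, max(lo, render_param))

Run once per acquisition cycle, this keeps the display within the user's effective operating range as the physiological response drifts.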

The following dissertation is concerned with developing a “reference architecture” (a formalized conceptual framework for thinking) for designing physiologically robust interactive human-computer interface systems.

The purpose of the reference architecture is to provide insight into the various components of the system, in the context of how they might affect the flow of information as it passes through them.

The primary focus will be to consider the flow of information between the human and the computer in a sustained, iterative, experiential interaction

In the context of this dissertation, it is assumed that the intent of developing this reference architecture is to map the information flow during, and caused by, the intentional/volitional interaction with information between a conscious human and a computer system.

An exchange of information between an experienced perceptual state and an external physical state is mediated by a biologic/physiologic information transporter system.

This system is multi-modal, multi-scale, concurrent, hetero-purpose, poly-dyno-morphic, and simul-tasking.

For this discussion we will assume that interface systems which support human-computer interaction can be modeled as systems in which information flows between the various components in a specific manner.


 

 

Hypothesis

An understanding of human neurophysiology allows for the exploitation of predictable adaptive capabilities

Assumptions

The nervous system is the primary information infrastructure for humans

The nervous system supports the transduction, transmission, and representation of, and the response to, information in the environment

Time is perceived as a unidirectional vector

  

Universe of discourse

The phenomenon of interest (perception) occurs at the anthroscopic scale

Mind happens at an anthroscopic scale.

Anthroscopic scale: the natural scale of perceptibility of an individual human, from meters to millimeters, from decades to deciseconds.

 

 

Neurocosmology

Anthroscopic epistemology is biased by neural systems

Consider a system which has as its primary components three fundamentally distinct sets of parameter values,

or three “state spaces”. A state space is a set whose members are defined by an n-tuple of values corresponding to the parameter values of a system. One set is the set of parameter values of a computational system.

Another set is a set of directly measurable parameter values of a physiologic system.

The third set is a set of parameter values which are directly experienced by a conscious human.

Each set is distinguishable from the others in that the computational system and the physiological system are physically separable, while the physiological and perceptual state spaces are distinguished phenomenologically.

While it is acknowledged by the author that the basis of perceptual experience is most probably supported by a specific set of physiologically distinct systems, it is beyond the scope of this dissertation (and frankly unnecessary) to develop a robust explanation/model of the specifically/physically distinguishable aspects of conscious experience (the mind) and the assumed neural matrix (the body) which is thought to at least co-occur with those conscious experiences. Suffice it to say that a user, in the context of an experiential interaction with information, does not routinely confuse the two.

There are three fundamental state spaces required to describe the complete system:

Information which is described by the state of an external computational system

Information which is described by the state of a physiologic system

Information which is described by the state of the perceptible components of a conscious experience

Each of these systems is considered a fundamental source of information, and there is an ordered flow of information between these three fundamental state spaces.

As information is exchanged between these three state spaces, there are direction-specific, boundary-crossing transfer functions which restrict, bias, interfere with, or otherwise constrain the capacity and fidelity of the exchange.
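
One compact way to notate this (a sketch only; the choice of symbols anticipates the core symbols introduced later, and the functional form is an assumption of this illustration) is to write the three state spaces as $K$ (the external computational system), $P$ (the physiologic system), and $\Psi$ (the perceptual system), with direction-specific transfer functions

$$T_{K \to P} : K \to P, \qquad T_{P \to \Psi} : P \to \Psi,$$

so that the end-to-end flow from an external information state $k \in K$ to an experienced perceptual state $\psi \in \Psi$ is the composition

$$\psi = \left( T_{P \to \Psi} \circ T_{K \to P} \right)(k),$$

with the capacity and fidelity of the whole exchange bounded by the most restrictive transfer function in the chain; the expressive direction is written symmetrically with $T_{\Psi \to P}$ and $T_{P \to K}$.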

From the perspective of the physiologic system there is an information exchange with a persistent external information system and an emergent internal information system

Thus it is a basic tenet of this model that the body (comprising all of its information-processing physiologic systems) mediates the exchange of information between the computer and conscious experience.

This may at first glance seem somewhat obvious to the casual observer. However, it is this physiologic mediation of information (with all of its specific processes) which is the basis for the development of this reference architecture.

Simply put, the three coupled state spaces of the proposed interactive human-computer interface system may be described (from the perspective of a conscious human user of the system) as comprising an external computational system; the imperceptible physiologic processes which mediate the exchange of information between that system and conscious experience; and the perceptual qualities of experienced information:

Information, which is externally generated

Information, which is biologically/physiologically mediated

Information, which is directly experienced

It is the assertion of the author that the information flow between external sources and direct experience is biased, constrained, limited, enhanced, and facilitated in understandable and predictable ways by the physiological mechanisms of human information processing.

Basic principles

Human perception is mediated, for the most part, by the nervous system.

The physicality of the nervous system constrains perception: space, time, mass, and energy.

The physiology of the human nervous system restrains perception: complexity, functionality, capacity.

The intra-activity of the nervous system sustains perception.

Thus the form and function of the nervous system influence various parameters of perception.

Definition

A state of any system is defined by the set of values which describe the condition of the system at any given point in time (the values of all the state vectors).

A system has a state space which represents/contains all possible states of that system. (These definitions are formalized in the sketch following this set of definitions.)

Perceptual states are experiential units of awareness, an instant of awareness.

Perceptual state space--- the set of all perceptual states

Perceptual states are constantly emerging in time (as in a sustained conscious experience), and thus the perceptual state space is dynamic.

Perceptions are the integration of awareness with experiential information

Forged by combining the information of sensation with interpretative mental constructs

A stream of consciousness is a linked set of perceived experiences
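
These definitions can be written compactly (a formal sketch; the notation is supplied here for illustration): a state is an n-tuple $s = (v_1, \ldots, v_n)$ of parameter values, and the corresponding state space is the set of all such tuples, $S = V_1 \times \cdots \times V_n$. A sustained conscious experience (a stream of consciousness) is then a time-indexed trajectory $\psi : T \to \Psi$ through the perceptual state space $\Psi$, and the modulation discussed below selects a subspace $\Psi' \subset \Psi$ within which the next segment of the trajectory will lie.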

Core concepts

Mind happens

Mental constructs comprise the contents of consciousness

Mental constructs are forged by integrating information of experience and sensation

A neuro-computational matrix is responsible for the actual function of forging mental constructs

A notational system has been derived which represents the flow of information between the environment and the neurally mediated experience of consciousness

This formal descriptive notational system will enable the creation of “most probable maps” of information flow between humans and their environment

Perceptual states

A perceptual state space is the set of all experienceable states of conscious awareness

Perceptual dimensions are experienced with varying levels of awareness

A perceptual state is a unique momentary experience of conscious awareness

A perceptual state may be comprised of several experiential dimensions

An experiential dimension is a specific, undecomposable quality

A given mental condition can bias the occurrence probabilities of emerging perceptions

Emerging perceptions can be dynamically influenced thus Influencing the quality of the experienced states

Perceptual state space modulation is the intentional act of influencing emerging perceptual states such that the perceptual state space which will contain the next series of perceptual states is a specific subspace

The anthroscopic neurophysiologic info matrix supports/provides the basic infrastructure for continuous intentional/willful interaction with the environment

Biologically mediated information exchange couples perception to the environment

The physio-info-metrics of the neural info matrix determine the through-flux

Observations

Information is distinct from the energy which conveys it

Consciousness occurs at the anthroscopic scale

The nervous system is the primary tissue for sustaining consciousness

The nervous system couples the phenomena of experience to the environment

Humans and computers exchange information

The exchange of that information can be abstracted

The flow of information can be parameterized by temporal, spatial, and ergo-dyno-morphic flux

Any state of information is characterized by a specific state-space parameter set.

Assertion

The nervous system's capacity to transduce, transmit, characterize, experience, and respond to information about environmental conditions limits the knowability of the environment

Theoretical construct

The development of a descriptive mathematical model that can map the transformation of information as it is exchanged between various components of the interactive human-computer interface system.

Develop a model which best generalizes the phenomenological aspects

A notational system which exploits interdisciplinary interaction

A languaging system which can classify emerging observations

Expression space E → E

Core symbols

Psi – mind

Phx – life

Phi – physics

Khi – synthetic

Sky – rendering

Sns – transductive sensing

Med – healing intent

Edu – knowledge transfer

Com – directed communication

Rec – recreational enrichment

Grok – to intentionally seek to comprehend at a profound level

Quantifiable information flux capacity of a physiologic system

Physio info metrics --- the quantitative measure of the information-carrying capacity of a physiologic system

The fundamental nature of the nervous system (neuro info matrix) determines its operational capacity

Physiologically mediated information exchange between external environment and experiential awareness

Both the physicality and the physiology contribute to the set of bio-physical restraints

Defining the research areas and methodology required to gain necessary knowledge in the following areas:

neuro-physiologic restraints and limits of the computer to human linkages

psycho-physiologic capacity for optimal cognitive function within an extended perceptual environment

psycho-motor function for simultaneous interaction with multiple human to computer devices

Demonstration of the functional integration of relevant human-to-computer input devices into the system.

Demonstration of various multisensory rendering systems integrated into the system.

Demonstration of an integrated system of human to computer input devices with the multisensory rendering systems.

Demonstration of an interactive, experiential environment optimized for intelligent interaction with information.

Anthrotronics – human instrumentation systems

Anthrotronic systems designed for interactive information exchange continue to evolve

Applying first principles of physio info metrics facilitates design innovation for operational refinement of the evolving interface system

The development of hardware and software systems based on principles derived from the reference architecture, and implementation of such systems in real world settings

The combination of applied physio info metric principles with an operational notational system creates a research tool capable of mapping the time evolution of information propagation through a perceptual cybernetic system

Focus will be to develop a more generalized capacity to address the interface issues of human-computer interaction requirements

Physiologically Oriented Interface Design: the next paradigm of human-computer interface will optimize the technology to the physiology -- a biologically responsive interactive interface. This paradigm of interface technology is based on new theories of human-computer interaction which are physiologically and cognitively oriented.

Research in human sensory physiology, specifically sensory transduction mechanisms, shows us that there are designs in our nervous systems optimized for feature extraction of spatially rendered data, temporally rendered data, and textures. We will develop these interface techniques and technologies consistent with the basic neuroscience issues of modality, duration, intensity, distribution, frequency, spatial displacement, contrast, inhibition, threshold, adaptation, transduction, conductance, and transmission (to name a few).

The ability to achieve the integration of a set of advanced human-to-computer input devices into a single interface system, and to demonstrate data fusion enabling meaningful correlations across the various input modalities, will significantly enhance progress toward this end.

Viewing the entire body as a perceptual and expressional technology opens up possibilities for exploiting the heretofore untapped richness and greater volumetric potential of its informatic capacities. Hence, we propose to develop an interactive environment incorporating new ways to render complex information to the user by optimizing the interface system to match the human nervous system’s ability to transduce, transmit, and render to consciousness the necessary information.

Efforts have been made to research, prototype, and demonstrate the implementation of a data analysis subsystem designed to enhance the ways that relevant data may be rendered optimally to the operator's sensory modalities, utilizing such techniques as linear and nonlinear multivariate analysis tools for the processing of multiple data sets in a variety of ways, including graphical analysis (phase portraits, compressed arrays, recurrence maps, etc.) and sound editing (mixing, filtering).
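
As one concrete instance, a recurrence map of the kind listed above can be computed in a few lines (Python with NumPy; a minimal sketch, with an arbitrary threshold):

import numpy as np

def recurrence_matrix(x, threshold=0.1):
    # Binary recurrence map of a 1-D time series:
    # R[i, j] = 1 where samples i and j lie within `threshold` of each other.
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # all pairwise distances
    return (dist < threshold).astype(int)

The resulting matrix can be rendered as an image, or its row densities mapped to sound or tactile intensity, consistent with the multi-modal rendering goals above.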

We have designed the interface so as to optimize the salience and content of data sets. Some data which conventionally would be displayed visually might be processed so as to be perceived (in the suit) in a tactile or auditory manner.

The synesthetic mappings possible through this technology make it possible to feel the sound, see the pressure, and ultimately to reconfigure the rendering parameters of the interface based on the specific elements of a situation. Seeing colors may be more appropriate in one context, whereas hearing them may be more suitable in another; many factors will determine the tailoring of rendered data: which data will be shunted to which renderer? Novel interface controllers are essential here.
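
A toy sketch of such shunting (Python; the contexts, stream names, and routing table are invented for illustration):

# Choose a sensory renderer for each data stream according to context.
ROUTING = {
    "monitoring": {"pressure": "tactile", "alarm": "auditory"},
    "analysis":   {"pressure": "visual",  "alarm": "visual"},
}

def route(context, stream_name, value, renderers):
    # Shunt one data value to the renderer selected for this context.
    modality = ROUTING[context][stream_name]
    renderers[modality](value)

Here renderers would be a dictionary of callables (for example {"tactile": vibrate, "auditory": beep, "visual": plot}), so reconfiguring the interface amounts to editing the routing table.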

Using this reference architecture we can demonstrate interactive environments that combine new ways to render complex information with advanced computer to human input devices, such that it renders content specific information onto multiple human sensory systems giving a sustained perceptual effect, while monitoring human response in the form of physiometric gestures, speech, eye movements, and various other inputs, as well as providing for the measurement of same.

We assessed the limits of usability of traditional input devices such as the mouse, joystick, and keyboard to determine when interface complexity precludes their use as primary inputs.

We researched and evaluated the user's ability to integrate several input systems so as to have a multiplicity of simultaneous interaction options.

We researched the capacity for filtering and combining data streams from the various human-to-computer input devices.

We researched the capacity for developing user-defined gestures for controlling various parameters of the interface system.

We researched the functional integration of relevant human-to-computer input devices into the system.

We researched various multisensory rendering systems integrated into the system.

We researched an integrated system of human-to-computer input devices with the multisensory rendering systems.

We researched an interactive, experiential environment optimized for intelligent interaction with information.

This increased throughput is achieved by extending the perceptual dimensionality of information presented to the human and enhancing the expressional capacity of the human to convey intent to the informatic system.

-Quantitative human performance assessment tools for clinical, educational, and vocational applications have been developed and refined

-Interactive systems for the disabled, which empower them to participate more actively in their own environment, have been developed

 


 

ASHLEY

Ashley was injured in a birth accident; she is a C1 quadriplegic with some brain stem involvement [pictures]. She can move her head, and she is ventilator dependent. When we first started hanging out with Ashley, her way of interacting with the world was with a head stick (which she was good at, and proud of). Her grandmother wanted us to experiment, to see if we could identify some other avenues for Ashley to start interacting with computers and other things.

One of the first things we did was to take something she already knew (the stick) and adapt it to an existing interface (a pen mouse). Very quickly, by understanding the technology, we were able to turn what Ashley could actually do into a way for her to interact with the computer. But that offered only limited function.

What we really wanted to do with Ashley was to start exploring the BioMuse, at that time, to control the computer. These are some photographs from when Ken Kashiwahara of ABC's American Agenda came out, and we showed that we were able to plug in Ashley (only the second time she had ever even seen this technology) and, using her facial muscles, she was able to navigate around in a virtual environment: direct muscle (bioelectric) control in a virtual environment. That was done by adapting, over time, a series of interfaces to her face which picked off very specific muscles and used them as independent data channels. The idea was that, since Ashley had control of her face, we would use her facial muscles as her "fingers", to give her differential control into the computer.

So here is the system [photo]. Looking back, we had to build software, called Neat DOS, to have different muscle signals cause different outputs (an early form of gesture recognition). Then some of the engineers I was working with built a remote-control car that had a camera (a Radio Shack remote-control car and a Radio Shack transmitter), and Ashley, with a set of VR glasses and a TV showing what the camera was seeing, was able to drive this car out onto her back porch and start playing with her niece and nephew. This car became an extension of Ashley's intentional actions in the world. And this is where we got into biocontrolled telepresence, where a direct interface from muscle activity is used to control a telerobot interacting in a fairly complex environment (i.e., with her niece and nephew). It was very interesting to watch her play: she would chase them around and run into them, and you could really tell by looking at the car that she understood the paradigm of what was going on.

Another extension for Ashley: at the time we had access to a surgical robot, used in laparoscopic surgery for positioning the camera. We bypassed its interfaces and created a system in which Ashley, with the facial muscle control she had been refining, could direct the robot. We put a paint brush at the end of the surgical robot, gave her some paint, and allowed her to use her face to drive the surgical robot to create art: a biocontrolled telerobotic arm capability.

Ashley was a good test case, as she had a great family environment and was personable, excited, and easy to work with.

We extended that one step further, to the helicopter. We took a Virtual IO pair of glasses, coupled with Ashley's ability to move her head left, right, up, and down, and there was a camera controller on a helicopter: when she moved her head, the camera position moved while the helicopter was flying. So Ashley was one of the first biocybernauts to control an unmanned aerial vehicle payload controller, which we are currently under contract with DARPA to develop for the military.

So the idea was that, with Ashley and the concept of the flow of information, we were able to take her from where she was, look at her brain function, input information to her, allow her to process it, and then output something into the world. Because of the way we were presenting information and the way we were able to acquire information, Ashley became one of our test cases for the model of perceptual psybernetics.

The main focus of the case examples will be to show the utility of this reference architecture, and especially to show the capacity for enhanced perceptibility and expressibility that may be achieved through an intelligent application of this physiologically robust interface systems model.

The reference architecture presented has the necessary features to map the flow of information in an interactive human-computer interface system

The reference architecture has the necessary complexity to be able to account for the physiological issues in an interactive human-computer interface system

The utility of the reference architecture has been demonstrated in a wide variety of applications

The flow of information in human-computer interaction can be mapped

The extensibility of the reference architecture is sufficient to be able to map the information flux

The flow of information, and its temporal, spatial, ergo-dyno-morphic flux, can be parsed into three fundamental prime state-spaces

Directly experienced

Biologically mediated

Externally apparent

Diagrammatic info – flow

An ordering of the flow of information between the apparent principal subspaces can be represented in a way which is consistent with observations

Information state-spaces and their subspaces are defined by the state-space-specific / subspace-specific modalities of information flow

Any state of information can be represented as a point in a complex hyper-dimensional state-space

Expressional formulation

Derivational symbology

Notational examples

Established existence proof of utility of concept and implementation

Developed experimental systems of instrumentation and software

Designed an operational method of application assessment and iterative refinement for case-specific interface systems

What is meant by the term “interactive”

A system which iteratively responds in a way which is influenced both by the current state of the system and by the current input to it

We needed to develop a tool to link emergent hardware to emergent software

Software ---

QSI-AVS-

Link-modeler

Glove talker

The development of hardware and software systems based on principles derived from the reference architecture.

Neat software, in its four generations:

Neat Software

Introduction -- incl. overall design philosophy

Neat DOS -- first generation

JOJO

BEC

NEAT

Neat Java – transitional

NPAC-JAVA-

JOJO

GUO

Neat Win – transitional

Really neat

JOJO

NeatTools -- fourth generation

COM

EJ

Mouse

representative ntl files and application areas

and similarly one on hardware:

NeatTools

What is NeatTools?

NeatTools is a powerful visual programming environment that allows users with disabilities to control and communicate with a computer. It operates in conjunction with hardware devices created specially for this purpose. A disabled individual generates some deliberate movement under her control -- perhaps moving a cheek muscle; this movement is then detected by the device, which transmits signals conveying the information to the computer. NeatTools then translates this information into some form that the computer can interpret, and generates some meaningful output -- perhaps a mouse click, or a cursor move. The software thus allows disabled individuals to use whatever physical capabilities are available to them in order to interact with a computer, to improve their quality of life. NeatTools can permit quadriplegics to type, draw, or play games; to use the World Wide Web and e-mail; to control devices in their environment such as lights, stereo, or TV; and more generally, to interact with others. Capabilities that the able-bodied take for granted are made available to the disabled through this sophisticated computer program. Individuals who have previously had to depend on others to do nearly everything for them can gain some control and independence with the help of NeatTools.
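
A sketch of that translation step (Python; the channel assignments, calibration ranges, gain, and deadzone are assumptions of this illustration rather than an actual NeatTools network):

def to_cursor_delta(ch_x, ch_y, cal, gain=10.0, deadzone=0.1):
    # Map two calibrated sensor channels (say, left/right and up/down
    # cheek motion) to a relative mouse movement (dx, dy).
    # `cal` holds a (min, max) pair per channel from a calibration pass,
    # with max > min guaranteed by that pass.
    def norm(v, lo, hi):
        c = (v - lo) / (hi - lo) * 2.0 - 1.0    # rescale to [-1, 1]
        return 0.0 if abs(c) < deadzone else c  # suppress jitter at rest
    dx = gain * norm(ch_x, *cal["x"])
    dy = gain * norm(ch_y, *cal["y"])
    return dx, dy

The deadzone matters in practice: a user at rest still produces small signal fluctuations, and without it the cursor would drift.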

Who Developed NeatTools, and When? 

The story of NeatTools goes back to 1990. At Loma Linda University in California, Dr. Doug Will led a Neurology Research Group, which Dave Warner joined soon after becoming an MD/Ph.D. student there in 1988. The group researched ways that a human/computer combination might offer valuable diagnostic information, or might explain more about how the human brain functions. The group identified and experimented with the newest developments in technology for this project. In 1990, they acquired a special glove with optical fibers in the lycra fabric, which they used to investigate the effects and treatments of neurological diseases. A corporation called VPL Research had created the glove, called the DataGlove, and donated the $10,000 device to the lab for exploration of its medical potential. Dr. Will's lab determined that the glove could detect small changes in hand or finger position; it could thus measure hand function in patients with Parkinson's disease or other motor disorders. The glove could also help physicians to diagnose whether a patient had Parkinson's or whether that patient suffered from a similar disorder, such as essential tremor. Through its precise quantitative feedback, the glove could also aid physicians in understanding the effects of medication on the functioning of Parkinson's patients. And it could be used in rehabilitation, to help patients develop skills in particular motions.

For such a glove to communicate with a computer, an intermediary device is needed to translate the glove's electrical signals into signals that a computer can understand. The lab used the glove in conjunction with a workstation called BioMuse. This instrument had just been developed by a Palo Alto company named BioControl Systems; though the cost was approximately $20,000, the Neurological Research Group was one of four locations given free use of the equipment for medical and humanitarian purposes. BioMuse used electromyography (EMG) sensors to detect voltage differences that arise when muscles are flexed. It then transmitted these voltage measurements to a computer, and the software program converted the voltage differences into music. BioMuse was one of the first devices to allow such bidirectional feedback between a patient and a computer.

At the end of 1991, the Neurological Research Group was approached to see if they could help a hospitalized patient -- a baby named Crystal Earwood. At the time, Crystal was 18 months old; she had been paralyzed from the neck down in a car accident. She needed stimulation, and she needed some way to interact with the outside world. Dave Warner, still a medical student and a member of the Neurological Research Group, took up the challenge of finding out what the technology could do for this patient.

Dave found that 18-month-old Crystal could control a computer by moving her eyes. BioMuse could transmit the voltage differences from her eye motions to the computer. Her eyes became the equivalent of hands, transmitting commands to the computer. Instead of generating music, the BioMuse system was modified by the lab to give a graphical output. Crystal could then interact with displays on a computer screen, effectively demonstrating the potential value of biocybernetic technology for the severely disabled.

But despite all the promise, one serious hurdle remained. Since BioMuse cost around $20,000 and the lab had only one unit to use for research and development, the equipment could not be left with Crystal, or with any of the 30 subsequent patients it was used with. For most of them, the BioMuse technology proved highly effective. For some, it greatly improved opportunities for communication and interaction. For others, the experience of generating music significantly increased motivation to exercise weak limbs. But the high cost limited BioMuse's accessibility to the disabled patients who most needed it. Two patients were able to afford to buy their own equipment; the others were not so fortunate.

The Benefits of Publicity

After all of the technological success and potential of these 1990 and 1991 efforts, 1992 brought some new difficulties. Dr. Will, who headed the Neurological Research Group at Loma Linda, was appointed as Dean, and the research group disintegrated. Space became an issue, as well as personnel, but eventually some space was made available in a new building, the Outpatient Rehabilitation Center. Dave Warner, still a medical student, was in charge of the new area, which was called the Human Performance Institute. Here Dave and his new group tested a sound chair created in Finland by a company called Next Wave. The chair helped in relaxing spastic muscle groups by low-frequency sonic stimulation. Dave made a point of giving frequent lectures on this project, as well as on the work with BioMuse -- at conferences, at universities, and at hospitals. The work was publicized on TV programs and in other media. In a process that was to be repeated frequently, the TV coverage attracted the attention of a highly talented individual, a programmer named Jo Johansen, who approached the group at a conference in 1994 and volunteered to work without pay on software for controlling the chair. Three engineers from Walla Walla University in Washington similarly responded to a talk by Dave Warner; the presentation moved them to work on altering a Radio Shack remote-control car so it could be controlled by a disabled individual through the BioMuse system. Jo Johansen then altered the chair software so it would take in signals from BioMuse, and send commands through the parallel port of a computer to control the car. Disabled kids could have fun doing this, and rehabilitation patients could use the technology to exercise arm muscles in therapy sessions. Still, these useful capabilities depended on the expensive BioMuse as a middleman between the sensors on patients and the computer. And most individuals could not afford the machine.

Making Affordable Technology

The Needed Hardware

The first breakthrough in this impasse came in 1995, when Salomo Murtonen, the Finnish inventor of the Sound Chair, came to America to volunteer for the project. A self-taught electronics engineer, Salomo committed himself to creating the equivalent of the BioMuse device at low cost. At first, he worked for next to nothing, since the group had no substantial source of funds to pay its volunteers. Salomo created a four-channel EMG interface that could take any signal derived from muscle movements into the computer. The device was named TNG-1 (Thing 1, from The Cat in the Hat by Dr. Seuss); TNG is short for Totally Neat Gadget. Salomo produced TNG-1 with Radio Shack parts for a cost of $200, far less than the cost of BioMuse.

Creating the Software

In 1994, the group had begun to work with a seven-year-old girl named Ashley Hughes. As a result of a broken neck during birth, Ashley is a C1 quadriplegic, paralyzed from the neck down and dependent on a respirator for breathing. She could move facial muscles, and TNG-1 could transmit the EMG signals from her facial movements to her 286 computer. But then software was needed to make those signals meaningful to the computer, and to display them on the screen. Jo Johansen wrote a program called BioEnvironmental Control (BEC) to make the EMG gesture signals usable by the computer, and to convert them to graphical outputs. BEC was designed for Ashley's facial capabilities; it allowed her to express herself in rich and complex ways, using her body as a way to control a computer. With these powerful technologies, Ashley played computer games, drove a remote-controlled car, experienced her world remotely through a camera and microphone mounted on a styrofoam structure named Cindy Cyberspace, and interacted with others in her environment. Now the group had created both affordable hardware and software. But though TNG-1 was inexpensive, it was not free of problems. For one thing, the electrodes that detected the muscle movement were not stable; and setting up TNG-1 was not easy for the families -- it could not just be left with them. Further development was needed to make the technology more stable and easier for family members to use.

Housing Volunteers

Besides Jo Johansen and Salomo Murtonen, a host of other volunteers committed themselves to this project of developing state-of-the-art software and hardware to improve conditions of life for the disabled. At least 20 individuals joined the project in California, including a bright physicist named Markus Schmidt from Germany, who read about the effort in a German article. Some volunteers had solid professional credentials. Others were young students in high school or college. Some had college degrees, but no career plans. They either read about the project, learned about it on TV, or encountered it at a conference or a talk; in some cases, their parents found out about it and steered them to join. Until the summer of 1993, they could not be paid. Beginning that summer, Dave Warner began to rent a group house where the volunteers could live and work and interact. And slowly, as monies were available, he began to pay some salary and expenses. When Dave completed his medical degree in 1995, he accepted a Nason Postdoctoral Fellowship at Syracuse University, and many of the volunteers came with him. The tradition of the group house, where volunteers live rent free and work together, has continued in Syracuse. In both California and Syracuse, the group houses were given the same name: Center for Really Neat Research. Both have operated as development and demonstration lab environments. They have applied the newest developments in technology to show the viability of new concepts to help the disabled. Typically, the new commercial technologies are too expensive for use with the disabled population. The lab, once having tested that a concept could work, then marshals its efforts to create a powerful inexpensive version. But the lab does not serve as a clinical facility or as a marketing facility. The group shows the viability of a new technology by testing it with some disabled patients. They then publicize the development at conferences or through talks; with grant funding, they have formed partnerships with selected national facilities that use these technologies in clinical situations with large numbers of patients.

Making the Technology Easier to Use

With the development of the TNG-1 interface device and the BEC software in California, the capability to interact with a computer was now affordable for the disabled. But TNG-1 depended on the use of EMG sensors attached to the faces of quadriplegic patients, where they had some muscle control. Whether they were mounted on cheeks or foreheads, the EMG sensors would not work for very long. They didn't stick to the face well, coming loose easily with repeated movement. To get around the problem posed by dependence on EMG sensors, Salomo went to work to create a more flexible interface device that could receive a variety of types of sensory signals. TNG-2 could detect changes in light signals resulting from cheek motions, since such a motion would distort the light path to a light receptor, and thus create a signal. The lights did not have to be attached to the patient's skin, as did the EMG sensors; the light sensors could be mounted on a hat, a helmet, or most recently, a pair of glasses. But light was only one possible type of signal detectable by TNG-2, which was constructed to receive up to four general-purpose analog inputs. The use of light signals had advantages over the EMG approach, but raised new problems. Because photocells change their signal levels when the brightness level changes in a room, the use of light signals required frequent calibration, often beyond the capacities of a disabled individual's family. The group then experimented with signal sources other than light -- by using Hall effect transducers to detect changes in magnetic field, and by using pressure transducers. Other approaches include using bend sensors or tilt sensors. For Ashley Hughes, for instance, the group had been able to use a tilt sensor on her head along with a pointing stick attached to her head. With this combination, she could use a screen keyboard to type.
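
The calibration problem invites an adaptive remedy. The following sketch (Python; the decay constant is an assumption) tracks a running minimum and maximum so that the normalized output re-centers itself as ambient light drifts:

class AutoCalibrator:
    # Normalize a raw sensor value to [0, 1] against a running min/max.
    # The bounds expand instantly to include new extremes and contract
    # slowly toward the current reading, so the working range follows
    # gradual changes in room brightness without manual recalibration.
    def __init__(self, decay=0.001):
        self.lo = None
        self.hi = None
        self.decay = decay

    def update(self, raw):
        raw = float(raw)
        if self.lo is None:
            self.lo = self.hi = raw
        self.lo = min(self.lo + self.decay * (raw - self.lo), raw)
        self.hi = max(self.hi + self.decay * (raw - self.hi), raw)
        span = self.hi - self.lo
        return 0.0 if span == 0 else (raw - self.lo) / span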

New and Better TNGs

TNG-1 and TNG-2 could each convey four channels of information to NeatTools. In real terms, this meant that the system was limited to information from, say, four muscles, or from four light detectors. Or the user would have to depend on two TNG devices at once -- thus requiring two available ports on the computer. With the creation of TNG-3, 16 channels became available -- 8 analog and 8 digital. As of January 2000, the group is now testing a working prototype of TNG-4, which has eight analog and 16 to 20 digital lines, each of which can serve as input or output. This increased capability will mean that the parallel port can be left alone for other functions, such as connecting a printer or a zip drive. TNG-3 and TNG-4 have been developed by Edward Lipson and Paul Gelling at Syracuse University.
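
To make the channel counts concrete, here is a hypothetical parser for a TNG-3-style frame (Python; the layout of one sync byte, eight analog bytes, and one packed digital byte is invented for illustration and is not the documented TNG-3 protocol):

SYNC = 0x55  # hypothetical frame-start marker

def parse_frame(frame):
    # Split an assumed 10-byte frame into 8 analog channel values
    # (0-255) and 8 digital bits. The framing is illustrative only.
    if len(frame) != 10 or frame[0] != SYNC:
        raise ValueError("bad frame")
    analog = list(frame[1:9])
    digital = [(frame[9] >> i) & 1 for i in range(8)]
    return analog, digital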

Neat becomes Really Neat becomes NeatTools

Similarly, as computer technology has advanced, the group has been working on creating new iterations of the software, to take advantage of the capabilities that the newest developments in technology allow. In 1993, the BEC software was renamed Neat; it was written for the MS-DOS operating system, then common for PCs. In 1996, a version of this software was created for the Windows 95 PC environment; and in late 1996, work began on another version based on a Java-like environment. This is the current version, called NeatTools. It is suitable for virtually any computer system: it can run on Windows 95, 98, or NT; it can run on Unix, Irix, or Linux; and it will be able to run on Macs upon the availability of a multithreaded operating system. It can interface with a much broader range of devices. Its capabilities include both Internet connectivity and multimedia sound. A user can simultaneously develop, edit, and execute programs in NeatTools.

The original DOS version of Neat and the first Windows version were created by Jo Johansen in California. But there were many limitations to DOS, which affected the capability of Neat. For instance, DOS does not support TCP/IP, so the DOS version was limited in connectivity. DOS does not support multiple processes occurring at once, so the DOS Neat program was slow. DOS does not support multimedia. NeatTools was created by a Computer Science Ph.D. student at Syracuse named Yuh-Jye Chang; this program formed the basis for his Ph.D. dissertation. Yuh-Jye defended his dissertation work in 1999, and is now at Bell Labs in New Jersey. The group has been especially fortunate in having both programmers. Before taking on the NeatTools project, Yuh-Jye won the 1996 Java Cup International Award for an earlier Java-based graduate project -- the Visible Human Viewer program. One of the major breakthroughs in Yuh-Jye's creation of NeatTools was the development of a Windows mouse driver, which allows a disabled individual to move a cursor in any direction with cheek or eye motions. This complex program allows a user to move the mouse and to simulate a mouse click. Before this, the group had to get hold of the source code for each program a disabled user would employ, and rewrite the code so the disabled individual could work with the program. With commercial software applications like Microsoft Word, there is no way to acquire the source code, so it was quite a boon to have the general capability, in any Windows-type computer program, to simulate mouse events. A subsequent program created by Edward Lipson allows the user to move the cursor via a custom joystick mechanism. The software allows for fine calibrations, since different quadriplegic users have differing kinds and ranges of motion, as well as differing facial shapes and sizes. The NeatTools software and some related application programs can be downloaded at no cost from http://pulsar.org/NT/index.html.

The basic conception and architecture for the software -- from the original DOS version to the latest NeatTools version -- was laid out by Dave Warner. The software had to be adaptable to the widest possible range of industry or in-house devices, and to allow as many input channels as possible for users of limited physical capabilities. To allow maximum flexibility and modification for the needs of individual disabled users, the software is based on modules. NeatTools now consists of approximately 200 different modules, not counting the alphanumeric characters, and it is relatively easy to add modules. The program offers visual programming, with a highly user-friendly graphical interface. A user can create a simple NeatTools data-flow network by dragging a few modules to the desktop and connecting them. Typically, a module has inputs on the left, outputs on the right, and control inputs on top. A user can modify the properties of a module with a right mouse click. There are keyboard modules, modules for serial and parallel ports, modules for Internet sockets, calibrator modules, graphical display modules, modules for arithmetic and logic operations, multimedia sound modules, etc.
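
The module-and-connection scheme can be sketched as a minimal dataflow kernel (Python; greatly simplified relative to the real NeatTools engine, and the class names are this sketch's own):

class Module:
    # A node with named inputs and outputs. Writing to an input
    # recomputes the node and pushes results along its connections.
    def __init__(self):
        self.inputs = {}
        self.wires = []  # (output name, target module, target input)

    def connect(self, out_name, target, target_input):
        self.wires.append((out_name, target, target_input))

    def set_input(self, name, value):
        self.inputs[name] = value
        for out_name, val in self.compute().items():
            for w_out, target, t_in in self.wires:
                if w_out == out_name:
                    target.set_input(t_in, val)

    def compute(self):
        return {}  # overridden by concrete modules

class Add(Module):
    # A two-input arithmetic module, analogous to the arithmetic and
    # logic modules described above.
    def compute(self):
        return {"sum": self.inputs.get("a", 0) + self.inputs.get("b", 0)}

Connecting an Add module's "sum" output to another module's input and then calling set_input propagates values through the network, in the same spirit as dragging and wiring modules on the NeatTools desktop.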

Because he was aiming for speed, a high level of performance, and platform flexibility, Yuh-Jye chose to use the C++ programming language instead of Java. When he began in 1997, the Java technology was not mature, and even today it is not fast enough for intensive real-time tasks such as compression or decompression of video data. NeatTools consists of over 50,000 lines of C++ code. The program adopts the advantages of Java -- especially its ability to work on different types of computers -- by utilizing a Java-like cross-platform API (Application Programming Interface): a very thin layer which provides an interface to the user's computer operating system and to the C++ core of NeatTools. The API unhooks an application, such as Netscape or Word, from dependency on the computer's operating system or on Windows. Thus the architecture of NeatTools consists of three layers: (1) the computer's operating system and the C++ programming language; (2) the Java-like cross-platform API; and (3) the NeatTools application layer. This combination allows NeatTools to combine the best of both worlds: the speed and efficiency allowed by the high performance of C++, along with Java's ability to run on different platforms. The thin layer of the API also helped provide the ability to synchronize many operations at once; such functionality would have been much more difficult to create in C++ alone. NeatTools provides the advantages of Java without using Java.

Hardware:

TNGs and widgits

Intro-- design philosophy; microcontroller technology; ...

TNG-1 -- EMG

TNG-2 -- 4 analog input channels

TNG-3-- 16 input channels (first production version)

TNG-4 -- ~32 channels bi-directional

Widgits -- sensors and transducers

Representative Applications

 


Universal Interfacing System for Interactive Technologies in Telemedicine, Disabilities, Rehabilitation, and Education

Edward Lipson (1), David Warner (2), and Yuh-Jye Chang (3)

(1) Department of Physics and (1-3) Northeast Parallel Architectures Center, Syracuse University, Syracuse, NY 13244; and (1,2) MindTel LLC, 2-212 Center for Science and Technology, 111 College Place, Syracuse, NY 13244

A modular hardware and software system for human-computer interaction is described that allows for flexible, affordable interfacing of people, computers, and instruments. The approach is illustrated with an application in the disabilities area. Other application areas are outlined. 

  1. Introduction

Emerging methods for human-computer interaction [HCI; 1] offer revolutionary opportunities to advance healthcare and quality of life, particularly as the power, functionality, and affordability of computers continue to soar. In particular, the advent of wearable computers calls for new types of interfaces, since the users are typically not desk-bound. Further, for people with disabilities who are unable to use a keyboard and/or mouse, the need for alternative interfaces is compelling. Clinical environments can enjoy improved efficiencies and outcomes as new ways evolve to interface patients, caregivers, and instruments to computers and networks.

Our group has been developing powerful, low-cost technologies combining modular software and hardware that accommodate expressional gestures and perceptual modalities as essential parts of the interface. These systems allow for adaptive rapid prototyping in which practically any input to the computer can be mapped to appropriate actions and outputs.

  2. Methods

The NeatTools visual-programming environment allows rapid prototyping and implementation of HCI and other dataflow applications, in conjunction with custom sensors, mounting hardware, computer interface boxes (TNGs), and clinical/scientific instruments.

    2.1. NeatTools Software

NeatTools constitutes a visual-programming and runtime environment that produces fine-grain dataflow networks for data acquisition and processing, gesture recognition, external device control, virtual world control, remote collaboration, and perceptual modulation. The design goals of NeatTools have been to make it simple, object-oriented, network-ready, robust, secure, architecture neutral, portable, high-performance, multithreaded, and dynamic. The program and representative applications are downloadable from http://www.pulsar.org/. NeatTools can readily accommodate custom interface devices, or commercial devices including clinical instruments. Figure 1 shows two simple NeatTools programs. For a full-fledged application program, see the section below on the JoyMouse Network.

NeatTools is written in C++ but built on top of a thin-layer, Java-like, cross-platform C++ application programming interface (API), which operates presently on Windows 95/NT, Unix (Sun), Irix (SGI), and Linux. In due course, Macintosh will be supported, once its multitasking, multithreaded operating system is released (note that NeatTools can run provisionally on a Mac-based PC simulator, such as Connectix Virtual PC™), along with appropriate C++ development tools.

Currently, NeatTools includes serial, parallel, and joystick port interfaces; multimedia sound; MIDI (Musical Instrument Digital Interface) controls; recording and playback; Internet connectivity (sockets, telephony, etc.); various display modalities, including ones for time signals; time generation functions; mathematical and logic functions (including a state machine module); character generation; and a visual relational database system including multimedia functionality. Keyboard and mouse events can be received or generated via Keyboard and Mouse modules. This allows, among other things, the user to control a graphical user interface by alternative input devices that in effect simulate keyboard and mouse events. Data types in NeatTools include integer, real, string, block, byte array, MIDI event, and audio or video streams. NeatTools allows the visual programmer to package a dataflow network inside a container module that constitutes a reusable "complex module" with a simple overt appearance. This procedure can be iterated to accommodate several layers of hidden complexity.

NeatTools modules provide multithreaded, real-time support. Editing and execution are active concurrently, without need for compilation steps. This generally accelerates system design, and facilitates rapid prototyping and debugging. To construct a dataflow network, the user drags and drops modules (objects) from toolboxes to the desktop and then interconnects them with input/output and control/parametric lines. Properties of the desktop and many of the modules are set via a right-mouse-click. In this way, users are in effect developing elaborate interface programs without having to know C++ or the fundamental structure of NeatTools, or indeed having to write any textual program code at all. On the other hand, the system is open, so that experienced programmers can develop external modules by following instructions in an online developer’s kit. External modules can be loaded into the system at runtime, or arranged to preload automatically. The NeatTools executable development program, while massive in terms of source code (~40,000 lines of C++), is compact; the downloadable compressed archive file is about 600 kilobytes in size, so it easily fits on a diskette along with a compressed archive (under 100 kilobytes) of representative "*.ntl" files.
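
As a rough illustration of how runtime-loadable external modules are commonly implemented on Windows, the sketch below loads a plug-in DLL and calls an exported factory function to obtain a module instance. The factory name "createModule" and the reuse of the hypothetical Module interface from the earlier sketch are assumptions; the conventions in the actual online developer's kit may differ.

    #include <windows.h>

    class Module;  // the module interface from the earlier sketch

    typedef Module* (*ModuleFactory)();

    // Load a plug-in DLL at runtime and instantiate the module it exports.
    Module* loadExternalModule(const char* dllPath) {
        HMODULE lib = LoadLibraryA(dllPath);   // dynamic load at runtime
        if (!lib) return nullptr;
        ModuleFactory create = reinterpret_cast<ModuleFactory>(
            GetProcAddress(lib, "createModule"));
        return create ? create() : nullptr;    // build the new module
    }

Because loading happens at runtime, experienced programmers can extend the system without recompiling NeatTools itself, which is what makes the open-module architecture practical.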

    2.2. Interface Devices

The system hardware consists of mounting components, sensors, serial interface boxes, a computer, and optional output interfaces and devices. Our current electronic interface module (TNG-3; www.mindtel.com/mindtel/anywear.html) accommodates up to 8 analog and 8 digital (switch) sensors and streams the data at 19,200 bits per second to the serial port of a computer. Connections are made via standard stereo and mono plugs. The heart of TNG-3 is a programmable microcontroller integrated circuit [2], a type of computer-on-a-chip commonly used in industrial and office automation, and in automotive, communication, and consumer electronics under the general rubric of embedded control systems. The microcontroller in TNG-3 is programmed in assembly language for optimal performance. TNG-3 requires no batteries or wall transformer, as it derives 5-volt power for the onboard circuitry and sensors (requiring only modest power) by exploiting some of the unused serial-port lines—a technique commonly used to power a serial mouse on a PC.
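
Since the TNG-3 packet layout is not reproduced here, the following host-side parsing sketch assumes a hypothetical frame format: one sync byte, eight analog bytes, and one byte of packed switch bits. It is meant only to show the kind of processing a serial-stream interface module performs on the computer side.

    #include <cstdint>

    struct Tng3Frame {
        uint8_t analog[8];   // one 8-bit sample per analog channel
        bool digital[8];     // one bit per switch input
    };

    // Returns true when a complete frame has been parsed from buf.
    // The sync byte 0xFF and the field order are assumptions, not the
    // documented TNG-3 protocol.
    bool parseFrame(const uint8_t* buf, int len, Tng3Frame& out) {
        if (len < 10 || buf[0] != 0xFF) return false;   // hypothetical sync
        for (int i = 0; i < 8; ++i) out.analog[i] = buf[1 + i];
        for (int b = 0; b < 8; ++b) out.digital[b] = (buf[9] >> b) & 1;
        return true;
    }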

    2.3. Sensors

Among the sensors we have used are switches, cadmium-sulfide (CdS) photocells, Hall Effect transducers (magnetic sensors), rotary and linear-displacement potentiometers, bend sensors, piezo film sensors, strain gauges, and custom electroconductive-plastic pressure sensors. Most of these sensors are inexpensive, some costing under a dollar and others only a few dollars. Certain types (Hall Effect and capacitive) require preamplifiers and/or signal-processing electronics, which increase the cost, but not unduly.

  3. Results

The most substantive technical result of our work to date is the development of the NeatTools system along with the TNG interfaces and sensors, as described above. We have begun to apply the core technologies in a number of key application areas.

    3.1. Disabilities Applications

For illustration, we describe the types of systems we developed for Eyal Sherman, a member of our team who is a brainstem quadriplegic, unable to move his head or to vocalize. He is currently a senior at Nottingham High School in Syracuse. We have enabled Eyal to precisely control mouse motion, and thereby to control graphical user interfaces such as Windows 95. Eyal and his family have achieved independence in using this system; his mother is able to set up the hardware and software routinely in a matter of minutes.

The primary interface device is a chin joystick, extracted from an inexpensive game controller, mounted to a curved support rod, which is clamped in turn to the wheelchair headrest post, thereby allowing the device to be rotated away when not in use. To allow easy mounting and adjustment of sensors near Eyal’s expressive facial regions—mainly cheeks and forehead—an industrial designer on our team, Michael Konieczny, built lightweight adjustable mounts that attach to eyeglasses. Currently we are using small switches as the expressional sensors, but we have also used Hall Effect transducers (together with tiny rare-earth magnets) and photocells to detect facial gestures.
 

 

      3.1.1. JoyMouse Network

An application program demonstrating the considerable power of NeatTools is the JoyMouse dataflow network (Fig. 2), which Eyal and other youngsters with quadriplegia have been using with good results. For details, manual, images, and downloads, see http://www.pulsar.org/neattools/edl/joymouse_docs/JoyMouseManual.html. This uses a modest fraction of the channel capacity of TNG-3 (2 of the 8 analog inputs; and currently 3 of the 8 digital inputs). The JoyMouse application uses advanced features of NeatTools including logic gates, multiplexers and demultiplexers, encoders and decoders, various timing and mathematical operations, and sockets (here in "localhost" mode so that two windows on the same platform can communicate). The network is shown here both in developer mode and in user mode, wherein editing is blocked and only essential regions of the network are visible.

Figure 2 includes a graph of the available relationships between mouse-cursor velocity and analog-joystick displacement. For all three functions, there is a dead band, or free-play zone, near the origin so that the mouse cursor is not subject to jitter when the joystick is physically at rest. The linear relation offers essentially proportional control. The nonlinear relations—quadratic (necessarily inverted for negative displacement) and cubic—offer fine control for up to about half-maximal displacement, and rapid travel with larger displacements. In most applications, the cubic function offers the best performance. Various parameters (pertaining to gain, resolution, etc.) can be set or modified using sliders while remaining in user mode.
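
A minimal sketch of these transfer functions, with illustrative parameter names and gain constant, might look as follows; the quadratic case multiplies the displacement by its absolute value so that the sign is preserved (the inversion for negative displacement noted above), and the dead band suppresses jitter at rest.

    #include <cmath>

    enum class Transfer { Linear, Quadratic, Cubic };

    // x: joystick displacement normalized to [-1, 1].
    // deadBand: free-play half-width so a resting stick produces no motion.
    double cursorVelocity(double x, double deadBand, double gain,
                          Transfer f) {
        if (std::fabs(x) < deadBand) return 0.0;   // jitter suppression
        switch (f) {
        case Transfer::Linear:                      // proportional control
            return gain * x;
        case Transfer::Quadratic:                   // sign-preserving square
            return gain * x * std::fabs(x);
        case Transfer::Cubic:                       // fine control near rest,
            return gain * x * x * x;                // rapid travel at extremes
        }
        return 0.0;
    }

The cubic curve's flatness near the origin is what gives fine control for small displacements while still allowing fast travel near full deflection, consistent with its superior performance in most applications.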

The network also accommodates input from three switches: a) a left-cheek switch for the left mouse button; b) a variable-use right-cheek switch for the right mouse button, enter key, or backspace key; and c) a forehead switch to dynamically select the action mode of the right-cheek switch. Alternatively, the switches could be replaced with analog sensors for which thresholds would be set with sliders within the JoyMouse program. Calibrator modules are included in the JoyMouse program, as in many other NeatTools programs, to adjust automatically to the signal range of the analog inputs.
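
A calibrator of the kind just described can be sketched as follows: it tracks the running minimum and maximum of an analog input and rescales each sample to a fixed output range. The class name and output range are illustrative choices, not NeatTools source.

    // Auto-calibrating rescaler for an analog input channel (sketch).
    class Calibrator {
    public:
        // Rescale a raw sample to [0, outMax], widening the observed
        // range whenever a new extreme arrives.
        int process(int raw, int outMax = 1000) {
            if (raw < min_) min_ = raw;
            if (raw > max_) max_ = raw;
            if (max_ == min_) return 0;          // no range seen yet
            return (raw - min_) * outMax / (max_ - min_);
        }
    private:
        int min_ = 1 << 30;    // start wide so first samples set the range
        int max_ = -(1 << 30);
    };

This is why a user such as Eyal does not need per-session adjustment: whatever signal range a sensor actually produces on a given day is mapped onto the full control range automatically.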

The network as shown can be minimized, once the Enable button has been activated, so that the operating system desktop becomes fully available to other application programs while the JoyMouse runs in background. An optional small satellite window (Fig. 2), a related NeatTools application, can remain visible to display the state of essential options that are under dynamic control of the user; this is made possible by using socket modules to communicate locally between the JoyMouse main window and satellite window. The user can toggle, for example, between mouse click and drag modes by a "smile" gesture (both cheek switches activated for 1 second).
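
The smile-gesture logic can be sketched as a small state machine; the one-second dwell threshold comes from the description above, while the class and method names are illustrative.

    #include <cstdint>

    // Toggle between mouse click and drag modes when both cheek switches
    // are held simultaneously for one second (sketch; names illustrative).
    class SmileToggle {
    public:
        // Call on every switch update; returns true when the mode toggles.
        bool update(bool leftCheek, bool rightCheek, int64_t nowMs) {
            bool both = leftCheek && rightCheek;
            if (both && !wasBoth_) startMs_ = nowMs;   // gesture begins
            if (both && !fired_ && nowMs - startMs_ >= 1000) {
                dragMode_ = !dragMode_;                // toggle click/drag
                fired_ = true;
                wasBoth_ = both;
                return true;
            }
            if (!both) fired_ = false;                 // re-arm on release
            wasBoth_ = both;
            return false;
        }
        bool dragMode() const { return dragMode_; }
    private:
        bool wasBoth_ = false, fired_ = false, dragMode_ = false;
        int64_t startMs_ = 0;
    };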

By using the JoyMouse in conjunction with low-cost commercial utility programs, including an onscreen keyboard (Fitaly™ from Textware Solutions, sometimes with their InstantText™ program for word/phrase prediction and abbreviation expansion), Eyal has been able to a) type text, b) generate speech, c) dial in to a server, d) invoke and use Web browsers and other application programs, e) compose and send e-mail messages, f) play video games alone or with others, g) operate remote-controlled cars, h) draw sketches, and i) participate in science experiments and data analysis at school. Other people with severe disabilities, for example children with cerebral palsy, have also used the JoyMouse system and other applications with good success.

    3.2. Education, Rehabilitation, Telemedicine, and Defense Applications
      3.2.1. Education

NeatTools has many possible applications and roles in the education arena. We have mentioned the use of NeatTools to allow students with disabilities to participate actively in science laboratory activities. More generally, NeatTools lends itself well to student projects in the classroom, laboratory, science fairs, etc. Moreover, NeatTools can be used for training and prototyping in an industrial or community-college setting. Because NeatTools can accommodate diverse external modules, the environment can be adapted to a wide range of simulation applications, notably in medicine. With the increasing use of sophisticated technology in healthcare, environments like NeatTools can be expected to play an increasing role in the practice and training of healthcare practitioners. Medical students, interns, and residents can benefit from the rapid prototyping capability and flexibility of NeatTools. While prior programming experience is clearly of benefit for those who wish to write applications in NeatTools, the system can serve, on the other hand, as a training ground for practitioners and others who want to get their feet wet in programming before learning conventional languages like C and C++. The immediacy of results in this visual programming/runtime environment, without the need for edit/compile/execute cycles, is a clear advantage.

In limited testing, we have observed that schoolchildren are often able to grasp the essentials of NeatTools programming quite rapidly. For example, at SIGGRAPH 98 in Orlando, a number of schoolchildren came to our exhibit in the sigKIDS area. Typically, after the first daytime session, they downloaded NeatTools at home the first evening, proceeded to develop applications of their own, and then returned to our site the following morning to continue their programming and obtain more advanced training. Some of the programs they wrote were quite remarkable.

      3.2.2. Rehabilitation

In the rehabilitation field, our devices have been used for monitoring range of motion, for example at an elbow or knee joint, during exercises and other aspects of human performance. Our systems are currently in use at two rehabilitation centers, namely the Sister Kenny Institute at Abbott Northwestern Hospital in Minneapolis and the East Carolina University Medical Center. They are also being implemented at the Extended Care Facility of Oneida City Hospitals in Oneida, NY, in a context focused more specifically on monitoring the care of residents.

      3.2.3. Telemedicine

Development of external modules for digital signal processing, digital image processing, and a host of other advanced modalities will expand the scope of NeatTools for clinical applications, basic research, and education and training. Areas of telemedicine that we anticipate would be well served by NeatTools include telerehabilitation, teleradiology, and general remote patient monitoring, including home healthcare, particularly for the elderly still living at home but in need of continual observation. NeatTools already includes a module for the Welch Allyn Vital Signs Monitor™. The Internet socket feature of NeatTools, in conjunction with its audio (and soon video) codec, recording, and database functions, already provides base functionality for telemedicine applications.

      3.2.4. Defense

Another new project area for our HCI technologies concerns landmine detection and related applications involving wearable computers and distributed robotics (our BotMasters project, funded by DARPA). NeatTools and interfaces like ours can facilitate the signal processing and alerts in such critical real-time environments. Given the scourge of 100 million landmines on our planet, often from conflicts settled long ago, we hope that our technology can help reduce this nightmare while affording maximal safety to those engaged in this dangerous task.

  4. Conclusion

Our work is based on a systems approach wherein we have developed modular HCI hardware and software that is customizable, scalable, and extensible. Although most of the core functionality is in place, NeatTools remains under development. Improvements in the visual interface for the end user are needed. Expanding and enhancing the documentation is now a major priority. Much of the functionality and design of our software and hardware has been introduced according to the real needs of users like Eyal, and this will continue as these systems evolve.

References

[1] B. Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction, 3rd ed. ISBN: 0201694972. Addison-Wesley, Reading MA, 1998

[2] J. B. Peatman, Design with PIC Microcontrollers. ISBN: 0137592590. Prentice Hall, Upper Saddle River NJ, 1998

THE MICROSCOPE OF THE MIND

The goal is to extend these environmental control systems into new methods of investigative research, such as tests of basic cognitive functionality or of the capacity to maintain the attentional focus necessary to complete an iterative series of cognitive tasks. Data fusion of sensor data with user-interaction parameters will allow meaningful correlations to be made across various performance modalities. A goal of this application is to identify a qualitative difference between the two performance/behavior states and then to investigate various methods of quantifying that difference in a way that can be generalized.

It is postulated that a difference will be seen in the modulation of some of the natural rhythms. It is also postulated that a cognitively induced modification would be consistent within an individual but would most likely differ between individuals. The psycho-social-behavioral nature of individuals factors into the initial assessment of their cognitive function. Other indicators of cognitive function are short-, intermediate-, and long-term memory, sound judgment, and the ability to identify similarities in related objects. Performance of these cognitive functions is a strong indicator of the biologic health of the brain. Poor performance is highly correlated with organic brain dysfunction.

 

 

The articles below should be summarized in Chapter X:  the first set of case studies showing the mathematics/physiological relationship to your theories

 

1. Basic neuroscience

The following abstracts demonstrate the application of dynamical analysis to physiological signals and show that it is possible to characterize abnormal electrophysiological rhythms as low-dimensional attractors.

Sale EJ, Warner DJ, Price S, Will AD. Compressed complexity parameter. Proceedings of the 2nd International Brain Topography Conference, Toronto, Ontario. 1991.

Warner DJ, Price SH, Sale EJ, Will AD. Chaotropic dynamical analysis of the EEG. Brain Topography. 1990.

Warner DJ, Price SH, Sale EJ, Will AD. Chaotropic dynamical analysis of the EEG. Electroencephalography and Clinical Neurophysiology. 1990.

Warner D, Will AD. Dynamical analysis of EEG: evidence for a low-dimensional attractor in absence epilepsy. Neurology. 1990 April;40(1):351.

The following abstract introduces the possibility of quantitatively correlating movement-related potentials recorded over the scalp with complex motor tasks using human-computer interface technology.

Warner DJ, Will AD, Peterson GW, Price SH, Sale EJ, Turley SM. Quantitative motion analysis instrumentation for movement related potentials. Electroencephalography and Clinical Neurophysiology. 1991;79:29-30.

2. Clinical neuroscience

The basic problem addressed by the following abstracts is that clinical research involving neurological disorders is severely limited by the inability to objectively and quantitatively measure complex motor performance. Large double-blind randomized controlled trials of novel therapies continue to rely on clinical rating scales that are merely ordinal and subjective. In addition, research on the basic neuroscience of motor control is greatly impeded by the lack of quantitative measurement of motor performance.

Will AD, Sale EJ, Price S, Warner DJ, Peterson GW. Quantitative measurement of the "milkmaid" sign in Huntington's disease. Annals of Neurology. 1991;30:320.

Warner DJ, Will AD, Peterson GW, Price SH, Sale EJ. The VPL data glove as an instrument for quantitative motion analysis. Brain Topography. 1990.

Will AD, Warner DJ, Peterson GW, Price SH, Sale EJ. Quantitative motion analysis of the hand using the data glove. Muscle and Nerve. 1990.

Will AD, Warner DJ, Peterson GW, Sale EJ, Price SH. The data glove for precise quantitative measurement of upper motor neuron (UMN) function in amyotrophic lateral sclerosis (ALS). Annals of Neurology. 1990;28:210.

Will AD, Warner DJ, Peterson GW, Price SH, Sale EJ. Quantitative analysis of tremor and chorea using the VPL data glove. Annals of Neurology. 1990;28:299.

3. Therapeutic potential of the human-computer interface

Warner DJ, Will AD, Peterson GW, Price SH, Sale EJ. The VPL data glove as a tool for hand rehabilitation and communication. Annals of Neurology. 1990;28:272.

THE NEUROREHABILITATION WORKSTATION:

Do you really need to include anything on the NRW??? How does it fit in the outline we discussed? Maybe "case 3" in conjunction with Eyal, discussing the system. Or it could be part of your conclusions as "future work". You can't include everything!

A Clinical Application of Machine-Resident Intelligence (1993)

Dave Warner, Jeff Sale, Stephen Price, Doug Will

Human Performance Institute

Loma Linda University Medical Center

Abstract

The Neurorehabilitation Workstation is described. The need to maintain a clinical perspective motivates the comprehensive nature of the system, which integrates multiple data acquisition devices, interface technologies, advanced analytical techniques, and multi-sensory rendering capabilities. Emphasis is placed on machine-resident intelligence embedded at several levels.

Introduction

The field of rehabilitation applies techniques and resources from many disciplines and is constantly seeking to improve the measurement of human performance and the assessment of therapeutic efficacy. We have had considerable success recently in our attempts to transfer new technologies into the clinical setting for such purposes. Devices such as gloves to measure hand motion dynamics, surface EOG and EMG sensors for eye movement and muscle contraction, and lightweight pressure-sensor arrays for gait analysis show great promise in therapy. At the same time, our efforts to make these transfers permanent have been impeded by the lack of standard platforms and interfaces, by inaccessible file formats, and by the medical community's lack of time, technical expertise, and adequate budgets. Until now, no cost-effective solution appeared possible. Recent developments in human-computer interface hardware and software, data analysis, and expert systems suggest this is no longer the case. We are currently exploring a solution, the Neurorehabilitation Workstation (NRW), which integrates these technologies and methods into a comprehensive system designed specifically for the clinic. In addition, we hope it may be generic enough to act as a standard for other similar applications. The success of the NRW depends on four things: modular design (for distributed processing and adaptability); integration of several data input devices into a single platform within a common interface protocol; implementation of machine-resident intelligence (neural nets, fuzzy logic) on several levels; and creation of a development environment driven by clinical needs. We detail aspects of these features below.

Data Input

A necessary feature of the NRW is the integration of a variety of data input devices into a single system, including EEG, EMG, EOG, ECG, dynamic bend sensors, pressure sensors, audio and video digitizers, etc. The resulting capacity for data fusion allows meaningful correlations to be made across various performance modalities. The devices and their hardware boards connect to an external module, and a high-speed bus will route the data both to a central multi-tasking server and to the rendering subsystem for immediate feedback. The server should be intelligent enough to automatically implement a custom configuration of input device parameters, interface functionality, and relevant records, based on the device(s) connected and the identity of the operator(s) and patient(s) currently at the system.

Data Management

The maintenance of medical record integrity is a significant issue. Such integrity is achieved through security protocols, standardized data formats, error handling, and semi-automated database archiving. The data management subsystem tasks also include linking the device data with the patient record and specifying sensor-specific data formats and structures.

Interactive Modalities/Methodologies

The user interface will be based on new theories of human-computer interaction methodologies, computer-supported cooperative work, knowledge engineering, expert systems, and adaptive task analysis. The system will monitor a user's actions, learn from them, and adapt by varying aspects of the system's configuration to optimize performance. Adaptable online knowledge-based help using text, graphics, and animated tutorials provides interactive learning and navigation.

Data Analysis

Effective therapeutic intervention relies on a comparative evaluation of a patient's progressing or regressing state. The nature of the change in this state may often be quite subtle, even imperceptible using traditional techniques. Given that the data acquisition subsystem can detect these changes, the data analysis subsystem is designed to enhance them in ways that can then be rendered to exploit the operator's sensory modalities. Linear and nonlinear multivariate analysis tools will be capable of processing multiple data sets in a variety of ways, including graphical analysis (phase portraits, compressed arrays, recurrence maps, etc.) and sound editing (mixing, filtering). Automated detection of trends and correlations using fuzzy logic may be performed in the background or in a post-processing mode. The user may then be alerted by the system if it detects areas worthy of further investigation. Such a feature should expedite the creation of a taxonomy of lesion-specific impairments.
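
As an example of one of the graphical analyses named above, a two-dimensional phase portrait can be built by time-delay embedding: each sample of a physiological signal is plotted against a delayed copy of itself, so that a low-dimensional attractor appears as structure rather than a diffuse cloud. The sketch below is a minimal illustration; the delay value and the plain-vector data representation are arbitrary choices for exposition.

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Embed a scalar time series into 2-D delay coordinates
    // (x(t), x(t + tau)); each returned point is one state of the system.
    std::vector<std::pair<double, double>>
    phasePortrait(const std::vector<double>& x, std::size_t tau) {
        std::vector<std::pair<double, double>> pts;
        for (std::size_t t = 0; t + tau < x.size(); ++t)
            pts.emplace_back(x[t], x[t + tau]);
        return pts;   // plot these points to render the phase portrait
    }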

User Classification

We have defined five types of users. These types help define discrete levels of user functionality. Therapists, the primary users of the system, are responsible for data acquisition, data management, basic analysis, and patient-oriented interactive biofeedback modes. Technicians are responsible for simple data acquisition. Physicians will use the more comprehensive data analysis tools. Researchers will focus on the data analysis but their use of the system will be unconstrained. They will explore and develop custom analytical techniques. Patients will primarily use the therapeutic biofeedback features of the system, usually in a supervised setting.

Data Rendering Modalities

With multi-sensor data acquisition and advanced analytical characterization, the rendering capacity of the system becomes vital. The NRW will implement multi-sensory rendering by combining recently developed 3D sound and tactile feedback systems with advanced visualization technologies. Research in human sensory physiology has shown the eye to be optimized for feature extraction of spatially rendered data, the ear for temporally rendered data, and the tactile sense for textures [9]. Thus the NRW will enhance perception of complex relationships by integrating visual, binaural, and tactile modalities. The rendering subsystem has a near-real-time biofeedback mode for use in a therapeutic paradigm and a data perceptualization mode for use in an analytical paradigm. Outputs from sensing devices and analytical operations are parsed and routed to the combination of rendering modalities best suited to render that information.

Conclusion

The goals of the NRW are twofold: 1) to provide an open hardware platform and modular infrastructure which will expedite the implementation of new technologies into the clinic, and 2) to augment clinical therapy with new methods of interaction and analysis. Success should provide neurorehabilitation, and the medical community in general, with a powerful tool for characterizing the complex nature of normal and impaired human performance.

CYBERNETIC HEDONISM

The other focus of our efforts is in developing highly interactive, biocybernetic systems in which biological signals can modify an environmental chamber's parameters, allowing the user to interface bioelectrically with spatialized environments. We believe that such physiologically modulated environmental systems may have a health-preserving function. Interfaces to control stimulation can adaptively utilize any biosignal. The result is the capacity to create a stimulus regime that accelerates relaxation and facilitates stress reduction. This is an application of wellness maintenance technology.

"The Nirvana Express"

Remapping the Human-Computer Interface for Optimized Perceptualization of Medical Information

Virtual reality is a paradigm shift in the way we think about mass communication and information technologies. Consider the following: in the distant past, medicine was an art; the practice of medicine was guided mostly by refined heuristics and intuition. All the external senses were used in the evaluation of the patient. Visual, auditory, tactile, olfactory, and gustatory cues were all integrated to give the healer a perception of what to do. With the development of science and technology, the practice of medicine has slowly shifted from being intuition-based to being guided by the results of objective tests. In many ways this is progress, though in other ways we seem to have forsaken our own senses in favor of machines, thus removing ourselves from the determination of the problem. The ever-increasing ability of technology to quantitate complex physiological parameters and to image volumetric anatomical structures is taking us to a point where we will soon be unable to assimilate all the available information through traditional means (i.e., numbers and graphs). Recent attempts to solve this problem have focused primarily on advanced visualization techniques. While much progress has been made in this field, the visual sense is finite and is reaching its saturation level.

Enter virtual reality. Virtual reality technology is primarily interface technology that renders computer information onto multiple human sensory systems to give a sustained perceptual effect (i.e., a sensation with a context) while monitoring human response in the form of gestures, speech, eye movements, brain waves, and other inputs. This interface also allows for natural interaction with abstract data sets, providing an integrated experiential encounter with information. This new technology provides us with the capacity to move into a new paradigm, one in which the physiological integration of a pansensory rendering of medically relevant information provides an enhanced capability to discriminate between classes of complex dynamic interactions involved in pathophysiological processes.

Much attention has been given to enhanced visualization techniques. Dynamic volumetric stereoscopic rendering methods have greatly enhanced our capacity for visual assessment of medical information. We need, however, to be careful that we do not become photo-chauvinistic and forget that we have other senses. There are relevant concepts from sensory physiology that are now within the resolution of the interface technology. This new technology increases the number and variation of simultaneous sensory inputs, making the body a sensorial combinetric integrator. A good working knowledge of sensory physiology and perceptual psychophysics can help us optimize our future interactions with the computer. Aside from the basic neuroscience issues of modality, duration, intensity, distribution, frequency, spatial displacement, contrast, inhibition, threshold, adaptation, transduction, conductance, and transmission (to name a few), we must identify the optimal perceptual state-space parameters within which information can best be rendered. We must also identify which types of information are best rendered by each specific sense modality.

New technologies and techniques have recently become available that allow for the rendering of data via auditory means. Not only can we now represent any data set in the form of sound, but we can also spatialize the displacement of multiple sound sources, giving us simultaneous exposure to different dynamic data sets. In these spatialized environments we can shift our attentional focus from source to source for real-time comparison of multiple sets of data. Devices now exist which can stimulate the sensations of pressure, vibration, texture, and temperature. This is a relatively untouched field as far as abstract data representation is concerned. These modalities, combined with somatotopic placement, also provide for spatial coding of the rendered information. The implementation of vision, hearing, and touch technologies allows for the simultaneous sensation of multiple independent and dynamic data sets that can be integrated physiologically into a single perceptual state.
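
As a minimal illustration of rendering a data series via auditory means, the sketch below maps each sample onto a MIDI note number so that rising values are heard as rising pitch. The note range and the choice of MIDI rather than direct synthesis are illustrative assumptions, not a prescribed method.

    #include <cstdint>
    #include <vector>

    // Map samples in [lo, hi] linearly onto MIDI notes 35..95 (sketch).
    std::vector<uint8_t> sonify(const std::vector<double>& data,
                                double lo, double hi) {
        std::vector<uint8_t> notes;
        for (double v : data) {
            double norm = (v - lo) / (hi - lo);   // normalize to 0..1
            if (norm < 0) norm = 0;
            if (norm > 1) norm = 1;
            notes.push_back(static_cast<uint8_t>(35 + norm * 60));
        }
        return notes;   // feed to any MIDI output, one note per sample tick
    }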

Yet to be fully embraced by the virtual reality community are the olfactory and gustatory senses, smell and taste. While their current integration is questionable, their potential impact is quite profound. Recent work in olfactory science has identified at least 30 basic smells. Technologies under current development will be able to deliver quantified combinations of these smells for a wide range of distinct perceptual states.

In the area of taste, the development of automated food processing will eventually allow the doctor to get a taste of complex data. The use of smell and taste to help convey the state of complex systems may seem like quite a reach of the imagination. However, the possibility that these senses may help discern subtle changes in complex systems warrants investigation. We are embarking on an adventure that promises to change our relationship with the computer forever. With the immersion of all the external senses into virtual reality, our ability to perceptualize medically relevant information in an interactive mode will greatly enhance our capacity for improvisational investigation (stand-up research). This is truly a paradigm shift and the beginning of a new era of computer-assisted medicine.

Dave Warner MD

Medical Neuroscientist

Dir. Medical Intelligence

MindTel

davew@well.com

www.pulsar.org

Warner, a medical neuroscientist, has an MD from Loma Linda University, is the director of the Institute for Interventional Informatics, and has gained international recognition for pioneering new methods of physiologically based human-computer interaction. Warner's research efforts have focused on advanced instrumentation and new methods of analysis that can be applied to evaluating various aspects of human function as it relates to human-computer interaction; the aim of this effort is to identify methods and techniques which optimize information flow between humans and computers. Warner's work has indicated that an optimal mapping of interactive interface technologies to the human nervous system's capacity to transduce, assimilate, and respond intelligently to information in an integrative, multisensory interaction will fundamentally change the way that humans interact with information systems. Application areas for this work include quantitative assessment of human performance, augmentative communication systems, environmental controls for the disabled, medical communications, and integrated interactive educational systems. Warner is particularly active in the technology transfer of aerospace and other defense-derived technologies to the fields of health care and education. Specific areas of interest are: advanced instrumentation for the acquisition and analysis of medically relevant biological signals; intelligent informatic systems which augment the general flow of medical information and provide decision support for the health care professional; public-access health information databases designed to empower average citizens to become more involved in their own health care; and advanced training technologies which will adaptively optimize interactive educational systems to the capacity of the user. Selected publications are:

1. Warner D, Rusovick R, Balch D (1998) The Globalization of Interventional Informatics Through Internet Mediated Distributed Medical Intelligence, New Medicine (in press)

2. Warner D, Tichenor J.M, Balch D.C. (1996) Telemedicine and Distributed Medical Intelligence, Telemedicine Journal 2: 295-301.

3. Warner, D., Anderson, T., and Johannsen, J. (1994). Bio-Cybernetics: A Biologically Responsive Interactive Interface, in Medicine Meets Virtual Reality II: Interactive Technology & Healthcare: Visionary Applications for Simulation Visualization Robotics. (pp. 237-241). San Diego, CA, USA: Aligned Management Associates.

4. Warner, D., Sale, J., Price, S. and Will, D. (1992). Remapping the Human-Computer Interface for Optimized Perceptualization of Medical Information, in Proceedings of Medicine Meets Virtual Reality. San Diego, CA: Aligned Management Associates.

5. Warner, D., Sale, J. and Price, S. (1991). The Neurorehabilitation Workstation: A Clinical Application for Machine-Resident Intelligence, in Proceedings of the 13th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. (pp. 1266-1267). Los Alamitos, CA: IEEE Computer Society Press.