Grok Box

A perceptual and expressional decision support tool for human disaster response


Physical Concepts

Grok Box is a tool that lets a single user interact, as rapidly as possible, with as much digital information as can be rendered to the human body. The idea of "rendering information to the human body" may sound strange, but it should become clear shortly. The simple images above convey only that Grok Box is a small space which encloses a single individual. This site will explore, from the simplest to the most complex, the elements required to create a tool like Grok Box.

In an emergency, military or civilian, it is of the utmost importance that individuals in critical decision-making positions have access to all relevant information: casualty assessments, support for medics treating the wounded in the field, the locations of helicopters, food supplies, blankets, and medicines, and the natural or hostile threats that still encroach on crisis sites and staff. Historically, response to such events has lacked the integrative information centers now possible with web-based communications systems. The critical issue Grok Box aims to address, however, is the human-computer interface.

PERFORMANCE ENHANCEMENT THROUGH PERCEPTUAL MODULATION

The objective of Grok Box is to render simultaneous multisensory information to the human body. The technology is based on a new paradigm of human-computer interaction known as Biocybernetics. Grok Box systems will render information to the visual, auditory, and tactile senses of the user. Drawing on principles of human sensory physiology, Grok Box's multisensory interfaces will make maximal use of the 'feature extraction' properties of the human senses: each sensory system has a neurological margin of maximally meaningful input from the outside world, and the rendering systems of Grok Box will map information onto that margin for each sense. Variably located 3-D visual displays, spatialized sounds, and precision tactile coding of the body surface render information onto the body (eyes, ears, skin, vestibular system, etc.). EMG-like sensors across muscle surfaces, foot-activated pressure sensors, voice recognition systems, and hand-held devices allow user input.
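One way to make the idea of "mapping to the margin" of each sense concrete is to treat each sense as a channel with limited capacity and each incoming data stream as something to be scheduled onto the channel best able to carry it. The Python sketch below is only a minimal illustration of that view; the device names, capacity figures, and greedy assignment rule are assumptions made for the example, not part of any Grok Box specification.

from dataclasses import dataclass
from enum import Enum, auto

class Sense(Enum):
    VISION = auto()
    HEARING = auto()
    TOUCH = auto()

@dataclass
class SensoryChannel:
    """One output channel of the rendering system, with a rough
    capacity figure used to decide how much data it can carry."""
    sense: Sense
    device: str                    # e.g. "3-D visual display", "spatialized audio"
    capacity_bits_per_sec: float

@dataclass
class DataStream:
    """An incoming information stream (e.g. helicopter positions,
    casualty counts) with an estimated rate and an operator-set priority
    (lower number = more urgent)."""
    name: str
    rate_bits_per_sec: float
    priority: int

def assign_streams(streams, channels):
    """Greedy assignment of data streams to sensory channels.

    Streams are handled in priority order; each is placed on the channel
    with the most remaining capacity, if it fits.  This is an illustration
    of mapping information onto the capacity of each sense, not a
    physiological model.
    """
    remaining = {c.device: c.capacity_bits_per_sec for c in channels}
    assignment = {}
    for s in sorted(streams, key=lambda s: s.priority):
        device = max(remaining, key=remaining.get)
        if remaining[device] >= s.rate_bits_per_sec:
            assignment[s.name] = device
            remaining[device] -= s.rate_bits_per_sec
    return assignment

# Example (all figures invented for illustration):
channels = [SensoryChannel(Sense.VISION, "3-D visual display", 1e6),
            SensoryChannel(Sense.HEARING, "spatialized audio", 1e4),
            SensoryChannel(Sense.TOUCH, "tactile array", 1e3)]
streams = [DataStream("helicopter positions", 5e3, priority=1),
           DataStream("casualty counts", 2e2, priority=2)]
print(assign_streams(streams, channels))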

We propose to develop an interactive environment incorporating new ways to render complex information to the user, optimizing the interface to match the human nervous system's ability to transduce, transmit, and render to consciousness the necessary information. Such a system will be built around the neural information processing that directly supports the user's perception. A perceptualization environment could be constructed that optimizes the human's ability to discriminate and iteratively refine emergent patterns from any variety of sensor data. This perceptualization environment, the "GROK-BOX," will integrate several vital components of an interactive information environment. Key elements include multisensory rendering systems, advanced human input devices, and an array of computational techniques that transform diverse data types into perceptible patterns, enhancing the human capacity to perceive meaningful signals in a "sea of noise."

A comprehensive set of visual, aural, tactile, proprioceptive, and somatosensory rendering devices will be integrated into the system to give the user an integrative, experiential interaction with complex data types. The system will also incorporate several unique input systems that give the user a multiplicity of interaction options; in this way, the user will be able to feed the perceived significance back to the system for further enhancement. The GROK-BOX will be a tool for interactively experiencing a wide variety of natural and unnatural perceptualization techniques.
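Read as a control loop, the environment just described has a simple shape: transform the sensor data into perceptible patterns, render them, collect the user's judgment of what is significant, and fold that judgment back into the next pass. The Python sketch below is only a schematic of that loop; the Transformer class and the render and collect_feedback callables are hypothetical names standing in for the rendering and input systems described above.

class Transformer:
    """Turns raw sensor data into renderable patterns.  'emphasis'
    records which features the user has flagged as significant."""
    def __init__(self):
        self.emphasis = {}

    def transform(self, raw_data):
        # Weight each feature by how significant the user has judged it so far.
        return {key: value * self.emphasis.get(key, 1.0)
                for key, value in raw_data.items()}

    def reinforce(self, feedback):
        # feedback maps feature name -> relative significance (>1 boosts it).
        for key, weight in feedback.items():
            self.emphasis[key] = self.emphasis.get(key, 1.0) * weight


def session(sensor_feed, render, collect_feedback, transformer=None):
    """One perceive-and-refine session.

    sensor_feed      -- iterable of dicts of raw sensor readings
    render           -- callable that presents a pattern to the user
                        (visual, aural, tactile output)
    collect_feedback -- callable returning the user's significance judgments
    """
    transformer = transformer or Transformer()
    for raw in sensor_feed:
        pattern = transformer.transform(raw)
        render(pattern)
        transformer.reinforce(collect_feedback())
    return transformer

The point of the sketch is only the shape of the interaction: the user's expressed significance persists in the transformer and biases every subsequent rendering pass, which is the iterative refinement the paragraph above describes.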

This new technology increases the number and variety of simultaneous sensory inputs, making the body a sensorial, combinatoric integrator.



This draft page created by Rik Rusovick
September 18, 2000