Date of Award

2012

Document Type

Doctoral Thesis

Degree Name

Doctor of Philosophy

Department

School of Mechanical & Manufacturing Engineering

First Advisor

Dr. Michael J. O'Mahony

Abstract

Current sensory substitution systems for the visually impaired are limited in their spatial and depth resolution capabilities. They are unable to relay real-time 3D spatial information about objects in the direct path of the user. Recent advances in 3D imaging technology have presented new opportunities to develop improved sensory substitution systems to compensate for visual sensory loss. The present study established the design criteria for, and then designed, manufactured and evaluated the performance of, a real-time vision substitution device called VisionRE.

Knowledge obtained from a literature review of Human Machine Interfaces (HMIs), feedback from visually impaired focus groups and 3D electrode modelling led to the selection of the tongue as the most appropriate HMI site and to the design and development of VisionRE. VisionRE segments, classifies and encodes spatial information about objects in the direct path of a visually impaired user. Object shape, distance, position and area information is relayed via a purpose-built 64-channel electrotactile VisionRE tongue stimulator (VTS) with audio feedback.
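The relay step above maps a segmented object onto the 64 stimulator channels. The sketch below illustrates one plausible mapping, assuming an 8×8 electrode layout (the abstract states 64 channels but not their geometry) and a binary object mask from the segmentation stage; it is an illustration of the principle, not the thesis's actual implementation.

```python
def mask_to_electrodes(mask, grid=8):
    """Downsample a binary object mask (list of rows of 0/1) to an
    assumed grid x grid electrode activation pattern.

    Each electrode fires if any mask pixel in its block is set, so
    the object's shape and position survive the resolution drop.
    """
    h, w = len(mask), len(mask[0])
    pattern = [[0] * grid for _ in range(grid)]
    for r in range(grid):
        for c in range(grid):
            # Pixel block covered by electrode (r, c)
            r0, r1 = r * h // grid, (r + 1) * h // grid
            c0, c1 = c * w // grid, (c + 1) * w // grid
            pattern[r][c] = int(any(mask[i][j]
                                    for i in range(r0, r1)
                                    for j in range(c0, c1)))
    return pattern

# Example: a 16x16 frame with an object in the top-left quadrant
frame = [[1 if i < 8 and j < 8 else 0 for j in range(16)]
         for i in range(16)]
activation = mask_to_electrodes(frame)
```

With a 16×16 frame, each electrode covers a 2×2 pixel block, so the top-left quadrant object activates the top-left 4×4 electrodes.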

Electrotactile stimulus frequency, pulse duration, current amplitude, stimulus location, spatial summation and the positional accuracy of the tongue were investigated. It was determined that coding information by altering the intensity or frequency of electrotactile stimuli on the tongue leads to information loss. Three temporal levels of electrotactile stimuli, determined by this research, are used by the VTS to encode the relative distance/priority of objects to an individual. Clinical trials demonstrated that eight participants could easily identify the three stimuli with an average accuracy of 97%. Individuals using the device demonstrated an average shape recognition performance of 73%, a stimulus positional accuracy of 79% and a dynamic range of 10.23 ± 2.56 dB. Spatial summation was found to greatly affect the perceived intensity of stimuli. Testing established that the posterior region of the tongue requires over three times more current than the anterior region to reach its minimum sensory threshold. EEG experiments using electrotactile shapes found Trigeminal Somatosensory-Evoked-Potentials (TSEPs) and no evidence of Visually-Evoked-Potentials (VEPs), and hence no cross-modal plasticity during initial VTS application.
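The three-level temporal coding described above can be sketched as a simple distance-to-level mapping. The thresholds below are hypothetical placeholders (the abstract does not give the actual distance bands); the point is only that distance/priority is quantised into three discriminable temporal levels rather than coded by intensity or frequency, which the research found lossy.

```python
def temporal_level(distance_m, near=1.0, far=2.5):
    """Map an object's distance to one of three temporal stimulus
    levels on the tongue stimulator.

    Thresholds `near` and `far` are illustrative assumptions, not
    values from the thesis. Level 3 = closest / highest priority.
    """
    if distance_m < near:
        return 3  # urgent: object directly ahead
    elif distance_m < far:
        return 2  # intermediate priority
    else:
        return 1  # distant: lowest priority
```

Because participants identified the three levels with 97% average accuracy, a coarse three-way quantisation like this trades distance resolution for reliable perception.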

Innovative aspects of the device include: 1) Ability to monitor the position of the tongue on the VTS in real time. 2) The use of machine vision based object classifiers to automatically classify objects selected by a user’s tongue. 3) Ability to provide the exact distance and area for objects selected by the tongue (Active Feedback). 4) An integrated constant current source to keep the perception of stimuli constant and regulate for changes in saliva and tongue resistance. 5) Ability to detect objects independently of their colour, and hence multicoloured objects are not falsely segmented into a number of objects as is the case with previous devices.
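Feature 4 above, the constant current source, compensates for the fact that tongue and saliva resistance vary over time. A minimal sketch of the regulation idea, using only Ohm's law: the drive voltage is adjusted as the measured load resistance changes so that the delivered current stays at the target. This is an illustration of the principle, not the thesis's circuit.

```python
def drive_voltage(target_current_ma, load_resistance_kohm):
    """Return the drive voltage (V) needed to hold the stimulus
    current at `target_current_ma` through a tongue/saliva load of
    `load_resistance_kohm`.

    Ohm's law with mixed units: mA * kOhm = V. In hardware this
    adjustment runs continuously as the load is re-measured.
    """
    return target_current_ma * load_resistance_kohm

# If saliva lowers the load from 5 kOhm to 2 kOhm, the source drops
# its output from 5 V to 2 V to keep the current (and the perceived
# stimulus intensity) at 1 mA.
```

Holding current rather than voltage constant keeps the perceived stimulus intensity stable even as the electrode-tongue interface changes.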

Access Level

info:eu-repo/semantics/openAccess
