Date of Award

2004

Document Type

Master's Thesis

Degree Name

Master of Science (Research)

Department

Mathematics and Computing

First Advisor

Dr. Paul Walsh

Abstract

Artificial neural networks are biologically inspired computational methods with the ability to approximate discrete, real, and vector-valued target functions. For some problem domains this ability makes neural networks a more appealing solution than traditional computational techniques. Such problem domains typically contain a large number of parameters that are interrelated in a complex and often unknown manner, which makes rule-based solutions all but impossible. However, another feature of such problem domains is the presence of large volumes of real-world data that can be used to train a neural network until it learns a solution.

One of the more popular neural network training algorithms is the backpropagation of error training algorithm. Backpropagation works by compiling a large set of input/output samples and using these samples to adjust the network's weights until the network encodes a solution to the problem. After the presentation of each sample, the weights are adjusted so as to bring the network output closer in line with the sample output. This process must be repeated hundreds or thousands of times over all the samples before the network converges on a solution.
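
As a rough illustration of the training loop described above, the sketch below implements backpropagation for a one-hidden-layer network with sigmoid units and squared error. The architecture, learning rate, and the XOR test problem are illustrative assumptions, not details taken from the thesis.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train(samples, targets, hidden=4, lr=0.5, epochs=5000, seed=0):
        rng = np.random.default_rng(seed)
        n_in, n_out = samples.shape[1], targets.shape[1]
        W1 = rng.normal(scale=0.5, size=(n_in, hidden))   # input -> hidden weights
        W2 = rng.normal(scale=0.5, size=(hidden, n_out))  # hidden -> output weights
        for _ in range(epochs):                 # many passes over all samples
            for x, t in zip(samples, targets):  # present each sample in turn
                h = sigmoid(x @ W1)             # forward pass
                y = sigmoid(h @ W2)
                # backward pass: propagate the output error back to each layer
                delta_out = (y - t) * y * (1 - y)
                delta_hid = (delta_out @ W2.T) * h * (1 - h)
                # adjust the weights to bring the output closer to the target
                W2 -= lr * np.outer(h, delta_out)
                W1 -= lr * np.outer(x, delta_hid)
        return W1, W2

    # XOR, a classic test problem for backpropagation
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W1, W2 = train(X, T)
    print(sigmoid(sigmoid(X @ W1) @ W2).round(2))

Note how every sample must pass through both the forward and backward computation on every epoch; it is this repeated full-dataset sweep that makes training time the dominant cost.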

Hence a major limiting factor of the backpropagation algorithm is the length of time required to train a network. The aim of this thesis is therefore to investigate parallel implementation techniques that reduce this time without altering the underlying calculations. The methods investigated include the use of SIMD processing to speed up the underlying operations, a parallel implementation of backpropagation on a dedicated cluster computer, and its extension to a High Throughput Computing environment.
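
As a sketch of one data-parallel scheme consistent with this aim, the code below splits the sample set across worker processes, has each worker compute the weight update for its share, and sums the partial updates before the weights are adjusted. The single-layer linear model, the multiprocessing-based workers, and the learning rate are assumptions made for illustration; they do not reproduce the thesis's SIMD or cluster implementations.

    import numpy as np
    from multiprocessing import Pool

    def partial_gradient(args):
        """Squared-error gradient for one chunk of samples (single linear layer)."""
        W, X, T = args
        Y = X @ W                      # forward pass on this chunk only
        return X.T @ (Y - T)           # accumulated gradient for the chunk

    def parallel_step(W, X, T, lr=0.1, workers=4):
        # split the sample set into one chunk per worker
        chunks = [(W, Xc, Tc) for Xc, Tc in
                  zip(np.array_split(X, workers), np.array_split(T, workers))]
        with Pool(workers) as pool:
            grads = pool.map(partial_gradient, chunks)  # gradients in parallel
        return W - lr * sum(grads) / len(X)             # apply the summed update

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 3))
        W_true = np.array([[2.0], [-1.0], [0.5]])
        T = X @ W_true
        W = np.zeros((3, 1))
        for _ in range(200):
            W = parallel_step(W, X, T)
        print(W.round(2))  # should approach W_true

Because each worker's gradient depends only on its own chunk, the per-step arithmetic is unchanged and only the work is divided, which matches the stated goal of reducing training time without altering the underlying calculations.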

Access Level

info:eu-repo/semantics/openAccess
