Neural Networks


A neural network, also known as an artificial neural network (ANN),
provides a unique computing architecture whose potential has only begun to
be tapped. Neural networks are used to address problems that are
intractable or cumbersome with traditional methods. These new computing
architectures are radically different from the computers that are widely
used today. ANNs are massively parallel systems that rely on dense
arrangements of interconnections and surprisingly simple processors
(Cr95, Ga93).
Artificial neural networks take their name from the networks of nerve
cells in the brain. Although a great deal of biological detail is eliminated in
these computing models, ANNs retain enough of the structure observed in the
brain to provide insight into how biological neural processing may work (He90).
Neural networks provide an effective approach for a broad spectrum of
applications. Neural networks excel at problems involving patterns, which
include pattern mapping, pattern completion, and pattern classification (He95).
Neural networks may be applied to translate images into keywords or even
translate financial data into financial predictions (Wo96).
Neural networks utilize a parallel processing structure that has large
numbers of processors and many interconnections between them. These processors
are much simpler than typical central processing units (He90). In a neural
network, each processor is linked to many of its neighbors so that there are
many more interconnections than processors. The power of the neural network
lies in the tremendous number of interconnections (Za93).
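As a concrete illustration of this ratio, consider the following minimal
Python sketch (the layer widths are hypothetical, not drawn from the cited
sources), which counts processing units and interconnections in a fully
connected, layered network:

    # Hypothetical network: three layers of simple processing units.
    layer_sizes = [64, 32, 10]

    # One processor per unit; one interconnection per pair of units in
    # adjacent layers when the layers are fully connected.
    processors = sum(layer_sizes)
    interconnections = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

    print(processors)        # 106 processing units
    print(interconnections)  # 2368 interconnections (64*32 + 32*10)

Even in this small example, the interconnections outnumber the processing
units by more than twenty to one.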
ANNs are generating much interest among engineers and scientists.
Artificial neural network models contribute to our understanding of their
biological counterparts. They also provide a novel type of parallel
processing with powerful capabilities: it lends itself to creative
hardware implementations, meets the demand for fast computing hardware,
and offers the potential for solving application problems (Wo96).
Neural networks excite our imagination and our relentless desire to
understand the self; in addition, they equip us with an assemblage of unique
technological tools. But what has triggered the most interest in neural
networks is that models similar to biological nervous systems can actually be
made to do useful computations, and furthermore, the capabilities of the
resulting systems provide an effective approach to previously unsolved problems
(Da90).
Neural network architectures are strikingly different from traditional
single-processor computers. A traditional von Neumann machine has a single
CPU that performs all of its computations in sequence (He90). A typical
CPU is capable of a hundred or more basic instructions, including
additions, subtractions, loads, and shifts, which are executed one at a
time at successive ticks of a clock. In contrast, a neural network
processing unit performs only one or, at most, a few kinds of calculation:
a summation function is applied to its inputs, and incremental changes are
made to parameters associated with the interconnections. This simple
structure nevertheless gives a neural network the capability to classify
and recognize patterns, to perform pattern mapping, and to serve as a
useful computing tool (Vo94).
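A single processing unit of this kind can be sketched in a few lines of
Python; the sigmoid squashing function and the particular weights below
are illustrative assumptions, not details taken from the cited sources:

    import math

    def unit_output(inputs, weights, bias):
        # One processing unit: sum the weighted inputs, then pass the
        # total through a squashing (sigmoid) activation function.
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    # Three inputs feeding one unit; the weights are arbitrary.
    print(unit_output([0.5, 1.0, 0.2], [0.4, -0.6, 1.1], bias=0.1))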
The processing power of a neural network is measured mainly by the
number of interconnection updates per second. In contrast, von Neumann
machines are benchmarked by the number of instructions performed per
second, in sequence, by a single processor (He90). During their learning
phase, neural networks adjust the parameters associated with the
interconnections between neurons; the rate of learning therefore depends
on the rate of interconnection updates (Kh90).
Neural network architectures also depart from typical parallel processing
architectures in some basic respects. First, the processors in a neural
network are massively interconnected; the number of interconnections
usually far exceeds the number of processing units (Vo94).
State-of-the-art parallel processing architectures typically have a much
smaller ratio of interconnections to processing units (Za93). In addition,
conventional parallel architectures tend to incorporate processing units
that are comparable in complexity to those of von Neumann machines (He90),
whereas neural network architectures contain simpler processing units
designed for the summation of many inputs and the adjustment of
interconnection parameters.
From a computational viewpoint, the two primary attractions of neural
networks are learning and knowledge representation. Many researchers
believe that machine learning techniques offer the best hope for
eventually performing difficult artificial intelligence tasks (Ga93).
Most neural networks learn from examples, just as children learn to
recognize dogs from examples of dogs (Wo96). Typically, a neural network
is presented with a training set consisting of a group of examples from
which the network can learn. These examples, known as training patterns,
are represented as vectors and can be taken from such sources as images,
speech signals, sensor data, and diagnostic information (Cr95, Ga93).
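For instance, a tiny image can be flattened into a vector of numbers and
paired with the label the network should learn to produce. The 3x3
"image" below is a made-up illustration, not an example from the cited
sources:

    # Hypothetical training pattern: a 3x3 image of the letter T,
    # flattened row by row into a vector of nine numbers.
    t_image = [1, 1, 1,
               0, 1, 0,
               0, 1, 0]

    # A training set is a list of (pattern vector, label) pairs.
    training_set = [(t_image, "T")]  # further examples would follow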
The most common training scenarios utilize supervised learning, during
which the network is presented with an input pattern together with the
target output for that pattern. The target output usually constitutes the
correct answer, or correct classification, for the input pattern. In
response to these paired examples, the neural network adjusts the values
of its internal weights (Cr95). If training is successful, the internal
parameters are adjusted to the point where the network produces the
correct answer in response to each input pattern (Za93).
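The following sketch shows this cycle in its simplest form: a single-unit
perceptron learning the logical AND of two inputs. The learning rule and
parameter values are standard textbook choices, assumed here for
illustration rather than taken from the cited sources:

    def train(training_set, rate=0.1, epochs=25):
        # Supervised learning: for each paired example, compare the
        # unit's output with the target and nudge the internal weights
        # in the direction that reduces the error.
        weights = [0.0] * len(training_set[0][0])
        bias = 0.0
        for _ in range(epochs):
            for pattern, target in training_set:
                total = sum(x * w for x, w in zip(pattern, weights)) + bias
                output = 1 if total > 0 else 0
                error = target - output  # target is the "correct answer"
                weights = [w + rate * error * x
                           for w, x in zip(weights, pattern)]
                bias += rate * error
        return weights, bias

    # Paired examples: each input pattern with its target output.
    examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b = train(examples)
    for pattern, target in examples:
        total = sum(x * wi for x, wi in zip(pattern, w)) + b
        print(pattern, "->", 1 if total > 0 else 0)  # matches each target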
Because they learn by example, neural networks offer the potential for
building computing systems that do not need to be programmed (Wo96). This
is a radically different approach from traditional computing methods,
which involve the development of computer programs. In a computer program,
every step that the computer executes is specified in advance by the
programmer. A neural network, in contrast, begins with sample inputs and
outputs and learns to provide the correct output for each input (Za93).
The neural network approach does not require human identification of
features, nor does it require human development of algorithms or programs
specific to the classification problem at hand. This suggests that time
and human effort can be saved (Wo96). There are drawbacks to the neural
network approach, however. The time needed to train the network may not be
known in advance, and the process of designing a network that successfully
solves an application problem may be involved. The potential of the
approach, however, appears significantly better than that of past
approaches (Ga93).
Neural network architectures encode information in a distributed fashion.
Typically, the information stored in a neural network is shared by many
of its processing units. This type of coding is in stark contrast to
traditional memory schemes, where particular pieces of information are stored in
particular locations of memory. Traditional speech recognition systems, for
example, contain a lookup table of template speech patterns that are compared
one by one to spoken inputs. Such templates are stored in a specific location
of the computer memory. Neural networks, in contrast, identify spoken syllables
by using a number of processing units simultaneously. The internal
representation is thus distributed across all or part of the network.
Furthermore, more than one syllable or pattern may be stored at the same
time by the same network (Za93).
Neural networks have far-reaching potential as building blocks in
tomorrow's computational world. Already, useful applications have been designed,
built, and commercialized, and much research continues in hopes of extending
this success (He95).
Neural network applications emphasize areas where they appear to offer a
more appropriate approach than traditional computing does. Neural networks
offer
possibilities for solving problems that require pattern recognition, pattern
mapping, dealing with noisy data, pattern completion, associative lookups, and
systems that learn or adapt during use (Fr93, Za93). Examples of specific areas
where these types of problems appear include speech synthesis and recognition,
image processing and analysis, sonar and seismic signal classification, and
adaptive control. In addition, neural networks can perform some knowledge
processing tasks and can be used to implement associative memory (Kh90). Some
optimization tasks can be addressed with neural networks. The range of
potential applications is impressive.
The first highly developed application was handwritten character
identification. A neural network is trained on a set of handwritten characters,
such as printed letters of the alphabet. The network training set then consists
of the handwritten characters as inputs together with the correct identification
for each character. At the completion of training, the network identifies
handwritten characters in spite of variations in how they are written
(Za93).
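One way such a training set might be encoded is sketched below; the 5x5
pixel grid and the one-of-26 target scheme are illustrative assumptions,
not details reported in the cited sources:

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def one_hot(letter):
        # The "correct identification" as a one-of-26 target vector.
        target = [0] * len(ALPHABET)
        target[ALPHABET.index(letter)] = 1
        return target

    # A hypothetical 5x5 scan of a handwritten 'A', flattened into 25
    # input values for the network.
    sample = [0, 0, 1, 0, 0,
              0, 1, 0, 1, 0,
              1, 1, 1, 1, 1,
              1, 0, 0, 0, 1,
              1, 0, 0, 0, 1]
    training_example = (sample, one_hot("A"))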
Another impressive application study involved NETtalk, a neural network
that learns to produce phonetic strings, which in turn specify pronunciation for
written text. The input to the network in this case was English text in the
form of successive letters that appear in sentences. The output of the network
was phonetic notation for the proper sound to produce given the text input. The
output was linked to a speech generator so that an observer could hear the
network learn to speak. This network, trained by Sejnowski and Rosenberg,
learned to pronounce English text with a high level of accuracy (Za93).
Neural network studies have also been done for adaptive control
applications. A classic implementation of a neural network control system was
the broom-balancing experiment, originally done by Widrow and Smith in 1963.
The network learned to move a cart back and forth in such a way that a
broom balanced upside-down on its handle tip remained upright (Da90).
More recently, application studies were done for teaching a robotic arm how to
get to its target position, and for steadying a robotic arm. Research was also
done on teaching a neural network to control an autonomous vehicle using
simulated, simplified vehicle control situations (Wo96).
Neural networks are expected to complement rather than replace other
technologies. Tasks that are done well by traditional computer methods
need not be addressed with neural networks, but the range of technologies
that neural networks can complement is far-reaching (He90). For example,
expert systems and rule-based knowledge-processing techniques are adequate
for some applications, although neural networks have the ability to learn
rules more flexibly. More sophisticated systems may in some cases be built
from a combination of expert systems and neural networks (Wo96). Sensors
for visual or acoustic data may be combined in a system that includes a
neural network for analysis and pattern recognition. Robotics and control
systems may use neural network components in the future. Simulation
techniques, such as simulation languages, may be extended to include
structures that allow us to simulate neural networks. Neural networks may
also play a new role in the optimization of engineering designs and
industrial resources (Za93).
Many design choices are involved in developing a neural network
application. The first choice is the general area of application; usually
this is an existing problem that appears amenable to solution with a
neural network. Next, the problem must be defined specifically enough that
inputs and outputs to the network may be selected. Choices for inputs and
outputs involve identifying the types of patterns to go into and out of
the network; in addition, the researcher must decide how those patterns
are to represent the needed information. Next, internal design choices
must be made, including the topology and size of the network (Kh90). The
number of processing units is specified, along with the specific
interconnections that the network is to have. Processing units are usually
organized into distinct layers, which are either fully or partially
interconnected (Vo94).
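These internal choices can be sketched as follows; the layer sizes, the
weight range, and the density parameter governing partial interconnection
are arbitrary illustrations, not recommendations from the cited sources:

    import random

    def build_network(layer_sizes, full=True, density=0.5):
        # One weight matrix per pair of adjacent layers. With full
        # interconnection, every unit links to every unit in the next
        # layer; partial interconnection keeps only a fraction of links.
        layers = []
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
            matrix = [[random.uniform(-0.5, 0.5)
                       if full or random.random() < density else 0.0
                       for _ in range(n_in)]
                      for _ in range(n_out)]
            layers.append(matrix)
        return layers

    dense_net = build_network([25, 12, 26])               # fully connected
    sparse_net = build_network([25, 12, 26], full=False)  # partially connected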
There are additional choices for the dynamic activity of the processing
units. A variety of neural network paradigms are available, and each
paradigm dictates how the readjustment of parameters takes place; this
readjustment is what produces learning in the network. Next, there are
internal parameters that must be tuned to optimize the ANN design (Kh90).
One such parameter is the learning rate of the back-error propagation
paradigm. The value of this parameter influences the rate of learning by
the network and may influence how successfully the network learns (Cr95).
Experiments indicate that learning occurs more successfully if this
parameter is decreased during a learning session. Some paradigms utilize
more than one parameter that must be tuned. Typically, network parameters
are tuned with the help of experimental results and experience on the
specific application problem under study (Kh90).
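A minimal sketch of one such schedule appears below; the decay formula
and its constants are assumptions chosen for illustration, not a
prescription from the cited sources:

    def decayed_rate(initial_rate, epoch, decay=0.01):
        # Decrease the learning rate as the training session progresses.
        return initial_rate / (1.0 + decay * epoch)

    for epoch in (0, 50, 100, 500):
        print(epoch, round(decayed_rate(0.5, epoch), 3))
        # prints 0.5, 0.333, 0.25, 0.083 as the session progresses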
Finally, the selection of training data presented to the neural network
influences whether or not the network learns a particular task. As with a
child, how well a network learns depends on the examples presented to it.
A good set of examples, one that illustrates the tasks to be learned well,
is necessary for the desired learning to take place. The set of training
examples must also reflect the variability in the patterns that the
network will encounter after training (Wo96).
Although a variety of neural network paradigms have already been
established, there are many variations currently being researched. Typically
these variations add more complexity to gain more capabilities (Kh90). Examples
of additional structures under investigation include the incorporation of delay
components, the use of sparse interconnections, and the inclusion of interaction
between different interconnections. Multiple neural networks may also be
combined, with the outputs of some networks becoming the inputs of others.
Such combined systems sometimes provide improved performance and faster
training times (Da90).
Implementations of neural networks come in many forms. The most widely
used implementations of neural networks today are software simulators. These
are computer programs that simulate the operation of the neural network. The
speed of the simulation depends on the speed of the hardware upon which the
simulation is executed. A variety of accelerator boards are available for
individual computers to speed the computations (Wo96).
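In essence, a simulator is a sequential program that propagates patterns
through stored interconnection weights. The minimal Python sketch below
(the toy network and the sigmoid units are illustrative assumptions)
emulates the parallel architecture one layer at a time:

    import math

    def simulate_forward(layer_weights, pattern):
        # Propagate one input pattern through the network, applying
        # each layer's weight matrix and a sigmoid squashing function.
        activations = pattern
        for matrix in layer_weights:
            activations = [
                1.0 / (1.0 + math.exp(-sum(
                    w * a for w, a in zip(row, activations))))
                for row in matrix]
        return activations

    # Two inputs -> two hidden units -> one output; weights are arbitrary.
    toy_net = [[[0.5, -0.4], [0.3, 0.8]],
               [[1.0, -1.0]]]
    print(simulate_forward(toy_net, [1.0, 0.0]))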
Simulation is key to the development and deployment of neural network
technology. With a simulator, one can establish most of the design choices in a
neural network system. The choice of inputs and outputs can be tested as well
as the capabilities of the particular paradigm used (Wo96).
Implementations of neural networks are not limited to computer simulation,
however. An implementation could be an individual calculating the changing
parameters of the network using pencil and paper. Another implementation would
be a collection of people, each one acting as a processing unit, using a hand-
held calculator (He90). Although these implementations are not fast enough to
be effective for applications, they are nevertheless methods for emulating a
parallel computing structure based on neural network architectures (Za93).
One challenge to neural network applications is that they can require
more computational power than readily available computers provide, and the
tradeoffs in scaling up such a network are sometimes not apparent from a
small-scale simulation. The performance of a neural network must therefore
be tested using a network of the same size as the one to be used in the
application (Za93).
The response of an ANN may be accelerated through the use of specialized
hardware. Such hardware may be designed using analog computing technology or a
combination of analog and digital. Development of such specialized hardware is
underway, but there are many problems yet to be solved. Such technological
advances as custom logic chips and logic-enhanced memory chips are being
considered for neural network implementations (Wo96).
No discussion of implementation would be complete without mention of the
original neural networks: biological nervous systems. These
systems provided the first implementation of neural network architectures. Both
systems are based on parallel computing units that are heavily interconnected,
and both systems include feature detectors, redundancy, massive parallelism, and
modulation of connections (Vo94, Gr93).
However, the differences between biological systems and artificial neural
networks are substantial. Artificial neural networks usually have regular
interconnection topologies, based on a fully connected, layered organization.
While biological interconnections do not precisely fit the fully connected,
layered organization model, they nevertheless have a defined structure at the
systems level, including specific areas that aggregate synapses and fibers, and
a variety of other interconnections (Lo94, Gr93). Although many connections in
the brain may seem random or statistical, it is likely that considerable
precision exists at the cellular and ensemble levels as well as the system level.
Another difference between artificial and biological systems arises from the
fact that the brain organizes itself dynamically during a developmental period,
and can permanently fix its wiring based on experiences during certain critical
periods of development. This influence on connection topology does not
occur in current ANNs (Lo94, Da90).
The future of neurocomputing can benefit greatly from biological studies.
Structures found in biological systems can inspire new design architectures for
ANN models (He90). Similarly, biology and cognitive science can benefit from
the development of neurocomputing models. Artificial neural networks do, for
example, illustrate ways of modeling characteristics that appear in the human
brain (Le91). Conclusions, however, must be carefully drawn to avoid confusion
between the two types of systems.

REFERENCES

[Cr95] Cross, et al., "Introduction to Neural Networks", Lancet, Vol. 346
(October 21, 1995), p. 1075.

[Da90] Dayhoff, J. E. Neural Networks: An Introduction, Van Nostrand
Reinhold, New York, 1990.

[Fr93] Franklin, Hardy, "Neural Networking", Economist, Vol. 329 (October 9,
1993), p. 19.

[Ga93] Gallant, S. I. Neural Network Learning and Expert Systems, MIT Press,
Massachusetts, 1993.

[Gr93] Gardner, D. The Neurobiology of Neural Networks, MIT Press,
Massachusetts, 1993.

[He90] Hecht-Nielsen, R. Neurocomputing, Addison-Wesley Publishing Company,
Massachusetts, 1990.

[He95] Helliar, Christine, "Neural Computing", Management Accounting, Vol.
73 (April 1, 1995), p. 30.

[Kh90] Khanna, T. Foundations of Neural Networks, Addison-Wesley
Publishing Company, Massachusetts, 1990.

[Le91] Levine, D. S. Introduction to Neural & Cognitive Modeling, Lawrence
Erlbaum Associates Publishers, New Jersey, 1991.

[Lo94] Loofbourrow, Tod, "When Computers Imitate the Workings of the Brain",
Boston Business Journal, Vol. 14 (June 10, 1994), p. 24.

[Vo94] Vogel, William, "Minimally Connective, Auto-Associative, Neural
Networks", Connection Science, Vol. 6 (January 1, 1994), p. 461.

[Wo96] Internet information:
http://www.mindspring.com/~zsol/nnintro.html
http://ourworld.compuserve.com/homepages/ITechnologies/
http://sharp.bu.edu/inns/nn.html
http://www.eeb.ele.tue.nl/neural/contents/neural_networks.html
http://www.ai.univie.ac.at/oefai/nn/
http://www.nd.com/welcome/whatisnn.htm
http://www.mindspring.com/~edge/neural.html
http://vita.mines.colorado.edu:3857/lpratt/applied-nnets.html

[Za93] Zahedi, F. Intelligent Systems for Business: Expert Systems with Neural
Networks, Wadsworth Publishing Company, California, 1993.
