Saturday, October 16, 2010

Artificial Neural Network (ANN)

Artificial neural networks are a method of information processing and computation that takes advantage of today's technology. By mimicking the processes found in biological neurons, artificial neural networks can learn from, and make predictions about, a given set of data. In data analysis, neural networks are often more robust than classical statistical methods because of their ability to handle small variations in parameters and noise.

An Artificial Neural Network is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information-processing system: a large number of highly interconnected processing elements, known as neurons, working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons, and the same is true of ANNs.

Why use neural networks?
Neural networks can find patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyse.

They offer the following advantages:
a) Adaptive learning:
The ability to learn how to do tasks based on the data given for training or initial experience.
b) Self-organisation:
A network can create its own organisation or representation of the information it receives during learning time.
c) Real-time operation:
Computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
d) Fault tolerance via redundant information coding:
Partial destruction of a network leads to a corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

Neural networks versus conventional computers
Neural networks take a different approach to problem solving than conventional computers. Conventional computers use an algorithmic approach: the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That limits the problem-solving capability of conventional computers to problems that we already understand and know how to solve.

Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task.

Neural networks and conventional algorithmic computers are not in competition but complement each other. Some tasks, such as arithmetic operations, are more suited to an algorithmic approach; others are more suited to neural networks. Many systems use a combination of the two approaches in order to perform at maximum efficiency.

Different architectures of neural networks
1) Feed-forward networks:
Feed-forward ANNs allow signals to travel one way only, from input to output. There is no feedback, i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are widely used in pattern recognition. This type of organisation is also referred to as bottom-up or top-down.

2) Feedback networks:
Feedback networks can have signals travelling in both directions by introducing loops into the network. Feedback networks are powerful and can become quite complex. Their state changes dynamically until they reach an equilibrium point, and they remain at that point until the input changes. Feedback architectures are also referred to as interactive or recurrent.
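As an illustration of this settling behaviour, the sketch below (not from the original text; the stored pattern and Hebbian update rule are a standard Hopfield-style choice) loops a small feedback network until its state stops changing:

```python
import numpy as np

def settle(weights, state, max_steps=100):
    """Repeatedly update the state until it stops changing (equilibrium)."""
    state = np.array(state)
    for _ in range(max_steps):
        new_state = np.where(weights @ state >= 0, 1, -1)
        if np.array_equal(new_state, state):
            return new_state          # equilibrium reached
        state = new_state
    return state

# Store the pattern [1, -1, 1, -1] via a Hebbian outer product (zero diagonal).
pattern = np.array([1, -1, 1, -1])
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)

# A noisy version of the pattern settles back to the stored equilibrium.
noisy = np.array([1, 1, 1, -1])
print(settle(W, noisy))  # -> [ 1 -1  1 -1]
```

Once the state equals the stored pattern, further updates leave it unchanged, which is exactly the "remain at the equilibrium point until the input changes" behaviour described above.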

3) Network layers:
A typical artificial neural network consists of three layers of units: a layer of "input" units is connected to a layer of "hidden" units, which is in turn connected to a layer of "output" units.

Input units:
The activity of the input units represents the raw information that is fed into the network.

Hidden units:
The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and hidden units.

Output units:
The behaviour of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
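The three-layer structure above can be sketched as a single forward pass. The layer sizes, random weights and input values below are illustrative assumptions, and the sigmoid transfer function is one common choice:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))  # weights: 4 input units -> 3 hidden units
W_output = rng.normal(size=(2, 3))  # weights: 3 hidden units -> 2 output units

# Raw information fed into the network (the input units' activity).
inputs = np.array([0.5, -1.0, 0.25, 0.0])

hidden = sigmoid(W_hidden @ inputs)  # hidden activity: inputs + weights
output = sigmoid(W_output @ hidden)  # output activity: hidden units + weights
print(output.shape)  # (2,)
```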

4) Perceptrons

The most influential work on neural networks went under the heading of 'perceptrons', a term coined by Frank Rosenblatt. The perceptron turns out to be an MCP model (a McCulloch-Pitts neuron) with some additional, fixed preprocessing. Its association units extract specific, localised features from the input images. Perceptrons mimic the basic idea behind the human visual system. They were mainly used for pattern recognition, even though their capabilities extended much further.
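A minimal sketch of the classic perceptron learning rule (the AND data set, learning rate and epoch count are illustrative choices, not from the original text): the weights are nudged whenever the thresholded output disagrees with the target.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Threshold unit: fire (1) if the weighted sum exceeds zero.
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - predicted
            # Adjust each weight in proportion to its input and the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learn the logical AND function, which is linearly separable.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```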

Learning Process
Based on the patterns presented and the subsequent response of the network, learning can be divided into two general paradigms:
1) Associative mapping
In associative mapping the network learns to produce a particular pattern on the set of output units whenever another particular pattern is applied on the set of input units. Associative mapping can be divided into two mechanisms:

1a) Auto-association:
An input pattern is associated with itself, and the states of the input and output units coincide. This provides pattern completion: the network can reproduce a whole pattern when a portion of it, or a distorted version, is presented.

1b) Hetero-association:
The network stores pairs of patterns, building an association between two sets of patterns. It is related to two recall mechanisms:

Nearest-neighbour recall:
The output pattern produced corresponds to the stored input pattern that is closest to the pattern presented.

Interpolative recall:
The output pattern is a similarity-dependent interpolation of the stored patterns corresponding to the pattern presented.
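The two recall mechanisms can be sketched as follows. The stored pattern pairs are made up for illustration, and the inverse-distance blending is just one simple way to realise a similarity-dependent interpolation:

```python
import numpy as np

# Stored hetero-associative pairs: each input pattern maps to an output pattern.
stored_inputs = np.array([[1.0, 0.0], [0.0, 1.0]])
stored_outputs = np.array([[10.0], [20.0]])

def nearest_neighbour_recall(pattern):
    # Return the output stored with the closest input pattern.
    distances = np.linalg.norm(stored_inputs - pattern, axis=1)
    return stored_outputs[np.argmin(distances)]

def interpolative_recall(pattern):
    # Blend stored outputs, weighting each by inverse distance to the probe.
    distances = np.linalg.norm(stored_inputs - pattern, axis=1)
    weights = 1.0 / (distances + 1e-9)
    weights /= weights.sum()
    return weights @ stored_outputs

probe = np.array([0.9, 0.1])
print(nearest_neighbour_recall(probe))  # closest stored input is [1, 0] -> [10.]
```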

2) Regularity detection
In regularity detection, units learn to respond to particular properties of the input patterns. Whereas in associative mapping the network stores the relationships among patterns, in regularity detection the response of each unit has a particular 'meaning'. This type of learning mechanism is essential for feature discovery and knowledge representation.

Every neural network possesses knowledge, which is contained in the values of the connection weights. Modifying the knowledge stored in the network as a function of experience implies a learning rule for changing the values of the weights. Information is stored in the weight matrix of a neural network; learning is the determination of the weights. Following the way learning is performed, we can distinguish two types of neural networks:

i) Fixed networks
The weights do not change. In such networks, the weights are fixed a priori according to the problem to be solved.
ii) Adaptive networks
The weights can change. For these networks, all learning methods can be classified into two major categories:
Supervised learning
This incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. Global information may be required during the learning process. Paradigms of supervised learning include error-correction learning, reinforcement learning and stochastic learning.

Unsupervised learning
It uses no external teacher and is based only upon local information. It is also referred to as self-organisation, because it self-organises the data presented to the network and detects their emergent collective properties.

Transfer Function
The behaviour of an artificial neural network depends on both the weights and the input-output function (transfer function) that is specified for the units. This function typically falls into one of three categories:
a) linear (or ramp)
b) threshold
c) sigmoid
For linear (or ramp) units, the output activity is proportional to the total weighted input.
For threshold units, the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value.
For sigmoid units, the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations.
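The three unit types can be sketched as simple functions (the slope and threshold values below are illustrative defaults, not from the original text):

```python
import math

def linear_unit(total_input, slope=1.0):
    return slope * total_input                   # output proportional to input

def threshold_unit(total_input, threshold=0.0):
    return 1 if total_input > threshold else 0   # one of two levels

def sigmoid_unit(total_input):
    return 1.0 / (1.0 + math.exp(-total_input))  # smooth, non-linear

print(linear_unit(0.5), threshold_unit(0.5), round(sigmoid_unit(0.5), 3))
# -> 0.5 1 0.622
```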

To make a neural network that performs some specific task, we must choose how the units are connected to one another, and we must set the weights on the connections appropriately. The connections determine whether it is possible for one unit to influence another; the weights specify the strength of that influence.

Applications of neural networks

1) Detection of medical phenomena:
A variety of health-related indices (e.g., a combination of heart rate, levels of various substances in the blood, and respiration rate) can be monitored. The onset of a particular medical condition could be associated with a very complex combination of changes on a subset of the variables being monitored. Neural networks have been used to recognise this predictive pattern so that the appropriate treatment can be prescribed.

2) Stock market prediction:
Fluctuations of stock prices and stock indices are complex, multidimensional and, at best, only partially deterministic phenomena. Neural networks are used by many technical analysts to make predictions about stock prices based upon a large number of factors, such as the past performance of other stocks.

3) Credit assignment:
A number of pieces of information are usually known about an applicant for a loan. For instance, the applicant's age, education, occupation and many other facts may be available. After training a neural network on historical data, neural network analysis can identify the most relevant characteristics and use them to classify applicants as good or bad credit risks.
