In recent years, interest in neural networks has surged in applied analysis. But the first ideas that later developed into the neural networks we use today go back to Warren McCulloch and Walter Pitts in the early 1940s.

Their paper, “A logical calculus of the ideas immanent in nervous activity”, was published just as programmable computers were beginning to be used in applied mathematics. But because the machines of the time lacked computing power, the mathematical community’s interest in neural networks declined until around 2010, when new applications triggered a resurgence of interest.

One interesting mathematical application is discussed in Issue 2, Volume 29 of the European Journal of Applied Mathematics, where Kostas Goulianas, Athanasios Margaris, Ioannis Refanidis and Kostas Diamantaras propose a neural network architecture for solving systems of non-linear equations in their paper “Solving polynomial systems using a fast adaptive back propagation-type neural network algorithm”. Such systems are often considerably harder to solve than systems of linear equations, for which very convenient methods (e.g. Gaussian elimination) are already known.
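To make that contrast concrete, here is a minimal Python sketch; the particular systems are made up for illustration. The linear system is solved directly (essentially by Gaussian elimination), while the polynomial system is handed to a Newton-type iterative solver that needs a starting guess and finds only one root at a time.

```python
import numpy as np
from scipy.optimize import fsolve

# Linear systems: solved directly, e.g. by Gaussian elimination.
# (numpy.linalg.solve uses an LU factorisation under the hood.)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x_linear = np.linalg.solve(A, b)        # exact up to round-off

# Polynomial systems: no direct elimination in general; we fall back
# on iterative methods. Example system (made up for illustration):
#   x^2 + y^2 = 1
#   x*y       = 0.25
def polynomial_system(v):
    x, y = v
    return [x**2 + y**2 - 1.0,
            x * y - 0.25]

x_poly = fsolve(polynomial_system, x0=[1.0, 0.5])  # Newton-type iteration

print("linear solution:    ", x_linear)
print("polynomial solution:", x_poly)
```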

In the introduction to their paper, the authors first distinguish between three approaches from the literature for solving polynomial systems of equations: symbolic methods, numerical methods and geometric methods. They discuss the advantages and problems of each approach, and then propose an efficient algorithm based on neural networks.

Neural networks are a form of supervised learning inspired by the mammalian nervous system, a network of neurons that transmits information through the body. The artificial neural network in the paper consists of four layers of neurons, and its weights are computed using a backpropagation algorithm, which calculates an error at the output layer and then redistributes this error backwards through the layers.
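As a rough illustration of that error redistribution, here is a deliberately small Python sketch of backpropagation with a single hidden layer. It is not the four-layer architecture of the paper; the fitting task and all hyperparameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustration: one hidden layer trained to fit y = x^2.
X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)   # inputs
Y = X**2                                        # targets

W1 = rng.normal(scale=0.5, size=(1, 8))         # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))         # hidden -> output weights
b2 = np.zeros((1, 1))
lr = 0.1                                        # fixed learning rate

for step in range(5000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)                    # hidden activations
    Y_hat = H @ W2 + b2                         # network output

    # Error at the output layer.
    E = Y_hat - Y

    # Backward pass: redistribute the output error layer by layer.
    dW2 = H.T @ E / len(X)
    db2 = E.mean(axis=0, keepdims=True)
    dH = (E @ W2.T) * (1.0 - H**2)              # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0, keepdims=True)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean squared error:", float((E**2).mean()))
```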

In order to verify that their neural network works as intended, the authors tested it on some well-known systems of equations motivated by both mathematical and applied problems. They were able to solve high-dimensional systems of equations with fast convergence.

As a major advantage of this approach compared to previous ones from the literature, they state that “neural networks are universal approximators in the sense that they have the ability to simulate any functions of any type with a predefined degree of accuracy” and that “the adaptive learning rate is one of the most interesting features of the proposed simulator since it allows to each neuron of Layer 1 to be trained with its own learning rate value”.
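The paper’s own adaptation rule is not reproduced here, but the general flavour of a per-neuron learning rate can be sketched in a few lines of Python. The sign-based grow/shrink rule below is an assumption made for illustration (in the spirit of Rprop-style schemes), not the authors’ scheme.

```python
import numpy as np

# Toy illustration of per-neuron learning rates, NOT the paper's
# adaptation rule: each neuron's step size grows while its gradient
# keeps the same sign and shrinks when the sign flips.
n_neurons = 8
lr = np.full(n_neurons, 0.1)          # one learning rate per neuron
prev_grad = np.zeros(n_neurons)
weights = np.zeros(n_neurons)

for step in range(100):
    grad = 2.0 * (weights - 1.0)      # gradient of a toy quadratic loss
    same_sign = np.sign(grad) == np.sign(prev_grad)
    lr = np.where(same_sign, lr * 1.2, lr * 0.5)
    weights -= lr * grad              # element-wise, per-neuron steps
    prev_grad = grad

print("weights after training:", weights)   # converge towards 1.0
```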

“Solving polynomial systems using a fast adaptive back propagation-type neural network algorithm” can be read for free through 31st May 2019.
