Neural networks are a computing paradigm that is attracting increasing attention among computer scientists. In this book, theoretical laws and models previously scattered in the literature are brought together into a general theory of artificial neural nets. Always with a view to biology, and starting with the simplest nets, the book shows how the properties of models change when more general computing elements and net topologies are introduced. Each chapter contains examples, numerous illustrations, and a bibliography. The book is aimed at readers who seek an overview of the field or who wish to deepen their knowledge. It is suitable as a basis for university courses in neurocomputing.
A solid introduction to the theory of neural networks from before the deep learning revolution. It's not particularly useful for implementation, and it predates popular architectures like CNNs and LSTM networks, along with all the associated optimization and regularization schemes. But it has probably the clearest and most in-depth coverage of vanilla MLPs and stochastic networks I've come across -- way better than more modern texts like Goodfellow (which, frankly, generally sucks).