Row | Content

1 |

2 | Introduction to deep learning : from logical calculus to artificial intelligence / Sandro Skansi. Cham, Switzerland: Springer, 2018. 196 pages: illustrations. Classification (DDC): 006.312.
This textbook presents a concise, accessible and engaging first introduction to deep learning, offering a wide range of connectionist models that represent the current state of the art. The text explores the most popular algorithms and architectures in a simple and intuitive style, explaining the mathematical derivations step by step. Coverage includes convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks and autoencoders. Numerous examples in working Python code are provided throughout the book, and the code is also supplied separately at an accompanying website (an illustrative sketch in that spirit follows this record).
Topics and features:
- Introduces the fundamentals of machine learning, and the mathematical and computational prerequisites for deep learning
- Discusses feed-forward neural networks, and explores the modifications to these that can be applied to any neural network
- Examines convolutional neural networks, and the recurrent connections that can be added to a feed-forward neural network
- Describes the notion of distributed representations, the concept of the autoencoder, and the ideas behind language processing with deep learning
- Presents a brief history of artificial intelligence and neural networks, and reviews interesting open research problems in deep learning and connectionism
This clearly written and lively primer on deep learning is essential reading for graduate and advanced undergraduate students of computer science, cognitive science and mathematics, as well as fields such as linguistics, logic, philosophy, and psychology. Dr. Sandro Skansi is an Assistant Professor of Logic at the University of Zagreb and a Lecturer in Data Science at University College Algebra, Zagreb, Croatia.
Copies: 0. Digital documents: 1.
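
The record above mentions feed-forward networks accompanied by working Python examples. As a rough, hedged sketch in that spirit (not taken from the book or its website), the following trains a tiny two-layer feed-forward network with NumPy on a toy XOR dataset; the architecture, learning rate, and data are assumptions chosen purely for illustration.

```python
import numpy as np

# Minimal two-layer feed-forward network trained by gradient descent on a
# toy XOR dataset. Illustrative sketch only; not the book's own code.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)   # output layer

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass: hidden activations
    out = sigmoid(h @ W2 + b2)               # forward pass: predictions
    d_out = (out - y) * out * (1 - out)      # backprop through squared error
    d_h = (d_out @ W2.T) * h * (1 - h)       # backprop into the hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(3))   # should approach [[0], [1], [1], [0]]
```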

3 | Neural network design / Martin T. Hagan [et al.]. Boston: Martin Hagan, 2014. 800 pages: illustrations; 28 cm. Classification (DDC): 006.32.
This book, by the authors of the Neural Network Toolbox for MATLAB, provides clear and detailed coverage of fundamental neural network architectures and learning rules. The authors emphasize a coherent presentation of the principal neural networks, methods for training them, and their applications to practical problems.
Features:
- Extensive coverage of training methods for both feedforward networks (including multilayer and radial basis networks) and recurrent networks. In addition to conjugate gradient and Levenberg-Marquardt variations of the backpropagation algorithm, the text also covers Bayesian regularization and early stopping, which help ensure the generalization ability of trained networks (see the sketch after this record).
- Associative and competitive networks, including feature maps and learning vector quantization, are explained with simple building blocks.
- A chapter of practical training tips for function approximation, pattern recognition, clustering and prediction, along with five chapters presenting detailed real-world case studies.
- Detailed examples and numerous solved problems. Slides and comprehensive demonstration software can be downloaded from hagan.okstate.edu/nnd.html.
Copies: 1. Digital documents: 0.
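
As referenced in the record above, early stopping is one of the generalization techniques the book covers. Below is a minimal, hedged Python sketch of the idea (not the book's MATLAB toolbox code): training halts once validation loss stops improving for a fixed number of epochs. `train_one_epoch` and `val_loss` are hypothetical stand-ins for whatever framework is in use.

```python
import copy

# Early-stopping sketch: keep the best weights seen on a validation set and
# stop after `patience` epochs without improvement. The callables passed in
# are hypothetical placeholders, not part of any specific library.
def train_with_early_stopping(model, train_one_epoch, val_loss,
                              max_epochs=200, patience=10):
    best_loss = float("inf")
    best_model = None
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch(model)                 # one pass over the training data
        loss = val_loss(model)                 # loss on held-out validation data
        if loss < best_loss:
            best_loss = loss
            best_model = copy.deepcopy(model)  # remember the best weights so far
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                          # validation loss stopped improving
    return best_model if best_model is not None else model
```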

4 |

5 | Neural networks and deep learning : a textbook / Charu C. Aggarwal. Cham, Switzerland: Springer, 2018. 512 pages: illustrations. Classification (DDC): 006.32.
This book covers both classical and modern models in deep learning. Its chapters span three categories:
- The basics of neural networks: Many traditional machine learning models can be understood as special cases of neural networks. The first two chapters emphasize the relationship between traditional machine learning and neural networks. Support vector machines, linear/logistic regression, singular value decomposition, matrix factorization, and recommender systems are shown to be special cases of neural networks (a minimal illustration of this view follows this record). These methods are studied together with recent feature engineering methods like word2vec.
- Fundamentals of neural networks: A detailed discussion of training and regularization is provided in Chapters 3 and 4. Chapters 5 and 6 present radial-basis function (RBF) networks and restricted Boltzmann machines.
- Advanced topics in neural networks: Chapters 7 and 8 discuss recurrent neural networks and convolutional neural networks. Several advanced topics like deep reinforcement learning, neural Turing machines, Kohonen self-organizing maps, and generative adversarial networks are introduced in Chapters 9 and 10.
The book is written for graduate students, researchers, and practitioners. Numerous exercises are available along with a solution manual to aid in classroom teaching. Where possible, an application-centric view is highlighted in order to provide an understanding of the practical uses of each class of techniques.
Copies: 0. Digital documents: 1.
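
As a minimal, hedged illustration of the "special case" view described in the record above (an invented sketch, not the book's own material), logistic regression can be written as a single sigmoid neuron trained by gradient descent on the cross-entropy loss; the synthetic data below is made up purely for demonstration.

```python
import numpy as np

# Logistic regression viewed as a one-neuron neural network: a linear layer
# followed by a sigmoid, trained with cross-entropy. Synthetic toy data only.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                        # 200 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(3)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))           # forward: sigmoid "neuron"
    grad_w = X.T @ (p - y) / len(y)                  # gradient of cross-entropy
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

print("training accuracy:", np.mean((p > 0.5) == y))
```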