Terrence J. Sejnowski
- A tanulás tanulása (published 2018)
- Uncommon Sense Teaching: Practical Insights in Brain Science to Help Students Learn (11 editions, published 2021)
- The Deep Learning Revolution (19 editions, published 2018)
- The Computational Brain (13 editions, published 1992)
- Liars, Lovers, and Heroes: What the New Brain Science Reveals About How We Become Who We Are (5 editions, published 2002)
- ChatGPT and the Future of AI: The Deep Language Revolution
- 23 Problems in Systems Neuroscience (Computational Neuroscience Series) (5 editions, published 2005)
- Unsupervised Learning: Foundations of Neural Computation (3 editions, published 1999)
- Neural Codes and Distributed Representations: Foundations of Neural Computation (4 editions, published 1999)
- Massively-Parallel Architectures for Automatic Recognition of Visual Speech Signals

“It’s All about Scaling

Most of the current learning algorithms were discovered more than twenty-five years ago, so why did it take so long for them to have an impact on the real world? With the computers and labeled data that were available to researchers in the 1980s, it was only possible to demonstrate proof of principle on toy problems. Despite some promising results, we did not know how well network learning and performance would scale as the number of units and connections increased to match the complexity of real-world problems. Most algorithms in AI scale badly and never went beyond solving toy problems. We now know that neural network learning scales well and that performance continues to increase with the size of the network and the number of layers. Backprop, in particular, scales extremely well. Should we be surprised? The cerebral cortex is a mammalian invention that mushroomed in primates and especially in humans. And as it expanded, more capacity became available and more layers were added in association areas for higher-order representations. There are few complex systems that scale this well. The Internet is one of the few engineered systems whose size has also been scaled up by a million times. The Internet evolved once the protocols were established for communicating packets, much like the genetic code for DNA made it possible for cells to evolve. Training many deep learning networks with the same set of data results in a large number of different networks that have roughly the same average level of performance.”
― The Deep Learning Revolution
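The closing claim of the excerpt, that many networks trained on the same data end up with different weights but roughly the same performance, is easy to check on a toy problem. The sketch below is my own illustration, not code from the book: the dataset, the `train_mlp` helper, and all hyperparameters are assumptions chosen for demonstration. It trains a small one-hidden-layer network with plain batch backprop from several random seeds on the same synthetic XOR-style data; the learned weights differ from run to run, but the final accuracies typically land in a narrow band.

```python
import numpy as np

def make_toy_data(n=400, seed=0):
    """Synthetic XOR-style problem: label is 1 when the two features share a sign."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, seed, hidden=16, lr=1.0, epochs=3000):
    """One hidden layer trained with plain batch backprop (cross-entropy loss)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)          # forward pass
        p = sigmoid(h @ W2 + b2)
        dz2 = (p - y) / len(X)            # gradient of mean cross-entropy at the output
        dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
        dz1 = (dz2 @ W2.T) * h * (1 - h)  # backpropagate through the hidden layer
        dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2    # gradient-descent update
        W1 -= lr * dW1; b1 -= lr * db1
    preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
    return float((preds == y).mean())

X, y = make_toy_data()
accuracies = [train_mlp(X, y, seed=s) for s in range(5)]
print("final training accuracies per seed:", [round(a, 3) for a in accuracies])
# The weight matrices differ across seeds, but the accuracies typically cluster tightly.
```

Each seed produces a different set of weights, yet the printed accuracies are usually close to one another, which is the behavior the excerpt describes on a much larger scale.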