Terrence J. Sejnowski

Terrence Joseph Sejnowski is an Investigator at the Howard Hughes Medical Institute and the Francis Crick Professor at the Salk Institute for Biological Studies, where he directs the Computational Neurobiology Laboratory. He has done pioneering research in neural networks and computational neuroscience.
He is also Professor of Biological Sciences and Adjunct Professor in the Departments of Neurosciences, Psychology, Cognitive Science, and Computer Science and Engineering at the University of California, San Diego, where he directs the Institute for Neural Computation. In 2004 he was named Francis Crick Professor and Director of the Crick-Jacobs Center for Theoretical and Computational Biology at the Salk Institute.

Average rating: 4.0 · 1,848 ratings · 226 reviews · 23 distinct works
A tanulás tanulása (Hungarian edition of Learning How to Learn) · 4.40 avg rating · 2,608 ratings · published 2018
Uncommon Sense Teaching: Practical Insights in Brain Science to Help Students Learn · 4.18 avg rating · 959 ratings · published 2021 · 11 editions
The Deep Learning Revolution · 3.74 avg rating · 577 ratings · published 2018 · 19 editions
The Computational Brain · 3.97 avg rating · 86 ratings · published 1992 · 13 editions
Liars, Lovers, and Heroes: What the New Brain Science Reveals About How We Become Who We Are · 3.80 avg rating · 82 ratings · published 2002 · 5 editions
ChatGPT and the Future of AI: The Deep Language Revolution · 3.70 avg rating · 69 ratings · 3 editions
23 Problems in Systems Neuroscience · 3.90 avg rating · 20 ratings · published 2005 · 5 editions
Unsupervised Learning: Foundations of Neural Computation · 4.00 avg rating · 18 ratings · published 1999 · 3 editions
Neural Codes and Distributed Representations: Foundations of Neural Computation · 4.00 avg rating · 5 ratings · published 1999 · 4 editions
Massively-Parallel Architectures for AI: NETL, Thistle, and Boltzmann Machines · 0.00 avg rating · 0 ratings
Quotes by Terrence J. Sejnowski

“It’s All about Scaling

Most of the current learning algorithms were discovered more than twenty-five years ago, so why did it take so long for them to have an impact on the real world? With the computers and labeled data that were available to researchers in the 1980s, it was only possible to demonstrate proof of principle on toy problems. Despite some promising results, we did not know how well network learning and performance would scale as the number of units and connections increased to match the complexity of real-world problems. Most algorithms in AI scale badly and never went beyond solving toy problems. We now know that neural network learning scales well and that performance continues to increase with the size of the network and the number of layers. Backprop, in particular, scales extremely well. Should we be surprised? The cerebral cortex is a mammalian invention that mushroomed in primates and especially in humans. And as it expanded, more capacity became available and more layers were added in association areas for higher-order representations. There are few complex systems that scale this well. The Internet is one of the few engineered systems whose size has also been scaled up by a million times. The Internet evolved once the protocols were established for communicating packets, much like the genetic code for DNA made it possible for cells to evolve. Training many deep learning networks with the same set of data results in a large number of different networks that have roughly the same average level of performance.”
Terrence J. Sejnowski, The Deep Learning Revolution
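
The closing claim of this quote can be illustrated in miniature. The sketch below is not from the book; the XOR task, network size, and learning rate are illustrative assumptions. It trains the same tiny sigmoid network with plain backprop from five random seeds: each run descends from a different initialization to different weights, but the runs end at roughly the same final error.

import numpy as np

# Toy task: XOR inputs and targets (an illustrative stand-in, not from the book).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def train(seed, hidden=8, lr=1.0, steps=5000):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))    # hidden sigmoid layer
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # output sigmoid
        d_out = (out - y) * out * (1.0 - out)       # gradient of MSE at output
        d_h = (d_out @ W2.T) * h * (1.0 - h)        # backprop to hidden layer
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
    return float(np.mean((out - y) ** 2))

# Five independent runs: different final weights, comparable final error.
print([round(train(seed), 4) for seed in range(5)])
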


