This book covers the field of machine learning, which is the study of algorithms that allow computer programs to automatically improve through experience. The book is intended to support upper level undergraduate and introductory level graduate courses in machine learning.
This is an introductory book on Machine Learning. There is quite a lot of mathematics and statistics in the book, which I like. A large number of methods and algorithms are introduced.
The material covered is very interesting and clearly explained. I find the presentation, however, a bit lacking. I think it has to do with the chosen fonts and lack of highlighting of important terms. Maybe it would have been better to have shorter paragraphs too.
If you are looking for an introductory book on machine learning right now, I would not recommend this book, because in recent years better books have been written on the subject. These are obviously better because they cover more modern techniques. I give this book 3 out of 5 stars.
Great intro to ML! For someone who doesn't have a formal Comp Sci background, this took a lot out of me. I found it helpful to stop after every chapter and listen to a more recent lecture to tie loose ends. Highly recommend reading this book in conjunction with professor Ng's ML intro course.
This is a very compact, densely written volume. It covers all the basics of machine learning: perceptrons, support vector machines, neural networks, decision trees, Bayesian learning, etc. Algorithms are explained, but from a very high level, so this isn't a good reference if you're looking for tutorials or implementation details. However, it's quite handy to have on your shelf for a quick reference.
This is part of the required reading for the machine learning class in Georgia Tech's online master's program. I think it's weird that they use a book from the 90s, and that's especially annoying because it's expensive and relatively hard to find. It is a really well-written textbook, though.
This was my introductory book into the how and why machine learning works. I still come back to this book from time to time to serve as a reference point!
In my opinion Tom Mitchell serves up some good motivating examples for the algorithms and simply and clearly explains how they work.
I learned a lot from this book. The author assumes very little prior knowledge about math and statistics. For that reason, he takes care to explain equations thoroughly from a rigorous and intuitive perspective.
The book is old, and you'll see many references from the 1980s and 1990s. However, the content isn't about any specific technology; it's about the foundational ideas in the field of machine learning. For that reason, the content is still relevant in my opinion.
I would recommend this book to any machine learning beginner who wants to dive deeper into the field.
This book is old, but I thought it was a great introduction to Machine Learning. I'd never heard of Random Forests before reading this book, and this was very helpful. BUT if you already know a lot about ML, consider something else. This is an intuitive introduction for people who have basic math skills or first-year grad students (maybe even a motivated bachelor's student).
Read it back in 2018. I think it was my first book on ML. If you're well versed in probability, stats & linear algebra and want to get an outline of traditional ML, then this should serve as a nice weekend course on ML. :D
Yes, everything in the book is technically sound, the material holds up even after 30 years, and it is used in all of the machine learning courses. The problem is that the way the book is written, it's hard to parse information out of it, which essentially means that you have to read and follow along with entire paragraphs of boilerplate writing in passive voice with tiny nuggets of information that could have been more easily explained in a table or list. This makes it both hard to learn from, and hard to use as a reference.
This book is good for beginners in machine learning. Covers all of the basics pretty well. If you are interested in more modern sources, check https://voicesfromtheblogs.com/ai-ann.... Here you can learn about modern AI annotation tools and other trends in data processing. Working with raw data at the beginning of building software with machine learning features is really important, so the more you know the better.
Probably the first book you want in an academic setting when studying machine learning. It's simple yet effective, and contains fewer mathematical mind-twisters and more concepts of machine learning algorithms.
A little dated, but it has a nice way of introducing machine learning, classifying learning algorithms by their inductive biases. I would recommend other, more modern books, such as the one by Kevin Murphy.
Textbooks like this might not make for "fun" reading, but sometimes they're quite necessary. I was reading Tom Mitchell's classic machine learning survey book along with a machine learning survey class, as you might guess... and found it quite helpful. While the professor for my class made an effort to explain concepts and algorithms and such, I rarely began to understand the lectures until I read the textbook.
One key point for reading this book: each chapter and section builds upon past content, so reading from the beginning is actually a good idea (despite what the intro says about jumping into what chapter you need). Mitchell is always careful to explain terminology and such so you don't necessarily need to look it up elsewhere, so if a term pops up that you're not familiar with, chances are good you missed the earlier explanation of it!
An easy, engaging text with a good selection of introductory topics from the field of machine learning. Mitchell covers decision trees, neural nets, Bayesian methods, rules and concept learning, and reinforcement learning, among others. Each is treated at just the right level: enough detail to chew on the concepts, but not a slog into the marginal particulars of this or that technique. Recommended as a good starter kit for those interested in machine learning: from here, you can launch off in a variety of directions, or supplement with other resources as needed. Only caveat is that it's a tad dated (and no coverage of unsupervised learning), though this is a minor nit for a lightweight introduction to the theory and concepts behind the various learning approaches.
Machine Learning by Tom Mitchell was a good read that was surprisingly light on the math. It covered several different machine learning topics, including: Concept Learning, Decision Trees, Neural Networks, Bayesian Learning, Genetic Algorithms, Analytical Learning, and Reinforcement Learning. It also covers how to evaluate algorithms, providing a training set size bound, and discusses how to evaluate hypotheses using confidence intervals. I enjoyed the structure and the recurrence of specific concepts that I wasn't familiar with, like inductive bias and hypothesis search space. The beginning, which used Concept Learning, was very useful as a proxy, as the later algorithms get more complex.
The book does a good job summarizing various research areas in machine learning. It, however, lacks enough formal treatment. It also does not provide enough detail on each topic for the reader to fully understand the algorithms well enough to apply them. I would recommend reading this book as an overview of machine learning and then reading books on each individual topic afterwards.
This book is a classic, but I personally don't care for it - it strongly embodies the "ML view", which sees the world from the point of view of optimizing a cost function for future examples, without deriving insights from models. In other words, it is all about prediction and not about inference, which is not the point of view that I personally find interesting and valuable.
Very clear prose. Covers an interesting sample of both probabilistic and non-probabilistic methods. Starting to feel a bit dated, as it does not cover important methods developed over the last decade, such as support vector machines. Nonetheless, the topics covered are covered very well.
This is a little harder than the Russell AI book, but it doesn't have that book's problem of being incomplete. It doesn't try to give simple examples, but the math is complete, or complete enough for me. I didn't finish it all. It's more like a reference.