Daniel Geng and Shannon Shih created a 4-part (and ongoing) crash course on machine learning, available at Berkeley's ML blog. The first part was posted in November 2016 and the last in July 2017, so it's quite recent. Here's what it includes:
- Part 1 - Introduction, Regression/Classification, Cost Functions, and Gradient Descent
- Part 2 - Perceptrons, Logistic Regression, and SVMs
- Part 3 - Neural Networks
- Part 4 - Bias and Variance
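To give a taste of the math from Part 1, here is a tiny gradient-descent sketch in Python. This is my own illustration, not code from the course: it fits a single slope parameter to toy data by repeatedly stepping against the gradient of a squared-error cost.

```python
# Minimal gradient descent: fit the slope w of y = w * x to toy data
# by stepping against the gradient of the mean-squared-error cost.
# All names and values here are illustrative, not from the course.

def gradient_descent(xs, ys, lr=0.01, steps=1000):
    """Minimize J(w) = mean((w*x - y)^2) over the single parameter w."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # dJ/dw = (2/n) * sum((w*x - y) * x)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad  # step downhill along the cost surface
    return w

# Toy data generated with true slope 3 (y = 3 * x)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = gradient_descent(xs, ys)
```

The same update rule generalizes to many parameters; the course builds up exactly this intuition before moving to perceptrons and neural networks.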
This truly is a crash course: the first part starts with introductory ML concepts but takes off quickly and gets you into the math of ML, which is fundamental, in my opinion. It doesn't dwell much on the programming side, because once you understand the concepts, you can pick from a multitude of languages for implementation.
The graphics are stellar and aid understanding. There are also a few interactive graphics that you should really play with, as they deepen your grasp of the concepts; I'd especially recommend those from Part 3 on neural nets. Enjoy the learning!
To stay in touch with me, follow
Cristi Vlad, Self-Experimenter and Author