"It is this author’s personal belief that the most important part of machine learning is the mathematical foundation, followed closely by efficiency in implementation details."
Introductory ML material, in my opinion, can be less mathematically rigorous. The emphasis can be on intuitive understanding of the principles behind the various techniques, the strengths and weaknesses of each, and the application of ML techniques to simplified practice problems. It is easy to get lost in too much math and lose sight of real-world problem solving.
I've interviewed several people for machine learning/data science positions, and I've found that when people don't get the math behind machine learning, they don't get the machine learning. Math, specifically linear algebra, is the language that lets you move from our two- or three-dimensional thinking to the abstract high-dimensional space machine learning lives in. It's easy to draw a bunch of points on a piece of paper, draw a line through them, and say "this is linear regression!" It's much harder to argue why regularization is important and why/how you would want to use or tweak it. The math is essential to getting important aspects of machine learning like this.
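To make the regularization point concrete, here is a minimal sketch (my own toy, not from the comment above) of ridge regression's closed-form solution, where the `lam * I` term is exactly the piece the math makes visible: it shrinks the weights toward zero.

```python
import numpy as np

# Ridge regression in closed form: w = (X^T X + lam*I)^(-1) X^T y.
# The lam * I term is the regularizer; larger lam pulls the fitted
# weights toward zero, trading a little bias for lower variance.

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=20)

def ridge(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_unreg = ridge(X, y, 0.0)   # ordinary least squares
w_reg = ridge(X, y, 10.0)    # regularized fit: smaller weight norm
```

Seeing the penalty as an extra term in the normal equations is the kind of understanding that's hard to get from a picture alone.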
Although I think there are degrees of mathematical understanding in ML, and I've noticed people often mean very different things by statements like "less mathematically rigorous". Understanding how/why regularization works is pretty trivial mathematics, and if you don't understand that, I'd agree that ML is a bit too 'black box'. But look at something like the kernel trick in SVMs. I'd argue it's important to understand the idea of mapping points from one dimensionality to another in order to understand why you would use a linear vs. a Gaussian kernel. However, the mathematics required to create your own kernel functions is much less trivial. If you're going to be doing original research in SVMs, I would say this is required math, but for practical ML, knowledge of 'how' a kernel behaves without a deep understanding of 'why' would be adequate. I would consider an understanding of how but not why to be 'less mathematically rigorous'.
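The "mapping points from one dimensionality to another" idea can be sketched in a few lines. This is my own toy example, not from the thread: two concentric circles are not linearly separable in 2-D, but a hand-picked feature map that appends the squared radius makes a separating plane trivial; the Gaussian (RBF) kernel achieves the same kind of lift implicitly.

```python
import numpy as np

# phi lifts a 2-D point into 3-D by appending its squared radius.
def phi(x):
    return np.array([x[0], x[1], x[0]**2 + x[1]**2])

# Points on circles of radius 1 (inner class) and radius 3 (outer class).
ts = np.linspace(0, 2 * np.pi, 8, endpoint=False)
inner = [np.array([np.cos(t), np.sin(t)]) * 1.0 for t in ts]
outer = [np.array([np.cos(t), np.sin(t)]) * 3.0 for t in ts]

# In the lifted space the plane z = 5 separates the classes,
# since inner points have z = 1 and outer points have z = 9.
inner_sep = all(phi(x)[2] < 5 for x in inner)
outer_sep = all(phi(x)[2] > 5 for x in outer)

# The RBF kernel compares points as if they were mapped into an
# (infinite-dimensional) feature space, without computing phi at all:
def rbf(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
```

Knowing "how" here means knowing that the Gaussian kernel buys you nonlinear boundaries like this; the "why" (Mercer's condition, what makes a valid kernel) is the less trivial part the comment is pointing at.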
I think that's a fair point, but the issue is that most people stop with the application and don't pick up on the underlying math. So, we get articles[1] discussing the lack of data science talent that point to a lack of math as the reason.
I think it can be pretty tricky to get real intuition without having the math around. For example: exactly what assumptions do these methods make about the kinds of noise in the data? This can make a large difference in practice.
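A small illustration of that point (my own, under the standard textbook assumptions): least squares is the maximum-likelihood estimator under Gaussian noise, while least absolute deviations corresponds to Laplace noise. In the simplest case, estimating a constant, these reduce to the mean and the median, and a single outlier shows how much the noise assumption matters.

```python
import numpy as np

# Five clean measurements near 1.0, plus one gross outlier.
data = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 50.0])

mean_est = data.mean()        # L2-optimal estimate (assumes Gaussian noise)
median_est = np.median(data)  # L1-optimal estimate (assumes Laplace noise)

# The outlier drags the mean far from 1.0; the median barely moves.
```

Same data, two different implicit noise models, very different answers, which is exactly the kind of thing that's hard to see without the math.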
Was a little disappointed to see neural networks labeled "classical" with SVMs designated "modern". And nothing about deep learning? Autoencoders? How about different optimization methods, e.g. truncated Newton vs. gradient descent?
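For readers who haven't met the optimization-method distinction, here's a minimal sketch (mine, not from the material under discussion) of why second-order methods get mentioned alongside plain gradient descent: on a quadratic objective, a full Newton step lands on the minimum immediately, while gradient descent creeps toward it. Truncated Newton methods approximate that Newton step cheaply for large problems.

```python
import numpy as np

# Quadratic objective f(w) = 0.5 * w^T A w - b^T w, minimized at A^{-1} b.
A = np.array([[3.0, 0.0], [0.0, 0.5]])  # Hessian with uneven curvature
b = np.array([1.0, 1.0])
w_star = np.linalg.solve(A, b)

def grad(w):
    return A @ w - b

# One Newton step from the origin: w - H^{-1} grad(w), exact on a quadratic.
w_newton = np.zeros(2) - np.linalg.solve(A, grad(np.zeros(2)))

# Gradient descent with a conservative step size takes many iterations.
w_gd = np.zeros(2)
for _ in range(100):
    w_gd = w_gd - 0.1 * grad(w_gd)
```

The uneven curvature in `A` is what slows gradient descent down here, and it's the sort of structure curvature-aware methods exploit.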
Some of the most interesting recent developments in ML seem to be left out, even if it is just an introduction.