I've interviewed several people for machine learning/data science positions, and I've found that when people don't get the math behind machine learning, they don't get machine learning. Math, specifically linear algebra, is the language that lets you move from our 2/3 dimensional thinking to the abstract high-dimensional spaces machine learning lives in. It's easy to draw a bunch of points on a piece of paper, draw a line through them, and say "this is linear regression!" It's much harder to argue why regularization is important and why/how you would want to use or tweak it. The math is essential for grasping aspects of machine learning like that.
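To make the regularization point concrete, here's a small sketch (my own toy example in plain NumPy, not anything from the discussion above): with two nearly collinear features, ordinary least squares lets the weights blow up to fit noise, while a ridge penalty pulls them back toward something stable.

```python
import numpy as np

# Toy data with two nearly collinear features: the second column is the
# first plus a tiny perturbation, so OLS weights become huge and unstable.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 1))
X = np.hstack([x, x + 1e-4 * rng.normal(size=(50, 1))])  # near-duplicate column
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=50)

def fit(X, y, lam):
    # Closed-form ridge solution: w = (X'X + lam*I)^-1 X'y
    # (lam = 0 recovers ordinary least squares).
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols = fit(X, y, 0.0)
w_ridge = fit(X, y, 1.0)

# The ridge penalty can only shrink the weight vector relative to OLS.
print(np.linalg.norm(w_ols), np.linalg.norm(w_ridge))
```

This is exactly the kind of thing that's hard to argue for with pictures alone but falls out of a few lines of linear algebra.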
That said, I think there are degrees of mathematical understanding in ML, and I've noticed people often mean very different things when they make statements like "less mathematically rigorous". Understanding how/why regularization works is pretty trivial mathematics, and if you don't understand that, I'd agree ML is a bit too 'black box' for you. But look at something like the kernel trick in SVMs. I'd argue it's important to understand the idea of mapping points from one space to a higher-dimensional one in order to understand why you would use a linear vs. a Gaussian kernel. However, the mathematics required to create your own kernel functions is much less trivial. If you're going to be doing original research in SVMs I would say that math is required, but for practical ML, knowledge of 'how' a kernel behaves without a deep understanding of 'why' would be adequate. I would consider an understanding of how but not why to be 'less mathematically rigorous'.
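A quick sketch of the mapping idea (again my own toy example, not SVM code: the explicit feature map phi(x) = (x, x^2) stands in for what a kernel computes implicitly): 1-D points that no single threshold can separate become linearly separable after the map.

```python
import numpy as np

# Toy 1-D data: class 1 sits inside (-1, 1), class 0 outside.
x = np.array([-2.0, -1.5, -0.5, 0.0, 0.5, 1.5, 2.0])
y = (np.abs(x) < 1).astype(int)

# In the original 1-D space, every threshold (in either direction)
# misclassifies at least one point:
thresholds = np.linspace(-3, 3, 61)
best_1d = max(
    max(np.mean((x < t) == y), np.mean((x > t) == y)) for t in thresholds
)

# After the explicit feature map phi(x) = (x, x**2), the linear rule
# x**2 < 1 separates the classes perfectly -- a kernel (polynomial,
# Gaussian, ...) lets an SVM do this implicitly, without ever building phi.
phi = np.column_stack([x, x ** 2])
acc_2d = np.mean((phi[:, 1] < 1.0) == y)
print(best_1d, acc_2d)
```

Knowing that a Gaussian kernel behaves like a very high-dimensional version of this map is the 'how'; proving which functions are valid kernels (Mercer's condition) is the much less trivial 'why'.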