Shameless plug: I've made a deep learning course with practical content on a wide variety of computer vision topics, with slides, Google Colab notebooks, and Anki cards --> https://arthurdouillard.com/deepcourse/
I'll look at this properly when I'm not on mobile, but I noticed some minor issues. A typo that seems to be repeated a few times: "space-repetition" should be "spaced-repetition". There are also several unnecessary capitals in your opening sentence.
Alfredo is an excellent teacher who really cares about his students' learning. I've watched a number of his lecture recordings on YouTube and would highly recommend this course.
I've taken both classes. You can take Andrew Ng's class with no prior AI experience, though math is a nice addition. Every concept is broken down and explained, and you actually get to build real models using Python and NumPy. This was probably the best course I took as an introduction.
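For a taste of what those exercises feel like, here is a minimal sketch in the same spirit (my own toy example, not an actual assignment from the class): logistic regression trained from scratch with plain NumPy.

    import numpy as np

    # Tiny "real model" in plain NumPy: logistic regression trained by
    # batch gradient descent on a linearly separable toy dataset.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # labels from a known rule

    w, b, lr = np.zeros(2), 0.0, 0.1
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)          # cross-entropy gradient w.r.t. w
        grad_b = np.mean(p - y)                  # ... and w.r.t. b
        w -= lr * grad_w
        b -= lr * grad_b

    print(((p > 0.5) == y).mean())               # training accuracy, close to 1.0

Understanding why (p - y) shows up in that gradient is exactly the kind of thing the course walks you through.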
Yann LeCun's class has a ton of information and you can easily get lost. I spent a great deal of time after hours trying to figure out what was being said in class. I still can't tell you what energy-based models are. Not to take anything away from the class, but you will need a whole lot of outside resources to come out of it having actually learned the material.
The choice here is not either-or. You will be better off starting with Andrew's class, then muscling through Yann's.
Thank you @firefoxd for your interest in the course. The 2021 edition had energy-based models pushed earlier in the semester, and my lectures reflected this change, introducing a practical example (ellipse prediction). In the next edition (Spring 22) I'll move the introduction to EBMs up to the first lecture. In this way the overall concept should become familiar sooner, letting you appreciate the beauty of this perspective.
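In the meantime, here is a minimal toy sketch of the idea (my own illustration, not the course's ellipse example): an EBM assigns a scalar energy to every (input, output) pair, and inference means searching for the output with the lowest energy rather than computing it in a single forward pass.

    import numpy as np

    def energy(x, y, w):
        # Hypothetical toy energy: low when y agrees with a linear map of x.
        return np.sum((y - w @ x) ** 2)

    def infer(x, w, steps=100, lr=0.1):
        # Inference as optimization: minimize E over y by gradient descent,
        # using dE/dy = 2 * (y - w @ x) for this particular energy.
        y = np.zeros(w.shape[0])
        for _ in range(steps):
            y -= lr * 2 * (y - w @ x)
        return y

    w = np.array([[1.0, 2.0], [0.5, -1.0]])
    x = np.array([1.0, 3.0])
    y_hat = infer(x, w)
    print(y_hat, energy(x, y_hat, w))  # y_hat -> w @ x, energy -> 0

The payoff of this view is that the same machinery covers cases where several outputs are compatible with one input, which a plain feed-forward prediction can't express.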
If you have any specific questions, don't hesitate to ask them under any of the videos.
This cynical point of view is shared by a number of engineers I know. Another version of it is 'why is it worth learning the calculus of machine learning when that is mostly abstracted away by TensorFlow/PyTorch/JAX?'
To a software engineer accustomed to operating on layers of abstraction far removed from the hardware, this may seem a reasonable point. Why is it worth learning that pesky math, anyway?
I would argue that the machine learning engineers of today are more like electrical engineers than programmers, however. When something goes wrong, you don't have nice warning messages or error catching available to you. Like an electrical engineer with a voltmeter, one must begin probing inputs and outputs each step of the way. Good luck doing that if you do not understand how the components are supposed to work.
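To make the voltmeter analogy concrete, here is a minimal PyTorch sketch (a toy network of my own, not any particular production setup) that probes every layer's output with forward hooks:

    import torch
    import torch.nn as nn

    # "Voltmeter" debugging: attach forward hooks so we can read off the
    # statistics of each layer's output as a batch flows through the net.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

    def probe(name):
        def hook(module, inputs, output):
            print(f"{name}: mean={output.mean():.4f} std={output.std():.4f}")
        return hook

    for name, module in model.named_modules():
        if name:  # skip the top-level Sequential container itself
            module.register_forward_hook(probe(name))

    model(torch.randn(4, 10))  # one "measurement" pass on a random batch

If you don't know what a healthy activation distribution looks like at each stage, those readings tell you nothing, which is exactly the point.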
YMMV copying and tweaking others' code, but I believe we are still far from hands-free 'AutoML'. Just ask anyone who has shipped a model to production whether AWS AutoML was sufficient for them, and whether they needed someone who understands backprop at some point during the model training process.
That works only for cats-vs-dogs toy classifiers.
It has never worked for anything practical.
And no, deep learning is serious science. It is not just throwing piles of data at PyTorch and running it on an 8-GPU cluster.
I don't blame anyone for holding this view. Megacorps with billions of dollars are in a race to train the biggest language models, and that is the news every media outlet focuses on.
In this sense, computers and language, for example, are also not properties of the natural world. So you can't have sciences that study and/or use computers or languages?