Yann LeCun’s 2021 Deep Learning Course at CDS free and fully online (nyu.edu)
230 points by MAXPOOL on Nov 14, 2021 | hide | past | favorite | 22 comments


Shameless plug: I've made a deep learning course with practical content on a wide variety of computer vision topics --> https://arthurdouillard.com/deepcourse/

with slides, Google Colab notebooks, and Anki cards


I'll look at this properly when I'm not on mobile, but I noticed some minor issues. A typo that seems to be repeated a few times: "space-repetition" should be "spaced-repetition". There are also several unnecessary capitals in your opening sentence.


Thanks for the remarks, I'll fix that.


Anki Cards? I’m sold


This looks great


It's important to also credit Alfredo Canziani who organized much of the course.


Alfredo is an excellent teacher who really cares about his students learning. I've watched a number of his lecture recordings on YouTube, and would highly recommend this course.


Thank you for the kind words :)


I must mention that Alfredo Canziani is a true gem, and it's highly edifying and entertaining to watch his lectures.

He is at his best in the offline lectures recorded in class; for those, you can watch the videos from 2020.

He is going to be a teaching superstar, if he isn't one already.

He invests significant time in making proper and helpful visualizations, and engages with practically anyone who has something valid to say.

Yann LeCun is a fantastic teacher as well. You would think that a Turing Award winner would be ordinary at best and bad at worst, but you would be wrong.


Anyone know how this compares to Andrew Ng's course on Coursera? (really a set of multiple courses)

https://www.coursera.org/specializations/deep-learning


I've taken both classes. You can take Andrew Ng's class with no prior AI experience, though some math background is a nice addition. Every concept is broken down and explained. You actually get to build real models using Python and NumPy. This was probably the best course I took as an introduction.

Yann LeCun's class has a ton of information and you can easily get lost. I spent a great deal of time after hours trying to figure out what was said in class. I still can't tell you what energy-based models are. Not to take anything away from the class, but you will need a whole lot of outside resources to come out of it having learned the material.

The choice here is not either/or. You will be better off starting with Andrew's class, then working through Yann's.


Thank you @firefoxd for your interest in the course. The 2021 edition had energy-based models pushed earlier in the semester, and my lectures reflected this change, introducing a practical example (ellipse prediction). In the next edition (Spring 22) I'll move the introduction to EBMs up to the first lecture. In this way the overall concept should become familiar sooner, letting you appreciate the beauty of this perspective.

If you have any specific questions, don't hesitate to ask them under any of the videos.
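For what it's worth, the core EBM idea fits in a few lines: an energy function E(x, y) scores how compatible a candidate output y is with an input x, and inference means searching for the y that minimizes that energy. Here is a toy NumPy sketch (the quadratic energy and all names are invented for illustration, not taken from the course):

```python
import numpy as np

def energy(x, y, w):
    # Energy E(x, y): low when candidate output y is compatible with input x.
    # Toy choice: quadratic penalty around the linear prediction w @ x.
    return float(np.sum((y - w @ x) ** 2))

def predict(x, w, steps=200, lr=0.1):
    # Inference = gradient descent on the energy with respect to y,
    # rather than a single forward pass producing y directly.
    y = np.zeros(w.shape[0])
    for _ in range(steps):
        grad = 2.0 * (y - w @ x)  # dE/dy for the quadratic energy above
        y -= lr * grad
    return y

w = np.array([[2.0, 0.0], [0.0, 3.0]])
x = np.array([1.0, 1.0])
y_hat = predict(x, w)  # converges toward w @ x = [2., 3.]
```

The point of the framing is that the minimization can handle problems with multiple plausible outputs (like the ellipse-prediction example), where a plain feed-forward regression would average them.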


What are the prerequisites prior to taking this course? TIA!


Prerequisites are listed in the second sentence...



I wonder if students who take this class are better at throwing their pile of data into TensorFlow.


This cynical point of view is shared by a number of engineers I know. Another version of it is 'why bother learning the calculus of machine learning when it is mostly abstracted away by TensorFlow/PyTorch/JAX?'

To a software engineer accustomed to operating on layers of abstraction far removed from the hardware, this may seem a reasonable point. Why is it worth learning that pesky math, anyway?

I would argue, however, that the machine learning engineers of today are more like electrical engineers than programmers. When something goes wrong, you don't have nice warning messages or error catching available to you. Like an electrical engineer with a voltmeter, you must probe inputs and outputs at each step of the way. Good luck doing that if you do not understand how the components are supposed to work.
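The voltmeter analogy can be made concrete: absent useful error messages, you inspect intermediate activations by hand and sanity-check them. A toy NumPy sketch (the network, shapes, and probe names are all invented for illustration):

```python
import numpy as np

# A made-up two-layer network; the point is probing each stage's output
# rather than treating the model as a black box.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # hidden layer weights
W2 = rng.normal(size=(2, 4))  # output layer weights

def forward(x, probes):
    h_pre = W1 @ x
    probes["h_pre"] = h_pre        # probe: pre-activation ("voltmeter" reading)
    h = np.maximum(h_pre, 0.0)     # ReLU
    probes["h"] = h                # probe: hidden activation
    out = W2 @ h
    probes["out"] = out            # probe: final output
    return out

probes = {}
y = forward(np.ones(3), probes)
# Sanity checks a debugging engineer might run at each probe point:
assert np.all(np.isfinite(probes["h_pre"]))  # no NaNs/infs upstream
assert np.all(np.isfinite(y))
```

In PyTorch the same idea is typically done with forward hooks on modules, but knowing what a healthy reading looks like at each probe point is exactly the understanding the course teaches.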

YMMV by copying and tweaking others' code, but I believe we are still far off from hands-free 'AutoML'. Just ask anyone who has sent a model to deployment whether AWS AutoML was sufficient for them, and whether they needed someone who understands backprop at some point during model training.


That only works for cats-vs-dogs toy classifiers.

It has never worked for anything practical.

And no, deep learning is high science. It is not just throwing piles of data at PyTorch and running it on an 8-GPU cluster.

I don't blame anyone for having this view. Megacorps with billions of bucks are in a race to train the biggest language models, and that is the news every media outlet focuses on.


Deep learning is definitely not science


We still don't know how exactly things work inside.

But we will, someday.

We once considered the existence of "unknown forces" in basic planetary motion, but we figured it out later.

There are many directions; one of them is the work by Bronstein et al. on Geometric Deep Learning.


Planetary motion is a property of the natural world. Deep learning algorithms are not. Deep learning is not science.


In that sense computers and languages, for example, are also not properties of the natural world. So you can't have sciences that study and/or use computers or languages?



