In classic fashion, I'm fascinated by [what I suspect is] a different aspect of this syllabus than OP.
> An autograder will be distributed with some of the homework. The autograder is there so you can tell if your work is correct.
This kind of thing makes me wish I had done better in high school and could have studied at a school of Stanford's caliber. I can think of many occasions in which I thought my code was satisfactory, but it failed to meet the professor's expectations (which were not clear in the assignment instructions).
Autograders are a mixed blessing. At my university some of our courses have autograders, and I think they can be great. There is a compiler course where the main goal is to create a compiler that has a specific set of measurable features. Encoding those features in a test rig and making it available seems appropriate.
I deliberately don't use autograders for my assignments, though. I want students to be able to figure out for themselves whether to be satisfied with an answer, and to work out how to test it. And I suspect many employers want the same. If a student asks me "is that right?", I try to remember not to tell them, but to ask questions that will help them figure it out for themselves. Similarly, providing a test rig that answers whether a piece of code is right is sometimes inappropriate.
I totally get that, and leading a class in a way that prepares students for the 'real world' is awesome. The examples I'm thinking of involve occasions such as when I used a package (NumPy) for matrix multiplication instead of writing my own function to perform the task. I didn't think matrix multiplication was the main learning objective of the assignment, and I lost points for failing to demonstrate that I could code it myself. I guess my original comment doesn't really make sense in this [ranting] context, though, because my solution probably would have still satisfied an autograder. I digress...
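For concreteness, here's roughly what that trade-off looks like in code. This is just an illustrative sketch, not the actual assignment: a hand-rolled triple-loop multiply of the sort the grader apparently wanted to see, next to the NumPy one-liner that produces the same answer.

    import numpy as np

    def matmul_by_hand(a, b):
        # Naive triple-loop matrix multiply: the kind of thing an
        # assignment might expect you to write out yourself.
        n, inner = len(a), len(a[0])
        assert inner == len(b), "inner dimensions must match"
        m = len(b[0])
        result = [[0.0] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                for k in range(inner):
                    result[i][j] += a[i][k] * b[k][j]
        return result

    a = [[1.0, 2.0], [3.0, 4.0]]
    b = [[5.0, 6.0], [7.0, 8.0]]

    print(matmul_by_hand(a, b))        # [[19.0, 22.0], [43.0, 50.0]]
    print(np.array(a) @ np.array(b))   # same numbers, one line of NumPy

An output-only autograder would accept either version, which is exactly why it can't tell you whether you met the learning objective.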
> This kind of thing makes me wish I had done better in high school and could have studied at a school of Stanford's caliber.
Yes. Yes, indeed. Boy, my life could have taken a completely different course had I been able to make more of those decisions once my brain was more fully developed.
Sounds like just a fancy way of giving your students unit tests. Our early CS classes have unit tests, but later the students are told to write and submit their own. I think that experience is more instructive than a black-box grader.
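For what it's worth, a minimal sketch of what "write and submit your own tests" tends to look like with Python's standard unittest module; the function under test is a made-up example, not anything from this course:

    import unittest

    def is_palindrome(s):
        # Hypothetical student solution under test.
        s = s.lower()
        return s == s[::-1]

    class TestIsPalindrome(unittest.TestCase):
        def test_simple_palindrome(self):
            self.assertTrue(is_palindrome("Level"))

        def test_non_palindrome(self):
            self.assertFalse(is_palindrome("python"))

        def test_empty_string(self):
            self.assertTrue(is_palindrome(""))

    if __name__ == "__main__":
        unittest.main()

An autograder is basically the same idea, except the instructor wrote the assertions and you usually can't read them.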
I've had mixed experiences with autograders, but they seem to be important for scaling large classes. Georgia Tech has recently begun using an autograder for some classes in the MS program. It works better for some courses than others, and (inevitably) students end up treating the autograder as the ultimate authority on correctness, even for features that can't be tested that way.
It seems the course is available online only as slides and documentation, as it has sadly already been given (Tu / Th 3:45PM - 5:35PM; April 4 - April 18). Nevertheless, the material is interesting.
This short course is taught every quarter at Stanford, usually by a PhD student in ICME (Institute for Computational and Mathematical Engineering). This is the course page for the class as it was taught a couple of years ago.
Edit: google CME193 to find more course pages, if interested