> This seems a not too uncommon experience among CS people where machine learning loses its interest once they understand the magic
Maybe. I went into my program with no preconceived ideas that there was 'magic' happening here. What I got out of it was a real curiosity about how humans think, and how that may not be as magical as we think either.
Long-term I cannot see AI being anything but the paradigm-smashing artifact it's sold as. But progress is (relatively) slow on a human scale, so we find incremental progress ultimately lackluster.
If you look at it through that lens it becomes more reasonable, and you find yourself less surprised that, just as we have no flying cars, we have no AGI and work hasn't all been replaced.