AIMA is wide-ranging, and a lot of it is "nice-to-have" rather than "must-have" knowledge. But I do like its breadth-over-depth approach for getting a full view of the AI landscape.
3blue1brown's content on NNs is awesome -- the explanations are super intuitive. But I'm also looking to understand, as you say, the big picture and where NNs fit.
There are so many attributes that it's impossible to list them all, just as there are countless attributes that can distinguish a person, with new ones being discovered every day.
You mention "personality" and that's a good analogy for a company's cultures - It's the organization's personality. Just like personalities, most are neither good or bad inherently, they're just different. Some personalities are better suited for certain endeavors (eg. extroverts are generally better at sales) and attract certain type of people more than others.
So a "good" culture is one that aligns well with the business objectives and attracts the type of talent that are better aligned with those objectives.
Here are a couple of examples:
Apple has a design-led culture. Product designers have tremendous influence on what products get made and how they get made. One way this expresses itself is in how leaders make decisions: through demos. That makes perfect sense when your business relies on the tactile experience of a product and its look and feel.
Google, OTOH, has an engineering-led culture. A lot of product decisions aren't made via demo, but with data. Leaders may see a demo of an improvement to, say, the search engine, but they'll rely on usage data to determine whether or not it should be rolled out.
These examples also demonstrate how one culture might not be the best fit for certain lines of business. Apple, relative to the other tech giants, is way behind on its implementation of AI, and I wouldn't be surprised if that's because it's not data-driven at its very core.
Ya, the phrase "first principles" is vague...I meant starting from an axiomatic and actionable definition of AI and learning from there. The first chapter of AIMA does a swell job of enumerating different definitions of AI and then explicitly declaring which one it uses, along with the foundational premises for the concepts and methods that follow. And it doesn't define AI and then jump straight to neural networks; it gradually layers more atomic concepts, like agents (a term which, I know, has been bastardized) and environments, until it gets to machine learning.
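To make that concrete, the agent/environment framing is roughly something like this toy vacuum-world sketch (my own illustration in Python, not the book's actual code; names like VacuumEnvironment and reflex_agent are made up):

    # Toy agent/environment loop, in the spirit of AIMA's early chapters.
    class VacuumEnvironment:
        """Two squares, A and B, each either 'dirty' or 'clean'."""
        def __init__(self):
            self.state = {"A": "dirty", "B": "dirty"}
            self.location = "A"

        def percept(self):
            # What the agent can observe right now
            return self.location, self.state[self.location]

        def execute(self, action):
            # How the environment changes in response to an action
            if action == "suck":
                self.state[self.location] = "clean"
            elif action == "right":
                self.location = "B"
            elif action == "left":
                self.location = "A"

    def reflex_agent(percept):
        """Simple reflex agent: acts only on the current percept."""
        location, status = percept
        if status == "dirty":
            return "suck"
        return "right" if location == "A" else "left"

    env = VacuumEnvironment()
    for _ in range(4):
        env.execute(reflex_agent(env.percept()))

    print(env.state)  # {'A': 'clean', 'B': 'clean'}

No learning anywhere in there, but once you have that percept/action loop nailed down, "learning" just becomes one way of producing the agent function.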
> The other big question is why you want to learn it.
Good question. I'm just looking for a wider context to understand contemporary AI. I don't know if this serves any practical purpose but I'm someone who likes to understand the "why" behind everything and starting from "first principles" helps uncover that.
By "first principles" do you mean something long "learn from the ground up" or " from basic building blocks"?
I like learning things starting from small, atomic pieces, then building up to higher layers of abstraction and functionality later. I tend to find hands-on tutorials too "top down" in the sense that they start with all the tools in place and then give you only a cursory look at what's actually happening.
Personally I feel like most things in the world aren't really that complicated when you understand the building blocks. There are a few core ideas and then a bunch of layers on top to organize and utilize those ideas for different applications. So if I have an interest in something, I want to learn it from the ground up.
+1, I'm a few chapters in and it's highly instructive. Gives me a deeper appreciation for the modern deep learning regime. Also, as we enter the agent supercycle, I think many of the basic algorithms for search, planning, etc. will make a comeback in a huge way.
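To be concrete about what I mean by "basic algorithms for search": something as simple as breadth-first graph search, which the book covers early on (this is just my own toy sketch, not the book's code, and the example graph is made up):

    from collections import deque

    def bfs(graph, start, goal):
        """Breadth-first search over an explicit graph; returns a path or None."""
        frontier = deque([[start]])
        explored = {start}
        while frontier:
            path = frontier.popleft()
            node = path[-1]
            if node == goal:
                return path
            for neighbor in graph.get(node, []):
                if neighbor not in explored:
                    explored.add(neighbor)
                    frontier.append(path + [neighbor])
        return None

    # Toy graph for illustration
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
    print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']

Swap the queue for a priority queue and a heuristic and you're most of the way to A*, which is exactly the kind of machinery agentic planning keeps rediscovering.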
An interesting coincidence is that the first fabless semiconductor company, Chips and Technologies, was also founded in 1985, by Dado Banatao and Gordon Campbell. They didn't partner with TSMC but contracted with companies like Hitachi that had excess fab capacity to manufacture their semiconductors. Wild. They eventually sold the company to Intel, which obviously didn't appreciate the insight of de-verticalization in the semiconductor supply chain.
This lines up well with Google's definition of "quality" when it comes to search engine content, which is that it must be non-trivial to reproduce. Content generated by prompting LLMs is likely easy to reproduce, and hence lower quality, and hence likely to get demoted in search results.