CTO is usually the exec responsible for the entire tech org. The CTO reports to the CEO, and the top managers and maybe a few ICs in the tech org report to the CTO.
I was CTO of a <20 person startup. I recruited the entire tech team, collaborated with the CEO to build the product backlog and spec things out, and presented to investors, but I also spent at least 50% of my time coding. Not all “CTO” roles are the same. At a small company they'd better be hands-on.
I think a founder/early hire gets away with "CTO" on their resume, esp. if they're the only person in the org with the role (i.e., it's a PM-style CTO and there isn't a VP/PM, or it's a VP/E-style CTO and there isn't a VP/E). But outside that circumstance, given the choice, I'd rather have the VP/PM or VP/E role than "CTO".
(As we get deeper into these threads I am further out on a limb.)
Yes, I was one of the first hires. My role was closer to VP/E and "CTO" was mostly vanity, a reward for being early and getting a new company through the first couple years.
Dotted line reporting is very different. In these instances the VP/E is usually directly interfacing with other executives as the CTO's peer. This is even more true when the budget is managed by the VP/E and the CTO is more customer/sales facing.
You're saying "usually" about something that has definitely not been a norm in my career. It seems like there are really only two ways to interpret that arrangement: either the CTO is in fact the EVP/E (fair enough! lots of CTOs are other exec roles with a funny hat), or the CTO has a single top-level manager report, in which case what really happened is that the org hired a pro to run engineering and put the "CTO" out to pasture.
There's a great recent book on using technology to "disrupt" education: Anne Trumbore's _The Teacher in the Machine_. It starts much earlier than you would think, with mechanical devices in the early 20th century that could drill students with multiple-choice questions, runs through basically pre-computer MOOCs that used radio and then TV to broadcast lectures, then various educational software, and finally MOOCs like Coursera and Udacity.
The real value of a degree unfortunately isn't the education; it's the exclusivity of the program. When bootcamps realized this, some started having more stringent admissions.
I was in one of those early cohorts that used Octave. One of the things the course had to deal with was that, at the time (I don't know about now), Octave did not ship with an optimization function suitable for the coursework, so we ended up using an implementation of `fmincg` provided along with the homework by the course staff. If you're following along with the lectures, you might need to track down that file; it's probably available somewhere.
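For anyone tracking that file down now: `fmincg` expects a handle that returns both the cost and its gradient. The call looked roughly like this (a sketch from memory, not the official assignment code; `costFunction` is a stand-in for whatever cost/gradient function the homework has you write, and `fmincg.m` needs to be in your working directory or on the Octave path):

```octave
% Cap the number of iterations; fmincg reads MaxIter from the options
% struct and, unlike fminunc, always assumes the handle supplies the
% gradient as a second return value: [J, grad] = costFunction(theta, ...).
options = optimset('MaxIter', 100);
initial_theta = zeros(size(X, 2), 1);

% Returns the optimized parameters and the cost at each iteration.
[theta, J_history] = fmincg(@(t) costFunction(t, X, y), initial_theta, options);
```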
Using Octave for a beginner ML class felt like the worst of both worlds: you got the awkward, ugly language of MATLAB without any of the upsides of MATLAB-the-product, because it didn't have the GUI environment or the huge pile of toolbox functions. None of that is meant as criticism of Octave as a project; it's fine for what it is. It just ended up being more of a stumbling block for beginners than a booster in that specific context.
I did that with Octave too. I didn't mind the language much, but it wasn't great. I had significant experience with both coding and simple models when doing it, so I wasn't a beginner; I can see it being an additional hurdle for some people. What are they using now? Python?
Believe Andrew Ng's new course is all Python now, yeah. Amusingly enough another class that I took (Linear Algebra: Foundations to Frontiers) kinda did the opposite move - when I took it, it was all Python, but shortly after they transitioned to full-powered MATLAB with limited student licenses. Guess it makes sense given that LAFF was primarily about the math.
Why would I want to use Mistral's MCP services instead of official MCP services from Notion, Stripe, etc.? It seems to me that the official MCP services would be strictly better, e.g. because I don't have to grant access to my resources to Mistral.
If you want to remain relevant in the AI-enabled software engineering future, you MUST get very good at reviewing code that you did not write.
AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.
Educational establishments MUST prioritize teaching code review skills, and other high-level leadership skills.
Yeah, LLMs can do that very well, IMO. As an experienced reviewer, I know the "shape" of the code shouldn't inform correctness, but it can be easy to fall into this pattern when you review code. In my experience, LLMs tend to conflate shape and correctness.
> As an experienced reviewer, I know the "shape" of the code shouldn't inform correctness, but it can be easy to fall into this pattern when you review code.
For human written code, shape correlates somewhat with correctness, largely because the shape and the correctness are both driven by the human thought patterns generating the code.
LLMs are trained very well at reproducing the shape of expected outputs, but the mechanism is different than humans and not represented the same way in the shape of the outputs. So the correlation is, at best, weaker with the LLMs, if it is present at all.
This is also much the same effect that makes LLMs convincing purveyors of BS in natural language, but it's magnified for code. People are used to other people bluffing with shape in natural language; churning out high-volume, well-shaped, crappy-substance code, by contrast, is not a particularly useful skill for humans to develop, and so not one reviewers encountered often. Prior to AI code, reviewers simply weren't faced with it a lot.
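To make that concrete, here's a contrived Octave sketch (the function and its bug are invented for illustration) of code whose shape signals "correct" while its substance isn't:

```octave
% Looks exactly like what a reviewer expects feature scaling to look like:
% good name, tidy comment, idiomatic vectorized code. The shape says
% "correct"; the substance isn't.
function X_norm = normalize_features(X)
  % Scale each column of X into the [0, 1] range.
  X_norm = X ./ max(X);   % bug: ignores the minimum; should be
                          % (X - min(X)) ./ (max(X) - min(X))
end
```

Every surface cue a reviewer pattern-matches on is present; only actually checking the math catches it.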
I’m considered one of the stronger code reviewers on the team, and what grinds my gears is seeing large, obviously AI-heavy PRs and finding a ton of dumb things wrong with them: totally different patterns, even outright bugs. I’ve lost trust that the person putting up the PR has even self-reviewed their own code and verified it does what they intend.
If you’re going to use AI, you have to be even more diligent and self-review your code; otherwise you’re being a shitty teammate.
Same. I work at a place that has gone pretty hard into AI coding, including onboarding managers into using it to get them into the dev lifecycle, and it definitely puts an inordinate amount of pressure on senior engineers to scrutinize PRs much more closely. This includes much more thorough reviews of tests as well since AI writes both the implementation and tests.
It's also caused an uptick in inbound to dev tooling and CI teams, since AI lacks common sense and can break things in strange ways.
> you MUST get very good at reviewing code that you did not write.
I find that interesting. That has always been the case at most places with proper software engineering practices where my friends and I have worked, companies both very large and very small.
> AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.
I echo @ZYbCRq22HbJ2y7's opinion. For well-defined refactoring and expanding on existing code in limited scope they do well, but I have not seen that for any substantial features, especially full-stack ones, which is what most senior engineers I know are finding.
If you are really seeing that, then I would worry either about the quality of those senior+ software engineers or about the metrics you are using to assess the efficacy of AI vs. senior+ engineers. You don't even have to show us any code: just tell us how you objectively came to that conclusion and what framework you used to compare them.
> Educational establishments MUST prioritize teaching code review skills
Perhaps more is needed, but I don't know about "prioritizing"? Code review isn't something you can teach as a self-contained skill.
> and other high-level leadership skills.
Not everyone needs to be a leader and not everyone wants to be a leader. What are leadership skills anyway? If you look around the world today, it looks like many people we call "leaders" are people accelerating us towards a dystopia.
There is no reason to think that code review will magically be spared by the AI onslaught while code writing falls, especially as devs themselves lean more on the AI and have less and less experience coding every day.
There just haven't yet been as many resources poured into improving AI code review as there have been into writing code.
And in the end the whole paradigm itself may change.
Totally agree with this. Code review is quickly becoming the most important skill for engineers in the AI era. Tools can generate solid code, but judgment, context, and maintainability come from humans. That’s exactly why we built LiveReview (https://hexmos.com/livereview/): to help teams get better at reviewing and learning from code they didn’t write.