
My guess is that Claude's "bad days" are due to the service becoming overloaded and failing over to use cheaper models.

Seems like a substantial fraction of the web was brought down because of a coding error that should have been caught in CI by a linter.

These folks weren't operating for charity. They were highly paid so-called professionals.

Who will be held accountable for this?


This is quite likely to be in the training data, since it's one of the projects in Wes Bos's free 30 Days of JavaScript course[0].

[0] https://javascript30.com/


I was under the impression that for this to work, the training data needs to be plentiful. One project is not enough, since it’s too "sparse".

But maybe this example was used by many other people and so it proliferated?


The repo[0] has currently been forked ~41,300 times.

[0] https://github.com/wesbos/JavaScript30


It’s quite unlikely that the training data would include duplicate repositories or even forks; that alone would surpass the published dataset sizes.

I love it!


CTO is usually the exec responsible for the entire tech org. The CTO reports to the CEO, and the top managers and maybe a few ICs in the tech org report to the CTO.


I was CTO of a <20 person startup. I recruited the entire tech team, collaborated with the CEO to build the product backlog and spec things out, and presented to investors, but I also had at least 50% of my time to code. Not all “CTO” roles are the same; at a small company they’d better be hands-on.


This is very similar to my own role, but I didn't have (nor would I have accepted) that title.


On my resume, I usually list it as “Lead Engineer” since it fits the roles I’m applying to better.


I think a founder/early employee gets away with "CTO" on their resume, esp. if they're the only person in the org with the role (i.e., it's a PM-style CTO and there isn't a VP/PM, or it's a VP/E-style CTO and there isn't a VP/E). But outside that circumstance, given the choice, I'd rather have the VP/PM or VP/E role than "CTO".

(As we get deeper into these threads I am further out on a limb.)


Yes, I was one of the first hires. My role was closer to VP/E and "CTO" was mostly vanity, a reward for being early and getting a new company through the first couple years.


It's very common to see a VP of Engineering managing the day-to-day operations while the CTO acts in a capacity like this.


I’ve seen that too, but then the VP of Engineering tends to report to the CTO, and not to, say, the CEO directly.


Dotted line reporting is very different. In these instances the VP/E is usually directly interfacing with other executives as the CTO's peer. This is even more true when the budget is managed by the VP/E and the CTO is more customer/sales facing.


You're saying "usually" about something that has definitely not been a norm in my career. It seems like there are really only two ways to interpret that arrangement: either the CTO is in fact the EVP/E (fair enough! lots of CTOs are other exec roles with a funny hat), or the CTO has a single top-level manager report, in which case what really happened is that the org hired a pro to run engineering and put the "CTO" out to pasture.


Early versions of Andrew Ng's ML MOOC used Octave, if you are looking for examples and exercises.

YouTube playlist: https://www.youtube.com/playlist?list=PLiPvV5TNogxIS4bHQVW4p...


Oh, the times when Coursera and Udacity were just starting... They were supposed to disrupt academia; it's a shame they never actually did.


There's a great recent book (Anne Trumbore's _The Teacher in the Machine_) on using technology to "disrupt" education. It starts much earlier than you would think, with mechanical devices in the early 20th century that could drill students with multiple-choice questions, runs through what were basically pre-computer MOOCs that used radio and then TV to broadcast lectures, covers various educational software, and ends with MOOCs like Coursera and Udacity.


The real value of a degree, unfortunately, isn't the education; it's the exclusivity of the program. When bootcamps realized this, some started having more stringent admissions.


I was in one of those early cohorts that used Octave. One of the things the course had to deal with was that, at the time (I don't know about now), Octave did not ship with an optimization function suitable for the coursework, so we ended up using an implementation of `fmincg` provided along with the homework by the course staff. If you're following along with the lectures, you might need to track down that file; it's probably available somewhere.

Using Octave for a beginning ML class felt like the worst of both worlds - you got the awkward, ugly language of MATLAB without any of the upsides of MATLAB-the-product, because it didn't have the GUI environment or the huge pile of toolbox functions. None of that is meant as criticism of Octave as a project; it's fine for what it is. It just ended up being more of a stumbling block for beginners than a booster in that specific context.
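
For anyone tracking that file down: the homework's call pattern looked roughly like the sketch below. This is from memory, so treat the names (`costFunction`, the option values) as illustrative rather than the exact course code; the key detail is that `fmincg` takes a function handle returning both the cost and its gradient, plus an optimset-style options struct.

    % Illustrative sketch of the course's fmincg usage (not the exact homework code).
    % costFunction(t, X, y, lambda) must return [J, grad]: the cost and its gradient.
    initial_theta = zeros(size(X, 2), 1);
    options = optimset('GradObj', 'on', 'MaxIter', 50);  % fmincg reads MaxIter from here
    [theta, J_history] = fmincg(@(t) costFunction(t, X, y, lambda), initial_theta, options);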


I did that with Octave too. I didn't mind the language much, but it wasn't great. I had significant experience with both coding and simple models when doing it, so I wasn't a beginner; I can see it being an additional hurdle for some people. What are they using now? Python?


I believe Andrew Ng's new course is all Python now, yeah. Amusingly enough, another class that I took (Linear Algebra: Foundations to Frontiers) kinda did the opposite move - when I took it, it was all Python, but shortly after, they transitioned to full-powered MATLAB with limited student licenses. Guess it makes sense given that LAFF was primarily about the math.


It’s nice to know that someone else suffered this pain. And that I bet on PGMs, which really turned out to be the wrong horse…


Ha! I took at least one PGM class myself. I had a difficult time with the material.


Why would I want to use Mistral's MCP services instead of official MCP services from Notion, Stripe, etc.? It seems to me that the official MCP services would be strictly better, e.g. because I don't have to grant access to my resources to Mistral.


I enjoy the responsiveness of the Zed editor compared to VSCode, but I am concerned that it is a VC-backed enterprise.

For example, I don't like that I am forced to look at its "Sign In" UI, and that they have refused attempts to remove it. [0]

Zed also has some far more annoying bugs, and I am not excited to help fix them given the position of the code owners.

[0] https://github.com/zed-industries/zed/issues/12325


If you want to remain relevant in the AI-enabled software engineering future, you MUST get very good at reviewing code that you did not write.

AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.

Educational establishments MUST prioritize teaching code review skills, and other high-level leadership skills.


> AI can already write very good code

Debatable; with the same experience, I'd say it depends on the language, existing patterns, the code base, base prompts, and the complexity of the task.


How about: AI can write large amounts of code that might look good out of context.


Yeah, LLMs can do that very well, IMO. To an experienced reviewer, the "shape" of the code shouldn't inform judgments of correctness, but it can be easy to fall into this pattern when you review code. In my experience, LLMs tend to conflate shape and correctness.


> To an experienced reviewer, the "shape" of the code shouldn't inform judgments of correctness, but it can be easy to fall into this pattern when you review code.

For human-written code, shape correlates somewhat with correctness, largely because the shape and the correctness are both driven by the human thought patterns generating the code.

LLMs are trained very well to reproduce the shape of expected outputs, but the mechanism is different from the human one and is not represented the same way in the shape of the outputs. So the correlation is, at best, weaker with LLMs, if it is present at all.

This is also much the same effect that makes LLMs convincing purveyors of BS in natural language, but it is magnified for code: people are used to others bluffing with shape in natural language, whereas churning out high-volume, well-shaped code with crappy substance is not a particularly useful skill for humans to develop, and so is not frequently encountered. Prior to AI code, reviewers simply weren't faced with it much.


I’m considered one of the stronger code reviewers on the team, and what grinds my gears is seeing large, obviously AI-heavy PRs and finding a ton of dumb things wrong with them - things like totally different patterns, and even bugs. I’ve lost trust that the person putting up the PR has even self-reviewed their own code and verified it does what they intend.

If you’re going to use AI, you have to be even more diligent and self-review your code; otherwise you’re being a shitty teammate.


Same. I work at a place that has gone pretty hard into AI coding, including onboarding managers into using it to get them into the dev lifecycle, and it definitely puts an inordinate amount of pressure on senior engineers to scrutinize PRs much more closely. This includes much more thorough reviews of tests as well, since AI writes both the implementation and the tests.

It's also caused an uptick in inbound to dev tooling and CI teams, since AI can break things in strange ways because it lacks common sense.


If you are seeing that, it just means they are not using the tool properly, or they are using the wrong tool.

AI-assisted commits on my team are "precise".


No True AI …


> you MUST get very good at reviewing code that you did not write.

I find that interesting. That has always been the case at most places my friends and I have worked at that have proper software engineering practices, companies both very large and very small.

> AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.

I echo @ZYbCRq22HbJ2y7's opinion. For well-defined refactoring and expanding on existing code in limited scope they do well, but I have not seen that for any substantial features, especially full-stack ones, which is what most senior engineers I know are finding.

If you are really seeing that, then I would worry either about the quality of those senior+ software engineers or about the metrics you are using to assess the efficacy of AI vs. senior+ engineers. You don't even have to show us any code: just tell us how you objectively came to that conclusion and what framework you used to compare them.

> Educational establishments MUST prioritize teaching code review skills

Perhaps more is needed, but I don't know about "prioritizing"? Code review isn't something you can teach as a self-contained skill.

> and other high-level leadership skills.

Not everyone needs to be a leader and not everyone wants to be a leader. What are leadership skills anyway? If you look around the world today, it looks like many people we call "leaders" are people accelerating us towards a dystopia.


There is no reason to think that code review will magically be spared by the AI onslaught while code writing falls, especially as devs themselves lean more on the AI and have less and less experience coding every day.

There just haven't been as many resources poured into improving AI code review yet as there have been into writing code.

And in the end the whole paradigm itself may change.


Totally agree with this. Code review is quickly becoming the most important skill for engineers in the AI era. Tools can generate solid code, but judgment, context, and maintainability come from humans. That’s exactly why we built LiveReview (https://hexmos.com/livereview/) - to help teams get better at reviewing and learning from code they didn’t write.


> AI can write better code than most of them can at this point

So where are your 3 startups?


AI can review code. No need for human involvement.


For styling and trivial issues, sure. And if it's free, do make use of it.

But it is just as unable to properly reason about anything slightly more complex as it is when writing code.


Nice article, but this command doesn't activate the venv that's managed by uv.

    uv venv activate
The command actually creates a new venv in a dir called "activate". The correct way to activate the venv is like this:

    source .venv/bin/activate


One of the key principles of uv is that you don't activate the venv. You just run everything through uv every time, e.g. `uv run pytest` or whatever.
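
For instance, assuming a project managed with a pyproject.toml (the script name below is just illustrative), the day-to-day flow is roughly:

    uv sync                # create/update .venv from pyproject.toml and the lockfile
    uv run pytest          # run pytest inside the managed venv, no activation needed
    uv run python main.py  # same idea for arbitrary scripts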

