Hacker News | a_cool_username's comments

I (and my team) work remote and don't quite agree with this. I work very hard to provide deep, thoughtful code review, especially to the more junior engineers. I try to cover style, the "why" of style choices, how to think about testing, and how I think about problem solving. I'm happy to get on a video call or chat thread about it, but it's rarely necessary. And I think that's worked out well. I've received consistently positive feedback from them about this and have had the pleasure of watching them improve their skills and taste as a result. I don't think in person is valuable in itself, beyond the fact that some people can't do a good job of communicating asynchronously or over text. Which is a skills issue for them, frankly.

Sometimes a PR either merits limited input or the situation doesn't merit a thorough and thoughtful review, and in those cases a simple "lgtm" is acceptable. But I don't think that diminishes the value of thoughtful non-in-person code review.


> I work very hard to provide deep, thoughtful code review

Which is awesome and essential!

But the reason that the value of code reviews drops if they aren't done live, led by the person whose code is being reviewed, isn't related to the quality of the feedback. It's because a very large portion of the value of a code review is having the dev who wrote the code walk through it, explaining things, to other devs. At least half the time, that dev will encounter "aha" moments where they see something they had been blind to before, see a better way of doing things, spot discontinuities, etc. That dev has more insight into what went into the code than anyone else, and this is a way of leveraging that insight.

The modern form of code review, done asynchronously by reviewers looking only at the code changes themselves, is not worthless, of course. It's just not nearly as useful as the old-school method.


Do you think that "Don't do this" as a reply comment is following the spirit of the guidelines? It doesn't seem very thoughtful or substantive to me.


Those are definitely reasonable reasons to lose confidence in elections and feel disillusioned, but voter ID laws won't help you there (which was GP's point).


The pypa team are just not capable stewards of core aspects of the Python ecosystem. As a maintainer and developer of Python-based tools and libraries, it is very frustrating having these folks push some change that they want, simply oopsie a significant chunk of the Python ecosystem, and then go dark for hours.

They've done it this time by making poor architectural decisions ("isolated builds should install the newest setuptools") and then adding in poor library-maintenance decisions ("we'll remove this feature used by thousands of packages that are still active dependencies today"). Possibly each of these decisions was fine in a vacuum, but when you maintain a system that people depend upon like this, you can't simply push this stuff out without thinking about it. And if you do decide to do those things, you can't just merge the code and call it a day without keeping an eye on things and figuring out whether you need to yank the package immediately! This isn't rocket science; everyone else developing important libraries in the Python world has mostly figured this stuff out. In classic pypa form, it sounds like there was a deprecation warning, but it only showed up if you ran the deprecated command explicitly, while the mere presence of this command causes package installs to fail. You have to at least warn on the things that will trigger the error!
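For what it's worth, one mitigation for the "isolated builds install the newest setuptools" failure mode is to pin the build backend explicitly in `pyproject.toml`. A minimal sketch — the `<60` cap here is purely illustrative, not the actual regression boundary:

```toml
# Hypothetical sketch: cap the build backend so isolated builds
# (pip's default) stop resolving to whatever setuptools is newest.
[build-system]
requires = ["setuptools<60", "wheel"]
build-backend = "setuptools.build_meta"
```

This only protects builds of your own package, of course; transitive dependencies built from sdists can still pull a broken backend.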

These days I try to rely on the absolute minimum number of packages possible, in order to minimize my PyPI exposure. That's probably good dev practice anyway, but it's really disappointing that I basically can't rely on third-party libraries at all in Python, when the vast PyPI package repository is supposed to be a big selling point of the language. But as a responsible developer I must minimize my pip/setuptools surface area, as it's the most dangerous and unreliable part of the language. Even wrapper tools are not safe, as you see in the thread.


You might want to try getting them from apt-get. They're usually more stable there and get patched if they fail to install or fail to work with a newer version of something else.


The article discusses that this is actually in the tens of thousands of dollars:

>Twitter’s API service limits tweets for non-paying users to 1,500 a month — not enough for many emergency accounts. And while a small fee for the platform's 'Blue Check' service will increase that ceiling for individual users, the cost of an enterprise account has reportedly climbed into the tens of thousands of dollars.


If you're an even remotely competent developer in a language used by more than one company, have a reasonably firm grasp of the English language, can work with others reasonably well, and can be bothered to apply for jobs, then there are definitely plenty of opportunities for your skills. Plenty of companies don't even care about the language, they figure you'll pick it up. The only way this isn't true is if you are already hugely overpaid or if you come off terrible in interviews. A year ago I would have said "or are a convicted felon" but I think plenty of companies don't even care about that anymore, that's how huge demand is. You don't even have to live somewhere with jobs anymore so long as you have a reliable Internet connection!

If you just sit around letting your current company take advantage of you, and complaining about not getting a big enough raise, you definitely won't find those opportunities, though. You have to interview a lot.


They don't allow me to become an expert: I'm expected to be full stack across multiple stacks, with frequent context switching. I tend to be slow in the code screens.


Maybe there's something I don't understand, but per this table it's slightly worse to be married filing separately than it is to be single at the top brackets: https://www.bankrate.com/taxes/tax-brackets/

edit: changed "much" to "slightly"


Once you hit the sweet spot of developing for cross-platform (even just Linux, MacOS, and Windows) and supporting normal average-people users and have (even optional!) C dependencies, Python's packaging situation quickly deteriorates into "nightmare" territory.


This is exactly my problem. I have to support Windows (a locked down corporate version) and Linux.


>A key question arises: why are so few repositories type-correct?

The authors don't seem to ever discuss the fact that mypy version changes frequently make previously-passing code fail type checks.

I don't think I have ever upgraded mypy and not found new errors. Usually they're correct, sometimes incorrect, but it's a fact of being a mypy user. Between mypy itself and typeshed changes, most mypy upgrades are going to involve corresponding code changes. The larger your code base and the more complicated your types are, the worse it'll be, but it's basically an ever-present issue for any program interesting enough to really benefit from a type checker.

How many of those repositories were "type-correct" but only on particular versions of mypy? I bet it's a lot!
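One way to keep "type-correct" stable over time is to pin the checker (and, by extension, the typeshed it bundles) so upgrades happen deliberately; the version below is chosen purely for illustration:

```
# requirements-dev.txt (hypothetical): pin mypy so CI results are
# reproducible; upgrade on purpose, fixing new errors in the same change.
mypy==1.8.0
```

The repositories in the study presumably weren't all pinned to the same mypy the authors ran, which is exactly the mismatch being described.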


I don't have this experience. Can you give an example of a previously-valid codebase that failed typechecking unexpectedly on a recent MyPy update, that wasn't a result of a false negative bug in MyPy?


I can't give you any links because it's not open-source code, but there was a bug fix in 0.790 named "Don't simplify away Any when joining union types" that caused me some problems with bad annotations in the existing code: the annotations implied that Any was possible (in practice it wasn't), but the bug dropped Any from the final Union, so we never had to handle it. Dataclasses have had some backwards-incompatible improvements as well.

But the big culprit is typeshed. Something will get new/fixed annotations and suddenly you aren't handling all possible return types in your callers, or whatever.
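A hypothetical sketch of that pattern (`get_config` is invented for illustration): suppose a stub once annotated a lookup as returning `str`, and a typeshed fix widened it to `Optional[str]`. Every caller that never handled `None` now fails the checker, even though none of your own code changed:

```python
from typing import Optional

def get_config(key: str) -> Optional[str]:
    # After the (hypothetical) stub fix, the declared return type
    # is Optional[str] instead of str.
    return {"host": "localhost"}.get(key)

host = get_config("host")
assert host is not None  # the guard the stricter annotation now forces
print(host.upper())
```

Runtime behavior is identical before and after; only the declared contract moved under you.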


I've had this problem a few times. For example, it happened with the 0.800+ versions. Mypy got stricter and was more aggressive in finding code (e.g. small scripts not in the proper Python package hierarchy).

I can't show any of my professional work (no publicly available src) but this side project of mine is locked to 0.790 until I can find time to sort the issues: https://github.com/calpaterson/quarchive/tree/master/src/ser...

It's hard to classify anything as a "false negative" with mypy since it is very liberal (often unexpectedly so, which I think is one of the sharp edges of gradual typing).
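That liberality is easy to demonstrate: by default mypy skips the bodies of unannotated functions and treats calls to them as returning `Any`, so a real type error can hide behind one untyped helper. A minimal sketch:

```python
def untyped(x):
    # Unannotated, so mypy does not check this body by default,
    # and calls to it are typed as Any.
    return x + 1

def typed(n: int) -> int:
    # mypy accepts this under default settings: untyped(...) is Any,
    # which silently satisfies the declared int return type --
    # yet calling typed() would raise TypeError at runtime.
    return untyped(str(n))
```

When a later mypy release tightens one of these silent acceptances, previously "passing" code starts failing, which is hard to call a false negative in any crisp sense.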


I obviously can't share it here, but my primary codebase at my job needs a few dozen changes every time we change mypy versions. There are a few places with false positives where I've had to comment `# shut up, mypy` and add a `type: ignore`. Usually when I'm being clever with the subprocess module.
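A hypothetical sketch of the kind of "clever" subprocess use that trips mypy (the specific error code in the ignore comment is illustrative): building keyword arguments dynamically erases the precise per-parameter types the `subprocess.run()` overloads expect.

```python
import subprocess

# The dict type is inferred as dict[str, bool], so unpacking it into
# subprocess.run()'s precisely-typed keyword parameters upsets mypy,
# even though it is fine at runtime.
opts = {"capture_output": True, "text": True}
result = subprocess.run(
    ["echo", "hello"],
    **opts,  # type: ignore[arg-type]  # shut up, mypy
)
print(result.stdout.strip())
```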


I wonder if that's a mypy issue, or more that the typeshed types are bugged, since typeshed versions also get shipped (or used to?) with new type-checker versions.

https://github.com/python/typeshed


In a codebase with tens of thousands of lines of typed code, I see maybe a few new type errors with a new mypy release. They've always been fixable within a few minutes.

I think mypy has some problems, but this isn't one of the bigger ones for me.


I don't think it's a "problem" with mypy; I just think it's likely the reason many of the repositories the authors examined don't type-check.

Though I will say, while most have been fixable in a few minutes, some have been a real chore to fix. Sometimes an innocuous-looking error balloons into several hours of reconciling obscure type-system behavior once you start fixing it. Regardless, it's a small price to pay for proper type checking in Python. I've more than made up the lost time in detecting bugs before they ship.


Even for fully statically typed languages like C++, it is very common that some old code can't compile with the latest compiler. Shrugs.


No it isn't. C++ has extremely good backwards compatibility.


I disagree. C++ often removes language features, standard library features (std::random_shuffle), etc. Also, object files compiled with different standard versions are often ABI-incompatible with each other, which means you can't just pick new C++ for new code; rather, it's all or nothing.

You can argue that when it removes features it provides a replacement (sometimes), but that does not change the fact that if you have any reasonably large project (>1 million LOC), every standard upgrade will break your app.

One of the main reasons we write all new code in Rust, and are migrating the C++ code base step by step to Rust, is that Rust offers infinitely better backward-compatibility guarantees than C++.

Rust never ever breaks your code, and you can opt-in to newer Rust editions for new code only, and these are ABI compatible with Rust code written using older editions.


Even aside from deliberate backwards-compatibility breaks in the standard, compilers sometimes break compatibility. Both MSVC and GCC 11 have changed their header file transitive includes within the past few years, causing projects (like doctest and Qt5) to stop compiling because they forgot to include headers, which built fine in the past but not anymore. IDK if it's "very common", but it's definitely happening in the wild.

MSVC: https://github.com/onqtam/doctest/issues/183

GCC:

- https://invent.kde.org/qt/qt/qtbase/-/commit/8252ef5fc6d0430...

- https://invent.kde.org/qt/qt/qtbase/-/commit/cb2da673f53815a...


In the C and C++ worlds, this is the same line of thinking that kept new warnings out of "-Wall" for so long.


I’ve found the opposite with pyright. Code that seems right but is failing checks is fixed with the next release :)


Sometimes! A very simple example is code that uses "async" as a variable name. It arrived as syntax in 3.5 and became a fully reserved keyword in 3.7, which was an enormous pain in the ass.
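The breakage is easy to demonstrate on any current interpreter: code that uses `async` as an identifier, legal through Python 3.6 (with a DeprecationWarning in 3.6), fails to even compile once the keyword was reserved.

```python
# Sketch: a variable named "async" was valid pre-3.7 and is a
# SyntaxError on Python 3.7+.
legacy = "async = 1"

try:
    compile(legacy, "<legacy>", "exec")
    outcome = "accepted"
except SyntaxError:
    outcome = "rejected"

print(outcome)  # "rejected" on Python 3.7+
```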

