badhombres's comments

The trade-off, though, is that patterns and behind-the-scenes source code generation add another layer that the devs who follow have to deal with when debugging and understanding why something isn't working. They either spend more time understanding the bespoke things or are bottlenecked relying on a team or person to help them get through those moments. It's a trade-off, and one that has bitten me and others before.


I am not talking about C# specifically, but it applies there too, and I agree.

Implicit and magic looks nice at first but sometimes it can be annoying. I remember the first time I tried Ruby On Rails and I was looking for a piece of config.

Yes, "convention over configuration". Namely, ungreppable and magic.

This kind of stuff must be used with a lot of care.

I usually favor explicit and, for config, plain data (usually TOML).

This can be extended to hidden or non-obvious allocations and other stuff (when I work with C++).

It is better to know what is going on when you need to, and burying it in a couple of layers can make things unnecessarily difficult.


Would you rather a team move faster and be more productive or be a purist and disallow abstractions to avoid some potential runtime tracing challenges which can be mitigated with good use of OTEL and logging? I don't know about you, but I'm going to bias towards productivity and use integration tests + observability to safeguard code.


Disallow bespoke abstractions and use the industry standard ones instead. People who make abstractions inflate how productive they're making everyone else. Your user base is much smaller than that of popular libs, so your docs and abstractions are not as battle tested or as easy to use as you think.


This is raw OpenFGA code:

    await client.Write(
        new ClientWriteRequest(
            [
                // Avery is an editor of form 124
                new()
                {
                    Object = "form:124",
                    Relation = "editor",
                    User = "user:avery",
                },
            ]
        )
    );

    var checkResponse = await client.Check(
        new ClientCheckRequest
        {
            Object = "form:124",
            Relation = "editor",
            User = "user:avery",
        }
    );

    var checkResponse2 = await client.Check(
        new ClientCheckRequest
        {
            Object = "form:125",
            Relation = "editor",
            User = "user:avery",
        }
    );
This is an abstraction we wrote on top of it:

    await Permissions
        .WithClient(client)
        .ToMutate()
        .Add<User, Form>("alice", "editor", "226")
        .Add<User, Team>("alice", "member", "motion")
        .SaveChangesAsync();

    var allAllowed = await Permissions
        .WithClient(client)
        .ToValidate()
        .Can<User, Form>("alice", "edit", "226")
        .Has<User, Team>("alice", "member", "motion")
        .ValidateAllAsync();
You would make the case that the former is better than the latter?


In the first example, I have to learn and understand OpenFGA, in the second example I have to learn and understand OpenFGA and your abstractions.


Well, the point of using abstractions is that you don't need to know the things they abstract. I think the abstraction here is self-explanatory, and you can certainly understand and use it without needing to understand all the specifics behind it.


More importantly: it prevents "usr:alice_123" instead of "user:alice_123" by using the type constraint to generate the prefix for the identifier.
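A minimal sketch of that idea in Go, for illustration (the thread's abstraction is C#, and the `Prefixed`/`ID` names here are made up): each entity type carries its own prefix, so the caller can never mistype it.

```go
package main

import "fmt"

// Each entity type knows its own identifier prefix.
type User struct{}
type Form struct{}

func (User) Prefix() string { return "user" }
func (Form) Prefix() string { return "form" }

type Prefixed interface{ Prefix() string }

// ID builds the fully qualified identifier from the type's prefix,
// so "usr:alice_123" can never be typed by hand.
func ID[T Prefixed](id string) string {
	var t T
	return t.Prefix() + ":" + id
}

func main() {
	fmt.Println(ID[User]("alice_123")) // user:alice_123
	fmt.Println(ID[Form]("124"))       // form:124
}
```

The prefix lives in exactly one place per type, so a typo is a compile error (unknown type) rather than a silently wrong tuple.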


How much faster are we talking? Because you'd have to account for the time lost debugging annotations.


What are you working on that you're debugging annotations every day? I'd say you've made a big mistake if you're doing that, or you didn't read the docs and don't understand how to use the attribute.

(Of course you are also free to write C# without any of the built in frameworks and write purely explicit handling and routing)

On the other hand, we write CRUD every day so anything that saves repetition with CRUD is a gain.


I don't debug them every day, but when I do, it takes days for a nasty bug to be worked out.

Yes, they make CRUD stuff very easy and convenient.


It has been worth the abstraction in my organization with many teams. Thinking 1000+ engineers, at minimum. It helps to abstract as necessary for new teammates that want to simply add a new endpoint yet follow all the legal, security, and data enforcement rules.

Better than no magic abstractions imo. In our large monorepo, LSP feedback can often be so slow that I can’t even rely on it to be productive. I just intuit and pattern match, and these magical abstractions do help. If I get stuck, then I’ll wade into the docs and code myself, and then ask the owning team if I need more help.


That's the deal with all metaprogramming.


The implication is that all remote work is part time effort for full time pay?


> The implication is that all remote work is part time effort for full time pay?

Which wouldn't match my experience. It is exactly the opposite unless you count being present physically somewhere as work.


I think it's a bit of a stretch to say Go will implement all the features of C# and Java because of a few new features. Go isn't a frozen language; they just take a lot of time and discussion before committing to a major change.


The point isn't implementing all the features of C# and Java, but rather doubling down on their simplicity mantra against all programming language complexity, only to revisit such decisions years later, because after all the other programming languages had good reasons to have such features in the first place.


If you look back at the mailing list from earlier in Go's development, you'll see that this was the plan: to start with as small and simple a set of features as they felt was necessary, and to evaluate more advanced features as the language evolved, while trying to keep to the simplicity mantra when adding them. They wanted to evaluate potential features in light of real work with Go.


I have a feeling that in the future developers are still going to be needed, but in an architecture and debugging/optimization fashion. Not in a writing boilerplate code aspect. There is also an issue with validating the output before it gets released to production.


Not surprising. Every company is going to take the opportunity to trim costs when it doesn't affect their PR as much as it would any other time.


And not doing so could affect their IR (investor relations) negatively, which is a higher concern than PR to most companies


Why can't a company do offense on the IR front with literally any action other than cutting workforce?

"Our products are better than ever" "Our revenue per employee is better than ever"

Why does cutting costs have to be the only way to please investors?


Because most investors are investing in more than one company. If someone has a stake in Google, Microsoft, and Confluent, and their ROI for the first two are great this quarter, but Confluent is lukewarm or even in the red, they're going to want an explanation.

They can be told that products are great, or employees are producing a lot, but the only thing that matters to investors is the money. And so, when you're underperforming compared to others in your market, you have to do what appeases the shareholders, and that involves culling employees.


This needs to be a video game, not the way we organize humanity :(


I have no unique insight into their perspective but if I had to guess they're starting to worry about the lack of easy money and want a signal that management is going to play it safe with their current cash reserves instead of banking on another infusion. They probably could, you know, tell them as much, but sacrificing your employees really adds oomph to your commitment.


Every company that mishired.

You don't cut costs if they aren't costs.

If you hired employees to do something and they're making you money hand over fist, you don't fire them. They're not an expense. They're a profit center.


It's going to take a little more than a sed statement to make this change


I think this one difficult subject does not take away from the common feeling that overall Go is simple. This is like looking at a small scratch on the car and claiming the entire car is damaged.


“Simple” means the definition has been kept compact, which should suggest problems went unaddressed. Gabriel’s “Worse is Better” essay reminds us that making a system do the Right Thing™ is rarely simpler.

But making the runtime go out of its way to store the exact type of an object you don’t have is damn weird. In theory a method could accept a nil receiver, but in practice they never seem to.
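The typed-nil behavior being alluded to fits in a few lines (a sketch; `MyErr` and `mayFail` are made-up names):

```go
package main

import "fmt"

type MyErr struct{}

// A nil receiver is legal as long as the method never dereferences it.
func (e *MyErr) Error() string { return "boom" }

func mayFail() *MyErr { return nil } // returns a nil pointer

func main() {
	var err error = mayFail() // interface now holds (type *MyErr, value nil)
	fmt.Println(err == nil)   // false: the interface remembers the concrete type
	fmt.Println(err.Error())  // "boom": the nil receiver is accepted here
}
```

The interface compares non-nil because it stores the concrete type alongside the nil pointer, which is exactly the "exact type of an object you don't have" complaint.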


But as products evolve, their boring names become misleading. At least with non-boring names you can re-define what they represent in your company.


Do products/components really evolve so much that their names frequently become outdated?

Half the article is like, "There was a component called YamlParser, which is now a browser-based stable-diffusion renderer!"


I've worked on tools that were slightly misnamed after 6 months, and completely misnamed after 2 years. At that point they were also usually just nearly useless due to feature bloat and/or lack of scalability, so deprecated or replaced with something better.

They didn't change names, but their successors would get a new one.


Yep, enough that they need a caveat every time someone new is told of the product. It happens, and it's gotten worse due to Agile.


Look at IBM "Watson". It evolved from an AI Jeopardy! and Q&A engine into basically whatever salespeople make up.


Isn't it even harder to re-define names in a company? There might be 3 people involved in re-definition, but it affects 15 people.

How are we going to notify those 15? Do we even know who those 15 are? Are we going to create a weekly redefinition newsletter?

I think in most cases new meaning deserves a new name. Everything else is just hacks.

How hard it is to change a name is actually a really good metric for a company. If a simple rename takes several days, multiple approvals, rounds of QA, and a scheduled release next quarter, then you probably need those hacks.


I think that's very extreme. Products grow at a gradual pace. I don't think there are defining moments when a product no longer supports something, or is no longer used in a way that it was intended to.

I would argue it's easier to maintain people's understanding of a product, since that will also happen gradually. It's not easy to update naming inside a code base without potentially breaking software significantly or causing unknown bugs elsewhere. I think most software would fail the renaming test. It's also generally not worth the money and time needed to make that change.


I appreciate the ingenuity of the solution, but I still prefer Tailwind-esque CSS styling.


They did in the article? They're not asking it to change, they're just expressing their opinion.

