bfdm's comments | Hacker News

Maybe I'm out of the loop here, but scanning over this I can't help but ask myself: "Why Jujutsu?"

I don't understand what the point is over just using git. The top intro defined some jj names for git things, but it's not clear why I would want or need this. What problem does it solve compared to just using git?


Ideally, reduces cognitive complexity because you don't have to think about the staging area anymore, just commits.

I recently started trying it out at work and I like how fluent it makes what would be more advanced git operations like squashing and rebasing.
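
To give a flavor of what I mean by "fluent", the invocations I reach for look roughly like this (a sketch only; the bookmark name main is just a placeholder):

  jj squash             # fold the working-copy change into its parent
  jj squash -i          # or interactively pick which hunks to fold
  jj rebase -d main     # move the current stack of changes onto the main bookmark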

Issues I've run into have been understanding its version of branches (bookmarks), understanding its merge conflict indicators, and its lack of respect for git skip-worktree.


> Ideally, reduces cognitive complexity because you don't have to think about the staging area anymore, just commits.

My git use is mostly a direct translation of mercurial (which I learned first), and the staging area is really optional. The only time I ever type `git add` is when adding a new file; otherwise I just

  vi foo.txt
  git commit foo.txt
every time.

I guess jj is different still (by way of ~autocommitting), but my point stands.
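
As I understand jj's model, the working copy is itself a commit that gets snapshotted automatically, so the rough equivalent of that loop (a sketch; the message is made up) would be:

  vi foo.txt
  jj commit -m "update foo"   # describe the auto-snapshotted change and start a new one on top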


> Ideally, reduces cognitive complexity because you don't have to think about the staging area anymore, just commits.

This is the thing I don't like about jj. I know it makes splitting easy, but splitting is harder than selectively adding after blindly merging all changes.


> but splitting is harder than selectively adding after blindly merging all changes.

Is the scenario that you make many changes in the working copy and then run `git add -p` a few times until you're happy with what's staged and then you `git commit`? With jj, you would run `jj split` instead of the first `git add -p` and then `jj squash -i` instead of the subsequent ones. There's no need to do anything instead of `git commit`, assuming you gave the first commit a good description when you ran `jj split`. This scenario seems similarly complex with Git and jj. Did you have a different scenario in mind or do you disagree that the complexity is similar between the tools in this scenario? Maybe I'm missing some part of it, like unstaging some of the changes?
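
Spelled out side by side (a sketch with made-up messages), the two flows I'm comparing look roughly like:

  # git
  git add -p                      # stage the first subset of hunks
  git add -p                      # keep refining what's staged
  git commit -m "first change"

  # jj (the working copy is already a commit)
  jj split                        # interactively carve the first hunks into their own change and describe it
  jj squash -i                    # move more hunks from the working copy into that change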


> This scenario seems similarly complex with Git and jj.

It is in the number of commands run, but there are a few annoyances around changes getting into the repo automatically.

There are a lot of git commits coming from jj's constant snapshots. Maybe this is a good thing overall, but it brings some silly issues:

What to do when data that shouldn't leave the dev machine gets into the repo? I'm thinking secrets, large files, generated files.

- Leaking secrets by mistake seems easier.
- Getting large files/directories into the git snapshots might degrade git's performance.

It seems that you need to be diligent with your ignores or be forced to learn more advanced commands right away. I guess there's a more advanced history-scrubbing command around, though.
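
For what it's worth, the mitigation I know of is being strict with ignore patterns up front, since as far as I can tell jj honors .gitignore and never snapshots ignored paths. A sketch (the patterns are just examples, adjust for your project):

  printf '%s\n' '.env' '*.key' '*.pem' 'dist/' >> .gitignore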


I remember complaining about this to Martin early on, and he mentioned he found not having a staging area simpler. I see why whenever I try to switch commits from a dirty workspace that has conflicts with other branches.

Maybe if, in git, the "trash" that makes a commit dirty were commit-local, you'd get to move around freely while still having a staging area to cherry-pick your changes. It sounds trickier than just not having a staging area (and may be flawed), but it gives back the control you have in git over what gets into the repo.


Conceptually simpler, plus easier to revert things.

In my case, I abandoned advanced git-ing because it's too much pain for too little gain, and I'd typically forget everything by the time I actually needed it. Nowadays I only use the basic commands, with the occasional cp -r.

With jj I get the gain with little pain.


That's news to me. Do you have a source for that I can look at? Not being snarky. I would legitimately like to read more about this.


Probably refers to the regulatory exemptions that aren't in the statute directly, which are updated every three years:

https://www.copyright.gov/1201/2024/

I see in the "final rule" for 2024 (PDF) a section titled "11. Computer Programs—Repairs of Devices Designed Primarily for Use by Consumers", although it mostly indicates that nothing changed rather than spelling out what stayed the same.


I was actually just reading up on it yesterday because I've rooted a commercial e-ink word processor and was trying to sort out how much about the process I can legally share. The sibling post has the link to the LoC rulemakings that define the exemption categories. These exemptions are the same basis for any phone jailbreak, which makes me suspect it could be legal to publish methods as well as do it yourself, but I'm still unsure.

Very curious what you expect to move to. The market outside those options is extremely limited.


It is, but I'm willing to compromise. GrapheneOS can be an option for a while, and ultimately a Linux phone. Worst case, I can settle for two phones for a while: one cheap/old stock Android exclusively for the bank and such, and another for everything else.

It's also a long game; the way things are shaping up, I don't expect alternatives to become mainstream, but I do expect them to get better support over time.

If we do end up in a situation where there is no viable alternative, then screw that, I might as well go completely off-grid.


This is why we need to pass right-to-repair (or right-to-modify) legislation and return control over your own belongings.

This must supersede any TPM/copyright restrictions or other encumbrances.


The telltale sign for me is that chopping an onion causes the layers to separate from pressure before the knife starts to cut in. Super dangerous.


Yeah, we should change that. Corporate life without parole: sorry, you don't get to be a business anymore, bye.


And all those employees and customers are punished for the crimes of a few.


For egregious cases, yes. Absolutely. That very short-term pain is almost instantly offset by the societal gain brought about by companies' better adherence to the law. It's incredible just how much good it would do, and how quickly it would happen.

And please don't assume "you wouldn't if it were your own employer" - no, I very much would, despite the struggles it would cause.


Companies can't think and act. Employees at companies commit crimes. Let's say that a boss at Microsoft commits a crime.

Entire company shut down. All employees fired. All servers shut down. All Windows computers stop working. All companies using Azure get their stuff turned off. And so on.

Is the world a better place?


In that second? No. A week later when, as a result, every other company has gotten its act together? Yes, the world is a far better place.


Not an expert in this area at all, but I suspect there's an aspect here of maintaining productive capacity to replace imported food, if that became unavailable. Cursory searching suggests US food exports are similar in scale to food imports. Roughly speaking, with some adaptation that export capacity could be redirected toward domestic nutrition.

If, in contrast, you let those farms and skills dry up, it would be difficult to rebuild them quickly.


How does that capacity work? If the US needs food because supplier countries halt exports during a crisis, those farmers can't just swap out what they produce. Depending on the time of year, it might take 12 months to swap export products for something else. This is not "standby capacity".


Many things that can stop imports take a lot longer than 12 months to resolve. Wars can take several years, for example.

Getting a domestic replacement after 12 months of belt-tightening would still be a lot better than having to deal with several more years of that.

Also, I'd expect there are cases where we export X and import Y, and if Y were cut off we could switch to using X somewhat as a substitute. I don't know offhand of any specific plant foods where this is the case, but pork and beef illustrate the idea. We import about 15% of our beef. If that got cut off, we could cut some pork exports and use that as a substitute. It's not a great substitute, but it is calories that can get you through a crisis.


I get that aspect. My assumption is that, due to scale, location, etc., a farm that transitions from its current owner to a different owner will still be a farm with (possibly the same) employees. I don't see 30% of the farmland in Arkansas (assuming a foreclosure rate that high) suddenly becoming new-build city centers, or factories, or suburbs. It works as farmland, so someone will probably buy it to use as farmland.

I don't see why it's strategically important to the US taxpayer that one millionaire own it vs a different millionaire (or corporation).


Because it's bad for consumers to lose choices, even if they don't normally exercise those choices. Choice is the distributed power we have against consolidated corporate power. We can choose not to let them restrict those choices, for example with interoperability regulations.


There is interest, but thanks to DMCA 1201, they put a thin veneer of encryption on it and suddenly it's a felony to make or use that third-party tool.


We could, of course, repeal DMCA section 1201, but no one in government wants to do that.


The difficulty of fighting against things which are concentrated benefits to a few, and diffuse harms to everyone else. :/


This is a real loss. I'll miss Glitch for rapid experimentation with UI and APIs together.

