avidal's comments | Hacker News

Very cool. I've been using worktrees a lot recently at work, mostly reworking tools or CI jobs that try to manage branches so that they just use a worktree and clean up after themselves.

It's made a big difference in readability and cleanliness. I'll often use, eg, `mktemp -d` with a template argument that's relevant to the usage, then use the base name of the temporary directory as the worktree branch, followed by `git fetch <remote> <remote-branch>:<worktree-branch>`.
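Something like this, roughly (the remote and branch names here are just placeholders):

    # temp dir named after the task; its basename becomes the worktree branch
    dir=$(mktemp -d /tmp/ci-lint-XXXXXX)
    branch=$(basename "$dir")

    # fetch the remote branch into that local branch, then check it out as a worktree
    git fetch origin some-remote-branch:"$branch"
    git worktree add "$dir" "$branch"

    # ... do the work in "$dir" ...

    # clean up after yourself
    git worktree remove "$dir"
    git branch -D "$branch"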

I've been thinking about using worktrees more for my general development, since I'm frequently working across multiple branches.


I regret not finding an opportunity to check out the old place on one of my many visits to the SF office, but it's awesome to see work on the new place finally coming to a close.

Didn't realize it had been so long, but I suppose I left Cloudflare like 3.5 years ago and I know it was a bit of a pain in the ass just trying to buy the property here in Austin in the first place.


Eh, once you see the new one you'll realize you didn't miss anything at the old place. Hope you can make it to a party later this year!


From context, I'm assuming Microsoft / Amazon / Google, referring to Azure / AWS / Google Cloud respectively.


Yep.

Because when I think reliable cloud infra, I think Azure.


Heya Dave! I see your name pop up every once in a while in various places. In case you don't recall, we first connected on that MUD many, many years ago.


Ha hi man! I actually played the MUD for a while recently, fun fun. Hope you're doing well. Think we're still connected on LinkedIn :D


vscode.dev is still for local development, not remote. It uses browser filesystem APIs to access your local filesystem. Whereas code-server runs on the remote and exposes a web server for you to connect to which serves up the filesystem on the remote.
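Roughly, the code-server flow looks like this (the port and addresses are just the usual defaults, so adjust for your setup):

    # on the remote: code-server serves a web UI backed by the remote filesystem,
    # listening on 127.0.0.1:8080 by default
    code-server

    # on your local machine: tunnel that port over SSH and open it in a browser
    ssh -N -L 8080:127.0.0.1:8080 user@remote-host
    # then visit http://localhost:8080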


You can connect to a remote git repository.


Yeah. But is that just for editing/committing? Can it remote into a server via SSH, run extensions there? I'm guessing not.


This was an incredible resource back when CBVs were first introduced. While I tended to prefer function-based views (when I was still heavily using Django), this page was the first one I opened when I knew I was going to be using a CBV.

Thanks to the authors/maintainers!


Cf has really mature trust-and-safety and security teams and tooling. I've no doubt they've taken bad actors into account, considering how many of their other offerings have also attracted bad actors.


No question. They've been dealing with that at hyperscale for many years now. The only question is how they've already decided to go about it.

They'll certainly want to make a big splash with the product, given the cloud giants they're taking on. That's better done with fewer limitations initially, even if they know ahead of time the various ways they'll restrict abuse over time.


Yeah, 8chan definitely weren't bad actors and Cloudflare definitely didn't have to make a PR statement about booting them from their services.


These things are very cool. I buy wood from a local wood yard every year or two (I don't use much here in Texas, but we do love our firepit and fireplace when it's cool enough), but the splits I get are often too large to easily burn. I'll often buy a bag of kindling when I pick up firewood, but the kindling doesn't last long.

The main reason I didn't get a kindling cracker is that I don't have a stable base to mount it to, although I suppose I could manufacture one next to the wood rack. I did, however, get a hydraulic wood splitter[1] which at least helps split some of the really large logs into something more manageable. It's pretty simple to use and very effective, even on the very knotty and hard oak wood out here.

[1] https://www.harborfreight.com/lawn-garden/outdoor-power-tool...


As far as "vote rings" are concerned, the CTO (and others) would emphatically remind people not to upvote to ensure we didn't end up getting flagged.


Have you seen gron[0]? It's similar: it flattens JSON documents to make them easily greppable. But it can also revert (i.e., ungron), so you can pipe json to gron, grep -v to remove some data, then ungron to get it back to json.

[0] https://github.com/tomnomnom/gron
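For example (filename and pattern are made up):

    # flatten to one assignment per line, drop what you don't want, rebuild the json
    gron api-response.json
    # output looks roughly like: json.items[0].name = "foo"; json.items[0].secret = "..."; etc.
    gron api-response.json | grep -v secret | gron --ungron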


Actively developed and written in Go.

The project linked here is from 2014, with its last update in 2015. And it's on NPM...

What is left to say? Thank you!


What's wrong with it not being updated in 5 years? It's a simple script. It's probably stable and doesn't need to be "actively developed".


I see this sentiment a lot. I agree with you: simple things can be finished, and then they don't need active development.


I much prefer jq[1]. Actively developed and written in C.

[1]: https://github.com/stedolan/jq


Used to be written in PHP, right?


I haven't seen that one, but it looks very similar to the one I use: https://github.com/micha/json-table


That's actually a nice lifehack. Much simpler than jq. Unfortunately, it would be harder to express the kinds of logical conditions that jq allows (even if jq isn't that intuitive).

It still feels like there must be something in between: some way to query JSON more naturally than with jq, yet with enough power.


jq is certainly a unique language, which makes it unfamiliar to work with at first. Intuitiveness and a natural feel come once you've gotten familiar with it through a bit of practice and reading the manual, though. It's a very well thought-out language with a very nice design.

It might help to recognize how it's influenced by shell languages and XPath, if you're familiar with those.


Well, no arguing there, it is indeed. And I use it from time to time. However, it's not like I need a tool like that every day, and if I'm not using it for a week I usually need to "learn" it all over again.


What does grep + gron give you over jq?


jq seems very powerful. I don't deal with json all that often and my most common use case (by far) is `jq '.' -C` and it took a few tries for me to remember that syntax.

The idea of flattening, grepping, then reverting sounds very appealing and sounds like a better fit for me.


> `jq '.' -C` and it took a few tries for me to remember that syntax.

I don't think you really need either `.` or `-C`. Just `jq` seems to produce the same colored output of the input by default.


It does look like neither is needed if you pipe a file into jq, but `jq . file.json` requires the `.`, and if you're piping into a pager like less, you need both `.` and `-C` to get colored output (that was the case with the alias I had pulled up). I'm using 1.5 and haven't checked whether 1.6 changes this.


I see. I doubt that behavior has changed, then.

`-C` would be required when piping because most of the time (with the exception of piping into less) when stdout is not a terminal, it doesn't make sense to include terminal color escape sequences. You'd end up with those codes in your files, and grep would be looking at them for matches, for example.

`.` would be required when passing the file as an argument instead of stdin, because jq interprets the first argument as jq-code. If you don't include `.` it would interpret the filename as jq-code.
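Concretely, something like this (the file name is just an example; this matches jq 1.5-era behavior as I remember it):

    jq . file.json               # '.' is the filter; without it, jq would treat the filename as the program
    jq . file.json | less        # color is dropped because stdout isn't a terminal
    jq -C . file.json | less -R  # -C forces color; less -R renders the escape codes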


`.` is still needed if I'm piping in JSON, but only when I'm also piping out. Otherwise the help text goes to stderr and nothing goes to stdout.

I do honestly think jq is a cool and powerful tool. I also appreciate little things like auto-coloring when appropriate, which git does too. Git also uses your pager, which might make my personal use case moot.


Wrong question: It's not a competition.

There are cases when you have some complicated json and just want to search for stuff. Then you use grep + gron.

There are cases when you want a complete json processing tool. Then you use jq.

You can probably simulate each approach with the other, but the code needed to do this is just too tedious to write. So you use whichever tool fits your use case.


No, but you still have to make a decision about which tool to use. So it's helpful to have a sense of the use cases for each.


I find it useful when I don't know what the json schema is. Then you can just do a quick gron + grep and find where the interesting parts of a large json document are.


As far as I can tell, jq doesn't do flattening.


Not built in, but @jolmg posted a script here which does the needful.
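(Not that script, but for illustration, a gron-ish flattening can be expressed in jq along these lines:)

    # print one "path = value" line per scalar leaf
    jq -r 'paths(scalars) as $p
           | "\($p | map(tostring) | join(".")) = \(getpath($p) | tojson)"' file.json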


Maybe GP meant that jq can do selection as well, i.e. that grepping is redundant after jq. But jq is much more complicated to learn and grep works on all inputs (not just json), so it makes a lot more sense to learn and use grep properly.


Simplicity.


Love the greppability and reconstructability. This should be the submission.

