Very cool. I've been using worktrees a lot recently at work, mostly reworking tools and CI jobs that try to manage branches so that they just use a worktree and clean up after themselves.
It's made a big difference in readability and cleanliness. I'll often use, e.g., `mktemp -d` with a template argument relevant to the usage, then use the basename of the temporary directory as the worktree branch, followed by `git fetch <remote> <remote-branch>:<worktree-branch>`.
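A rough sketch of that flow (GNU mktemp flags; the remote, branch, and template names here are just placeholders):

```sh
# throwaway directory whose basename doubles as the worktree branch name
dir=$(mktemp -d -t lint-job-XXXXXX)
branch=$(basename "$dir")

# fetch the remote branch into a local branch named after the directory
git fetch origin main:"$branch"

# check it out as a worktree in the temp directory, do the work, clean up
git worktree add "$dir" "$branch"
# ... run the job ...
git worktree remove "$dir"
git branch -D "$branch"
```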
I've been thinking about using worktrees more for my general development, since I'm frequently working across multiple branches.
I regret not finding an opportunity to check the old place out on one of my many visits to the SF office, but it's awesome to see that the new place is finally nearing completion.
Didn't realize it had been so long, but I suppose I left Cloudflare like 3.5 years ago and I know it was a bit of a pain in the ass just trying to buy the property here in Austin in the first place.
vscode.dev is still for local development, not remote. It uses browser filesystem APIs to access your local filesystem. code-server, by contrast, runs on the remote and exposes a web server for you to connect to, which serves up the filesystem on the remote.
This was an incredible resource back when CBVs were first introduced. While I tended to prefer function-based views (when I was still heavily using Django), this page was the first one I opened when I knew I was going to be using a CBV.
Cf has really mature trust-and-safety and security teams and tools. I've no doubt they've taken bad actors into account, considering all of their other offerings that have also attracted bad actors.
No question. They've been dealing with that factor at hyper scale for many years now. The only question is how they've already decided to go about it.
They'll certainly want to make a big splash with the product, given the cloud giants they're taking on. That's better done with fewer limitations initially, even if they know ahead of time the various ways they'll restrict abuse over time.
These things are very cool. I buy wood from a local wood yard every year or two (I don't use much here in Texas, but we do love our firepit and fireplace when it's cool enough), but the splits I get are often too large to easily burn. I'll often buy a bag of kindling when I pick up firewood, but the kindling doesn't last long.
The main reason I didn't get a kindling cracker is that I don't have a stable base to mount it on, although I suppose I could build one next to the wood rack. I did, however, get a hydraulic wood splitter[1], which at least helps split some of the really large logs into something more manageable. It's pretty simple to use and very effective, even on the very knotty and hard oak wood out here.
Have you seen gron[0]? It's similar: it flattens JSON documents to make them easily greppable. But it can also revert (i.e., ungron), so you can pipe json to gron, grep -v to remove some data, then ungron to get it back to json.
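Something like this, for example (the file and key names are made up):

```sh
# flatten, drop the noisy parts, then rebuild the json
gron response.json | grep -v 'metadata' | gron --ungron
```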
That's actually a nice lifehack. Much simpler than jq. Unfortunately, it would be harder to express all the kinds of logical conditions that jq allows (even if jq isn't that intuitive).
It still feels like there must be something in between: some way to query json more naturally than with jq, yet with enough power.
jq is certainly a unique language, which makes it unfamiliar to work with. The intuitiveness and natural feel come once you've put in a bit of practice and read the manual, though. It's a very well-thought-out language. A very nice design.
It might help to recognize how it's influenced by shell languages and XPath, if you're familiar with those.
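For instance, filters compose with `|` like a shell pipeline, and path expressions read a bit like XPath (the input file and field names here are hypothetical):

```sh
# pipeline-style composition: take each item, keep the active ones, project the name
jq '.items[] | select(.active) | .name' data.json
```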
Well, no arguing there, it is indeed. And I use it from time to time. However, it's not like I need a tool like that every day, and if I'm not using it for a week I usually need to "learn" it all over again.
jq seems very powerful. I don't deal with json all that often, and my most common use case (by far) is `jq '.' -C`; even that took a few tries for me to remember.
The idea of flattening, grepping, then reverting sounds very appealing, and like a better fit for me.
It does look like neither is needed if you pipe a file into jq, but `jq . file.json` requires the `.`, and if you're piping into a pager, like less, you need both `.` and `-C` to get colored output (that was the case with the alias I had pulled up). I am using 1.5 and haven't looked to see if 1.6 changes this.
`-C` would be required when piping because most of the time (with the exception of piping into less) when stdout is not a terminal, it doesn't make sense to include terminal color escape sequences. You'd end up with those codes in your files, and grep would be looking at them for matches, for example.
`.` would be required when passing the file as an argument instead of stdin, because jq interprets the first argument as jq-code. If you don't include `.` it would interpret the filename as jq-code.
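A few invocations to illustrate (jq 1.5 behavior as described above; `less -R` is needed so the pager renders the escape codes):

```sh
jq . file.json                 # '.' required: jq reads the first arg as the filter
jq -C . file.json | less -R    # '-C' forces color even though stdout is a pipe
jq . file.json > out.json      # omit '-C' here, or the escape codes land in the file
```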
`.` is still needed if I'm piping in json, but only when I'm piping out. Otherwise the help text goes to stderr and nothing goes to stdout.
I do honestly think jq is a cool and powerful tool. I also appreciate little things like auto-coloring when appropriate (git does this too). Git also uses your pager, which might trivialize my personal use case.
There are cases when you have some complicated json and just want to search for stuff. Then you use grep + gron.
There are cases when you want a complete json processing tool. Then you use jq.
You can probably simulate each approach with the other, but the code needed to do this is just too tedious to write. So you use whatever tool fits your use case.
I find it useful when I don't know what the json schema is.
Then you can just do a quick gron + grep and find where the interesting parts of a large json document are.
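Something like this (the document and keys are made up); the flattened paths on the left tell you exactly where those values live:

```sh
$ gron large-response.json | grep -i price
json.results[0].offer.price = 19.99;
json.results[12].offer.price = 24.5;
```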
Maybe GP meant that jq can do selection as well, i.e. that grepping is redundant after jq. But jq is much more complicated to learn and grep works on all inputs (not just json), so it makes a lot more sense to learn and use grep properly.