To replace inside single quotes, you can do `mi'c`. There's nothing restricting it to the current line, but you see the selection before you hit `c`. That's the big benefit of the noun-verb model that Helix uses.
Helix is my daily driver. I love it! Kakoune's editing model plus built-in tree-sitter and LSP support work great with zero config. My only gripes are that I wish it had Kakoune's client-server model and collaborative editing.
Removing the income limit without increasing the benefit amount (at least not at the same rate) would be a huge improvement and should be about as simple as any tax change can be.
Magnetic disks, at least, are iops-constrained, so lower-iops loads conceivably allow higher density, or packing different load patterns onto the same devices. Say an 8 TB / 100 iops disk reserves 90 iops for a 1 TB database service: that's 87.5% of the disk's capacity sitting free but only 10 iops left to serve it with. Adding what is effectively an iops tax to discourage frequent reads is one way to make a mixture like this work (or, another way to think of it, subtracting an iops discount).
Obviously the example above is contrived, but the same principle applies to a pool of 1000 disks as it would to 1. You also don't escape this issue with regular hot storage: there is still a (((iops * replication count) / average traffic) / max latency) type problem lurking, which would still necessitate either limiting density or increasing redundancy according to the expected IO rate. This is one reason why some S3 alternatives with weaker latency bounds (not naming names; they're great, but it's just not the same service) can often be made substantially cheaper, and why at least one of S3's storage classes may be implemented entirely as an accounting trick, with no data movement or hardware changes at all.
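To put numbers on the contrived example (all figures come from the comments above; they're illustrative, not real hardware specs):

```rust
fn main() {
    // Contrived example: an 8 TB disk with 100 iops total,
    // where 90 iops are reserved for a 1 TB database service.
    let (disk_tb, disk_iops) = (8.0_f64, 100.0_f64);
    let (db_tb, db_iops) = (1.0_f64, 90.0_f64);

    let free_capacity = (disk_tb - db_tb) / disk_tb; // fraction of disk left
    let free_iops = disk_iops - db_iops;             // iops left to serve it

    // 87.5% of capacity is stranded behind a 10 iops budget.
    assert!((free_capacity * 100.0 - 87.5).abs() < 1e-9);
    assert_eq!(free_iops, 10.0);
}
```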
> The differences stack up for say, a 1GB video that becomes viral and triggers terabytes in egress. You pay for 1GB, not terabytes.
Only under the condition that you actively monitor usage and manage to "process it once" in time (and then "process it back"). Otherwise you pay for terabytes - not in egress fees, but in processing fees. Or am I missing something?
The whole point of IA is cheaper storage that is infrequently accessed, and there is a price to accessing it. If you need / want frequent access just use the regular storage class.
All object stores out there have a flavor of IA class with an access fee that should be far lower than the storage class savings for scenarios where you would even consider using this. If you don't want or understand this cost optimization you simply don't use it.
Yes, because in a well-designed setup files that are frequently accessed would be restored to standard tier. Ideally you'd only pay the data processing fee once when files transition from infrequently accessed to frequently accessed. There's a breakeven point at a data access rate of once every two months.
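A back-of-the-envelope sketch of that breakeven, with hypothetical prices (the real ones depend on the provider): IA wins while the retrieval fees you pay per month stay below the monthly storage saving.

```rust
fn main() {
    // Hypothetical prices, for illustration only.
    let saving = 0.005_f64;   // $/GB-month saved by storing in IA vs standard
    let retrieval = 0.01_f64; // $/GB charged per retrieval from IA

    // IA is cheaper while (accesses/month) * retrieval < saving,
    // i.e. below saving / retrieval accesses per month.
    let breakeven_accesses_per_month = saving / retrieval;

    // 0.5 accesses per month = once every two months.
    assert!((breakeven_accesses_per_month - 0.5).abs() < 1e-12);
}
```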
Maybe the cold-to-hot migration "tax" is partially to prevent abuse?
> "Data retrieval is charged per GB when data in the Infrequent Access storage class is retrieved and is what allows us to provide storage at a lower price. It reflects the additional computational resources required to fetch data from underlying storage optimized for less frequent access."
I like the "automatic storage classes" idea as well.
> "…you can define an object lifecycle policy to move data to Infrequent Access after a period of time goes by and you no longer need to access your data as often. In the future, we plan to automatically optimize storage classes for data so you can avoid manually creating rules and better adapt to changing data access patterns."
AWS already gives you Intelligent-Tiering for this; it's a very nice product, but it's also just a nice way of hiding the same fees. Your $0.004/GB becomes $0.023/GB on first read for 1 month, then $0.0125/GB for 2 months, so the average cost of storing it over those 3 months becomes $0.016/GB, and that's before considering monitoring fees.
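Spelling out that arithmetic (prices as quoted in the comment above, in $/GB-month):

```rust
fn main() {
    // Rates quoted above ($/GB-month).
    let first_month = 0.023_f64; // hot-tier rate for the month of first read
    let next_two = 0.0125_f64;   // warmer tier for the following two months

    // Average storage cost over the three months.
    let avg = (first_month + 2.0 * next_two) / 3.0;
    assert!((avg - 0.016).abs() < 1e-9); // matches the $0.016/GB figure
}
```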
You could also implement tiering yourself, depending on your workload of course. If you know you're storing objects for long-term archival reasons (or backups), you could opt for using S3 Glacier Instant Retrieval at $0.004/GB.
In my experience (Bazel, sample size of 2 projects), the complexity doesn't come from configuration options that have defaults, but from how well the "mental model" of the build system fits the preconceived notions of how to structure, organize, and depend on code in an existing project.
Almost none of the complexity comes from what configuration options I've registered ahead of time. It comes almost entirely from, "Well darn, this code depends on this completely unrelated part of the project. I wish it didn't, but now the build tool either needs to sometimes fail to rebuild something correctly, or it needs to build way too much to run quickly."
Ironically, this means the best time to adopt Bazel is from the very start. Even though Bazel doesn't add much at that point, it's when the cost is lowest, and it prevents impedance mismatch from being introduced.
This runs counter to how most people think of tools like Bazel: something you should only reach for when the situation has already grown out of control.
> I optimize directly for the hardware I'm running on, which typically gives me 10-100x performance improvements. Controlling how memory is managed is critical.
What makes you think you can't control how memory is managed in Rust? Rust doesn't have "automatic" memory management, it has a compiler that can help ensure you are managing memory correctly, and force you to type "unsafe" when you are doing things it doesn't understand.
> it has a compiler that can help ensure you are managing memory correctly, and force you to type "unsafe" when you are doing things it doesn't understand.
Well it's a bit more subtle than that if we're honest.
Arguably, Rust does make a number of memory layouts (self-referential structs, per-struct allocators, non-thread-local addresses, etc.) much harder to accomplish than "typing unsafe".
If you self-reference using pointers and guarantee the struct will never move, you don't even need unsafe. If you self-reference using offsets from the struct's base pointer, you need a splash of unsafe but your struct can be freely moved without invalidating its self-referential "pointers".
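A minimal sketch of the offset variant (names are illustrative). Here the offset indexes into one of the struct's own fields, so it doesn't even need `unsafe`; computing addresses from the struct's base pointer with raw pointer arithmetic would:

```rust
// The "self-reference" is an offset relative to the struct's own storage,
// so moving (or copying) the struct never invalidates it.
struct Message {
    bytes: [u8; 8],
    payload_off: usize, // offset into `bytes`, not an absolute pointer
}

impl Message {
    fn payload(&self) -> &[u8] {
        &self.bytes[self.payload_off..]
    }
}

fn main() {
    let m = Message { bytes: *b"hdr:data", payload_off: 4 };
    assert_eq!(m.payload(), b"data");

    let moved = m; // moving the struct is fine; the offset is still valid
    assert_eq!(moved.payload(), b"data");
}
```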
> If you self-reference using pointers and guarantee the struct will never move, you don't even need unsafe
I have a hard time seeing how you could use self references without a combination of raw pointers, pins/projects, and unsafe code. The tediousness of doing so is pretty much a no-go for any sane developer.
The only sane solution seems to be a generous sprinkle of Arcs, which is _not_ okay in high performance scenarios.
> Per-struct allocators are a work in progress
Yes, and most of us don't really want to use nightly in production. It's been years of work in the allocators working group already, and there are probably still years to wait before a stable release.
> Not sure what "non thread local addresses" means, but in my experience Rust is pretty good at sending data between threads
Well, I mean any way to ease storing and retrieving objects in shared memory, across processes that do not map said memory at the same address. This is very, very annoying to implement in Rust at the moment.
Don't get me wrong though, I think Rust has a lot to offer. But when you dive into even slightly technical subjects in Rust, it soon becomes obvious that the power of C/C++ is far from "just type unsafe" away.
> I have a hard time seeing how you could use self references without a combination of raw pointers, pins/projects, and unsafe code. The tediousness of doing so is pretty much a no-go for any sane developer.
You just do it. I can't recall a good example at the moment, so I've just thrown together a load of Cells: the thing I was doing when I learnt this technique didn't have any Cells in it.
> Yes, and most of us don't really want to use nightly in production.
Strictly speaking, this is only needed if you want to use Rust's standard library types with custom allocators. You've been able to have per-struct allocators for your own types since long before I learnt the language.
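A rough sketch of what that looks like on stable, with a toy bump arena (crates like `bumpalo` do this properly): the type carries a handle to its allocator instead of going through the global one.

```rust
use std::cell::Cell;

// A toy bump allocator; this is illustrative, not production-grade.
struct Arena {
    buf: Vec<u8>,
    used: Cell<usize>,
}

impl Arena {
    fn new(cap: usize) -> Self {
        Arena { buf: vec![0; cap], used: Cell::new(0) }
    }
    // Hand out a chunk of the arena; panics when full (toy semantics).
    fn alloc(&self, n: usize) -> &[u8] {
        let start = self.used.get();
        assert!(start + n <= self.buf.len(), "arena exhausted");
        self.used.set(start + n);
        &self.buf[start..start + n]
    }
}

// A type whose storage comes from a specific arena, not the global allocator.
struct Record<'a> {
    payload: &'a [u8],
}

fn main() {
    let arena = Arena::new(1024);
    let r = Record { payload: arena.alloc(16) };
    assert_eq!(r.payload.len(), 16);
    assert_eq!(arena.used.get(), 16);
}
```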
> Well I mean overall any way to allow a somewhat eased setup for storing and retrieving objects to/from shared memory, across processes that do not map said memory at the same location.
Can't you just store offsets into the memory region, and PhantomData references, then unsafely index the mmap'd region when you need an actual reference? Seems like the same thing you'd do in C, except the function you abstract that with can be a method instead. (Unless I'm still misunderstanding.)
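Something like this sketch, where a plain byte buffer stands in for the mmap'd segment and `Rel` is an illustrative name, not a real library type:

```rust
use std::marker::PhantomData;

// A "reference" that lives inside shared memory: an offset from the
// region's base plus a phantom type. It stays valid whatever address
// each process maps the region at.
#[derive(Clone, Copy)]
struct Rel<T> {
    off: usize,
    _ty: PhantomData<T>,
}

impl<T> Rel<T> {
    fn new(off: usize) -> Self {
        Rel { off, _ty: PhantomData }
    }
    // Resolve against this process's base address; unsafe because the
    // caller must guarantee the offset points at a valid T in the region.
    unsafe fn read(self, base: *const u8) -> T {
        (base.add(self.off) as *const T).read_unaligned()
    }
}

fn main() {
    // Stand-in for an mmap'd region; pretend a u32 was written at offset 8.
    let mut region = [0u8; 64];
    region[8..12].copy_from_slice(&42u32.to_ne_bytes());

    let r: Rel<u32> = Rel::new(8);
    let v = unsafe { r.read(region.as_ptr()) };
    assert_eq!(v, 42);
}
```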
> I optimize directly for the hardware I'm running on, which typically gives me 10-100x performance improvements. Controlling how memory is managed is critical.
> What makes you think you can't control how memory is managed in Rust?
> Arguably, Rust does make a number of memory layouts (self referential structs, per struct allocators, non thread local addresses, etc) much harder to accomplish than "typing unsafe".
So basically, the right question would be: "Can you explain what you mean when you say Rust can't control how memory is managed?"
Because the author knew, the Rust supporter didn't, and confused "work in progress" with "I need it now, because I'm using it now, in production, in my daily job".
Yup, I’ve read it again, and I’m still pretty sure that you’re responding mostly to your own assumptions and not really responding to what was written :)
Anyone who writes C/C++/Rust, like I do, can benefit tremendously from Mold. I use it daily. Each time I hit save and need to run my program, Mold saves me about 10 seconds in linking time. That time adds up quickly, and prevents me from getting distracted.
I wonder how using a typical isomorphic React app performs on Fermyon using JS running with QuickJS vs on Vercel, Cloudflare Workers, or Netlify. Has the Fermyon team tested this?
The main limitation at the moment is that we only support a handful of Web APIs (e.g. `fetch`, `URL`, etc.) and Node.js APIs (e.g. `readFile`), as well as QuickJS's built-in ES2020 APIs, so if you try to run an existing app you may find it needs a Web API we don't yet support. If so, please feel free to open an issue on the `spin-js-sdk` repo.