DateTime handling, especially with timezone offsets, is crucial. If your format gets that right, it'll stand out... most formats still mess up time zones or rely on loose string parsing. It's key for stuff like logs, scheduling, or syncing data across systems. DuperGZ right after that! ;)
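For anyone unfamiliar, the behavior a format needs to preserve is the round trip: the UTC offset comes back exactly as written, not silently normalized or dropped. A minimal sketch of that reference behavior in Python (the format under discussion isn't shown here; this is just the target semantics):

```python
# Offset-aware round trip: the +05:30 must survive serialization.
from datetime import datetime, timezone, timedelta

stamp = datetime(2025, 3, 1, 9, 30, tzinfo=timezone(timedelta(hours=5, minutes=30)))
wire = stamp.isoformat()               # '2025-03-01T09:30:00+05:30'
parsed = datetime.fromisoformat(wire)  # offset-aware, no loose string parsing
assert parsed == stamp
assert parsed.utcoffset() == timedelta(hours=5, minutes=30)
```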
Nice! Are you using any specific canvas libraries for this, like KonvaJS? I recently worked on a closed-source project that used Konva and was pretty impressed with its capabilities. HTML canvas is powerful stuff if you know what you're doing.
I implemented an AV1 encoding pipeline for vids uploaded to my social news site (think Reddit competitor, except I'm extremely small fry). I eventually removed the code for it.
While the space savings and quality improvements are good, the encoding speed is an order of magnitude slower than with h264/vp9. In the end, the user experience of making people wait significantly longer for an AV1 encode wasn't worth the tradeoff. To fix the user-experience problem I still had to encode an h264 version anyway, which kinda defeats the point when it comes to space savings. You still get data-transfer improvements, but the break-even point where the data-transfer savings offset the encoding costs was around 1000 views per minute of video encoded, and on average I'm far below that.
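To make that break-even concrete, a back-of-the-envelope version of the math (Python; every number below is a made-up placeholder, not my actual costs):

```python
# Hypothetical cost model: at how many views per encoded video-minute do
# AV1's data-transfer savings repay its extra encoding cost?

def av1_break_even_views(extra_encode_cost_per_min: float,
                         bitrate_savings_mbps: float,
                         egress_cost_per_gb: float) -> float:
    # Megabits saved per viewed minute -> gigabytes saved per viewed minute.
    saved_gb_per_view_min = bitrate_savings_mbps * 60 / 8 / 1000
    saving_per_view_min = saved_gb_per_view_min * egress_cost_per_gb
    return extra_encode_cost_per_min / saving_per_view_min

# e.g. $0.05 extra encode cost per minute, 1 Mbps saved, $0.01/GB egress:
print(round(av1_break_even_views(0.05, 1.0, 0.01)))  # ~667 views per encoded minute
```

Plug in your own compute and egress prices and the threshold moves, but the shape of the tradeoff stays the same.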
IMO there's a reason why YouTube only encodes AV1 for certain videos - I suspect it's based on a view-count threshold. Past that point they trigger an AV1 encode, but it isn't worth it to do all videos, at least right now.
Worth keeping in mind I was looking at this ~2 years ago, so things may have evolved since then.
>IMO there's a reason why YouTube only encodes AV1 for certain videos - I suspect it's based on a view-count threshold. Past that point they trigger an AV1 encode, but it isn't worth it to do all videos, at least right now.
But how can they do that without storing the original uploaded video until it hits that view count?
Do they actually store the original uploaded video somewhere, but reencode for the edge servers to save data/storage?
> Do they actually store the original uploaded video somewhere, but reencode for the edge servers to save data/storage?
YouTube has always stored the original video indefinitely. When they added 60FPS support, videos going back years were suddenly available in 60FPS without having to be re-uploaded. Not many people bothered to upload in 60FPS before YouTube supported it, but those that did noticed. (I know from Rooster Teeth/Achievement Hunter, which did 60FPS before YouTube supported it, possibly because they also ran their own platform in parallel.)
AV1 is really one of those things that big content providers (e.g. Google, Amazon) put together so they could deliver content more efficiently with their bandwidth, without needing to deal with a complicated web of royalties on top of paying said royalties. Plenty of people are using AV1, or its image format AVIF, without realizing it.
Also, video encoding pretty much always comes with the tradeoff that more efficient = more processing power.
I did some testing with the three main AV1 encoders on GIFs (converted to AVIF). They're pretty good, but not as good as JPEG XL, which currently basically only Safari supports.
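If anyone wants to reproduce this, here's roughly how I drove one of the three (libaom-av1 through ffmpeg, via Python; the paths and quality settings are illustrative, and svt-av1 or rav1e can be swapped in where your ffmpeg build exposes them):

```python
# Convert an animated GIF to animated AVIF with ffmpeg's libaom-av1 encoder.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.gif",
    "-c:v", "libaom-av1",
    "-crf", "30",        # quality knob: lower = better quality, bigger file
    "-cpu-used", "6",    # libaom speed/efficiency tradeoff (0 = slowest/best)
    "output.avif",
], check=True)
```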
For most “normie” use cases, I’d recommend Cloudflare's image transforms, which are available on the free tier. I actually wrote a small Jekyll plugin for my site to auto-prefix images with their transform URL. Idk why, but shipping optimized images is just one of those things that tickles me!
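The plugin itself is Ruby since it's Jekyll, but the core is just URL prefixing. The same logic in Python (the option values here are my made-up defaults, and the zone needs transforms enabled):

```python
# Cloudflare serves transformed images from /cdn-cgi/image/<options>/<source>.
def cf_transform(src: str, width: int = 800) -> str:
    options = f"width={width},format=auto,quality=85"
    return f"/cdn-cgi/image/{options}/{src.lstrip('/')}"

print(cf_transform("/assets/img/hero.png"))
# -> /cdn-cgi/image/width=800,format=auto,quality=85/assets/img/hero.png
```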
In my experience (not professional; encoding various files for archiving or sharing), AV1 performs quite well in low-bitrate/streaming situations, and the encoder is reasonably fast (not as fast as h264, of course, but that has decades of work behind it).
But for higher-quality encoding, I personally found that h265/HEVC almost always beats it, with similar encoding time.
As for AV2, I just hope that we get a good open-source encoder.
FS calls across the OS boundary are significantly faster in WSL1, as the biggest example off the top of my head. I prefer WSL2 myself, but I avoid the /mnt/c/ paths as much as possible, and never, ever run a database (like sqlite) across that boundary; you will regret it.
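If you want to feel the difference yourself, a quick-and-dirty probe (Python; the paths are placeholders for wherever your files actually live):

```python
# Many small metadata calls is exactly the pattern that hurts across the
# WSL2 /mnt/c boundary (which goes through a 9P file server).
import os, time

def stat_storm(path: str, n: int = 2000) -> float:
    start = time.perf_counter()
    for _ in range(n):
        os.stat(path)
    return time.perf_counter() - start

print("native ext4:  ", stat_storm("/home/user/somefile"))
print("across /mnt/c:", stat_storm("/mnt/c/Users/user/somefile"))
```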
Back in 2008–2009, we had a lot of bare metal servers at SoftLayer's (Dallas, TX) facility. One of our customers ran a South American music forum, and anytime someone uploaded an MP3, the data center would honor the DMCA request and immediately stop routing traffic to the server until the issue was resolved. Now imagine what tools they might have in their arsenal in 2025.
Multiple subsea fiber cuts in the Red Sea impacting global communications
Impact Summary
Starting at 05:45 UTC on 06 September 2025, traffic traversing the Middle East that originates and/or terminates in Asia or Europe may experience increased latency due to multiple undersea fiber cuts in the Red Sea. The disruption has required rerouting through alternate paths, which may lead to higher-than-normal latencies.
This advisory is intended to raise awareness ahead of increased demand as the regions enter the start of their work week.
Current Status
Multiple international subsea cables were cut in the Red Sea. Our engineering teams are actively managing the interruption via diverse capacity and traffic rerouting, while also discussing alternate capacity options and providers in the region.
Undersea fiber cuts can take time to repair; as such, we will continuously monitor, rebalance, and optimize routing to reduce customer impact in the meantime. We’ll continue to provide daily updates, or sooner if conditions change.
This message was last updated at 19:30 UTC on 06 September 2025
The smart folks who keep these things running, despite failures like this, are incredible. Hats off to you all, good luck in the coming weeks as you have to deal with this.
> "There are 150 to 200 instances of damage to the global network each year. So if we look at that against 1.4 million km, that's not very many, and for the most part, when this damage happens, it can be repaired relatively quickly."
...
> If a fault is found, a repair ship is dispatched. "All these vessels are strategically placed around the world to be 10-12 days from base to port,"
...
> To repair the damage, the ship deploys a grapnel, or grappling hook, to lift and snip the cable, pulling one loose end up to the surface and reeling it in across the bow with large, motorised drums. The damaged section is then winched into an internal room and analysed for a fault, repaired, tested by sending a signal back to land from the boat, sealed and then attached to a buoy while the process is repeated on the other end of the cable.
When first laid, there’s an average of 1% slack across the entire run, and up to 5% in rough terrain. They don’t use that slack for repairs, though; they lay new cable between the bights, which can add up to several miles to the length.
That's pretty freaking cool!!! Got exactly what I wanted with the following prompt:
Create a 200x100mm rectangle with depth of 12.7mm, 6mm filleted corners, a 25mm center hole, 6.35mm holes in each corner offset 12.7mm each edge, with 1mm chamfer on top of center hole and 0.5mm chamfers on corner holes.
Now, just give me a picture-to-parametric-model prompt generator... and then we can get into assemblies! ;)
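For comparison, here's roughly what that prompt has to turn into, hand-sketched in CadQuery (Python). The 90-degree countersinks are my stand-in for the chamfers, and none of this is the tool's actual output:

```python
import cadquery as cq

plate = (
    cq.Workplane("XY")
    .box(200, 100, 12.7)              # 200 x 100 mm plate, 12.7 mm thick
    .edges("|Z").fillet(6)            # 6 mm fillets on the vertical corner edges
    .faces(">Z").workplane()
    .cskHole(25, 27, 90)              # 25 mm center hole, ~1 mm chamfer via 90-deg csk
    .rect(200 - 2 * 12.7, 100 - 2 * 12.7, forConstruction=True)
    .vertices()                       # corner-hole centers, 12.7 mm off each edge
    .cskHole(6.35, 7.35, 90)          # 6.35 mm holes, ~0.5 mm chamfer
)
cq.exporters.export(plate, "plate.step")
```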
Several European countries have been using banks as a form of digital authentication for years. Of course, there are strict regulations to make sure banks don't abuse their position.
I wouldn't want to use such a system with American banks, but the concept is hardly novel.
I don't want to verify anything, and I use services that don't require verification. The alleged motivation for introducing these checks is itself the error and the flaw.
What incentive does a bank have to support this? The site and the user get what they want, but from the bank's perspective they're freeloading on age verification the bank has performed (though admittedly the bank already had to do it anyway).