Very cool. This seems like such a "Duh. Users want this" feature. I wish it had been integrated into Firefox years ago. I bookmark some sites and then come back years later, only to find the URL has been hijacked or its parameters have changed.
And then when I return, I get a 404. Instead of bookmarking, I'd love to "capture" the current info, divs, and graphics.
You could do that with the Firefox extension "SingleFile" [1]. Its main purpose is to save a full webpage as one single HTML file with images etc. encoded inside it (no more messy HTML file + folder clutter), very neat. There is also an HTML/zip hybrid variant called SingleFileZ [2] that produces smaller saved files.
In the settings you can configure it so that pages you bookmark are saved automatically, and also link the bookmark to the locally saved file if you want.
That looks great! Firefox should have something like SingleFile built-in. Instead of just copying features from Chrome, they should lead the innovation and add useful features like this.
I've been using SingleFile for a couple of months now, to auto-save every page I visit, and it works well. It's exactly as simple as it needs to be. My only real complaints are that it can be slow on large pages and that it pollutes your download history (every auto-save shows up as a download), both of which I suspect are due to limitations in the extension API.
While it's far from an ideal solution, you can periodically submit your bookmarks to archive.org's "Save Page Now!" service. It's easy to semi-automate - here's how I use it with pinboard.in bookmark exports: https://pastebin.com/uUVE22RD
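For reference, a minimal sketch of the same idea in Python (not the linked pastebin script; the file name, the pacing, and the use of the requests library are my own assumptions). It just reads URLs from a text file, one per line, and hits the Wayback Machine's "Save Page Now" endpoint for each:

    # Sketch: submit a list of URLs to the Wayback Machine's Save Page Now
    # endpoint. Assumes urls.txt holds one URL per line, e.g. extracted
    # from a pinboard.in bookmark export.
    import time
    import requests

    with open("urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        # Requesting web.archive.org/save/<url> asks the Wayback Machine
        # to capture that page now.
        resp = requests.get("https://web.archive.org/save/" + url, timeout=60)
        print(resp.status_code, url)
        time.sleep(5)  # be polite; the service throttles aggressive clients

Feeding it a pinboard export just means flattening the export to a plain list of URLs first.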
This solution is on the user side, which is great because each person can get and manage saved pages for themselves.
But if we're looking for a developer side solution, then making pages that last an order of magnitude longer may be better for everyone in the long run, e.g. https://jeffhuang.com/designed_to_last/
I like the ability to use tags too -- I've got various product/tech ideas and it's nice collecting information with it and not having to worry about the pages going away or changing.
Does it capture your view of the page, though (i.e. use your own browser, with its cookie jar, to do the scrape)? I'd like to, for example, snapshot my Facebook feed.
I made an application[0] that does capture your view. It's screenshot-based. It works outside of a browser too - anything on your screen. All local too, so no privacy concerns :)
No, it does not. That means it also doesn't work for any news sites that you have subscriptions to.
I'm using the Joplin web clipper pretty heavily for this purpose.
Afraid of losing the contents of a page, I used to save it as a PDF in the cloud. Far from ideal, but it happened way too often that I'd go back to a page and get a 404 error or the like.
If enough people did this, it would provide more pressure for websites not to just nuke their old URL schemes every time they switch from WordPress to Drupal...
I disagree. I spent time with Terraform a few years ago while working with a client, and Terraform had the ability to create but not tear down resources for some services. I was shocked -- check out the GitHub issues history. I ended up writing a "bunch of bash script wrappers or similar around it".
Yeah, this is exactly what I was talking about. It's not to say that CloudFormation can't run into the same thing - deleting S3 buckets is difficult in any situation - but there are a lot of things Terraform doesn't/won't do, and often you're left building your own second layer of automation to work around it.
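To make that concrete, here is a hedged sketch of what such a second layer can look like, in Python with boto3 rather than bash (the bucket name is a placeholder, not anyone's real setup): empty the versioned S3 bucket that terraform destroy chokes on, then let Terraform tear down the rest.

    # Sketch of a "second layer of automation" around Terraform.
    import subprocess
    import boto3

    BUCKET = "my-terraform-managed-bucket"  # hypothetical bucket name

    def empty_bucket(name: str) -> None:
        # An S3 bucket must be empty (including old object versions)
        # before it can be deleted, which Terraform won't do for you.
        bucket = boto3.resource("s3").Bucket(name)
        bucket.objects.all().delete()
        bucket.object_versions.all().delete()

    empty_bucket(BUCKET)
    subprocess.run(["terraform", "destroy", "-auto-approve"], check=True)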
Too bad the Wikipedia entry is completely wrong about when he coined the term. If it was in response to the July 2011 tornadoes in Joplin, then how come I have a video of him giving the WFI anecdote in November 2009 at RHOK #0 (Random Hacks of Kindness hack day)?
I like the Refactoring UI YouTube series.[1] Schoger takes a real website and makes incremental changes to make it gorgeous. He justifies his decisions and generally gives you a good intuition for why he makes a given tweak, which has helped me far more than any other tutorial or blog post I've followed.
- Web Design in 4 Minutes https://jgthms.com/web-design-in-4-minutes/
- Beginner’s Guide to the Programming Portfolio https://leerob.io/blog/beginners-guide-to-the-programming-portfolio
- Improving My Next.js MDX Blog https://leerob.io/blog/mdx
It would be amazing to see the Game Boy/N64 emulator hacker community expand, and NEW games begin to be developed based on these docs. Maybe Nintendo could double down on this and turn it into good publicity.
The article mentions that this would be unwise due to these documents being released illegally. Reverse engineering some existing system for the purpose of interoperability (as is common among homebrew enthusiasts) is legal; hacking into a Nintendo subcontractor's IT and leaking internal documentation obviously isn't.
There is a comment at the bottom of the article from an expert in the Air Force:
> William Hargus, lead of the Air Force Research Laboratory's Rotating Detonation Rocket Engine Program, is a co-author of the study and began working with Ahmed on the project last summer.
> "As an advanced propulsion spectroscopist, I recognized some of the unique challenges in the observation of hydrogen-detonation structures," Hargus said. "After consulting with Professor Ahmed, we were able to formulate a slightly modified experimental apparatus that significantly increased the relevant signal strength."
> "These research results already are having repercussions across the international research community," Hargus said. "Several projects are now re-examining hydrogen detonation combustion within rotating detonation rocket engines because of these results. I am very proud to be associated with this high-quality research."