1) IRC doesn't support any form of message threading. There are some user conventions like "username:" or "@username" to direct a message to a specific person, but there's no way to make a clear distinction between multiple conversations in a channel. (You can make a new channel for the conversation, I guess, but that's even more hostile than usual to other users trying to follow along.)
2) IRC doesn't support messages longer than ~400-500 bytes. There's an internal hard limit of 512 bytes per line in the IRC protocol, and part of that gets silently consumed by fields like the sender's hostmask and the channel name (there's a sketch of a raw message line after this list). If you try to send a longer message, it's silently truncated. (Your client will probably echo it back as if it all went through, but other users will see it truncated.) Trying to discuss anything longer than a single line of text typically requires users to post it to an online "pastebin" - adding another external tool you need to make IRC work.
3) There's no reliable, standard way to transfer non-text content on IRC. DCC is unreliable, slow, requires one or both users to set up their firewall to allow incoming connections (!), and only transfers files to one user at a time. All of this combines to make IRC incredibly awkward for discussing audiovisual content, like graphical designs or audio/video. Sure, you can upload screenshots or videos to another web site and post links to IRC - but that's yet another external tool you have to bring in.
4) User authentication and authorization in IRC is incredibly crude. The protocol is built around the understanding that users can change their username at any time (!!), with systems like NickServ bolted on after the fact to "protect" names. User permissions are tied to a user's presence in a channel; private channels are implemented with a single shared password. All of this is "good enough" for a lot of social IRC networks, but completely unsuitable for a business setting which demands a higher level of assurance.
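To make the length limit concrete, here's roughly what a relayed message looks like on the wire (the nick, host and channel are made up); everything before the trailing text counts against the 512-byte line limit, CRLF included:

  :alice!~alice@client.example.com PRIVMSG #some-long-channel-name :the actual message goes here

The longer the sender's hostmask and the channel name, the less room is left for the message itself, which is why the usable payload in practice lands somewhere around 400-500 bytes.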
In the author's scenario, there is zero benefit to using NVMe/TCP, as he just ends up doing a serial block copy using dd(1), so he's not leveraging concurrent I/O. All the complex commands can be replaced by a simple netcat.
On the destination laptop:
$ nc -l -p 1234 | dd of=/dev/nvme0nX bs=1M
On the source laptop:
$ nc x.x.x.x 1234 </dev/nvme0nX
The dd on the destination is just to buffer writes so they are faster/more efficient. Add a gzip/gunzip on the source/destination and the whole operation is a lot faster if your disk isn't full, i.e. if you have many zero blocks. This is by far my favorite way to image a PC over the network. I have done this many times. Be sure to pass "--fast" to gzip as the compression is typically a bottleneck on GigE. Or better: replace gzip/gunzip with lz4/unlz4 as it's even faster. Last time I did this was to image a brand new Windows laptop with a 1TB NVMe. Took 20 min (IIRC?) over GigE and the resulting image was 20GB, as the empty disk space compresses to practically nothing. I typically back up that lz4 image and years later, when I donate the laptop, I restore the image with unlz4 | dd. Super convenient.
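A minimal sketch of the compressed variant, assuming lz4/unlz4 are installed on both ends (same placeholder device names and port as above):

On the destination laptop:

$ nc -l -p 1234 | unlz4 | dd of=/dev/nvme0nX bs=1M

On the source laptop:

$ lz4 -c < /dev/nvme0nX | nc x.x.x.x 1234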
That said I didn't know about that Linux kernel module nvme-tcp. We learn new things every day :) I see that its utility is more for mounting a filesystem over a remote NVMe, rather than accessing it raw with dd.
Edit: on Linux the default pipe buffer size is 64kB, so the dd bs=X argument doesn't technically need to be larger than that. But bs=1M doesn't hurt (add iflag=fullblock if you want dd to actually accumulate the short 64kB pipe reads into full 1MB writes) and it's future-proof if the pipe size is ever increased :) Some versions of netcat have options to control the input and output block size, which would alleviate the need for dd bs=X, but on rescue discs the netcat binary is usually a version without these options.
We don’t block archive.is or any other domain via 1.1.1.1. Doing so, we believe, would violate the integrity of DNS and the privacy and security promises we made to our users when we launched the service.
Archive.is’s authoritative DNS servers return bad results to 1.1.1.1 when we query them. I’ve proposed we just fix it on our end but our team, quite rightly, said that too would violate the integrity of DNS and the privacy and security promises we made to our users when we launched the service.
The archive.is owner has explained that he returns bad results to us because we don’t pass along the EDNS subnet information. That subnet data leaks information about a requester’s IP and, in turn, sacrifices the privacy of users. This is especially problematic as we work to encrypt more DNS traffic, since the request from Resolver to Authoritative DNS is typically unencrypted. We’re aware of real-world examples where nation-state actors have monitored EDNS subnet information to track individuals, which was part of the motivation for the privacy and security policies of 1.1.1.1.
EDNS IP subnets can be used to better geolocate responses for services that use DNS-based load balancing. However, 1.1.1.1 is delivered across Cloudflare’s entire network that today spans 180 cities. We publish the geolocation information of the IPs that we query from. That allows any network with less density than we have to properly return DNS-targeted results. For a relatively small operator like archive.is, there would be no loss in geo load balancing fidelity relying on the location of the Cloudflare PoP in lieu of EDNS IP subnets.
We are working with the small number of networks with a higher network/ISP density than Cloudflare (e.g., Netflix, Facebook, Google/YouTube) to come up with an EDNS IP Subnet alternative that gets them the information they need for geolocation targeting without risking user privacy and security. Those conversations have been productive and are ongoing. If archive.is has suggestions along these lines, we’d be happy to consider them.
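For anyone who wants to see the behavior for themselves, comparing the answers from different resolvers is enough; a quick sketch with dig (any resolver pair and domain work the same way):

$ dig @1.1.1.1 archive.is +short
$ dig @8.8.8.8 archive.is +short

If the authoritative servers really are special-casing one resolver, the two answers will differ.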
Personally, I self-host Gitea. It works extremely well.
It has webhooks that tie nicely into my (also self-hosted) Drone (CI) and Mattermost (f/oss self-hosted Slack replacement) and CapRover (self-hosted Heroku) setups.
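If anyone wants to kick the tires, a single container is enough to get started; a minimal sketch (the volume name and host ports are arbitrary; Gitea serves its web UI on 3000 and SSH on 22 inside the container):

$ docker run -d --name gitea -v gitea-data:/data -p 3000:3000 -p 2222:22 gitea/gitea:latest

Then browse to http://localhost:3000 and walk through the initial setup.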
The same is available in Chrome. I made a list of shortcuts I use here [0] -- copying a few favorites below:
shortcut: "aw", lets you type: "aw s3", "aw iam", etc.
https://console.aws.amazon.com/%s
shortcut: "amzn", searches the retail side
https://www.amazon.com/s/?field-keywords=%s
shortcut: "gm", searches through gmail (change the 0 if you use multiple accounts)
https://mail.google.com/mail/u/0/#search/%s
shortcut: "maps", searches google maps
https://www.google.com/maps/search/%s/
shortcut: "img", searches google images
https://www.google.com/search?tbm=isch&q=%s
shortcut: "wp", goes directly to the article if it exists
https://en.wikipedia.org/wiki/%s
shortcut: "yt", searches youtube
https://www.youtube.com/results?search_query=%s
I find these “shorter work weeks are just as effective” articles to be nonsense, at least for knowledge workers with some tactical discretion. I can imagine productivity at an assembly line job having a peak such that overworking grinds someone down to the point that they become a liability, but people that claim working nine hours in a day instead of eight gives no (or negative) additional benefit are either being disingenuous or just have terrible work habits. Even in menial jobs, it is sort of insulting – “Hey you, working three jobs to feed your family! Half of the time you are working is actually of negative value so you don’t deserve to be paid for it!”
If you only have seven good hours a day in you, does that mean the rest of the day that you spend with your family, reading, exercising at the gym, or whatever other virtuous activity you would be spending your time on, are all done poorly? No, it just means that focusing on a single thing for an extended period of time is challenging.
Whatever the grand strategy for success is, it gets broken down into lots of smaller tasks. When you hit a wall on one task, you could say “that’s it, I’m done for the day” and head home, or you could switch over to something else that has a different rhythm and get more accomplished. Even when you are clearly not at your peak, there is always plenty to do that doesn’t require your best, and it would actually be a waste to spend your best time on it. You can also “go to the gym” for your work by studying, exploring, and experimenting, spending more hours in service to the goal.
I think most people excited by these articles are confusing not being aligned with their job’s goals with questions of effectiveness. If you don’t want to work, and don’t really care about your work, fewer hours for the same pay sounds great! If you personally care about what you are doing, you don’t stop at 40 hours a week because you think it is optimal for the work, but rather because you are balancing it against something else that you find equally important. Which is fine.
Given two equally talented people, the one that pursues a goal obsessively, for well over 40 hours a week, is going to achieve more. They might be less happy and healthy, but I’m not even sure about that. Obsession can be rather fulfilling, although probably not across an entire lifetime.
This particular article does touch on a goal that isn’t usually explicitly stated: it would make the world “less unequal” if everyone was prevented from working longer hours. Yes, it would, but I am deeply appalled at the thought of trading away individual freedom of action and additional value in the world for that goal.
Our home has a rotary phone in most of the rooms (they're cheap on ebay and easy to repair if they're not working). Each one is plugged into a Grandstream HT802, which gets it onto our home network. A raspberry pi runs FusionPBX, which gives each phone a number, and lets them all dial each other. The kids love it!
There's a hidden setting when configuring geo on campaigns.
By default it's set to "People -interested- in your target location". You need to change it to "People who are -in- your target location".
This setting is hidden under a toggle, so it's very easy to miss. Definitely a dark pattern and results in a lot of garbage clicks if you overlook that setting.
This is just one of many dark patterns that make Google Ads effective only for people willing to spend the time tuning and tweaking every single setting.
A big part of the problem is Google themselves - they say always use "broad" keyword matches (and of course it's the default). Broad matches are really not good for most campaigns unless you have an extremely large budget, yet if you read their documentation they heavily encourage it.
While we're at it...
1) Never enable the "auto apply recommendations" setting. If you do, it gives Google free rein to modify your campaigns (this has always resulted in worse performance and more spend, in my experience)
2) Never listen to a Google Ads rep if they call. Once you're spending enough, they'll call you every week trying to convince you to change various settings. 95% of the time their advice is just plain bad. The quality of the advice does increase once you're spending enough and get assigned more senior reps. But even the senior reps are there to get you to spend more money; their job is not to make your campaigns more effective. "Ad specialists" are simply salespeople in disguise.
Jeremy, thanks for Chitchatter and for bringing this up here. The first feedback I can give you is about the threat model and the associated OPSEC measures mentioned in the README.
Anyone with knowledge of the room UUID can listen to the conversation, even though the presence of the eavesdropper may (or may not) show up in the connected peers' counter. It is essential to share such UUIDs over a secure channel, or the communication's security is trivially compromised.
The README mentions the relevance of government-level threat actors, and lists email, SMS and Discord among the possible mediums over which the UUIDs can be shared. Of course this leaves Chitchatter open to attack by governments and network operators, who do have access to the phone network(s), email servers, and other platforms involved. It would be best to prefer other out-of-band channels, or to come up with one-time room-UUID generators that peers can share and keep in sync.
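As a rough illustration of that last idea (this is not how Chitchatter works today; the pre-shared secret, the hourly time bucket and the use of openssl are all assumptions on my part): both peers could derive the current room ID from a secret exchanged in person plus a coarse timestamp, so the ID rotates on its own and never has to travel over the insecure channel at all.

$ secret="something-exchanged-in-person"   # hypothetical pre-shared secret
$ bucket=$(date -u +%Y%m%d%H)               # changes every hour, so room IDs expire
$ printf '%s' "$bucket" | openssl dgst -sha256 -hmac "$secret"

The hex digest isn't a UUID, but mapping part of it into whatever room-ID format the app expects is straightforward; the point is only that both sides can recompute it independently and stay in sync.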
See, I'm pro freegan nugget liberating, but as you said, definitely anti-this. I had a friend w an IM client in 2010 or so (maybe Trillian?) with a plugin that notified him if I opened his profile to chat w him, as well as telling him whether I was really invisible, and finally one that gave a desktop notif if I was typing to him (NOT the typing indicator in a chat window with me).
We were both into computers so he acted like it was a funny/novel piece of tech, but he used it in daily life. Felt like stalkerware. You don't stalk your friends, and you don't violate their consent, in the same way that friends don't use a patched Snapchat client that disables screenshot notifs/keeps photos. That's creeper shit!!!
I'll be trying this out, it could work well with Cloud Torrent. I've largely moved away from torrents tho and switched to using NZBs and apps that automatically download tv shows and films[0]
Hardware is a pair of HP ProLiant Gen8 microservers, Ubuntu 14, Docker, nginx and LetsEncrypt. There is no really easy way to set this all up; you have to do each part of the stack yourself (a docker-compose file would go a long way toward simplifying it)
[0] I spend over $200 a month on content subscriptions so I don't feel bad about utilizing the convenience of NZB downloads + Plex
I have started doing something completely different than using bookmarks. I set up yacy[1] on a personal, internal server at my home, which I can access from all my devices, since they are always on my wireguard vpn.
Yacy is actually a distributed search engine, but I run it in 'Robinson mode' as a private peer, to keep it isolated, as I just want a personal search of only the sites I have indexed.
Anytime I come across something of interest, I index it with yacy, using a depth of 0 (since I only want to index that one page, not the whole site). This way, I can just go to my search site and search for something, and anything related that I've indexed before pops up. I found this works way better than trying to manage bookmarks with descriptions and tags.
Also, yacy will keep a cache of the content which is great if the site ever goes offline or changes.
If I need to browse, I can go use yacy's admin tools to see all the urls I have indexed.
I have been using this for several months and I am using this way more than I ever used my bookmarks.
It's interesting how AWS can keep prices so high on these. But that's just the beginning; the real money comes when they convince you to run over a dozen VMs/containers (all needing storage etc., of course).
You need to be triply redundant across 3 availability zones (3x), for both the RDS db cluster and the app containers (2x). And then have separate dev/staging/prod envs (3x). That's 18x.
You can then get a pat on the head ("pass the AWS Well-Architected review"). Then they innovate "serverless" stuff where you can pour in developer months and years to save on these overpriced things by making everything more complex, harder to monitor/debug, and dependent on new AWS-specific tech you have to learn, so you can get faux savings. Here they're buying your long-term lock-in by getting AWS specialists and advocates onto your long-term payroll. They're doing certifications and investing in AWS careers that can carry over to the next employer too.
And don't even get me started on how much work by this time has gone into building the infra-as-code to manage the Rube Goldberg machine; you'll (seriously) have more lines of CDK code than app logic (code that was slower to develop & harder to debug, per line, than your actual app code).
Just about now someone in your newly grown cloud-herding engineering org champions the idea "I know what will help, let's start using Kubernetes". They've probably never costed or self-administered a server in production use, or if they have, they know to keep quiet.
If you build a product, no matter what, you have to be honest with yourself, and imho most of the neural voices from Azure sound better than your example. They may miss some of the timbre of your voice, but the timbre comes from the examples you fed it... tbh it's not much better than doing it yourself with something like https://github.com/neonbjb/tortoise-tts
This doesn't really seem to do what Tailscale is doing, which is to create a mesh network with a central beacon node for facilitating handshakes.
I am currently researching this area and have found the following solutions in the mesh VPN space. In order of how locked down the source code is (which also seems to correlate with ease of use), there are Tailscale, ZeroTier, Netmaker, Nebula, and also Innernet (this last one is Mac/Linux only).
MTUs are fun. We (a live video streaming platform) were recently investigating issues with users having trouble watching our live video streams when using a VPN. Since we're using WebRTC on the viewer end, we immediately thought it was just some WebRTC "protection" trying not to leak the user's IP.
Eventually we figured out that Open Broadcaster Software has a compiled-in max MTU of 1392 for the UDP packets we're sending. Generally this is fine because most routers have a default MTU of 1500; however, when coupled with some of the VPN technologies, the encapsulation overhead ends up pushing packets over the limit and the video packets get dropped.
Overall, MTUs seem to be a not-well-understood thing, because answers on the internet vary wildly on the appropriate ways of handling them. The consensus from some Google/WebRTC folks seems to be that 1200 is a safe and fast default.
> Anyway, 1200 bytes is 1280 bytes minus the RTP headers minus some bytes for RTP header extensions minus a few "let's play it safe" bytes. It'll usually work.
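If you want to probe a path's MTU yourself, don't-fragment pings are a quick way to do it (Linux iputils syntax; the target and payload size are just examples, and the 28 bytes of IP+ICMP headers come on top of -s):

$ ping -M do -s 1472 example.com   # 1472 + 28 = 1500; lower -s until replies come back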
The solution to the Youtube problem is to use a Youtube frontend like Invidious [1] in combination with an extension like Privacy Redirect [2] or LibRedirect [3], so you don't need to touch the Youtube site at all. The same works for things like Twitter and Reddit (which can be redirected to front-ends offering the same content) or Google Maps, Google Search and Google Translate (which get redirected to alternative services). Some of these alternatives - Invidious for Youtube, Nitter for Twitter, libreddit for Reddit - can (but don't have to) be run on your own server; others use established services. This way you get the benefits of accessing content from adversarial services like Twitter and Youtube without having to interface with them directly on any device you use, not just that phone you happened to install some alternative front end like Newpipe on. You can "subscribe" to Youtube channels without telling Google you did so, you can access those subscriptions from anywhere, etc.
Ditch that Youtube app and while you're at it ditch the rest of those Google apps as well. Freedom is just one click away: Are you sure you want to uninstall this app? [OK].
I took a react class and the intro was a lot of html/css/design stuff. The CSShints site isn't just CSS, so it's worth exploring.
There is a lot of good stuff published that's hard to find. I wish I had a better catch-all resource page.
Codepen.io is a good playground for experimenting with html/css/javascript, and it has some javascript framework stuff too.
A lot of people put together good content. It seems to surface through blogs and twitter.
Some links/papers we used (without the CSShints pages). A lot of them have more content if you explore.
> UCEPROTECT3 just lumps ISPs into the list even though an address has never sent spam.
But this is by design[0]
> This blacklist has been created for HARDLINERS. It can, and probably will cause collateral damage to innocent users when used to block email.
And it makes for a perfectly usable blocklist. If you use postfix, the postscreen_dnsbl_threshold and postscreen_dnsbl_sites parameters let you create a simple scoring system:
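  # main.cf -- the list names are real DNSBLs, the weights are only an example
  postscreen_dnsbl_threshold = 3
  postscreen_dnsbl_sites =
      zen.spamhaus.org*3
      bl.spamcop.net*2
      dnsbl-1.uceprotect.net*1
      dnsbl-3.uceprotect.net*1

With a threshold like that, being on the UCEPROTECT lists alone isn't enough to get blocked; it only tips the scale when other lists agree.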
I made up the numbers, because you will need to monitor your system for a while to see if they make sense, but the principle holds. Also make sure that the DNSBLs you are using are working for you.
But it isn't really a problem with uceprotect, it's about how DNSBLs are used.
Are there any lists of unicode characters (like the OWASP one) that should be blacklisted from most apps (not just for XSS, but even for desktop apps)?
Are there any good security guides/best practices for unicode sanitation?
A previous rant of mine on IRC's missing features: https://news.ycombinator.com/item?id=40813743