I saw that too, but in this case `echo` exits. My note was just that the pipeline will continue to write forever if the first character is any of the non-destructive commands.
Without trashing vms...
( while true; do printf 'a'; done ) > /proc/sysrq-trigger
One of my colleagues was asking me a question about this last week. Can all/any applications running on our device read the key? They work on a Mac, and wrote a simple Python script to confirm. Any program running in userspace can read the private key file; have private keys never really been private all this time?
> Any program running in userspace can read the private key file; have private keys never really been private all this time?
That's right, and the reason it seems surprising is that the threat model has quietly changed.
Previously: You owned your computer and the data on it, and you ran programs you trusted, e.g. you'd buy Microsoft Word and assume that the program acted in your interests; after all, the seller wants you to buy it. Desktop operating systems originated in the era when this was the prevailing threat model.
Now: Programs don't necessarily act in your interest, and you can't trust them. The mobile phone operating systems were built with this threat model in mind, so mobile "apps" run in a sandbox.
Correction: Mobile phone operating systems are designed to give a single player in the market unlimited access to your privacy while locking out competitors. The operating system is not your friend.
Correction: The operating system is a friend that vets your friends. Sometimes I don't want to have to do a full background check on "everyone" I want to "friend" so I let the OS do it for me.
More like an abusive parent that unilaterally decides who you're allowed to do what with - sometimes because they think they know better than you and sometimes just because it's more convenient to them.
I legitimately experienced the abusive scenario you’re describing as a child. I’ve never once felt even an analogous experience from my OS vendor (which is Apple on all of the devices I own).
Obviously the analogy is deeply flawed, I was trying to fit it to the style of previous comments. It's possible you never had a use case that required such a feature, since you're fully in the Apple ecosystem. They intentionally limit their OS to give their own solutions an edge: clipboard sharing, notification mirroring, call forwarding, etc. only work between iOS and macOS; if you have a Windows or Linux PC, Apple won't let you have those features, even if you're willing to develop them from scratch. Access to the WiFi, NFC and Bluetooth hardware is heavily limited; you won't find a "WiFi Analyzer" on iOS. There are also many entirely legal categories of apps (web browsers, things that run code, porn, gambling...) that Apple refuses to allow on iOS, even when the user is fully informed of their "risks" and wants to use them. They won't let anyone but themselves fix your device because they think nobody else could do it right, despite the fact that their own service technicians are almost always much worse than the third parties, who then have to scavenge parts from damaged devices because Apple forced their suppliers into exclusivity contracts.
Care to elaborate? Because nothing the parents said is untrue. Even if you yourself don't feel that way, there are numerous reports of predatory and unethical behavior on the part of any corporation that is able to control your device, whether this is Sony[0], Samsung[1], Microsoft, Google or Apple[2][3].
They even stopped apologizing and consider their actions standard practice. You know, Microsoft actually used to ask me whether it could send a report when Word crashed. What happened? What changed so that they no longer ask me but do whatever they want? Why does every update insist on "syncing my MS account", which I then have to disable each time?
Isn't this ridiculous? "the update does not require any user interaction and is deployed automatically." OK, how do I know if it's installed, or how to get it installed if it doesn't work? I guess there is just no help for me if I don't remember exactly how many auto-update mechanisms I've turned off.
Not really, it's a deliberately contrived way to attempt to deploy sandboxed apps on Windows.
Developing a sandboxed app on Windows means deploying a correctly sandboxed Appx in the Microsoft Store, and getting those (Appx packages deployed via the Microsoft Store) working correctly is hell for any non-trivial application.
On Linux, you can attempt (it's not guaranteed to work) to sandbox anything you want. Whether the sandbox is even able to conveniently protect what really matters to you (say, your private key files) is another matter.
Linux with snap or flatpak is far closer to mobile than whatever isolation Windows and MacOS have. The difference is in how widely and well implemented it is (it's neither).
I think he's referring to the time when desktop Linux was competing against the likes of Windows 98. At that time, it was common for household PCs to be multi-user because one computer was shared by several people in the house. But with Windows 98, there was no protection between users; anybody using the computer could read anybody else's files. Even if you didn't have an account on the computer, you could just press [cancel] at the login screen and have access to the computer. User accounts on Windows 98 were only for the convenience of having different desktop settings, there was no concept of files being owned by specific users.
Linux was a lot different at that time, in that it actually had a concept of users owning files. If you wanted to access another user's files without their permission you had to jump through more hoops like booting into single user mode.
Single user == root only. While Linux has a single-user mode, it is rarely used. Certainly not everywhere "excluding some exotic and super fragile setups you might see in .edu networks".
What do you have in mind? I'm using terminal only and don't track desktop development. Whenever I have to run something I don't trust, I use another account or, if it demands elevated privileges, a virtual machine. I guess with desktop it's not much different?
I've been using Secretive for a long time now. It's a great piece of tech.
Even if you don't require TouchID, no apps will be able to upload your private keys anywhere as they never leave the enclave. Sure, they can still _use_ the keys without your permission but to do that they need to be running on the workstation.
That said, TouchID is really not very inconvenient and if you couple that with control persistence, muxing and keepalive on the SSH client, it's really a no-brainer.
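For anyone who wants to copy that setup: the mux/keepalive part is just a few lines of ~/.ssh/config (a sketch; the timeouts are placeholders to tune):

```
# Reuse one authenticated connection instead of re-approving TouchID for every ssh/git call
Host *
    ControlMaster auto
    ControlPath ~/.ssh/ctl-%r@%h:%p
    ControlPersist 10m
    # Keep the master connection from idling out
    ServerAliveInterval 60
    ServerAliveCountMax 3
```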
Even better, if possible switch to something like PGP keys on Yubikey which prevents exfiltration of the private key, and will only sign things when you enter PIN / touch the device.
This has been my SSH key solution for a while now.
Worked smoothly on most systems.
Kind of messy on Windows, because there are so many SSH agent implementations, but GPG4Win's latest version works with the native SSH now. Real progress.
I find that the PIV smart card stack is needlessly complicated if all you're trying to do is add a resident SSH key to your yubikey. Look at `ed25519-sk` [0], which is supported by default by recent versions of OpenSSH (and dropbear? idk)
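For reference, creating such a key is a one-liner, assuming OpenSSH 8.2+ and a FIDO2-capable token plugged in (the file name is just an example):

```
# -O resident stores the key handle on the token itself;
# -O verify-required additionally demands the token PIN on use
ssh-keygen -t ed25519-sk -O resident -O verify-required -f ~/.ssh/id_ed25519_sk

# On another machine, pull the resident keys back off the token
ssh-keygen -K
```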
Oh, I was under the impression that PIV referred to the smart card protocol and PGP was an application making use of that protocol, something like TCP and HTTP. Looks like I'm mistaken, thanks!
But then you have to enter it every time you need to use the key, thus negating the advantage of just magically logging in without passwords? Because if you use ssh-add and only enter the passphrase once per reboot, apps will be able to use it; that's the point.
You can (and should) use ssh-agent/ssh-add to handle the key for you. It will still protect you against apps reading the key: ssh-agent only performs crypto operations on behalf of programs and will not hand out the private key.
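In practice that looks something like this (key path is an assumption):

```
# Start an agent for this session and load the passphrase-protected key once;
# the decrypted key material stays inside the agent process
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Clients can request signatures but never the key itself
ssh-add -l   # shows fingerprints of loaded keys
```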
So a malicious app could instead just read your known_hosts file, use the SSH agent to connect to those hosts and spread malware that way, including installing its own public key.
Doesn't really protect you.
Sandboxing is pretty much the only way to solve this. SELinux does place restrictions, but it's a dumpster fire of over-engineering that's useless for the end user, who, upon finding their computer isn't doing what they want it to do, will turn it off.
It protects from exfiltrating the key, which is something. Because yes, the app could connect (if the key has been loaded, which is not guaranteed) but that’s something entirely different. Not saying it’s not a threat, but it’s a different threat with different mitigation.
Could you individually authorize every app for ssh-agent access? Maybe like sudo, the app would get a temporary token. This would work well in combination with a sandbox.
Indeed. You can even break the ssh-agent out into an offline VM, proxy your ssh auth socket(s) from the agent, and have it prompt for approval that persists for a configurable timeout.
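Even without VMs, plain OpenSSH gets part of the way to per-use authorization: keys added with `-c` make the agent pop an ssh-askpass confirmation on every signing request (a sketch; an askpass helper must be installed):

```
# Ask for confirmation each time the agent uses this key
ssh-add -c ~/.ssh/id_ed25519

# Optionally also expire it from the agent after an hour
ssh-add -c -t 3600 ~/.ssh/id_ed25519
```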
QubesOS calls this "split ssh" and you can use the same pattern with pgp.
A malicious program could also add a passphrase-logging wrapper around `ssh` or `sudo` to your PATH and nab your password the next time you try to use either of those. This whole model of computing assumes that you'll never run a malicious program, it completely collapses if you do.
Absolutely, but there are various attack vectors that different mitigations are effective against.
The program doesn’t even need to be malicious; for a while it was a pretty common attack vector to trick browsers into uploading a random file they had access to.
Later, a malicious ssh server could read the memory of the ssh client process, potentially exposing the private key (CVE-2016-0777).
Using an agent with an encrypted key protects against that. Using a yubikey/smartcard as well. So it’s strictly a good thing to use it.
A yubikey could potentially protect you against a malicious program that wants to open connections, if you have set it up to confirm every key operation - but that comes at a cost. You could also use Little Snitch to see what network connections a program opens, protecting you against a program trying to use your agent to access a server.
Are you sure this is how, let's say, Linux behaves?
I tested it just now in a minimal-privilege account in a chroot on Debian 11 that I use for logins from untrusted machines, and strace worked. This is how I captured a password entered into an ssh client password prompt opened in another login shell of the same user:
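The exact command isn't shown above, but it was presumably something along these lines (a sketch; works when kernel.yama.ptrace_scope allows tracing same-user processes):

```
# Attach to the ssh client running in the other shell and dump its stdin
# reads, which include the typed password
strace -f -p "$(pgrep -n ssh)" -e trace=read -s 128 2>&1 | grep 'read(0'
```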
"Just magically logging in" is more of a nice side-effect than the intended purpose, in my opinion. SSH keys allow you to let multiple people log into a server without needing to set up complicated user accounts and without sharing a password that quickly becomes difficult to change.
You can have the best of both worlds by storing the key itself in a place that's not readable by many programs. TPMs and other such tech can store a key securely without risk of FunnyGame.app sending it to a remote server. In this model the key would be stored inside a safe, sandboxed place, only readable by an SSH agent or similar, which will prompt for permission to use the key every time. With fingerprint scanners and other biometrics being available even in cheap devices, this process can be relatively seamless.
If you run sufficiently modern SSH software, you can also use external key stores like Yubikeys to authenticate with plain old OpenSSH.
Yeah, un-sandboxed programs can access all your user files. That's why there has been such a large push for sandboxing tech like Flatpak. (In general though, you really shouldn't be running programs you don't trust in anything but a VM.)
I understand the principle, but it seems too onerous on the end user.
What is a program you "trust"? Something you bought online from a curated app store? Those occasionally have trojans as well. Something you downloaded? Well, if it's open source, that's the norm. Something you build from source? Most people wouldn't be able to spot an exploit hidden in the source code.
So... is "run everything sandboxed by default" the recommendation for regular users? Or is it "do not download or buy anything, it's simply not safe"?
Unfortunately I think the option you propose (sandboxing) is unreasonable for most users. A lot of the software you want to run (e.g. games, but also lots of special software, including apps/experiments featured on HN) is not available as part of your distro. It's unreasonable to expect end users to sandbox everything just in case.
It may be the only thing that works, but it's also an unreasonable expectation. In practice, this makes it a non-solution. A security solution must both work and be reasonably doable by most users.
But this problem isn't exclusive to Linux or Unix. It affects everyone using a computer (with the possible exception of mobiles that sandbox by default).
You should not confuse general wording, which is directed to people who read this website (by the fact that it's y'know posted here instead of somewhere else), with advice for the average person.
What percentage of HN readers do you guess sandbox every non-distro-packaged program by default? My guess: they probably are a minority even here, so it's a nonstarter for the general users population.
> So... is "run everything sandboxed by default" the recommendation for regular users?
Yeah, that is probably the best solution. Most mobile OSes do that by default now anyways. Desktop Linux has Flatpaks and Snaps. Windows has UWP apps. And I think MacOS has its entitlements system IIRC.
If you don't absolutely trust something, you shouldn't allow it to run unrestricted.
If the OS does this by default and it becomes the standard way of working, then sure. You would need to change how to share files you do want to share and solve some other hurdles, of course.
If this isn't the default mode -- transparent, where end users must do nothing in particular -- I don't see it succeeding though.
> I understand the principle, but it seems too onerous on the end user.
I agree that this is the state of affairs currently, but this could be made to work similarly to how it works on Android, which has generally good UX for this.
Absent unexpected security issues which are usually patched very quickly as soon as they are discovered by legitimate researchers/white-hats, it is non-trivial to escape one. You are not supposed to be able to escape a VM.
If someone's targeting you with a 0-day exploit that can escape VM sandboxing, having your ssh key hijacked is probably one of your lesser problems. :-/
(If someone has a VM-busting 0-day, they're probably using it in a targeted fashion. The wider that kind of thing is used, the quicker it will be noticed, and patched, and made useless.)
This is how it has been, there are ways around this though:
1) use a PGP-derived key; this means that anything authenticating goes through your gpg agent and only that, so nothing else touches the key directly (see the sketch after this list)
2) load your key and then remove it, which I’ve done before using a LUKS-encrypted partition (mount the volume, load the key into ssh-agent, then unmount it).
3) Storing your keys in the secure enclave on Apple computers. A little bit onerous if you use an external keyboard without touchID though.
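For option 1, the usual wiring is to let gpg-agent act as the SSH agent; a sketch using the GnuPG default paths:

```
# ~/.gnupg/gpg-agent.conf
#   enable-ssh-support

# In the shell profile: point ssh at gpg-agent's socket
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
gpg-connect-agent updatestartuptty /bye >/dev/null

# Add the authentication subkey's keygrip to ~/.gnupg/sshcontrol;
# after that, `ssh-add -l` lists the gpg-backed key like any other
```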
I have a program on my computer that watches for read events in that folder to see if anything actually tries to read an access key. I can publish the source if you want. it uses inotify in linux.
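You can approximate the same thing with inotify-tools from a shell (a sketch, not the commenter's actual program):

```
# Log opens/reads on the key directory; note that inotify tells you an
# access happened, not which process did it (auditd can answer that)
inotifywait -m -e open -e access \
  --timefmt '%F %T' --format '%T %w%f %e' ~/.ssh
```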
That's usually my argument when someone mocks me for logging into all my computers as root. Having a separate nonprivileged user and running tons of desktop/shell programs as it isn't really much better, considering all those programs have access to your ~, which on a PC is usually the most important directory IMHO.
firejail is a program that helps mitigate this issue by restricting the syscalls and filesystem access of programs.
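For example, either of these keeps a browser away from your real home directory (a sketch; the profiles shipped with firejail add tighter rules on top):

```
# Throwaway private home for this run; ~/.ssh is simply not there
firejail --private firefox

# Or give it a dedicated directory as its home
firejail --private=~/sandbox/firefox firefox
```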
According to the Arch Wiki though, firejail relies on blacklisting by default (although this seems to be subject to change).
So if it's necessary to be careful about the defaults and to audit everything carefully etc. (i.e. if it's not idiot proof), I am doubtful this is as helpful in practice as one might expect.
I still agree with the general point of your comment though.
Fair point, but a browser bug leading to code execution in an unprivileged user could, as mentioned, read my SSH private keys, GPG private keys, ...
This in turn would allow an attacker to login to my servers and other computers leading to a total compromise, as well as breaking trust and integrity of my email (PGP keys).
For my PC a compromise of the user I login as would mean total chaos and compromise, regardless if this user is root or not.
Installation of executable programs isn't limited to the root user; a normal unprivileged one can have them as well. I mentioned firejail because running the browser inside firejail should provide more protection against attacks (provided it's correctly configured, as a sibling comment points out), as the attacker couldn't escape the browser sandbox. Though in the current modern world, a browser-context compromise could be enough to exploit a power user -- webmail, domain registrar web interface, stored passwords.
I doubt many power users actually separate their workflow well enough to change to a different VT (or SSH connection when working remotely) when performing administrative tasks that require root access. Because if users don't do that and just use a suid binary like sudo, a malicious attacker with code execution in the context of an unprivileged user who elevates privileges with sudo could snoop the entered password via ptrace or simpler means, like a wrapper binary installed without the user's knowledge.
(I am by no means a security expert and my opinion shouldn't be treated as useful advice!)
Logging in as something other than root also stops you from doing something really stupid to your system without explicit confirmation (usually by running the command with sudo).
Logging in as root just seems like a silly thing to do, if for no other reason than because so many applications will hassle you about being run as root. Why not just use sudo when you need it?
I ended up logging in as root mostly for the sake of convenience, as now I am no longer bothered with suid wrappers like sudo for mundane tasks, like editing system configuration files and udev rules for devices -- as the sole user of the computer I no longer face EPERM errors that force me into `sudo !!`.
I uninstalled sudo and started this habit on personal servers as well when the sudoedit vulnerability was announced, allowing anyone on a machine with sudo installed (regardless of sudoers config) to escalate to root.
This is true for SSH keys, but not for all data on MacOS, e.g. if you run `find ~/Library/"Application Support"/AddressBook` the OS will ask if you want to give iTerm2/whatever access to your contacts (unless you have given it before). I'm not aware of a way to create additional sandboxed "folders".
Also, some applications on MacOS are sandboxed, IIRC Mail is one of them, as are some (all?) applications installed from the App Store. That's the reason I prefer installing applications from the App Store: they seem to be at least somewhat sandboxed.
For development, I try as much as possible to leverage remote development via [JetBrains Gateway](https://www.jetbrains.com/remote-development/gateway/) and [JetBrains Fleet](https://www.jetbrains.com/fleet/). VSCode also has remote development, but they explicitly assume that the remote machine is trusted (in the security note in the remote extension plugin readme). In the case of JetBrains tools I have not seen any explicit declaration of whether the remote host is trusted (as in: if the remote machine is pwned then it may as well be able to pwn your personal machine), but at a glance it seems like there are minimal precautions (if you run a web application and open it in a browser, the Gateway will ask if you want to be redirected to a browser, etc.)
Probably the best scenario for such remote development clients on MacOS would be to put them in the App Store: this way they could leverage sandboxing, and in the case of a thin client, the sandboxing likely won't limit functionality.
Yes, it’s actually a bit disappointing they didn’t implement keychain support which makes this a lot harder. But then people would be screaming that Apple is peeping at your private keys, even though Apple can’t see the contents of the keychain.
> Any program running in userspace can read the private key file;
Only programs running as you (or `root`). It's private to you⁰.
Programs running as other users cannot read the file.
(Assuming you've not changed the permissions on the file or the `~/.ssh/` directory)
⁰ and the sysadmin - but if they're not trustworthy they could just replace `/bin/bash` or the kernel with their own version that copied everything you typed anyway.
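For reference, the permissions ssh itself expects look roughly like this (the key file name is just an example); ssh will refuse to use a private key that is group- or world-readable:

```
chmod 700 ~/.ssh              # only the owner may enter or list the directory
chmod 600 ~/.ssh/id_ed25519   # only the owner (and root) may read the key
```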
That's why it's a good idea to use a passphrase with your key so that the key by itself is not useful to anyone.
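Adding or changing a passphrase on an existing key is quick (key path is an assumption):

```
# -p changes the passphrase in place; -a raises the KDF rounds so an
# offline brute-force of the passphrase costs more per guess
ssh-keygen -p -a 100 -f ~/.ssh/id_ed25519
```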
It's not easy for people to run only trustworthy software, or even software that has been reasonably vetted by others. Not everyone has the aptitude to know how to check for surreptitious file accesses, or have the desire to learn just to make functional use of their computers.
I do. Most probably they do too, but since any running app can access the user’s private keys, the whole security depends on the strength of the passphrase, which can be brute-forced offline?
Passphrases protect against silent key exfiltration. Make them long enough (six or seven words these days, I think?) and they won't be cracked in your life time unless the quantum people figure their stuff out or you become a vampire.
If you're trying to protect against running programs, you also need to protect against key loggers. Using hardware-backed keys and systems like Windows Hello for validation can help with that, as their UI is not easily interceptable.
In the end, there's no perfect way to protect your keys if you have a virus running on your computer.
> First I've heard of this. Do you have some links where I can read more about this?
Sure, the comparison table on the Nitrokey site[1] is probably sufficient.
Anything without a green tick next to "tamper-resistant smart card" is a software implementation with the associated risks (e.g. firmware updates are available[2] - i.e. if you can update the firmware then you've also got a low-level attack vector for miscreants).
Meanwhile all YubiKeys are hardware backed and it has never been possible to update firmware on them.
I wrote a tiny node app wrapped as a single binary as a second factor for SSH public key login, using pam_exec.so. It posts a Telegram poll, "allow login to server x as user y from ip z?" Yes/No, with a 30-second timeout, to a private group. A simple way to add some additional protection.
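The PAM side of such a setup is essentially one line; the helper path below is made up for illustration, and since pure public-key auth skips PAM's auth stack, it is typically combined with `AuthenticationMethods publickey,keyboard-interactive` in sshd_config:

```
# /etc/pam.d/sshd
# pam_exec runs the helper with PAM_USER, PAM_RHOST, etc. in its environment;
# a non-zero exit (poll timed out or answered "No") denies the login
auth required pam_exec.so quiet /usr/local/bin/telegram-ssh-approve
```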
If you really wanted to play a dangerous game, you could construct a terminal command that had a 1/6th chance of doing an rm -rf /* at the root directory with full admin privileges and automatic yes to all prompts, preferably on a non virtual machine in production with no backups.
Personally I would want it to remove the most critical files first, or at least corrupt the filesystem. Most damage as quick as possible, in case they get cold feet and Ctrl+C. Maybe trap all signals so they can't do that either. Maybe run in the background!
I sorta understand your point, but this wouldn't help in the case of running that script.
Namely, the JS sandbox of the browser already prevents filesystem access. But a user running `node` in a shell would not be protected by the browser or the browser hardening you mention. You would need to manually set up those protections for your command, which most people will not do.
Similarly, Linux has filesystem namespaces, and tools like bubblewrap can achieve similar protections.
Lastly, the real risk of the above is that code is easily run automatically by an `npm install`, and if you have private repositories then node/npm would still need access to the private key (or perhaps an HTTP token) to fetch them.
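Something like bubblewrap makes that concrete; a rough sketch that runs the install with the whole filesystem read-only, the project directory writable, and ~/.ssh replaced by an empty tmpfs (which of course also breaks fetching private repos over ssh):

```
bwrap --ro-bind / / \
      --dev /dev --proc /proc \
      --bind "$PWD" "$PWD" \
      --tmpfs "$HOME/.ssh" \
      --tmpfs "$HOME/.npm" \
      --tmpfs /tmp \
      npm install
```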
Very cute. It would have been cooler as a shell alias for ssh.
Using node seems like cheating; plus you have to call it explicitly, and you know you really want to use this to prank your colleague who left their laptop unlocked.
Presumably random() returns a value in [0, 1), so multiplying by 6 yields a random point in [0, 6). There is a 1-in-6 chance for this number to be in the range [0, 1), which is checked by rounding it down and checking whether it's zero.
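The same 1-in-6 pattern, sketched in awk, whose rand() likewise returns a value in [0, 1):

```
# rand() is uniform on [0,1); *6 stretches it to [0,6); int() floors to 0..5
awk 'BEGIN { srand(); roll = int(rand() * 6); if (roll == 0) print "unlucky"; else print "safe" }'
```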
I would have agreed 50 years ago. It has been used in many libraries in many languages historically, nowadays it would be confusing if it returned anything else.
Every random generator API I've seen uses this interval by default. And it makes sense, what other range is useful in more situations and platform independent?
In practically every other case you'd want to specify your desired range with arguments.
The classic C rand() function returns an integer between 0 and RAND_MAX. Not being much of a JS developer I would have expected something more like that.
"Math.random() generates a randomized binary sequence, which is cast and returned as a floating point type; sign and exponent bits are hard-coded, such that the sequence may represent a number between 0.0f and 1.0f. NOTE: This method should NOT be used for cryptographic and/or security use cases. Implementation is performance oriented and is generally recommended for all non-sensitive programming, such as user interactive visualizations, and randomized suggestions."
Seems like a perfectly reasonable range, and what I'd expect for a method named that. If it was uniformly distributed over the range of a double, it would almost always be of large magnitude, which wouldn't be particularly useful.
It just doesn't feel natural. If I ask you to give me a random number, are you going to assume it's random among all existing numbers or between the two smallest ones? I'd do Math.random(min,max) and then it could default to 0,1 though I guess you could go Math.random()*100 or whatever.. guess it just doesn't feel like good design or very convenient/readable - but then again this is JavaScript we're talking about.
But then you'd need a different way to do random non-integers.
Randomness does not naturally produce an integer result. The 0-1 range with precision determined by the architecture is actually the simplest and most logical and flexible way to do it. [EDIT: floats are never simple, see below!]
Some languages offer convenience functions on top of the low level random number generator. I don't know what's available in JavaScript.
> Randomness does not naturally produce an integer result. The 0-1 range with precision determined by the architecture is actually the simplest and most logical and flexible way to do it.
The most trivial way, I would think, would be to interpret a stream of bits as an unsigned integer. E.g., a u32 is just "take 32 bits of random data from your RNG, and cast." That's certainly far more natural than trying to build an IEEE double in [0, 1) with a uniform distribution. I'm actually not sure how you'd do that without doing something like starting with a random u32 and dividing by (U32_MAX + 1). Maybe you can shove the RNG output into the mantissa, but the correctness of that is not at all obvious to me.
At least for range [0,MAXINT], it is simple to fill the bitspace for an integer. Range [0,limit) requires slightly more-awkward division.
The problem of uniformity across the range in a populated float is critical -- as you point out, IEEE float layouts are not simple.
I would guess that the "float" result is constructed as a fixed-point decimal in the same way you would an integer (sprinkle the RNG bits over the storage bitspace), but returned as float (via cast or math) to match the language type system.
I'm not a javascript guru, but I see two ways how it may work.
1. The right part of the equation is converted into an integer because the left part is an integer. The conversion is done by rounding to the nearest integer, and it will work this way.
2. The other way it may work is to convert 0 into floating point. Math.floor would produce some canonical floating-point representation of 0 from numbers in the range [0..1), and the (int)0 would be converted into a float the same way.
One needs to really know his language of choice to master such subtleties.
The README.md, which is displayed on the linked page and should be above the fold on 95% of viewing devices, contains only two lines of text. One of which is the README's title and the name of the project. The other line is:
`node main.js` for a 1/6 chance of posting your SSH private key on pastebin :)
The question might be honest, but I don't think it adds much to the discussion.
The added ‘benefit’ or ‘disadvantage’ is dependent on your use case.
If I regard my PGP key as the toehold to my online identity, having an ssh key tied to that identity is quite useful. Kind of neat to see the guy that signed the git commit is the same guy logging into the server, installing the software signed with the same pgp key.
If your threat model needs to keep your online identity somewhat anonymous, then a private key per server is likely the way to go.
An encrypted ssh key is somewhere in the middle there.
For me, physical smartcard storage + overarching identification. I've been using a Yubikey as my GPG key for a long time (and another in my safe as my GPG root). You can use SSH with FIDO2 today as well, but outside of SSH GPG provides a web of trust which has other benefits, such as a root of revocation (a different Yubikey in my document safe contains my GPG root) and signature validation not being tied to SSH key presence/absence on GitHub.
I too have a primary YubiKey with my 3 PGP subkeys on it (signing, authenticating, decrypting), and a backup YubiKey in my safe with my PGP master key on it.
I find it works quite well. The primary YubiKey goes everywhere I go; I have a lanyard for it. The backup YubiKey stays in the safe until I need it for something (e.g. signing someone else's PGP key, rotating subkeys, renewing the expiration date on a subkey, ...).
I also use both of them for FIDO, on websites that support real 2FA; more specifically, I enroll both of them, but I only routinely use the primary one.