Russhian Roulette: 1/6 chance of posting your SSH private key on pastebin (github.com/cyradotpink)
116 points by popcalc on Jan 28, 2023 | 157 comments


  echo {a..z} | tr ' ' '\n' | sort -R | head -1 > /proc/sysrq-trigger 
this is my favourite linux russian roulette


This echoes a letter to /proc/sysrq-trigger, with Deck of Many Fates results:

https://www.kernel.org/doc/html/v4.15/admin-guide/sysrq.html...

EDIT: Yeah, uh, it works.


Here's a variant that works in #!/bin/ash

    base64 /dev/urandom | tr -d '/+'| sed s/[^[:alpha:]]//g |tr 'A-Z' 'a-z' | dd bs=1 count=1 2>/dev/null


I hate to be a golfer... buuuut this should still be pretty generic shell

    LC_ALL=C tr -dc 'a-z' </dev/urandom | head -c 1


I think you don't even need

  head -c 1
because sysrq-trigger will read only one character anyway


Nice one! I wasn't taking testing that far =)

edit - Actually I don't think it closes the input, so for an initial char that doesn't cause something fatal it looks to continue writing.


just tested -- only reads the first char, at least with echo (also, any character that isn't a valid command is treated as "h", i.e. help)

  echo apm > /proc/sysrq-trigger
I'm too scared to test with your urandom stuff...


I saw that too, but in this case `echo` exits. My note was just that the pipeline would keep writing forever if the first character was one of the non-destructive commands.

Without trashing vms...

    ( while true; do printf 'a'; done ) > /proc/sysrq-trigger
With the `head`, the output is closed.


For a moment I was excited to learn about a new shell named ash.



Got your wish


One of my colleagues was asking me a question about this last week. Can all/any applications running on our device read the key? They work on a mac, and wrote a simple python script to confirm. Any program running in the userspace can read the private key file; have the private keys always been not so private all this time?


> Any program running in the userspace can read the private key file; have the private keys always been not so private all this time?

That's right, and the reason for that seeming surprising is that the threat model has quietly changed.

Previously: You owned your computer and your data on it, and you ran programs you trusted, e.g. you'd buy Microsoft Word and you'd assume that the program acted in your interests; after all, the seller wants you to buy the program. Desktop operating systems originated in the era when this was the prevailing threat model.

Now: Programs don't necessarily act in your interest, and you can't trust them. The mobile phone operating systems were built with this threat model in mind, so mobile "apps" run in a sandbox.

As an example of a modern program that doesn't act in your interest, Zoom "accidentally" left a web server on Macs, even after it was uninstalled. https://techcrunch.com/2019/07/10/apple-silent-update-zoom-a...


Correction: Mobile phone operating systems are designed to give a single player in the market unlimited access to your privacy while locking out competitors. The operating system is not your friend.

Bravo on the rest, you nailed it.


Correction: The operating system is a friend that vets your friends. Sometimes I don't want to have to do a full background check on "everyone" I want to "friend" so I let the OS do it for me.


More like an abusive parent that unilaterally decides who you're allowed to do what with - sometimes because they think they know better than you and sometimes just because it's more convenient to them.


I legitimately experienced the abusive scenario you’re describing as a child. I’ve never once felt even an analogous experience from my OS vendor (which is Apple on all of the devices I own).


Obviously the analogy is deeply flawed, I was trying to fit it to the style of previous comments. It's possible you never had a use-case that required such a feature, since you're fully in the Apple ecosystem. They intentionally limit their OS to give their own solutions an edge: clipboard sharing, notification mirroring, call forwarding, etc. only work iOS-macOS - if you have a Windows or Linux PC, Apple won't let you have those features, even if you're willing to develop them from scratch. Access to the WiFi, NFC and Bluetooth hardware is heavily limited - you won't find "WiFi Analyzer" on iOS. There are also many entirely legal categories of apps (web browsers, things that run code, porn, gambling...) that Apple refuses to allow on iOS, even when the user is fully informed of their "risks" and wants to use them. They won't let anyone but themselves fix your device because they think nobody could do it right, despite the fact that their own service technicians are almost always much worse than the third party, who then have to scavenge parts from damaged devices because Apple forced their suppliers into exclusivity contracts.


Indeed. One data point is here: https://issuetracker.google.com/issues/79906367


What an incredibly uncharitable take.


Care to elaborate? Because nothing the parents said is untrue. Even if you yourself don't feel that way, there are numerous reports of predatory and unethical behavior on the part of any corporation that is able to control your device, whether this is Sony[0], Samsung[1], Microsoft, Google or Apple[2][3].

They even stopped apologizing and consider these actions standard practice. You know, Microsoft actually used to ask me whether I'd allow them to send a report when Word crashed. What happened? What changed so that they no longer ask me but just do whatever they want? Why does every update insist on "syncing my MS account", forcing me to disable it again each time?

The take is not uncharitable, it's realistic.

[0] https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootk...

[1] https://old.reddit.com/r/assholedesign/comments/pqi486/samsu...

[2] https://gizmodo.com/apple-iphone-analytics-tracking-even-whe...

[3] https://www.forbes.com/sites/jeanbaptiste/2019/07/30/confirm...


Being charitable to huge corporations (paperclip maximizers) is extremely naive.


No, experienced. Too many examples of this being true have been presented over the years. You do not own the software on your devices. You never have.


> As an example of a modern program that doesn't act in your interest, Zoom "accidentally" left a web server on Macs, even after it was uninstalled. https://techcrunch.com/2019/07/10/apple-silent-update-zoom-a...

Isn't this ridiculous? "the update does not require any user interaction and is deployed automatically." OK, how do I know if it's installed, or how to get it installed if it doesn't work? I guess there is just no help for me if I don't remember exactly how many auto-update mechanisms I've turned off.

</offtopic>


Malware has been around for a while. I think the bigger difference is that we’ve started to design computer software with inside threats in mind.


It’s worth noting that desktop Linux has mostly missed this development


Not a security expert, so I could be wrong.

I imagine stuff like AppArmor, Snap (or Craft? I forget) sandboxes, or Docker and LXCs help with this. Or do they not?


That is exactly what snap is aiming for.

Apps run in a sandbox and have no access to user files except through "portals", which are secure file pickers essentially.
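
If you're curious what a given snap can actually reach, the interface wiring is inspectable from the CLI - a rough sketch (the snap name is just an example):

    # list the interfaces (plugs/slots) a snap is connected to
    snap connections firefox

    # explicitly grant or revoke an interface, e.g. the broad "home" access
    sudo snap connect firefox:home
    sudo snap disconnect firefox:home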


Yes, AppArmor and snap try to. Still worlds away from what Windows and OS X are doing, not to even mention mobile platforms.


> Still worlds away from what Windows

Not really; deploying sandboxed apps on Windows is a contrived process you have to go out of your way to attempt.

Developing a sandboxed app in Windows means deploying a correctly sandboxed Appx in Microsoft Store, and getting those (Appx deployed on Microsoft Store) correctly working is hell for any non-trivial application.

On Linux, you can attempt (it's not guaranteed to work) to sandbox anything you want. Whether the sandbox is even able to conveniently defend what really matters to you (say, your private key files) is another matter.


Linux with snap or flatpak is far closer to mobile than whatever isolation Windows and MacOS have. The difference is in how widely and well implemented it is (it's neither).


Linux was ahead of the game for quite a while. Back in the day, most desktop OSes assumed a single user.


Desktop linux still exists in a single user world today, excluding some exotic and super fragile setups you might see in .edu networks.


I think he's referring to the time when desktop Linux was competing against the likes of Windows 98. At that time, it was common for household PCs to be multi-user because one computer was shared by several people in the house. But with Windows 98, there was no protection between users; anybody using the computer could read anybody else's files. Even if you didn't have an account on the computer, you could just press [cancel] at the login screen and have access to the computer. User accounts on Windows 98 were only for the convenience of having different desktop settings, there was no concept of files being owned by specific users.

Linux was a lot different at that time, in that it actually had a concept of users owning files. If you wanted to access another user's files without their permission you had to jump through more hoops like booting into single user mode.


single user == root only. While linux has a single user mode, it is rarely used. Certainly not everywhere "excluding some exotic and super fragile setups you might see in .edu networks"


What do you have in mind? I'm using terminal only and don't track desktop development. Whenever I have to run something I don't trust, I use another account or, if it demands elevated privileges, a virtual machine. I guess with desktop it's not much different?



Also related to how the threat model has changed: https://xkcd.com/1200/


You can store them in the Secure Enclave on OSX and require TouchID to use the key for signing.

See: https://github.com/maxgoedjen/secretive


I've been using Secretive for a long time now. It's a great piece of tech.

Even if you don't require TouchID, no apps will be able to upload your private keys anywhere as they never leave the enclave. Sure, they can still _use_ the keys without your permission but to do that they need to be running on the workstation.

That said, TouchID is really not very inconvenient and if you couple that with control persistence, muxing and keepalive on the SSH client, it's really a no-brainer.
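
For reference, the muxing/keepalive part is just client-side ssh_config - something along these lines (paths and timeouts are only examples):

    # ~/.ssh/config
    Host *
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m
        ServerAliveInterval 60

With that, the enclave (and TouchID) only gets hit for the first connection to a host; subsequent sessions ride the existing multiplexed connection.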


That’s why ideally you use a passphrase with your ssh key. Apps can still read it but not use it.


Even better, if possible switch to something like PGP keys on Yubikey which prevents exfiltration of the private key, and will only sign things when you enter PIN / touch the device.
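
On Linux/macOS the usual wiring for that is letting gpg-agent speak the ssh-agent protocol - roughly this, assuming GnuPG 2.1+ (untested sketch):

    # ~/.gnupg/gpg-agent.conf
    enable-ssh-support

    # in your shell profile: point SSH at gpg-agent's socket
    export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
    gpg-connect-agent updatestartuptty /bye >/dev/null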


This has been my SSH key solution for a while now.

Worked smoothly on most systems.

Kind of messy on Windows, because there are so many SSH agent implementations, but GPG4Win's latest version works with the native SSH now. Real progress.


I find that the PIV smart card stack is needlessly complicated if all you're trying to do is add a resident SSH key to your yubikey. Look at `ed25519-sk` [0], which is supported by default by recent versions of OpenSSH (and dropbear? idk)

[0]: https://news.ycombinator.com/item?id=29231396
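
For reference, generating a resident FIDO2 key looks roughly like this (needs OpenSSH 8.2+ and a token with resident-credential support; the application name is just an example):

    # create a resident ed25519-sk key pair; the file on disk is only a handle,
    # the secret stays on the token
    ssh-keygen -t ed25519-sk -O resident -O application=ssh:mykey

    # later, on another machine, regenerate the key handle files from the token
    ssh-keygen -K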


PGP is definitely complicated if you’re not going to use it for other functionality.

And that’s completely separate to the PIV functionality on the key.


Oh, I was under the impression that PIV referred to the smart card protocol and PGP was an application making use of that protocol, something like TCP and HTTP. Looks like I'm mistaken, thanks!


Not the map you are looking for, but there is this comparison chart of SSH clients and their supported algorithms.

https://ssh-comparison.quendi.de/comparison/cipher.html


https://github.com/rupor-github/win-gpg-agent/blob/main/docs...

Don’t forget this diagram of all the agents, protocols and bridges you might hit on Windows.


That is the scariest system diagram chart that I have ever seen.

It should be a prime example of what NOT to do.


But then you have to enter it every time you need to use the key, which negates the advantage of just magically logging in without passwords. Because if you use ssh-add and only enter the passphrase once per reboot, apps will be able to use it - that's the point.


You can (and should) use ssh-agent/ssh-add to handle the key for you. It will still protect you against apps reading the key - ssh-agent only performs crypto operations on behalf of programs and will not hand out the private key.
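
In case it's useful, the typical flow is just (key path is an example):

    eval "$(ssh-agent -s)"         # start the agent, export SSH_AUTH_SOCK
    ssh-add ~/.ssh/id_ed25519      # decrypt once; the agent holds it in memory
    ssh-add -l                     # list loaded keys (fingerprints only)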


So a malicious app instead could just read your known hosts file, use the SSH agent to connect to them and spread malware that way, including installing its own public key.

Doesn't really protect you.

Sandboxing is pretty much the only way to solve this. SELinux does place restrictions, but it's a dumpster fire of over-engineering that's useless for the end user - who, the moment their computer isn't doing what they want, will just turn it off.


It protects from exfiltrating the key, which is something. Because yes, the app could connect (if the key has been loaded, which is not guaranteed) but that’s something entirely different. Not saying it’s not a threat, but it’s a different threat with different mitigation.


Could you individually authorize every app for ssh-agent access? Maybe like sudo, the app would get a temporary token. This would work well in combination with a sandbox.


Indeed. You can even break out the ssh-agent in an offline VM, proxy your ssh auth socket(s) from the agent, and have it prompt for approval that persists with a configurable timeout.

QubesOS calls this "split ssh" and you can use the same pattern with pgp.

There's also this which I don't see mentioned much: https://manpages.debian.org/unstable/ssh-agent-filter/ssh-ag...
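
Plain OpenSSH can also do per-use confirmation without Qubes, though it's per-key rather than per-app - rough sketch, needs an ssh-askpass helper installed:

    # -c: require confirmation (via ssh-askpass) every time this key is used
    ssh-add -c ~/.ssh/id_ed25519

    # optionally also expire it after an hour
    ssh-add -c -t 3600 ~/.ssh/id_ed25519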


A malicious program could also add a passphrase-logging wrapper around `ssh` or `sudo` to your PATH and nab your password the next time you try to use either of those. This whole model of computing assumes that you'll never run a malicious program, it completely collapses if you do.


Absolutely, but there are various attack vectors that different mitigations are effective against.

The program doesn’t even need to be malicious; for a while it was a pretty common attack vector to trick browsers into uploading arbitrary files you could access.

Later, a malicious ssh server could read memory of the ssh process, potentially exposing the private key (CVE-2016-0777)

Using an agent with an encrypted key protects against that. Using a yubikey/smartcard as well. So it’s strictly a good thing to use it.

A yubikey could potentially protect you against a malicious program that wants to open connections if you have set it up to confirm every key operation - but that comes at a cost. You could also use little snitch to see what network connections a program opens, protecting you against a program trying to use your agent to access a server.


The app in question can just dump the memory of ssh-agent and obtain the private key from there. Or not?


Usually no. It requires root / Admin to dump memory of other processes, generally. Although vulnerabilities do exist.


Are you sure this is how, let's say, Linux behaves?

I tested it now in a minimal privilege account in a chroot on Debian 11 that I use for login from untrusted machines, and strace worked. This is how I captured a password entered into a ssh client password prompt, opened in another login shell of the same user:

    -bash-5.1$ ps aux | grep abcde
    z  2502130  0.0  0.3  9500  6132  ?  S+  18:04  0:00  ssh abcde@localhost
    z  2502140  0.0  0.1  6316  2336  ?  S+  18:04  0:00  grep abcde
    -bash-5.1$ strace -p 2502130
    strace: Process 2502130 attached
    read(4, "s", 1) = 1
    read(4, "e", 1) = 1
    read(4, "c", 1) = 1
    read(4, "r", 1) = 1
    read(4, "e", 1) = 1
    read(4, "t", 1) = 1
    read(4, "\n", 1) = 1
    write(4, "\n", 1) = 1
    ioctl(4, TCGETS, {B38400 opost isig icanon -echo ...}) = 0
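
Worth noting that whether this works depends on the Yama LSM: with kernel.yama.ptrace_scope set to 1 or higher (the default on e.g. Ubuntu), a process may only ptrace its own descendants, so attaching strace to an unrelated same-user process fails with EPERM. Quick way to check (sketch):

    cat /proc/sys/kernel/yama/ptrace_scope     # 0 = classic same-uid ptrace allowed
    sudo sysctl -w kernel.yama.ptrace_scope=1  # restrict to descendants only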


"Just magically logging in" is more of a nice side-effect than the intended purpose, in my opinion. SSH keys allow you to let multiple people log into a server without needing to set up complicated user accounts and without sharing a password that quickly becomes difficult to change.

You can have the best of both worlds by storing the key itself in a place that's not readable by many programs. TPMs and other such tech can store a key securely without risk of FunnyGame.app sending it to a remote server. In this model the key would be stored inside a safe, sandboxed place, only readable by an SSH agent or similar, which will prompt for permission to use the key every time. With fingerprint scanners and other biometrics being available even in cheap devices, this process can be relatively seamless.

If you run sufficiently modern SSH software, you can also use external key stores like Yubikeys to authenticate with plain old OpenSSH.


Yeah, un-sandboxed programs can access all your user files. That's why there has been such a large push for sandboxing tech like Flatpak. (In general though, you really shouldn't be running programs you don't trust in anything but a VM.)


I understand the principle, but it seems too onerous on the end user.

What is a program you "trust"? Something you bought online from a curated app store? Those occasionally have trojans as well. Something you downloaded? Well, if it's open source, that's the norm. Something you build from source? Most people wouldn't be able to spot an exploit hidden in the source code.

So... is "run everything sandboxed by default" the recommendation for regular users? Or is it "do not download or buy anything, it's simply not safe"?


I trust the maintainers of my distro software repositories. Any non-distro software, I want to audit before I install or it should be sandboxed.

And yes. The recommendation is to not just download and run programs you find on the web.


Unfortunately I think the option you propose (sandboxing) is unreasonable for most users. A lot of the software you want to run (e.g. games, but also lots of special software, including apps/experiments featured on HN) is not available as part of your distro. It's unreasonable to expect end users to sandbox everything just in case.

It may be the only thing that works, but it's also an unreasonable expectation. In practice, this makes it a non-solution. A security solution must both work and be reasonably doable by most users.


It doesn't have to be reasonable for most users. GNU/Linux in general isn't reasonable for most users.


But this problem isn't exclusive to Linux or Unix. It affects everyone using a computer (with the possible exception of mobiles that sandbox by default).


Most users aren't on hacker news.

You should not confuse general wording, which is directed to people who read this website (by the fact that it's y'know posted here instead of somewhere else), with advice for the average person.


What percentage of HN readers do you guess sandbox every non-distro-packaged program by default? My guess: they probably are a minority even here, so it's a nonstarter for the general users population.


> so it's a nonstarter for the general users population.

I agree. My point was that this point isn't important for a discussion on a niche site.


> So... is "run everything sandboxed by default" the recommendation for regular users?

Yeah, that is probably the best solution. Most mobile OSes do that by default now anyways. Desktop Linux has Flatpaks and Snaps. Windows has UWP apps. And I think MacOS has its entitlements system IIRC.

If you don't absolutely trust something, you shouldn't allow it to run unrestricted.


If the OS does this by default and it becomes the standard way of working, then sure. You would need to change how to share files you do want to share and solve some other hurdles, of course.

If this isn't the default mode -- transparent, where end users must do nothing in particular -- I don't see it succeeding though.


> I understand the principle, but it seems too onerous on the end user.

I agree that this is the state of affairs currently, but this could be made to work similarly to how it works on Android perhaps, which has generally good UX for this.


Is running untrusted programs in a VM actually safe? Are they sufficiently secure that it's not trivial to escape one if that's the expected scenario?


Absent unexpected security issues which are usually patched very quickly as soon as they are discovered by legitimate researchers/white-hats, it is non-trivial to escape one. You are not supposed to be able to escape a VM.

If someone's targeting you with a 0-day exploit that can escape VM sandboxing, having your ssh key hijacked is probably one of your lesser problems. :-/

(If someone has a VM-busting 0-day, they're probably using it in a targeted fashion. The wider that kind of thing is used, the quicker it will be noticed, and patched, and made useless.)


This is how it has been, there are ways around this though:

1) use a PGP-derived key; this means anything authenticating goes through your gpg agent and only that - nothing touches the key file directly

2) load your key and then remove it, which I’ve done before using a LUKS encrypted partition (then load the key into ssh-agent, then remove the volume).

3) Storing your keys in the secure enclave on Apple computers. A little bit onerous if you use an external keyboard without touchID though.

I have a program on my computer that watches for read events in that folder, to see if anything actually tries to read an access key. I can publish the source if you want. It uses inotify on Linux.
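
In the meantime, a rough stand-in using inotify-tools (assuming inotifywait is installed) would be something like:

    # log every open/access of anything under ~/.ssh, with a timestamp
    inotifywait -m -e open -e access --timefmt '%F %T' \
        --format '%T %w%f %e' ~/.ssh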


Not that it's very practical, but you can always encrypt your key with a passphrase. Useless for automation, very useful for cases like these.
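
And if the key already exists without one, ssh-keygen can add a passphrase in place, e.g. (path is an example):

    # add or change the passphrase on an existing private key
    ssh-keygen -p -f ~/.ssh/id_ed25519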


That's usually my argument when someone mocks me for logging into all my computers as root. Having a separate nonprivileged user and running tons of desktop/shell programs isn't really much better, considering all those programs have access to your ~, which on a PC is usually the most important directory IMHO.

firejail is a program that helps mitigate this issue by restricting what programs can see and do (filesystem namespaces, seccomp syscall filters, etc.).
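
For example (untested; the exact profiles vary by distro):

    # run a browser with a throwaway private home, so the real ~/.ssh is invisible to it
    firejail --private firefox

    # run an untrusted tool with no network and an empty home
    firejail --noprofile --private --net=none ./some-binary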


According to the Arch Wiki though, firejail relies on blacklisting by default (although this seems to be subject to change).

So if it's necessary to be careful about the defaults and to audit everything carefully etc. (i.e. if it's not idiot proof), I am doubtful this is as helpful in practice as one might expect.

I still agree with the general point of your comment though.


This is wrong. Data is important but so too is control of executable programs installed on your computer.

Running as root allows a bug in an application like a browser to be exploited and give them root access.

Then they can modify programs like firejail and suddenly things you thought were protected aren't.


Fair point, but a browser bug leading to code execution in an unprivileged user could, as mentioned, read my SSH private keys, GPG private keys, ...

This in turn would allow an attacker to login to my servers and other computers leading to a total compromise, as well as breaking trust and integrity of my email (PGP keys).

For my PC a compromise of the user I login as would mean total chaos and compromise, regardless if this user is root or not.

Installation of executable programs isn't limited to the root user, a normal unprivileged one can have them as well. I mentioned firejail because running the browser inside firejail should provide more protection against attacks (provided it's correctly configured, as a sibling comment points out), as the attacker couldn't escape the browser sandbox. Though in the current modern world, a browser context compromise could be enough to exploit a power user -- webmail, domain registrar web interface, stored passwords.

I doubt many power users actually separate their workflow well enough to switch to a different VT (or SSH connection when working remotely) when performing administrative tasks on the computer that require root access. Because if users don't do that and just use a suid binary like sudo, a malicious attacker with code execution in the context of an unprivileged user that elevates privileges with sudo could snoop the entered password via ptrace or simpler means, like a wrapper binary that gets installed without the user's knowledge.

(I am by no means a security expert and my opinion shouldn't be treated as useful advice!)


I’m the only user on my system, compromise of uid 1000 is as bad as root. If you really care, move into a containerised operating system.


Logging in as something other than root also stops you from doing something really stupid to your system without explicit confirmation (usually by running the command with sudo).


Logging in as root just seems like a silly thing to do, if for no other reason than because so many applications will hassle you about being run as root. Why not just use sudo when you need it?


I ended up logging in as root mostly for the sake of convenience, as now I am no longer bothered with suid wrappers like sudo for mundane tasks, like editing system configuration files and udev rules for devices -- as the sole user of the computer I no longer face EPERM errors that force me into `sudo !!`.

I uninstalled sudo and started this habit on personal servers as well when the sudoedit vulnerability was announced, allowing anyone on a machine with sudo installed (regardless of sudoers config) to escalate to root.


> have the private keys always been not so private all this time?

It's not called private key because it is very secure and can't be accessed... It's on you to ensure that!


Only if they run under your user, since your private key's permissions should allow only you to read it. Programs running as you are basically you.


This is true for SSH key, but not for all data on MacOS, e.g. if you run `find ~/Library/Application Support/AddressBook` the OS will ask you if you want to give access to contacts to iTerm2/whatever (unless you have given it before). I'm not aware of a way to create additional sandboxed "folders".

Also, some applications on MacOS are sandboxed, IIRC Mail is one of them. Also, some (all?) applications installed from AppStore. That's the reason I prefer installing applications from AppStore: they seem to be at least somewhat sandboxed.

For development, I try as much as possible to leverage remote development via [JetBrains Gateway](https://www.jetbrains.com/remote-development/gateway/) and [JetBrains Fleet](https://www.jetbrains.com/fleet/). VSCode also has remote development but they explicitly assume that remote machine is trusted (in the security note in the remote extension plugin readme). In the case of JetBrains tools I have not seen any explicit declaration whether remote host is trusted (as in: if remote machine is pwnd then we may as well let pwn your personal machine), but at a glance it seems like there are minimal precautions (if you run web application and open it in a browser, the Gateway will ask if you want to be redirected to a browser etc.)

Probably best scenario for such remote development clients on MacOS would be to put them in AppStore: this way they could leverage sandboxing and in the case of thin client, the sandboxing likely won't limit functionality.


Yes, it’s actually a bit disappointing they didn’t implement keychain support which makes this a lot harder. But then people would be screaming that Apple is peeping at your private keys, even though Apple can’t see the contents of the keychain.


> Any program running in the userspace can read the private key file;

Only programs running as you (or `root`). It's private to you⁰.

Programs running as other users cannot read the file.

(Assuming you've not changed the permissions on the file or the `~/.ssh/` directory)

⁰ and the sysadmin - but if they're not trustworthy they could just replace `/bin/bash` or the kernel with their own version that copied everything you typed anyway.


That's why it's a good idea to use a passphrase with your key so that the key by itself is not useful to anyone.

It's not easy for people to run only trustworthy software, or even software that has been reasonably vetted by others. Not everyone has the aptitude to know how to check for surreptitious file accesses, or have the desire to learn just to make functional use of their computers.


Yep, same with cookies and cloud credentials: https://www.macchaffee.com/blog/2023/hacking-myself/


Use a pass phrase!


I do. Most probably they do too, but since any running app can access the user’s private keys, the whole security depends on the strength of the passphrase, which can be brute-forced offline?


Passphrases protect against silent key exfiltration. Make them long enough (six or seven words these days, I think?) and they won't be cracked in your life time unless the quantum people figure their stuff out or you become a vampire.

If you're trying to protect against running programs, you also need to protect against key loggers. Using hardware-backed keys and systems like Windows Hello for validation can help with that, as their UI is not easily interceptable.

In the end, there's no perfect way to protect your keys if you have a virus running on your computer.


Don't run apps you don't trust outside of a container. If there is malware on your system, your SSH keys are only one of your many troubles.


What are apps you do trust?


Use a long one.


It's totally fine, just do that npm install or `curl | bash`, no need to read anything.


You forgot the /s


Bug: Doesn't work with non-RSA or U2F/GPG keys. Some players will get an unfair advantage.


Bug: Passphrase-protected private keys are posted encrypted.


Bug: Uncaught exception if no file exists at the default private key path.


Buy Yubikey, put SSH key on Yubikey, job done.

You can use Nitrokey too, but IIRC be careful which one you buy as some are software-only implementations.


> You can use Nitrokey too, but IIRC be careful which one you buy as some are software-only implementations.

First I've heard of this. Do you have some links where I can read more about this?


> First I've heard of this. Do you have some links where I can read more about this?

Sure, the comparison table on the Nitrokey site[1] is probably sufficient.

Anything without a green tick next to "tamper-resistant smart card" is a software implementation with the associated risks (e.g. firmware updates are available[2] - i.e. if you can update the firmware then you've also got a low-level attack vector for miscreants).

Meanwhile all YubiKeys are hardware backed and it has never been possible to update firmware on them.

[1] https://www.nitrokey.com/#comparison

[2] https://www.nitrokey.com/releases


You can check this guide: https://github.com/drduh/YubiKey-Guide


I wrote a tiny node app, wrapped as a single binary, as a second factor for ssh public key login using pam_exec.so. It posts a Telegram poll - "allow login to server x as user y from ip z?" Yes/No with a 30-second timeout - to a private group. A simple way to add some additional protection.
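
For anyone curious, the wiring is roughly the following - a sketch only; the script path and the Telegram side are specific to my setup (names here are hypothetical):

    # /etc/ssh/sshd_config - require the public key AND the PAM step
    UsePAM yes
    AuthenticationMethods publickey,keyboard-interactive

    # /etc/pam.d/sshd - the script exits 0 to allow, non-zero to deny;
    # pam_exec passes PAM_USER / PAM_RHOST in the environment
    auth required pam_exec.so quiet /usr/local/bin/ssh-telegram-approve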


If you really wanted to play a dangerous game, you could construct a terminal command that had a 1/6th chance of doing an rm -rf /* at the root directory with full admin privileges and automatic yes to all prompts, preferably on a non virtual machine in production with no backups.


Suicide Linux already exists.


Personally I would want it to remove the most critical files first, or at least corrupt the filesystem. Most damage as quick as possible, in case they get cold feet and Ctrl+C. Maybe trap all signals so they can't do that either. Maybe run in the background!


And this is a good example of why people really should start looking seriously at OpenBSD.

By default Chrome and Firefox use pledge(2) and unveil(2). With the defaults, ~/.ssh cannot be seen by these browsers.


I sorta understand your point, but this wouldn't help in the case of running that script.

Namely, the JS sandbox of the browser already prevents filesystem access. But a user running `node` in a shell would not be protected by the browser or the hardening of browsers you mention. You would need to manually setup those protections for your command which most people will not do.

Similarly, Linux has filesystem namespaces, and tools like bubblewrap can achieve similar protections.

Lastly, the real risk of the above is that code is easily run automatically with an `npm install`, and if you have private repositories then node/npm would still need access to private key (or maybe token for http) information to fetch them.
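
For what it's worth, a bubblewrap invocation in that spirit might look like this (rough sketch, not a complete profile):

    # run the script with the real filesystem read-only and $HOME replaced by
    # an empty tmpfs, so ~/.ssh simply isn't there
    bwrap --ro-bind / / \
          --tmpfs "$HOME" \
          --ro-bind "$PWD" "$PWD" \
          --dev /dev --proc /proc \
          --unshare-all --share-net \
          --die-with-parent \
          node main.js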


Joke's on them. I only use ed25519 keys.

Seriously, where's the downvote button when you need one?

But yes, it would be nice for Linux to gain a version of OpenBSD's unveil system call.


TIL: malware + gamification = how to get #1 FP on HN


It's not malware if you run it on purpose


Social engineering is a legitimate attack vector.

"try this edgy nerd game/challenge for the lulz" is just as valid an exploit as "to see the dancing pigs, just run this `curl | bash` command"


I'm already sure that this somehow slips into a dependency of a dependency of a dependency of React, and the world will end.


Really good! Sent this to my colleagues to raise awareness on third party package security risks.


Very cute. It would have been cooler as a shell alias for ssh.

Using node seems like cheating; plus, you have to call it explicitly, and you know you really want to use this to prank your colleague who left their laptop unlocked.


> Using node seems like cheating

Well, you can easily turn it into an executable (e.g., using `pkg` [1]) so that the target computer doesn’t even need Node.js installed.

[1]: https://www.npmjs.com/package/pkg


That’s even more cheating.

This can be done purely in shell, no extra tools!
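
Something like this for the die roll, at least in bash (the pastebin part deliberately left out):

    # 1-in-6 chance; swap the echo for whatever mischief you prefer
    if [ $(( RANDOM % 6 )) -eq 0 ]; then
        echo "bang: $HOME/.ssh/id_rsa would have been posted"
    else
        echo "click"
    fi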


Fair enough.


Trivial to avoid by using any path other than the default RSA one - which already covers a lot of keys made in the last few years.

Also that's why you should have a strong password on things.
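
And if you do move keys off the default path, ssh_config keeps it painless - e.g. (host and paths are just examples):

    # ~/.ssh/config
    Host myserver
        HostName server.example.com
        IdentityFile ~/.ssh/keys/myserver_ed25519
        IdentitiesOnly yes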



Sometimes I wish Linux and the *BSDs followed the Plan9/9front security approach. Keys are handled by factotum, not stored under your home directory.


> ${process.env.HOME}/.ssh/id_rsa

Oh well, good thing I always rename my keys with the remote host it connects to.


How is "(Math.floor(Math.random() * 6) === 0)" 1 in 6?


Probably because random() returns a value in [0, 1); multiplying by 6 yields a random point between 0 and 6. There is a 1 in 6 chance for this number to fall in the range [0, 1), which is checked by rounding it down and testing whether it's zero.

right?
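
A quick empirical check seems to back this up (one-liner, assumes node is installed):

    node -e 'let h=0,N=6e6;for(let i=0;i<N;i++)if(Math.floor(Math.random()*6)===0)h++;console.log(h/N)'
    # prints something close to 0.1667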


> The Math.random() static method returns a floating-point, pseudo-random number that's greater than or equal to 0 and less than 1

What a poor name for a method.


I would have agreed 50 years ago. It has been used in many libraries in many languages historically, nowadays it would be confusing if it returned anything else.


Every random generator API I've seen uses this interval by default. And it makes sense, what other range is useful in more situations and platform independent? In practically every other case you'd want to specify your desired range with arguments.


The classic C rand() function returns an integer between 0 and RAND_MAX. Not being much of a JS developer I would have expected something more like that.


"Math.random() generates a randomized binary sequence, which is cast and returned as a floating point type; sign and exponent bits are hard-coded, such that the sequence may represent a number between 0.0f and 1.0f. NOTE: This method should NOT be used for cryptographic and/or security use cases. Implementation is performance oriented and is generally recommended for all non-sensitive programming, such as user interactive visualizations, and randomized suggestions."

Better?


Seems like a perfectly reasonable range, and what I'd expect for a method named that. If it was uniformly distributed over the range of a double, it would almost always be of large magnitude, which wouldn't be particularly useful.


why?


It just doesn't feel natural. If I ask you to give me a random number, are you going to assume it's random among all existing numbers or between the two smallest ones? I'd do Math.random(min,max) and then it could default to 0,1 though I guess you could go Math.random()*100 or whatever.. guess it just doesn't feel like good design or very convenient/readable - but then again this is JavaScript we're talking about.


> I'd do Math.random(min,max)

But then you'd need a different way to do random non-integers.

Randomness does not naturally produce an integer result. The 0-1 range with precision determined by the architecture is actually the simplest and most logical and flexible way to do it. [EDIT: floats are never simple, see below!]

Some languages offer convenience functions on top of the low level random number generator. I don't know what's available in JavaScript.

E.g. in ruby:

  irb(main):001:0> rand
  => 0.5562701792527804

  irb(main):002:0> rand(100)
  => 44

  irb(main):003:0> rand(44..203)
  => 188
...but of course, I could go on all day long about how pleasant Ruby is to work with. :)


> Randomness does not naturally produce an integer result. The 0-1 range with precision determined by the architecture is actually the simplest and most logical and flexible way to do it.

The most trivial way, I would think, would be to interpret a stream of bits as an unsigned integer. E.g., a u32 is just "take 32 bits of random data from your RNG, and cast." That's certainly far more natural than trying to build an IEEE double in [0, 1) with a uniform distribution. I'm actually not sure how you'd do that without doing something like starting with a random u32 and dividing by (U32_MAX + 1). Like maybe you can shove the RNG output into the mantissa, but the correctness of that is not at all obvious to me.


Mmm, great points.

At least for range [0,MAXINT], it is simple to fill the bitspace for an integer. Range [0,limit) requires slightly more-awkward division.

The problem of uniformity across the range in a populated float is critical -- as you point out, IEEE float layouts are not simple.

I would guess that the "float" result is constructed as a fixed-point decimal in the same way you would an integer (sprinkle the RNG bits over the storage bitspace), but returned as float (via cast or math) to match the language type system.


Math.random() generates a decimal that can be 0 but is less than 1, so when multiplied by 6 the range is 0-5.99999, which is then rounded down.


I'm not a javascript guru, but I see two ways how it may work.

1. The right part of the equation is converted into an integer because the left part is an integer. Conversion is done by rounding to the nearest integer, and it would work that way.

2. The other way it may work is to convert 0 into floating point. Math.floor would produce some canonical floating-point representation of 0 from numbers in the range [0..1), and the conversion of (int)0 into float would do the same.

One needs to really know his language of choice to master such subtleties.


There are no integers in JavaScript, only double precision floating point numbers.


So the second scenario is at play. 0 is converted into floating point.


So how many of y’all have already run this? Haha.


Landshark, cleverest species of them all..


I didn’t realize the browser could exfiltrate files like this. Or is this intended to be run via Node?


> Or is this intended to be run via Node?

Yes, as can be seen in the single line in the linked readme.


Definitely Node (or Electron, which is basically Node with a browser bolted on).


Node.


I figured an honest question would be downvoted on such a site as this.


The README.md, which is displayed on the linked page and should be above the fold on 95% of viewing devices, contains only two lines of text. One of which is the README's title and the name of the project. The other line is:

    `node main.js` for a 1/6 chance of posting your SSH private key on pastebin :)
The question might be honest, but I don't think it adds much to the discussion.


Everyone likes to crap on PGP, but this is why my ssh keys are subkeys of my GPG key and locked up with the GPG SSH agent.

This approach is far from perfect but certainly disallows outright exfiltration attacks.


Can you elaborate more? What's the difference between PGP, and having a regular SSH key password protected + running a regular SSH agent?


The advantages are very similar actually.

The added ‘benefit’ or ‘disadvantage’ is dependent on your use case.

If I regard my PGP key as the toehold to my online identity, having an ssh key tied to that identity is quite useful. Kind of neat to see the guy that signed the git commit is the same guy logging into the server, installing the software signed with the same pgp key.

If your threat model needs to keep your online identity somewhat anonymous, then a private key per server is likely the way to go.

An encrypted ssh key is somewhere in the middle there.


For me, physical smartcard storage + overarching identification. I've been using a Yubikey as my GPG key for a long time (and another in my safe as my GPG root). You can use SSH with FIDO2 today as well, but outside of SSH GPG provides a web of trust which has other benefits, such as a root of revocation (a different Yubikey in my document safe contains my GPG root) and signature validation not being tied to SSH key presence/absence on GitHub.


I too have a primary YubiKey with my 3 PGP subkeys on it (signing, authenticating, decrypting), and a backup YubiKey in my safe with my PGP master key on it.

I find it works quite well. The primary YubiKey goes everywhere I go; I have a lanyard for it. The backup YubiKey stays in the safe until I need it for something (e.g. signing someone else's PGP key, rotating subkeys, renewing the expiration date on a subkey, ...).

I also use both of them for FIDO, on websites that support real 2FA; more specifically, I enroll both of them, but I only routinely use the primary one.



