There's a new way to flip bits in DRAM, and it works against the latest defenses (arstechnica.com)
179 points by vo2maxer on Oct 19, 2023 | 83 comments



For those interested, the key takeaway from this IMO is that by issuing many sequential reads, the memory controller will hold a target row open for an extended amount of time to service the consecutive accesses.

This is in contrast to the original rowhammer attack, which issues accesses such that target rows are repeatedly opened and closed to trigger bitflips in neighboring rows.

By stretching out the row open time to 30ms (!), the authors claim they are able to reliably trigger bitflips with a single row opening in 13% of tested rows at 50°C[1]. Some rows in certain chips can be flipped with access times of under 10ms[2].

At more realistic row open times of 7.8–70 µs, there seems to be a 1/x relationship between row open time and the number of activations required; the cumulative amount of time the row needs to be held open to trigger a flip seems to remain fairly constant (around 50 ms total, from my very approximate estimations). Note that the attack needs to be executed in under 64 ms total, otherwise the automatic DRAM refresh will reset any progress made.
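As a rough sanity check of that 1/x relationship, here is a minimal Python sketch. The numbers are my own approximate estimates from above, not the paper's exact figures:

```python
# Rough model: if a cumulative ~50 ms of row-open time is needed to flip a
# bit, the number of activations required scales as 1 / t_open.
T_CUMULATIVE = 50e-3     # ~50 ms total open time to trigger a flip (estimate)
REFRESH_WINDOW = 64e-3   # refresh resets progress after 64 ms

def activations_needed(t_open_s):
    """Activations required if each one holds the row open t_open_s seconds."""
    return T_CUMULATIVE / t_open_s

print(round(activations_needed(70e-6)))   # at tRAS max (~70 us): ~714
print(round(activations_needed(7.8e-6)))  # at tREFI (7.8 us): ~6410
```

Note that the estimated cumulative open time (~50 ms) already eats most of the 64 ms refresh window, which is why the timing of the attack is so tight.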

The authors demonstrate this attack with a userspace program that maps a 1 GB hugepage to be able to directly manipulate the lower 30 physical address bits[3], although they don't seem to provide the row open times they end up being able to achieve in practice.

The attack code itself: https://github.com/CMU-SAFARI/RowPress/blob/main/demonstrati...

https://arxiv.org/pdf/2306.17061.pdf [1] pg 5. obsv. 2 [2] pg 6. obsv. 6 [3] pg 11. sec 6.1


So this is a direct DRAM spec violation: the DRAM datasheet defines a parameter known as tRAS (row address strobe: the time from row open (read) to row close (write back)). Min is 33 ns; max is 9 × tREFI. tREFI (the average refresh interval) depends on temperature: below 85 °C it's 7.8 µs, so tRAS max is ~70 µs. (This is from some random Micron DDR4 datasheet.)
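The arithmetic behind those datasheet numbers, as a quick sketch (values as quoted above from a Micron DDR4 datasheet):

```python
# Datasheet timing values quoted above (typical Micron DDR4, below 85 C).
tREFI = 7.8e-6            # average refresh interval
tRAS_MIN = 33e-9          # minimum row-open time
tRAS_MAX = 9 * tREFI      # maximum row-open time per the spec

print(tRAS_MAX * 1e6)     # 70.2 -> the "~70 us" figure above
```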

Um, so of course they can trigger problems when they violate the spec!

Were they able to find a DRAM controller that violates the spec? If so, that's a simple bug in the DRAM controller. Well I guess so, the paper mentions Intel i5-10400 (Comet Lake). Do AMD processors have this issue?


Most vulnerabilities are spec violations. If there is a real system with this bug, then that system is vulnerable.

The distinction with RowHammer is not if it is a vulnerability, but what component can be blamed for the vulnerability.


I too am not seeing the gotcha here. The paper seems to be:

1. We ran a bunch of DDR4 out of spec directly with an FPGA; the RAM failed, to nobody's surprise, and we characterized it.

2. We found a way to bamboozle the CPU's ram controller to achieve a similar effect.

I've written SDR and DDR1/2 RAM controllers in Verilog, so I'm very familiar with autorefresh timing. You need to refresh each row every 64 ms, and if you have, say, 1,024 rows, then you must issue an autorefresh at least every 62.5 µs. In newer RAMs there are some allowances to optimize for PVT, but this is a fundamental requirement of DRAM spanning almost 40 years.
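The autorefresh budget worked out above, sketched in Python (the 1,024-row count is the example from the text; DDR4 spreads 8,192 refresh commands across the 64 ms window, which is where the 7.8 µs tREFI figure comes from):

```python
# Every row must be refreshed within the 64 ms retention window, so with N
# refresh slots the controller must issue one at least every 64 ms / N.
RETENTION = 64e-3

def refresh_interval_us(n_slots):
    return RETENTION / n_slots * 1e6

print(refresh_interval_us(1024))  # 62.5 us -- the example above
print(refresh_interval_us(8192))  # 7.8125 us -- the DDR4 tREFI figure
```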

In ye olden days of EDO and FPM drams, before synchronous, you had to manually select each row to refresh. Nowadays you just send autorefresh command with no argument. The chip itself maintains a row counter and auto increments it for round robin refresh.

I see 2 potential snags. The first is, JEDEC says that you're sometimes allowed to defer refresh 9 periods. But you do have to refresh more later to make up for it.

The second is if Intel cut corners in their controller. The controller should enforce a hard cutoff after a row stays open too long. The paper mentions this as a potential mitigation (but isn't this simply a hard design rule anyway?), yet claims such mitigations would not work because "the row would've been open for too long before refresh anyway". I cannot follow this bafflingly circular logic.

What am I missing?


After having read the document now I think I see the misunderstanding.

From my understanding the attack is:

1. Hold a row open for 70us

2. Ram controller may refresh a row

3. Go to 1

I saw nothing that mentioned a hard requirement of 64ms referenced in the post you replied to - they only mentioned that they kept it within 64ms to "Prevent data-retention failures from interfering with read-disturb failures"


the CPU has microcode... I wonder if that includes its DRAM controller.


Maybe they're using DDR3? Looking at a micron DDR3 datasheet there's no maximum for active-to-precharge

:edit: no, I should have read the paper - they tested with DDR4. Strange


I found the "ACTIVATE-to-PRECHARGE command period" in a Micron DDR3 datasheet, same value: 9 x tREFI

https://www.mouser.com/datasheet/2/671/4Gb_DDR3L-1283964.pdf

Page 78, speed bin tables.

Refresh basically is an activate/precharge sequence, so keeping a row open long is the same as denying refresh to that bank.


Won't ECC memory be a sufficient defense against that? I think it was invented specifically to overcome random bit flips.

If so, server / cloud infrastructure is largely unaffected.


> I think it was invented specifically to overcome random bit flips

The trouble is that in an attack, the bit flips aren't random and uncorrelated; they are purposefully being made in a small memory region.

All it takes is two bit flips to defeat most ECC



«Importantly, the researchers haven't demonstrated that ECCploit works against ECC in DDR4 chips, a newer type of memory chip favored by higher-end cloud services. They also haven't shown that ECCploit can penetrate hypervisors or secondary Rowhammer defenses. Nonetheless, the bypass of ECC is a major milestone»

Wow anyway.


Wait, did it or didn't it bypass ECC? The first sentence doesn't line up with the last sentence in the quote.


Any chance such access patterns could occur by accident?

Specific types of computations, processing datastructures with a specific layout, poorly written (but correct) code, ...?


"50 C" refers to 50°C as in average kinetic energy of particles. They did the measurements at elevated temperatures.


It isn't that elevated; RAM inside a laptop or server that is doing anything compute intensive will often be warmer than 50°C (122°F).


You are right. I was thinking elevated above room temperature, but that doesn't make much sense in this scenario.


I thought they might be trying to throw in some old Halt and Catch Fire trick


RowHammer and the other speculative execution are really interesting attacks, but have they ever actually been used in the wild yet? I can't seem to find anything that says yes.

https://news.ycombinator.com/item?id=27318960

2 years ago someone asked and it looks like the answer was "nope"

https://www.csoonline.com/article/573715/rowhammer-memory-at...

3 years ago they were "closer"

I couldn't seem to find anything that said they're known to be successfully exploited yet. My understanding (which is shallow at best) makes me think they're still really tough to use in a real world attack.


> RowHammer and the other speculative execution are really interesting attacks, but have they ever actually been used in the wild yet?

It's a question I've been asking myself for many highly publicized vulns. I once made this table: https://github.com/hannob/vulns


Thanks for making this table, it's really interesting. But I'm not sure I fully understand the point it makes (if there is one). Most of the vulnerabilities absolutely had potential of abuse, but were not used in the wild because the vulnerability disclosure process actually worked as intended and they were promptly patched.


> not used in the wild because the vulnerability disclosure process actually worked

This mental model kind of works for thinking about widespread exploitation but of course the vulnerabilities don't materialize when they are publicized, and most exploitations aren't publicly documented even if detected.


> and the other speculative execution

I don't think RowHammer is a speculative execution attack, is it? I thought Spectre and Meltdown were the best-known examples of SE attacks.


That’s correct. RowHammer is a HW attack on the DRAM controller and how DRAM refresh works at a low level. Spectre/Meltdown are a HW attack on the speculative execution unit within the CPU. Similar in terms of being HW attacks to exfil data across process boundaries. Different in that the former is more of a physical attack while the latter is more of a logical side-channel attack.


The thing is, you don’t want to wait until someone finally figures out a practical attack before implementing the mitigations. You never know when a combination of weaknesses suddenly opens up a serious vulnerability.


The folks using them in the wild don't leave evidence they were there.


Same can be said of ghosts. I think it’s reasonable to question their existence.


The difference is that we haven't produced any proof-of-concept ghosts yet.


“Sure, MIT can extract the layout of a maze from a rat’s soul up to 30 minutes after death using a Ouija board, but the headline doesn’t mention that it also takes a 4 T magnet, a 3 TW pulsed laser, and a convergence of at least five Leylines. I think the philosophical implications are overstated.”


Yeah, I am laughing over here. Perfect.


Bit flipping is based on falsifiable science. There's no dispute about whether it exists, just whether it's feasible to exploit it.


I’ve never seen a proof-of-concept ghost.


Millions however have seen an actual one, or so they say.


Billions say <insert their deity> exists too. So. What. That's not proof.


Isn't it? Historically it has qualified as proof to whole societies.


Rowhammer and similar bugs where the attacker can cause bitflips but not have tight selection of which bits are flipped are going to leave evidence.

On systems with ECC, you're highly likely to get elevated correctable and uncorrectable error counts. Most systems will halt or reboot on uncorrectable errors, and usually that leaves a lot of logs. Correctable errors should also leave a trail, too. With RowHammer, sometimes you get enough bitflips that ECC can't detect or correct the error, but it's a distribution and many of the bitflips will be detected or corrected. I don't quite understand this one, but I imagine it's going to be similar --- you usually need to do a lot of attempts to get the bits you wanted flipped, and some of those failed attempts will trigger ECC; if the attacker gets really lucky, there won't be a trace, but if they're attacking multiple systems, they're unlikely to get the flips they want on the first try on all the systems.

On systems without ECC, you're likely to get increased crashiness on systems under attack. It'll be much harder to track down of course, but it won't be without evidence.

Speculative execution attacks to read memory can leave evidence too, but it depends on what level of statistics are tracked. I would suspect few systems collect the detailed statistics you'd need.


But why not? Intruders attempting escalation leave all sorts of detritus.


Cloud providers would be the most lucrative targets, and they keep their incidents quiet.


The main concern is probably that the 'safety factor' between the current publicly known state of the attack and it actually being dangerous is smaller than what the community is comfortable recommending as safe.


Related:

RowPress: Amplifying read disturbance in modern DRAM chips [pdf] - https://news.ycombinator.com/item?id=36479683 - June 2023 (6 comments)


Isn't it the case that this attack vector and almost all others are the result of the poor architectural choice made by the first and all subsequent CPUs - the mixing of data and instructions?


It is not a "poor" architectural choice, it is a choice with advantages and disadvantages, and something that has been studied since the 1940s. See Harvard vs Von Neumann architecture. Harvard: separated, Von Neumann: combined.

Modern processors are usually a hybrid, Harvard on the inside (with separate instruction and data caches), Von Neumann on the outside. But some processors (ex: DSPs) are pure Harvard and both styles have co-existed since the very first computers, the ones made of vacuum tubes and relays. There is no "original sin".


No? Direct control is harder in a Harvard architecture, but often changing data is the attack you want to do anyway.


Many would say, myself included, that is not a poor choice. It was the better of two possibilities.


Is ECC memory vulnerable to these kinds of bitflips?


The paper has some mentions of ECC.

https://people.inf.ethz.ch/omutlu/pub/RowPress_isca23.pdf

For example, the first mention of ECC says this:

> we ensure that the tested DRAM modules and chips have neither rank-level nor on-die ECC. Doing so ensures that we directly observe and analyze all circuit-level bitflips without interference from architecture-level correction and mitigation mechanisms.

Later in the paper they say:

> We examine the capability of ECC, which is widely used in modern memory systems to correct memory errors, in mitigating RowPress. We analyze the number of bitflips in every 64-bit word for both single- and double-sided RowPress for a tAggON of 7.8 µs. To maximize the number of bitflips at this tAggON, we activate the aggressor row(s) as many times as possible within 60 ms at 80°C. […]

> We make two key observations from our analysis. First, there are up to 25 RowPress bitflips (not shown) in a 64-bit data word. ECC schemes that are widely used in memory systems (e.g., SECDED [122] and Chipkill [123–125]) cannot correct or detect all RowPress bitflips we observe, which can lead to silent data corruption [126–128]. Even a (7, 4) Hamming code (correcting one bitflip in a 4-bit data word) [122] with 75% DRAM storage overhead (3 parity bits for every 4 data bits) is not capable of correcting 25 bitflips in a 64-bit data word. Other ECC schemes that can correct all RowPress bitflips require prohibitively large storage overheads. Thus, relying on ECC alone to prevent all RowPress bitflips is a very expensive solution. Second, for all three manufacturers (Mfrs. A, B, and C), a significant fraction (up to 0.99%, 35.77%, and 10.08%, respectively, for tAggON = 7.8 µs) of 64-bit data words exhibit at least three RowPress bitflips. This makes RowPress bitflips costly to prevent using techniques like memory page retirement (where erroneous DRAM rows are not used in the system) [129, 130], since such techniques could render up to 35.77% of storage capacity useless.
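To make the quoted (7, 4) Hamming example concrete, here is a minimal encoder/decoder sketch. It corrects any single bitflip per 7-bit codeword, but a second flip silently miscorrects, which is why 25 flips in one word are far beyond what such codes can handle:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_decode(c):
    """Decode a possibly corrupted 7-bit codeword back into 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit
    c = c[:]
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the (assumed single) error
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
cw = hamming74_encode(data)
cw[4] ^= 1                           # one flip: corrected
assert hamming74_decode(cw) == data
cw[1] ^= 1                           # a second flip: silently miscorrected
assert hamming74_decode(cw) != data
```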


What a thoroughly disingenuous argument they make.

Let me rephrase their argument in terms of adventurers trying to break into the castle throne room. They say that if you keep sending a million adventuring parties, eventually some will break in, perhaps as many as 25 adventurers at once, and there's no way you can afford 25 guards 24/7 to protect anything.

The obvious counter-argument is that, sure, maybe the millionth party will get in... but someone had to notice the guards stopping the first 999,999 failures, right? That counts for something?

Most "defeats" of ECC in these RowHammer style attacks are of this form, and are not in my opinion actually defeats. Authors are just sleazily handwaving away the ECC to get a sexier paper.

(ECC with OS error reporting is like having posted guards with someone supervising. If this analogy makes you wonder how things can possibly work without any ECC/guards, congratulations, you see the argument the ECC-everywhere camp are making!)


ECC is very resistant, though not immune; at least it will let you detect RowHammer attacks.


Yes, the advantage of ECC against RowHammer or RowPress attacks is detection, not prevention.

The attacks usually produce more bit flips than can be corrected by ECC, but the attacker cannot control the number and the position of the bit flips.

When a memory uses ECC (true end-to-end ECC, not the internal ECC of DDR5, which is useless against these attacks, because the errors are not reported to the OS), there are many more invalid memory bit configurations, than valid ones.

Because of this, applying a random pattern of bit flips to the memory is much more likely to be detected as an ECC error, than to be ignored. Computers with ECC are normally configured to scrub all the memory periodically, so the bit flips caused by an attack will be detected even in unused memory.
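The counting argument above can be made rough and concrete. Standard (72, 64) SECDED stores 64 data bits in 72 stored bits, so only one in 256 stored bit patterns is a valid codeword (and a random flip pattern is in practice even more likely to be flagged, since valid codewords sit at Hamming distance ≥ 4 from each other):

```python
# (72,64) SECDED: 64 data bits stored in 72 bits, so valid codewords are a
# tiny fraction of all possible stored patterns.
DATA_BITS = 64
STORED_BITS = 72
valid_fraction = 2**DATA_BITS / 2**STORED_BITS
print(valid_fraction)  # 0.00390625 = 1/256
```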

Because in a normal computer it is extremely unlikely to encounter more than one ECC error per day (either correctable or non-correctable), a decent operating system should immediately notify the user or the administrator in the case of repeated errors, which can appear only as a consequence of a hardware malfunction or of a RowHammer/RowPress attack (many servers are configured to send immediately an e-mail message or an SMS alert to the administrator when such events happen).

Therefore, on computers with ECC it should be impossible for such attacks to remain undetected, even if they can succeed to corrupt the memory.


When I read about these kinds of things I am reminded of this James Mickens quote:

"Unfortunately, large swaths of the security community are fixated on avant garde horrors such as the fact that, during solar eclipses, pacemakers can be remotely controlled with a garage door opener and a Pringles can. It’s definitely unfortunate that Pringles cans are the gateway to an obscure set of Sith-like powers that can be used against the 0.002% of the population that has both a pacemaker and bitter enemies in the electronics hobbyist community. However, if someone is motivated enough to kill you by focusing electromagnetic energy through a Pringles can, you probably did something to deserve that. I am not saying that I want you dead, but I am saying that you may have to die so that researchers who study per-photon HMACs for pacemaker transmitters can instead work on making it easier for people to generate good passwords."

I feel like the same thing applies when we are talking about these kinds of attacks. Right now the latest version of Express has a dependency on a library with a known RCE. And listen, 99% of us work at a company where, if someone wanted to steal our information, they'd pay the janitor $500 to grab it on his way out. Is this really a huge priority?


> Is this really a huge priority?

It is for people who rent out slices of a computer to a bunch of different people and promise that the script kiddie that lives on the same machine as you can't steal your members-only cat photos (e.g. cloud providers.)


Like yes, hypothetically they could do that. But being an infosec person, what I can tell you is much more likely: two years ago a developer fixed a bug by downgrading the library that interacted with the database. It turned out there was a vulnerability in that version of the library allowing SQL injection, but now that is a core piece of business functionality, and no cycles can be spared until "next sprint", which will never come because the company is still a scrappy startup that moves fast and breaks things (despite now having 100 developers and millions in revenue). Then someone finds out and exfiltrates your entire DB in about 20 minutes with automated tools.

Or what is more realistic: they send an email to Sarah, the CEO's PA, saying she needs to grant access to "John Smith", and she enters her username and password at the corresponding link. Those credentials are then used to access GitHub (of course the secretary has GitHub access, because one time the CEO wanted to look at something and couldn't, so now he demands his secretary have full GitHub access), and there they find the root DB username and password, because after it was accidentally committed the intern decided just to delete it in a new commit rather than get in trouble. That attack took 10 minutes and an email.

My point being: if you are running something so sensitive that it needs to be protected from this kind of hypothetical attack, you're probably already paying for a dedicated instance in the first place.


Also anyone who runs JavaScript, since they work across processes.


It's a matter of scale. If you or I individually have a moderately clever and determined local enemy, with access to an average garage toolbox and some commodity PC accessories, who is not afraid of legal consequences for stealing your information...it would take a lot less than $500 to get your information.

The difference, and the reason this exploit is a big deal, is that a huge swath of our economy and society is in The Cloud. There are extremely, extremely clever enemies who (like our local hypothetical) are unafraid of legal consequences, because they're protected by online anonymity and possibly thousands of miles of open ocean and countries without extradition treaties. But they're not determined in the slightest! If it takes 60 seconds or costs $0.06 to break your unique configuration it's not worth it to them.

It doesn't concern me that my coworker has a coffee cup in their desk drawer that contains sticky notes for some of their most important passwords. By the time there's a human being inside our building digging through desk drawers, we've pissed someone off far too much for that to be the weak link.

I am concerned that said coworker uses the same password on their pizza delivery app as they do on our work accounts, because it's only a matter of time before the former suffers a leak of ten million plaintext password-email combos and someone tries pointing automated ransomware at all of the associated domains.


RowHammer/RowPress are scarier than most other attacks because they expose the fact that the memory chips do not behave as advertised: they do not keep the promise of returning, on read, the bits that were written into them, even when their datasheet specifications are not violated.

Unlike most other attacks, the memory access patterns used by RowHammer/RowPress may happen accidentally, without any intention, when some ordinary program is run, and such an event would cause memory corruption.

Even if such access patterns are very unlikely, it would be an unreasonable burden for programmers to analyze all the programs, to be able to guarantee that such memory access patterns can never happen, especially taking into account that the actual access pattern can vary non-deterministically, due to out-of-order execution and interaction between concurrent threads.

The only acceptable solution is for the memory producers to improve their products so that their datasheets will correspond to their real behavior.


Thank you so much for that quote. I found the original article[1] from 2014 and it is perhaps the best computer security article I have ever read.

[1] https://www.usenix.org/system/files/1401_08-12_mickens.pdf


On one hand, I agree. On the other, we do have examples of nation states killing journalists and other people with the use of security bugs in applications. No one really knows when they're being targeted by nation state or when they fall into that category.... but when they do, they won't just being doing things to your email.

see: https://www.theguardian.com/world/2021/jul/18/nso-spyware-us...


I half expected the article to be a single page telling the reader in a large font to never open PDF files linked to by random users on internet forums. It isn't that, but I wouldn't have been mad if it had been.


Fantastic.


Yes, I have the same feeling about the SPECTRE mitigations, for example. No one is going to be attacking my home Linux box with SPECTRE (has anyone ever been attacked with SPECTRE?), but by default they remove like 10+% CPU performance, just in case. I disabled the mitigations.


For multi tenant environments like a provider running VMs, preventing speculative execution attacks is a priority; otherwise someone can literally see what another tenant is doing (and there are attack demos demonstrating it).

Furthermore, the entire concept of paging and virtual memory is built around not letting one process see what another is doing; the described prime+probe/flush+reload attacks are quite practical and could let some JS on a website steal passwords. (The information leak rate is a little lower than what you get by outright stealing credential files using malware, but that tends to show up on EDRs, whereas speculative execution attacks just show up as memory access patterns with fewer cache hits than expected.)

If we’re to say that you don’t need isolation, we could just drop paging and every process just have a view of every other process.


> the entire concept of paging and virtual memory is built around the concept of not letting one process see what the other one is doing... If we’re to say that you don’t need isolation, we could just drop paging and every process just have a view of every other process.

This seems to be moving goalposts. It's like saying curtains were built around the idea of not letting one person see inside a house, so the moment IR sensors (or Wi-Fi or radar or whatever) start being able to see in the house, everyone might as well just remove their curtains.


I think the threat there isn’t someone else attacking their VM to get to your machine - but that there was at least some proof of concept over JavaScript remotely.

But otherwise I agree.


> but that there was at least some proof of concept over JavaScript remotely

Yeah, but that's exactly the Rube Goldberg kind of thing the quote in the OP is parodying. The threat is IF I come across a hostile page, AND I'm running a specific web browser with the right JS properties, AND I am also running the right security-critical software, AND the attacker can correctly guess the memory layout, AND the JS runs for long enough to pull enough bytes to actually be meaningful, AND the attacker can connect the meaningful secret with whatever purpose it's intended for, AND the attacker can pull my legit data out from all the false positives they're getting from running an un-targeted website, THEN they can maybe log into my old AOL account or something.

It just isn't going to happen. There are usecases where SPECTRE is a meaningful attack (shared hosting being the obvious one), but a normal end-user machine? Nah. Certainly not likely enough to nuke 10% CPU perf on every machine by default.


> AND the attacker can correctly guess the memory layout

AMD has several unpatched vulnerabilities that leak the memory layout/page table mappings, effectively nullifying KASLR as well.

They can be mitigated by KPTI, but AMD ships that in a disabled state by default because of the performance implications (with Ryzen's reliance on giant caches, flushing them on every syscall would be even more disastrous for them than for Intel, so they cheat and ship insecure-by-default).

The rest of that sentence is just an overwrought description of running a javascript segment in a web browser, there's nothing load-bearing there to prevent an attack.


There's also netspectre which remotely attacks a kernel with malformed network packets.

https://arxiv.org/abs/1807.10535


Yeah, I think these sorts of attacks only matter to people who run random code sent by untrusted parties on their machines.

But I mean that includes Amazon and everybody that browses the web with JavaScript enabled, so I guess it is a pretty big population that might be interested?


If your threat model now requires a browser in the chain, then we can fix it in the browser, no?

https://blog.mozilla.org/security/2018/01/03/mitigations-lan...

https://security.googleblog.com/2018/07/mitigating-spectre-w...

Still safe at full CPU perf.


I don’t think I’ve described a threat model, just given an example of a single vector that is pretty widespread.

I just assume processes on my system can get root if they really want.


Yes, but Spectre was easily fixed by limiting the accuracy of the browsers’ window.performance interface, AFAIR. Meltdown on the other hand required a kernel patch and a general performance penalty.

I also have all meltdown mitigations turned off in the Kernel on my Laptop.


Well, it is in a way like "herd immunity": if everyone around has mitigations on, no one will bother to use it as an attack vector. If everyone for some reason started disabling mitigations, then the bug would start getting exploited.


> However, if someone is motivated enough to kill you by focusing electromagnetic energy through a Pringles can, you probably did something to deserve that.

Someone who isn't me picked up archery with the express purpose of shooting either their completely innocent and righteous ex-partner and/or the ex-partner's new partner during their wedding, so I would not say that effort made to kill someone equates to them deserving it.


this is what people thought about buffer-overflow attacks back in the 80s and, unaccountably, 90s: theoretical but not practical

if you haven't been following computer security, exploiting buffer overflows has been routine and widespread for decades, and on one notable occasion in the 80s morris's worm used one to take down most of the internet at once by accident


As with many other areas of development, sexy stuff often edges out practical stuff for attention. A lot of this comes from the natural tendency of engineers to be interested in sexy stuff, and a lot of it comes from career advancement and the desire to pad your resume out with impressive looking projects to show how smart you are.


Somebody here on HN pointed out there's a security researcher constantly pumping out side-channel attacks that are hard to pull off but sound sensational - and making PR around it that makes the rounds on the internet including HN.


I’ve always thought it a little odd the janitor threat model is called Evil Maid.

Perhaps because maid is a household thing and using encryption and authn/z schemes to stop it is less likely in an informal home environment?


I was under the impression that it usually referred to the 'maid' in a hotel room cleaning service, when rooms were cleaned more regularly before COVID.

If you were away on business somewhere sketchy, the cleaning service could physically access your room and your documents/laptop in an at-rest state, potentially without you knowing. This was a particular concern years ago if the country you were in was hostile and the cleaning crews in major hotels had embedded state security (the other side of the Iron Curtain from you, or something).


Hmm, well, I suppose I see my home cleaning lady more than I travel!

Fair enough.


Maybe that's my interpretation, but when I think "maid", I think of someone who can access my bedroom, a private space. When I think "janitor" I think more of public spaces, where I would be less inclined to leave my stuff unattended and where it would be easier for the evildoer to get caught.

In other words, a maid would be a more trusted and therefore more powerful attacker than a janitor.


IIRC the name refers more to cleaning staff at a hotel, who would have access to your laptop while you’re out of the room.



My son was asking me how I hacked into things back in the day. I told him 95% of it is literally just talking to people and asking for the thing you want. You don't even have to lie because most people never even thought to ask "why." Well, the older generation didn't. I've noticed most younger people will ask "why", even when it is obvious.



