Can someone tell me what the living heck `Fuzzing` is?

I read this twice and I really don't have a clue, other than that it has something to do with (or requires) fast memory?



Testing code via semi-random inputs[1]. The most common fuzzers, AFL-Fuzz[2] and libFuzzer[3], are coverage-guided: they compile the program with special instrumentation to determine code coverage, then call the program repeatedly, changing the inputs via a genetic algorithm to try to maximize the code paths executed. When unexpected behavior is observed (typically the test harness crashing), the fuzzer saves the test's input for future use.

Basically automatic generation of test case inputs. It's non-deterministic, so it won't always find problems, but it can save a lot of manual effort.

[1] https://en.wikipedia.org/wiki/Fuzzing [2] https://lcamtuf.coredump.cx/afl/ [3] https://www.llvm.org/docs/LibFuzzer.html
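
To make that concrete, here is a minimal sketch of a fuzz harness. The tools named above (AFL, libFuzzer) target C/C++; this sketch uses Python's Atheris (which wraps the same libFuzzer engine) purely for illustration, and the parse_header target with its planted bug is made up for the example.

    import sys
    import atheris

    # Hypothetical function under test: it blows up (IndexError) on inputs
    # that begin with b"FUZZ" -- the kind of shallow bug a coverage-guided
    # fuzzer tends to find within seconds.
    @atheris.instrument_func
    def parse_header(data: bytes) -> int:
        if data.startswith(b"FUZZ"):
            return data[100]  # planted bug: short inputs raise IndexError
        return 0

    def test_one_input(data: bytes) -> None:
        # The harness just feeds the fuzzer-generated bytes to the target.
        # Atheris tracks coverage and mutates inputs to reach new code paths.
        parse_header(data)

    if __name__ == "__main__":
        atheris.Setup(sys.argv, test_one_input)
        atheris.Fuzz()

Running it starts the libFuzzer loop; when the planted bug trips, the crashing input is written out so the failure can be replayed.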


For an interesting, similar idea, see also:

https://en.wikipedia.org/wiki/QuickCheck
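
For a flavor of the property-based style QuickCheck pioneered, here is a small sketch using Python's Hypothesis library (itself inspired by QuickCheck); the encode/decode functions are made up for the example.

    from hypothesis import given, strategies as st

    # Toy pair of functions with a round-trip property to check.
    def encode(xs):
        return ",".join(str(x) for x in xs)

    def decode(s):
        return [int(part) for part in s.split(",") if part]

    # Hypothesis generates hundreds of random lists and, if the assertion
    # ever fails, shrinks the input down to a minimal counterexample.
    @given(st.lists(st.integers()))
    def test_roundtrip(xs):
        assert decode(encode(xs)) == xs

Instead of hand-picking test cases, you state a property ("decoding an encoded list gives back the original") and let the library hunt for inputs that break it.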


Fuzzing: give a program structured random garbage as input and see what happens, then fix the resulting bugs.


Originally: for each terminal program, pass every file as input. If a crash results, document it.

Effectively: random inputs to achieve unexpected results. It's now come to mean "random data testing of an API".


Here is a tutorial I found: https://fuzzing-project.org/tutorial1.html


Quantity. In the US, many people actually eat meat with every dinner, and much of fast food easily exceeds safe quantities. Once you get used to the large amounts, it gets really easy to feel underfed at safe quantities.


THIS. I saw multiple headlines calling it a processor, which may be somewhat true, but it's not a general-purpose processor and certainly is not a CPU. Yes, Apple's T2 chip has an A8 in it, but they don't call it a processor.


I think another shock here is that a lot of people dismissed the ability of ARM CPUs as well.

Back when the iPad Pro with the A10X came out, Apple claimed it was faster than half of all laptops sold, and people in the PC space were yammering on and on about how numbers don't show how much better x86 CPUs are at 'desktop stuff', that ARM CPUs can't equal x86 even with the same thermal envelope, and that they shouldn't ever be compared. Ironically, many are now stating that the reason they are so good is ARM, which isn't true either, lol.


>people in the PC space were yammering on and on about how numbers don't show how much better x86 CPUs are at 'desktop stuff', that ARM CPUs can't equal x86 even with the same thermal envelope, and that they shouldn't ever be compared.

It needs 30W at 4 cores at 3.2GHz. Ryzen needs around 5W per core, but it's on a worse process. The entire system does use less power than an x86 system, but that has nothing to do with the processor. It's more about how the SoC is arranged and that the RAM is (almost) on the same package. That means they can get away with higher bandwidth and lower power consumption for the entire system.

The idea that it's all about the processor is completely wrong. Yet all we have heard is fanboys crying that it's going to be 3x faster than desktop CPUs because of misleading TDP numbers.


As far as we know, all four large cores at max plus the four small ones is ~20W. Whole-chip max power use is 30W including the GPU and the ML processor. Ryzen also blows a lot of power on things other than cores, but AMD is absolutely the closest to this. The hard thing for them is that the big/little arch is a huge advantage for battery use at idle. I would say the game being played here is that Apple is betting on this to scale all the way up for fast bursts, but they know that the real advantage is that their cores can also scale much, much lower than anything out there. It's less about magical performance gains and more about remarkable power use, paired with much better power management lessons learned from making smartphones. Qualcomm could do this too if they actually cared about it.


AMD has tried a bit; Intel probably won't. You can still get great numbers on desktop x86 with better cores and processes. Zen 4 is perfectly good so far. Apple's M1 already outperforms most desktop CPUs, so it stands to reason that a model with more cores and a bigger L1 could outperform the whole industry.

The Surface Go is nowhere close: half as fast in single core, 1/5th as fast in multicore. The 5W TDP is really a generic number with no real meaning, as Intel doesn't really abide by it. I would say it probably uses about the same power as the M1, possibly much more under turbo, while also having a much higher power floor (i.e., at idle the Surface Go uses much more power).

Keep in mind that the Surface Go is very low-cost and the CPU is built on a 14nm process.


Would an M1 with more cores be able to beat the Threadripper at the same wattage? Right now the M1 stands at a score of 8000 vs. the Threadripper's 25000. The comparison, I'm sure, is not just about benchmark scores, but is there a prediction possible given that the M1 is at a 24W TDP whereas the Threadripper has a 280W TDP (a ~3x difference in the benchmark alongside a ~10x difference in TDP)?

Does Qualcomm or Samsung have an M1 beater in their kitty?
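
For scale, normalizing those figures to performance per watt gives a rough feel for the gap. TDP is only a loose proxy for real power draw, so treat this as a back-of-the-envelope sketch.

    # Rough performance-per-watt comparison using the figures quoted above.
    m1_score, m1_tdp_w = 8000, 24
    tr_score, tr_tdp_w = 25000, 280

    print(f"M1:           {m1_score / m1_tdp_w:.0f} points/W")   # ~333 points/W
    print(f"Threadripper: {tr_score / tr_tdp_w:.0f} points/W")   # ~89 points/W

On paper that's roughly a 3-4x efficiency gap, though scaling a design up to many more cores rarely preserves the ratio.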


It costs more to move 2 bytes into the CPU than to actually add them. As you go bigger, you spend an increasingly larger amount of time and energy moving data as opposed to actually calculating things.

Anandtech numbers showed 50-89% of total power consumption for the 7601 being used for Infinity Fabric. With 89W remaining spread among 32 cores, that's a mere 2.78W per core, or 1.39W per thread, at an all-core turbo of 2.7GHz.

Oh, I'd note that the 7601 is a 14nm Zen 1 part.

https://www.anandtech.com/show/13124/the-amd-threadripper-29...!


I'm not sure I, or really anyone, actually wants a 2-day battery life. Like, we can do that on smartphones right now, but users have signaled that a 1-day device is fine for them, notably because of the human gap.

You know the gap.

If you charge your phone every night it becomes a habit tied to your daily routine.

If you were to charge your phone every other night, you might lose track of what day you are on, not charge it, and then the perceived battery-life experience is worse. This is why smartwatches with 3-4 days of battery have not prevailed over those with one heavy day of battery. It's annoying to keep track of which day you're on, so you might just charge it every night anyway, and if you do, the platform is trading away so much for battery life that the experience is worse.

Plus, then you have to carry 2 days' worth of battery or have half the power envelope of a laptop with one day. The concept all sounds great, but the reality of people using things really has homed in on the fact that these things need to fit into habits and use cases that make sense.


Why are you assuming you need to charge a 2-day device every other day? You charge it every night, and in exchange you make it through heavy use days, late nights, and the times you forget to charge it. I had a 2-day phone and downgraded to a 1-day phone and my phone now dies on me much more often, including in each of those scenarios, and looking at the battery level and charging have become a bigger part of my life.


I think it's implied in "two-day battery life".

If that's not what you actually want, then just call it "heavy-use all-day battery life" or something.


> I'm not sure I or really anyone actually wants a 2 day battery life.

I do, because that means it could probably do 8 hours at high load.


My Garmin lasts about a week if I don't use GPS and it's by far the best feature.


There already is an insider build :)


Oh excellent!


Part of it, for me, is feedback. The timeline view in most Git apps makes it super clear where you are and where you can go. Starting out in Git with an established repo is like driving at night without headlights.


More likely they will ship more Firestorm cores and keep the Icestorm cores. Their future chip designs will likely be shared across desktop and mobile. Keeping Icestorm lowers the cost overall by allowing them to ship more chips, and it gives about a 30% performance gain in multicore.

Far more interesting to me is the idea that in heavy use, the Icestorm cores can run the OS, notifications, and all that, allowing full uninterrupted use of the Firestorm cores. Also, when the Mac is idle it uses far less power.

Basically, I fail to see a reason to not keep them :).


> Far more interesting to me is the idea that in heavy use, the Icestorm cores can run the OS, notifications, and all that, allowing full uninterrupted use of the Firestorm cores. Also, when the Mac is idle it uses far less power.

It's impressive what Apple has been able to do when they can fine-tune macOS and ASi to work together.


I also feel like Intel's depth is limited by the breadth of CPUs they must develop. With every release they are shipping tons of specific sets of cores and clock speeds to meet their market. Then you have the raw investment in fabs, which has turned out to be just lighting cash on fire for Intel. They make all kinds of claims and then fail over and over, plus they are hemorrhaging key talent. I think their soul really isn't in the game.

Apple has the luxury of building two or three chips total per year and simply funding TSMC's fabs. All of this is to fund the largest-grossing annual product launch. If their chips fail at being world-beaters, hundreds of billions of dollars are on the table. All in, Apple spends an incredible amount of money here, ~$1 billion. Per chip design shipped, Apple is probably spending much more, but also getting its return on investment. It's such a tight integration that if TSMC were ever delayed by, say, four months, I have no idea what Apple would do.

AMD is playing smart, fast, and loose. Best chip CEO by a wide margin. AMD's gains really are on Apple's back: their chip design is brilliant, and they get to reap the leftovers when Apple turns out its latest chip. They don't have to fund fabs, and they don't have to make crazy claims to appear relevant like Intel does. They just ship great bang for the buck, and the fab gains plus their own hard work have given them the best-performance title too. Going fabless was one of the most controversial choices ever made in the industry... and wow, was it the right move.


> Best chip CEO by a wide margin

Reminds me of this story: https://www.theregister.com/2018/04/16/amd_ceo_f1/

