sedeki's comments | Hacker News

I recently started lifting weights. I just bought gloves. I like this tip though - thanks.


In the theme of the OP and by way of response: I've been lifting weights for decades, but only in the last couple of years realized that gloves were detrimental to lifting. They add a layer of material around the bar that increases its circumference and thereby increases the effort required to grip it for pulling movements. They also reduce your tactile connection to the bar, which matters for engaging secondary muscles in certain lifts (e.g., lats in the overhead press) as you progress in strength, and for very technical lifts such as the clean. The only time hand protection may be beneficial is for movements in which your hand rotates relative to the bar. That is never the case for a (properly performed) barbell exercise, but it may be the case in certain gymnastic movements on a pullup bar; there, you might consider gymnastics grips.


I would highly recommend not using gloves. They reduce the feedback you have on the bar and can lead to entrenching bad habits with regard to grip. Chalk and the technique in the video are a time-tested approach.


I used too much chalk and "ripped" part of the skin on my palms.

Where can I read more about the techniques and/or bad habits? Interesting.


Can someone provide book titles that are affected by this...?


Whatever random book any random teacher bought or was given.

The whole point is that they have to remove ALL books that aren't approved through The Process. Of course there's not going to be a list of those. There isn't some master list of books that are lying around classrooms everywhere.


The rules are whitelist-oriented rather than blacklist-oriented, so it's easier to give the list of books that would be allowed; every other book in existence outside that allowed set is potentially blocked.

The rules would block a book like East of Eden: pornographic elements, and too advanced for kids (even if it isn't).



from another comment: https://nypost.com/2022/04/22/floridas-banned-math-textbooks...

So you should teach how to read graphs in Math. But if those graphs show that racial prejudice exists, that's a felony. Simple.


All books are affected by this.


I want to start doing this too. I bought a flip-phone some time ago, but it never stuck.

My usage is pretty much: listening to music, using (Google) Calendar, texting, and FaceTime.

Any surfing is plain doom scrolling and not productive.

Recommendations for me? Analog/simpler substitutes?

For people who have gone down this path: how do you feel now?


Personally, it wasn't very hard for me to go down this path because I never found social media (which, let's face it, is the big driver of phone overuse) to be all that interesting in the first place.

That said I treat my phone mostly like a phone. I have my calls and texts whitelisted. If you're not in my contact list, the phone won't ring or ding. Then if I hear it ring, I know it's someone I know.

I've removed most of the apps on my phone, and I keep it on my desk at all times. No different than a phone you would have hanging on a wall. I treat it the same, so I don't feel the need to really "use" it.

I will take my phone with me when I go places, and use it for music or maps in the car, but because I mostly treat it as a phone, I don't really see it as anything other than that.

When I started doing this I was admittedly pretty bored, but over time I just found other things to do and am never bored now. You just get used to it. Humans are pretty adaptable.


You could use the parental controls to lock it down.

iOS has Screen Time, which allows you to set limits on how much you can use an app. I'm sure there is some equivalent on Android.

I made the mistake of purchasing FTL (a game) and had to add a Screen Time rule for it. :)


Yeah, I have it activated. But I want to experiment with the idea of abolishing my phone as much as possible.


I'd be curious as well. I realized in the last few months that I rarely enjoy looking at anything on my phone. I like it when my friends and family text me. I like the Discord group I have with my friends, especially now that we all have kids and have moved away. I started by deleting apps that weren't essential, keeping the utilities and the things I use situationally rather than compulsively.

With Twitter in particular, I don't know a better way to find artists, writers, and podcasts. I have never enjoyed Twitter less than I do now, but it is unfortunately the best way I currently know. I guess it's time to start digging.


I've taken a different approach: I spent about a year silencing anything that grabbed my attention and shouldn't have. Ad blockers handle that job just fine. On social media, I just unfollowed everyone and blocked every related content feed. These websites are empty, but accessible. They can be queried, but not browsed.

It worked great. I sometimes just don't know what to even look at on my phone. It's about as exciting as an old flip phone.

That took care of the impulse. Beyond that, it's just a matter of self-discipline.


All I do is put the phone on vibrate and put it in my pocket. Other than taking out the phone to capture a photo, I never really feel the urge to pull it out and waste time on it unless I am actively waiting and there is nothing else to do. I like to be 100% engaged in what I am doing whether that's socializing with friends, enjoying a hobby or walking somewhere. Maybe that's my secret? The desire to feel fully engaged and present?

All that being said, I work on a computer all day long. I am in front of the computer all day and do a fair bit of "time wasting" there.


This describes my usage too. I'm also on a computer all day long; it's also where I waste my time. I see my phone as a less useful computer with the worst typing interface possible, and I try to use it as little as possible.

I'd be totally content to replace my physical phone with a virtual one on a computer, provided I could make calls and send/receive text messages. Maybe I need to look into Google Voice...


When I attempted this, I got a flip phone and transitioned the rest to my laptop. As a result, I just carried my laptop around all the time. Didn't solve the problem!!!


Between texting and FaceTime, that's pretty much the entire functionality of a modern smartphone.


I like the idea of this, but $149 for a deck of cards...?


Yeah I was also shocked by that lol. I don’t think I paid that much however many years ago I bought them… at least I hope I didn’t!

It looks like they have a free sample they’ll email you from their site. You can probably take it as inspiration and make your own deck if you want!


I agree that you don't need to change who you are by "fixing" something.

But it is a weak argument for WFH by itself, don't you think?


I didn't say it was my only reason. I said "This is a huge reason...". There are quite a few others as well. I didn't list them, because they're irrelevant to the "Ask HN" question, and are therefore off-topic.


OK, yes you're right - it would indeed be off topic.

I actually recognize that line of reasoning myself. But since losing weight and working on myself (e.g. going to a therapist) I feel much better.

My point is: there were bigger issues at play for me when I didn't want to work from an office...


How far off is bpy (Blender's Python API), do you think?


How can a (badly chosen) typedef name trigger _undefined behavior_, and not just, say, a compilation error...?

I find it difficult to imagine what that would even mean.


You can declare a type without (fully) defining it, like in

    typedef struct foo foo_t;
and then have code that (for example) works with pointers to it (foo_t *). If you include a standard header containing such a forward declaration, and also declare foo_t yourself, no compilation error might be triggered, but other translation units might use differing definitions of struct foo, leading to unpredictable behavior in the linked program.
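To make that concrete, here is a minimal two-file sketch (the names and layouts are made up for illustration). Nothing forces a diagnostic here, but the two translation units disagree about struct foo, so the linked program's behavior is undefined:

    /* a.c -- this translation unit's idea of struct foo */
    typedef struct foo foo_t;
    struct foo { int x; };
    int get_x(foo_t *p) { return p->x; }

    /* b.c -- a second translation unit with an incompatible definition */
    typedef struct foo foo_t;
    struct foo { long a; long b; };   /* different layout, same tag */
    int get_x(foo_t *p);
    /* Passing a b.c-style foo_t to get_x() makes it read memory it never
       expected: classic cross-TU undefined behavior. */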


One potential issue would be that the compiler is free to assume any type with the name `foobar_t` is _the_ `foobar_t` from the standard (if one is added); it doesn't matter where that definition comes from. It may then make incorrect assumptions or optimizations based on specific logic about that type, which end up breaking your code.


The problem is that, to trigger a compile error, the compiler would have to know all of its reserved type names ahead of time.

It is not required to do so, hence undefined behavior. You might get a wrong underlying type under that name.
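As a hypothetical sketch (int128_t is not in the standard today, which is exactly why its name is reserved): the following builds without complaint, but the identifier sits in the space set aside for future <stdint.h> typedefs, so the program's behavior is formally undefined and a future implementation could give the name a different underlying type.

    #include <stdint.h>

    /* Typedef names beginning with int/uint and ending in _t are reserved
       for <stdint.h>. Today this compiles silently... */
    typedef long long int128_t;

    /* ...but a future <stdint.h> that ships a real int128_t (with a
       different size and representation) would clash with this, and the
       standard doesn't require any diagnostic along the way. */
    int128_t total;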


But wouldn't one be required to include a particular header in such case (i.e. the correct header for defining a particular type)?

I mean, no typedef names are defined in the global scope without including any headers, right? I find it really weird that a type ending in _t would be UB if there is no such typedef name declared at all.

Or is this UB stuff merely a way for the ISO C committee to enforce this without having to define <something more complicated>?


[Note: What I originally wrote in my top-level comment was inaccurate; I edited that comment, but later posted another update: https://news.ycombinator.com/item?id=33773043#33775630.]

The purpose of this particular naming rule is to allow adding new typedefs such as int128_t. The "undefined behaviour" part is for declaration of any reserved identifier (not specifically for this naming rule). I don't know why the standard uses "undefined behaviour" instead of the other classes (https://en.cppreference.com/w/cpp/language/ub); I suspect because it gives compilers the most flexibility.


[Edit: My link to the behaviour classes was wrong (it was for C++ instead of C), it should have been https://en.cppreference.com/w/c/language/behavior]


Doesn’t the compiler need to know all of the types to do the compilation anyway?


I'm not sure, but in general having incompatible definitions for the same name is problematic.


Off topic: What level of sophistication about modern CPUs is _good_ to have? And where does one learn it? Resources?

Say that I want to work with HPC-type applications and otherwise just squeeze the most out of the processor, e.g. quant finance / trading systems.


The first question is a bit subjective, but Hennessy and Patterson is a good place to start.

It's thick, but it reads fairly easily in my recollection.


There are a lot of issues here, so I can share some stuff about some of them and hope that some helpful internet commenters come along and point out where I have neglected important things.

A single modern CPU core is superscalar and has a deep instruction pipeline. With your help, it will decode and reorder many instructions and execute many instructions concurrently. Each of those instructions can operate on a lot of data.

As famous online controversial opinion haver Casey Muratori tells us, most software just sucks, like really really bad (e.g. commonly people will post hash table benchmarks of high-performance hash tables that do bulk inserts in ~100ns/op, but you can do <10ns/op easily if you try), and using SIMD instructions is table stakes for making good use of the machinery inside of a single CPU core. SIMD instructions are not just for math! They are tools for general purpose programming. When your program needs to make decisions based on data that does not contain obvious patterns, it is often a lot cheaper to compute both possible outcomes and have a data dependency than to have a branch. Instructions like pshufb or blendv or just movemask and then using a dang lookup table can replace branches. Often these instructions can replace 32 or 64 branches at a time[0]. Wojciech Muła's web site[1] is the best collection of notes about using SIMD instructions for general-purpose programming, but I have found some of the articles to be a bit terse or sometimes incorrect, and I have not yet done anything to fix the issue. "Using SIMD" ends up meaning that you choose the low-level layout of your data to be more suitable to processing using the instructions available.
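As a small, hedged illustration of the movemask-instead-of-branch idea (SSE2 intrinsics; the function here is made up and is much simpler than the despacer code linked below): compare 16 bytes at once, collapse the comparison into a bitmask, and popcount it, with no per-byte branch in the hot loop.

    #include <immintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Count ' ' bytes 16 at a time: one compare, one movemask, one popcount
       per iteration (uses the GCC/Clang __builtin_popcount builtin). */
    size_t count_spaces(const uint8_t *buf, size_t len) {
        const __m128i space = _mm_set1_epi8(' ');
        size_t i = 0, n = 0;
        for (; i + 16 <= len; i += 16) {
            __m128i chunk = _mm_loadu_si128((const __m128i *)(buf + i));
            __m128i eq    = _mm_cmpeq_epi8(chunk, space);    /* 0xFF where equal */
            unsigned mask = (unsigned)_mm_movemask_epi8(eq); /* 16 result bits   */
            n += (size_t)__builtin_popcount(mask);
        }
        for (; i < len; i++)   /* scalar tail */
            n += (buf[i] == ' ');
        return n;
    }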

Inside your single CPU core there is hardware for handling virtual -> physical address translation. This is a special cache called the translation lookaside buffer (TLB). Normally, chips other than recent Apple chips have a couple hundred entries of 1 4KiB page each in the TLB, and recent Apple chips have a couple hundred entries of 1 16KiB page each. Normal programs deal with a bit more than 1 meg of RAM today, and as a result they spend a huge portion of their execution time on TLB misses. You can fix this by using explicit huge pages on Linux. This feature nominally exists on Windows but is basically unusable for most programs because it requires the application to run as administrator and because the OS will never compact memory once it is fragmented (so the huge pages must be obtained at startup and never released, or they will disappear until you reboot). I have not tried it on Mac. As an example of a normal non-crazy program that is helped by larger pages, one person noted[2] that Linux builds 16% faster on 16K vs on 4K pages.
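A minimal, Linux-only sketch of one way to ask for larger pages (the 1 GiB size is illustrative; madvise(MADV_HUGEPAGE) is only a hint to the transparent-huge-page machinery, while the explicit route uses MAP_HUGETLB plus pages reserved beforehand via /proc/sys/vm/nr_hugepages):

    #include <sys/mman.h>
    #include <stddef.h>
    #include <stdio.h>

    int main(void) {
        size_t len = (size_t)1 << 30;   /* 1 GiB working set (illustrative) */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        /* Hint that this range should be backed by huge pages; the kernel
           may or may not honor it. */
        if (madvise(p, len, MADV_HUGEPAGE) != 0) perror("madvise");
        /* ... touch the memory and do the actual work here ... */
        munmap(p, len);
        return 0;
    }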

Inside your single CPU core is a small hierarchy of set-associative caches. With your help, it will have the data it needs in cache almost all the time! An obvious aspect of this is that when you need to work on some data repeatedly, if you have a choice, you should do it before you have worked on a bunch of other data and caused that earlier data to be evicted (that is, you can rearrange your work to avoid "capacity misses"). A less obvious aspect of this is that if you operate on data that is too-aligned, you will greatly reduce the effective size of your cache, because all the data you are using will go into the same tiny subset of your cache! An easy way to run into this issue is to repeatedly request slabs of memory from an allocator that returns pretty-aligned slabs of memory, and then use them all starting at the beginning. That this could cause problems at all seems relatively unknown, so I would guess lots and lots of software is losing 5-10% of its performance because of this sort of thing. Famous online good opinion haver Dan Luu wrote about this here[3]. The links included near the bottom of that post are also excellent resources for the topics you've asked about.
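A tiny sketch of the workaround (the 64-byte line size and the 16 distinct offsets are assumptions for illustration): rather than handing out slabs that all start on the same highly aligned boundary, stagger each slab's usable start by a different multiple of a cache line so they don't all land in the same sets.

    #include <stdint.h>
    #include <stdlib.h>

    /* Over-allocate slightly and shift the usable region by
       (index % 16) cache lines. A real allocator would also record the
       base pointer so the slab can be freed later. */
    void *alloc_staggered(size_t slab_bytes, unsigned slab_index) {
        const size_t line = 64;                       /* assumed line size */
        size_t offset = (size_t)(slab_index % 16) * line;
        uint8_t *base = malloc(slab_bytes + 16 * line);
        return base ? base + offset : NULL;
    }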

When coordinating between multiple CPU cores, as noted in TFA, it is helpful to avoid false sharing[4]. People who write trading systems have mostly found that it is helpful to avoid sharing *at all*, which is why they have work explicitly divided among cores and communicate over queues rather than dumping things into a concurrent hash map and hoping things work out. In general this is not a popular practice, and if you go online and post stuff like "Well, just don't allocate any memory after startup and don't pass any data between threads other than by using queues" you will generally lose imaginary internet points.
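For the false-sharing piece specifically, the usual C11 fix is just to give each writer its own cache line. A sketch (64 bytes is an assumption about the line size, and NUM_WRITERS is a made-up constant):

    #include <stdalign.h>
    #include <stdatomic.h>

    #define NUM_WRITERS 8   /* illustrative */

    /* alignas(64) pads and aligns each slot to its own cache line, so one
       core bumping its counter doesn't invalidate its neighbors' lines. */
    struct padded_counter {
        alignas(64) atomic_ulong value;
    };

    static struct padded_counter counters[NUM_WRITERS];

    void bump(unsigned writer_id) {
        atomic_fetch_add_explicit(&counters[writer_id].value, 1,
                                  memory_order_relaxed);
    }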

There are some incantations you may want to apply if you would like Linux to prioritize running your program, which are documented in the Red Hat Low Latency Performance Tuning guide[5] and Erik Rigtorp's web site[6].

Some other various resources are highload.fun[7], a web site where you can practice this sort of thing, a list of links associated with highload.fun[8], Sergey Slotin's excellent online book Algorithms for Modern Hardware[9], and Dendi Bakh's online course Perf Ninja[10] and blog easyperf[11].

> Off topic: What level of sophistication about modern CPUs is _good_ to have?

Probably none? These skills are basically unemployable as far as I can tell.

[0]: https://github.com/lemire/despacer

[1]: http://0x80.pl/articles/index.html

[2]: https://twitter.com/AtTheHackOfDawn/status/13338951151741870...

[3]: https://danluu.com/3c-conflict/

[4]: https://rigtorp.se/ringbuffer/

[5]: https://access.redhat.com/sites/default/files/attachments/20...

[6]: https://rigtorp.se/low-latency-guide/

[7]: https://highload.fun/

[8]: https://github.com/Highload-fun/platform/wiki

[9]: https://en.algorithmica.org/hpc/

[10]: https://github.com/dendibakh/perf-ninja

[11]: https://easyperf.net/notes/


All of this, especially the end.

Funnily enough, the part about not sharing cache feels a lot like Erlang with less scheduling...


Hey, thanks for sharing! That was a lot of effort to type all that up, with references to boot.


If you're just starting out, I suggest the introductory book by Patterson and Hennessy (not the Quantitative Approach, which is a tome) - https://www.amazon.in/Computer-Organization-Design-MIPS-Inte...

Another one is Computer Systems: A Programmer’s Perspective: http://csapp.cs.cmu.edu/


May I ask if you think price stability is of any benefit for a given currency at all? If not, why not?

My intention is to justify inflation from this end.


(false statement about a buggy M1)


M1RACLES was a security flaw that was hyped as a joke, because it was such a weak bug, and yet it was hyped to oblivion. It totally does not deserve even a mention on the M1 Wikipedia page.

The flaw means that two malicious processes, already on the system, can potentially communicate without the OS being aware - even though they already could through pipes, desktop icons, files, inter-process communication, screen grabbing each other, over the network, from a remote website, take your pick. Now, why would two malicious processes, already on a system and with a pre-agreed communication protocol, need a weird processor bug to communicate over? What would it buy them? Absolutely nothing. It's not supposed to happen - but it's basically useless when you are twice-pwned already.

The other flaw that was found was that Pointer Authentication (PAC) could be defeated on the M1 with the PACMAN attack. However, PAC is actually an Arm feature (added in ARMv8.3-A), and the weakness affects other implementers as well - the M1 just happens to be the most notable chip that ships it. Versions before ARMv8.3 didn't have PAC at all - so, even with it defeated, you aren't worse off than you were without PAC; it's just a "sad, we tried, but oh well" thing from Arm's perspective.


Notably, almost every other Arm A-series processor that supported PAC was also susceptible to the same attack [1]. The issue is just that actually buying such processors is nigh impossible (up until this year it was actually impossible; now you just need to do some research on a phone SoC), whereas anyone can buy an Apple silicon device from a million different places.

[1] https://developer.arm.com/documentation/ka005109/


Thank you! I am learning something new every day.

