This depends on the subject of the book, but there are enough books written pre-1970 (or some other year one is comfortable with, before the era of “book spinners”, AI, etc.) to last multiple lifetimes. I used to spend hours and hours in bookstores, but so many books these days (AI or otherwise) don’t seem that interesting. Many, many books could have been 3-page articles but are stretched into 150-page books.
So yeah, simply filtering by year published could be a start
I enjoy old fiction enough that sites like gutenberg.org have that covered, and I barely bother trying to find anything new that I want to read. That began long before the AI slop era, so no real change for me.
For non-fiction it is a bit trickier. I buy DRM-free from some niche publishers, but I have no idea which ones can be trusted not to start mixing AI slop into their books.
> corrupt politicians who think a politician going to jail is a dangerous precedent
There is a reason why administrations don't go after obvious, in-your-face crimes committed by previous administrations/politicians. They all hate each other, but they are also terrified that if they prosecute previous administrations (for legitimate crimes), they'll be the target when someone else is in power (even if they themselves didn't commit any crimes).
I suppose it might be easier to prevent misbehavior by the highest officials of the land through stricter scrutiny, laws, etc. than to prosecute them after the fact, but who watches the watchdogs? Who watches the judiciary? As an ordinary citizen, it is exhausting just to follow the news.
And if it is this bad in democracies, imagine what it is like in countries like Russia.
> even if they themselves didn't commit any crimes
Does that still even exist? The problem I see in politics is that everyone has their hand in the cookie jar to some degree.
You don't get into politics unless you already have your hand in there, or you're given the chance to prove yourself, where moving up the ranks involves helping someone else get their hand in there, with the unspoken assumption that they'll return the favor. And of course, once you're in and have your hand in there, why rock the boat and waste all that effort?
I don't know. I suppose there is behavior that is illegal and behavior that is unethical. I guess there aren't that many politicians that are ethical, but there may be some (hopefully?) who don't do downright illegal things? Maybe, I dunno.
The fact that collectively we all have such low expectations and such low opinions about our politicians/government says a lot about the sorry state of affairs :(
Yeah, given the number affected a class action might be the way to go. It will take time but lawyers will eat the costs in exchange for a significant cut of the final payout.
Sure but that is expensive and will likely just get a page taken down. Most nonprofits are barely surviving and not able to spend on lawyers just to make a point.
But press coverage and public outrage is free, and many nonprofits are good at one or both.
I looked up charities I support and it looks like GoFundMe is pulling in the logos of some orgs. Assuming those are trademarked, that would be a pretty simple takedown letter to send.
You’d potentially recover misdirected funding and catch the attention of an attorney general.
You’d also likely get access to a list of duped donors who would have standing to write fraud complaint letters to their respective state attorneys general.
In my opinion, Elixir and Phoenix will give you a better experience with BEAM and OTP, excellent tooling, a more mature ecosystem, and one of the best web frameworks ever to exist. I think Gleam is cool, but I can't see trading these benefits in for static typing.
To be fair, I can't think of anything I care less about than static typing, so please keep that in mind when entertaining my opinion.
I also preferred dynamic typing, until my complex Rails app grew to the point where I didn't dare to do any refactoring.
But I didn't change my opinion until I discovered ML type systems, which really allow for fearless refactoring. On occasion there's some battling to satisfy the type system, but even with that I'm more productive once the app grows in complexity.
I thought I'd share my experience, not trying to convince anyone ;-)
My path is a little different. I have used Haskell, and I'm looking to get into OTP.
My original plan was either Elixir or vanilla Erlang depending on which one suits my sensibilities better. Reading about Gleam recently has me super, super excited. That's definitely going to be my path now.
I don't know if Gleam is the best entry into the world of rich types that you find in a language like Haskell--I've yet to actually build something with it.
What I can tell you is that Haskell is a complete joy to use and it honestly ruins most other programming for me. So as a direction, I cannot recommend it enough, and I'm hoping, for my sake and yours, that Gleam offers a similarly stimulating sandbox.
Just a warning that it will take time to get used to "higher-kinded" types. It's an exercise in head scratching and frustration at first. The reward arrives when you start thinking in types yourself, know which ones to reach for, and can find the libraries you want by entering a signature on Hoogle.
I have an F# background, and I thought I had read that some constructs I've learned to appreciate are not available in Gleam (the one I can think of right now is currying, but I thought there were others).
The issue isn't that OTP isn't a priority for Gleam, but rather that it doesn't work with the static typing Gleam is implementing. This is why they've had to reimplement their own OTP functionality in gleam_otp. Even then, gleam_otp has some limitations, like being unable to support all of OTP's messages, named processes, etc. gleam_otp is also considered experimental at this point.
Having Erlang-style OTP support (for the most part) is very doable; I've written my own OTP layer instead of the pretty shoddy stuff Gleam ships with. It's not really that challenging a problem, and you can very easily get things like typed processes out of it (`Pid(message_type)`, i.e. we can only send `message_type` messages to this process).
This idea that static typing is such a massive issue for OTP style servers and messaging is a very persistent myth, to be honest; I've created thin layers on top of OTP for both `purerl` (PureScript compiled to Erlang) and Gleam that end up with both type-safe interfaces (we can only send the right messages to the processes) and are type-safe internally (we can only write the process in a type-safe way based on its state and message types).
I wholeheartedly agree with you that gleam_otp is janky. Still, actor message passing is only part of the picture. Here are some issues that make static typing difficult in OTP:
• OTP processes communicate via the actor model by sending messages of any type. Each actor is responsible for pattern-matching the incoming message and handling it (or not) based on its type. To implement static typing, you need to know at compile time what type of message an actor can receive, what type it will send back, and how to verify this at compile time.
• OTP's GenServer behaviour uses callbacks that can return various types, depending on runtime conditions. Static typing would require that you predefine all return types for all callbacks, handle type-safe state management, and provide compile-time guarantees when handling these myriad types.
• OTP supervisors manage child processes dynamically, which could be of any type. To implement static typing, you would need to know and define the types of all supervised processes, know how they are going to interact with each other, and implement type-safe restart strategies for each type.
These and other design roadblocks may be why Gleam chose to implement primitives, like statically typed actors, instead of GenServer, GenStage, GenEvent, and other specialized behaviours, full supervisor functionality, or Elixir's DynamicSupervisor, Registry, Agent, Task, etc.
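To make the second bullet concrete, here's a small sketch in plain Elixir (a hypothetical toy module, not taken from any real codebase) where a single `handle_call/3` clause legally returns several differently shaped control tuples depending on runtime state; a static type system would have to enumerate and check every one of these shapes:

```elixir
defmodule SessionServer do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts), do: {:ok, %{attempts: 0, max: Keyword.get(opts, :max, 3)}}

  @impl true
  def handle_call({:login, password}, _from, state) do
    cond do
      password == "secret" ->
        # Plain reply, keep going with reset state.
        {:reply, :ok, %{state | attempts: 0}}

      state.attempts + 1 >= state.max ->
        # A different shape entirely: reply and stop the process.
        {:stop, :too_many_attempts, {:error, :locked}, state}

      true ->
        # Yet another shape: reply and set an idle timeout.
        {:reply, {:error, :bad_password}, %{state | attempts: state.attempts + 1}, 5_000}
    end
  end
end
```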
OTP and BEAM are Erlang and Elixir's killer features, and have been battle-tested in some of the most demanding environments for decades. I can't see the logic in ditching them or cobbling together a lesser, unproven version of them to gain something as mundane as static typing.
EDIT: I completely missed the word "actor" as the second word in my second sentence, so I added it.
I suppose I was unclear. It is OTP-style `gen_server` processes that I'm talking about.
> OTP processes communicate via the actor model by sending messages of any type. Each actor is responsible for pattern-matching the incoming message and handling it (or not) based on its type. To implement static typing, you need to know at compile time what type of message an actor can receive, what type it will send back, and how to verify this at compile time.
This is trivial: your `start` function can simply take a function that says which type of message you can receive. Better yet, you split it up into `handle_cast` (which has a well-known set of valid return values; you type that as `incomingCastType -> gen_server.CastReturn`) and deal with the rest with interface functions, just as you would in normal Erlang usage (i.e. `get_user_preferences(user_preference_process_pid) -> UserPreferences` at the top level of the server).
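For anyone more comfortable reading Elixir than Gleam, here's a rough sketch of that interface-function pattern in plain Elixir (module and function names are made up for illustration); the typed Gleam version has the same shape, it just puts type annotations on these functions so the compiler can check the message types:

```elixir
defmodule UserPreferenceServer do
  use GenServer

  ## Interface functions: the only supported way to talk to the process,
  ## so the message shapes live in exactly one place.

  def start_link(prefs), do: GenServer.start_link(__MODULE__, prefs)

  def get_user_preferences(pid), do: GenServer.call(pid, :get_user_preferences)

  def set_language(pid, lang), do: GenServer.cast(pid, {:set_language, lang})

  ## Callbacks

  @impl true
  def init(prefs), do: {:ok, prefs}

  @impl true
  def handle_call(:get_user_preferences, _from, prefs), do: {:reply, prefs, prefs}

  @impl true
  def handle_cast({:set_language, lang}, prefs), do: {:noreply, Map.put(prefs, :language, lang)}
end
```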
Here is an example of a process I threw together having never used Gleam before. The underlying `gen_server` library is my own, as is the FFI code (Erlang code) that backs it. My point in posting this is mostly that all the parts of the server, i.e. the parts you define when you define a server, are type safe in the way that people claim is somehow hard:
It's not nearly as big of an issue as people make it out to be; most of the expected behaviors are exactly that: `behaviour`s, and they're not nearly as dynamic as people make them seem. Gleam itself maps custom types very cleanly to tagged tuples (`ThingHere("hello")` maps to `{thing_here, <<"hello">>}`, and so on) so there is no real big issue with mapping a lot of the known and useful return types and so on.
I read the code but I'm not sure I understood all of it (I'm familiar with Elixir, not with Gleam).
For normal matters I do believe that your approach works, but (start returns the pid of the server, right?) what is going to happen if something, probably a module written in Elixir or Erlang that wants to prove a point, sends a message of an unsupported type to that pid? I don't think the compiler can prevent that. It's going to crash at runtime, or it has to handle the unmatched type and return some kind of not-implemented error.
It's similar to static typing a JSON API, then receiving an odd message from the server or from the client, because the remote party cannot be controlled.
> [...] start returns the pid of the server, right?
Yes, `start` is the part you would stick in a supervision tree, essentially. We start the server so that it can be reached later with the interface functions.
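In Elixir terms (reusing the hypothetical UserPreferenceServer sketch from earlier), that just means the start function ends up as a child of a supervisor, and everything afterwards goes through the interface functions:

```elixir
# `use GenServer` gives UserPreferenceServer a default child_spec/1,
# so the supervisor knows to call start_link/1 with this argument.
children = [
  {UserPreferenceServer, %{language: "en"}}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

# From here on, callers only touch the interface functions.
```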
> [...] probably a module written in Elixir or Erlang that wants to prove a point, sends a message of an unsupported type to that pid? I don't think the compiler can prevent that. It's going to crash at runtime or have to handle the unmatched type and return a not implemented sort of error.
Yes, this is already the default behavior of a `gen_server` and is fine, IMO. As a general guideline I would advise against trying to fix errors caused by type-unsafe languages; there is no productive (i.e. long-term fruitful) way to fix a fundamentally unsafe interface (Erlang/Elixir code). The best recourse you have is to write as much code as you can in the safe one instead.
In Gleam code, Erlang is essentially a layer where you put the code that does the fundamentals, and then you use the foreign function interface (FFI) to tell Gleam that those functions can be called with such-and-such types, and it does the type checking. This means that once you travel into Erlang code, all bets are off. It's really no different from saying that a certain C function can call assembly code.
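To spell out the "default behavior of a `gen_server`" mentioned above, here's what it looks like in plain Elixir/Erlang terms (again using the hypothetical UserPreferenceServer sketch; nothing Gleam-specific):

```elixir
{:ok, pid} = UserPreferenceServer.start_link(%{})

# A bare `send` of an unknown message hits the default `handle_info/2`
# injected by `use GenServer`, which logs an "unexpected message" error
# and carries on with the same state.
send(pid, :prove_a_point)

# An unknown call, on the other hand, matches no `handle_call/3` clause,
# so the server crashes with a FunctionClauseError and the caller gets
# an exit (the "crash at runtime" case).
# GenServer.call(pid, :prove_a_point)
```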
As someone who comes from Haskell/ML-like languages, I decided to opt for Elixir and Phoenix for my latest project, simply because of the maturity of the web framework and LiveView. If I weren't building a web app, I'd probably have gone with Gleam instead.
Edit: I do miss static typing, but it's worth it to not have to reinvent the web framework wheels myself.
I love Gleam, but I would start with Elixir if you're interested in learning about how powerful the BEAM & OTP are.
There's not much documentation/resources around OTP in Gleam. When I was playing around with it I often found myself referring to the Elixir docs and then 'translating' that knowledge to Gleam's OTP implementation.
Gleam is still very new so this is totally understandable, and both are great languages so you'll likely have a lot of fun learning either of them.
Erlang is a much better language to learn if you're interested in learning about the BEAM and OTP, and the book "Programming Erlang"[0] is an excellent resource for learning it.
I've used Elixir since 2015 and in fact learned it first. I still think "Programming Erlang" is a much better book than any other for actually learning Erlang and BEAM/OTP principles. Erlang as a language is simpler, leaving more time and energy for learning the actual important bits about OTP.
For me, Gleam is a better fit for the reasons I mentioned, but Elixir/Phoenix is definitely more mature, so I guess it depends what you like and what you want out of it.
Gleam is cool, but honestly, for now the ecosystem is so much bigger in Elixir. And yes, you can use some libraries across the two and things like that, but then again you could also bring the parts that you need from Gleam into Elixir instead of vice versa. If you just want to learn a really cool language, I think Gleam is pretty cool. But if you want to learn a language that is more productive but still kind of cool, I would really start here (with Elixir) and then dip my toes into Gleam.
Or, better options: just do the job you were hired for and go home, or find that rare job (if possible) where engineering plays a bigger role than politics. It is not pleasant to play a guessing game trying to please some manager just because they're your boss.
I've never seen two people fight to the death over something meaningless the way some engineers do. Politics is the end product of multiple people with different views working together. Engineering doesn't save you from it. Engineers who think it does tend to cause more political turmoil in teams than anyone else.
If you think corporate politics is just a stupid game, you'll never be happy with what you accomplish and you'll be lucky to keep your job. Awareness and understanding, being able to tie effort to outcomes, positioning and sales: these are not guessing games.
> what possible harm could come to the people incidentally captured?
That is not the point. If someone doesn't want to be in someone else's photos or videos without consent, that is their choice. It is their face, after all. It doesn't matter why. They do not owe an explanation.
The polite thing to do would be to blur other people's faces (or remove people altogether) before adding our photos to the gajillion others already floating around on the internet.
Go to a restaurant with friends or family for dinner, and someone has to ask the waiter to take a photo of all our faces stuffed with food; we can't even have a meal without modeling for stupid photos.
Go to any event, and we have to take photos, we have to pose for photos. Back when meetup.com was a thing, at every event people were more interested in taking photos than in having meaningful conversations.
Go to any tourist spot: flashes everywhere, photos everywhere. I used to live in Manhattan; you can't walk 10 feet without some tourist group posing for photos. You have to grit your teeth and wait for their photo session to finish, or feel bad for interrupting it.
A couple of years ago (I forget exactly when) I noticed the self-checkout kiosks at Whole Foods had video cameras. I can't even buy half a pound of tomatoes without being on someone's camera/database. As if Whole Foods is some top-secret nuclear facility... What crime am I gonna commit there? Steal onions?
And on and on and on...
Who even looks at these stupid photos anyway? Do we really need to document what we ate for breakfast along with our faces, as if we ate some exotic fruit that is only available once every 25 years? It is the same shitty toast and crappy coffee.
Publishing someone’s photo online, without their consent, without another strong justification, just because they happen to be in view of one’s camera lens, feels wrong to me.
This makes perfect sense. I don't want to take anyone's photo (even people I know very well, like family and friends) without their consent. In the same way, I don't want anyone taking my photo either, and I most certainly don't want anyone posting it online, where it is going to stay forever.
If I'm honest, I haven't thought about it fully yet, but at the moment I felt comfortable enough to give lifetime access for 29.99. And if users don't want to spend that much and maybe only want to try it for a couple of weeks, they can get on the weekly plan with a free trial.
What you say is true for most companies/software, but YouTube can play a nasty game for a very long time before it withers into irrelevance (if it ever does). They have an enormous moat; one would need enormous resources to take on YouTube, and I don't think anyone has that kind of patience or resources to even attempt it. Like it or not, we are stuck with YT for a while.
I have learned so much from YouTube - I wish it was more open and friendly to its creators and users :(
In the meantime, all we can do is support smaller alternatives like https://nebula.tv/