giornogiovanna's comments | Hacker News

Hopefully this version's closer to the prompt.[0]

Anyway, the F# code, IIUC, lets you do exactly what you want it to do, although you'd have to replace the ifs with guards.

[0]: https://rosettacode.org/wiki/Amb#Monadic
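
For flavor, here's a minimal sketch of the same idea in OCaml (the list monad as a toy amb, with guard standing in for the ifs; my own illustration, not the Rosetta code):

    (* amb as the list monad: each choice point is a list of candidates,
       and guard prunes the branches that fail the condition. *)
    let amb xs = xs
    let ( let* ) xs f = List.concat_map f xs
    let guard b = if b then [ () ] else []

    (* Find pairs whose product is 8: *)
    let pairs =
      let* x = amb [ 1; 2; 4; 8 ] in
      let* y = amb [ 1; 2; 4; 8 ] in
      let* () = guard (x * y = 8) in
      [ (x, y) ]
    (* pairs = [(1, 8); (2, 4); (4, 2); (8, 1)] *)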


I know these are extremely common suggestions, but…

• 3Blue1Brown has great introductory series on linear algebra and calculus.

• Khan Academy covers pretty much all of US high school mathematics, and you can go through it at whatever pace you want.

• I can send you a few Australian high school textbooks if you want.


How does amb "in all its glory" handle an endless loop? Any solution I can think of seems prohibitively expensive.


Besides GC and the lightweight syntax (relative to Rust), what is it about OCaml that makes it so wonderful for implementing complicated algorithms?


Well, the big one is GC.

Here are a few others:

- both polymorphic variants and GADTs are extremely powerful mechanisms to let you keep track of invariants, in a manner that cannot be replicated readably in Rust so far (see the sketch after this list);

- OCaml's pattern-matching is a bit more flexible than Rust's;

- (thanks to the GC) OCaml's closures are much more flexible than Rust closures;

- I guess OCaml's blindingly fast exceptions can be useful for some algorithms, although I haven't really experienced the need for them myself.
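
To make the first point concrete, a toy sketch (my own example, not from any particular codebase): a GADT that indexes expressions by the type they evaluate to, so ill-typed trees are simply unrepresentable.

    type _ expr =
      | Int  : int -> int expr
      | Bool : bool -> bool expr
      | Add  : int expr * int expr -> int expr
      | If   : bool expr * 'a expr * 'a expr -> 'a expr

    (* The [type a.] annotation gives eval the polymorphic recursion
       GADTs need; each branch is checked at its own type. *)
    let rec eval : type a. a expr -> a = function
      | Int n -> n
      | Bool b -> b
      | Add (x, y) -> eval x + eval y
      | If (c, t, e) -> if eval c then eval t else eval e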

Again, that's not to say that Rust doesn't have great stuff for it (affine types 4eva!) or that OCaml doesn't have its own limitations – Rust is currently my main language and I do not regret it.

But there are some algorithms that I really don't want to write in Rust. Recently, for instance, I stumbled upon this: http://weblog.jamisbuck.org/2010/12/29/maze-generation-eller... . Trivial algorithm, trivial to write in OCaml, feasible but much less straightforward in Rust.


Exactly that: lock-free algorithms are especially hard to implement correctly without tracing GC support.


Why?


Here is an overview; read the comments too.

https://concurrencyfreaks.blogspot.com/2017/08/why-is-memory...
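
The short version, as I understand it: a lock-free structure has to know when a node can safely be freed while other threads may still hold references to it, and a tracing GC answers that question for free. A toy Treiber stack in OCaml (using the stdlib Atomic module; my own illustration, not from the linked post):

    type 'a node = Nil | Cons of 'a * 'a node
    type 'a stack = { head : 'a node Atomic.t }

    let create () = { head = Atomic.make Nil }

    let rec push s x =
      let old = Atomic.get s.head in
      (* Retry on contention. The GC keeps [old] alive as long as any
         concurrent reader can see it, so there is no ABA/reclamation
         problem to solve by hand. *)
      if not (Atomic.compare_and_set s.head old (Cons (x, old))) then push s x

    let rec pop s =
      match Atomic.get s.head with
      | Nil -> None
      | Cons (x, rest) as old ->
        if Atomic.compare_and_set s.head old rest then Some x else pop s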


Honestly, if you care about longevity, Rust simply isn't the language yet. I'd at least wait until there's a specification and a GCC frontend.

Anyway, C does have some benefits as a programming language.

• "Legacy" projects like Linux and CPython will keep it alive for decades to come.

• It's extremely portable. Rust is less portable, and even when Rust supports a niche platform, it's relatively clunky to get things started.

• C libraries are always very easy to call from other languages. Rust isn't quite there yet, and translating things like traits can be a bit of a pain.


I thought this was going to be about bounding the time complexity of your algorithms; I don't know if any language does that yet. I suppose this title has a lot of interpretations.


The early versions of programmable vertex and pixel shaders did that by enforcing bounded loops and not having any function calls.

It turns out that that's far too restrictive as a programming model, so while modern shading languages still forbid recursion, they are otherwise unbounded.

It would be interesting to see whether there's a more useful middle ground somewhere, analogous to what Rust does with lifetimes.



My understanding is that automatically determining time complexity is impossible in the general case due to the halting problem.

And getting a compiler to truly understand the time complexities of your data structures would involve either extremely difficult theorem proving or simply asserting the bounds and forcing the compiler to trust them.

I do think it would still be interesting if a language had support for determining time complexity. It seems like it's often possible even if it isn't in the general case.


Another issue is that algorithmic complexity only really 'composes' in a trivial and uninteresting way: Ok, you have a for loop over N items, multiply the complexity of the loop body by N.

Well, no, not quite... it may be that one particular iteration of the loop makes all the subsequent iterations completely trivial, so O(1), and further that such an iteration is guaranteed to be reached in log(M) time, etc. You see this type of thing a lot in graph algorithms that look like they might be O(n^3) or whatever, but where leaving markers on nodes avoids a lot of repeated work (by skipping marked nodes), so they end up as O(n + k) or so.
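
A toy illustration of the marker trick (assuming an adjacency-list graph given as a function):

    (* Naive reachability can re-explore shared subgraphs over and over;
       marking nodes as visited caps the total work at O(V + E). *)
    let reachable (graph : int -> int list) (start : int) =
      let visited = Hashtbl.create 16 in
      let rec dfs v =
        if not (Hashtbl.mem visited v) then begin
          Hashtbl.add visited v ();
          List.iter dfs (graph v)
        end
      in
      dfs start;
      visited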

Upper bounds for worst case complexity are relatively easy... just assume the worst, but the interesting thing is proving TIGHT bounds for worst case complexity.


No one suggested that surveillance is as bad as cancer. All that was said is that blocking Facebook dating has a similar effect on surveillance (in kind, not in degree) as banning tobacco has on cancer.


Is that even true? Stopping smoking reduces the risk of lung cancer by more than 90%; blocking Facebook doesn't change the fact that the Chinese and American spy agencies are still doing everything they can to record all your emails and personal communication.

Degree matters, because censorship and restricting people's choices cause harm of their own, and if the harm avoided by a ban is of a lower degree than the harm the ban causes, the ban ends up as a net negative.


If it weren't for companies like Facebook requesting and using that information to begin with, it would be a hell of a lot more difficult for American and Chinese agencies to get your data.


Nobody is going to suffer any noticeable or concrete harm because Facebook launched a dating site, of all things. The reaction here is wildly out of proportion to the scale of the problem, assuming there's any problem at all (everyone with a Facebook profile made it themselves and wanted to be there, so arguably there isn't).


> It's important to realise that variables in mathematics and variables in programming have a lot in common, but they are not the same thing!

I think the stated "differences" are actually similarities with programming languages.

> In mathematics, sometimes a "variable" is a place-holder for a value one may choose arbitrarily from a collection, and which is then used in some process,

That pretty much perfectly describes a function parameter: for a formal correspondence, universal quantification corresponds to dependent function types.
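
A toy rendering in OCaml (polymorphism standing in for the quantifier):

    (* "For all types 'a and any x : 'a, ..." read as a program:
       the explicitly polymorphic type is the universally quantified
       statement, and the parameter x is the arbitrarily chosen value. *)
    let id : 'a. 'a -> 'a = fun x -> x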

> In fact, sometimes we find that there is no value satisfying the requirements we have placed on the variable, so in a sense it doesn't exist at all!

That's what happens when your variable's type turns out to be uninhabited, like an empty enum.
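
In OCaml terms, a minimal sketch:

    type empty = |   (* no constructors: the uninhabited type *)

    (* A "variable" of type [empty] can never actually be supplied,
       so any requirement placed on it holds vacuously; [.] is
       OCaml's refutation case. *)
    let absurd (x : empty) : 'a = match x with _ -> .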

> It seems to me, speculating idly and with no research to back me up, that recursion has similar conceptual challenges as Mathematical Induction and Proof By Contradiction.

The author used proof by contradiction as part of a proof that the well-ordering principle implies the principle of mathematical induction, but (1) that links recursion and proof by contradiction exactly as much as it links recursion and modus ponens, and (2) mathematical induction is usually taken to be the more fundamental rule anyway, so this bit doesn't make much sense to me.

> In mathematics a function doesn't even need to have a rule to tell you what it is,

It depends on your definition of a function. ;)

Related: the author lists rules such as "A finite path from an instance back to one of the simplest", which are formalized in the concept of a well-founded relation.
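
In code, that rule is just structural recursion, e.g.:

    (* Each call recurses on a strictly shorter list, so every chain of
       calls is a finite path back to the simplest instance, []. That is
       the well-foundedness that guarantees termination. *)
    let rec length = function
      | [] -> 0
      | _ :: tl -> 1 + length tl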


> But for numerical work 1-based indexing is usually easier to work with.

That's not really true in my experience, since all the nice properties of zero-based indexing transfer perfectly over to mathematics.
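
The classic illustration (half-open intervals), sketched in OCaml with made-up values:

    let a = [| 10; 20; 30; 40; 50 |]
    let n = Array.length a
    let k = 2

    (* With zero-based indices and half-open ranges [0, k) and [k, n),
       splitting and recombining needs no +1/-1 adjustments: *)
    let left  = Array.sub a 0 k          (* indices 0 .. k-1 *)
    let right = Array.sub a k (n - k)    (* indices k .. n-1 *)
    let whole = Array.append left right  (* the same contents as a *)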

But you're right, this isn't anything to abandon an otherwise wonderful language over.


> Sounds more like a problem with the team, rather than a problem with the language.

That's the same reply that people have to "C++ has too large a surface area". Granted, Swift isn't quite at C++ levels of complexity yet, but ideally, I shouldn't have to cordon off sections of the language I'm using just so my team can stay sane.

