Hacker News | _0w8t's comments

Yet Austral returns optional types from any memory allocation function rather than calling abort.

And a stack overflow is a memory allocation failure, so why the discrepancy? For a language focused on correctness this is an unfortunate omission.

On the other hand, none of the popular or semi-popular systems languages allows explicit control over stack consumption. Zig has some ideas, but I am not sure whether those will be implemented.


There is a discrepancy because things the programmer can prevent are handled differently from things the programmer cannot prevent.

The programmer can always statically ensure that the program doesn't experience a trapped overflow, and that the stack size is not exceeded. All the information to do that is available when the programmer runs the compiler.

But there is no way to prevent a memory allocation failure when using `calloc`, since the information required to do that is not available when the programmer writes the code. In fact, when running on POSIX systems it's not even possible to check _in advance_ at runtime whether a memory allocation will succeed.

This is why Austral's allocateBuffer(count: Index): Address[T] returns an Address[T], which you have to explicitly null-check at run-time (and the type system ensures that you can't forget to do this).
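For illustration, here is a rough C++ analogue of that discipline (hypothetical helper name, not Austral code): the allocation returns a nullable pointer instead of aborting, and the caller has to branch on it before touching the buffer.

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    template <typename T>
    T* allocate_buffer(std::size_t count) {
        // calloc reports failure by returning a null pointer rather than aborting.
        return static_cast<T*>(std::calloc(count, sizeof(T)));
    }

    int main() {
        if (int* buf = allocate_buffer<int>(1024)) {
            buf[0] = 42;   // only reachable after the null check
            std::free(buf);
        } else {
            std::puts("allocation failed, recover or propagate the error here");
        }
    }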

Of course, on some non-POSIX systems such as seL4, the programmer can know at compile-time that memory allocations (untypedRetype) will not fail. When you use Austral on such systems, you don't have to use calloc/allocateBuffer at all.


Statically ensuring that a stack overflow cannot happen requires that recursion is rejected by the type system. Austral does not do that, so stack overflow is a dynamic condition similar to a memory allocation failure.


Maximum stack usage can be calculated even in the presence of recursion. Tail calls can be handled as branches instead of nested call frames, and non-tail calls are tolerable too if you have (or can infer) some measure that bounds the maximum call stack depth.

It's a pain, and the type system rejecting any recursion is certainly simpler, but that's not a strict requirement.
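As a minimal sketch of the tail-call point (my own example, not from the parent): the recursive form below grows the stack with n unless the compiler turns the tail call into a branch, while the loop form has a statically known, constant stack bound.

    // Tail-recursive form: each level nests a call frame unless the call becomes a branch.
    unsigned long sum_rec(unsigned n, unsigned long acc) {
        return n == 0 ? acc : sum_rec(n - 1, acc + n);
    }

    // The tail call handled as a branch: same computation, one frame, fixed stack usage.
    unsigned long sum_loop(unsigned n) {
        unsigned long acc = 0;
        for (; n != 0; --n)
            acc += n;
        return acc;
    }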


Inferring the number of loop iterations or recursion levels is in practice impossible when the number depends on user input.

For a systems language I would like to see function calls treated as fallible when the compiler cannot infer a bound on the stack size, or when that static bound exceeds some limit.


One never writes such an expression in serious code. Even with move semantics and lazy-evaluation proxies it is hard to avoid unnecessary copies. Explicit temporaries make code more readable and performant:

    auto t = minus(vec1, vec2);
    mul_by(t, 0.5/0.3);
    add(t, vec3);
    mul_by(t, 0.3);
    v4 = std::move(t);


I think there may be a misunderstanding here regarding the use case. If the vectors are large and allocated on the heap/on an accelerator, then yes, writing out explicit temporaries may be faster. Of course, this does not preclude operator overloading at all: You could write the same code as auto t = vec1 - vec2; t *= 0.5/0.3; t += vec3; t *= 0.3;

However, if the operands are small (e.g. 2/3/4 element vectors are very common), then "unnecessary copies" or move semantics don't come into play at all. These are value types, and the compiler boils them down to the same assembly as the code you posted above. Many modern C++ codebases in scientific computing, rendering, or the game industry make use of vector classes with operator overloading, with no performance drawbacks whatsoever; and the code is much more readable, as it matches the actual mathematical notation.
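For concreteness, a minimal sketch of the kind of small value type meant here (the names are made up): the operands live in registers or on the stack, so the overloaded-operator expression and an explicit-temporaries version optimize to the same code.

    struct Vec3 { float x, y, z; };

    inline Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    inline Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    inline Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    // Reads like the mathematical expression; no heap allocations or hidden copies involved.
    Vec3 combine(Vec3 v1, Vec3 v2, Vec3 v3) {
        return ((v1 - v2) * (0.5f / 0.3f) + v3) * 0.3f;
    }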


Thank you for putting this so eloquently!

> Many modern C++ codebases in scientific computing, rendering, or the game industry make use of vector classes with operator overloading, with no performance drawbacks whatsoever

I guess these people are all not writing "serious code" :-p


TIL Box2D must not be serious code because it doesn't use copious amounts of explicit temporaries[0].

And just for the record, I'm very glad Erin Catto decided to use operator overloading in his code. It made it much easier for me to read and understand what the code was doing as opposed to it being overly verbose and noisy.

[0]: https://github.com/erincatto/box2d/blob/main/src/collision/b...


> One never writes such expression in a serious code.

Oh please, because you know exactly which kind of code I write? I'm pretty sure that with glm::vec3 the compiler can optimize this just fine. Also, "vec" could really be anything, it is just a placeholder.

That being said, if you need to break up your statements, you can do so with operators:

    auto t = vec1 - vec2;
    t *= 0.5/0.3;
    t += vec3;
    t *= 0.3;
Personally, I find this much more readable. But hey, apparently there are people who really prefer free functions. I accept that.


The notion of an uncountable set assumes certain axioms and classical logic. If one instead assumes the rules of constructive mathematics, then one cannot construct an uncountable set. Simplifying: only computable numbers can exist.


Note that there are multiple versions[1] of “constructive mathematics”, so for example what Bishop’s constructive real analysis book considers constructive is not what Markov’s “constructive = computable” tradition does.

You are also going to encounter other funny things. One of De Morgan’s laws failing could seem troubling but ultimately theoretical—how, then, about the distinction between countable and constructively countable[3]? The set of all programs is countable; the set of all terminating programs is a subset of it, but can’t constructively be proven to itself be countable—after all, that would imply a solution to the halting problem! (Neither can it constructively be proven not to be countable, as it is countable in a consistent extension, namely classical mathematics.) Thus you get the new and exciting notion of subcountable sets. And that’s one of the mildest things that you’re going to encounter in your shiny new theory of cardinals.

(Anybody who can explain to me what the hell constructive Hahn—Banach[4] is about gets a beverage of their choice if we are ever in the same city. Without HB, you cannot do any functional analysis worth a damn, and without that you can’t really talk about Green’s functions / fundamental solutions to linear PDEs, which are in turn a basic tool for any electrodynamics or quantum mechanics worth speaking about. Unless you want to forgo proof or the natural sciences, you need it, is my point.)

None of this is to say that I think intuitionistic logic is useless, mind you. It’s interesting and useful[5]! But so far it seems that you need to learn at least some logic (as a piece of mathematics and not mere terminology, so including scary words like “metalogic”) before you can really understand how it comes together.

[1] It seems categorical logic[2] says useful things about what the One True Definition should be? But I don’t understand it well enough to tell, and in any case the constructivist tradition predates not only it but category theory in general.

[2] https://mikeshulman.github.io/catlog/catlog.pdf

[3] https://ncatlab.org/nlab/show/constructive+mathematics

[4] https://www.cse.chalmers.se/~coquand/hahn1.pdf

[5] https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016...


Markov’s variety of constructive mathematics is just what Bishop did plus the assumption that the Church-Markov-Turing thesis holds. One does not need to assume that to recover any result of classical mathematics applicable to the real world. Also, all those weird results of Markov’s approach that apparently “contradict” classical mathematics are consequences of the Church thesis; without it one cannot prove them. So Bishop’s approach is always compatible with classical mathematics.

As for Hahn-Banach, I can only cite the book “Techniques of Constructive Analysis” by Douglas S. Bridges:

For example, our proof of the Hahn-Banach theorem (Theorem 5.3.3) is, as it stands, a valid algorithmic proof of the classical Hahn-Banach theorem. Moreover (and this is one advantage of a constructive proof in general) our proof embodies an algorithm for the construction of the functional whose existence is stated in the theorem. This algorithm can be extracted from the proof, and, as an undeserved bonus, the proof itself demonstrates that the algorithm is correct, or, in computer science parlance, "meets its specifications". …

The Hahn-Banach theorem enables us to extend a normed linear functional, with an arbitrarily small increase in norm, from a subspace of a normed space to the entire space. This fundamental result has numerous applications throughout functional analysis.

In the constructive context we deal only with the extension of linear functionals on subspaces of a separable normed space. The standard classical proofs extending the theorem to nonseparable normed spaces depend on Zorn's lemma and are therefore nonconstructive. … The classical Hahn-Banach theorem says that we can extend a bounded linear functional v from Y to a functional u on the whole space X with exact preservation of norm. In general, as Exercise 5 shows, we cannot do this constructively. But, as we shall see, if we impose extra conditions on the norm of X, then we can make the extension norm-preserving.


> the distinction about countable and constructively countable? The set of all programs is countable; the set of all terminating programs is a subset of it, but can’t constructively be proven to itself be countable—after all, that would imply a solution to the halting problem!

I thought this had to do with the difference between being computable and computably enumerable.


Also, spot prices reflect storage availability. The storage will be filled by October according to the existing contracts, i.e. there is not much room to receive more gas if it has to be brought to Europe in the next few months.

A really cold winter without Russian gas may bring spot prices back to record levels, as the supply lines will then be the bottleneck.

Long term Europe needs bigger storage to smooth price volatility.


The E4X spec was just bad. There were too many corner cases with very unintuitive behavior or just plain spec bugs. I wish it had been E4H, focusing on the needs of HTML with none of the XML namespace nonsense. It could have had a chance then.


Qt is used for embedded systems that may run for 10 or more years. If a device requires certification, one does not want to upgrade to a newer Qt, as that may require certifying the software again, while for security fixes the requirements are much simpler.


Spectre and other hardware bugs are not going to be fixed for code running in the same address space. The moment the DB allows untrusted WebAssembly to run, that code can read anything the DB server keeps in memory.


I don't see how Spectre or hardware address spaces are relevant here. WASM code cannot access memory that is not explicitly provided to the sandbox, by design. Each load/store opcode is bounds-checked by the WASM engine to ensure that. Wouldn't be much of a sandbox, otherwise.
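As a conceptual sketch of what such an engine does (simplified and hypothetical, not any particular runtime): every linear-memory access is checked against the sandbox's own buffer before it is performed, so an architecturally executed load never reaches host memory.

    #include <cstdint>
    #include <cstring>
    #include <stdexcept>
    #include <vector>

    struct LinearMemory {
        std::vector<uint8_t> bytes;  // the only memory the guest module can address

        // i32.load: trap on any out-of-bounds access instead of touching host memory.
        uint32_t load32(uint32_t addr) const {
            if (addr > bytes.size() || bytes.size() - addr < 4)
                throw std::out_of_range("wasm trap: out-of-bounds memory access");
            uint32_t v;
            std::memcpy(&v, bytes.data() + addr, sizeof v);
            return v;
        }
    };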


You can still have Spectre attacks; check out this blog post:

https://blog.cloudflare.com/mitigating-spectre-and-other-sec...


It all depends on the price details. In Norway one needs to pay about 20 Euro/month for a phone subscription with an amount of data that allows one, for example, to watch YouTube or do video calls occasionally outside WiFi. Yet for 35 Euro/month one can get a mobile subscription with 1 TB of traffic at 10 MBit/s. That is enough to watch movies and do many kinds of remote work.

Yet fiber costs at least 50 Euro/month, so it only makes sense to get broadband if it is shared by 4 people.


Sounds like it makes sense to get fiber as soon as 2+ people are sharing it, since it's faster and less than 70 EUR/mo.


You still want a cheap mobile plan, so it's not technically cheaper until you have 4 people.
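To spell out the arithmetic with the figures quoted upthread (assuming each fiber user still keeps the €20 basic mobile plan, versus the €35/1 TB plan per person):

    2 people: 50/2 + 20 = €45.00 per person
    3 people: 50/3 + 20 ≈ €36.67 per person
    4 people: 50/4 + 20 = €32.50 per person

so the shared fiber only drops below the €35 standalone plan at four people.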

But that's comparing fiber to 10Mbps cellular, and even with 2 people the fiber is dramatically faster and only €20 more expensive. If you can live with 10Mbps then there is presumably some broadband option at no worse than that speed for less than €30 and then you're already ahead with 2 people.

And most people aren't going to be happy with speeds that low, but then you'd be paying more for each cellular plan.


In Norway there is no realistic option to have fiber cheaper than 45-50 Euro per month.

Based on personal experience, 10 MBit/s is enough for video, including full HD. In fact it worked better than a 100 MBit/s connection that was shared with several users while somebody was downloading big game files.


In Julia, with multiple dispatch, there is no problem with adding more things to the global namespace. But that does not work for Python, so its developers must be very conservative with global names.


Naming intermediate steps requires some non-trivial effort. It can even distract from the main task of getting results.

In programming, code will be read multiple times, and good names help future readers. But in data science the calculation will most likely not be reused, so the effort of naming things would be a waste of time.

