
One thing that I miss coming from Python to Julia is typing-as-documentation. In Python I often write type annotations that are much more restrictive than they need to be. They work as checkable documentation of how a function is expected to be used (and under what circumstances I guarantee it will work), without restricting how it can be used.


I think this is a real conflict. In my opinion (not just mine), the only reason to write type constraints on a method definition in Julia is to control dispatch. Adding types to method arguments for the purposes of documentation is counterproductive to generic programming.
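A toy illustration of the cost (the function name is made up): the constrained signature rules out inputs the generic one handles for free.

  # Over-constrained: reads like documentation, but only accepts Vector{Float64}
  colmean(v::Vector{Float64}) = sum(v) / length(v)
  colmean([1.0, 2.0, 3.0])  # 2.0
  colmean(1:3)              # MethodError, even though sum and length work fine here

  # Generic: also works for ranges, views, integer vectors, ...
  colmean(v) = sum(v) / length(v)
  colmean(1:3)              # 2.0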


I used to think this was true (as a developer of a lot of generic Julia code and small data analysis applications).

But now as a developer of larger amounts of "application style" code, I'm not so sure. In an application, you've got control of the whole stack of libraries and a fairly precise knowledge of which types will flow through the more "business logic" parts of the system. Moreover, you'd really like static type checking to discover bugs early and this is starting to be possible with the likes of the amazing JET.jl. However, in the presence of a lot of duck typing and dynamic dispatch I expect static type inference to fail in important cases.

Static type checking brings so much value for larger scale application work that I'm expecting precise type constraints to become popular for this kind of non-generic code as the tooling matures.
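For a taste of what that looks like today (going from memory of JET's API, so treat the exact call as approximate; the function is a made-up stand-in):

  using JET

  # "Business logic" referencing a global that doesn't exist anywhere:
  total(price, qty) = price * qty * tax_rate
  @report_call total(9.99, 3)   # JET reports `tax_rate` as not defined

The report is only as good as inference, which is exactly why precise argument types help for this kind of code.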


You're not wrong. I guess what I'd like is the ability to apply (up to) two type annotations, with the second one a subtype of the first, and use the first for dispatch and the second for documentation/testing/static analysis...

... which actually seems like it might be doable with some macros? Those are beyond my ken right now, but the goal would be

  @doubly_typed f(x::Number|Integer) = x
  f(6.0)            # Same as g(x::Number) = x; g(6.0)
  @strictify f(6.0) # Same as g(x::Integer) = x; g(6.0)
Then you would use @strictify when running tests to ensure that the stricter types in your codebase are all compatible. But you'd still need to figure out what to do about return types and the help command...
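In the meantime, a macro-free approximation (just a sketch, names are mine): keep the loose signature for dispatch and add a strict wrapper that only the test suite calls.

  f(x::Number) = x             # loose type: what dispatch and users see
  strict_f(x::Integer) = f(x)  # strict type: checkable documentation

  f(6.0)          # fine, Float64 <: Number
  strict_f(6)     # fine, Int <: Integer
  strict_f(6.0)   # MethodError in the tests, flagging the over-loose call

It doesn't solve return types or the help entry either, but it gives the same loose/strict split without any macro machinery.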


As I understand it the issue is

* Currently the US welfare agencies will provide ~any FDA-approved drug for someone with the illness for which the drug was approved

* In this case, they probably won't - in the worst case scenario it would double their spending on prescription drugs and seems unlikely to help anyone

* So this is going to sever the link between FDA approval and Medicaid spending.

* Now you have an awkward situation where agencies that are not really equipped to make judgements on drug effectiveness, and which are not insulated from politics, have to make those decisions.

This could all work out for the best in the long term but may also be very awkward in the short term.


GPL is also a possible alternative. Big companies pay you back by contributing to the project, insofar as it is worth it to them to maintain a fork and add features for their own use.


Sadly, GPL doesn't give you that. It only gives users the modified source code. No history, no time of fork. And nothing to the developer.

I think it might be time to upgrade the GPL for the age of the modern internet, and have licenses requiring that modifications are actually PR-ed (or sent by mail or whatever) to the author.

Hell, I'd even think there could be licenses where you are required to use mainline for anything remotely resembling production: you are not allowed to fork, only to use as-is and to contribute. This one would definitely not be considered FLOSS, but some components, like the Linux kernel, really would benefit from not having an infinite number of forks.


This unfortunately worked much better before it became feasible to hide everything behind a server, with no releases to speak of at all.


Is this not the purpose of AGPL?


The purpose of the AGPL is to have a bunch of people harass you for having the temerity not to let them run their business on your code for free.


But isn’t it cathartic when you tell those people to fuck off?


> They don’t run corrections based on the number of simulations they run, they don’t take into account other variables, etc

I think this looks like a bigger problem specifically because you are in AutoML.

Suppose you are training a GAN. There's notoriously a certain amount of luck involved in traditional GAN training, because you need the adversary and the generator to balance each other just right. So people try many times until they succeed. Probably they were not even recording each attempt, so they do not report how many times they had to run before getting good results.
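(Recording the attempts would be cheap, by the way - a minimal sketch, with train_gan as a placeholder for the real run:)

  using Random
  train_gan() = rand()   # placeholder for the real, expensive training run
  for seed in 1:20
      Random.seed!(seed)
      score = train_gan()
      @info "GAN attempt" seed score   # log every run, not just the lucky one
  end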

From an AutoML point of view, this is BS work - the training procedure cannot be automated, and (apart from using the actual seeds) the work cannot be reproduced.

But from the point of view of everyone else, maybe it is fine. They get a generator model at the end, it works, other people can run it.


>But from the point of view of everyone else, maybe it is fine

I think from a practical perspective, it is fine. You want results and you have a black box algorithm that produces them, fine.

From an academic perspective, AI research is a mess. The reason you try something is not a logical theory, but a "hunch" or replicating a similar algorithm applied in a parallel area. If it does not work, you change some parameters and run it more times. Still not working? Then maybe you extend the network to include some more inputs and hope for better results.

I did my thesis in machine learning and was very disappointed with the state of the field.


I don't think there's necessarily a problem with trying things on a hunch; some of the best results in science have been due to a hunch or even an accident. The problem comes from trying a dozen hunches and only writing up one, or, like you say, completely cherry-picking hyperparameters.
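Back-of-the-envelope, assuming independent tries at the usual 5% false-positive bar:

  # Chance that at least one of 20 null experiments looks like a "win":
  1 - 0.95^20   # ≈ 0.64

So a dozen-plus unreported hunches make a spurious success more likely than not.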


> "At graduate level, you should cultivate [peers and a concentration] such that your intellectual correspondence is publishable." ... I did not take it at the time as immediately translatable into a [simply translated?] phrase I have heard since: "friendship corruption."

This seems misguided, and I certainly hope that this 'friendship corruption' concept never catches on. There are great papers that started as letters and were later completed by the sender, recipient or both. No one should feel ashamed about that, and no one should feel ashamed of developing friendships with their colleagues.


Well said. My comment was phrased in response to its parent comment looking askance at "amassing enormous citations." There is love of truth and not all is corruption.


I'm not sure this answers the question. The Wikipedia page is about the entropy of a probability distribution. But the information speed limit is supposed to apply even if everything is totally deterministic.

If I write a single, 100% certain message and put it in a spaceship, it still cannot go faster than light, even though there is no information transfer (the entropy of my message, a constant random variable, is 0).

(I'm not saying you are wrong, I am asking to be corrected)


The spaceship itself represents a huge chunk of information (such as the atomic arrangement of the metal atoms making up the bolts).


I agree that there is "information" in the colloquial sense there, or even in the Kolmogorov sense. I don't understand how there is information in the entropy sense, because I do not see a random variable anywhere in this story.


I believe we are talking about Shannon's direct analogy between information and entropy here. The low probability of the atoms of the spaceship being arranged as they are - a specific design of a spaceship - out of all their possible arrangements, is a state of low entropy and high information content.

https://en.wikipedia.org/wiki/Entropy_(information_theory)
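Concretely (a minimal sketch of the formula from that page):

  # Shannon entropy in bits: H(p) = -sum over i of p_i * log2(p_i)
  H(p) = -sum(x -> x > 0 ? x * log2(x) : 0.0, p)
  H([0.5, 0.5])      # 1.0 bit   (fair coin)
  H(fill(1/8, 8))    # 3.0 bits  (uniform over 8 equally likely arrangements)
  H([1.0])           # 0.0 bits  (the parent's 100% certain message)

The spaceship analogy treats its particular atomic arrangement as one outcome out of astronomically many, hence the high information content.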


That was what I was talking about, yes.


Entropy isn't necessarily random. I think a better description of entropy might be unimportant states. For the rocket, we might care about the total mass and we might even care about the temperature of those bolts. Those parameters represent information.

But there's even more information in the rocket, specifically the momentum*position of each of the atoms within the bolt. That quantity for each atom is measurable/knowable and represents information.


It's kind of the anti-nethack. Explicitly not simulation-y, big effort to be playable without spoilers, character building is de-emphasized.



Text link: http://www.lightspeedmagazine.com/wp-content/uploads/2014/06...

(Haven't had a chance to read it yet)


> you also don’t have the right to tell a community to change its character or compensation.

Sometimes you do though. There are real cases where the acid rain produced by City A falls entirely on City B. Surely in that case City B should be able to tell City A to change its character? Local government alone will never solve a problem like this.

The people who pay the price for SF's character are not the people who live in SF, but people who commute a long way from outside the city limits, or who don't live in the Bay Area at all but would if they could. The SF government should care less about them than about current SF citizens, but their interests aren't zero. Which is why you need a higher-level polity like the CA government to overrule the local one.


It's a bit of a chicken-or-the-egg problem. If more people move to SF, then more brunch places will open and the city will be able to afford to run more buses.

Maybe SF could credibly commit to adding more density in the near future, but not immediately? That way speculators open brunch places before the customers arise, and there isn't a brunch shortage?

