I run a company that provides Minecraft server hosting for businesses (e.g. after-school programs, summer camps, Code Ninjas, esports leagues, etc.) and produces fun and educational events using Minecraft (e.g. 10-15 elementary school kids in an event space at a library learning physics by building roller coasters in the game).
We've switched entirely over to Paper for everything because it works so much better than vanilla, and it also lets us put it behind a Velocity proxy (a Minecraft Java Edition application-layer proxy developed by the same group that develops Paper) for better scalability, more secure infrastructure, and some cool features like letting any version of the Java Edition client join the same server (mad props to the ViaVersion & ViaBackwards plugin teams that make this possible!). This is impossible to do with vanilla. We do all of our own content development, creating the activities the kids do during the events, and the plugin ecosystem that someone else mentioned is hugely helpful for this. I especially want to call out how awesome the Geyser and Floodgate plugins are: they make it possible for Java and Bedrock clients to play together in the same world, which makes our customers' lives so much easier.
We're hiring part-time/contract developers, event hosts, and technical support personnel. If this sounds interesting, please reach out. My contact info is in my profile.
Descriptively, yes, the shape of the language determines how you can structure an interactive compiler. There are at least three wildly different ways to do that.
Which one can work depends on the language in question. Java, Rust, and C++ lead to different answers, and that’s not because of the “interesting” differences like borrow checker vs. GC, but rather because of the “boring” differences in name resolution and module systems.
Prescriptively, whether we _should_ do this (or, rather, how much) is unclear.
My biased answer is that, while language designers talk a lot about making languages tooling-friendly, there’s little of that actually happening (at least with “current” languages; “next” ones seem to fare better). Like, it was said that “Rust macros are designed with tooling friendliness in mind”, but overall the language is pretty tooling-hostile, mostly for accidental reasons. It seems to me that if we _actually_ co-design a tooling-first language, without trying to innovate too much, but just ensuring that existing techniques work robustly, we might arrive at a close, but meaningfully different, point in the design space.
Specifically, here’s my version of an IDE-friendliness diff for language design:
Push conditional compilation much further down the pipeline, so that it happens after all semantic analysis. This is a prerequisite for making automated refactors that are _guaranteed_ to work (a sketch of this failure mode follows below).
Similarly, push metaprogramming further down, so that code analysis doesn’t invoke user-defined code (which might be arbitrarily slow), but, at the same time, the meta parts can fully reflect on existing code, including resolved types. C#-style source generators are an interesting design here.
Have strong signatures at both the micro and macro levels. At the macro level, have well-defined compilation units with an explicitly specified DAG of dependencies and signature files. At the micro level, annotate the types of functions. Use tooling to reduce the double-annotation burden.
Ensure that each source file can be somewhat deeply analyzed in complete isolation. The last two points should unlock both embarrassingly parallel (distributed) compilation, and snappy completion.
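To illustrate the conditional-compilation point above: the function and cfg flags below are invented for this sketch, but they show why stripping `#[cfg]` before semantic analysis makes guaranteed refactors impossible.

```rust
// Hypothetical example: because #[cfg] is resolved before any semantic
// analysis, an IDE targeting Linux never even name-resolves the Windows
// branch. A "rename spawn_worker" refactor therefore cannot be guaranteed
// to update every definition and call site.
#[cfg(target_os = "linux")]
fn spawn_worker() {
    println!("spawning via fork/exec");
}

#[cfg(windows)]
fn spawn_worker() {
    println!("spawning via CreateProcess");
}

fn main() {
    spawn_worker();
}
```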
I have ADHD, and though I love having a tidy home, I struggle to make it happen. Still, I find chores to be a low-stakes daily dojo to practice prioritization, executive function lol, "good enough" non-judgmental thinking, and all those values.
Two books helped me greatly, and they both call out the exact distinction between perfection and efficiency that you do. Consciously giving up on "efficiency" has helped me finish a lot of projects I otherwise would have put off for far too long!
The books are "How to Manage Your Home Without Losing Your Mind" by Dana White, for ADHD folks, and "How to Keep House While Drowning" by KC Davis, for when life is f'ed. Heartily recommend both.
I read this one during a four-year stretch when I found myself neck-deep in consulting services, managing both development projects and implementations. The book was so aligned with the reality of things in the field that I started check-marking paragraphs so I could count them later.
I understand why you're asking. I have often felt that way myself, and it's why I put off getting treatment for so long. I was worried I would just end up being some 30-year-old tech dude addicted to Adderall who didn't actually need it. Getting a diagnosis was so helpful for me that I want to address those questions when others have them and educate around it.
The short answer is that I was diagnosed independently by 2 professionals, I have a family history of ADHD, and I exhibit all of the symptoms of ADHD.
Anecdotally, I know what stimulant abuse looks like. I have seen friends abuse Adderall and other stimulants. I react totally differently to it than they do.
Since starting: I'm less angry. I'm less annoyed. I have the ability to listen to my spouse talk to me. I'm not jittery or jumpy any more. I am not constantly singing, tapping, humming, talking over people, or talking at yelling volume in normal conversations. I know when to stop talking. I can actually take naps now and go to sleep, instead of staying up until my body almost literally shuts down, the way I used to.
I'd be happy to give you or anyone else a more in-depth walkthrough of how I got here and how I'm sure of what I have, but I understand that's not what you asked for.
I'm currently writing a book on the subject of bad hiring practices, and other management mistakes. Allow me to share some of what I wrote:
---------------------
Homework assignments and personality quizzes won’t give you excellence
This year I was once again hiring junior-level developers, and the same dynamic was at work, but I got a surprising reaction from the person I spoke to. I'll call her Zareen; she had just come through the Grace Hopper program.
Zareen had been interviewing at a few different places, but I assumed she was still open to hearing about the specific job that I was hiring for, so we arranged a phone call. We chatted for 15 minutes, and then I suggested she should come by the office and meet the whole team. She was just absolutely stunned.
"Wow, I didn't expect things to move this fast!" she exclaimed.
Let's think about that for a minute. She wants a job, I might want to hire her, I ask her to come by the office for an interview, and so she is stunned. It says a lot about how broken our hiring processes have become that what used to be the absolutely boring, dull, and standard process now provokes the response, "Wow, I didn't expect things to move this fast!"
Apparently other companies were giving her homework assignments and personality quizzes and phone interviews. “Please go to this website and take this test, we are trying to figure out what your skills are.”
At two other companies, she had already done 20 minute interviews, but never with anyone on the tech team. Instead she got a call from someone in the HR department, who read a checklist of words, which the HR person did not understand, but they were all requirements: “Have you ever heard of HTML? Do you know how to use that? What about CSS? Do you have that? How many years of skill do you have with CSS? And Java? I mean, Javascript? Are those different? Yes? Okay, I think we want Javascript. Do you have that? Yes? How many years?”
What an empty ritual: reading words that are not understood.
This post is completely and totally wrong. At least you got to ruin my day, I hope that's a consolation prize for you.
There is NO meaningful connection between the completion vs polling futures model and the epoll vs io-uring IO models. comex's comments regarding this fact are mostly accurate. The polling model that Rust chose is the only approach that has been able to achieve single allocation state machines in Rust. It was 100% the right choice.
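For context, this is roughly the shape of the polling model being defended here. The CountDown future below is a made-up example, not real library code: a future is just a struct whose state lives inline, an executor calls poll until it returns Ready, and nested futures compile into one flat state machine that typically sits behind a single allocation when spawned.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Made-up example: a future that must be polled three times before it is
// ready. All of its state is an inline field, so composing futures like this
// nests structs rather than allocating per operation.
struct CountDown(u32);

impl Future for CountDown {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.0 == 0 {
            Poll::Ready("done")
        } else {
            self.0 -= 1;
            // A real future would arrange for the waker to be called when
            // progress is possible; here we just ask to be polled again.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

fn main() {
    // Any executor can drive this, e.g. futures::executor::block_on(CountDown(3));
    // std deliberately does not ship one.
    let _ = CountDown(3);
}
```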
After designing async/await, I went on to investigate io-uring and how it would be integrated into Rust's system. I have a whole blog series about it on my website: https://without.boats/tags/io-uring/. I assure you, the problems it presents are not related to Rust's polling model AT ALL. They arise from the inability of Rust's borrow system to describe dynamic loans across the syscall boundary. A completion model would not have made it possible to pass a lifetime-bound reference into the kernel and guarantee no aliasing either. But all of them have fine solutions building on work that already exists.
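A minimal sketch of that borrow problem, with a hypothetical `submit_read` standing in for real io_uring bindings:

```rust
use std::io;

// Hypothetical API sketch, not real io_uring bindings.
async fn read_into(fd: i32, buf: &mut [u8]) -> io::Result<usize> {
    // Imagine: submit_read(fd, buf.as_mut_ptr(), buf.len()).await
    //
    // Futures can be dropped at any await point (that's cancellation). If this
    // future is dropped after submission but before the completion arrives,
    // the borrow of `buf` ends and the caller is free to reuse or free that
    // memory, while the kernel still holds the raw pointer and may write
    // through it. The borrow checker cannot express "loaned to the kernel
    // until the operation completes", which is why practical io_uring
    // interfaces take owned buffers instead of references.
    let _ = (fd, buf);
    todo!()
}

fn main() {}
```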
Pin is not a hack any more than Box is. It is the only way to fit the desired ownership expression into the language that already exists, squaring these requirements with other desirable primitives we had already committed to: shared-ownership pointers, mem::swap, etc. It is simply FUD - frankly, a lie - to say that it will block "noalias"; following that link shows Niko and Ralf having a fruitful discussion about how to incorporate self-referential types into our aliasing model. We were aware of this wrinkle before we stabilized Pin; I had conversations with Ralf about it. It's just that now that we want to support self-referential types in some cases, we need to do more work to incorporate them into our memory model. None of this is unusual.
And none of this was rushed. Ignoring the long prehistory, three and a half years passed between the development of futures 0.1 and the async/await release. The feature went through a grueling public design process that burned out everyone involved, including me. It's not finished yet, but we have an MVP that, contrary to this blog post, does work just fine, in production, at a great many companies you care about. Moreover, getting a usable async/await MVP was absolutely essential to giving Rust the escape velocity to survive the ejection from Mozilla: every other funder of the Rust Foundation considers async/await core to their adoption of Rust, as does every company that now employs teams to work on Rust.
Async/await was, both technically and strategically, as well executed as possible under the circumstances of Rust when I took on the project in December 2017. I have no regrets about how it turned out.
Everyone who reads Hacker News should understand that the content you're consuming is usually from one of these kinds of people: a) dilettantes, who don't have a deep understanding of the technology; b) cranks, who have some axe to grind regarding the technology; c) evangelists, who are here to promote some other technology. The people who actually drive the technologies that shape our industry don't usually have the time and energy to post on these kinds of things, unless they get angry enough about how their work is being discussed, as I am here.
A while ago I wrote a Python library called LiveStats[1] that computed any percentile for any amount of data using a fixed amount of memory per percentile. It uses an algorithm I found in an old paper[2] called P^2, which uses a polynomial to find good approximations.
The reason I made this was an old Amazon interview question. The question was basically, "Find the median of a huge data set without sorting it," and the "correct" answer was to have a fixed-size sorted buffer, randomly evict items from it, and then use the median of the buffer. However, a candidate I was interviewing had a really brilliant insight: if we estimate the median and move it a small amount for each new data point, it would be pretty close. I ended up doing some research on this and found P^2, which is a more sophisticated version of that insight.
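Here's a rough sketch of that insight (my own toy version, not the candidate's code and not the actual P^2 algorithm, which adapts its markers much more cleverly):

```rust
// Naive streaming median estimate: nudge the current estimate toward each
// new observation by a small fixed step. Uses O(1) memory, and for large,
// reasonably stationary streams it ends up near the true median.
fn approx_median(data: impl IntoIterator<Item = f64>, step: f64) -> f64 {
    let mut iter = data.into_iter();
    let mut estimate = match iter.next() {
        Some(first) => first,
        None => return f64::NAN,
    };
    for x in iter {
        if x > estimate {
            estimate += step;
        } else if x < estimate {
            estimate -= step;
        }
    }
    estimate
}

fn main() {
    // 10,000 pseudo-shuffled values in 0..1000; the true median is ~499.5.
    let data: Vec<f64> = (0..10_000).map(|i| ((i * 7919) % 1000) as f64).collect();
    println!("approx median: {}", approx_median(data, 0.5));
}
```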
For static sets (where you construct the filter once and then use it for lookup), blocked Bloom filters are the fastest for lookup. They do need a bit more space (maybe 10% more than Bloom filters). Also very fast are binary fuse filters (which are new) and xor filters; they also save a lot of space compared to the others. Cuckoo filters, ribbon filters, and Bloom filters are a bit slower. It's a trade-off between space and lookup speed, really.
For dynamic sets (where you can add and remove entries later), the fastest (again for lookup) is the "succinct counting blocked Bloom filter" (no paper on this yet): it is a combination of a blocked Bloom filter and a counting Bloom filter, so lookup is identical to a blocked Bloom filter. Then come cuckoo filters and counting Bloom filters.
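To make the "blocked" part concrete, here is a simplified sketch of a blocked Bloom filter (the layout and hash splitting are my own simplification, not any particular implementation): every probe for a key lands inside one 512-bit block, i.e. a single cache line, which is where the lookup speed comes from.

```rust
// Simplified blocked Bloom filter: each block is 512 bits (one cache line).
// A key's hash picks one block, then a few bit positions inside that block.
struct BlockedBloom {
    blocks: Vec<[u64; 8]>,
}

impl BlockedBloom {
    fn new(num_blocks: usize) -> Self {
        BlockedBloom { blocks: vec![[0u64; 8]; num_blocks] }
    }

    fn index(&self, hash: u64) -> usize {
        (hash >> 32) as usize % self.blocks.len()
    }

    fn insert(&mut self, hash: u64) {
        let i = self.index(hash);
        let block = &mut self.blocks[i];
        for k in 0..3 {
            let bit = ((hash >> (9 * k)) & 0x1ff) as usize; // 3 probes, each in 0..512
            block[bit / 64] |= 1u64 << (bit % 64);
        }
    }

    fn contains(&self, hash: u64) -> bool {
        let block = &self.blocks[self.index(hash)];
        (0..3).all(|k| {
            let bit = ((hash >> (9 * k)) & 0x1ff) as usize;
            (block[bit / 64] >> (bit % 64)) & 1 == 1
        })
    }
}

fn main() {
    let mut f = BlockedBloom::new(1024);
    f.insert(0x9e3779b97f4a7c15); // in practice, hash the key first
    assert!(f.contains(0x9e3779b97f4a7c15));
}
```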
I know Cisco is using core.logic, which is David Nolen's Clojure variant of miniKanren, in their ThreatGrid product. I think the enterprisey uses of miniKanren are a bit different from the purely relational programming that I find most interesting, though.
Having said that, we are now on our second generation of mediKanren, which is software that performs reasoning over large biomedical knowledge graphs.
mediKanren is being developed by the Hugh Kaul Precision Medicine Institute (HKPMI) at the University of Alabama at Birmingham. HKPMI is run by Matt Might, whom you may know from his work on abstract interpretation and parsing with derivatives, or from his more recent work on precision medicine. mediKanren is part of the NIH NCATS Biomedical Data Translator Project and is funded by NCATS.
Greg Rosenblatt, who sped up Barliman's relational interpreter by many orders of magnitude, has been hacking on dbKanren, which augments miniKanren with automatic goal reordering, stratified queries/aggregation, a graph database engine, and many other goodies. dbKanren is the heart of mediKanren 2.
I can imagine co-writing a book on mediKanren 2, and its uses for precision medicine...