Hacker News | fmjrey's comments

OOP certainly has some early roots in trying to be more efficient with code reuse, organization, and clarity of intent. Later on, Java tried to alleviate serious productivity and security issues with garbage collection and cross-platform portability. It certainly increased the distance between the hardware and the developer, because there are now more levels of indirection that can degrade performance.

However, with hardware progress, performance is not the only critical criterion when systems grow in size, in variety of hardware, in internet-scale volumes, in number of moving parts, and in number of people working on them. Equally if not more important are maintainability, expressivity (so fewer lines of code are written), and overall the ability to focus on essential complexity rather than the accidental complexity introduced by the language, framework, and platform. In the world of enterprise software, Java was welcomed with such enthusiasm that a "code culture" did indeed take hold and grow to an unprecedented scale, internet scale really, on which OO rode as well.

However, not all control is lost as you say. The JVM also runs more advanced languages, and its JIT alleviates some of the performance loss due to the levels of indirection. GCs are increasingly effective and tunable. Off-heap data structures such as ring buffers also exist to achieve performance comparable to C when needed; a sketch of the idea follows below. See Martin Thompson's video talks on mechanical sympathy, which he gave after working on high-frequency trading on the JVM, and check his later work on Aeron (https://aeron.io/). As usual it's all about trade-offs.
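To make the off-heap point concrete, here is a minimal sketch in Clojure using plain JVM interop (nothing Aeron-specific, names made up): a direct ByteBuffer is allocated outside the GC-managed heap, so a fixed-layout buffer can be read and written on the hot path with no allocation and no GC pressure.

    ;; Direct (off-heap) buffer holding 1024 doubles.
    (import 'java.nio.ByteBuffer)

    (def ^ByteBuffer buf (ByteBuffer/allocateDirect (* 1024 8)))

    ;; Absolute puts/gets by slot index: no object allocation,
    ;; no position churn, nothing for the GC to trace.
    (defn write-price! [^ByteBuffer b slot ^double price]
      (.putDouble b (int (* 8 slot)) price))

    (defn read-price ^double [^ByteBuffer b slot]
      (.getDouble b (int (* 8 slot))))

A real ring buffer adds wrap-around indexing and memory-ordering guarantees on top of this, which is exactly the territory Thompson's talks cover.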


Here is an example of a 2006 rant that qualifies: https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...

OO conflates many different aspects that are often orthogonal, bundled together opportunistically rather than by sound rigor. Most languages allow for functions outside classes. That was already the case back when Java and the JVM were created, and it's even more evident today with FP gaining momentum. I think Smalltalk was the only other language that had this limitation.

Like others in this thread, I can only recommend the big OOPS video: https://youtu.be/wo84LFzx5nI


OO fatigue is a healthy symptom of readiness to move to Clojure, where data and functions are free to live without encapsulation. No king of nouns, no king of execution!
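A tiny sketch of what that looks like (made-up names): plain data and a plain function, with nothing encapsulating anything.

    ;; A map is just data; any function can work on it.
    (def order {:id 42, :items [{:sku "A1" :qty 2} {:sku "B7" :qty 1}]})

    (defn total-qty [order]
      (reduce + (map :qty (:items order))))

    (total-qty order) ;; => 3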

The article reads like a story of trying to fit a square peg into a round hole, discussing the pros and cons of cutting the square's corners vs using a bigger hole. At some point one needs to realize we're using the wrong kind of primitives to build the distributed systems of today. In other words, we've reached the limit of the traditional approach based on OO and RDBMS that used to work for 2- and 3-tier systems. Clearly OO and RDBMS will not get us out of the tar pit. FP and NoSQL came to the rescue, but even these are not enough to reduce the accidental complexity of building distributed systems with today's volumes, data flows, and variability of data and use cases.

I see two major sources of inspiration that can help us get out of the tar pit.

The first is the EAV approach as embodied in databases such as Datomic, XTDB, and the like. This is about recognizing that tables or documents are too coarse-grained and that the entity-attribute-value triple is a better primitive for modeling data and defining schemas. While such flexibility really simplifies a lot of use cases, especially the polymorphic data from the article, the EAV model assumes data is always about an entity with a specific identity. Once again the storage technology imposes a model that may not fit all use cases.
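To make that concrete, here is a minimal sketch of the EAV idea in plain Clojure. This is not the actual Datomic or XTDB API (real datoms also carry a transaction id and an added flag), and the facts are made up:

    ;; Facts as [entity attribute value] triples, datom-style.
    (def facts
      [[1 :person/name  "Ada"]
       [1 :person/email "ada@example.com"]
       [2 :person/name  "Alan"]
       [2 :org/name     "Bletchley"]])  ; entity 2 mixes attribute namespaces

    ;; Reassemble one entity from its triples: no table shape
    ;; is imposed up front, so polymorphic data is a non-issue.
    (defn entity [facts eid]
      (into {} (for [[e a v] facts :when (= e eid)] [a v])))

    (entity facts 2) ;; => {:person/name "Alan", :org/name "Bletchley"}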

The second source of inspiration, which I believe is more generic and promising, is the one embodied in Rama from Red Planet Labs, which allows any data shape to be stored following a schema defined by composing vectors, maps, sets, and lists, and possibly more if custom serdes are provided. This removes the whole impedance-mismatch issue between code and data store, and embraces the fact that normalized data isn't enough by providing physically materialized views. To build these, Rama defines processing topologies using a dataflow language compiled and run by a clustered streaming engine. With partitioning being a first-class primitive, Rama handles the distribution of both compute and data together, effectively reducing accidental complexity and allowing for horizontal scaling.
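To give a flavor of the schema-composition idea, here is a hypothetical notation in plain Clojure data. This is not the actual Rama API, just the shape of what a composed, partitioned view can express:

    ;; Hypothetical notation (not real Rama code): a followers view,
    ;; partitioned by user id, whose value shape freely composes
    ;; maps, sets, and lists: a physically materialized view.
    (def followers-view
      {:partition-key :user-id
       :shape {Long                    ; user id
               {:followers #{Long}     ; set of follower ids
                :recent    [Long]      ; list of most recent followers
                :counts    {String Long}}}})  ; e.g. {"day" 12, "week" 95}

The point is that the stored shape is the shape the application actually reads, so there is no ORM-style mapping layer in between.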

The difficulty we face today with distributed systems is primarily due to the sheer number of moving parts: multiple kinds of stores with different models (relational, KV, document, graph, etc.) and too many separate compute nodes (think microservices). Getting out of this mess requires platforms that can handle the distribution and partitioning of both data and compute together, based on powerful primitives for both that can be combined to handle any kind of data and volume.


I mean, this particular problem would be resolved if the database let you define and enforce a UNIQUE constraint across tables. Then you could just do approach #2 without the psychotic check constraint.


So many comments are based on different understandings of local-first. For some it means no data on the server, allowing some to claim it's better for data privacy (but what about tracking?). For others it means the app works offline but data is also on the server with some smart syncing (e.g. with CRDTs; see the sketch below). Others speak of apps requiring no remote data and no network at all, though I find "box product" not very explicit as a name for that category.
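Since CRDTs come up a lot here, a minimal sketch of what that "smart syncing" means: a last-writer-wins register, about the simplest CRDT (timestamps are assumed; real implementations also break ties by replica id). Merge is commutative, associative, and idempotent, so replicas converge whatever the sync order.

    ;; A register is {:value v :ts t}; a write only wins if newer.
    (defn lww-write [reg value ts]
      (if (or (nil? reg) (> ts (:ts reg))) {:value value :ts ts} reg))

    ;; Merging two replicas keeps the newest write.
    (defn lww-merge [a b]
      (cond (nil? a) b
            (nil? b) a
            (> (:ts b) (:ts a)) b
            :else a))  ; tie: keep a (replica-id tie-break omitted)

    (lww-merge (lww-write nil "draft" 1) (lww-write nil "final" 2))
    ;; => {:value "final", :ts 2}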

Also, there does not seem to be any commonly agreed definition of local-first or even offline-first. I would assume the -first suffix means there are other modes but one is favored. So offline-first would mean it works online and offline, while local-first would mean it stores data locally and also remotely, implying some syncing happens for any overlapping area. However, syncing requires a network connection, so is there really a difference between local-first and offline-first?

Personally I would use local-only or offline-only for apps that do not require, respectively, access to remote data or network access, the latter being a subset of the former. With these -only terms in mind, I then see no difference between local-first and offline-first.


I get your point and would reformulate it as: over time a beginner's environment is mostly the top layer of the tech stack, and moving beyond that beginner state is a lot more challenging.

In the 80s I was dabbling in BASIC on Amstrad CPC computers, and things were reasonably simple indeed. When needed I could drop down to Z80 assembly language and peek and poke my way around. And that was it; there weren't many layers between you and the hardware.

In the 90s, however, Windows made things a lot more opaque, though that did not prevent Visual Basic's success. Instead of hardware-generated interrupts you had events, mostly related to the GUI, for which you needed to write some scripts. No more poking around in memory; it's all abstracted away from you. Enthusiasm for this way of working motivated the creation of a (non-compatible) VB equivalent on Linux [1], which includes an IDE with drag-and-drop GUI building and has been used to create an ERP for small businesses in France [2].

So yes, the programming environment now has a lot more layers, but it just means that only the top layers are needed to find your way around. This reduced cognitive load makes things easier and increases the reach. The trade-off is that most programmers have little understanding of the lower levels: compiler optimisation, memory and processor allocation, etc. And since abstractions are inevitably leaky...

[1] https://en.m.wikipedia.org/wiki/Gambas

[2] https://www.laurux.fr/


A good alternative to Jira and other application lifecycle management tools is Tuleap [1]. The PM/agile part is very configurable, and it also offers an integrated wiki, gitolite, and hooks to integrate with CI/CD. LDAP and OpenID integration is available in the community edition [2].

[1] https://www.tuleap.org/

[2] https://docs.tuleap.org/administration-guide/users-managemen...


Unless it's my tiny phone screen, I'm surprised this video hasn't surfaced yet: https://youtu.be/ShEez0JkOFw (Tim Ewald - Clojure: Programming with Hand Tools)


The topic of star forts is very, very interesting, though it raises more questions than satisfying answers.

For example, central Asia has many such forts in the middle of nowhere ([1] [2] [3] [4] [5]), some barely noticeable except from above, and even the surrounding landscape looks so scorched that one wonders what really happened for the land to look like this and for these forts to be so erased. If you think natural erosion you imagine millennia, but focus on the forts and you imagine centuries, and it's hard to combine both scales.

You can also find star forts all over the globe. The kmz file from this site [6] lists about a thousand locations worldwide; it's mind-boggling to navigate around them. Who or what civilization managed to propagate this style so far beyond the reaches of what history taught us?

Finally, star forts are most often associated with some of the finest water canals in the world, and it's also mind-boggling to imagine the amount of work involved in modelling the earth at such scale and precision without any machinery.

Wild theories abound. I personally prefer focusing on the things one can see or visit and abstain from believing any (hi)story, but even then it's captivating.

[1] https://goo.gl/maps/ZUsQsocutLTXiodm7

[2] https://goo.gl/maps/pkTMLFN7fxYofhkw8

[3] https://goo.gl/maps/SwogTFKo1uuidPNn9

[4] https://goo.gl/maps/1aFRdVHujvL8jLNTA

[5] https://goo.gl/maps/gXTCPXjkSvbFnT2B9

[6] http://starforts.org/locations.html


> Who or what civilization managed to propagate this style so far beyond the reaches of what history taught us?

... European colonists, mostly. Star forts are what they built to protect themselves from the locals, who typically outnumbered them by some absurd margin. Because of this, you can find them wherever colonization was happening.

The Central Asian forts you linked are mostly part of the Siberian Line, a line of forts the Russians built where the northern forest turned to the southern steppe, mostly in the early 18th century, to protect their southern border from Kazakh raids and to extend their influence eastwards. The land around them always looked like that, as they were built on the periphery of the more fertile areas in order to protect them. The forts themselves are so eroded mostly because they were not really built to last in the first place. Being in the middle of nowhere, with no cheap transport available, they were built with what was locally available, which was often just packed earth with maybe a brick wall to keep the outer walls in shape.


Would you be able to share your recipe? I've seen many variants on the net. Using ripe, fully yellow fruits, I did many soaks in water and boiled the fruits, and even the boiling juice had to be discarded due to bitterness. After that process I added sugar and a couple of oranges and ended up with an acceptable bitterness, but the flavor is still very sharp. One of the recipes on the net talked about harvesting fruits when they still have shades of green and are not fully ripe. Maybe that's a way to prevent too much bitterness? I also did not remove the skin, and I hear that's where most of the bitterness is.

