There wasn't a need for tens of different Wayland compositors. There is no need to endlessly bikeshed over extensions instead of delivering user value. These are failures of leadership in driving the replacement of X.
Just compare this to Windows, which rearchitected its compositor into something modern without splitting into tens of compositors or breaking a ton of apps.
Here you're just comparing proprietary closed source development to open source development. In the proprietary version the goal is to improve a product. The OSS goals are much harder to pin down and can be different person to person, but it wouldn't be unreasonable to have a goal of "make it so that other devs can make their own compositors easily" and therefore you're describing an obvious success.
Short term this might be a far slower and worse approach. It's not clear that's the case long term though, making things easier to try out different ideas and then finding a winning compositor project could be better than being stuck with one.
It isn't particularly easier to make your own compositor either, as you now also have to bring your own window manager. What made the X architecture much more interesting is that it avoided coupling the window manager to the compositor. Hell: there even are multiple popular compositors for X, as they also managed to avoid coupling the compositor to the display server (which would be the one part of the system that you don't find too many of -- though there were multiple implementations over the years! -- but that's not really much different from Wayland, where everyone is using the same library to implement the behaviors as part of their coupled-together balls of mud.)
But why would I ever want to have a separate compositor and window manager? Like the display stack benefits from "vertical integration", being modular is a tradeoff, often of performance and significant complexity.
Why not just make a display server (which handles everything rendering related, compositing included), and then add a window manager as a plugin/extension on top? Window managers are not that complicated.
I dunno: I have never ever wanted to make a compositor -- which, to me, feels like a really boring piece of graphics infrastructure -- and yet I have used multiple window managers over the years and have absolutely wanted to make my own window manager? In X, making your own window manager was so popular of an activity that it honestly felt kind of unreasonable just how many window managers existed, and yet everyone used one of a handful of compositors and I'm honestly not sure why anyone in their right mind would bother making their own display server?
(Same... I know people use them to get some pretty effects; but, they add a frame of latency I do not want and require lots of memory and assume acceleration I don't need.)
There is no way to avoid a frame of latency without "racing the beam", which AFAIK is quite complicated and not compatible with most GUI frameworks. That is, if you don't want tearing.
One frame of latency and adding a frame of latency are different things. The first is required (without tearing); the second should be avoided at all costs (although high display refresh rates reduce the problem of "long" swapchains quite a bit).
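The arithmetic behind that parenthetical is easy to make concrete. A back-of-the-envelope sketch (my own simplified model, not anything from the thread), where each queued frame beyond the unavoidable first one costs a full refresh interval:

```python
def added_latency_ms(frames_in_flight: int, refresh_hz: float) -> float:
    """Extra latency beyond the one unavoidable frame, in milliseconds.

    Simplified model: every queued frame beyond the first adds one
    full refresh interval before a rendered frame reaches the screen.
    """
    extra_frames = max(frames_in_flight - 1, 0)
    return extra_frames * 1000.0 / refresh_hz

# A 3-deep swapchain at 60 Hz queues two extra frames (~33 ms added),
# while the same depth at 240 Hz adds only ~8 ms -- which is why high
# refresh rates soften the cost of "long" swapchains.
```

Same swapchain depth, a quarter of the added latency at 240 Hz: the penalty scales with the refresh interval, not the frame count alone.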
It does not matter whether the devs alone worked on it in isolation; what matters is at what point there was public use and how it has evolved. The earliest you could argue it was being used by users on a distro would be 2016 in Fedora. Actual mainstream use came in Ubuntu around 2021, but as an option, and as the default literally only just this year.
Perhaps proprietary closed source development is better for making operating systems. Is it a coincidence that Google was able to scale Linux to billions of devices while open-source-driven efforts weren't? Open source development should take some lessons if it wants to be successful and not aggravate the developers writing apps for the platform, like what happened in the article, forcing them to do extra work.
If development for X is ceasing now, there isn't time to experiment on finding the true successor.
I think the hard part about the Linux desktop ecosystem and its development pattern is the cobbled-up-parts nature of the system, where different teams and individuals work on different subsystems with no higher leadership directing how all of these parts should be assembled to create a cohesive whole. We have a situation where GUI applications depended on X.org, yet the X.org developers didn't want to work on X.org any more. If the desktop Linux ecosystem were more like FreeBSD in the sense that FreeBSD has control over both the kernel and its bundled userland, there'd be a clearer transition away from X.org since X.org would have been owned by the overall Linux project. However, that's not how development in the Linux ecosystem works, and what we ended up with is a very messy, dragged-out transition from X to Wayland, complete with competing compositors.
Bazaar-style development seems to work for command-line tools, but I don't think it works well for a coherent desktop experience. We've had so much fragmentation, from KDE/Qt vs GNOME/GTK, to now X11 vs Wayland. Even X11 itself didn't come from the bazaar, but rather from MIT, DEC, and IBM (https://en.wikipedia.org/wiki/X_Window_System).
> Perhaps proprietary closed source development is better
Perhaps...
> Open source development should take some lessons if they want to be successful
A lot of people who write the gui stuff for Linux do it because they want to. Success is not necessarily the same metric as a company making a product.
There are companies working within the space and I doubt the licensing really makes much difference to the outcome (i.e. your Google example)
> If development for X is ceasing now, there isn't time to experiment on finding the true successor.
Why? Again, the people working on it because they want to don't need to do anything, they can experiment. Someone can still fix up issues in X. Some companies will fund the development of things that are important to them. You make it sound like the oss community should be acting like one entity to achieve something, but there is no overarching goal nor a reason for there to be one. People will continue pulling in different directions.
Linux (the kernel) is also open source and doesn't suffer from the fragmentation problem. It's pretty much unique to the Linux desktop because there are too many cooks involved.
But even if there's only one cook, it could be worse (if that cook is the gnome team). At least with multiple cooks we can pick kde instead of gnome.
Phones are a different market from computers, even though they're technically the same thing. A large segment of people own "phones" but not a computer. Linux runs a large chunk of the internet. I think it's used quite well at scale.
Even in the server market the success of having a stable app platform can be attributed to Linux, the kernel, solely for having a policy to never break userspace. The base of the app platform was already figured out a long time ago, and if you look at the bulk of Linux contributions you will see that they are coming from companies using Linux commercially.
Certainly not in a high-productivity environment. Google has to swap out most of the runtime components with distributed alternatives to make it compelling in a corporate (distributed) environment.
How can you compare the Cathedral with a bazaar? This is not a technical difference at all.
Apple/Microsoft can do whatever they want, just break compatibility at any point and everyone else wanting to have their programs supported on their platform will adapt.
Meanwhile for Linux network effect has a much bigger role to play, you can't tell anyone else what to do, but protocols can only emerge from working together.
Also, I wouldn't bring up Microsoft's display stack as a positive example at all.
Those two are worlds apart when it comes to backward compatibility.
> Also, I wouldn't bring up Microsoft's display stack as a positive example at all.
Why not? It's doing exactly what it's supposed to do, and has been since the late 90s. There's tons of fundamental improvements since then, but they're all under the hood without affecting user-facing features. I'd say the Windows display stack modernization is an excellent example of how it should be done (a real shame though that Microsoft is actively ruining Windows by adding user-hostile features on top of the pretty good technical base).
User experience and developer experience for an OS are real things. It's easy to make a bad experience, people have to actually care about being able to deliver a good experience. Even if you can't tell people what to do, it should be possible to align on something that can deliver a good experience for users and developers.
>This is not a technical difference at all.
Which is why I said it was a problem with leadership rather than with the technical merits.
Actually, that'd probably be a better outcome. But as it is, Red Hat & Ubuntu et al pay people to work on Wayland and those people follow corporate priorities rather than centralized priorities.
I think Red Hat wants a working desktop but I don't think they have strong official opinions on how to get there. I think individual people are responsible for the GNOME/Wayland/Freedesktop messes.
Windows gets to completely rearchitect its compositor because it only provides one stable ABI to get pixels on the screen: link to USER32.DLL, create the necessary objects to represent a window of your application's class, then create and pump a message queue for it. It's ancient, but it works, and more specifically will never change. Even the higher-level toolkits Windows ships are ultimately creating USER windows, and USER has been the only UI ABI since version 1.
macOS is the same way, except Carbon (a light modification to the procedural Toolbox API) and Cocoa (the Mac's first OOP toolkit) were "toll-free bridged" to each other rather than, say, writing Cocoa in terms of Carbon.
In contrast, X11 is a protocol anyone can implement and speak. There is no blessed library that you must use. No, Xlib doesn't count. Servers have to take their clients as they come. And Wayland, while very much deliberately stripped down from X, still retains this property of "the demarc point is a protocol" while every proprietary OS (and Android) went with "the demarc point is a library".
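That "the demarc point is a protocol" point is literal: an X11 client is anything that writes the right bytes to a socket, Xlib or not. A sketch of the fixed 12-byte connection-setup request from the X11 protocol spec (the no-authorization case), built by hand; the server path `/tmp/.X11-unix/X0` mentioned below is the conventional default, not something from the thread:

```python
import struct

def x11_setup_request(little_endian: bool = True) -> bytes:
    """Build the X11 connection-setup request with no authorization.

    Wire layout per the core protocol: a byte-order marker ('l' or 'B'),
    an unused pad byte, protocol version 11.0 as two CARD16s, the
    lengths of the auth-protocol name and data (both zero here), and
    two trailing pad bytes -- 12 bytes total.
    """
    order = ord('l') if little_endian else ord('B')
    fmt = '<BxHHHHxx' if little_endian else '>BxHHHHxx'
    return struct.pack(fmt, order, 11, 0, 0, 0)
```

Write those 12 bytes to the Unix socket at `/tmp/.X11-unix/X0` (display `:0`) and the server answers with its setup block; from there on it's requests and events over the same socket. Wayland keeps exactly this property, just with a different wire format and an object model on top.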