willsher's comments | Hacker News

They are, or should be, entirely self-contained, such that whatever segregation is employed - be it hardware via a VM, or in-kernel with AppArmor or SELinux - provides sufficient isolation for the workload. V8's problem is JavaScript and npm, but limiting the blast radius with hardware virtualisation is a win for segregation, and V8 will win, at least for the front end, because it's got the mindshare. As long as the library ecosystem cleans up.


"In kernel with apparmor or SELinux" can't possibly provide sufficient workload isolation, because it implies workloads share a kernel. It's easy to rattle off relatively recent kernel LPEs that no mandatory access control configuration would have prevented.

The Linux kernel simply wasn't designed to provide the kind of isolation "naive" containers want it to. Actually, generalize that out: Unix kernels in general weren't designed this way. It just doesn't work.


End game then is LittleKernel/Zircon on Fly.io? When do we get to play with those?


Whether software-based access control is sufficient depends on the workload and where in the stack the workload runs. I agree, though: hardware-virtualisation-based isolation is more secure and less complex. It also requires access to bare metal - so either a provider's service or running it yourself - which is a trade-off.


The humanities - history, geography, politics, religion, and partly economics - are our legacy: what we are, and therefore what we can become. They are arts, not sciences. Oral and written tradition ran for generations before technology and science. Of course this is important to humans, and as a species we should embrace it. Computers should augment us, not replace us.


This reminded me of Robert Wilson's congressional testimony on Fermilab. [0]

From the link:

Despite the key role physicists played in ending World War II, some members of Congress were skeptical of paying a hefty price tag for a machine that did not seem to directly benefit the U.S. national interest.

During Wilson’s testimony, then-senator John Pastore bluntly asked, "Is there anything connected with the hopes of this accelerator that in any way involves the security of the country?"

"No, sir, I don’t believe so," Wilson replied.

"It has no value in that respect?"

"It has only to do with the respect with which we regard one another, the dignity of man, our love of culture. It has to do with: Are we good painters, good sculptors, great poets? I mean all the things we really venerate in our country and are patriotic about. It has nothing to do directly with defending our country except to make it worth defending."

[0] https://www.aps.org/publications/apsnews/201804/history.cfm


> "It has nothing to do directly with defending our country except to make it worth defending."

Thank you for sharing. This is one of the most thought-provoking lines I have ever read. People and politicians often forget that making a country worth defending is as important as the act of defense.


I mean, it's obvious.

Or maybe it's proof of the intelligence of an elected official?


These topics are incredibly important, and yet it's extremely difficult to make a living studying them, and people are routinely mocked for attempting to do so. One day we will realize that an entire society made up of engineers and managers is not a healthy one. How can we fix this when "stuff that people will pay for" is pretty much the only meaningful measure of value?


People get paid for engaging in them - i.e. doing them. Studying the arts will not, however, make you an artist.

It will make you, at best, a keen observer. Some people can make a living writing books about their observations and assertions formed from them. Most, however, will not.

How much extra value does one more person writing about Locke or Rawls really give society? It depends entirely on how many we already have.


The arts and the humanities are not identical.

The people doing history, for example, are the grad students and faculty who dig into archives and write books. They, largely, are shit on by society and are working in a field that pays virtually all of them like crap.

And they don't repeat themselves. The Nth person to write a book on Topic X isn't just saying what has already been said. They are in dialog with all of the other authors. They reinterpret and recast the history. They view it through different methods or they create new methods that others will use in the future.


There is a value that transcends money: value to society as a whole. Sure, people need to live and have basic needs met, but to transcend finance is perhaps a necessity. One can make do with a lot less when the soul is enriched, when we take solace from the learning of humanity.


And how much do you think artists get paid?


For independent artists, anywhere from "not at all" to "obscene amounts", with most at the lower end of the scale or earning nothing.

For commercial artists- designers and such for companies- it is usually a good living.


It’s weird that we don’t throw more money at the social sciences, in particular. People often laugh at the social sciences because of things like the reproducibility crisis. But if we’re finding it difficult to research fundamental questions about humans, why don’t we throw billions of dollars at the challenge, like we did with the Large Hadron Collider?


The humanities aren't science, but they're still important in making rational decisions. "History repeats itself", at least partially, and history/politics/religion help explain human psychology, if you look into the motives behind them.


The Humanities are _essential_ to making rational decisions outside (and sometimes within) the sciences. Thomas Kuhn's work used to be viewed as a "bridge" between them (history + sociology of science). But in fact modern science owes its existence to the humanists of the Renaissance. Of course the really hard problems aren't scientific, or amenable to scientific solutions: war and peace, politics and crime, social movements, religion, economic exploitation and the ultimate questions of "why?" and "what's next?" Those require answers from history, literature, philosophy, art and culture. That's why Newton only spent the first part of his life studying the physical world, the latter part had him searching for meaning in the metaphysical.


It seems to be the nature of this and VR that they boom for a while and then bust, having stagnated. They then wait for the next alignment of underpinning technology, knowledge and culture to emerge again. The last one I'm aware of was the mid-to-late 1990s, when VRML was gaining traction and new ways of thinking about AI were emerging.

IBM Watson or similar (if I recall correctly, IBM was still calling its business AI system Watson back then) seems to be prominent in both booms, and both times the results it gives haven't matched its marketing hype.

The technology, having been significantly furthered, fades into the day-to-day of computing somewhat, until the next boom drives another short burst of innovation and awareness.

Conscious AI and realistic VR are some way off, if we ever see them. Culturally and ethically we are not ready to answer the questions they pose, and the cyclical nature gives us more time to digest the latest raft of questions in light of the progress.


My understanding from my Solaris days was that CPU load is the number of processes queued for CPU time over a given period. It doesn't relate to usage directly at all (other than that higher CPU speed and I/O throughput lead to a reduction in load, because tasks finish quicker).
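
A rough sketch of the mechanism (illustrative constants and samples; classic Unix load averages are an exponentially damped moving average of the run-queue length, sampled every few seconds):

    // loadavg.go - a minimal sketch of how a 1-minute load average is
    // maintained: an exponentially damped moving average of the number
    // of runnable (queued) tasks, sampled at a fixed interval. The
    // interval, period and samples here are illustrative, not measured.
    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        const interval = 5.0 // seconds between samples
        const period = 60.0  // averaging window: 1 minute
        decay := math.Exp(-interval / period)

        load := 0.0
        // runQueue: tasks waiting for (or using) a CPU at each sample.
        runQueue := []float64{4, 4, 3, 2, 2, 1, 0, 0}
        for _, n := range runQueue {
            load = load*decay + n*(1-decay)
            fmt.Printf("queued=%.0f load=%.2f\n", n, load)
        }
    }

Note how the load lags the run queue: tasks finishing quicker (faster CPU, better I/O throughput) shortens the queue, and the reported load decays towards it.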


That's correct.


I thought of seven. Is there any significance to the number in Kubernetes?


Yes, the projects preceding Kubernetes have a history of Star Trek naming references, specifically to the Borg [1].

As to Heptio’s name, Beda explained: “When we were helping create Kubernetes, we pitched it as ‘Seven of Nine,’ a ‘Star Trek: Voyager’ character who’s a former Borg drone. That was a reference to the Borg, a code name for Google’s internal version of Kubernetes, the thing that runs its search and apps and ads. It’s total geek culture. We wanted a friendlier Borg. That name turned into ‘Project Seven.’ When we went public with Kubernetes, we didn’t want to lose track of the ‘seven.’ The Kubernetes logo has seven sides. ‘Hept’ is the Greek prefix for ‘seven,’ and it’s a way to pull the ‘seven’ through.”

[1] http://www.geekwire.com/2016/ever-come-kooky-kubernetes-name...


If they wanted to keep the reference to seven and make it relevant to containers they could have named it "whatsinthebox"


That's part of the point of doing it, for me. They can, hopefully, use the data in a useful way. Sure, they may keep the data set they have private and paid-for, but I also have a copy of the data that I can upload to other, more publicly minded companies.

From a selfish point of view, perhaps my DNA will contribute to solving some of the illnesses I'm more prone to or already have, and help my children if they need treatment. From a more altruistic view, it may help others too.

Do I trust Big Pharma? I'm not sure I do, but what I do know is that medical advances are making many people's lives longer and more bearable, in both developed and developing nations.


There are still a few gotchas in the low-level stuff, especially networking, but also in other areas that involve kernel drivers more directly.

The security around the containers still isn't as good as a hardware VM's, but I think hardware segregation is on the roadmap.

It's ideal for running containers inside VMs, though, and offers nice management segregation on those larger VMs.


Can you name some of these networking gotchas you mentioned? And elaborate on the security remark?


The major security issue is that containers share the same kernel as the host, with the whole kernel syscall interface as attack vector. VMs run separate kernels that only interface with the host via the hypervisor.


Well, there's also the issue that "Linux containers" are not a kernel primitive. So there's plenty of syscalls that aren't even namespaced (the keyring ones come to mind). Docker handles this by having a default seccomp profile that disables a bunch of syscalls that aren't namespaced.
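
As a rough illustration of that approach - a minimal sketch using the libseccomp-golang bindings (assuming they and the libseccomp C library are available), not Docker's actual profile-loading code - denying a non-namespaced syscall such as keyctl looks something like:

    // seccomp_sketch.go - a minimal sketch of denying a single
    // non-namespaced syscall (keyctl) while allowing everything else,
    // in the spirit of Docker's default seccomp profile (which is
    // really a much larger allow-list).
    package main

    import (
        seccomp "github.com/seccomp/libseccomp-golang"
    )

    func main() {
        // Default action: allow. Real profiles are usually
        // default-deny with an explicit allow-list.
        filter, err := seccomp.NewFilter(seccomp.ActAllow)
        if err != nil {
            panic(err)
        }
        keyctl, err := seccomp.GetSyscallFromName("keyctl")
        if err != nil {
            panic(err)
        }
        // Make keyctl(2) fail with EPERM instead of reaching the kernel.
        if err := filter.AddRule(keyctl, seccomp.ActErrno.SetReturnCode(1)); err != nil {
            panic(err)
        }
        if err := filter.Load(); err != nil {
            panic(err)
        }
        // From here on, keyctl(2) returns EPERM in this process.
    }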

However, the development of Zones and Jails (on SunOS/Solaris/illumos and FreeBSD) was much more security-focused, with the default being "that's not allowed in a (Jail|Zone) until we can make sure it's safe". I really wish Linux had just ported Jails, or based its security model on them.

I do a lot of work with the internals of Docker and runC at SUSE. Trust me, it's not pretty how you have to set up "Linux containers", and there are 1001 gotchas.


And we haven't seen many hypervisor security holes in the past... :) Especially with Xen.


From research around the Internet, D3 is generally seen as safe (http://www.mayoclinic.org/drugs-supplements/vitamin-d/safety... is pretty typical of the safety information). Personally, I take a D3 supplement of 2000 IU/day during the dark months (along with some other supplements due to a mostly vegan diet, some of which contain small D2/D3 amounts, so my actual supplemented intake is around 2400 IU).


Config management, as it currently stands with Puppet/Chef etc., ideally has no place _in_ containers, iff the issue of how applications are deployed there can be solved in a suitably generic and straightforward way. A container becomes an immutable item: logs get shipped off the container, config comes from service discovery, failing containers get taken out of service.

There is a need for shims like confd, though perhaps we'll see configuration libraries emerge that can go straight to a service discovery/config lookup endpoint such as etcd or ZooKeeper.
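
A minimal sketch of an app going straight to etcd for its own config at startup (assuming the etcd v3 Go client; the endpoint and key names are made up):

    // config_lookup.go - a minimal sketch of an app pulling its config
    // from etcd at startup instead of reading a file baked into the
    // container. The endpoint and key are hypothetical.
    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://etcd.internal:2379"}, // hypothetical endpoint
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        resp, err := cli.Get(ctx, "/myapp/config/listen_addr") // hypothetical key
        if err != nil {
            panic(err)
        }
        for _, kv := range resp.Kvs {
            fmt.Printf("%s = %s\n", kv.Key, kv.Value)
        }
    }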

All of the config management tools have some sort of overhead, and all affect the ability of a container (bar its data, in the case of databases) to be immutable.

Building the container should be as simple as installing the package for the given app into whatever container system is being used, be it Docker, a more bare-bones container, or something that comes from the standardisation effort.

The base 'OS' in the container then becomes irrelevant, as the base OS is really the containing host OS, with the containers just containing the apps and no additional overhead.

In terms of where configuration management fits, it is potentially at the orchestration level, but that said, there are already other, more specialised tools emerging in that space too.


There is also pkgsrc, which from memory can be built to be separate from the underlying OS. FreeBSD's ports is similar. Slackware's packaging is based on simple tar files.

It does seem that software build/deploy is a key problem that needs solving, and the decentralised, easy-to-grok nature of Docker is key to its software delivery system. Nix is a really clever bit of engineering and design, but it is also hard to grasp. Could it be made simpler? I suspect its 'functional' nature is the hard part.

The nature of containers being essentially immutable, at least from a base software stance, with packages not being upgraded so much as newly installed, avoids the problem of upgrading running services. Most (all?) software would run as its own user, so no root-level daemons.

Configuration files are built from service discovery (e.g. via Kelsey's confd, in lieu of the apps themselves deriving config), so even config need not be preserved if a rollback to prior to the package layering is done.

Just some thoughts, but I agree there is a need to better manage dependencies. Heck, why not build statically linked binaries?


Thanks for the pointers -- I'll dig in to those to get more ideas.

Also agree that config-as-package is part of this too.

As for statically linked binaries -- this solves some of it, but not all. It's still hard to figure out which version of openssl is actually running in production. It also falls down in the world of dynamic languages, where your app is a bunch of rb/php/py files.


Which version of library X is in use can be introspected via which binary is linked, rather than which package is installed - inspecting the actuality, rather than a metadata wrapper in the form of a package, may be preferable.

As for static vs dynamic binaries: is dynamic linking less memory-intensive even in containers - that is, do library versions get shared across containers in RAM, or are they separate? If they aren't shared, static may be much easier to manage in general, though there may be cases where static binaries can't be produced. Upgrading the app involves a recompile, but that's a container-building exercise. Versioning can then become tied explicitly to the container version, or done by inspecting the static binary.
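
On that last point, a minimal sketch of making a binary report its own version (the build command and version string are illustrative; Go supports stamping variables at link time):

    // version_stamp.go - a minimal sketch of stamping a version into a
    // binary at link time, so "what is running?" can be answered by
    // asking the binary itself rather than trusting package metadata.
    // Illustrative static build command:
    //
    //   CGO_ENABLED=0 go build -ldflags "-X main.version=1.2.3" -o app .
    //
    package main

    import (
        "fmt"
        "os"
    )

    // Overridden at link time via -ldflags "-X main.version=...".
    var version = "dev"

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "--version" {
            fmt.Println(version)
            return
        }
        fmt.Println("app running, version", version)
    }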


> Still hard to figure out which version of openssl is actually running in production

You could check the version of openssl in a specific image ID, then check whether that image is used on the cluster.

If one of your images has a package with known security updates, it is just a matter of re-building it with the newer package and re-deploying.


Agreed -- but how do you figure out which packages are in an image without cracking that image?

Quick -- what version of OpenSSL is in the golang Docker images? (https://registry.hub.docker.com/_/golang/). Short of downloading them and poking around the file system, I can't tell.


That's a fair point, but the alternative - always compiling the packages at that point in time - negates it, at the risk of regressions and other 'new version' bugs. That risk may be endemic here anyway: if the author of the golang container built it at a point in time against version X, then went away and left the container alone, upgrading versions may become risky in any case.

Those kinds of risks, and perhaps the larger container risks, look like CI pipeline issues - how is the container tested against given new versions? As an adjunct, how do we do container component integration testing? Is that part of this packaging and building system?


pkgsrc is brilliant for containers!

Not using docker but zones as the container solution here, but what we do is bake images containing: pkgsrc + static config files + small scripts.

To provision we take the image + metadata containing dynamic configuration values (network details, keys & certs, etc.) and execute that.

This allows us to make very stable releases containing all our software. pkgsrc is the most important part of this. It already contains very recent versions of packages, as it is released quarterly. But sometimes we need to run a very specific version in production, or maybe add a patch to fix some bugs. This is super easy with pkgsrc.

I gave a talk last year about some parts of this at a local meetup: http://up.frubar.net/3165/deploy-zone.pdf

