The app in the pod is the client (of a DBMS server), and it's the client's IP that changes. A Service in k8s is a network node with an address, but it is used for inbound connections; outbound connections (like from the app to a DBMS server, which may be outside the k8s cluster) usually do not go through Services, as that gives no benefit.
I'm not 100% sure, but I think it didn't display the outcome of the dice in this case. And it would be nice to have some hint showing that the game ends in a tie.
This method is a little more complicated than you describe. I'm also not sure how easily it scales.
You still need to destroy parts of the beehive, let the bees hunger for a few days, and disinfect the parts of the beehive you want to reuse. I don't know if all of this is possible with as many beehives as mentioned in the post.
As far as I know this method is used by commercial beekeepers. Well, in the end you have to choose between burning it all and facing a big investment, or investing the work. It's true that the artificial swarm has to "hunger" a bit.
In the end you get no honey from the bees and have to invest in new frames and sugar. It's no visit to the candy store if you have AFB, but burning everything seems to be a last-resort solution which isn't necessary anymore nowadays.
The other problem you are facing is the beekeepers around you ... they have to do the same, and then there are some black sheep who are not known to the veterinary administration. They can also contribute to a never-ending AFB problem.
In the end, hygiene (torching unused hives and frames) is the first thing everybody can do to prevent such a situation.
An Ansible playbook is usually the main entry point; it consists of a list of plays. Each play has hosts and a list of tasks to run on them.
Because people wanted to reuse their code, the abstraction of roles was created. A role is something like "set up basic OS stuff", "create this list of users", "set up a web server", "set up a database". The association of which roles to run on which machine still happens in the playbook.
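A minimal sketch of what that structure looks like (host group and role names here are made up for illustration):

```yaml
# playbook.yml -- the entry point: a list of plays
- name: Configure web tier
  hosts: webservers          # which machines this play targets
  roles:
    - base_os                # "set up basic OS stuff"
    - webserver              # "set up a web server"

- name: Configure database tier
  hosts: dbservers
  roles:
    - base_os                # roles are reused across plays
    - database
  tasks:
    - name: Plays can also carry plain tasks alongside roles
      ansible.builtin.ping:
```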
LOL, I feel like the author mixes up the two things even in the title. There are HTML attributes, there is the DOM API (which also exists in other languages, and doesn't always refer to HTML — often XML instead), and there are JavaScript object properties. And because the DOM API uses JavaScript objects, you can access properties. But only attributes are serialized/deserialized. And some frameworks blur this, so you get and set both as a "convenience".
Apparently I worded my first sentence badly and you didn't bother to read further than that. Apologies.
I just want to reiterate: DOM objects have properties because they are JavaScript objects. You make it sound like they have properties because the DOM API set them up.
And unfortunately this is only the case for some special HTML attributes, not for all of them.
I think you missed a couple of bits in the article:
> the above only works because Element has an id getter & setter that 'reflects' the id attribute … It didn't work in the example at the start of the article, because foo isn't a spec-defined attribute, so there isn't a spec-defined foo property that reflects it.
This is where the distinction is made between merely setting a property on a JavaScript object, and cases where you're actually calling an HTML-spec'd getter that has side effects.
The whole "reflection" section of the article is dedicated to how these HTML-spec'd getters and setters change the behaviour from the basic JavaScript property operations shown at the start of the article, and how it differs from property to property.
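A toy sketch of what "reflection" means mechanically — a plain object standing in for an element, with a hand-written accessor pair; this is an illustration, not the real DOM implementation:

```javascript
// attrs stands in for the element's serialized attribute map.
const el = {
  attrs: new Map(),
  setAttribute(name, value) { this.attrs.set(name, String(value)); },
  getAttribute(name) { return this.attrs.has(name) ? this.attrs.get(name) : null; },
  // A "reflected" property: the accessor pair forwards to the attribute.
  get id() { return this.getAttribute('id') ?? ''; },
  set id(value) { this.setAttribute('id', value); },
};

el.id = 'header';   // goes through the setter, so the attribute changes
el.foo = 'bar';     // plain JS property assignment: no accessor, no side effect

console.log(el.getAttribute('id'));   // "header"
console.log(el.getAttribute('foo'));  // null -- the attribute map never saw it
```

That's the whole trick: `id` only round-trips to the attribute because someone defined the getter/setter pair, while `foo` is just an ordinary property on the object.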
Food for thought: have a look at this paper[0] about structural regular expressions. The author (Rob Pike) sketches awk support in the last section. I remember using such regexps a while ago to tweak indented JSON and JSON-like data (the indentation made it easy to loop over hashes).
An awk with JSON support would, for the most part, need to be able to loop over hashes and arrays, and provide ways to travel in depth. So far regular awk can travel through arrays (record separator) and "in depth" (e.g. nested "arrays") via regular loops and the like. Probably easier to think about it with a few concrete examples, though.
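For instance, the "loop on hashes and travel in depth" part might look like this recursive walk (a made-up sketch of the idea, not any real awk extension):

```javascript
// Visit every leaf of a JSON document, recording its dotted path --
// roughly the per-record loop an awk-with-JSON would give you for free.
function walk(node, path = [], out = []) {
  if (node !== null && typeof node === 'object') {
    // Object.entries handles both hashes and arrays (array keys are "0", "1", ...).
    for (const [key, child] of Object.entries(node)) {
      walk(child, path.concat(key), out);
    }
  } else {
    out.push(path.join('.') + ' = ' + node);
  }
  return out;
}

const lines = walk(JSON.parse('{"user": {"name": "ann", "tags": ["x", "y"]}}'));
console.log(lines);  // [ 'user.name = ann', 'user.tags.0 = x', 'user.tags.1 = y' ]
```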
I also think that open source is better than closed source. Nothing to argue about.
What I was wondering when I read the same sentence you quoted: how many really serious security bugs like Heartbleed, CVE-2008-0166, or the xz drama happen without anyone finding out and publishing their findings?
In open source there are only two likely outcomes when someone notices a security issue: either they plan to hoard it for their own gain, or they tell the world about it and earn the kudos. The ability to earn kudos is a right proper pain in the arse, because a lot of things that have little to no impact on security are loudly touted as security bugs, and so a lot of time is wasted on triage of non-security issues in open source.
The problem with closed source is that there is a third possibility: ignore the problem and save on the cost of fixing it. The responsible disclosure regime we have now exists because companies almost always chose this option, i.e. denied it was a problem and refused to invest in fixing it. When the discoverer then released the bug anyway, they were so enamoured of this approach that they tried solving the disclosure problem by suing the researcher.
If you think companies don't still ignore security issues when they are given the choice, you are kidding yourself. The problem compounds because, when you do find a bug, open source makes it easy to see whether you can use it to create a security issue. In proprietary code that's much harder, so I'm 100% certain a fair number of potential security issues don't get patched because it isn't obvious how to exploit them. Nonetheless they are chinks in the armour, so they give the bad guys an excellent set of places to start looking.
Usually clients would connect to a Kubernetes svc to avoid the problem of changing IPs. Even for just a single pod I would do that.
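For an external DBMS specifically, one way to get a stable in-cluster name is an ExternalName Service (the service name and hostname here are made up for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-db            # pods connect to "orders-db" ...
spec:
  type: ExternalName
  externalName: db.example.com   # ... and cluster DNS resolves it to the external host
```

This only gives you a stable DNS alias, though; it does nothing about the pod's own outbound IP.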