I've used Popeye because it comes shipped with k9s, which is the best k8s "dashboard" in my opinion. I really like it but it tends to bug out a bit on my terminal.
What's the point of requiring the control plane to be locked down to authorized networks (IP address ranges)? Isn't Google responsible for DDoS protection, enforcing authentication controls (i.e. logging in with a Google account in the right Google group), and patching the control plane ASAP for any security vulnerabilities?
If you have a VPN, if you have heavy-duty network monitoring on your VPN endpoint, sure, limit it to the VPN. For the rest of us? Is every startup running GKE without heavy-duty VPN / network monitoring fundamentally insecure? That doesn't sound right to me. Security is supposed to be a spectrum, and it seems like black-and-white automated config checkers like these are more likely to provoke arguments internally ("but the tool said it's bad!!") than to help reach a nuanced understanding of why tradeoffs are made. No?
Likely security in layers. Why expose your control plane to attacks directly from the internet if you don't have to? It cuts down on login-attempt noise in the logs, since anything would have to come from the VPC. Other than the initial setup of a bastion -- that's the tradeoff -- it sounds like less to worry about for a small shop or a startup. Same for Cloud SQL or any other managed service.
I wish they would reuse the pattern of Cloud SQL, where you can get temporary access without manually handling the Authorized Networks setting. The Cloud SQL API lets you exchange your API access token for a short-lived TLS client certificate. This is done client-side by things like cloudsql-proxy[1] and the cloud-sql-jdbc-socket-factory Java library[2]. This way, I can access my Cloud SQL instance from my IDE, even though my list of authorized networks is empty.
I feel like the gke-gcloud-auth-plugin could do something very similar.
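For anyone who hasn't seen that flow, a minimal sketch with the v2 proxy (the instance connection name and port here are just placeholders):

    # The proxy exchanges your gcloud credentials for a short-lived client certificate,
    # so the instance's Authorized Networks list can stay empty.
    gcloud auth application-default login
    cloud-sql-proxy --port 5432 my-project:europe-west1:my-instance
    # Then point your IDE / psql at localhost:5432 as usual.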
But setting up a bastion has its own issues. Now I need to monitor and patch it regularly, which in a 100% Kubernetes shop means a completely separate setup just for it. How do I protect my bastion against DDoS?
Managed VPN services have their own costs and worries. Don't want to accidentally pay for tunneled Netflix or YouTube? More configuration and maintenance.
What kinds of attacks from the Internet do I need to worry about here? Basically just zero-days? But that's true for any bastion or VPN as well, and again, Google is managing the control plane, so they're empowered to roll out a patch faster than a small customer could.
You probably already have CI/CD, so a scheduled job running something like Terraform would notice that your VM OS image just got a new version and automatically replace your bastion. That image seems to update every few days. You don't lose any data since persistent disk(s) are attached. I'd be surprised if anyone with automation still manually logs into machines to run apt-get update and upgrade -- cloud-init and/or crontab should do that for you.
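The unattended part can be as small as a cron entry baked into the bastion image or dropped in by a startup script (a sketch assuming a Debian/Ubuntu image; the file name is made up and the package manager varies by distro):

    # /etc/cron.d/bastion-patching -- hypothetical file; nightly package upgrade at 04:00
    0 4 * * * root apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get -y upgrade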
If there aren't any DNS entries pointing to my bastion host(s), then I'd find it unlikely that a DDoS would ever specifically be directed to them. Pretty easy to recreate them in another region, and/or put them behind something like Cloudflare.
The point is to limit the attack surface, following zero-trust and least-privilege principles. This particular rule also follows GKE networking best practices [1]. However, we understand that not all rules apply to all environments. That is why it is possible to exclude selected rules in the tool or to provide a set of your own custom policies. If Security Command Center is used, it is also possible to mute findings there.
Because you don't need to, plus you're covered against 0days. If you're creating infra on GCP then I'm sure it's not too much of an effort to use Google's own Cloud VPN or to create a bastion instance to ssh/wireguard into (it only needs to be very small).
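For reference, the cluster-side lockdown the rule asks for is roughly one command (a sketch; the cluster name and CIDR are placeholders for your own cluster and VPC/VPN range):

    gcloud container clusters update my-cluster \
      --enable-master-authorized-networks \
      --master-authorized-networks=10.0.0.0/8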
Would you feel more comfortable if you had an in-cluster agent that dials into a Google-managed API and receives commands from it? This way your control plane can be private and you don't have to manage the authorized networks list.
That is a good point, but also a long way off. Sometimes such tools do get merged into a core product. The tool and policies were created by Google engineers who are not part of the GKE product development teams.
For now, I suggest running the tool in a scheduled, serverless manner and configuring the evaluation output to go to Security Command Center. That way the process is fully automated and the results are visible in the web console (as findings in Security Command Center).
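One way to wire that up, as a rough sketch (the job and service-account names, IMAGE, PROJECT_ID, REGION and SA_EMAIL are all placeholders, and the tool's own flags are omitted; the service account needs permission to run the job, and the tool needs permission to write Security Command Center findings):

    # Wrap the tool in a Cloud Run job.
    gcloud run jobs create gke-policy-check --image=IMAGE --region=REGION
    # Trigger the job nightly via Cloud Scheduler.
    gcloud scheduler jobs create http gke-policy-check-nightly \
      --location=REGION --schedule="0 3 * * *" --http-method=POST \
      --uri="https://run.googleapis.com/v2/projects/PROJECT_ID/locations/REGION/jobs/gke-policy-check:run" \
      --oauth-service-account-email=SA_EMAIL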
The tool is developed by Google Cloud engineers working in professional services. We create such tools to quickly help our customers (and often ourselves) with particular challenges. Sometimes such initiatives get merged into a core product.
The disclaimer is there to say that this tool is maintained by the community, not by official product support.
Google also has (or had? it's been 7 years since I was there) pretty strict rules around working on your own OSS even in your free time, so the easiest way is (was?) for Google to own the IP and apply an open-source license.
I somehow got the impression that kind of restriction was illegal in California, although I guess it's like a lot of things in the legal domain: whoever has the most lawyers wins.
California applies limits to such restrictions, but does allow them when the side projects in question relate to the employer's actual or demonstrably anticipated business, among other exceptions. A tool specific to Google Kubernetes Engine, as we're discussing here, plainly relates to Google's business.
My memory of Google US employment legalese - note I have not worked for them for over 7 years and am not speaking for them here - is that they acknowledge limits to their IP assignment provisions which are consistent with California law. Any Googler who is confident that those limits protect their ownership by default of their side project does not have to seek Google's approval in order to own it, even according to the contract wording.
But Google's business is so broad that it's often legitimately debatable (and sometimes beyond the knowledge of the Googler doing the work) as to whether something would be in scope. So getting their approval, which can come either with explicit assignment of rights back to the Googler or explicit permission to release under Google copyright, is often the prudent approach to minimize undesired risks.
As many people predicted years ago (I can't claim to know k8s that well, in fact I suck at it), eventually we'll go full circle and k8s config will just become its own specialized programming language.
Maybe we should stop moving these things so agonizingly slowly and through the path of natural (and did I mention slow as molasses) evolution, and just skip to the endgame that most of us know will inevitably come?
Apparently not.
I'd like to see Google be braver here. They are at the forefront of k8s in many ways (or so it seems; maybe I am wrong?). Just make a specialized programming language with a good compiler / linter and let's all collectively be better for it.
There's no reason to create an entirely new language. Since k8s manifests can be represented in JSON, you can use existing templating languages like Jsonnet [1] to generate them for you. All we need is an official library.
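As a small sketch of what that looks like in practice (the function, names, and image below are made up for illustration; there is no official library yet):

    # Render a Deployment with jsonnet and apply the resulting JSON.
    cat > deployment.jsonnet <<'EOF'
    local deployment(name, image, replicas=2) = {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: { name: name },
      spec: {
        replicas: replicas,
        selector: { matchLabels: { app: name } },
        template: {
          metadata: { labels: { app: name } },
          spec: { containers: [{ name: name, image: image }] },
        },
      },
    };
    deployment('web', 'nginx:1.25')
    EOF
    jsonnet deployment.jsonnet | kubectl apply -f -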
There was a tool called ksonnet for this (jsonnet + a default library for k8s configs), but the company behind it got bought and it's no longer maintained: https://github.com/ksonnet/ksonnet
JSON is terrible to write by hand. YAML is terrible too. There should be an official, sane language that maps 1:1 to YAML. We don't write Java code in YAML for a reason.
Why do you need a 'specialized programming language' when kube, for most people, is just waiting for various objects via the API?
I mean, sure, at the moment it can be a bit painful to generate your 'Deployment' object in JSON or YAML or whatever, but then it's just a POST to the kube API and it's done. You can do that in any language you want.
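For instance, stripped down to the raw API call, a sketch assuming a GKE cluster and gcloud credentials (deployment.json being whatever your generator produced):

    APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
    TOKEN=$(gcloud auth print-access-token)   # GKE accepts Google OAuth access tokens
    # -k skips CA verification for brevity; pass the cluster CA with --cacert in real use.
    curl -sSk -X POST "$APISERVER/apis/apps/v1/namespaces/default/deployments" \
      -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
      --data @deployment.json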
You're missing the part where people do this periodically and can never truly remember all the gotchas, so over time they make small mistakes that at some point make your cluster fall over with cryptic error messages. I've seen it (but as I said above, I am by no means a k8s pro).
That's why there are these linters.
Having a small super-specialized language that catches errors before "compiling" your configuration to YAML will help hugely.
Red Hat has a similar tool called Red Hat Insights Advisor for OpenShift, which they provide as a free service with OpenShift subscriptions, so you don't need to install anything.
https://github.com/derailed/popeye