Hacker News | xcq1's comments


Another relevant paper: https://arxiv.org/abs/1612.03238


Thanks for your helpful insight.

Since everyone agrees on this point, I now consider that a fair argument. I just didn't want to take it on faith without a little research first. In fact, I learned more than I expected from everyone's responses.


That's what we're here for, glad I could help!


The VPN access works over remote desktop. Should've probably made that clearer from the start.

I don't know if this is specific to my workplace, but you'd have to toggle the VPN explicitly on and off, with another password separate from your user account. That's along with the usual drill of having yet another password to access the machine and locking it when you're away. I agree it ultimately comes down to trust, however.


They weren't entirely unmanaged devices as they had to fulfill additional criteria.


Do you mean 100% remote or only occasionally?

If it's the former, I'd understand; if it's the latter, that sounds like a lot of additional effort.


100% remote. And I agree, my setup would be a huge pain for occasional stints at home.


I don't question the security benefit. I think you're absolutely right that the users always come first. The production system and its data were never running inside the company network and are additionally protected.

I feel it'll be a loss of usability, since they want a one-size-fits-all laptop. The model I've seen is noisy and a bit heavy, and suddenly having to carry one every single day irks me a bit. Having to (un)plug monitors and peripherals at home will be additional effort (but is explicitly allowed). I'm not saying it's not worth it (and I'm aware this is complaining from a comfortable position), but it is a loss of comfort.


A few suggestions that might help:

1) Get docks for home and work, so it's just one step to connect peripherals. It's actually a lot more convenient than having separate machines for work and home.

2) Find out if you can use a virtual desktop setup, where everything runs on your work machine but you use RDP to control it. A competent IT dept should be able to set that up in a way that's no less secure.

3) If you're in the US, your company can't force you to carry a heavy laptop if you have any issues with strength or mobility. If you want to exploit this, you can ask your doctor for a note saying that you shouldn't carry a laptop to/from work. This is actually probably true for the many people who have issues with back pain.


1) Thanks, docks at work are provided, but I'll check whether they will also provide one for working at home.

2) This is more or less the way it's already done. The plan now is to replace every desktop PC with only one laptop per employee company-wide. Which is why I was asking if this is such a common practice, especially since the company tries hard to come off as modern and hip in other regards.

3) Very good point, I'll look into that. I'm not in the US, but similar regulations probably apply here.


For the RDP solution, can't you just log in from a home computer?

To be clear, I'm not just talking about logging into a VPN. I'm talking about streaming the display output from a work machine. No programs or data from your work machine would be running at home.


Yes, that is precisely what I do right now. But I need to log into the VPN, since the RDP server is only available inside the company network and not on the public internet. Unless you mean logging in over my home wifi to avoid the dock.


I should clarify: it was not expected, it was a possibility. If you wanted a laptop instead, that was no problem. Several colleagues already have some, of varying quality.


Thanks for your input. Speaking for myself, I've always tried to keep everything separate, though not at such a deep level. Sometimes it can be very practical to just switch to a VM or fire up a different browser to take a look at something.

Up until now I haven't noticed any restrictive bloatware on company machines, so that's a plus.


Didn't Gödel explicitly prove that there are some statements that cannot be proven this way? Is it just assumed that these won't be relevant for the hard problems humans cannot solve, or am I missing something here?


1. That rarely comes up in practical problems. 2. For many reasons (and completeness is one), the machine cannot prove the next version is better, and thus must discard it.


Anyone here actually use Java containers in production?

Sadly, the article offers very little practical advice. We've tried running some small Java 8 Spring Boot containers in Kubernetes, configured for a max of ~50M heap and ~150M total off-heap, yet they use more than double that memory, so we end up with either a lot of OOMKills or overly large memory limits.


Yes, we do. It's actually a rather small setup (13 dedicated servers, about 100 containers).

The most basic advice is probably: "-Xmx" does not represent the actual upper limit for memory usage. We most often set the JVM heap to only 50% of the assigned memory.
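As a rough illustration (a hypothetical sketch, not our exact launch script), deriving the heap from the container's limit along those lines might look like this:

```shell
# Hypothetical sketch: give the JVM heap only ~50% of the container's
# memory limit, leaving headroom for metaspace, thread stacks and
# native allocations. Falls back to 512 MiB when no cgroup v1 file exists.
LIMIT_BYTES=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null \
              || echo $((512 * 1024 * 1024)))
HEAP_MB=$(( LIMIT_BYTES / 1024 / 1024 / 2 ))
echo "java -Xmx${HEAP_MB}m -jar app.jar"
```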

Somebody mentioned https://github.com/cloudfoundry/java-buildpack-memory-calcul... which seems pretty interesting.


You may be hitting the same bug we did: memory freed by the GC of a containerised JVM was never returned to the host machine.

IIRC it was due to behaviour in glibc >= 2.10: malloc arenas are pooled per thread. You need to tune that down via MALLOC_ARENA_MAX; usually people advise 4 or 2.

  # OpenJDK 8 bug: HotSpot leaking memory in long-running requests.
  # Workaround: disabling HotSpot completely with -Xint slows the
  # memory growth, but does not halt it. Also set:
  MALLOC_ARENA_MAX=4
Ensure that your java process is launched with that environment variable (export it in the same shell, or precede your java command with it).

If you happen to be using Tomcat, I recommend putting:

    export MALLOC_ARENA_MAX=4
into:

    /usr/local/tomcat/bin/setenv.sh
As for how much memory you allocate to your containers: as of JRE 8u131 you can make this far more container-friendly:

    -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
This is equivalent to saying:

    -XX:MaxRAM=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
https://github.com/moby/moby/issues/15020
https://github.com/docker-library/openjdk/issues/57
https://bugs.openjdk.java.net/browse/JDK-8170888
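As a hypothetical illustration of what that flag buys you: HotSpot derives its default max heap from MaxRAM, so once the cgroup limit is picked up, the default heap lands at roughly MaxRAM / MaxRAMFraction (which, if I remember the OpenJDK 8 default correctly, is 4):

```shell
# Assumed numbers for illustration: a 1 GiB cgroup limit and the
# OpenJDK 8 default -XX:MaxRAMFraction=4.
MAX_RAM=$((1024 * 1024 * 1024))
MAX_RAM_FRACTION=4
echo "default max heap ~ $(( MAX_RAM / MAX_RAM_FRACTION / 1024 / 1024 )) MiB"
# prints: default max heap ~ 256 MiB
```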


We use Java containers in production on a very large scale system. We are actively migrating stuff away from Java to Go, because the JVM is a nightmare in containers. Sure we could make do and configure the JVM to hell and hack things to get it to work...but why bother? We have to allocate hundreds of MB of memory for a simple HTTP server. The tons of configuration and hacks are maintenance nightmares. It's been a terrible experience all around. Java is bloated, it's a resource hog, and the ecosystem doesn't care about cold start times. It's just a terrible fit for a containerized environment.


Spring Boot is, by design, eager to load everything it senses you might need. Most of the memory usage is not Spring itself; it's the libraries it pulls in on your behalf.

In Spring Boot 2, I'm told you can use functional bean registration to cut down on RAM usage. Not sure how it works.

But really, it comes down to deciding if you need a dependency or not.

As is normal for questions about Spring Boot performance, Dave Syer has done a lot of investigating: https://github.com/dsyer/spring-boot-memory-blog


Where I work, most of our dockerized services are Spring Boot in Kubernetes; they do need more memory than what you've posted and generally run with about 300M~600M usage, depending on what they need to do.

You can also use smaller frameworks (Vert.x? Javalin? possibly Spring Boot 2). I hope that with Java 9 we won't see this amount of memory usage anymore; our organisation isn't there yet, though.


Yeah, we deploy containerized Jenkins environments for almost 100 teams on VMs running Docker. These are massive-heap containers (20+ GB in some cases). Probably not the best use of Docker, but we're actually doing pretty well. We're working towards migrating to an OpenShift environment and then evaluating some new tech from CloudBees in this area.


Honestly, I don't feel there is any need to run Java in containers. The war/jar file is its own container with its own dependencies, and the JVM makes the same syscalls inside a Docker/Kubernetes container as outside one.

In fact I would rather look at serverless architecture before considering docker/Kubernetes.


When you run a polyglot stack with Java/Python/Go/Node on top of a cluster of machines, you will love having them containerized and uniform. It makes scripting and CI so much easier.

Or, when you have a legacy app that relies on Java 6 but you want everything else running on Java 8, the ability to drop everything into a container with its runtime is a lifesaver.

source: I'm the devops person that's responsible for making this work


We already run a polyglot stack at our company, and we use Docker (nvidia-docker) for our Python environment. With Java there is no need, and it is a lot less work updating and upgrading the JVM and our Java applications. I would use Docker for Java 6, though.


cgroup limits can still be beneficial; consider, for example, constraining the (by default unlimited) metaspace.
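For instance (a hypothetical command line with made-up sizes, not a recommendation for specific values), explicitly bounding the pools that -Xmx does not cover might look like this:

```shell
# Hypothetical sketch: cap heap, metaspace, direct buffers and thread
# stacks so the container limit can be sized as roughly their sum
# plus some native overhead.
java -Xmx256m \
     -XX:MaxMetaspaceSize=128m \
     -XX:MaxDirectMemorySize=64m \
     -Xss512k \
     -jar app.jar
```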


We run in production with conservative memory limits.

If you want to be more clever, you could try this: https://github.com/cloudfoundry/java-buildpack-memory-calcul...


If you ditch Spring Boot and wire Spring yourself, startup time and footprint should be smaller. But 250M may be a bit optimistic.


What are the other common JVM memory offenders?

(Electron, I guess, being the obvious JS-to-native example)

(And isn't Spring Boot the lightweight solution to the Spring memory problem? /s)

