
Jetty server is occasionally freezing #399

Open
barbeau opened this issue Mar 14, 2017 · 2 comments
@barbeau
Member

barbeau commented Mar 14, 2017

Summary:

Twice in the last few weeks I've seen Jetty on both mobullity2.forest.usf.edu and mobullity3.forest.usf.edu (i.e., the two production servers behind the load balancer) become unresponsive. This manifests as visits to http://maps.usf.edu hanging and eventually timing out with no response. Each time, mobullity.forest.usf.edu (our dev server) was unaffected (i.e., it was still responsive).

I'm guessing this may have something to do with the automated graph and/or app updates configured via Jenkins, which would explain why both servers stopped working simultaneously.

So far the workaround each time has been to manually restart mobullity2.forest.usf.edu and mobullity3.forest.usf.edu. When the server restarts, Jetty works fine again and everything is back to normal.

Steps to reproduce:

Not sure - maybe something related to Jenkins updates?

Expected behavior:

Production site should be stable and always responsive

Observed behavior:

mobullity2.forest.usf.edu and mobullity3.forest.usf.edu both freeze and become unresponsive, bringing down the production site.

Platform:

mobullity2.forest.usf.edu and mobullity3.forest.usf.edu

cc @anilreddy1

@barbeau barbeau added the bug label Mar 14, 2017
@barbeau
Member Author

barbeau commented Mar 15, 2017

It's possible the log rotation (see #397) has something to do with this. If the log files are being locked while Jetty is trying to write to them, that could cause it to freeze.
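
If this happens again, a thread dump taken while the server is hung should show whether Jetty's threads are actually blocked on logging. A minimal sketch, assuming the JDK tools are installed on the production servers and <jetty-pid> is the Jetty process ID:

# Find the Jetty process ID
ps aux | grep jetty

# Capture a thread dump; repeat a few times ~10 seconds apart
jstack -l <jetty-pid> > /tmp/jetty-threads-$(date +%s).txt

If the dumps show many threads BLOCKED on the same logging/appender lock, that would point at log rotation; if the JVM instead appears stalled in garbage collection, that would point at a heap problem.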

@anilreddy1
Contributor

USF Maps is freezing because of an out-of-memory error:
java.lang.OutOfMemoryError: Java heap space

From the JVM options it looks like we are not explicitly configuring a garbage collector, so dead objects are not being reclaimed fast enough and we eventually run out of heap space.
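
To confirm this, it may help to have the JVM record evidence when the heap fills up. A sketch of the relevant HotSpot flags, assuming a Java 7/8-era JVM (flag names changed in Java 9+) and an illustrative log path:

-XX:+HeapDumpOnOutOfMemoryError
Write a heap dump (.hprof) when an OutOfMemoryError is thrown, so the objects filling the heap can be inspected afterwards (e.g., with Eclipse MAT or jhat).

-XX:HeapDumpPath=/var/log/jetty/heapdump.hprof
Where to write the dump (path is illustrative).

-Xloggc:/var/log/jetty/gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
Log each GC event with timings, which shows whether the collector is running constantly right before the freeze.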

Setting the JVM options to use a generational garbage collector such as G1 may solve the problem.

Options that need to be set for G1 GC:

-XX:+UseG1GC
Use the Garbage First (G1) Collector

-XX:MaxGCPauseMillis=n
Sets a target for the maximum GC pause time. This is a soft goal, and the JVM will make its best effort to achieve it.

-XX:InitiatingHeapOccupancyPercent=n
Percentage of the (entire) heap occupancy to start a concurrent GC cycle. It is used by GCs that trigger a concurrent GC cycle based on the occupancy of the entire heap, not just one of the generations (e.g., G1). A value of 0 denotes 'do constant GC cycles'. The default value is 45.

-XX:NewRatio=n
Ratio of old/new generation sizes. The default value is 2.

-XX:SurvivorRatio=n
Ratio of eden/survivor space size. The default value is 8.

-XX:MaxTenuringThreshold=n
Maximum value for tenuring threshold. The default value is 15.

-XX:ParallelGCThreads=n
Sets the number of threads used during parallel phases of the garbage collectors. The default value varies with the platform on which the JVM is running.

-XX:ConcGCThreads=n
Number of threads concurrent garbage collectors will use. The default value varies with the platform on which the JVM is running.

-XX:G1ReservePercent=n
Sets the amount of heap that is reserved as a false ceiling to reduce the possibility of promotion failure. The default value is 10.

-XX:G1HeapRegionSize=n
With G1 the Java heap is subdivided into uniformly sized regions. This sets the size of the individual sub-divisions. The default value of this parameter is determined ergonomically based upon heap size. The minimum value is 1Mb and the maximum value is 32Mb.
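
Putting this together, a minimal sketch of how it might look if Jetty is started via jetty.sh with JAVA_OPTIONS (the heap size and pause target below are illustrative, not tuned values for our servers):

JAVA_OPTIONS="-Xms2g -Xmx2g \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=200 \
  -XX:InitiatingHeapOccupancyPercent=45 \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/log/jetty/heapdump.hprof"

The same -XX flags can instead be listed one per line in Jetty's start.ini (after --exec), if that is how these servers are launched. Note that G1 only helps if the heap is large enough for the live set; if there is an actual memory leak, the heap dump should show what is accumulating.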
