About a year back I wrote a post "Does Tomcat bite more than it can chew" about how Tomcat accepts more TCP connections than it can handle, and resets them later on. I also wrote a follow-up post "How to stop biting, when you cant chew more.." explaining how the UltraESB handles this using Apache HttpComponents NIO underneath. I raised the issue on the Tomcat mailing list too, but found that most of the folks over there did not like what I had to say and refused to change Tomcat.
This week one of our clients was running a load test with the UltraESB in front of a Tomcat instance, wanting to measure the performance of the UltraESB applying WS-Security signatures over a Tomcat service. However, at 1000 concurrent users, the UltraESB started to report XML parse exceptions with "Premature end of file" errors for the responses.
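As a side note on the symptom itself: an XML parser that is handed an empty response body will typically fail with exactly this message. The snippet below is only a minimal illustration using the standard JAXP DOM parser (not the UltraESB's actual parsing code), feeding it a zero-length stream such as a response with no payload:

import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.xml.sax.SAXParseException;

public class EmptyBodyParseDemo {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        try {
            // parse a zero-length body, i.e. a response sent with Content-Length: 0
            builder.parse(new ByteArrayInputStream(new byte[0]));
        } catch (SAXParseException e) {
            // the default JDK (Xerces) parser typically reports "Premature end of file."
            System.out.println(e.getMessage());
        }
    }
}

This of course only reproduces the symptom; the interesting question was why the responses were empty in the first place.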
After detailed investigations to reproduce the issue in our environment, we found the culprit to be Tomcat. However, proving the issue was not that simple, so we used a standalone Java client to issue fixed-size asynchronous HTTP POST requests to Tomcat and measured the size of the responses received. The message size used was 5120 bytes, and thus a response of equal size was expected.
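The client we used was along these lines. This is only a minimal sketch, assuming Apache HttpComponents HttpAsyncClient 4.x; the endpoint URL, payload and concurrency level are placeholders standing in for the actual test setup:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.concurrent.FutureCallback;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;
import org.apache.http.util.EntityUtils;

public class FixedSizePostTest {

    private static final int MESSAGE_SIZE = 5120;  // bytes per request; a response of equal size is expected
    private static final int CONCURRENCY = 1000;   // matches the concurrency of the load test

    public static void main(String[] args) throws Exception {
        // build a payload of exactly MESSAGE_SIZE bytes
        StringBuilder sb = new StringBuilder(MESSAGE_SIZE);
        for (int i = 0; i < MESSAGE_SIZE; i++) {
            sb.append('A');
        }
        final String payload = sb.toString();

        final CountDownLatch latch = new CountDownLatch(CONCURRENCY);
        final AtomicInteger shortResponses = new AtomicInteger();

        CloseableHttpAsyncClient client = HttpAsyncClients.createDefault();
        client.start();
        try {
            for (int i = 0; i < CONCURRENCY; i++) {
                HttpPost post = new HttpPost("http://localhost:8080/echo"); // hypothetical echo service
                post.setEntity(new StringEntity(payload, ContentType.TEXT_PLAIN));
                client.execute(post, new FutureCallback<HttpResponse>() {
                    public void completed(HttpResponse response) {
                        try {
                            byte[] body = EntityUtils.toByteArray(response.getEntity());
                            if (body.length != MESSAGE_SIZE) {
                                // a 200 OK with a missing or truncated body - the behaviour described below
                                shortResponses.incrementAndGet();
                            }
                        } catch (Exception e) {
                            shortResponses.incrementAndGet();
                        } finally {
                            latch.countDown();
                        }
                    }
                    public void failed(Exception ex) { shortResponses.incrementAndGet(); latch.countDown(); }
                    public void cancelled() { latch.countDown(); }
                });
            }
            latch.await();
            System.out.println("Responses shorter than expected: " + shortResponses.get());
        } finally {
            client.close();
        }
    }
}

With a harness like this, the problem shows up as responses whose body length is zero despite the 200 status code.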
However, under load, Tomcat accepted a 5K request and replied with an HTTP status code of 200, but without any payload: the Content-Length header it sent back was "0". Here is what Wireshark reported:
Here are the response headers that Tomcat sent back:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Length: 0
Date: Thu, 07 Nov 2013 10:15:14 GMT
Connection: close

It's also interesting to note that Tomcat sometimes returns error responses stating that an internal error occurred, but with an HTTP status code of 200 instead of the expected 500. Here is an example of such a payload:
<html><head><title>Apache Tomcat/7.0.28 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 500 - </h1><HR size="1" noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> <u></u></p><p><b>description</b> <u>The server encountered an internal error () that prevented it from fulfilling this request.</u></p><HR size="1" noshade="noshade"><h3>Apache Tomcat/7.0.28</h3></body></html>

It would have been so much better for many if Tomcat did not bite fast when it cannot chew!
2 comments:
Asankha,
Have you found work around? Will increasing httpConnections solve the issue?
Thanks,
Dimitri
Have you tried to increase httpConnections or any other workaround?
Dimitri