
CherryPy 3 request_queue_size

10/02/07, by fumanchu

Categories: CherryPy, WSGI

Well, that was instructive. Leaving server.request_queue_size at the default 5:

C:\Python24\Lib\site-packages>python cherrypy\test\benchmark.py
Starting CherryPy app server...
Started in 1.10800004005 seconds

Client Thread Report (1000 requests, 14 byte response body, 10 server threads):

threads | Completed | Failed | req/sec | msec/req | KB/sec |
     10 |      1000 |      0 |  736.81 |    1.357 | 119.36 |
     20 |      1000 |      0 |  436.07 |    2.293 |  70.64 |
     30 |      1000 |      0 |  348.38 |    2.870 |  56.44 |
     40 |      1000 |      0 |  233.10 |    4.290 |  37.76 |
     50 |      1000 |      0 |  296.77 |    3.370 |  48.08 |
Average |    1000.0 |    0.0 | 410.226 |    2.836 | 66.456 |

Client Thread Report (1000 requests, 14 bytes via staticdir, 10 server threads):

threads | Completed | Failed | req/sec | msec/req | KB/sec |
     10 |      1000 |      0 |  421.73 |    2.371 |  87.30 |
     20 |      1000 |      0 |  374.87 |    2.668 |  77.60 |
     30 |      1000 |      0 |  306.71 |    3.260 |  63.49 |
     40 |      1000 |      0 |  240.08 |    4.165 |  49.70 |
     50 |      1000 |      0 |  170.03 |    5.881 |  35.20 |
Average |    1000.0 |    0.0 | 302.684 |    3.669 | 62.658 |

Size Report (1000 requests, 50 client threads, 10 server threads):

    bytes | Completed | Failed | req/sec | msec/req |   KB/sec |
       10 |      1000 |      0 |  187.98 |    5.320 |    29.70 |
      100 |      1000 |      0 |  207.45 |    4.820 |    51.45 |
     1000 |      1000 |      0 |  186.89 |    5.351 |   210.81 |
    10000 |      1000 |      0 |  228.12 |    4.384 |  2262.07 |
   100000 |      1000 |      0 |  245.60 |    4.072 | 24022.01 |
100000000 |      1000 |     10 |   20.83 |   48.001 | 20358.12 |

Upping server.request_queue_size to 128:

C:\Python24\Lib\site-packages>python cherrypy\test\benchmark.py
Starting CherryPy app server...
Started in 1.10700011253 seconds

Client Thread Report (1000 requests, 14 byte response body, 10 server threads):

threads | Completed | Failed | req/sec | msec/req |  KB/sec |
     10 |      1000 |      0 |  745.38 |    1.342 |  120.75 |
     20 |      1000 |      0 |  772.32 |    1.295 |  125.12 |
     30 |      1000 |      0 |  654.11 |    1.529 |  105.97 |
     40 |      1000 |      0 |  929.02 |    1.076 |  150.50 |
     50 |      1000 |      0 |  641.03 |    1.560 |  103.85 |
Average |    1000.0 |    0.0 | 748.372 |   1.3604 | 121.238 |

Client Thread Report (1000 requests, 14 bytes via staticdir, 10 server threads):

threads | Completed | Failed | req/sec | msec/req |  KB/sec |
     10 |      1000 |      0 |  547.89 |    1.825 |  113.41 |
     20 |      1000 |      0 |  588.10 |    1.700 |  121.74 |
     30 |      1000 |      0 |  704.42 |    1.420 |  145.82 |
     40 |      1000 |      0 |  547.89 |    1.825 |  113.41 |
     50 |      1000 |      0 |  516.96 |    1.934 |  107.01 |
Average |    1000.0 |    0.0 | 581.052 |   1.7408 | 120.278 |

Size Report (1000 requests, 50 client threads, 10 server threads):

    bytes | Completed | Failed | req/sec | msec/req |   KB/sec |
       10 |      1000 |      0 |  622.35 |    1.607 |    98.33 |
      100 |      1000 |      0 |  604.74 |    1.654 |   149.37 |
     1000 |      1000 |      0 |  667.74 |    1.498 |   752.54 |
    10000 |      1000 |      0 |  890.31 |    1.123 |  8837.25 |
   100000 |      1000 |      0 |  728.44 |    1.373 | 71247.09 |
100000000 |      1000 |    202 |   12.81 |   78.094 |     None |
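The change itself is a one-line config tweak. Here is a minimal sketch of a CherryPy 3 app with the larger backlog (not the benchmark script itself, just an illustration; the port value is arbitrary):

import cherrypy

class Root(object):
    def index(self):
        return "Hello, world!"
    index.exposed = True

# Raise the listening socket's backlog from the default of 5 to 128.
cherrypy.config.update({
    'server.socket_port': 8080,
    'server.request_queue_size': 128,
})

cherrypy.quickstart(Root())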

3 comments

Comment from: Sylvain Hellegouarch [Visitor] · http://trac.defuze.org

Oh my... This is quite impressive. I had never looked much into the socket queue size, but I certainly will from now on.

Thanks Robert.

10/02/07 @ 08:20
Comment from: Dirk Rothe [Visitor]

Since most pages (except static content) are usually DB/templating-constrained at around 100-200 msec in my case, this will only be a theoretical speedup.

But I would be interested in some technical background - what is this request_queue_size stuff?

10/02/07 @ 17:07
Comment from: fumanchu [Member] Email

The short answer is:

self.socket.listen(self.request_queue_size)

It's the 'backlog' for the listening socket; that is, how many connections will queue up on the socket waiting for the server to accept them.
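In plain-socket terms (an illustrative sketch, not CherryPy's actual server code), the backlog works like this:

import socket

# listen()'s backlog argument is what request_queue_size feeds into:
# how many not-yet-accepted connections the OS will queue on the
# listening socket before refusing new ones.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('0.0.0.0', 8080))
s.listen(128)                 # analogous to request_queue_size = 128

while True:
    conn, addr = s.accept()   # pops the next queued connection off the backlog
    conn.close()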

10/02/07 @ 17:36
