Category: WHELPS

07/11/08

Permalink 04:16:50 pm, by fumanchu, 228 words, English (US)
Categories: WHELPS

Writing High-Efficiency Large Python Systems--Lesson #3: Banish lazy imports

Lazy imports can be done either explicitly, by moving import statements inside functions (instead of at the global level), or by using tools such as LazyImport from egenix. Here's why they suck:

> fetchall (PgSQL:3227)
--> __fetchOneRow (PgSQL:2804)
----> typecast (PgSQL:874)
... 26703 function calls later ...
----< typecast (PgSQL:944): 
      <mx.DateTime.DateTime object for
       '2005-08-15 00:00:00.00' at 2713120>
    3477.321ms

Yes, folks, that single call took 3.4 seconds to run! That would be shorter if I weren't tracing calls, but...ick. Don't make your first customer wait like this in a high-performance app. The solution, if you're stuck with lazy imports in code you don't control, is to force them to be imported early:

mx.DateTime.Parser.DateFromString('2001-01-01')

Now that same call:

> fetchall (PgSQL:3227)
--> __fetchOneRow (PgSQL:2804)
----> typecast (PgSQL:874)
... 7 function calls later ...
----< typecast (PgSQL:944): 
      <mx.DateTime.DateTime object for
       '2005-08-15 00:00:00.00' at 27cf360>
    1.270ms

That's 1/3815th the number of function calls and 1/2738th the run time. I am not missing decimal points.

Not only is this time-consuming for the first requestor, it also lends itself to nasty interactions when a second request starts before the first is done with all the imports. Module import is one of the least thread-safe parts of almost any app, because everyone expects all imports to happen in the main thread at process start.
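
Here's a minimal sketch of the safe pattern: do the warm-up once, in the main thread, at process start, before any worker threads exist. The handle_requests stub and the thread count are just placeholders; the mx.DateTime call is the same warm-up shown above and assumes the egenix mx package is installed.

import threading

def warm_up():
    """Force lazy imports once, in the main thread, before serving requests."""
    # Any call that touches the lazily-imported modules will do; this is the
    # mx.DateTime warm-up from above.
    from mx.DateTime import Parser
    Parser.DateFromString('2001-01-01')

def handle_requests():
    pass  # stand-in for your real request-handling loop

if __name__ == '__main__':
    warm_up()  # all heavy imports happen here, single-threaded
    workers = [threading.Thread(target=handle_requests) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()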

I'm trying very hard not to rail at length about WSGI frameworks that expect to start up applications during the first HTTP request...but it's so tempting.

07/03/08

Permalink 05:37:31 pm, by fumanchu, 319 words, English (US)
Categories: WHELPS

Writing High-Efficiency Large Python Systems--Lesson #2: Use nothing but local syslog

You want to log everything, but you'll find that even for the simplest requests with the fastest response times, a simple file-based access log can add 10% to your response time (which usually means ~91% as many requests per second, since 1/1.1 ≈ 0.91). The fastest substitute we've found for file-based logging in Python is syslog. Here's how easy it is:

import syslog
# facility and priority are syslog constants, e.g. syslog.LOG_LOCAL0 | syslog.LOG_INFO
syslog.syslog(facility | priority, msg)

Nothing's faster, at least nothing that doesn't require telling Operations to compile a new C module on their production servers.

"But wait!" you say, "Python's builtin logging module has a SysLogHandler! Use that!" Well, no. There are two reasons why not. First, because Python's logging module in general is bog-slow--too slow for high-efficiency apps. It can make many function calls just to decide it's not going to log a message. Second, the SysLogHandler in the stdlib uses a UDP socket by default. You can pass it a string for the address (probably '/dev/log') and it will use a UNIX socket just like syslog.syslog, but it'll still do it in Python, not C, and you still have all the logging module overhead.

Here's a SysLogLibHandler if you're stuck with the stdlib logging module:

import logging
import syslog

class SysLogLibHandler(logging.Handler):
    """A logging handler that emits messages to syslog.syslog."""
    priority_map = {
        10: syslog.LOG_NOTICE,   # logging.DEBUG
        20: syslog.LOG_NOTICE,   # logging.INFO
        30: syslog.LOG_WARNING,  # logging.WARNING
        40: syslog.LOG_ERR,      # logging.ERROR
        50: syslog.LOG_CRIT,     # logging.CRITICAL
        0: syslog.LOG_NOTICE,    # logging.NOTSET
        }

    def __init__(self, facility):
        self.facility = facility
        logging.Handler.__init__(self)

    def emit(self, record):
        syslog.syslog(self.facility | self.priority_map[record.levelno],
                      self.format(record))

I suggest using syslog.LOG_LOCAL0 - syslog.LOG_LOCAL7 for the facility arg. If you're writing a server, use one facility for access log messages and a different one for error/debug logs. Then you can configure syslogd to handle them differently (e.g., send them to /var/log/myapp/access.log and /var/log/myapp/error.log).
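
For example, here's a sketch of wiring that up; the logger names and the syslogd config lines are just illustrative:

import logging
import syslog

# One facility for the access log, another for errors/debugging.
access_log = logging.getLogger('myapp.access')
access_log.addHandler(SysLogLibHandler(syslog.LOG_LOCAL0))
access_log.setLevel(logging.INFO)

error_log = logging.getLogger('myapp.error')
error_log.addHandler(SysLogLibHandler(syslog.LOG_LOCAL1))
error_log.setLevel(logging.WARNING)

access_log.info('GET /index.html 200 0.004s')
error_log.error('database connection lost')

# Corresponding syslogd config (syntax varies by syslogd implementation):
#   local0.*    /var/log/myapp/access.log
#   local1.*    /var/log/myapp/error.log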

Permalink 05:02:59 pm, by fumanchu, 189 words, English (US)
Categories: WHELPS

Writing High-Efficiency Large Python Systems--Lesson #1: Transactions in tests

Don't write your test suite to create and destroy databases for each run. Instead, make each test method start a transaction and roll it back. We just made that move at work on a DAL project, and the full test suite went from 500+ seconds down to around 100. It also allowed us to remove a lot of "undo" code in the tests.

This means ensuring your test helpers always connect to their databases on the same connection (transactions are connection-specific). If you're using a connection pool where leased conns are bound to each thread, this means rewriting tests that start new threads (or leaving them "the old way"; that is, create/drop). It also means that, rather than running slightly different .sql files per test or module, you instead have a base of data and allow each test to add other data as needed. If your rollbacks work, these can't pollute other tests.
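
Here's a minimal sketch of the pattern with unittest and a single shared DB-API connection; sqlite3 stands in for whatever database you actually use, and the table and test names are made up:

import sqlite3
import unittest

# One shared connection, since transactions are connection-specific.
# isolation_level=None means we issue BEGIN/ROLLBACK ourselves.
conn = sqlite3.connect(':memory:', isolation_level=None)
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)')

class TransactionTestCase(unittest.TestCase):
    """Each test runs in a transaction that is rolled back afterward."""

    def setUp(self):
        conn.execute('BEGIN')

    def tearDown(self):
        # Whatever the test inserted or updated disappears here,
        # so there's no per-test "undo" code and no cross-test pollution.
        conn.execute('ROLLBACK')

class TestUsers(TransactionTestCase):
    def test_insert(self):
        conn.execute("INSERT INTO users (name) VALUES ('alice')")
        count = conn.execute('SELECT COUNT(*) FROM users').fetchone()[0]
        self.assertEqual(count, 1)

if __name__ == '__main__':
    unittest.main()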

Obviously, this is much harder if you're doing integration testing of sharded systems and the like. But for application logic, it'll save you a lot of headache to do this from the start.
