Archives for: August 2007

08/23/07

Permalink 09:11:05 am, by fumanchu, 83 words, English (US)
Categories: Python

Simplify, maaaaan

Glyf said:

I've experienced quite a bit of confusion when talking to other python programmers about this stuff, but I'm not sure if it was my awkward explanation of Twisted's plugin system or some inherent issue in Python's path management.

If the implementation is hard to explain, it's a bad idea.

And if you would allow me to comment on your blog without becoming a LiveJournal user, I'd really appreciate it.

08/18/07

Permalink 07:00:00 pm, by fumanchu, 2186 words, English (US)
Categories: Python, Dejavu

Storm, SQLAlchemy, and Geniusql

As Mike Bayer mentioned, I recently did some cross-ORM comparisons. The charts below all show "number of requests" on the X axis and "number of seconds to complete those requests" on the Y axis.

Update: Mike found some problems in my methodology and posted a very thorough "revisiting" post. Definitely read that if you read this.

Methodology

I used trunk versions of Geniusql (1.0) and SQLAlchemy (0.4, without using the orm subpackage), and the latest stable distro of Storm (0.10). All test runs were done on my Win2k laptop, which has a 2GHz CPU and 512MB of RAM. PostgreSQL 8.0.0rc1 was the DB engine, and psycopg2 was the driver.

Scripts and a CSV of the results are here: http://www.aminus.org/rbre/python/ormcompare/. I'd be happy to be notified if I did something wrong in the SQLAlchemy or Storm code (especially if you can patch it to do better ;) ).

Results and Analysis

As with all benchmarks, take this with a heavy grain of salt. The numbers say far more about the values and designs of each ORM than they do about the performance of particular subsystems in each ORM. To wit:

Step 2: Insert

Inserting single objects. [Benchmark chart for step 2: insert]

SQLAlchemy is doing something that Geniusql and Storm aren't doing on every insert. A trace of SQLAlchemy and Geniusql shows Geniusql making 13 Python function calls for a single simple insert, whereas SQLAlchemy makes over 400! What could it be doing?

Apparently, SQLAlchemy is spending a lot of time generating bound parameters. That is, it generates the SQL INSERT INTO "Animal" ("ID", "Name", "Species", "Legs") VALUES (:ID, :Name, :Species, :Legs), which Geniusql doesn't do. Presumably it does this for speed; if so, the benchmark results seem to argue that it's a losing battle: the auto-generation of optimized code is costing more than the savings gained from the optimization.
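
To make that concrete, here's roughly what the two approaches look like if you drive psycopg2 directly (a sketch with made-up values, not either library's actual code):

import psycopg2

conn = psycopg2.connect("dbname=geniusql_bench host=localhost user=postgres")
cur = conn.cursor()

# The bound-parameter style: SQL with named placeholders, plus a separate
# dict of values; the driver does the quoting and binding.
cur.execute(
    'INSERT INTO "Animal" ("ID", "Name", "Species", "Legs") '
    'VALUES (%(ID)s, %(Name)s, %(Species)s, %(Legs)s)',
    {'ID': 13, 'Name': 'Rex', 'Species': 'Dog', 'Legs': 4})

# The literal style (closer to what Geniusql emits): render the values
# into the SQL string up front, so execute() gets a finished statement.
cur.execute(
    'INSERT INTO "Animal" ("ID", "Name", "Species", "Legs") '
    "VALUES (14, 'Fido', 'Dog', 4)")
conn.commit()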

It also spends far too long fetching the autoincremented IDs, using 73 Python function calls; Geniusql manages it in 4: a fetch, a log, a __getattr__, and a pull method to coerce the new ID to the proper Python type. But then, SQLAlchemy is also populating the new object with other (non-autoincrementing) default values, in Python, which Geniusql doesn't do (it relegates that to its ORM layer, Dejavu). I'd bet SQLAlchemy would be better off just fetching the new row (which the DB has already populated with default values anyway).
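
For comparison, the cheap version of the ID fetch is a single round trip to the sequence (a standalone psycopg2 sketch; the sequence name follows PostgreSQL's tablename_column_seq convention for serial columns):

import psycopg2

conn = psycopg2.connect("dbname=geniusql_bench host=localhost user=postgres")
cur = conn.cursor()
cur.execute("SELECT currval('animal_id_seq')")
new_id = int(cur.fetchone()[0])   # coerce the new ID to the proper Python type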

Here's a trace of Geniusql retrieving the autoincremented ID:

> _grab_new_ids (postgres:520)
  > fetch (psycopg:94)
    > log (objects:794)
    < log (objects:795) 0.047ms
    > __getattr__ (conns:23)
    < __getattr__ (conns:24): <built-in method cursor of psycopg2._psycopg.connection object at 0x00ADB020> 0.052ms
  < fetch (psycopg:120): ([(13L,)], (('last_value', 20, 2, 8, None, None, None),)) 1.232ms
< _grab_new_ids (postgres:527): {'ID': 13L} 1.601ms

and here's an SQLAlchemy trace:

> get_column_default (postgres:616)
  > __get__ (util:191)
  < __get__ (util:195): True 0.062ms
  > get_column_default (base:1530)
    > traverse_single (sql:893)
      > visit_sequence (postgres:633)
        > <lambda> (base:1528)
        < <lambda> (base:1528): <sqlalchemy.databases.postgres.PGDialect object at 0x00A23A30> 0.049ms
        > format_sequence (ansisql:1026)
          > __generic_obj_format (ansisql:1004)
            > _get_case_sensitive (schema:110)
            < _get_case_sensitive (schema:119): True 0.058ms
          < __generic_obj_format (ansisql:1010): 'animal_id_seq' 0.289ms
        < format_sequence (ansisql:1027): 'animal_id_seq' 0.462ms
        > execute (base:712)
          > _execute_text (base:722)
            > __distill_params (base:728)
            < __distill_params (base:735) 0.080ms
            > __create_execution_context (base:761)
              > <lambda> (base:932)
              < <lambda> (base:932): <sqlalchemy.databases.postgres.PGDialect object at 0x00A23A30> 0.051ms
              > create_execution_context (postgres:231)
                > __init__ (default:146)
                  > __encode_param_keys (default:184)
                    > supports_unicode_statements (default:50)
                    < supports_unicode_statements (default:52): False 0.049ms
                    > proc (default:189)
                    < proc (default:193) 0.065ms
                  < __encode_param_keys (default:198) 0.794ms
                  > supports_unicode_statements (default:50)
                  < supports_unicode_statements (default:52): False 0.048ms
                  > create_cursor (postgres:183)
                    > _is_server_side (postgres:180)
                    < _is_server_side (postgres:181): False 0.051ms
                    > _get_connection (base:532)
                    < _get_connection (base:534): <sqlalchemy.pool._ConnectionFairy object at 0x00BDA950> 0.059ms
                    > cursor (pool:308)
                      > __init__ (pool:391)
                      < __init__ (pool:393) 0.061ms
                    < cursor (pool:311): <sqlalchemy.pool._CursorFairy object at 0x00BDAE90> 0.271ms
                  < create_cursor (postgres:190): <sqlalchemy.pool._CursorFairy object at 0x00BDAE90> 0.832ms
                < __init__ (default:174) 2.301ms
              < create_execution_context (postgres:232): <sqlalchemy.databases.postgres.PGExecutionContext object at 0x00BDAD70> 2.487ms
            < __create_execution_context (base:762): <sqlalchemy.databases.postgres.PGExecutionContext object at 0x00BDAD70> 2.860ms
            > __execute_raw (base:764)
              > is_info_enabled (logging:71)
                > isEnabledFor (logging/__init__:1139)
                  > getEffectiveLevel (logging/__init__:1125)
                  < getEffectiveLevel (logging/__init__:1135): 30 0.129ms
                < isEnabledFor (logging/__init__:1145): False 0.334ms
              < is_info_enabled (logging:72): False 0.530ms
              > __execute (base:774)
                > do_execute (default:103)
                  > __getattr__ (pool:404)
                  < __getattr__ (pool:405): <built-in method execute of psycopg2._psycopg.cursor object at 0x00B94978> 0.049ms
                < do_execute (default:104) 2.892ms
              < __execute (base:781) 3.186ms
              > _autocommit (base:685)
                > in_transaction (base:617)
                < in_transaction (base:618): False 0.049ms
                > match (sre:126)
                  > _compile (sre:213)
                  < _compile (sre:218): <_sre.SRE_Pattern object at 0x00BDC020> 0.087ms
                < match (sre:129) 0.270ms
              < _autocommit (base:689) 0.608ms
            < __execute_raw (base:772) 4.783ms
            > result (default:233)
              > get_result_proxy (postgres:192)
                > _is_server_side (postgres:180)
                < _is_server_side (postgres:181): False 0.051ms
                > __init__ (base:1126)
                  > <lambda> (default:176)
                    > <lambda> (default:180)
                      > _branch (base:538)
                        > __init__ (base:524)
                        < __init__ (base:530) 0.114ms
                      < _branch (base:547): <sqlalchemy.engine.base.Connection object at 0x00BD1350> 0.298ms
                    < <lambda> (default:180): <sqlalchemy.engine.base.Connection object at 0x00BD1350> 0.475ms
                    > <lambda> (base:549)
                    < <lambda> (base:549): Engine(postgres://postgres:djvpg@localhost/geniusql_bench) 0.048ms
                  < <lambda> (default:176): Engine(postgres://postgres:djvpg@localhost/geniusql_bench) 0.888ms
                  > is_debug_enabled (logging:68)
                    > isEnabledFor (logging/__init__:1139)
                      > getEffectiveLevel (logging/__init__:1125)
                      < getEffectiveLevel (logging/__init__:1135): 30 0.130ms
                    < isEnabledFor (logging/__init__:1145): False 0.340ms
                  < is_debug_enabled (logging:69): False 0.541ms
                  > is_select (default:219)
                    > match (sre:126)
                      > _compile (sre:213)
                      < _compile (sre:218): <_sre.SRE_Pattern object at 0x00BAB7F0> 0.086ms
                    < match (sre:129): <_sre.SRE_Match object at 0x00BC49F8> 0.261ms
                  < is_select (default:222): True 0.431ms
                  > _init_metadata (base:1151)
                    > _create_key_cache (base:1182)
                      > __init__ (util:51)
                      < __init__ (util:52) 0.049ms
                    < _create_key_cache (base:1204): {} 0.321ms
                    > __getattr__ (pool:404)
                    < __getattr__ (pool:405): (('nextval', 20, 2, 8, None, None, None),) 0.053ms
                    > dbapi_type_map (default:30)
                    < dbapi_type_map (default:34): {} 0.048ms
                    > decode_result_columnname (default:25)
                      > decode (utf_8:15)
                      < decode (utf_8:16): (u'nextval', 7) 0.053ms
                    < decode_result_columnname (default:28): u'nextval' 0.234ms
                    > dialect_impl (types:49)
                    < dialect_impl (types:51): NullType() 0.060ms
                  < _init_metadata (base:1179) 1.758ms
                < __init__ (base:1135) 4.304ms
              < get_result_proxy (postgres:196): <sqlalchemy.engine.base.ResultProxy object at 0x00BD1E30> 4.737ms
            < result (default:234): <sqlalchemy.engine.base.ResultProxy object at 0x00BD1E30> 4.917ms
          < _execute_text (base:726): <sqlalchemy.engine.base.ResultProxy object at 0x00BD1E30> 13.304ms
        < execute (base:715): <sqlalchemy.engine.base.ResultProxy object at 0x00BD1E30> 13.544ms
        > scalar (base:1318)
          > _fetchone_impl (base:1284)
            > __getattr__ (pool:404)
            < __getattr__ (pool:405): <built-in method fetchone of psycopg2._psycopg.cursor object at 0x00B94978> 0.051ms
          < _fetchone_impl (base:1285): (13L,) 0.250ms
          > _process_row (base:1291)
            > __init__ (base:1431)
            < __init__ (base:1436) 0.072ms
          < _process_row (base:1292): (13L,) 0.308ms
          > __getitem__ (base:1462)
            > _get_col (base:1280)
              > __getitem__ (util:53)
              < __getitem__ (util:55): (NullType(), NullType(), 0) 0.063ms
              > convert_result_value (types:193)
              < convert_result_value (types:194): 13L 0.048ms
            < _get_col (base:1282): 13L 0.503ms
          < __getitem__ (base:1467): 13L 0.694ms
          > close (base:1206)
            > close (pool:398)
            < close (pool:400) 0.063ms
            > <lambda> (base:1140)
              > <lambda> (default:182)
              < <lambda> (default:182): <sqlalchemy.engine.base.Connection object at 0x00BDACF0> 0.048ms
            < <lambda> (base:1140): <sqlalchemy.engine.base.Connection object at 0x00BDACF0> 0.229ms
            > <lambda> (base:552)
            < <lambda> (base:552): False 0.050ms
          < close (base:1219) 0.796ms
        < scalar (base:1327): 13L 2.721ms
      < visit_sequence (postgres:635): 13L 17.369ms
    < traverse_single (sql:896): 13L 17.584ms
  < get_column_default (base:1532): 13L 17.773ms
< get_column_default (postgres:631): 13L 18.196ms

Step 3: Properties

Retrieving rows/objects by ID. [Benchmark chart for step 3: properties]

Again, Storm and Geniusql are close but SQLAlchemy is not. There must be some things that SQLAlchemy is doing that the others are not.

Geniusql uses the pattern: parse expression, write SQL, fetch, iterate over result rows (and coerce inbound values as they are yielded).
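
In sketch form (not Geniusql's actual code), that last phase is just a generator, so inbound values are coerced lazily as the caller consumes the rows:

def select(cursor, sql, adapters):
    # `adapters` holds one inbound-coercion callable per output column,
    # resolved once from the column types before any rows are read.
    cursor.execute(sql)
    row = cursor.fetchone()
    while row is not None:
        yield tuple(adapt(v) for adapt, v in zip(adapters, row))
        row = cursor.fetchone()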

Storm does it a little differently. The first time you call store.find (at least, I'm assuming it's only the first time), Storm pre-constructs SQL for you, including INSERT INTO, autoincrement ID getters, SELECT by ID, etc. It's the actual find().one() call that does the traditional work. The only significant difference from Geniusql in this step is that Storm's inbound type adapters seem to be slower (by a factor of 10 or so).

There's no single thing I can point to that SQLAlchemy does in this case that the others don't; it just seems to take more steps in each phase. The correlations seem to take the most time: SQLAlchemy takes about five times longer than Geniusql to generate the SQL. And, as in the insert step, I'm not sure the bind params are helping SQLAlchemy. But then, Storm does parameter binding and manages to stay fast.

Step 4: Expressions

Retrieving rows/objects with various WHERE clauses. [Benchmark chart for step 4: expressions]

Oh, dear. It looks like Storm needs some work in this department. Those pre-calculated fetch-by-ID SQL statements are handy, but can't apply to every situation. But let's look at SQLAlchemy first:

SQLAlchemy does a lot of traversal of tree-like structures. Some of this could/should be cached. For example, Geniusql stores the quoted names of each column, so only a single attribute lookup is needed for each one in a SELECT clause; fetching all the qnames and assembling them into the final SQL takes about 1 ms in my trace log. SQLAlchemy, on the other hand, makes about 24 Python function calls (4.5 ms) for each column. The actual fetch-iterate-and-coerce step is practically the same for both libraries.
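
The Geniusql trick is nothing fancy: quote each name once at table-definition time and cache it, so writing the SELECT clause is one attribute lookup per column. Something like this (a simplification of the idea, not the real class):

class Column(object):
    def __init__(self, name):
        self.name = name
        # Quoted once, here, rather than on every statement.
        self.qname = '"%s"' % name

def select_clause(columns):
    return 'SELECT ' + ', '.join(col.qname for col in columns)

print(select_clause([Column('ID'), Column('Name'), Column('Legs')]))
# SELECT "ID", "Name", "Legs"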

Storm takes about 3.4ms to generate each column name in a SELECT clause. It also does some odd things with inbound coercion: it gets a decimal.Decimal back for a column that was declared to be Float (?! Zoo.Admission), and coerces that plus the unicode Zoo.Name, but doesn't run a coercion function at all for the date fields. Guess it knows/trusts psycopg will return valid date objects. Ah, so then it wraps up the decimal scalar value in a storm.variables.FloatVariable object. Are you allowed to use binary floating-point "all the way down" in Storm?

At any rate, it takes Storm about 60ms in my trace log to return the first object for each row, and then about 20ms per row/object thereafter. Geniusql takes about 20ms for the first row, and under 2ms for each row thereafter. It's a good bet that that's the bottleneck, and is most likely due to slow inbound adapters.
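
If I'm right about the adapters, the fix is the same caching idea again: resolve one coercion function per column when the result set opens, so the per-row work is a short call per value instead of a chain of method dispatches. A toy illustration (my guess at the shape of the problem, not Storm's code):

import decimal

def float_in(value):
    # psycopg2 returns decimal.Decimal for NUMERIC columns; coerce it
    # back to the binary float the model declared.
    return float(value)

def identity(value):
    return value

# Resolved once per result set...
adapters = [identity, float_in]

# ...so each row costs len(row) short calls, nothing more.
rows = [(u'San Diego Zoo', decimal.Decimal('3.5'))]
coerced = [tuple(f(v) for f, v in zip(adapters, row)) for row in rows]
print(coerced)   # [(u'San Diego Zoo', 3.5)]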

Step 5: Crazy stuff

Retrieving rows with certain fields only (projection), use of built-in functions (TODAY/NOW), and use of DISTINCT. [Benchmark chart for step 5: aggregates]

This one's a little crazy for several reasons. First, it tests some dissimilar things:

  • The 'Legs' attribute of the 'Animal' table is returned for all rows.
  • The 'Species' and 'Lifespan' columns are then retrieved for all rows.
  • The 'Zoo.Founded' date attribute is compared using an SQL function like TODAY or NOW.
  • A DISTINCT query is run on the Animal table.

Second, I couldn't find a way to get Storm to write TODAY into the SQL, so that test cheats a bit and just sticks datetime.datetime.today into the initial expression. I'm pretty convinced that's why the Storm times are so low on this test.

But let's break down the trace log times for the first test, fetching SELECT Legs FROM Animal. Here's SQLAlchemy:

view = select([Animal.c.Legs]).execute().fetchall()
legs = [x[0] for x in view]

[Animal.c.Legs] =  1.3 ms
select()        =  2.9 ms
execute()       = 63.6 ms
fetchall()      = 15.5 ms
listcomp        = 10.4 ms

Here's the Geniusql equivalent:

view = db.select((Animal, ['Legs']))
legs = [x[0] for x in view]

select()        = 13.2 ms
+- fetch()      =     1.9 ms
listcomp        =  4.2 ms

And the Storm timings:

legs = getall(Animal.Legs)

getall()        = 13.8 ms
+- execute()    =    10.0 ms
+- get_all()    =     2.7 ms

I just noticed I forgot the listcomp in the Storm test. Meh.

The third query is more complicated than the ones we've seen so far, so it takes up most of the time in this entire step. Geniusql takes 74.455ms to write the SQL and fetch the data. SQLAlchemy takes 17ms just parsing all of the overridden operators in the expression (Storm overrides operators, too, but typically does each one in a third of the time). Then select() is 3ms, execute() is 115ms (!), and fetchall() is 11 more. I've been thinking lately of adding compound SELECTs to Geniusql, but if that means doubling the execution time of SELECTs that have no correlated subqueries, perhaps I'll wait a while. Just kidding--I'm sure it can be done more simply; if SQLAlchemy could short-circuit the traversal of ClauseVisitors in the common case (one table, no compound SELECTs), it might improve dramatically.
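
Such a short circuit might be as simple as checking a couple of flags before falling back to the general machinery (invented names, just the shape of the idea):

def write_select(query):
    if len(query.tables) == 1 and not query.compound:
        # Fast path: one table, no compound SELECTs or correlated
        # subqueries, so just join the pre-quoted names.
        table = query.tables[0]
        columns = ', '.join(c.qname for c in query.columns)
        return 'SELECT %s FROM %s' % (columns, table.qname)
    # Slow path: the fully general ClauseVisitor-style traversal.
    return general_traversal(query)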

Step 6: Editing

Updating rows/objects. [Benchmark chart for step 6: editing]

No surprises here. SQLAlchemy parses the WHERE clause of the UPDATE statements faster than Geniusql, but here come those correlated subquery traversals and bind params again to slow the whole thing down.

I just noticed something else: Geniusql uses the following sequence of events to grab a connection:

> get (conns:241)
  > id (conns:271)
  < id (conns:273): 0.049ms
< get (conns:269): 0.280ms

Here's the SQLAlchemy system:

> contextual_connect (base:1036)
  > __init__ (base:524)
    > raw_connection (base:1086)
      > connect (pool:157)
        > __init__ (pool:262)
          > __init__ (pool:256)
          < __init__ (pool:257) 0.055ms
          > get (pool:171)
            > do_get (pool:533)
              > get (queue:124)
                > acquire (threading:93)
                  > currentThread (threading:672)
                  < currentThread (threading:674): <_MainThread(MainThread, started)> 0.058ms
                  > _note (threading:44)
                  < _note (threading:45) 0.050ms
                < acquire (threading:109): True 0.518ms
                > _empty (queue:181)
                < _empty (queue:182): False 0.049ms
                > _get (queue:193)
                < _get (queue:194): <sqlalchemy.pool._ConnectionRecord object at 0x00C0F1F0> 0.050ms
                > notify (threading:236)
                  > _is_owned (threading:143)
                    > currentThread (threading:672)
                    < currentThread (threading:674): <_MainThread(MainThread, started)> 0.054ms
                  < _is_owned (threading:144): True 0.232ms
                  > _note (threading:44)
                  < _note (threading:45) 0.047ms
                < notify (threading:243) 0.649ms
                > release (threading:111)
                  > currentThread (threading:672)
                  < currentThread (threading:674): <_MainThread(MainThread, started)> 0.055ms
                  > _note (threading:44)
                  < _note (threading:45) 0.048ms
                < release (threading:119) 0.487ms
              < get (queue:157): <sqlalchemy.pool._ConnectionRecord object at 0x00C0F1F0> 2.532ms
            < do_get (pool:536): <sqlalchemy.pool._ConnectionRecord object at 0x00C0F1F0> 2.743ms
          < get (pool:172): <sqlalchemy.pool._ConnectionRecord object at 0x00C0F1F0> 2.914ms
          > get_connection (pool:219)
          < get_connection (pool:234): <connection object at 0x00B94E58;
              dsn: 'dbname=geniusql_bench host=localhost user=postgres password=*****', closed: 0> 0.079ms
          > __get__ (logging:86)
            > _get_instance_name (logging:56)
            < _get_instance_name (logging:60): 'sqlalchemy.pool.QueuePool.0x..50' 0.061ms
            > getLogger (logging/__init__:1225)
              > getLogger (logging/__init__:832)
                > _acquireLock (logging/__init__:171)
                  > acquire (threading:93)
                    > currentThread (threading:672)
                    < currentThread (threading:674): <_MainThread(MainThread, started)> 0.056ms
                    > _note (threading:44)
                    < _note (threading:45) 0.048ms
                  < acquire (threading:109): True 0.496ms
                < _acquireLock (logging/__init__:181) 0.687ms
                > _releaseLock (logging/__init__:183)
                  > release (threading:111)
                    > currentThread (threading:672)
                    < currentThread (threading:674): <_MainThread(MainThread, started)> 0.056ms
                    > _note (threading:44)
                    < _note (threading:45) 0.048ms
                  < release (threading:119) 0.481ms
                < _releaseLock (logging/__init__:188) 0.658ms
              < getLogger (logging/__init__:862): <logging.Logger instance at 0x00B96F58> 1.775ms
            < getLogger (logging/__init__:1232): <logging.Logger instance at 0x00B96F58> 1.994ms
            > getEffectiveLevel (logging/__init__:1125)
            < getEffectiveLevel (logging/__init__:1135): 30 0.134ms
          < __get__ (logging:90): False 2.731ms
        < __init__ (pool:273) 6.473ms
        > checkout (pool:319)
        < checkout (pool:325): <sqlalchemy.pool._ConnectionFairy object at 0x00C0FF50> 0.092ms
      < connect (pool:159): <sqlalchemy.pool._ConnectionFairy object at 0x00C0FF50> 6.878ms
    < raw_connection (base:1089): <sqlalchemy.pool._ConnectionFairy object at 0x00C0FF50> 7.059ms
  < __init__ (base:530) 7.308ms
< contextual_connect (base:1042): <sqlalchemy.engine.base.Connection object at 0x00C0F910> 7.486ms

Houston, we have a problem. Don't log if you don't have to (especially when using the Python stdlib's extremely-nested logging module, which is slow even when turned off), and use a map of conns to threads instead of going to the Queue for every request. Geniusql manages this by using weakref wrappers on the conns, so when a thread no longer references its mapped connection, it's returned to the pool. Storm gets around this by not having a pool, or even threadsafe Store objects--each Store has its own connection, so it "maps conns to threads" by limiting you to one Store+conn per thread.
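
For the curious, here's a minimal sketch of the weakref scheme (my reconstruction, not Geniusql's actual pool): the pool keeps the only strong reference to each raw connection and hands out a thin wrapper; when the borrowing thread drops its last reference to the wrapper, a weakref callback puts the connection back on the idle list.

import threading
import weakref

class ConnWrapper(object):
    """The thin, weakref-able handle given to callers."""
    def __init__(self, conn):
        self.conn = conn

class Pool(object):
    def __init__(self, factory):
        self.factory = factory
        self.idle = []
        self.lock = threading.Lock()
        self._checked_out = {}   # weakref(wrapper) -> raw connection

    def get(self):
        with self.lock:
            conn = self.idle.pop() if self.idle else self.factory()
            wrapper = ConnWrapper(conn)
            # When the wrapper is garbage-collected, _reclaim fires.
            ref = weakref.ref(wrapper, self._reclaim)
            self._checked_out[ref] = conn
        return wrapper

    def _reclaim(self, ref):
        with self.lock:
            self.idle.append(self._checked_out.pop(ref))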

That's enough analysis for now. :) Hope you enjoyed the ride.

08/10/07

Permalink 08:18:51 pm, by fumanchu, 528 words, English (US)
Categories: Dejavu

Looks like I reinvented the wheel again

I just followed a link from LtU to a paper on Safe Query Objects. The content of the paper was immediately familiar to me, since Geniusql does exactly what is described therein: use native language (Python) semantics to construct remote queries.

Notes while I read:

Despite their usefulness, call level interfaces have a number of significant problems. First, the embedded database programs are not checked until they are passed to the CLI at runtime. As a result, the syntax and types of database programs are not checked statically, but instead result in runtime errors. This is true despite the fact that the database structure is almost always static and known when the client program is compiled.

Geniusql deals with this by obtaining and checking a complete model of the database types. For example, an object with a unicode attribute would be mapped to a Table object with (say) an instance of the geniusql.providers.postgres.TEXT class.
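
That is, the model knows the mapping in both directions, so mistakes surface when you build the query rather than when the DB rejects it. A stripped-down illustration (invented names, not Geniusql's API):

class TEXT(object):
    """Stand-in for geniusql.providers.postgres.TEXT."""
    pytype = str    # the Python type this database type maps to

class Table(object):
    def __init__(self, columns):
        self.columns = columns    # {attribute name: DB type instance}

animal = Table({'Name': TEXT()})

def check(table, attr, value):
    # Fail at query-construction time, not as a DB error later.
    coltype = table.columns[attr]
    if not isinstance(value, coltype.pytype):
        raise TypeError('%r is not a %s' % (value, coltype.pytype.__name__))

check(animal, 'Name', 'Rex')    # passes
check(animal, 'Name', 4)        # raises TypeError immediately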

Second, programs that use call level interfaces are difficult to write and maintain. There are important classes of queries that must be constructed dynamically at runtime. Manipulating programs as strings is complex and error-prone due to the complex rules for nesting expressions, quoting constants, and the interplay between embedded and host languages.

Which is why Geniusql eschews strings in favor of Python lambdas.
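
The flavor of it: you hand Geniusql a live lambda, and it turns the expression into SQL for you. The real thing decompiles the lambda's bytecode and handles much more; here's a toy version that handles a single comparison by walking the source text's AST instead (my own simplification, using the stdlib ast module):

import ast

OPS = {ast.Gt: '>', ast.Lt: '<', ast.Eq: '='}

def where_clause(source):
    # Parse e.g. "lambda e: e.salary > 100000" and emit a WHERE clause.
    lam = ast.parse(source, mode='eval').body   # the Lambda node
    comparison = lam.body                       # e.salary > 100000
    column = '"%s"' % comparison.left.attr      # attribute -> column name
    op = OPS[type(comparison.ops[0])]
    value = ast.literal_eval(comparison.comparators[0])
    return 'WHERE %s %s %r' % (column, op, value)

print(where_clause('lambda e: e.salary > 100000'))
# WHERE "salary" > 100000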

Concepts that are relatively straightforward to express in a language, like query parameters, are awkward to specify and invoke via an API. Query results are represented as untyped objects that are accessed by string names. There are many subtle but unchecked dependencies between the query being executed and the code that decodes its results.

Geniusql determines both the database type and the Python type of each result column, and provides adapters between them (which you can override if needed).

Finally, call level interfaces make it difficult to reuse queries; doing so involves complex manipulation of programs at runtime, while avoiding name conflicts and ensuring consistency of the resulting query.

True, but reuse can manifest itself at many levels. Quite frankly, I've found that the gains from reuse of DB query plans are often completely offset by the cost of libraries constructing and maintaining the requisite prepared statements. Geniusql caches the generated SQL for each AST instead.
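
Caching on the client side is cheap by comparison: the generated SQL can be keyed on the lambda's code object, so the second identical query costs one dict lookup instead of another round of SQL generation. Roughly (a sketch, not Geniusql's cache):

_sql_cache = {}

def sql_for(f, write_sql):
    """Return (and memoize) the SQL for filter function `f`.

    Two calls with the same code object hit the cache; nothing is
    prepared on the database server.
    """
    key = f.__code__
    try:
        return _sql_cache[key]
    except KeyError:
        sql = _sql_cache[key] = write_sql(f)
        return sql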

In its simplest form, a safe query object is just an object containing a boolean method that can be used to filter a collection of candidate objects. ...syntax and types are checked at compile time: if manager is misspelled, a compile time error is produced.

Geniusql uses Python lambdas in a similar fashion. Of course, Python is dynamically typed, so we only get syntax checks at compile-time, but that's a huge benefit all by itself. And, because we're using Python, we can skip all the metaprogramming in the Java solution. Instead of writing:

class SalaryLimit instantiates RemoteQueryJDO
                  extends SafeQuery<Employee>
{
    double limit; /* parameter */
    SalaryLimit(double limit) {
        this.limit = limit;
    }
    boolean filter(Employee employee) {
        return employee.salary > limit;
    }
}

we can get it all done in one line:

Employee.select(lambda e: e.salary > limit)