Category: Dejavu


07/14/09

Categories: IT, Python, Dejavu, CherryPy

The Ronacher Manifesto

Link: http://lucumr.pocoo.org/2009/7/14/free-vs-free

I heartily agree with the bold bits at least:

So dear users: Use my stuff, have fun with it. And letting me know that you're doing so is the best reward I can think of. And if you can contribute patches, that's even better.

04/07/09

Categories: IT, Python, Dejavu, CherryPy

PyCon Presentations

For PyCon 2009, I'm giving two talks! One on extending CherryPy and one on the innards of Dejavu/Geniusql. I think I've finally reduced my talks to the required time slots--I could easily have made 4-hour talks for each ;)--and posted my presentations:

Use the arrow keys or mouse-click to proceed through them. The images don't load as fast over the network as they will when I present, so be patient if you preview them yourself. Also, try to use 1024 x 768 fullscreen--they're laid out specifically for that resolution.

Update: video is now available thanks to the great people who put on PyCon:

04/22/08

Categories: IT, Python, Dejavu

LINQ in Python

Chui's counterpoint pines:

There are some interesting ideas raised in LINQ that even Python developers ought to explore and consider adopting in a future Python.

Python had all of this before LINQ, in Dejavu and now Geniusql, and more pythonically to boot. Instead of:

var AnIQueryable = from Customer in db.Customers where
    Customer.FirstName.StartsWith("m") select Customer;

you can write:

m_names = Customer.select(
    lambda cust: cust.FirstName.startswith("m"))

and instead of:

var AverageRuns = (from Master in this.db.Masters
    select Master.Runs).Average();

you can write:

avgruns = Masters.select(lambda m: avg(m.Runs))

11/12/07

Categories: IT, Python, Dejavu

Memcached indices

It seems lots of people are using memcached to cache both a set of objects (each with their own key), and also various lists of those objects using different keys. For example, a retailer might cache Item objects, but also want to cache the list of Items in a given Category. The SQL before memcached might look like this:

SELECT * FROM Item WHERE CategoryID = 5;

...whereas with memcached mixed in, the I/O tends to look like this (against an empty cache):

get Item:CategoryID=5
END
SELECT ID FROM Item WHERE CategoryID = 5;
set Item:CategoryID=5 1 300 19
1111,2222,3333,4444
STORED
get Item:1111
END
get Item:2222
END
get Item:3333
END
get Item:4444
END
SELECT * FROM Item WHERE ID IN (1111, 2222, 3333, 4444)
set Item:1111 1 300 58
STORED
set Item:2222 1 300 58
STORED
set Item:3333 1 300 54
STORED
set Item:4444 1 300 80
STORED

That is, fetch the list of ID's from the cache; if not present, fetch it from the DB and store it in the cache (the "300" in the above examples means "expire in 300 seconds"). Then iterate over the list of ID's and try to fetch each whole object from the cache; if any miss, ask the DB for them (in as few operations as possible) and store them in the cache.

Once both the objects and the list-of-id's are cached, subsequent calls to a hypothetical get_items_by_category function should look like this:

get Item:CategoryID=5
sending key Item:CategoryID=5
END
get Item:1111
sending key Item:1111
END
get Item:2222
sending key Item:2222
END
get Item:3333
sending key Item:3333
END
get Item:4444
sending key Item:4444
END
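Putting that read path into code: here's a minimal sketch of the hypothetical get_items_by_category, written against a python-memcached-style client. The db helper (fetch_ids/fetch_items) is a made-up stand-in for whatever DAL you're using.

import memcache

mc = memcache.Client(['127.0.0.1:11211'])
CACHE_SECONDS = 300

def get_items_by_category(category_id, db):
    """Return all Items in a Category, caching the ID list and each object."""
    list_key = 'Item:CategoryID=%d' % category_id
    ids = mc.get(list_key)
    if ids is None:
        # Index miss: ask the DB for just the IDs and cache the list.
        ids = db.fetch_ids("SELECT ID FROM Item WHERE CategoryID = %s",
                           (category_id,))
        mc.set(list_key, ids, CACHE_SECONDS)

    # Try to pull each whole object from the cache in one round trip.
    keys = dict(('Item:%d' % i, i) for i in ids)
    cached = mc.get_multi(list(keys))
    items = dict((keys[k], v) for k, v in cached.items())

    missing = [i for i in ids if i not in items]
    if missing:
        # Fetch all misses in as few DB operations as possible, then cache each.
        for item in db.fetch_items("SELECT * FROM Item WHERE ID IN %s",
                                   (tuple(missing),)):
            items[item.ID] = item
            mc.set('Item:%d' % item.ID, item, CACHE_SECONDS)

    return [items[i] for i in ids]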

But what happens when you move Item 3333 from Category 5 to Category 6? There are three possibilities:

  1. Your timeouts are reasonable. In this case, you're OK with some ambiguity; specifically, you're OK that all clients will continue to see Item 3333 in the old Category list for up to 300 seconds. If you ask the Item object directly for its Category, you'll get "6", but if you ask for all Items in Category 5, it'll still show up in the "wrong" category. It might even show up in both Category lists, if you refetch Item:CategoryID=6 from the DB before the cached Item:CategoryID=5 list expires.
  2. Your timeouts are unreasonable. You thought 300 seconds was a good expiration time because that's the minimum you could get away with to sufficiently reduce load on your poor single-point-of-failure DB. In this case, you're not comfortable with the ambiguity, so you start adding invalidation calls to clean out the cached lists sooner. Lots of them. Spread out all over your application.
  3. Your timeouts are 0, meaning "never expire". This is like 2, but means your clients will probably never hit the DB on their own--you MUST explicitly add code to update both the cached objects and the cached lists whenever an object changes, or expect an unusable site.

If you're happy with option 1, great! The rest of this discussion probably isn't for you. I'm going to explore three solutions (only one of which I'm happy with) for cases 2 and 3.

Database Indices

You're not happy with the expiration time of your cached lists, so you've built all that invalidation code. What you may not realize is that you've just reinvented (badly) something databases have had for decades: indices.

An index is usually implemented with a B+-tree, most of the details of which are unimportant for us. What is important is that 1) an index covers a subset of the columns in the table (often a single column), which from now on I'm going to call the index criteria, and 2) each distinct combination of values for the index criteria has its own leaf node in the tree, which contains/points to a list of rows in the table that match that combination. What a mouthful. Another way to say it is that the leaf nodes in the tree look like this for an index over Item.CategoryID:

(2,): [9650, 2304, 22, 50888]
(3,): [323, 3000, 243246, 87346, 6563, 8679]
(5,): [1111, 2222, 3333, 4444]
(6,): [18]

When you ask the database for all Items with a CategoryID of 5, the database traverses the "CategoryID" index tree, finds the leaf node for "5", grabs the list stored at that node, then iterates over the list and yields each full object mentioned therein. This is called an "index scan":

# EXPLAIN SELECT * FROM Items WHERE CategoryID = 5;
                                            QUERY PLAN
--------------------------------------------------------------------------------------------------
 Index Scan using items_categoryid on Items  (cost=0.00..29.49 rows=609 width=24)
   Index Cond: (CategoryID = 5)

Sound familiar? It's exactly what we're doing by hand with memcached.

Memcached Indices

Okay, it's not exactly like our memcached example. There are some striking differences.

First, a database index is sparse, in the sense that it doesn't contain leaf nodes for every potential index criteria value, just the concrete values in the table. Our memcached indexing is even sparser: so far it only contains leaf nodes (lists of ID's) for each index scan we've run. If we've only asked for Items in Categories 2 and 5, memcached will only contain nodes for Item:CategoryID=2 and Item:CategoryID=5.

Second, a database index is a full tree, with a root node. What we've done so far in memcached is only store leaf nodes. This will bite us in a moment.

Third, a database index is naturally transactional. When you move Item 3333 from Category 5 to 6, you might execute the SQL statement "UPDATE Item SET CategoryID = 6 WHERE ID = 3333;". The database, being the sole arbiter of Truth, can lock the row, read the old value, lock the index leaf nodes, remove the row from the old leaf node and add it to the new one, write the new value to the table, and unlock, all within a fully-logged transaction (although it can be a lot more complicated than that with ranged index locks, log schemes, and page reallocation schemes). Our memcached implementation so far can't do any of that.

Combining the above differences, we get...a real mess. Specifically, we have to find a way to do all of those index updates.

One way would be to invalidate everything. Call flush_all and be done with it. This can work (poorly) if you've partitioned your data well over multiple memcached servers, but Woe Unto Thee if you're storing, say, cached HTML on the same node.

Another, narrower solution would be to try to delete all known cached lists for the given criteria. One way to do that would be to attempt to maintain the whole index tree in memcached, not just the leaf nodes. This turns out to be a thorny problem because of the transitory nature of memcached data--what happens when you lose an intermediate node in the index tree? Or the root node? You'll fail to update the now-unreachable leaf nodes, but clients will still fetch them and get stale results.

An even narrower solution would be to try to update just two index leaf nodes for each updated object. For example, if we move Item 3333 from Category 5 to 6, we could try to remove the Item from leaf node "5" and add it to leaf node "6". This can work, but requires keeping the old values around for all indexed columns in your application layer, which a lot of modern DALs and ORMs don't do by default.

I was stuck at this point for a couple of days, until I had an epiphany:

Add eagerly, remove lazily

Recall if you will what I said above about the transactional nature of database indices. The DB can remove the row from the old index node(s) and add it to the new index node(s) in one step. We already know we can't do that with memcached, since the "get" and "set" operations aren't atomic anyway (although check-and-set can work around this; see 'cas' in protocol.txt.)

Transactions exist to maintain data integrity; to move The Data from one valid state to another valid state. But don't confuse the technique for the phenomena that technique is trying to prevent. In the case of indices, we use transactions on indices to avoid both 1) reading an object that does not meet the index criteria for the given node, or 2) failing to read an object that does meet the criteria. Databases avoid both scenarios by adding and removing rows from the index atomically.

When dealing with index nodes in memcached, however, the best approach is to separate the two phenomena by adding eagerly and removing lazily:

  • Add eagerly: when an object (row) is flushed from the session, add it to any indices for that class (table). That is, whenever you "UPDATE Item SET CategoryID = 6 WHERE ID = 3333", you cache the updated object (you were doing that already anyway), and you also append its ID to the list stored at Item:CategoryID=6.
  • Remove lazily: when you UPDATE, you do not try to remove the item from the old index. Instead, you let the next client that reads Item:CategoryID=5 do that when it iterates over the objects in the list. If any objects no longer meet the index node criteria (CategoryID=5), they are removed from the list, and the client sets the revised index node back into the cache. See the sketch just below.
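Continuing the sketch above (same hypothetical mc client, CACHE_SECONDS, db helper, and get_items_by_category; db.execute is another made-up helper), the two halves might look like this. A real implementation would want memcached's check-and-set ('cas') to guard the read-modify-write of the list; that's omitted here for brevity.

def move_item_to_category(item, new_category_id, db):
    """Add eagerly: update the row, recache the object, and append its ID
    to the cached list for the new category. The old list is left alone."""
    db.execute("UPDATE Item SET CategoryID = %s WHERE ID = %s",
               (new_category_id, item.ID))
    item.CategoryID = new_category_id
    mc.set('Item:%d' % item.ID, item, CACHE_SECONDS)

    list_key = 'Item:CategoryID=%d' % new_category_id
    ids = mc.get(list_key)
    if ids is not None and item.ID not in ids:
        ids.append(item.ID)
        mc.set(list_key, ids, CACHE_SECONDS)

def get_items_by_category_checked(category_id, db):
    """Remove lazily: read the index node as usual, but drop any objects
    that no longer match the criteria and write the pruned list back."""
    items = get_items_by_category(category_id, db)
    fresh = [item for item in items if item.CategoryID == category_id]
    if len(fresh) != len(items):
        mc.set('Item:CategoryID=%d' % category_id,
               [item.ID for item in fresh], CACHE_SECONDS)
    return fresh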

Shortcomings

There are several:

  1. A client might recall the index node but not the objects in its list. For example, a web page might just need hyperlinks to each object, which it tries to generate without recalling the entire set of objects. This technique succeeds in that it doesn't fetch more data than the app asked for, but fails if stale objects are still listed in the index node. The only solutions I can see are to either disallow this sort of index read or to fall back on sane timeouts again.
  2. This doesn't address order, limit, or offset explicitly. Simple order queries should be no problem, but if you want the top N items you must either include the 'order by' columns in your index, or read in all items in the index, filter out the stale objects, sort them, and perform the limit/offset yourself (see the sketch after this list).
  3. Functionally decomposing SQL (or whatever QL your ORM might use) into potential index criteria is not a simple task. A single table.column = value expression maps well, but joins, boolean operators (and, or, not), and other operators make it difficult. But then, the major databases have the same issues.
  4. Whatever you all leave in the comments.
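For item 2, the client-side order/limit/offset is simple enough once the stale objects are pruned; a sketch, reusing the get_items_by_category_checked helper from above (DateAdded is a hypothetical column):

def top_n(items, key, n, offset=0, reverse=True):
    """Sort an already-pruned index read and apply limit/offset in Python."""
    return sorted(items, key=key, reverse=reverse)[offset:offset + n]

# e.g. the ten most recently added Items in Category 5:
newest = top_n(get_items_by_category_checked(5, db),
               key=lambda item: item.DateAdded, n=10)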

Conclusion

I think this is worth a shot. I'm adding it to Dejavu in a branch, but it's complicated enough that it may not be done for a bit. Comments and suggestions on the approach welcome.

08/18/07

Categories: Python, Dejavu

Storm, SQLAlchemy, and Geniusql

As Mike Bayer mentioned, I recently did some cross-ORM comparisons. The charts below all show "number of requests" on the X axis and "number of seconds to complete those requests" on the Y axis.

Update: Mike found some problems in my methodology and posted a very thorough "revisiting" post. Definitely read that if you read this.

Methodology

I used trunk versions of Geniusql (1.0) and SQLAlchemy (0.4, without using the orm subpackage), and the latest stable distro of Storm (0.10). All test runs were done on my Win2k laptop, which has a 2GHz CPU and 512MB of RAM. PostgreSQL 8.0.0rc1 was the DB engine, and psycopg2 was the driver.

Scripts and a CSV are here: http://www.aminus.org/rbre/python/ormcompare/

I'd be happy to be notified if I did something wrong in the SQLAlchemy or Storm code (especially if you can patch it to do better ;) ).

Results and Analysis

As with all benchmarks, take this with a heavy grain of salt. The numbers say far more about the values and designs of each ORM than they do about the performance of particular subsystems in each ORM. To wit:

Step 2: Insert

Inserting single objects: [benchmark chart for step 2: insert]

SQLAlchemy is doing something that Geniusql and Storm aren't doing on every insert. A trace of SQLAlchemy and Geniusql shows Geniusql making 13 Python function calls for a single simple insert, whereas SQLAlchemy makes over 400! What could it be doing?

Apparently, SQLAlchemy is spending a lot of time generating bound parameters. That is, it generates the SQL INSERT INTO "Animal" ("ID", "Name", "Species", "Legs") VALUES (:ID, :Name, :Species, :Legs), which Geniusql doesn't do. Presumably it does this for speed; if so, the benchmark results seem to argue that it's a losing battle: the auto-generation of optimized code costs more than the savings gained from the optimization.

It also spends far too long fetching the autoincremented ID's, using 73 Python function calls; Geniusql manages it in 4: fetch, log, a __getattr__, and a pull method to coerce the new ID to the proper Python type. But then, SQLAlchemy is also populating the new object with other (non-autoincrementing) default values, in Python, which Geniusql doesn't do (it relegates that to its ORM layer, Dejavu). I'd bet SQLAlchemy would be better off just fetching the new row (which the DB has populated with default values already anyway).
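To make the comparison concrete, here is roughly what the two styles look like at the bare psycopg2 level. This is a hand-written sketch, not code from either library; the table, column, and sequence names are borrowed from the traces in this post.

import psycopg2

conn = psycopg2.connect("dbname=geniusql_bench host=localhost user=postgres")
cur = conn.cursor()

# Bound parameters (the style SQLAlchemy generates and hands to the driver):
cur.execute('INSERT INTO "Animal" ("Name", "Species", "Legs") '
            'VALUES (%(Name)s, %(Species)s, %(Legs)s)',
            {'Name': 'Tiger', 'Species': 'Panthera tigris', 'Legs': 4})

# Literal SQL (closer to what Geniusql emits, quoting values itself):
cur.execute('INSERT INTO "Animal" ("Name", "Species", "Legs") '
            "VALUES ('Lion', 'Panthera leo', 4)")

# Reading back the autoincremented ID from the sequence afterward
# (the Geniusql trace below appears to read the sequence's last_value):
cur.execute("SELECT last_value FROM animal_id_seq")
new_id = cur.fetchone()[0]
conn.commit()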

Here's a trace of Geniusql retrieving the autoincremented ID:

> _grab_new_ids (postgres:520)
  > fetch (psycopg:94)
    > log (objects:794)
    < log (objects:795) 0.047ms
    > __getattr__ (conns:23)
    < __getattr__ (conns:24): <built-in method cursor of psycopg2._psycopg.connection object at 0x00ADB020> 0.052ms
  < fetch (psycopg:120): ([(13L,)], (('last_value', 20, 2, 8, None, None, None),)) 1.232ms
< _grab_new_ids (postgres:527): {'ID': 13L} 1.601ms

and here's an SQLAlchemy trace:

> get_column_default (postgres:616)
  > __get__ (util:191)
  < __get__ (util:195): True 0.062ms
  > get_column_default (base:1530)
    > traverse_single (sql:893)
      > visit_sequence (postgres:633)
        > <lambda> (base:1528)
        < <lambda> (base:1528): <sqlalchemy.databases.postgres.PGDialect object at 0x00A23A30> 0.049ms
        > format_sequence (ansisql:1026)
          > __generic_obj_format (ansisql:1004)
            > _get_case_sensitive (schema:110)
            < _get_case_sensitive (schema:119): True 0.058ms
          < __generic_obj_format (ansisql:1010): 'animal_id_seq' 0.289ms
        < format_sequence (ansisql:1027): 'animal_id_seq' 0.462ms
        > execute (base:712)
          > _execute_text (base:722)
            > __distill_params (base:728)
            < __distill_params (base:735) 0.080ms
            > __create_execution_context (base:761)
              > <lambda> (base:932)
              < <lambda> (base:932): <sqlalchemy.databases.postgres.PGDialect object at 0x00A23A30> 0.051ms
              > create_execution_context (postgres:231)
                > __init__ (default:146)
                  > __encode_param_keys (default:184)
                    > supports_unicode_statements (default:50)
                    < supports_unicode_statements (default:52): False 0.049ms
                    > proc (default:189)
                    < proc (default:193) 0.065ms
                  < __encode_param_keys (default:198) 0.794ms
                  > supports_unicode_statements (default:50)
                  < supports_unicode_statements (default:52): False 0.048ms
                  > create_cursor (postgres:183)
                    > _is_server_side (postgres:180)
                    < _is_server_side (postgres:181): False 0.051ms
                    > _get_connection (base:532)
                    < _get_connection (base:534): <sqlalchemy.pool._ConnectionFairy object at 0x00BDA950> 0.059ms
                    > cursor (pool:308)
                      > __init__ (pool:391)
                      < __init__ (pool:393) 0.061ms
                    < cursor (pool:311): <sqlalchemy.pool._CursorFairy object at 0x00BDAE90> 0.271ms
                  < create_cursor (postgres:190): <sqlalchemy.pool._CursorFairy object at 0x00BDAE90> 0.832ms
                < __init__ (default:174) 2.301ms
              < create_execution_context (postgres:232): <sqlalchemy.databases.postgres.PGExecutionContext object at 0x00BDAD70> 2.487ms
            < __create_execution_context (base:762): <sqlalchemy.databases.postgres.PGExecutionContext object at 0x00BDAD70> 2.860ms
            > __execute_raw (base:764)
              > is_info_enabled (logging:71)
                > isEnabledFor (logging/__init__:1139)
                  > getEffectiveLevel (logging/__init__:1125)
                  < getEffectiveLevel (logging/__init__:1135): 30 0.129ms
                < isEnabledFor (logging/__init__:1145): False 0.334ms
              < is_info_enabled (logging:72): False 0.530ms
              > __execute (base:774)
                > do_execute (default:103)
                  > __getattr__ (pool:404)
                  < __getattr__ (pool:405): <built-in method execute of psycopg2._psycopg.cursor object at 0x00B94978> 0.049ms
                < do_execute (default:104) 2.892ms
              < __execute (base:781) 3.186ms
              > _autocommit (base:685)
                > in_transaction (base:617)
                < in_transaction (base:618): False 0.049ms
                > match (sre:126)
                  > _compile (sre:213)
                  < _compile (sre:218): <_sre.SRE_Pattern object at 0x00BDC020> 0.087ms
                < match (sre:129) 0.270ms
              < _autocommit (base:689) 0.608ms
            < __execute_raw (base:772) 4.783ms
            > result (default:233)
              > get_result_proxy (postgres:192)
                > _is_server_side (postgres:180)
                < _is_server_side (postgres:181): False 0.051ms
                > __init__ (base:1126)
                  > <lambda> (default:176)
                    > <lambda> (default:180)
                      > _branch (base:538)
                        > __init__ (base:524)
                        < __init__ (base:530) 0.114ms
                      < _branch (base:547): <sqlalchemy.engine.base.Connection object at 0x00BD1350> 0.298ms
                    < <lambda> (default:180): <sqlalchemy.engine.base.Connection object at 0x00BD1350> 0.475ms
                    > <lambda> (base:549)
                    < <lambda> (base:549): Engine(postgres://postgres:djvpg@localhost/geniusql_bench) 0.048ms
                  < <lambda> (default:176): Engine(postgres://postgres:djvpg@localhost/geniusql_bench) 0.888ms
                  > is_debug_enabled (logging:68)
                    > isEnabledFor (logging/__init__:1139)
                      > getEffectiveLevel (logging/__init__:1125)
                      < getEffectiveLevel (logging/__init__:1135): 30 0.130ms
                    < isEnabledFor (logging/__init__:1145): False 0.340ms
                  < is_debug_enabled (logging:69): False 0.541ms
                  > is_select (default:219)
                    > match (sre:126)
                      > _compile (sre:213)
                      < _compile (sre:218): <_sre.SRE_Pattern object at 0x00BAB7F0> 0.086ms
                    < match (sre:129): <_sre.SRE_Match object at 0x00BC49F8> 0.261ms
                  < is_select (default:222): True 0.431ms
                  > _init_metadata (base:1151)
                    > _create_key_cache (base:1182)
                      > __init__ (util:51)
                      < __init__ (util:52) 0.049ms
                    < _create_key_cache (base:1204): {} 0.321ms
                    > __getattr__ (pool:404)
                    < __getattr__ (pool:405): (('nextval', 20, 2, 8, None, None, None),) 0.053ms
                    > dbapi_type_map (default:30)
                    < dbapi_type_map (default:34): {} 0.048ms
                    > decode_result_columnname (default:25)
                      > decode (utf_8:15)
                      < decode (utf_8:16): (u'nextval', 7) 0.053ms
                    < decode_result_columnname (default:28): u'nextval' 0.234ms
                    > dialect_impl (types:49)
                    < dialect_impl (types:51): NullType() 0.060ms
                  < _init_metadata (base:1179) 1.758ms
                < __init__ (base:1135) 4.304ms
              < get_result_proxy (postgres:196): <sqlalchemy.engine.base.ResultProxy object at 0x00BD1E30> 4.737ms
            < result (default:234): <sqlalchemy.engine.base.ResultProxy object at 0x00BD1E30> 4.917ms
          < _execute_text (base:726): <sqlalchemy.engine.base.ResultProxy object at 0x00BD1E30> 13.304ms
        < execute (base:715): <sqlalchemy.engine.base.ResultProxy object at 0x00BD1E30> 13.544ms
        > scalar (base:1318)
          > _fetchone_impl (base:1284)
            > __getattr__ (pool:404)
            < __getattr__ (pool:405): <built-in method fetchone of psycopg2._psycopg.cursor object at 0x00B94978> 0.051ms
          < _fetchone_impl (base:1285): (13L,) 0.250ms
          > _process_row (base:1291)
            > __init__ (base:1431)
            < __init__ (base:1436) 0.072ms
          < _process_row (base:1292): (13L,) 0.308ms
          > __getitem__ (base:1462)
            > _get_col (base:1280)
              > __getitem__ (util:53)
              < __getitem__ (util:55): (NullType(), NullType(), 0) 0.063ms
              > convert_result_value (types:193)
              < convert_result_value (types:194): 13L 0.048ms
            < _get_col (base:1282): 13L 0.503ms
          < __getitem__ (base:1467): 13L 0.694ms
          > close (base:1206)
            > close (pool:398)
            < close (pool:400) 0.063ms
            > <lambda> (base:1140)
              > <lambda> (default:182)
              < <lambda> (default:182): <sqlalchemy.engine.base.Connection object at 0x00BDACF0> 0.048ms
            < <lambda> (base:1140): <sqlalchemy.engine.base.Connection object at 0x00BDACF0> 0.229ms
            > <lambda> (base:552)
            < <lambda> (base:552): False 0.050ms
          < close (base:1219) 0.796ms
        < scalar (base:1327): 13L 2.721ms
      < visit_sequence (postgres:635): 13L 17.369ms
    < traverse_single (sql:896): 13L 17.584ms
  < get_column_default (base:1532): 13L 17.773ms
< get_column_default (postgres:631): 13L 18.196ms

Step 3: Properties

Retrieving rows/objects by ID: [benchmark chart for step 3: properties]

Again, Storm and Geniusql are close, but SQLAlchemy lags behind. There must be some things that SQLAlchemy is doing that the others are not.

Geniusql uses the pattern: parse expression, write SQL, fetch, iterate over result rows (and coerce inbound values as they are yielded).

Storm does it a little differently. The first time you call store.find (at least, I'm assuming it's only the first time), Storm pre-constructs SQL for you, including INSERT INTO, autoincrement ID getters, SELECT by ID, etc. It's the actual find().one() call that does the traditional work. The only significant difference from Geniusql in this step is that Storm's inbound type adapters seem to be slower (by a factor of 10 or so).

There's no single thing I can point to that SQLAlchemy does in this case that the others don't; it just seems to take more steps in general to do each phase. The correlations seem to be taking the most time; SQLAlchemy takes about 5 times longer to generate the SQL than Geniusql does. And, as in the insert step, I'm not sure the bind params are helping SQLAlchemy. But then, Storm does parameter binding and manages to stay fast.

Step 4: Expressions

Retrieving rows/objects with various WHERE clauses: [benchmark chart for step 4: expressions]

Oh, dear. It looks like Storm needs some work in this department. Those pre-calculated fetch-by-ID SQL statements are handy, but can't apply to every situation. But let's look at SQLAlchemy first:

SQLAlchemy does a lot of traversal of tree-like structures. Some of this could/should be cached. For example, Geniusql stores the quoted names of each column, so only a single attribute lookup is needed for each one in a SELECT clause; fetching all the qnames and assembling them into the final SQL takes about 1 ms in my trace log. SQLAlchemy, on the other hand, makes about 24 Python function calls (4.5 ms) for each column. The actual fetch-iterate-and-coerce step is practically the same for both libraries.
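The difference in column handling is roughly this (a hand-rolled illustration of the idea, not Geniusql's actual code): quote each column name once, at table-definition time, and SELECT generation becomes a single attribute lookup per column.

class Column(object):
    def __init__(self, name):
        self.name = name
        # Quote once up front; every later SELECT just reads .qname.
        self.qname = '"%s"' % name.replace('"', '""')

columns = [Column('ID'), Column('Name'), Column('Species'), Column('Legs')]
sql = 'SELECT %s FROM "Animal"' % ", ".join(c.qname for c in columns)
# -> SELECT "ID", "Name", "Species", "Legs" FROM "Animal"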

Storm takes about 3.4ms to generate each column name in a SELECT clause. It also does some odd things with inbound coercion: it gets a decimal.Decimal back for a column that was declared to be Float (?! Zoo.Admission), and coerces that plus the unicode Zoo.Name, but doesn't run a coercion function at all for the date fields. Guess it knows/trusts psycopg will return valid date objects. Ah, so then it wraps up the decimal scalar value in a storm.variables.FloatVariable object. Are you allowed to use binary floating-point "all the way down" in Storm?

At any rate, it takes Storm about 60ms in my trace log to return the first object for each row, and then about 20ms per row/object thereafter. Geniusql takes about 20ms for the first row, and under 2ms for each row thereafter. It's a good bet that that's the bottleneck, and is most likely due to slow inbound adapters.

Step 5: Crazy stuff

Retrieving rows with certain fields only (projection), use of builtin functions (TODAY/NOW), and use of DISTINCT: [benchmark chart for step 5: aggregates]

This one's a little crazy for several reasons. First, it tests some dissimilar things:

  • The 'Legs' attribute of the 'Animal' table is returned for all rows.
  • The 'Species' and 'Lifespan' columns are then retrieved for all rows.
  • The 'Zoo.Founded' date attribute is compared using an SQL function like TODAY or NOW.
  • A DISTINCT query is run on the Animal table.

Second, I couldn't find a way to get Storm to write TODAY into the SQL, so that test cheats a bit and just sticks datetime.datetime.today into the initial expression. I'm pretty convinced that's why the Storm times are so low on this test.

But let's break down the trace log times for the first test, fetching SELECT Legs FROM Animal. Here's SQLAlchemy:

view = select([Animal.c.Legs]).execute().fetchall()
legs = [x[0] for x in view]

[Animal.c.Legs] =  1.3 ms
select()        =  2.9 ms
execute()       = 63.6 ms
fetchall()      = 15.5 ms
listcomp        = 10.4 ms

Here's the Geniusql equivalent:

view = db.select((Animal, ['Legs']))
legs = [x[0] for x in view]

select()        = 13.2 ms
+- fetch()      =     1.9 ms
listcomp        =  4.2 ms

And the Storm timings:

legs = getall(Animal.Legs)

getall()        = 13.8 ms
+- execute()    =    10.0 ms
+- get_all()    =     2.7 ms

I just noticed I forgot the listcomp in the Storm test. Meh.

The third query is more complicated than the ones we've seen so far, so it takes up most of the time in this entire step. Geniusql takes 74.455ms to write the SQL and fetch the data. SQLAlchemy takes 17 ms just parsing all of the overridden operators in the expression (Storm overrides operators, too, but typically does each one in 1/3 the time). Then select() is 3 ms, execute is 115 ms (!), and fetchall is 11 more. I've been thinking lately of adding compound SELECTs to Geniusql, but if that means doubling the execution time of SELECTs that have no correlated subqueries, perhaps I'll wait a while. Just kidding--I'm sure it can be done more simply; if SQLAlchemy could short-circuit the traversal of ClauseVisitors in the common case (one table, no compounded SELECTs), it might improve dramatically.

Step 6: Editing

Updating rows/objects: [benchmark chart for step 6: editing]

No surprises here. SQLAlchemy parses the WHERE clause of the UPDATE statements faster than Geniusql, but here come those correlated subquery traversals and bind params again to slow the whole thing down.

I just noticed something else: Geniusql uses the following sequence of events to grab a connection:

> get (conns:241)
    > id (conns:271)
    < id (conns:273): 0.049ms
< get (conns:269): 0.280ms

Here's the SQLAlchemy system:

> contextual_connect (base:1036)
  > __init__ (base:524)
    > raw_connection (base:1086)
      > connect (pool:157)
        > __init__ (pool:262)
          > __init__ (pool:256)
          < __init__ (pool:257) 0.055ms
          > get (pool:171)
            > do_get (pool:533)
              > get (queue:124)
                > acquire (threading:93)
                  > currentThread (threading:672)
                  < currentThread (threading:674): <_MainThread(MainThread, started)> 0.058ms
                  > _note (threading:44)
                  < _note (threading:45) 0.050ms
                < acquire (threading:109): True 0.518ms
                > _empty (queue:181)
                < _empty (queue:182): False 0.049ms
                > _get (queue:193)
                < _get (queue:194): <sqlalchemy.pool._ConnectionRecord object at 0x00C0F1F0> 0.050ms
                > notify (threading:236)
                  > _is_owned (threading:143)
                    > currentThread (threading:672)
                    < currentThread (threading:674): <_MainThread(MainThread, started)> 0.054ms
                  < _is_owned (threading:144): True 0.232ms
                  > _note (threading:44)
                  < _note (threading:45) 0.047ms
                < notify (threading:243) 0.649ms
                > release (threading:111)
                  > currentThread (threading:672)
                  < currentThread (threading:674): <_MainThread(MainThread, started)> 0.055ms
                  > _note (threading:44)
                  < _note (threading:45) 0.048ms
                < release (threading:119) 0.487ms
              < get (queue:157): <sqlalchemy.pool._ConnectionRecord object at 0x00C0F1F0> 2.532ms
            < do_get (pool:536): <sqlalchemy.pool._ConnectionRecord object at 0x00C0F1F0> 2.743ms
          < get (pool:172): <sqlalchemy.pool._ConnectionRecord object at 0x00C0F1F0> 2.914ms
          > get_connection (pool:219)
          < get_connection (pool:234): <connection object at 0x00B94E58;
              dsn: 'dbname=geniusql_bench host=localhost user=postgres password=*****', closed: 0> 0.079ms
          > __get__ (logging:86)
            > _get_instance_name (logging:56)
            < _get_instance_name (logging:60): 'sqlalchemy.pool.QueuePool.0x..50' 0.061ms
            > getLogger (logging/__init__:1225)
              > getLogger (logging/__init__:832)
                > _acquireLock (logging/__init__:171)
                  > acquire (threading:93)
                    > currentThread (threading:672)
                    < currentThread (threading:674): <_MainThread(MainThread, started)> 0.056ms
                    > _note (threading:44)
                    < _note (threading:45) 0.048ms
                  < acquire (threading:109): True 0.496ms
                < _acquireLock (logging/__init__:181) 0.687ms
                > _releaseLock (logging/__init__:183)
                  > release (threading:111)
                    > currentThread (threading:672)
                    < currentThread (threading:674): <_MainThread(MainThread, started)> 0.056ms
                    > _note (threading:44)
                    < _note (threading:45) 0.048ms
                  < release (threading:119) 0.481ms
                < _releaseLock (logging/__init__:188) 0.658ms
              < getLogger (logging/__init__:862): <logging.Logger instance at 0x00B96F58> 1.775ms
            < getLogger (logging/__init__:1232): <logging.Logger instance at 0x00B96F58> 1.994ms
            > getEffectiveLevel (logging/__init__:1125)
            < getEffectiveLevel (logging/__init__:1135): 30 0.134ms
          < __get__ (logging:90): False 2.731ms
        < __init__ (pool:273) 6.473ms
        > checkout (pool:319)
        < checkout (pool:325): <sqlalchemy.pool._ConnectionFairy object at 0x00C0FF50> 0.092ms
      < connect (pool:159): <sqlalchemy.pool._ConnectionFairy object at 0x00C0FF50> 6.878ms
    < raw_connection (base:1089): <sqlalchemy.pool._ConnectionFairy object at 0x00C0FF50> 7.059ms
  < __init__ (base:530) 7.308ms
< contextual_connect (base:1042): <sqlalchemy.engine.base.Connection object at 0x00C0F910> 7.486ms

Houston, we have a problem. Don't log if you don't have to (especially when using the Python stdlib's extremely-nested logging module, which is slow even when turned off), and use a map of conns to threads instead of going to the Queue for every request. Geniusql manages this by using weakref wrappers on the conns, so when a thread no longer references its mapped connection, it's returned to the pool. Storm gets around this by not having a pool, or even threadsafe Store objects--each Store has its own connection, so it "maps conns to threads" by limiting you to one Store+conn per thread.
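A minimal sketch of that last idea (not Geniusql's actual code): map connections to threads, and let a weakref callback recycle the raw connection once its thread no longer references the wrapper. The connect argument is any zero-argument factory that returns a DB-API connection.

import threading
import weakref

class ConnWrapper(object):
    """Thin per-thread wrapper around a raw DB-API connection."""
    def __init__(self, raw):
        self.raw = raw

    def cursor(self):
        return self.raw.cursor()

class WeakrefPool(object):
    def __init__(self, connect):
        self.connect = connect
        self.idle = []       # raw connections ready for reuse
        self.threads = {}    # thread ident -> weakref to that thread's wrapper

    def get(self):
        ident = threading.current_thread().ident
        ref = self.threads.get(ident)
        wrapper = ref() if ref is not None else None
        if wrapper is None:
            raw = self.idle.pop() if self.idle else self.connect()
            wrapper = ConnWrapper(raw)

            def reclaim(wref, ident=ident, raw=raw, pool=self):
                # Fires when the thread drops its last reference to the
                # wrapper (or the thread dies): recycle the raw connection.
                pool.threads.pop(ident, None)
                pool.idle.append(raw)

            self.threads[ident] = weakref.ref(wrapper, reclaim)
        return wrapper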

That's enough analysis for now. :) Hope you enjoyed the ride.

08/10/07

Categories: Dejavu

Looks like I reinvented the wheel again

I just followed a link from LtU to a paper on Safe Query Objects. The content of the paper was immediately familiar to me, since Geniusql does exactly what is described therein: use native language (Python) semantics to construct remote queries.

Notes while I read:

Despite their usefulness, call level interfaces have a number of significant problems. First, the embedded database programs are not checked until they are passed to the CLI at runtime. As a result, the syntax and types of database programs are not checked statically, but instead result in runtime errors. This is true despite the fact that the database structure is almost always static and known when the client program is compiled.

Geniusql deals with this by obtaining and checking a complete model of the database types. For example, an object with a unicode attribute would be mapped to a Table object with (say) an instance of the geniusql.providers.postgres.TEXT class.

Second, programs that use call level interfaces are difficult to write and maintain. There are important classes of queries that must be constructed dynamically at runtime. Manipulating programs as strings is complex and error-prone due to the complex rules for nesting expressions, quoting constants, and the interplay between embedded and host languages.

Which is why Geniusql eschews strings in favor of Python lambdas.

Concepts that are relatively straightforward to express in a language, like query parameters, are awkward to specify and invoke via an API. Query results are represented as untyped objects that are accessed by string names. There are many subtle but unchecked dependencies between the query being executed and the code that decodes its results.

Geniusql determines both the database type and the Python type of each result column, and provides adapters between them (which you can override if needed).

Finally, call level interfaces make it difficult to reuse queries; doing so involves complex manipulation of programs at runtime, while avoiding name conflicts and ensuring consistency of the resulting query.

True, but reuse can manifest itself at many levels. Quite frankly, I've found that the gains from reusing DB query plans are often completely offset by the cost of constructing and maintaining the requisite prepared statements in the library. Geniusql caches the generated SQL for each AST instead.

In its simplest form, a safe query object is just an object containing a boolean method that can be used to filter a collection of candidate objects. ...syntax and types are checked at compile time: if manager is misspelled, a compile time error is produced.

Geniusql uses Python lambdas in a similar fashion. Of course, Python is dynamically typed, so we only get syntax checks at compile-time, but that's a huge benefit all by itself. And, because we're using Python, we can skip all the metaprogramming in the Java solution. Instead of writing:

class SalaryLimit instantiates RemoteQueryJDO
                  extends SafeQuery<Employee>
{
    double limit; /* parameter */
    SalaryLimit(double limit) {
        this.limit = limit;
    }
    boolean filter(Employee employee) {
        return employee.salary > limit;
    }
}

we can get it all done in one line:

Employee.select(lambda e: e.salary > limit)

07/09/07

Categories: Python, Dejavu, CherryPy

Lines of code

I was asked last week how many lines of code some of my projects are, and didn't have an answer handy. Fortunately, it's easy to write a LOC counter in Python:

"""Calculate LOC (lines of code) for a given package directory."""

import os
import re

def loc(path, pattern=r"^.*\.py$"):
    """Return the number of lines of code for all files in the given path.

    If the 'pattern' argument is provided, it must be a regular expression
    against which each filename will be matched. By default, all filenames
    ending in ".py" are analyzed.
    """
    lines = 0
    for root, dirs, files in os.walk(path):
        for name in files:
            if re.match(pattern, name):
                f = open(os.path.join(root, name), 'rb')
                for line in f:
                    line = line.strip()
                    if line and not line.startswith("#"):
                        lines += 1
                f.close()
    return lines

I've added the above to my company's public-domain misc package at http://projects.amor.org/misc/. Here are the results for my high-priority projects (some are proprietary):

>>> from misc import loc
>>> loc.loc(r"C:\Python24\Lib\site-packages\raisersedge")
2290
>>> loc.loc(r"C:\Python24\Lib\site-packages\dejavu")
7703
>>> loc.loc(r"C:\Python24\Lib\site-packages\geniusql")
9509
>>> loc.loc(r"C:\Python24\Lib\site-packages\cherrypy")
16391
>>> loc.loc(r"C:\Python24\Lib\site-packages\endue")
9339
>>> loc.loc(r"C:\Python24\Lib\site-packages\mcontrol")
11512
>>> loc.loc(r"C:\Python24\Lib\site-packages\misc")
4648

~= 61 kloc. Pretty hefty for a single in-house web app stack. :/ But, hey, nobody said integration projects were easy.

01/23/07

Categories: Python, Dejavu

Mapping Python types to DB types

Reading Barry Warsaw's recent use of SQLAlchemy, I'm reminded once again of how ugly I find SQLAlchemy's PickleType and SQLObject's PickleCol concepts. I have nothing against the concept of pickle itself, mind you, but I do have an issue with implementation layer names leaking into application code.

The existence of a PickleType (and BlobType, etc.) means that the application developer needs to think in terms of database types. This adds another mental model to the user's (my) tiny brain, one which is unnecessary. It constantly places the burden on the developer to map Python types to database types.

For Dejavu, I started in the opposite direction, and decided that object properties would be declared in terms of Python types, not database types. When you write a new Unit class, you even pass the actual type (such as int or unicode) to the property constructor instead of a type name! Instead of separate classes for each type, there is only a single UnitProperty class. This frees programmers from having to map types in their code (and therefore in their heads); it removes an entire mental model (DB types) at coding time, and allows the programmer to remain in the Python flow.
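For example, a Unit class is declared something like this (from memory of Dejavu's published examples; the class and attribute names here are invented):

import datetime
import dejavu

class Exhibit(dejavu.Unit):
    # Properties are declared with Python types, not DB type names;
    # Dejavu/Geniusql choose (or introspect) the corresponding DB types.
    Name = dejavu.UnitProperty(unicode)
    Opened = dejavu.UnitProperty(datetime.date)
    Admission = dejavu.UnitProperty(float)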

However, the first versions of Dejavu went too far in this approach, mostly due to the fact that Dejavu started from the "no legacy" side of ORM development; that is, it assumed your Python model would always create the database. This allowed Dejavu to choose appropriate database types for the declared Python types, but meant that existing applications (with existing data) were difficult to port to Dejavu, because the type-adaptation machinery had no way to recognize and handle database types other than those Dejavu preferred to create.

Dejavu 1.5 (soon to be released) corrects this by allowing true "M x N" type adaptation. What this means is that you can continue to directly use Python types in your model, but you also gain complete control over the database types. The built-in type adapters understand many more (Python type <-> DB type) adaptation pairs, now, but you also have the power to add your own. In addition, Dejavu now has DB type-introspection capabilities—the database types will be discovered for you, and appropriate adapters used on the fly. [...and Dejavu now allows you to automatically create Python models from existing databases.]

In short, it is possible to have an ORM with abstractions that don't leak (at least not on a regular basis—the construction of a custom adapter requires some thought ;) ).

08/24/06

Categories: Python, Dejavu, CherryPy

Upgrades to Python 2.5

I probably waited too long, but today I upgraded both CherryPy (3.0alpha/trunk) and Dejavu (1.5alpha/trunk) to Python 2.5. The moves were surprisingly easy:

CherryPy

There were three changes in all:

  1. When the WSGI server socket is closed, socket.accept now fails with a socket.error "Socket operation on non-socket". I "fixed" this by just ignoring the error.
  2. The output of Response.SimpleCookie now has no trailing semicolon as it did in Python 2.4. Just had to fix the test suite to be aware of that.
  3. Some attributes of unittest.TestCase moved from double-underscore names to single; webtest had a custom subclass of it. This was easy enough to fix: define a different method for 2.5 than 2.4 or less.

[P.S. I've noticed CP 3 is about 3% slower in 2.5 than 2.4, even with the zombie frames and other optimizations. Hmmm.]

Dejavu

Amazingly, even though Dejavu makes extensive use of bytecode hacks, there was only one real change! The "logic" module needed an upgrade to the "comparison" function, which produces an Expression via the types.CodeType() constructor. Apparently, function args are no longer included in co_names, and co_consts no longer includes a leading 'None' value (except when there are cell references?). Finally, co_flags for "normal" functions now includes CO_NESTED by default. These changes also forced some parallel upgrades to the test suite.
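For anyone chasing the same changes, here's a small Python 2 snippet that prints the code-object fields mentioned above, so they can be diffed between 2.4 and 2.5 (the comparison function here is just a stand-in):

import dis

def comparison(unit):
    return unit.Size > 5

code = comparison.func_code
print(code.co_names)                 # did the argument names disappear?
print(code.co_consts)                # leading None present or not?
print(bool(code.co_flags & 0x0010))  # 0x0010 is CO_NESTED
dis.dis(comparison)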

While fixing the above, however, I noticed a long-standing bug in Dejavu's LambdaDecompiler. Python 2.4 used ROT_THREE before a STORE_SUBSCR, and this worked in Dejavu; but Python 2.5 uses ROT_TWO before STORE_SUBSCR, which showed me I had the stack-popping backwards in both functions. Bah. Fixed now.

Absolute imports

Both packages needed a good bit of work changing some relative import statements into absolute ones. Not really hard, just boring. ;)

Thanks to the Python core devs for a very smooth transition!

06/28/06

Categories: IT, Python, Dejavu

The Fourth Way

Ted Neward has written a good discussion of Object-Relational Mapper concerns. I'd like to react to, and connect, a couple of points he makes that at first seem unrelated:

...we typically end up with one of Query-By-Example (QBE), Query-By-API (QBA), or Query-By-Language (QBL) approaches.

A QBE approach states that you fill out an object template of the type of object you're looking for...
a "Query-By-API" approach, in which queries are constructed by Query objects...
a "Query-By-Language" approach, in which a new language, similar to SQL but "better" somehow, is written...

and

Several possible solutions present themselves...

5. Integration of relational concepts into the languages. Developers simply accept that this is a problem that should be solved by the language, not by a library or framework...bring relational concepts (which, at heart, are set-based) into mainstream programming languages, making it easier to bridge the gap between "sets" and "objects"...[such as] direct integration into traditional O-O languages, such as the LINQ project from Microsoft...

I propose (and implemented in Dejavu) a fourth approach, distinct from QBE, QBA, and QBL. Rather than build a DSL on top of a programming language (as QBL does), use the language itself. Rather than change the programming language by introducing relational syntax (as LINQ does), use the language itself. In Dejavu, you write plain old Python functions which take an object and return True or False. Iterate over a collection of objects and it works as a filter. Pass it to the storage backend and it is translated into SQL for you. Most commonly, you pass it to the library and it does both for you: it iterates over its in-memory cache of objects and merges in new objects queried from storage.

Let's call it... "query". It's not "by" anything. It has an infinitesimal learning curve. LINQ is, in essence, shoehorning higher-order functions into its various target languages in a very limited domain. Why not use a programming language that has real HOF's?
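A concrete sketch of the idea (the sandbox/recall spelling is from memory of Dejavu's API and may differ):

# A plain Python predicate:
wealthy = lambda e: e.Salary > 100000

# ...filters objects already in memory:
rich = [e for e in employees if wealthy(e)]

# ...and the very same lambda, handed to Dejavu, is translated to SQL
# and merged with the in-memory cache:
rich = sandbox.recall(Employee, wealthy)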


