

07/09/07

Permalink 12:27:50 pm, by fumanchu Email , 241 words   English (US)
Categories: Python, Dejavu, CherryPy

Lines of code

I was asked last week how many lines of code some of my projects are, and didn't have an answer handy. Fortunately, it's easy to write a LOC counter in Python:

"""Calculate LOC (lines of code) for a given package directory."""

import os
import re

def loc(path, pattern=r"^.*\.py$"):
    """Return the number of lines of code for all files in the given path.

    If the 'pattern' argument is provided, it must be a regular expression
    against which each filename will be matched. By default, all filenames
    ending in ".py" are analyzed.
    """
    lines = 0
    for root, dirs, files in os.walk(path):
        for name in files:
            if re.match(pattern, name):
                f = open(os.path.join(root, name), 'rb')
                for line in f:
                    line = line.strip()
                    if line and not line.startswith("#"):
                        lines += 1
                f.close()
    return lines

I've added the above to my company's public-domain misc package at http://projects.amor.org/misc/. Here are the results for my high-priority projects (some are proprietary):

>>> from misc import loc
>>> loc.loc(r"C:\Python24\Lib\site-packages\raisersedge")
2290
>>> loc.loc(r"C:\Python24\Lib\site-packages\dejavu")
7703
>>> loc.loc(r"C:\Python24\Lib\site-packages\geniusql")
9509
>>> loc.loc(r"C:\Python24\Lib\site-packages\cherrypy")
16391
>>> loc.loc(r"C:\Python24\Lib\site-packages\endue")
9339
>>> loc.loc(r"C:\Python24\Lib\site-packages\mcontrol")
11512
>>> loc.loc(r"C:\Python24\Lib\site-packages\misc")
4648

~= 61 kloc. Pretty hefty for a single in-house web app stack. :/ But, hey, nobody said integration projects were easy.

06/24/07

Permalink 03:22:20 pm, by fumanchu Email , 1679 words   English (US)
Categories: Python, CherryPy, WSGI

Web Site Process Bus

WSGI has enabled an ecosystem where site deployers can, in theory, mix multiple applications from various frameworks into a single web site, served by a single HTTP server. And that's great. But there are several areas where WSGI is purposefully silent, where there is still room for standards-based collaboration:

  • managing WSGI HTTP servers (start/stop/restart)
  • construction of the WSGI component graph (servers -> middlewares -> apps)
  • main process state control (start/stop/restart/graceful)
  • site-wide services (autoreload, thread monitors, site logging)
  • config file formats and parsing for all of the above

Most frameworks address all of the above already, to varying degrees; however, they still tend to do so in a very monolithic manner. Paste is notable for attempting to provide some of them in discrete pieces (especially WSGI graph construction and a config format tailor-made for it).

But I'm going to focus here on just two of these issues: process state and site-wide services. I believe we can separate these two from the rest of the pack and provide a simple, common specification for both, one that's completely implementable in 100 lines of code by any framework.

The problem

One of the largest issues when combining multiple frameworks in a single process is answering the question, "who's in control of the site as a whole?" Multiple frameworks mean multiple code bases, each of which thinks it should provide:

  • the startup script
  • daemonization
  • dropping privileges
  • PID file management
  • site logging
  • autoreload
  • signal handling
  • sys.exit calls
  • atexit handlers
  • main thread error trapping

...and they often disagree about those behaviors. Throw Apache or lighttpd into the mix and you've got some serious deployment issues.

The typical solution to this is to have each component provide a means of shutting off each process-controlling feature. For example, CherryPy 3 obeys the config entry engine.autoreload_on = False, while django-admin.py takes a --noreload command-line arg. But these are different for each framework, and difficult to coordinate as the number of components grows. Since, for example, only one autoreloader is needed per site, a more usable solution would be to selectively turn on just one instead of turning off all but one.

For a worse example, let's look at handling SIGTERM. Currently, we have the following:

[Image: SIGTERM before WSPBus]

OK, Django doesn't actually provide a SIGTERM handler, but you get the idea. If several components register a SIGTERM handler, only one of them will "win" by virtue of being the last one to register. And chances are, the winning handler will shut down its component cleanly and then exit the process, leaving other components to fend for themselves.

In fact, there's a whole list of negatives for the monolithic approach to process control and site services:

  1. Frameworks and servers have to provide all desirable site behaviors, or force their packagers/deployers to develop them ad-hoc.
  2. Frameworks and servers all have different APIs for changing process state. Race conditions and unpredictable outcomes are common.
  3. Frameworks and servers all have different APIs for reacting to process state changes. Resource acquisition and cleanup become a huge unknown.
  4. Frameworks and servers have to know they're being deployed alongside other frameworks and servers.

We could attempt to solve this with a Grand Unified Site Container, but that would most likely:

  1. force a single daemon implementation, thus eliminating innovation in process invocation,
  2. force a single configuration syntax, thus shutting out competing declaration styles,
  3. force a static set of site services, limiting any improvements in process interaction,
  4. add an additional dependency to every framework,
  5. deny using HTTP servers like Apache and lighttpd in the same process (since they do their own process control), and
  6. be a dumping-ground for every other aspect of web development, from databases to templating.

A solution: the Web Site Process Bus

The Web Site Process Bus uses a simple publish/subscribe architecture to loosely connect WSGI components with site services. Here's our SIGTERM example, implemented with a WSPBus:

[Image: SIGTERM after WSPBus]

The singleton Bus object does three things:

  1. It models server-availability state via a "state" attribute, which is a sentinel value from the set: (STARTING, STARTED, STOPPING, STOPPED).
  2. It possesses methods to change the state, such as "start", "stop", "restart", "graceful", and "exit".
  3. It possesses "publish" and "subscribe"/"unsubscribe" methods for named channels.

Each method which changes the state also has an equivalent named channel. Any framework, server, or other component may register code as a listener on any channel. For example, a web framework can register database-connection code to be run when the "start" method is called, and disconnection code for the "stop" method:

bus.subscribe("start", orm.connpool.start)
bus.subscribe("stop", orm.connpool.stop)

Any channel which has no listeners will simply ignore all published messages. This allows component code to be much simpler; callers do not need to know whether their actions are appropriate--they are appropriate if a listener is subscribed to that channel.

In addition to the builtin state-transition channels, components are free to define their own pub/sub channels. CherryPy's current implementation, for example, defines the additional channels start_thread and stop_thread, and registers channels for signals, such as "SIGTERM", "SIGHUP", and "SIGUSR1" (which then typically call bus methods like "restart" and "exit"). Some of these could be standardized. Other custom channels would be more naturally tightly-coupled, requiring awareness on the part of callers and callees.
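
For instance (a sketch only: the listeners below and the thread-id argument are hypothetical, though the start_thread/stop_thread channel names are CherryPy's), a component that needs per-thread resources might do:

def acquire_conn(thread_id):
    # Hypothetical per-thread setup, e.g. binding a DB connection.
    print("thread %s: connection acquired" % thread_id)

def release_conn(thread_id):
    print("thread %s: connection released" % thread_id)

bus.subscribe('start_thread', acquire_conn)
bus.subscribe('stop_thread', release_conn)

# ...and whichever component owns the worker threads publishes the events:
bus.publish('start_thread', 7)
bus.publish('stop_thread', 7)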

Since WSPB state-changing method calls are expected to be sporadic, and often fundamentally serial (e.g., "autoreload"), their execution is synchronous. Subscribers (mostly of custom channels), however, are free to return immediately, and continue their operation asynchronously.

Benefits

The WSPB cleanly solves all of the problems outlined above. The various components are no longer in competition over process state; instead, there is a single race-free state machine. However, no single component has to know whether or how many other components are deployed in the same site.

Frameworks and servers can provide a subset of all site services, with a common, imperative-Python API for deployers to add or substitute their own. However, the WSPB doesn't define a config syntax, so each framework can continue to provide its own unique layer to translate config into that API. A deployer of a combined Pylons/Zope website could choose a Pylons startup script and config syntax to manage the lifecycle of the Zope components.

The WSPB doesn't try to instantiate or compose WSGI components (server -> middleware -> app) either. So there's even room for site daemons which provide no traditional web app functionality; instead, they specialize in providing tools to compose WSGI component graphs via a config file or even a GUI.

It also "plays nice" with mod_python, mod_proxy, mod_wsgi, FastCGI, and SCGI. Those who develop WSGI gateways for these will have a clear incentive to consolidate their ad-hoc startup and shutdown models into the WSPB. For example, a modpython gateway can use apache.register_cleanup to just call bus.stop() instead of providing custom cleanup-declaration code.

Best of all, the WSPB can be defined as a specification which any framework can provide in a small amount of code. Rather than attempt to draft the specification here (that can be hashed out on Web-SIG, since this is by no means complete), I'm just going to provide an example:

try:
    set
except NameError:
    from sets import Set as set
import sys
import threading
import time
import traceback as _traceback


# Sentinel objects used to indicate the state of the bus.
class _StateEnum(object):
    class State(object):
        pass
states = _StateEnum()
states.STOPPED = states.State()
states.STARTING = states.State()
states.STARTED = states.State()
states.STOPPING = states.State()


class Bus(object):
    """Process state-machine and messenger for HTTP site deployment."""

    states = states
    state = states.STOPPED

    def __init__(self):
        self.state = states.STOPPED
        self.listeners = dict([(channel, set()) for channel
                               in ('start', 'stop', 'exit',
                                   'restart', 'graceful', 'log')])
        self._priorities = {}

    def subscribe(self, channel, callback, priority=None):
        """Add the given callback at the given channel (if not present)."""
        if channel not in self.listeners:
            self.listeners[channel] = set()
        self.listeners[channel].add(callback)

        if priority is None:
            priority = getattr(callback, 'priority', 50)
        self._priorities[(channel, callback)] = priority

    def unsubscribe(self, channel, callback):
        """Discard the given callback (if present)."""
        listeners = self.listeners.get(channel)
        if listeners and callback in listeners:
            listeners.discard(callback)
            del self._priorities[(channel, callback)]

    def publish(self, channel, *args, **kwargs):
        """Return output of all subscribers for the given channel."""
        if channel not in self.listeners:
            return []

        exc = None
        output = []

        items = [(self._priorities[(channel, listener)], listener)
                 for listener in self.listeners[channel]]
        items.sort()
        for priority, listener in items:
            # All listeners for a given channel are guaranteed to run even
            # if others at the same channel fail. We will still log the
            # failure, but proceed on to the next listener. The only way
            # to stop all processing from one of these listeners is to
            # raise SystemExit and stop the whole server.
            try:
                output.append(listener(*args, **kwargs))
            except (KeyboardInterrupt, SystemExit):
                raise
            except:
                self.log("Error in %r listener %r" % (channel, listener),
                         traceback=True)
                exc = sys.exc_info()[1]
        if exc:
            raise exc
        return output

    def start(self):
        """Start all services."""
        self.state = states.STARTING
        self.log('Bus starting')
        self.publish('start')
        self.state = states.STARTED

    def restart(self):
        """Restart the process (may close connections)."""
        self.stop()

        self.log('Bus restart')
        self.publish('restart')

    def graceful(self):
        """Advise all services to reload."""
        self.log('Bus graceful')
        self.publish('graceful')

    def block(self, state=states.STOPPED, interval=0.1):
        """Wait for the given state, KeyboardInterrupt or SystemExit."""
        try:
            while self.state != state:
                time.sleep(interval)
        except (KeyboardInterrupt, IOError):
            # The time.sleep call might raise
            # "IOError: [Errno 4] Interrupted function call" on KBInt.
            self.log('Keyboard Interrupt: shutting down bus')
            self.stop()
        except SystemExit:
            self.log('SystemExit raised: shutting down bus')
            self.stop()
            raise

    def stop(self):
        """Stop all services."""
        self.state = states.STOPPING
        self.log('Bus stopping')
        self.publish('stop')
        self.state = states.STOPPED

    def exit(self, status=0):
        """Stop all services and exit the process."""
        self.stop()

        self.log('Bus exit')
        self.publish('exit')
        sys.exit(status)

    def log(self, msg="", traceback=False):
        if traceback:
            exc = sys.exc_info()
            msg += "\n" + "".join(_traceback.format_exception(*exc))
        self.publish('log', msg)
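
And a minimal usage sketch against that example class (the listener names here are mine, not part of any spec; a real deployment would call bus.block() in the main thread and let a signal handler or worker thread call exit):

bus = Bus()

def connect():
    # e.g. open database connections, start a thread pool, etc.
    print("services started")

def disconnect():
    print("services stopped")

def log_to_stdout(msg):
    print(msg)

bus.subscribe('log', log_to_stdout)
bus.subscribe('start', connect)
bus.subscribe('stop', disconnect, priority=20)   # lower numbers run first

bus.start()     # logs 'Bus starting', then runs connect()
bus.stop()      # logs 'Bus stopping', then runs disconnect()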

06/05/07

Permalink 02:19:41 pm, by admin Email , 408 words   English (US)
Categories: IT, Python

Python concurrency syntax

Via Bill de hÓra, I ran across this thread on LtU, wherein Peter Van Roy comments:

The real problem is not threads as such; it is threads plus shared mutable state. To solve this problem, it's not necessary to throw away threads. It is sufficient to disallow mutable state shared between threads (mutable state local to one thread is still allowed).

...and Allan McInnes adds:

The "problem with threads" lies in the current approach to sharing state by default, and "pruning away nondeterminism" to get a correctly functioning system.

...and "dbfaken" adds:

Perhaps we should have strong syntax distinctions for mutation.

Since the first versions of Dejavu (my Python mediated-DB/ORM), I've noticed that this "pruning away nondeterminism" approach is exactly the wrong direction for systems which are designed to be thread-safe; we could instead explore languages and systems which allow us to "prune away determinism". By that I mean, mutable state should not be shared between threads by default; any mutable state which needs to be shared should be explicitly declared as such. This would make systems like Dejavu much simpler to create, use, and maintain.

I've often wondered what a "strong syntax distinction for [shared] mutation" would look like in Python. The simplest solution would probably have to:

  1. Make class.__dict__'s immutable. This is a natural choice given the normal usage patterns of classes by developers in the wild: generally, a class exists to share methods between instances. There are valid use cases for classes which are mutable, but they are rare; perhaps a sentinel of some kind provided by object could re-enable mutability for classes, but it should be off by default.
  2. Make all module.__dict__'s immutable. This has already been suggested on python-dev (IIRC by GvR himself), although I believe it was suggested as a way to reduce monkeypatching.
  3. Provide a @shared annotation for explicitly declaring shared mutable data.

This is just one solution to a small set of use cases: threaded programs where the explicit shared state is small compared to the total lines of code. I haven't the experience to state whether such a model is inherently damaging to other concurrent needs and designs. It has the benefit, however, of having little impact on single-threaded programs.
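
Lacking such syntax, the nearest approximation today is convention rather than language support: wrap the few pieces of genuinely shared mutable state in an explicit container and treat everything else as thread-private. A rough illustration (the Shared class below is mine, a sketch of the intent, not a proposal):

import threading

class Shared(object):
    """Explicitly-declared shared mutable state.

    By convention, anything not wrapped in Shared is local to one thread.
    """
    def __init__(self, value):
        self._lock = threading.Lock()
        self._value = value

    def get(self):
        self._lock.acquire()
        try:
            return self._value
        finally:
            self._lock.release()

    def set(self, value):
        self._lock.acquire()
        try:
            self._value = value
        finally:
            self._lock.release()

# The single piece of mutable state these threads are allowed to share:
status = Shared("starting")
status.set("running")
print(status.get())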

Would such a feature help catapult Python into the "large systems" space?

05/15/07

Permalink 11:56:50 am, by fumanchu Email , 263 words   English (US)
Categories: IT, General

The Fu Filter

All systems fail, and complex systems fail in a nearly infinite number of ways, some anticipated, many unanticipated. You could publish a large manual for how to deal with every anticipated failure, but for sufficiently-complex systems, the labor of writing such a manual far outweighs the benefit of having it. Heck, the labor of reading such a manual far outweighs the benefits. Even the labor of advertising the manual outweighs the benefits. And let's not forget version control, editing, publishing, distribution, recollection, authorization, errata, indexing, and a host of other system-management duties.

Take laptop overheating. Yes, it happens. Yes, damage is done. But the damage of creating a new system to usefully and efficiently communicate the dangers of laptop overheating to all laptop users in your company is probably far greater.

But people still try. And it takes a long time to explain the above. Wouldn't it be great if you could use a single short phrase to mean all that?

Here's my contribution to the world of Getting Things Done: the Fu Filter. Use it to imply that the issue in question is not worth addressing in any meaningful way, because to do so would be more trouble than it's worth. For example, you could tell someone that laptop overheating "doesn't pass the Fu Filter." Those of you with sufficient computing experience may wish to spell it "Foo Filter" in honor of all foo everywhere. Since "fu" can mean happiness (with the right tone), you can also think of this as the "Happiness Filter".

03/16/07

Permalink 03:55:32 pm, by fumanchu Email , 23 words   English (US)
Categories: CherryPy

It's official: CherryPy rocks

From sucks-rocks.com:

CherryPy rocks

No, really. It rocks. Rocks, rocks, rocks.

(Thanks, jamwt!)

03/14/07

Permalink 06:08:42 pm, by fumanchu Email , 129 words   English (US)
Categories: IT

You are what you code

Hey, you. Do you realize what you're writing? The long-standing IT joke is that you always end up coding your own job out of existence. But what are you coding yourself into?

  • You're writing a framework that turns website creation into an assembly line. Do you really want to work on an assembly line?
  • You're writing an API that wraps a well-understood common object model with a domain-specific language. Do you really want to be an expert on a language nobody else knows?
  • You're writing a program that needs regular maintenance. Do you really want to clean software toilets for a living?
  • You're writing a community tool with a moderator mode. Do you really want to be a bouncer for the rest of your life?

Nobody else does, either.

02/25/07

Permalink 02:21:23 pm, by admin Email , 572 words   English (US)
Categories: Python, CherryPy, WSGI

PyCon 2007 and CherryPy

PyCon 2007 is nearing a close; here are some notes on how it affected CherryPy:

Web application deployment

Chad Whitacre (author of Aspen) herded several cats into a room on Sunday and forced us to discuss the various issues surrounding Python web application deployment. This is hinted at in the WSGI spec:

Finally, it should be mentioned that the current version of WSGI does not prescribe any particular mechanism for "deploying" an application for use with a web server or server gateway. At the present time, this is necessarily implementation-defined by the server or gateway. After a sufficient number of servers and frameworks have implemented WSGI to provide field experience with varying deployment requirements, it may make sense to create another PEP, describing a deployment standard for WSGI servers and application frameworks.

There were three basic realms where the participants agreed we could try to collaborate/standardize:

  1. Process control: stop, start, restart, daemonization, signal handling, socket re-use, drop privileges, etc. If you're familiar with CherryPy 3, you'll recognize this list as 95% of the current cherrypy.engine object. The CherryPy team has already been discussing ways of breaking up the Engine object; this may facilitate that (and vice-versa). Joseph Tate volunteered to look at socket re-use issues specifically, but the general consensus seemed to be that much of this would be hashed out on Web-SIG.

  2. WSGI stack composition: Jim Fulton proposed that we could all agree on Paste Deploy (at least a good portion of the API) to manage this in a cross-framework manner. Most heads nodded, "yes". Jim also proposed that each of the framework authors take the next week to refamiliarize themselves with Deploy, and then start pestering Ian Bicking with specific API issues. Ian suggested that he should fork Paste Deploy into another project specifically for this. For CherryPy, this would first mean offering standard egg entry points. [Personally, I'd like to standardize on a pure-Python API for deploy, not a config file format API. In other words, make the config file format optional, so that users of CP-only apps could avoid having to learn a distinct config file format for deployment. It should be possible to transform various config file formats into the same Python object(s).]

  3. Benchmarks: Jim also suggested we create a standard WSGI HTTP server benchmark suite, with various test applications and concurrency scenarios. This would compare various WSGI HTTP servers, as opposed to CherryPy's existing benchmark suite which compares successive versions of the full CP stack. Ian volunteered to begin work on that project (with the expectation that others would contribute substantial use cases, etc).

Others who were present for at least a portion of the long discussion: me, Mark Ramm, Kevin Dangoor, Ben Bangert, Jonathan Ellis, Matt Good, Brian Beck, and Calvin Hendryx-Parker.

WSGI middleware authoring

After some discussion with Mark (and he with Ian and Ben), we agreed that CherryPy could do more in the WSGI-middleware-authoring department. There is a continuous pressure to simply re-use or fix up the existing CherryPy request object to fill this need; however, there are some fundamental problems with that approach (such as the use of threadlocals to manage context, and the difficulty of streaming WSGI output through a CherryPy app). At the moment, I'm leaning toward adding a new API to CherryPy which would be similar to the application API, but specifically targeted at middleware authoring.

02/05/07

Permalink 04:25:45 pm, by fumanchu Email , 49 words   English (US)
Categories: IT

Feedburner is ruining feeds for us all

What should have been 7 HTTP requests is now 81, and what's worse, all of the Feedburner responses are 200s. This is no way to run an Internet. At the least, Feedburner, please do the fancy webhit dance for only 1 of the 3 gifs for each entry in the feed.

[Image: HTTP session]

01/23/07

Permalink 04:33:26 pm, by fumanchu Email , 86 words   English (US)
Categories: IT

Spam innards

I got an uncompleted bit of spam in my inbox today. Here's the end of the headers for fun:

Received: from 192.168.0.%RND_DIGIT (203-219-%DIGSTAT2-%STATDIG.%RND_FROM_DOMAIN [203.219.%DIGSTAT2.%STATDIG]) by mail%SINGSTAT.%RND_FROM_DOMAIN (envelope-from %FROM_EMAIL) (8.13.6/8.13.6) with SMTP id %STATWORD for <%TO_EMAIL>; %CURRENT_DATE_TIME
Message-Id: <%RND_DIGIT[10].%STATWORD@mail%SINGSTAT.%RND_FROM_DOMAIN> 
From: "%FROM_NAME" <%FROM_EMAIL>
Bcc:
Date: Tue, 23 Jan 2007 14:08:41 -0800
X-pstn-levels:     (S: 1.07668/99.82653 R:95.9108 P:95.9108 M:97.0282 C:98.6951 )
X-pstn-settings: 3 (1.0000:1.0000) s gt3 gt2 gt1 r p m c 
X-pstn-addresses: from <mahpuchotx@netpipe.com> [81/4] 
Return-Path: <mahpuchotx@netpipe.com>
X-OriginalArrivalTime: 23 Jan 2007 23:02:12.0495 (UTC) FILETIME=[8464F9F0:01C73F42]

Fascinating stuff.

Permalink 12:05:53 pm, by fumanchu Email , 433 words   English (US)
Categories: Python, Dejavu

Mapping Python types to DB types

Reading Barry Warsaw's recent use of SQLAlchemy, I'm reminded once again of how ugly I find SQLAlchemy's PickleType and SQLObject's PickleCol concepts. I have nothing against the concept of pickle itself, mind you, but I do have an issue with implementation layer names leaking into application code.

The existence of a PickleType (and BlobType, etc.) means that the application developer needs to think in terms of database types. This adds another mental model to the user's (my) tiny brain, one which is unnecessary. It constantly places the burden on the developer to map Python types to database types.

For Dejavu, I started in the opposite direction, and decided that object properties would be declared in terms of Python types, not database types. When you write a new Unit class, you even pass the actual type (such as int or unicode) to the property constructor instead of a type name! Instead of separate classes for each type, there is only a single UnitProperty class. This frees programmers from having to map types in their code (and therefore in their heads); it removes an entire mental model (DB types) at coding time, and allows the programmer to remain in the Python flow.
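
From memory, a model declaration reads roughly like this (the import path and the particular property names are my guesses, so the exact spelling may differ from the released API):

from dejavu import Unit, UnitProperty   # import path is a guess

class Zoo(Unit):
    # Properties are declared with Python types, not DB type names.
    Name = UnitProperty(unicode)
    Founded = UnitProperty(int)
    Admission = UnitProperty(float)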

However, the first versions of Dejavu went too far in this approach, mostly because Dejavu started from the "no legacy" side of ORM development; that is, it assumed your Python model would always create the database. This allowed Dejavu to choose appropriate database types for the declared Python types, but meant that existing applications (with existing data) were difficult to port to Dejavu, because the type-adaptation machinery had no way to recognize and handle database types other than those Dejavu preferred to create.

Dejavu 1.5 (soon to be released) corrects this by allowing true "M x N" type adaptation. What this means is that you can continue to directly use Python types in your model, but you also gain complete control over the database types. The built-in type adapters understand many more (Python type <-> DB type) adaptation pairs, now, but you also have the power to add your own. In addition, Dejavu now has DB type-introspection capabilities—the database types will be discovered for you, and appropriate adapters used on the fly. [...and Dejavu now allows you to automatically create Python models from existing databases.]

In short, it is possible to have an ORM with abstractions that don't leak (at least not on a regular basis—the construction of a custom adapter requires some thought ;) ).
