

Permalink 08:26:49 am, by fumanchu, 473 words, English (US)
Categories: IT

Automation will never have Zen

Jon Udell wrote about the Zen of automation, including

Either way, what’s missing is the sensitive feedback. To automate a task I turn on the macro recorder, self-consciously perform the task, analyze it, fiddle around with the script, and then package it for reuse. So long as the context remains the same, the script will continue to work. Of course the context never does remain the same, and I’ll end up adjusting the data sources read by my account exporter and tweaking the charts created by my log visualizer.

In my opinion, automation will rarely, if ever, have a Zen. To use Jon's terms, it's not that the context changes; it's that the tool has become the new context. When the original problem morphs into (or is replaced by) another, there is a choice to be made regarding whether the tool-we-built-to-solve-the-original-problem should adapt or be replaced. It's the classic "when all you have is a hammer..." problem.

It only takes a light touch to get the kid on the bicycle to sense and react to my cues, but I have to clobber my software to make it pay attention. Getting applications and services to share a common message bus should help.

Given the above, I don't see how a "common message bus" will help unless the tools themselves learn and adapt. You're asking a screwdriver to magically become a hammer.

The physical world answers the need for adaptation in a number of ways:

  1. Restate the problem in terms of the tools you have. Example: you have a screwdriver--don't buy nails.
  2. Use an existing tool as-is and live with the side-effects. Example: use the screwdriver handle to bash the nail. The tool is damaged, your hand is damaged, you'll bend and throw away more nails, and the nails that do get in will be weaker. But sometimes those tradeoffs are acceptable.
  3. Use an existing tool, but modify it irreparably to meet the new task. Example: attach a hammer head to the butt of your screwdriver. It may be less usable as a screwdriver as a result (but maybe not...).
  4. Buy or make a new tool to meet the new problem. Sometimes this is done in parallel; Phillips-head screws, for example, required the simultaneous development of Phillips screwdrivers.

I'm sure there are others.

For the glue-language scripting that Jon's talking about, I see developers choosing responses 3 and 4, mostly. Users (as opposed to developers) will often choose 1 and 2--they force the task to fit the tool, since tools are expensive to make, and tool-making isn't something they do. The feedback Jon's seeking should be between developers and users, not developers and their tools. You don't train the bicycle; you train the child to ride the bicycle better, more confidently. You train the child to cry less when they crash.


Permalink 11:09:13 am, by fumanchu, 27 words, English (US)
Categories: Robotics and Engineering

Cool touch-sensitive robotics project

Andre Stubbe and Markus Lerner - Univ of Berlin. Project homepage

Check out the Quicktime videos.

Permalink 10:14:47 am, by fumanchu, 66 words, English (US)
Categories: IT, General

Apologies to all

I did a boneheaded thing. I sent a bunch of you an announcement about this new blog of mine, even though our network, at the time, looked like this:

[ugly rack pic] [ugly rack pic 2]

It's being put back together, slowly. So if you can't get to this page during the rest of this week, just try again the next day. If you have problems next week, let me know right away. :)


Permalink 03:58:55 pm, by fumanchu, 572 words, English (US)
Categories: IT, Dejavu

Explicit Domain object persistence

Dejavu's domain objects (those objects which are part of your model, and which you probably want to persist) all subclass dejavu.Unit. When you create a new Unit, you declare that it should be persisted by calling Unit.memorize(). Plenty of other ORMs (Object-Relational Mappers) don't work this way; instead, every object is automatically persisted unless you specify otherwise. Why doesn't Dejavu do it automatically?
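As a toy illustration of the pattern (plain-Python stand-ins, not Dejavu's actual classes), nothing reaches the store until you say so:

```python
class Unit:
    """Toy stand-in for dejavu.Unit: a class-level list plays the
    role of the persistent store."""
    _store = []

    def memorize(self):
        """Explicitly ask for this instance to be persisted."""
        Unit._store.append(self)

class Invoice(Unit):
    def __init__(self, amount):
        self.amount = amount

inv = Invoice(120.0)            # exists in memory only
assert inv not in Unit._store
inv.memorize()                  # now, and only now, it is persisted
assert inv in Unit._store
```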

Legacy Database Design

In the most common case, Dejavu uses the Model which you design to create and populate a database. Your Unit classes become tables, Unit Properties translate into columns, and each Unit instance is persisted as a row. In this case, you write the Model and let it drive the database design to match. In some cases, however, you need to integrate with an existing database. Dejavu has been designed to support this as well (although it requires more work).

Often, a pre-existing database will possess validation checks, from type and value constraints to referential integrity enforcement. Such guards often require that data be collected and pre-processed by your application before submitting it for persistence; that is, you must get "everything right" before you add a new row to a table. Since these requirements are application-specific, it would be impossible for Dejavu to guess when the object is "ready to persist". As the Zen of Python says, "In the face of ambiguity, refuse the temptation to guess."

It is entirely possible for you to write a subclass of dejavu.Unit which, at the end of __init__, calls memorize for you. It would have been much harder and more confusing to do the opposite. If you find yourself not needing the flexibility which explicit calls to memorize provide (and remembering to call memorize becomes a burden), feel free to use such a subclass.
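Such a subclass might be sketched like this (again with a toy in-memory store standing in for Dejavu's machinery):

```python
class Unit:
    """Toy stand-in for dejavu.Unit."""
    _store = []

    def memorize(self):
        Unit._store.append(self)

class AutoUnit(Unit):
    """A Unit that persists itself as soon as it is constructed,
    trading away the flexibility of an explicit memorize() call."""
    def __init__(self):
        super().__init__()
        self.memorize()

class LogEntry(AutoUnit):
    def __init__(self, message):
        self.message = message
        super().__init__()   # runs AutoUnit.__init__, which memorizes

entry = LogEntry("backup finished")
assert entry in Unit._store   # persisted without an explicit call
```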

Multiple stores

Further, Dejavu is designed to work with multiple stores, which may be determined dynamically. I had a use case, for example, to manage Transaction objects, where Income Transactions were placed in one store, and Expense Transactions were placed in another store. A custom Storage Manager proxied the two stores, and decided in which store to persist each Transaction Unit based on the is_expense attribute. That attribute might not be known at the time an automatic mechanism persisted the object. To make matters worse, the Income store used an autoincrementing ID field, a fact over which I had no control, so I couldn't simply migrate a Transaction from one store to the other as the attribute changed.
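The dispatch logic can be sketched as follows (a toy proxy; not Dejavu's actual StorageManager interface, and the class and attribute names besides is_expense are made up):

```python
class ProxyStore:
    """Route each Transaction to one of two backing stores based on
    its is_expense attribute, as the custom Storage Manager did."""
    def __init__(self, income_store, expense_store):
        self.income_store = income_store
        self.expense_store = expense_store

    def persist(self, transaction):
        store = (self.expense_store if transaction.is_expense
                 else self.income_store)
        store.append(transaction)

class Transaction:
    def __init__(self, amount, is_expense):
        self.amount = amount
        self.is_expense = is_expense

proxy = ProxyStore(income_store=[], expense_store=[])
proxy.persist(Transaction(500.0, is_expense=False))
proxy.persist(Transaction(75.0, is_expense=True))
```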

Another approach would have been to simply have two separate Unit classes, IncomeTransaction and ExpenseTransaction. However, this would have broken encapsulation--the storage requirements would be intruding on the model design. I very much want the Model to migrate seamlessly on the day when we ditch the Income store, and the integrated Transaction class fits the company's mental and behavioral model better.

All that said, the decision comes down to "explicit is better than implicit". Since Dejavu is a library, it's much easier to provide the functionality in an explicit manner, and allow application developers to make it implicit if they see fit. If the behavior is performed implicitly, "behind the scenes", it is much more difficult to allow developers to then make that explicit when they need to; you end up either exposing class internals, or forcing the developer to reproduce your internal logic, often incompletely.

Permalink 02:59:46 pm, by fumanchu, 839 words, English (US)
Categories: IT, Python, Cation

Paraware vs Middleware

Update: I stumbled onto Mike Spille's blog, which talks a bit more (and better) about middleware versus libraries.

Ian Bicking recently promoted the idea of a WSGI reference library, to possibly include the following components (among others):

  • Sessions middleware
  • Logging middleware/library (I assume he meant request logging)
  • Error reporting middleware/library
  • Test frameworks
  • A file application (handling If-Modified-Since, etc)
  • A proxy application
  • Libraries for parsing query strings and all that.
  • Authentication.
  • URL parsers.
  • And maybe a few of the more boring servers, like the CGI server, which will otherwise be homeless (or widely repeated).

Not being the most careful reader in the world, I was thrown by the phrase, "...collaborating on a ... library of WSGI middleware"; I read the list as if he meant each piece would be a middleware component! Of course he did not intend that. Many of the items in the list are WSGI applications, which sit at the end of the software stack.

Some of the items in the list are, in fact, paraware; that is, they parallel the main application. Traditional programming libraries/toolkits are a common example of paraware. They provide functionality by supplying input and output hooks, which are supplied and consumed by the main application:

result = mylib.get('f')

Middleware, on the other hand, handles/munges a content stream, and sits between at least two other components in a software stack. Middleware is a nasty thing in many environments, because each middleware component must manage I/O of all shared objects, in two directions (both its caller and the next component in the stack). In Python, however (and specifically WSGI), the shared objects are all on the same heap, and can all be passed by reference.

I see problems with writing most of these components as middleware. WSGI has a shot at being ubiquitous because it enforces a set of interfaces and a data model; this same enforcement, however, can also be a liability, since WSGI is not yet ubiquitous. As a developer of a web framework, I have a dilemma: I need to provide the same functionality whether my users use WSGI or not. This means I need to write such components as libraries (so they can be used as paraware) and then wrap them with WSGI boilerplate (so they can be used as middleware). This leads to serious code smell. WSGI's callback structure is complicated enough without me introducing library-code wrappers. Perhaps what we need are generic pieces of WSGI middleware which you can init with a callback from your library code. Hmmm.
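That "Hmmm" might look something like the following: a generic middleware factory (hypothetical, not part of any existing WSGI library) that takes a callback from ordinary library code and handles the WSGI boilerplate once:

```python
def make_middleware(callback):
    """Return a middleware that invokes callback(environ) before
    delegating to the wrapped WSGI application."""
    def middleware(app):
        def wrapped(environ, start_response):
            callback(environ)   # plain library code sees the request
            return app(environ, start_response)
        return wrapped
    return middleware

# A trivial WSGI application to wrap:
def hello_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello']

seen = []                       # our "library" just records environs
app = make_middleware(seen.append)(hello_app)
```

The same `seen.append` callback could be used directly as paraware, with no WSGI wrapper at all.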

Potential components from Cation

I've been meaning for a while now to investigate breaking my Cation app framework down into a set of libraries (instead of the monolithic framework it is today). You can see from the dearth of recent checkins that I haven't done any of it yet. ;) Many of those could be added to a WSGI library (some are already on Ian's list). Here are the ones I'd be most interested in writing:

Top-level error trapping, logging, and pretty printing

I'd like to do this myself because Cation keeps a list of application developers (usernames), and shows full tracebacks in the browser to developers. Ordinary users get a "pretty" error message, and the full traceback goes into the log only. I'm pretty sure a standard library version wouldn't do that. Integrating the usernames into the error handling logic leads me to want to provide this as paraware, since middleware components are usually not expected to interoperate.

Timed, threaded Worker classes for getting things done on a schedule, possibly recurring

This isn't WSGI-specific, and shouldn't be a candidate for WSGI. But it's something I'd like to rewrite in more of a library style, instead of a framework.

Centrally registered and managed requests

For example, this would assist a WSGI application in fulfilling a request to shut down--each active web request (thread) could be sent a shutdown message and kill itself gracefully from outside the application itself.
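One way to sketch such a registry with the standard library (hypothetical code, not Cation's actual implementation):

```python
import threading

class RequestRegistry:
    """Central registry of active requests. Each request thread
    registers an Event and polls it at safe points; shutdown()
    signals every registered request from outside the application."""
    def __init__(self):
        self._lock = threading.Lock()
        self._active = set()

    def register(self):
        stop = threading.Event()
        with self._lock:
            self._active.add(stop)
        return stop

    def unregister(self, stop):
        with self._lock:
            self._active.discard(stop)

    def shutdown(self):
        with self._lock:
            for stop in self._active:
                stop.set()

registry = RequestRegistry()
stop = registry.register()   # done by each request thread
registry.shutdown()          # done by the server on shutdown
assert stop.is_set()         # the request knows to die gracefully
```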

Data type coercion (both inbound and outbound), including encoding

Since HTML form values are always received as strings, a standard (but overridable) way to convert them into Python values would be helpful. In the other direction, values need to be coerced to strings, put in the encoding of the server (or of the page), and often quoted safely. Again, this would probably need enough customizability that it would be a poor candidate for middleware, but a good candidate for a set of library calls.
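A sketch of what such library calls might look like (the function names and the coercion table here are made up for illustration):

```python
import html

# Inbound: form strings -> Python values. Overridable by
# mutating (or subclassing around) the map.
INBOUND = {
    int: int,
    float: float,
    bool: lambda s: s.lower() in ('1', 'true', 'on', 'yes'),
}

def coerce_in(value, pytype):
    """Convert a submitted form string to the given Python type."""
    return INBOUND[pytype](value)

def coerce_out(value, encoding='utf-8'):
    """Convert a Python value to a safely-quoted string in the
    server's (or page's) encoding."""
    return html.escape(str(value)).encode(encoding)
```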


Classic middleware, meeting a need orthogonal to the actual content delivery, and not needing customization or context.

HTTP uploads.

Something that might on occasion need to be specialized, but ultimately a commodity for 90+% of cases. The standard implementation would be nothing more than a pretty interface over simple (but secure) file management.

That's enough for the next year or so :) Pity I have so many other projects to work on simultaneously.


Permalink 11:58:41 am, by fumanchu, 85 words, English (US)
Categories: IT

Oh-point-forever releases

Ned nails it. If someone, somewhere has been using your software in production for a while (a year? years?), your core functionality is 1.0. Branch and fork all you want from there, but please let us, your users, know it's worth a try.

I particularly don't understand why this trend seems to apply to development libraries more than anything else. Library users are developers--give them some credit. They'll find and fix the broken bits on their own.


Permalink 11:06:32 pm, by fumanchu, 461 words, English (US)
Categories: IT

rsync backup to a hotsite

In an attempt to automate our backups (my PFY was doing manual DVD burning every day), we bought a Dell Poweredge SC420 with a pair of 250G SATA drives and no OS. It'll go either to our disaster hotsite, or a colo. The only thing the box will be doing is rsync over ssh. Our various *nix and Windows servers (via cwrsync) will connect on a schedule and back up various top-level directories. Testing on a different server over the 'Net showed typical backup times of 5 to 10 minutes for one such directory, depending on how much had changed overnight. We expect to have backup traffic for about 1 hour each night, up to a max of 4 hours (rarely). Email will, of course, be the killer. We may go back to CDs/DVDs for that.
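Each client's nightly job boils down to a single rsync-over-ssh invocation; a sketch of how one might be assembled (the host and paths are made up):

```python
import subprocess

def rsync_command(src, host, dest):
    """Build a mirror-style rsync-over-ssh command: -a preserves
    permissions and times, -z compresses over the wire, --delete
    removes destination files that vanished at the source."""
    return ['rsync', '-az', '--delete', '-e', 'ssh',
            src, f'{host}:{dest}']

# e.g. run from cron each night on every server being backed up:
cmd = rsync_command('/var/www/', 'hotsite.example.com', '/data1/www/')
# subprocess.run(cmd, check=True)   # uncommented in the real job
```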

At any rate, here are the steps I went through to set up the Dell server:

  • Setup (F2): make the box power on when power is lost and restored.
  • Configure SATA by hitting Ctrl-A at boot, no RAID.
  • Insert sarge debian-installer CD and boot from it. I can't praise the Debian team enough--the debian-installer is fantastic! Thanks to Greg Folkert for recommending it!
  • At FIRST prompt (F1 for help or enter to boot), type "linux26", hit Enter (this selects the 2.6 kernel).
  • SATA drives should be auto-recognized. Partition them ext3 and mount to /data1, /data2.
  • Finish install normally.
  • CD ejects and box reboots. When the Real Time Clock check freezes (twice) during boot, hit Ctrl-C to kill the hung process and continue booting. This is an ACPI problem, which we'll fix in a minute.
  • Continue setting up Debian as prompted. is a nice http mirror for me. Oh, and run the ssh server in protocol 2 only.
  • apt-get remove exim4, which won't work. But it's fun to try. Anyone know why it isn't removed (still shows up in ps after removal/reboot)?
  • Turn ACPI off, or the Real Time Clock will freeze on every boot:
    vi /boot/grub/menu.lst
    Change the lines starting with "kernel": append the text " acpi=off".
  • Mount the SATA drives:
    cd /
    mkdir /data1
    mkdir /data2
    vi /etc/fstab

    add the lines:
       /dev/sda1 /data1 ext3 defaults,errors=remount-ro 0 2
       /dev/sdb1 /data2 ext3 defaults,errors=remount-ro 0 2
  • Change the order in which discover and checkfs are called:
    cd /etc/rcS.d
    mv S36discover S29discover
  • Setup rsync
    apt-get install rsync
    vi /etc/default/rsync (set RSYNC_ENABLE=true)
    vi /etc/rsyncd.conf (see online docs)
    /etc/init.d/rsync start
  • Setup ssh
    vi /etc/ssh/sshd_config
     PermitRootLogin no
     RSAAuthentication no
  • make an rsync user
Permalink 10:11:12 pm, by admin, 490 words, English (US)
Categories: IT

More about the choice of blog software

Another Scott on IWETHEY asked me to expand on why I chose b2evolution for the blog software here, especially in relation to this post. I'm awfully bad at recording my decision-making processes, but I'll try.

Lots of blogs I examined failed one of our requirements outright, or at least offended my sensibilities ;) :

  • Wordpress: Multiple blogs aren't built into the core. There's a separate Wordpress-mu project, but it still seems to be in serious beta, with only one developer actively working on it.
  • Blosxom: Perl. Bleah. If I used it every day, maybe. But the blog is something I want to work on (writing plugins, etc.) once or twice a year. PHP is something I can pick up quickly; in fact, I wrote my first working plugin for b2evolution in an hour, never having even looked at PHP code before in my life. By the end of the afternoon, I had a patch ready (against CVS-HEAD) for applying plugins to comments (not just posts), with enough confidence to mail it off to the project leads.
  • Serendipity: no multiblog as far as I could see. By "multiblog", I mean multiple authors on one install, each with their own blog (each with their own feed(s)). We have 40 people on staff at Amor, each with their own financial supporters with whom they wish to communicate. Those who support me don't necessarily have any interest in reading what my co-workers are writing (but if they want to read what everyone at Amor is writing, b2evolution gives them that opportunity out of the box, as well).
  • Textpattern: editing is done with Textile, and the whole editing process is very HTML-centric. If I were the only author, it might fly. But I have a wide range of authors, from complete Luddites to, well, me. HTML is something to hide from many of them. For b2evolution, on the other hand, I quickly found and applied a plugin to use Markdown. Users can also choose GreyMatter, BB code (à la phpBB), Textile, or Texturize, all included in the default install.
  • Nucleus: Same parent as b2evolution. I honestly can't remember why I chose b2evolution over Nucleus, except for a vague feeling that Nucleus was written by developers for developers, instead of for users. Oh, yeah, and the thumbnails didn't work, at least not quickly and easily enough. The whole "media library" idiom is nice for developers, but some of my users would never be able to add a picture to a post, a task they will want to do quite often.

Meh. That's enough for now. b2evo has had its own quirks, but the problems have been surmountable with a minimum of effort. I think it will serve us well enough.


Permalink 02:30:36 pm, by fumanchu, 227 words, English (US)
Categories: IT


Yes, this really is the first post.

My company, Amor Ministries, has been talking about staff websites since...well, forever. We've toyed with various ideas and CMS systems. I even wrote one myself for HTML editing called Tibia. However, nothing seemed to solidify--the burden on the authors is usually too great.

Last week, I floated the concept of blogs past Alon, our Development Team leader, and for the first time, his desires and the available tech started to gel very nicely. We made a short laundry list of desirable features for blog software, and the result will be built here.

I went with b2evolution because:

  1. It is the most multi-blog friendly.
  2. It's free and open-source. Free is always nice, of course, but the modifications we wanted pretty much forced an OSS solution.
  3. It seemed the cleanest of the few I downloaded and evaluated (including Nucleus, Wordpress, and Serendipity). "Clean" in both on-screen appearance, and the codebase (again, to make our mods easier).
  4. Feeds are built in and on by default.

The first features I'm looking to write (and contribute back to b2evo, if they'll answer my email):

  1. Profanity filter. A simple one--no boiling of any oceans. This is already done, btw.
  2. A tool for staff to collect recent posts into a printed newsletter.
