Certified HTTP API


<< Back to Certified HTTP Project

This is the ever-evolving API for a Python implementation of Certified HTTP.


There are two types of messages you'll want to send from the client:

  • Fire-and-forget: you don't want to do anything with the response; you just want to make sure the message gets to its destination.
  • Resumable: when the response comes, some computation needs to be performed on it, or the message is one in a series of certified messages.

Fire and Forget

This API should be straightforward:

certified_http.put(url, body)

If the client crashes and resumes, it should not reenter that code path at all. If it does, then we have no way to detect the duplication. The certified_http system should retry the message if necessary (discarding the body of the response).
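The retry-and-discard behavior described above can be sketched with a transport-agnostic helper. Everything here is a hypothetical stand-in, not part of the certified_http API: `send` is any callable that performs the actual PUT and raises on failure, and the retry/backoff parameters are illustrative.

```python
import time

def put_fire_and_forget(send, url, body, retries=3, backoff=0.0):
    """Retry a PUT-style send until it succeeds, discarding the response.

    `send` is a hypothetical transport callable taking (url, body) and
    raising IOError on failure; this sketch makes no assumption about
    the underlying HTTP library.
    """
    for attempt in range(retries):
        try:
            send(url, body)  # the response body is deliberately ignored
            return True
        except IOError:
            if attempt == retries - 1:
                raise  # give up after the final attempt
            time.sleep(backoff)
    return False
```

Because the caller never inspects the response, the helper can retry freely: a duplicate PUT of the same body is harmless as long as the server treats PUT idempotently.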


Resumable

If you planned on sending a message and then taking some actions based on the response, how can you ensure that those actions get carried out on resumption? In general, this implies that a certified HTTP client must be able to capture and store a continuation.

There may be a way to capture a continuation automatically, but it seems unlikely that we'll be able to solve that problem in general; it's much more likely that the developer writing the chttp client will have to write continuation-friendly code. We need an API that allows you to conveniently store and restore enough application state to serve as a continuation, and to unit test that you actually captured all the state you needed! What we've chosen to do is called "deterministic replay".

The basic idea here is that the resumable section of code is wrapped up in a function. The function should be written with as few external dependencies as possible, i.e. "pure". The oplog argument allows you to store away data whose creation is not atomic with the execution of the method.

def start_transferring_foo(oplog, *args):
    # oplog.persist saves away the non-deterministic value during the
    # initial execution, and returns the saved value on subsequent runs
    local = oplog.persist(some_non_deterministic_thing)
    response = certified_http.post(oplog, escrow_url, "start transaction")
    coupon_code = local + response['random number']

# this wraps the worker function with another whose signature is *args,
# and constructs an oplog upon invocation
start_transferring_foo = certified_http.resumable(start_transferring_foo)

# in Python 2.4 and later you can instead write @certified_http.resumable
# on the line above the def
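To make the record/replay semantics of persist concrete, here is a toy in-memory stand-in. ToyOplog is hypothetical: a real oplog would write to durable storage, but the contract is the same, with non-deterministic values recorded on the first run and replayed from the log on resumption.

```python
import random

class ToyOplog:
    """Hypothetical in-memory sketch of the persist contract."""
    def __init__(self):
        self.log = []     # recorded non-deterministic values
        self.cursor = 0   # position in the log for the current run

    def persist(self, value):
        if self.cursor < len(self.log):
            value = self.log[self.cursor]  # resuming: replay the stored value
        else:
            self.log.append(value)         # first run: record it
        self.cursor += 1
        return value

log = ToyOplog()
first = log.persist(random.random())   # recorded on the first run
log.cursor = 0                         # simulate a crash and resume
second = log.persist(random.random())  # replayed, not re-drawn
assert first == second
```

The assertion holds even though `random.random()` is called twice: on the replayed run the freshly drawn value is discarded in favor of the logged one, which is what makes the surrounding code deterministic.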


Server

The server is all about resumability. The mulib style of programming is to dispatch to a single method that handles the entire request, so it's very natural to layer resumability on top. We still have to be careful about relying on module-level objects and other external contexts.

class Endpoint(certified_http.Resource):

    def handle_foo(self, oplog, req):
        # do stuff here!
        pass


Here's a little diagram of the oplog and its relationship to code.


The oplog is the core concept, but the coordinator for all the oplogs is the Persistence object, of which there will be as many subclasses as we have underlying mechanisms.

class Persistence(object):
    def __init__(self, *args):
        # do implementation-specific initialization here
        pass

    def message(self, id):
        # retrieves or creates a message with the specified id
        pass

    def oplogs(self):
        # returns a list of open oplogs -- i.e. oplogs that
        # haven't completely run through
        pass

class Oplog(object):
    def do(self, method, *args, **kwargs):
        # if not resuming, calls the method and stores away the return value
        # if resuming, simply returns the stored return value
        pass

    def message(self, *args):
        # special version of do that stores the arguments of the
        # message *before* executing the function
        pass

class Message(object):
    def oplog(self):
        # returns the associated oplog
        pass

    # member variable `request` is the request part of the HTTP message
    # member variable `response` is the response part of the HTTP message
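The do contract above can be sketched with a minimal in-memory implementation. MiniOplog is hypothetical, and a real Persistence backend would store the log durably, but it shows the key property: on resumption, side-effecting calls that already ran are skipped and their stored results returned instead.

```python
class MiniOplog:
    """Hypothetical in-memory sketch of Oplog.do's record/replay contract."""
    def __init__(self, stored=None):
        self.stored = list(stored or [])  # results recorded by a prior run
        self.cursor = 0                   # position in the log

    def do(self, method, *args, **kwargs):
        if self.cursor < len(self.stored):
            result = self.stored[self.cursor]  # resuming: skip the call,
                                               # return the stored result
        else:
            result = method(*args, **kwargs)   # first run: execute and record
            self.stored.append(result)
        self.cursor += 1
        return result

calls = []
def transfer(amount):
    calls.append(amount)  # stands in for a side effect like an HTTP POST
    return amount * 2

log = MiniOplog()
log.do(transfer, 3)                # executes: calls == [3]
resumed = MiniOplog(log.stored)    # resume from the persisted log
resumed.do(transfer, 3)            # replayed: transfer is NOT called again
assert calls == [3]
```

This is why the worker function must be written to be as pure as possible: anything not routed through do (or persist) will re-execute on replay and can diverge from the first run.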