Examples
These are short examples demonstrating the use of Eventlet. They are also included in the examples directory of the source.
echo server
This is a simple server that listens on port 6000 and echoes back every line of input it receives. Connect to it with: telnet localhost 6000
Terminate the connection by quitting telnet (typically Ctrl-] and then 'quit').
from eventlet import api

def handle_socket(client):
    print "client connected"
    while True:
        # pass through every non-eof line
        x = client.readline()
        if not x:
            break
        client.write(x)
        print "echoed", x
    print "client disconnected"

# server socket listening on port 6000
server = api.tcp_listener(('0.0.0.0', 6000))
while True:
    new_sock, address = server.accept()
    # handle every new connection with a new coroutine
    api.spawn(handle_socket, new_sock)
server.close()
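If you'd rather not use telnet, a throwaway client can verify the echo behavior. This is only an illustrative sketch using the standard socket module; it assumes the server above is already running on localhost port 6000, and the message text is arbitrary.

import socket

# hypothetical test client for the echo server above;
# assumes the server is already listening on localhost:6000
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('localhost', 6000))
client.sendall('hello eventlet\n')    # the server echoes whole lines
print "received:", client.recv(1024)  # expect the same line back
client.close()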
web crawler
This is a simple web "crawler" that fetches a bunch of urls using a coroutine pool. It fetches as many pages concurrently as there are coroutines in the pool.
urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
        "http://wiki.secondlife.com/w/images/secondlife.jpg",
        "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

import time
from eventlet import coros, httpc, util

# replace socket with a cooperative coroutine socket because httpc
# uses httplib, which uses socket.  Removing this serializes the http
# requests, because the standard socket is blocking.
util.wrap_socket_with_coroutine_socket()

def fetch(url):
    # we could do something interesting with the result, but this is
    # example code, so we'll just report that we did it
    print "%s fetching %s" % (time.asctime(), url)
    httpc.get(url)
    print "%s fetched %s" % (time.asctime(), url)

pool = coros.CoroutinePool(max_size=4)
waiters = []
for url in urls:
    waiters.append(pool.execute(fetch, url))

# wait for all the coroutines to come back before exiting the process
for waiter in waiters:
    waiter.wait()
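The waiters can also be used to collect results rather than just to block until everything finishes. The sketch below is a variant, not part of the original example: it assumes the urls list and the util.wrap_socket_with_coroutine_socket() call from above are already in place, and it assumes that waiter.wait() returns whatever the coroutine returned. The timed_fetch helper is hypothetical.

import time
from eventlet import coros, httpc

def timed_fetch(url):
    # fetch the url and report how long it took to whoever waits on us
    start = time.time()
    httpc.get(url)
    return url, time.time() - start

pool = coros.CoroutinePool(max_size=4)
waiters = [pool.execute(timed_fetch, url) for url in urls]
for waiter in waiters:
    url, elapsed = waiter.wait()
    print "%s took %.2f seconds" % (url, elapsed)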