SLGOGP Draft 1/Discuss 2-4 Event Queues
- Event queues make sense for the reasons given, but the implementation via long poll seems bizarre and unhelpful. What seems to be happening here is that everything is being seen through the eyes of HTTP, even when that leads to a very bad design. Here are some reasons why it is bad:
- It consumes extra resources on client and server alike, because the communication path needs to be re-established repeatedly both at network and application levels.
- It increases average latency in event handling dramatically, since long poll setup takes a lot of time compared to actual event communication, and poll intervals become subject to the vagaries of client timing.
- It requires contorted client design, with poll points having to be inserted at selected places in the code. (This is the wrong design approach in almost all circumstances.)
- It adds nothing useful over a simple TCP stream for communicating events between server and client. The rationale given for event queues requires clients to establish the connection to the server, but there is no need ever to close this connection as long as that event queue is being monitored. Items sent to the event queue can simply continue appearing at the client end through this pipe. Simple TCP is enough.
- It assumes a rather antiquated polling client structure, instead of a modern event-based one that reacts to everything through callbacks. While the long poll may persist the TCP stream for some significant time, it is highly inappropriate if the server side EVER terminates the stream except on event queue shutdown. In general, a modern client wants to set up the event stream once and thereafter just react to incoming data on that socket, without polling (see the sketch after this list). It is efficient, elegant, and simple.
- Summary: Because good event handling is so central to a client's effectiveness, this is a crucial area to get right. Seeing everything through HTTP glasses is not always helpful, and in general polling is a very poor technique for event propagation. HTTP can be used to create the connection, but the opened TCP stream must never terminate except on event queue shutdown. Morgaine Dinova 22:58, 11 March 2008 (PDT)
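- For concreteness, here is a minimal sketch of the kind of event-driven client described in the list above, written in Python with asyncio. The host, port, and newline-delimited JSON framing are illustrative assumptions only, not anything specified by the draft: the point is the shape, open the stream once, then react to data via a callback until the queue shuts down.

```python
import asyncio
import json

# Hypothetical endpoint and newline-delimited JSON framing: the draft specifies
# neither, so this only illustrates the "open once, then react" client shape.
EVENT_HOST = "sim.example.com"
EVENT_PORT = 13000

async def follow_event_stream(on_event):
    """Open the event channel once, then hand each incoming event to a
    callback. The connection only closes when the event queue shuts down."""
    reader, writer = await asyncio.open_connection(EVENT_HOST, EVENT_PORT)
    try:
        while True:
            line = await reader.readline()
            if not line:                       # EOF: event queue shutdown
                break
            on_event(json.loads(line))
    finally:
        writer.close()
        await writer.wait_closed()

def handle_event(event):
    print("event:", event)

# asyncio.run(follow_event_stream(handle_event))
```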
- I don't know how they might do things in the future, but the current EventQueueGet is meant to provide continuity for the existing code-base and I suspect this is the rationale for the new and improved event queue as well. Whether or not this should be the model used for all eternity is a different debate, IMHO. Saijanai 01:58, 13 March 2008 (PDT)
- Some URLs about COMET (of which long polling is one technique):
- http://cometdaily.com/2007/12/18/latency-long-polling-vs-forever-frame/ (Latency discussion)
- http://cometdaily.com/2007/11/15/the-long-polling-technique/ (what is long-polling?)
- http://cometdaily.com/2007/12/11/the-future-of-comet-part-1-comet-today/ (an overview of various COMET techniques)
- I think there should be more detail on the long-polling technique (a sketch of the flow follows after this list), e.g.
- State that the request stays open until some event occurs, and that it is then reopened
- clients SHOULD implement Keep-Alive so that the underlying connection can also stay open across requests. TaoTakashi 09:31, 13 March 2008 (PDT)
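- As a rough sketch of that flow, in Python with the requests library: the capability URL, JSON payload shape, and "no events yet" status code below are assumptions for illustration only, since the real EventQueueGet capability speaks LLSD rather than JSON.

```python
import requests

# Hypothetical capability URL and JSON payload shapes, for illustration only:
# the real EventQueueGet capability speaks LLSD, not JSON.
EVENT_QUEUE_CAP = "https://sim.example.com/cap/event-queue"

def poll_event_queue(handle_event):
    ack = None
    # A Session reuses the underlying TCP connection (HTTP Keep-Alive),
    # so "reopening" the request need not mean a new TCP handshake.
    with requests.Session() as session:
        while True:
            # The request blocks on the server until events are available,
            # or until the server times the poll out.
            resp = session.post(EVENT_QUEUE_CAP, json={"ack": ack}, timeout=90)
            if resp.status_code == 502:   # assumed "no events yet" timeout reply
                continue
            resp.raise_for_status()
            body = resp.json()
            ack = body.get("id")
            for event in body.get("events", []):
                handle_event(event)
```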
- It's that closing+reopening of the connection that's the problem here, Tao. Sai reckons that it's the opposite of the expected behaviour in our system, i.e. that our event stream would be closed only at the express request of the client or at session end. However, that's not stated anywhere, so the extremely damaging COMET behaviour is naturally expected. Well, COMET is fine for web apps, but not for high-speed event handling in interactive 3D worlds! :-) Morgaine Dinova 14:07, 13 March 2008 (PDT)
- We had a long discussion about this today after the Zero OH meeting, and one of the items of information that emerged was the following, from Wikipedia on COMET:
- Long polling: As the name suggests, long polling requires a new request for each event (or set of events). The browser makes an Ajax-style request to the server, which is kept open until the server has new data to send to the browser. After sending such an event, the server closes the connection, and the browser immediately opens a new one.
- In the absence of any statement to the contrary, the above immediately raises areas of concern, since the reference implementation of long polling in COMET terminates the TCP stream after each event is sent. Needless to say, this would be totally tragic for the very high event rates expected in all virtual-world-type applications, particularly in event-rich ones of the SL type. And it conveys the wrong message to client designers too, as I detailed at the beginning of this section.
- In our discussion, it was suggested that while breaking the TCP stream is the norm in COMET, that was never the intention here. Well, fine, but that has to be stated plainly, or else COMET's interpretation becomes the default, since COMET is the accepted reference for the long-poll technique.
- I suggest that the best way to avoid this (mis)interpretation causing problems for future client design is to express the intent: to send events from server to client through a channel created when the client opens a TCP connection to the server. That's the goal. It's not "just an implementation detail", because it will be hardwired into clients which have to work with multiple different grids. The TCP event stream needs to be a fixed point of downstream messaging, just like the RESTful upstream operations are fixed points.
- The above doesn't preclude additional mechanisms being added later, of course, but at least the first one needs to be a good one. Morgaine Dinova 13:59, 13 March 2008 (PDT)
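- For comparison, a minimal sketch of the intent expressed above, assuming a hypothetical capability URL and newline-delimited JSON framing (neither is from the draft): HTTP is used once to create the channel, and the response is then treated as an open-ended stream that only ends on event queue shutdown.

```python
import json
import requests

# Hypothetical capability URL and framing, purely to illustrate the intent:
# one HTTP request opens the channel, and the TCP stream then stays open.
EVENT_STREAM_CAP = "https://sim.example.com/cap/event-stream"

def follow_events(handle_event):
    # stream=True keeps the response body open; timeout=None means the client
    # never gives up on the connection of its own accord.
    with requests.get(EVENT_STREAM_CAP, stream=True, timeout=None) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():        # blocks until data arrives
            if line:
                handle_event(json.loads(line))
```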