User:Zero Linden/Office Hours/2008 Apr 10

  • [8:38] Zero Linden: hello hello
  • [8:38] Saijanai Kuhn: morning Teacher
  • [8:38] Zero Linden: this is a chat based office hours
  • [8:38] Zha Ewry: Morning
  • [8:38] Morgaine Dinova: Hiya Zero
  • [8:38] Zha Ewry: Did you bring catnip?
  • [8:38] Zero Linden: for whoever is on voice
  • [8:39] SignpostMarv Martin: hi Zero
  • [8:39] Zero Linden: So - just walked in the door at the office in RL
  • [8:39] Zero Linden: welcome all, sorry to be a bit late
  • [8:39] Morgaine Dinova: gave us time to make the coffee :-)
  • [8:39] Wyn Galbraith: Morning Zero.
  • [8:39] Zha Ewry: And tell old school stories
  • [8:39] Harleen Gretzky: hi Zero
  • [8:40] Wyn Galbraith: We tend to fill the gap :)
  • [8:40] Zha Ewry: Alas with megaprims filled with sawdust and random conversation, but.. we try
  • [8:40] Zero Linden: "Mind the gap!"
  • [8:40] Morgaine Dinova: Hehehe
  • [8:41] Wyn Galbraith: LOL
  • [8:41] BigMike Bukowski: mmm Portal.
  • [8:41] Wyn Galbraith: I just listened to a book called Neverwhere, where mind the gap had a special meaning.
  • [8:41] Zero Linden: Okay - well then, welcome all to my tardy office hours
  • [8:41] Zero Linden: speak freely and speak in public
  • [8:41] Zero Linden: Agenda items for today?
  • [8:41] Morgaine Dinova: I so laugh when UK underground doors open, and the speakers blare out "Mind the gap!"
  • [8:42] Zero Linden: I've noticed that no one has taken to writing in ideas in the wiki
  • [8:42] Zero Linden: (prod.... prod...)
  • [8:42] Wyn Galbraith: Exactly Morgaine, the story is about the London below.
  • [8:42] SignpostMarv Martin: heh
  • [8:42] Morgaine Dinova: shouts: Zero: I read up on BEEP, and I also tried to test it, but failed miserably. The Spanish open-source library in C (Vortex) compiled and installed fine, but BEEP test examples segfaulted in the library init. The wxWidgets-based C++ version (wxbeep) didn't even build (bug in its configure). The python version (beepy) built but failed every single unit test. So, disaster all 'round in practice, although the docs sound promising.
  • [8:42] Zero Linden: Well, Morgaine - that is a resounding negative for BEEP, I'd say
  • [8:43] Zero Linden: I think it is a reasonable requirement that if we're going to base things on some existing protocol stack
  • [8:43] Saijanai Kuhn: all in favor of using BEEP?
  • [8:43] Morgaine Dinova: Nah, might have been a local problem. However, I have more ....
  • [8:43] Saijanai Kuhn: The ayes have it
  • [8:43] Zero Linden: that it have at least moderate, working, active, library projects to support it
  • [8:43] Leffard Lassard: What was the reason for BEEP? More than one EventQueueGet connection or what?
  • [8:43] Morgaine Dinova: Zero: It then occurred to me that most of the BEEP functionality is in SCTP anyway, so I'm currently playing with that (the initial sctp_darn tests work fine). SCTP sounds perfect for server->client transport. (Each SCTP association provides a bundle of internal streams to avoid TCP's head-of-line blocking, provides multi-homing, is message-oriented instead of bytestream, provides better reliability and reduced latency than TCP, and is cross-platform).
  • [8:44] Morgaine Dinova: SCTP sounds a lot more promising.
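As a rough illustration of the kind of SCTP association Morgaine describes, the sketch below opens a one-to-one SCTP socket from Python. It assumes a Linux kernel with SCTP support and a Python build that exposes socket.IPPROTO_SCTP; the host and port are hypothetical, and per-stream sends (the feature that avoids head-of-line blocking) need a fuller binding such as pysctp, so only the basic association is shown.

```python
import socket

# Minimal sketch, assuming a Linux kernel with SCTP support and a Python
# build that exposes socket.IPPROTO_SCTP. Host and port are hypothetical.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect(("sim.example.com", 9000))

# SCTP is message-oriented: each send is one message, not part of a byte
# stream. Selecting a specific stream within the association requires a
# fuller binding (e.g. pysctp) and is not shown here.
sock.send(b"one message over the default stream")
sock.close()
```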
  • [8:44] Zero Linden: The problem with SCTP is that it isn't in all operating systems -
  • [8:44] Morgaine Dinova: It is, now
  • [8:45] Zero Linden: Oh?
  • [8:45] Morgaine Dinova: All the common ones. :-) Dunno about Plan9, lol
  • [8:45] Zero Linden: Is it in BSD w/o a patch?
  • [8:45] Morgaine Dinova: BSD's is regarded as the fastest. Don't know re patch, I'll look.
  • [8:46] Zero Linden: okay, something to think about
  • [8:46] Morgaine Dinova: Yep. I'll be "playing" with it. As it seems currently, I'd be using that for any games type platform I'd be building. It seems to tick all the boxes.
  • [8:47] Saijanai Kuhn: wiki says BSD with patch, but FreeBSD is reference
  • [8:48] Zero Linden: reasonable intro: [1] (courtesy IBM)
  • [8:49] Morgaine Dinova: This would be for the server->client event traffic link only. AFAIK, there isn't any problem with TCP+HTTP as it stands for the upstream traffic.
  • [8:49] Zha Ewry: smiles at the linkage
  • [8:50] Morgaine Dinova: IBM does get some things right ;-)
  • [8:50] Morgaine Dinova: ducks
  • [8:51] Xugu Madison: Although is there a good reason to make up/downstream different? Just seems like it might be making things more complicated, in the long term?
  • [8:51] Rex Cronon: hello everybody, i just had to download a new rcv. and it is dazzling. so bright. where did i put my sunglasses?
  • [8:51] Zero Linden: Yeah - I'm looking to see if there is an official definition of HTTP over SCTP
  • [8:51] Zero Linden: if so, that would make things mucho nicer
  • [8:51] Morgaine Dinova: Xugu: we want to build on normal web functionality as far as possible, so I think it's best to leave the upstream layering as is.
  • [8:52] Xugu Madison: Ah, right, thanks for explaining
  • [8:52] Zha Ewry: the more layers we add here, the more likely we'll make something break
  • [8:52] Tao Takashi: the more likely nobody will implement it ;-)
  • [8:53] Zha Ewry: That too
  • [8:53] Tao Takashi: so it might be good to also take into account what libraries for which languages are out there to handle it
  • [8:53] Morgaine Dinova: Well I like symmetry and elegance too, but for the downstream link we have an actual hard requirement to fulfil, whereas upstream HTTP/TCP is working fine already :-)
  • [8:53] Tao Takashi: we just have the same problem in the DataPortability group
  • [8:54] Zero Linden: I agree, Tao, existence of libraries in many languages is important
  • [8:54] Tao Takashi: I am sure LL will provide everything then for Python ;-)
  • [8:55] Zha Ewry: nods at Pythonic ubiquity in Linden Land
  • [8:56] Saijanai Kuhn: mulls over Pythonesque reference. Where's John Cleese when you need him?
  • [8:56] Tao Takashi: he's working on his own AWG
  • [8:56] Morgaine Dinova: If I enter dance mode, you'll know it's my KVM being creative.
  • [8:56] Zero Linden: I thought he was in the Cheese shop
  • [8:56] Lazarus Longstaff: farts in Sai's general direction
  • [8:56] Saijanai Kuhn: actually, he's offered to become Obama's speech writer
  • [8:56] Paul Bumi: writing speeches for obama
  • [8:56] Lazarus Longstaff:  ;)
  • [8:57] Saijanai Kuhn: which isn't a joke. Cleese is a very good writer
  • [8:57] Rex Cronon: i was wondering why u were dancing morgaine:)
  • [8:57] Zero Linden: Okay - so our quest for standards based downstream (domain -> viewer) transport continues
  • [8:57] Morgaine Dinova: Rex: heh. I'll be replacing the KVM with a 1920x1440 one in a few days, but I bet that has bugs too.
  • [8:57] Zero Linden: This is good....
  • [8:58] Zha Ewry: It is
  • [8:58] Morgaine Dinova: I've never found a KVM without bugs, and I've used many.
  • [8:58] Zha Ewry: tho. as we begin to code more and more
  • [8:58] Zha Ewry: that decision will become more urgent
  • [8:58] Rex Cronon: just use 2 keyboards:)
  • [8:58] Paul Bumi: gonna run
  • [8:59] Paul Bumi: adios
  • [8:59] Rex Cronon: bye paul
  • [8:59] Zha Ewry: Zero. there were some questions on Tuesday about the performance of pushing textures down the UDP pipe
  • [8:59] Zha Ewry: vs fetching them over http gets
  • [8:59] Morgaine Dinova: I have 3 keyboards already (+ a MIDI one), hehe.
  • [8:59] Zero Linden:  :-)
  • [8:59] Zha Ewry: I was wondering if anyone has done any actual performance testing
  • [8:59] Zero Linden: Really, and what was the question?
  • [8:59] Saijanai Kuhn: in the same stream as the rest of the events
  • [8:59] Zero Linden: Yes
  • [8:59] Zha Ewry: listens
  • [8:59] Zero Linden: Initial tests show that it is twice as fast going the HTTP way
  • [9:00] Zha Ewry: LOL
  • [9:00] Zha Ewry: My intuition was faster, not that much faster
  • [9:00] Saijanai Kuhn: mutters about real world facts getting in the way
  • [9:00] Rex Cronon: does it depend on the type of internet connection?
  • [9:00] Zero Linden: but mind you, any test we do isn't "real" - as there is nothing like 60k users from around the world with varying internet connections
  • [9:00] Zha Ewry: Well, sure
  • [9:00] Zha Ewry: But. the base numbers are useful
  • [9:01] Zero Linden: So deployment will be staged to make sure there aren't situations we didn't see in testing
  • [9:01] Zha Ewry: If it retries at all better, it will actually do better, in real load
  • [9:01] Zero Linden: right - and 2x was way more than anyone here hoped for
  • [9:01] Zha Ewry: I see 50% of my UDP traffic being half loaded textures in busy sims
  • [9:01] Zero Linden: we were in fact going to be happy with even, say 0.8x, because it will remove computation load on the sims
  • [9:01] Morgaine Dinova: Well perhaps we'd better virtualize the transport anyway so that it can be replaced with any other, because we're going to have to rewrite it for IPv6 soon anyway. Time to bite the bullet.
  • [9:01] Zero Linden: and go smoother, even if it were slower
  • [9:01] Saijanai Kuhn: but that's a different issue than the bottleneck for using the same TCP stream for both textures and events, right
  • [9:01] Zero Linden: that it was 2x was just icing
  • [9:02] Zero Linden: Sai - in HTTP texture we do NOT use the same TCP stream for the textures as the event queue
  • [9:02] Saijanai Kuhn: ah, OK. confused
  • [9:02] Saijanai Kuhn: as usual
  • [9:02] Zero Linden: What's even more amazing is that the HTTP texture implementation actually uses a capability
  • [9:02] Morgaine Dinova: With SCTP, transporting large textures or other monolithic objects doesn't jam up the queue and add latency, because you'd stick large items down their own channel.
  • [9:02] Zero Linden: per texture fragment request (!) and the texture data is fully proxied by a Python process
  • [9:03] Zero Linden: and it was still 2x
  • [9:03] Zha Ewry: The texture UUIDs come down the event queue, right? then the client does an http get?
  • [9:03] Zha Ewry: (well, https)
  • [9:03] Zero Linden: so one day when we just return those caps as URLs directly to the texture, we'll see even less latency
  • [9:03] Zero Linden: Zha, no, they don't
  • [9:03] Xugu Madison: Textures are sent over HTTPS?
  • [9:03] Zha Ewry: oh?
  • [9:03] Zero Linden: So - I see some confusion over the event queue usage
  • [9:03] Zha Ewry: clearly
  • [9:04] Zero Linden: the Event queue is only used for things that the sim needs to tell the viewer without the viewer asking
  • [9:04] Zero Linden: In the case of textures
  • [9:04] Zero Linden: the viewer invokes a cap on the sim that asks "I'd like a one-shot cap for texture xyz"
  • [9:04] Zero Linden: the sim returns that cap in the response
  • [9:05] Zha Ewry: how does it know it needs the texture, tho?
  • [9:05] Rex Cronon: if u r designing for the future it doesn't make sense for the server to send texture to viewers, makes more sense for server to tell viewer to fetch this texture from given url
  • [9:05] Zha Ewry: That I assume is coming down the event queue
  • [9:05] Zero Linden: so far, one cap/HTTP/TCP connection (note that the TCP & HTTP connection could have been a keep-alive session from a prior request)
  • [9:05] Zero Linden: then the viewer invokes that one-shot with a GET and a byte-range
  • [9:05] Morgaine Dinova: Exactly. That was the whole reason for being concerned with the TCP event stream being terminated a la COMET. Obviously that's not helpful when you want a continuous stream of updates, hence the questions a couple of weeks ago ;-)
  • [9:05] Zero Linden: a second cap/HTTP/TCP request, though again could be via a keep-alive session that already exists
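A hedged sketch of the two-step fetch Zero just walked through: the viewer asks a texture-request cap for a one-shot cap, then issues a ranged GET against it. The cap URLs, payload shape, and field names below are invented for illustration, not the actual protocol.

```python
import requests  # third-party HTTP client, used here only for brevity

# Hypothetical cap handed to the viewer earlier; not the real cap name or URL.
REQUEST_TEXTURE_CAP = "https://sim.example.com/caps/request-texture"

# Step 1: ask the sim for a one-shot cap for texture "xyz".
resp = requests.post(REQUEST_TEXTURE_CAP, json={"texture_id": "xyz"})
one_shot_url = resp.json()["texture_url"]  # illustrative field name

# Step 2: invoke the one-shot cap with a GET and a byte range, so the viewer
# can decode a low-detail version before the whole file arrives.
chunk = requests.get(one_shot_url, headers={"Range": "bytes=0-16383"})
texture_bytes = chunk.content
```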
  • [9:06] Saijanai Kuhn: well, the speed of textures isn't as important for responsiveness as the speed of other updates, in general (I would guess)
  • [9:06] Saijanai Kuhn: thinking in terms of twitchiness here
  • [9:06] Zha Ewry: you want get, when possible, Rex, because http get caches, and behaves well
  • [9:07] Morgaine Dinova: I think Grasmere needs to be relocated away from intro areas ;-)
  • [9:08] Zero Linden: True - but as the texture fetching is done by the viewer, the viewer is in control of how often to fetch
  • [9:08] Zero Linden: vs. how often to be checking the event queue
  • [9:08] Zero Linden: though, in the current implementation, it basically is ALWAYS on the event queue
  • [9:09] Morgaine Dinova: Event queue (local end) shouldn't be checked by polling, it should be processed by callbacks.
  • [9:09] Zero Linden: Yes, Morgaine - it is
  • [9:10] Zha Ewry: Also, fetch, means you can local cache at will
  • [9:10] Zha Ewry: Shove, the server doesn't know what's cached
  • [9:10] Zero Linden: In the viewer C++ implementation, you make a request to the event queue cap, and supply a responder object
  • [9:11] Zero Linden: when the event queue comes back with events, the responder gets called by the HTTP client infrastructure of the code base
  • [9:11] Zero Linden: then the responder unpacks the events, processes them, acknowledges them, and immediately makes another request to the event queue
  • [9:11] Zero Linden: now, there is a bug there - it should unpack, ack, re-request, and THEN process
  • [9:12] Zero Linden: but then we need to have a throttle as well
  • [9:12] Zero Linden: unpack, add to process queue, process down to N messages, ack, re-request, process rest of queue
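A rough single-threaded sketch of the long-poll pattern described above, with the throttle Zero mentions: process at most N events, then acknowledge the batch and re-request. The cap URL, payload shape, and ack field are illustrative; the real viewer hands a responder object to its HTTP layer so the next request can overlap processing.

```python
import requests  # third-party HTTP client, used here only for brevity

# Hypothetical event-queue cap; the real URL and payload shape differ.
EVENT_QUEUE_CAP = "https://sim.example.com/caps/event-queue"
PROCESS_BUDGET = 100   # illustrative throttle: handle at most N events per cycle


def poll_event_queue(handle_event):
    """Sequential sketch of the long-poll loop; a real viewer would keep the
    next request outstanding while the remaining events are processed."""
    ack = None
    pending = []
    while True:
        # Process at most PROCESS_BUDGET events carried over from the last batch.
        head, pending = pending[:PROCESS_BUDGET], pending[PROCESS_BUDGET:]
        for event in head:
            handle_event(event)

        # Acknowledge the previous batch and long-poll for the next one.
        resp = requests.post(EVENT_QUEUE_CAP, json={"ack": ack}, timeout=90)
        body = resp.json()
        ack = body["id"]                # acknowledged on the next request
        pending.extend(body["events"])
```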
  • [9:14] Morgaine Dinova: That's working like a kernel device driver then (interrupt side), which is fine. However, those re-requests upstream are not lightweight, and shouldn't be necessary. Anything that was enabled for streaming down should continue doing so under its own steam, with the same responder invoked as callback each time.
  • [9:14] Zha Ewry: Well
  • [9:14] Zha Ewry: The throttle case may matter
  • [9:14] Zha Ewry: I'd like a way for the client to say
  • [9:14] Zha Ewry: "Look, I'm only goign to take the next 100 events"
  • [9:14] Morgaine Dinova: Yes, appart from throttles and termination and any other exceptions.
  • [9:14] Zha Ewry: so we dont' keep sending 1000, discarding 900
  • [9:14] Zha Ewry: and gah
  • [9:15] Zha Ewry: You see the pathology
  • [9:15] Morgaine Dinova: Zha: even better make rate adjustment a standard feature, with progressive rolloff just like in network protocols.
  • [9:15] Saijanai Kuhn: the current EQG is just a TCP implementation of UDP spamming though, right?
  • [9:16] Morgaine Dinova: That's built into SCTP anyway, hehe
  • [9:16] Zero Linden: Well, perhaps, the problem is this:
  • [9:16] Zero Linden: currently, nothing sent via the event queue is optional
  • [9:16] Zero Linden: they can't be dropped
  • [9:16] Zha Ewry: Well, in fact, yes..
  • [9:16] Zero Linden: like - say, IM, you can't not send it
  • [9:17] Zha Ewry: Really, we want "And compress out the ones which are marked as discardable"
  • [9:17] Zero Linden: Or, say, teleport complete messages
  • [9:17] Zero Linden: Zha - yes
  • [9:17] Morgaine Dinova: Oh I dunno, IM seems pretty good at not sending it ;-)))))
  • [9:17] Zha Ewry: ie "compress the last N locations updates"
  • [9:17] Zero Linden: and there is some discussion internally as to which side should make that determination
  • [9:17] Zha Ewry: I'd like to say sim
  • [9:17] Zha Ewry: But.. I'm of the suspicion
  • [9:17] Zero Linden: I'm thinking the system adding to the queue should say "add this, and you can supersede all similar ones in the queue"
  • [9:17] Zha Ewry: it may vary by type of object
  • [9:18] Xugu Madison: Queue by type. So for teleport complete, only the most recent counts, for example? Just a thought...
  • [9:18] Morgaine Dinova: The client should always be in control, with the sim having a veto for sanity --- primarily to safeguard the sim's resources from exhaustion.
  • [9:18] Zero Linden: Sai - yes it is, but we don't funnel everything through it - just selected messages
  • [9:18] Zero Linden: so far, the only one that has given us trouble was the local friend's position updates
  • [9:18] Zero Linden: but that is one of these "only need to send the last one" things
  • [9:19] Zero Linden: and something went awry in the logic
  • [9:19] Zha Ewry: Well, not clear Morgaine. Some updates, clearly are compressible, and having the sim do it, might make a lot of sense
  • [9:19] Zero Linden: The other thing that gave us fits recently was inventory via caps
  • [9:19] Morgaine Dinova: Oh BTW, SCTP is a lot more resistant to DoS attacks. That's a possible reason for using it upstream too, although I don't know if it would be a sufficient reason.
  • [9:19] Zero Linden: The problem there was that the backend could take longer to respond than the HTTP time outs
  • [9:19] Rex Cronon: whats wrong with inventory?
  • [9:19] Zero Linden: (which is really really sad)
  • [9:20] Zha Ewry: Ouch, Zero
  • [9:20] Zero Linden: Right - I don't think, if we know the semantics at the application layer, that SCTP or UDP unreliable delivery
  • [9:20] Zero Linden: works
  • [9:20] Zero Linden: the problem is at the transport, it drops willy nilly
  • [9:21] Zero Linden: not in some smooth, predictable manner
  • [9:21] Rex Cronon: u mean tcp packets are lost?
  • [9:21] Morgaine Dinova: Zero: sounds like just an algorithmic problem in inventory processing .... ie. iterating all the way to the end before replying, instead of delivering the head and recursing for the rest.
  • [9:21] Zero Linden: No, UDP, or SCTP in unreliable stream mode
  • [9:21] Zero Linden: if you send A, B, C, A, B, C, A, B, C, A, B, C
  • [9:22] Zero Linden: and there is congestion
  • [9:22] Zero Linden: UDP may deliver A, A, A, A
  • [9:22] Zero Linden: which doesn't really help
  • [9:22] Zha Ewry: Right
  • [9:22] Zha Ewry: because it has no perspective about the higher layers
  • [9:22] Rex Cronon: if u number each packet
  • [9:22] Zero Linden: whereas the event queue can correctly deliver A, B, C
  • [9:22] Zha Ewry: Can't choose to send one of each
  • [9:22] Zero Linden: Exactly
  • [9:22] Zha Ewry: That is, in fact, fundamental
  • [9:22] Zha Ewry: ISO 7 layer diagram, and all
  • [9:22] Zha Ewry: The lower level CAN'T know
  • [9:23] Morgaine Dinova: SCTP can't deliver A A A A, but it can deliver A C B D if you disable in-order processing.
  • [9:23] Zero Linden: True, but I don't know, if you create, say, 35 streams (one for each message type), and mark them all as unreliable
  • [9:24] Zero Linden: if it can be told to not starve any single stream
  • [9:24] Xugu Madison: Can't all types go over one stream, and be split up client-side?
  • [9:25] Morgaine Dinova: Don't know, but I'll try to find out. However, in most cases you would want reliable in-order delivery.
  • [9:25] Zha Ewry: I expect you'd have to look very carefully at the promised and tested semantics before depending on them
  • [9:25] Zero Linden: Xugu - sure, but then you have to do the dropping in light of congestion at a higher level in the protocol stack
  • [9:25] Zero Linden: which is what I'm arguing for
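A toy illustration of the point Zero is arguing: if dropping happens above the transport, the sender can keep the newest event of each type, so a congested A, B, C, A, B, C... stream still yields one each of A, B, and C instead of whatever packets happen to survive. The event shape is made up for the example.

```python
def compress_under_congestion(queue, budget):
    """Keep only the newest event of each type, then trim to the budget.
    'queue' is ordered oldest-to-newest; event shape is invented here."""
    newest_by_type = {}
    for event in queue:
        newest_by_type[event["type"]] = event   # later events supersede earlier ones
    survivors = list(newest_by_type.values())
    return survivors[-budget:] if budget < len(survivors) else survivors

events = [{"type": t, "seq": i} for i, t in enumerate("ABCABCABCABC")]
print(compress_under_congestion(events, budget=3))
# -> one A, one B, one C (the latest of each), unlike a blind transport that
#    might deliver A, A, A, A under the same congestion.
```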
  • [9:26] Xugu Madison: nods "Ah, right..."
  • [9:26] Rex Cronon: why is UDP even discussed? i thought that everything will go over TCP
  • [9:27] Morgaine Dinova: Xugu: Well we're trying to get the protocol to handle as much as it can for us natively (if it can do it). It'll probably do a much better job of it than if we did our own multiplexing and prioritization.
  • [9:28] Zero Linden: Well, I think for the time being, there is plenty going between the viewer and simulator over UDP that just works
  • [9:28] Zero Linden: and is implemented by everyone, so we should just leave it there for now
  • [9:29] Zero Linden: We SHOULD move things to TCP and caps and HTTP and resources that need to go to the agent domain
  • [9:29] Zero Linden: and that need to be reliable
  • [9:29] Zero Linden: which in the end will leave basically object updates as the last bastion of UDP
  • [9:29] Zero Linden: and some avatar updates as well
  • [9:29] Zha Ewry: and then we can have a discussion as to whether even that makes sense
  • [9:29] Zero Linden: and these things we want to make sure that the eventual transport
  • [9:29] Zero Linden: can handle their semantics
  • [9:30] Morgaine Dinova: Zero: Is that one of the sim-side areas in which some code will be released, so that alternative (plugin?) transports can be added by both sides working at it? (At least on test grid)
  • [9:31] Zero Linden: Well, we might choose to experiment with that internally -- but we won't be releasing or opening up our simulator implementation to such plug-ins for a while
  • [9:31] Zero Linden: on the viewer side
  • [9:31] Zero Linden: the messaging system is sufficiently abstracted now that you
  • [9:31] Morgaine Dinova: KK. We'll have to do it in Opensim then.
  • [9:31] Zero Linden: can inject messages from what ever transport you care to add to it
  • [9:31] Zero Linden: That was one of the happy side effects of message liberation
  • [9:32] Zero Linden: that with few exceptions, the message handling code (sending and receiving) is really blind now
  • [9:32] Zero Linden: to the transport
  • [9:32] Rex Cronon: u mean to what goes over it?
  • [9:32] Zero Linden: (the few exceptions are messages like object update, where on the sim side, the upper layer is acutely aware of the UDP packet sizes and semantics)
  • [9:32] Zero Linden: Rex - no to how it is sent
  • [9:33] Zero Linden: so, for example, the code that handles chat or IM messages
  • [9:33] Zero Linden: is really blind to whether those messages are coming via UDP, or via the Event Queue,
  • [9:33] Zero Linden: or if you want to plug it together, via say SCTP or BEEP or AMQP or....
  • [9:33] Zha Ewry: well, sure, because, the low level imposes some things, although.. you could hide them.. the cost is probably higher than you want to pay
  • [9:33] Zero Linden: eek... XMPP?
  • [9:34] SignpostMarv Martin: heh
  • [9:34] Xugu Madison: Why eek XMPP? I mean, at least for IM...
  • [9:34] Zero Linden: Right - we could abstract that, but not clear it is worth it - since only a very few messages do it
  • [9:34] Zero Linden: because XMPP's connection and auth models are very antithetical to the grids
  • [9:34] Zha Ewry: At some point, the costs of adding a layer, to abstract away an issue, gets balanced with the amount of high level code that is needed to handle the issue
  • [9:35] Zero Linden: and because so much of XMPP is left to the implementation... like, say, routing!
  • [9:35] Zero Linden: Exactly....
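To make Zero's "blind to the transport" point concrete, here is a minimal sketch of dispatch decoupled from delivery: handlers register by message name, and whichever transport received the message (the UDP circuit, the event queue, or something plugged in later) calls the same entry point. The names and message shapes are invented for the example.

```python
# Minimal sketch of transport-blind message dispatch; the handler table and
# message names are invented for illustration.
_handlers = {}

def register(name):
    def decorator(fn):
        _handlers[name] = fn
        return fn
    return decorator

@register("ChatFromSimulator")
def on_chat(msg):
    print("chat:", msg["text"])

def dispatch(name, payload):
    """Called by whichever transport delivered the message: the UDP circuit,
    the event queue, or any experimental transport plugged in later."""
    handler = _handlers.get(name)
    if handler is not None:
        handler(payload)

# Both of these end up in the same handler, regardless of how they arrived.
dispatch("ChatFromSimulator", {"text": "hello via UDP"})
dispatch("ChatFromSimulator", {"text": "hello via the event queue"})
```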
  • [9:35] Zero Linden: In the case of things like mini-map updates, I think we found a pattern that is reused often enough
  • [9:35] Zero Linden: (that is, send this, but supersede prior ones in the queue) that we can make that a
  • [9:36] Morgaine Dinova: Might be nice if we went off and created a massively scalable message transport subsystem with semantics suitable for virtual worlds --- an interesting side project. Libraries only, no clients except for testing. But target 500m populations in the design.
  • [9:36] Zero Linden: general facility to the higher level code
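A sketch of the "add this, and supersede similar ones already in the queue" facility discussed above. The keying by a (type, subject) pair is an assumption made purely for illustration.

```python
from collections import deque

class OutgoingEventQueue:
    """Sketch of the supersede-on-add pattern discussed above; keying events
    by (type, subject) is an assumption made for this example."""

    def __init__(self):
        self._events = deque()

    def post(self, event, supersede=False):
        if supersede:
            key = (event["type"], event.get("subject"))
            # Drop earlier events with the same key before appending the new one.
            self._events = deque(
                e for e in self._events if (e["type"], e.get("subject")) != key
            )
        self._events.append(event)

    def drain(self):
        batch, self._events = list(self._events), deque()
        return batch

# Repeated friend-position updates collapse to the most recent one.
q = OutgoingEventQueue()
q.post({"type": "FriendPosition", "subject": "friend-1", "pos": (1, 2, 3)}, supersede=True)
q.post({"type": "FriendPosition", "subject": "friend-1", "pos": (4, 5, 6)}, supersede=True)
assert [e["pos"] for e in q.drain()] == [(4, 5, 6)]
```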
  • [9:36] Zero Linden: Okay all... I've got to run