User Experience Interest Group/Transcripts/2009-06-18


Topic & Summary

User Experience Interest Group Discussion for June 18, 2009.

Topic: Camera Tracking / Gesture Recognition.

We were planning on discussing Viewer Modes, but Mm Alder had a special request to discuss Linden Lab's camera tracking / gesture recognition project, so we discussed that instead.

Mm invited Merov Linden to attend, and Merov told us that the intern who was going to be working on the project had "flaked out". However, Merov and Philip Linden are still planning to work on the project.

We discussed various aspects of the project. Key points:

  • The initial aim for the project is to improve immersion (rather than interacting with the UI via a camera). The focus at first will be on head tracking, e.g. your avatar nods when you do.
  • The user should be able to toggle the recognition system in a way that won't be misinterpreted as a gesture by the system.
  • It's important that users can disable or reassign specific gesture cues. E.g. a user with a facial tic may want to disable the "wink" cue to avoid accidentally triggering it.
  • The UI for users to reassign gestures needs to be flexible and scalable, so that more cues or actions could be added later without crowding the UI.
  • The code API for the system should be flexible enough that it could be extended to use gestures to trigger things besides just avatar animations.
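The key points above describe a design rather than any existing code. As a minimal sketch of what such an API might look like (all class and method names here are hypothetical, not Linden Lab's), the essentials are: a master toggle so the recognizer can go "blind", per-cue disabling for users who want to drop a cue like "wink", and cue-to-action bindings that are decoupled so a cue can drive things besides avatar animations:

```python
# Hypothetical sketch of a cue->action mapping layer for camera-based
# gesture recognition. Detected cues are decoupled from the actions they
# trigger: bindings are user-configurable, individual cues can be disabled
# (e.g. "wink" for a user with a facial tic), and the whole recognizer can
# be toggled off so no movement is misread as a gesture.

from typing import Callable, Dict, List, Set


class GestureMapper:
    """Routes recognized camera cues to user-configurable actions."""

    def __init__(self) -> None:
        self.enabled: bool = True          # master toggle ("blind" mode when False)
        self._bindings: Dict[str, List[Callable[[], None]]] = {}
        self._disabled_cues: Set[str] = set()

    def bind(self, cue: str, action: Callable[[], None]) -> None:
        """Attach an action to a cue; one cue may trigger several actions."""
        self._bindings.setdefault(cue, []).append(action)

    def unbind(self, cue: str) -> None:
        """Remove all actions for a cue (reassignment = unbind + bind)."""
        self._bindings.pop(cue, None)

    def disable_cue(self, cue: str) -> None:
        """Ignore a specific cue without touching its bindings."""
        self._disabled_cues.add(cue)

    def on_cue(self, cue: str) -> int:
        """Called by the recognizer when a cue is detected.

        Returns the number of actions fired (0 if blind or disabled).
        """
        if not self.enabled or cue in self._disabled_cues:
            return 0
        actions = self._bindings.get(cue, [])
        for action in actions:
            action()
        return len(actions)
```

Because an action is just a callable, the same mapping layer could trigger avatar animations, UI events, or anything else, which is the kind of extensibility the group asked for.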

Links

Transcript

[15:01] Jacek Antonelli: Hey Mm. I got your email
[15:01] Mm Alder: What do you think?
[15:03] Mm Alder: Would you like to discuss gesture recognition?
[15:04] Charlette Proto: hi Jacek, Mm
[15:04] Jacek Antonelli: Hey Charlette
[15:04] Mm Alder: howdy
[15:04] Charlette Proto: your first time Mm?
[15:04] Charlette Proto: or alt?
[15:05] Mm Alder: No, I've just been busy Thursdays lately
[15:05] McCabe Maxsted waves
[15:05] Charlette Proto: ah, lately - like for last 12 months
[15:05] Charlette Proto: hi McCabe
[15:05] McCabe Maxsted: hey there
[15:05] McCabe Maxsted: how's it going?
[15:05] Jacek Antonelli: Mm has been here before, Charlette
[15:06] Charlette Proto: kewl except for the sound at level 0 problem with random users in the sim
[15:06] Mm Alder: I "attend" by reading the transcripts.
[15:06] Jacek Antonelli: Mm: I'm not sure about Gesture Recognition. I'm wondering if it would be better to discuss it on the mailing list? Especially if you're looking for some Linden involvement in the discussion
[15:07] Charlette Proto: ah OK perhaps the best way to catch up with the issues, but the gossip is cut off
[15:07] Mm Alder: Mailing lists tend to be rather uncreative.
[15:08] Charlette Proto: true I really like getting live responses to the issues at hand
[15:08] Mm Alder: I was hoping to get the new intern involved.
[15:08] Charlette Proto: get a better idea of how people perceive things
[15:08] McCabe Maxsted: the new intern?
[15:08] Mm Alder: Merov has an intern working on gesture recognition.
[15:09] Jacek Antonelli: Okay. We could discuss it today, or schedule it for next week and I'll get the announcement out earlier so Merov and/or the intern might be able to make it?
[15:09] Morgaine Dinova: Hi folks
[15:09] Jacek Antonelli: Hey Morgaine
[15:09] Mm Alder: OK, but don't count my absence as lack of interest.
[15:09] Charlette Proto: hi Morgaine
[15:09] Morgaine Dinova: Don't look too strongly in this direction, or Snowglobe will crash
[15:10] Mm Alder: My son's graduation is next Thursday :-)
[15:10] Charlette Proto: gesture recognition? what kind of gestures
[15:10] McCabe Maxsted: ahoy morgaine!
[15:10] Mm Alder: Charlette, that's one of the questions that needs to be answered.
[15:11] Morgaine Dinova: Hiya Jacek, Charlette, McCabe, Mm, and anyone who hasn't rezzed yet :-)
[15:11] Charlette Proto: ah I personally like the 3D eg hand waving etc rather than swipes of the touchscreen like iPhone
[15:11] Jacek Antonelli: Okay. So, which would you prefer, Mm? Today or next week? Or perhaps we could talk a bit about it today, and continue (hopefully with Merov + intern) next week?
[15:11] Mm Alder: I put a bunch of questions on the wiki since I didn't think I could attend in "person". http://wiki.secondlife.com/wiki/User_Experience_Interest_Group/Topics
[15:12] Mm Alder: I'll defer to your judgment.
[15:13] Mm Alder: We may be able to grab Merov from the SLDev meeting.
[15:14] Jacek Antonelli: Okay. Let's talk about it some now, then see if we can carry it over to the mailing list, and then talk some more next week if needed
[15:14] Jacek Antonelli: Want to IM Merov and see if he can come over, Mm?
[15:15] Charlette Proto: suppose one needs to define the input modes in context of gestures, eg camera, accelerometer, touchscreen
[15:15] Jacek Antonelli: Charlette: In this case, the project is camera-based
[15:16] Jacek Antonelli: http://wiki.secondlife.com/wiki/Gesture_Recognition
[15:16] Charlette Proto: some of the capture modes lend themselves to different parts of the UI better than others
[15:16] Charlette Proto: ah OK
[15:16] Jacek Antonelli: The scope of it is actually pretty limited right now, since it's just a summer intern working on it.
[15:17] Charlette Proto: OK translated to avatar interactions rather than operation of the UI like I thought
[15:17] Jacek Antonelli: Right
[15:18] Jacek Antonelli: So far, it's making your avatar move its head when you move it, and also triggering specific animations when you do certain gestures with your head or body
[15:18] Charlette Proto: interesting, but the UI should be integrated too or else the modality switching may become a real bore
[15:18] Jacek Antonelli: I imagine that's just a starting point, though. I could see someone extending it further
[15:19] McCabe Maxsted: interesting page
[15:19] Charlette Proto: I've done some of the stuff like that with a Wii Remote strapped to the arms, the modality problem was immediately obvious
[15:20] Jacek Antonelli: True. That's one of the issues Mm raised on the topic page. When is gesture recognition on, and when is it off ("blind")?
[15:20] Charlette Proto: even walking about in one on one interactions with other residents presented a bit of a problem
[15:20] Jacek Antonelli: And how do you switch between the two states?
[15:21] Jacek Antonelli waves to Garn
[15:21] Garn Conover: hewwo's
[15:21] Garn Conover: any luck on the popups errors McCabe?
[15:21] Charlette Proto: suppose the arms down on the keyboard being blind is the obvious thing
[15:21] Charlette Proto: hi Garn
[15:21] Jacek Antonelli: But it's also for head tracking, Charlette
[15:21] Garn Conover: allo :)
[15:22] Jacek Antonelli: Hey Merov :)
[15:22] Garn Conover: EEEEK!
[15:22] Charlette Proto: head movements really clash with looking at the screen
[15:22] Garn Conover: A LINDEN RUN!!!!
[15:22] Jacek Antonelli: lol Garn
[15:22] Merov Linden: where! where!
[15:22] Charlette Proto: hi Merov
[15:23] Morgaine Dinova: Eeek, I escaped from Rob's but Merov followed me here ... ;-)
[15:23] Merov Linden: Mm Alder asked me to come
[15:23] Jacek Antonelli: It's the fuzz, hide your stashes. ;) Anyway. We're talking about the gesture recognition project, at Mm's request
[15:23] Garn Conover: Merov which area do u fall under?
[15:23] Mm Alder: I was about to say you'd be here, but you *are* here.
[15:23] Jacek Antonelli: Hey Aimee :)
[15:23] Garn Conover: Aimee :)
[15:23] Aimee Trescothick: yellow :)
[15:23] Charlette Proto: Merov you haven't rezzed for me at all (cloud)
[15:23] Morgaine Dinova waves to Aimee
[15:23] Merov Linden: k, gesture recognition
[15:23] Jacek Antonelli: Mm has put in some questions at http://wiki.secondlife.com/wiki/User_Experience_Interest_Group/Topics
[15:24] Merov Linden: we *had* an intern but, as I said to Mm Alder, he flaked out
[15:24] Jacek Antonelli: Ahh
[15:24] Merov Linden: disappointing
[15:24] Jacek Antonelli: That's too bad
[15:24] Aimee Trescothick: that's a shame
[15:24] Merov Linden: but, Philip and myself are still determined to get something going
[15:24] Morgaine Dinova: Shouldn't pick interns from humanities :P
[15:24] Garn Conover grins @ Morg
[15:25] Merov Linden: this is an area Philip and myself are very interested in
[15:25] Aimee Trescothick: yeah, humans are always flakey
[15:25] Merov Linden: and already sunk some time in
[15:25] Jacek Antonelli: Ah, cool Merov
[15:26] Merov Linden: there is a lot of existing research and even open source packages related to this
[15:26] Charlette Proto: are there any specific animations developed for gestures or is it too early for that
[15:26] Merov Linden: so it's definitely an area where you need to make sure you're not reinventing the wheel
[15:26] Charlette Proto: agrees
[15:27] Merov Linden: when Philip says "gesture", he really meant "animation" (aka "gestures" in SL)
[15:27] Mm Alder: Are you interested in creating better immersion or are your goals functional?
[15:27] Merov Linden: better immersion
[15:28] Merov Linden: i.e. we're not trying to *control* SL functions with gestures (e.g. building)
[15:28] Mm Alder: Hmm. I expected you were looking at conversational cues, etc.
[15:28] Morgaine Dinova: Jan Cigar seems to be an amazing resource for the gestures project, great background.
[15:28] Charlette Proto: but integrating gestures into the UI as well is essential from what I've experienced in my experiments
[15:28] Merov Linden: but rather trigger animations automatically
[15:28] Morgaine Dinova: On SLdev
[15:28] Merov Linden: to create a better copresence fee;
[15:28] Merov Linden: feel!
[15:28] Charlette Proto: functionally the UI operation should be possible too Merov
[15:28] Mm Alder: I think it would be very useful for group discussions.
[15:28] Merov Linden: Jan is indeed very thoughtful
[15:29] Merov Linden: he knows the field
[15:29] Merov Linden: great to have him on sldev
[15:29] Mm Alder: How about taking some questions that I wrote up earlier?
[15:29] Merov Linden: Charlette: UI operations are possible but it's hard to get fine control
[15:30] Merov Linden: shoot
[15:30] Mm Alder: Which gestures are worth recognizing?
[15:30] Charlette Proto: I realise that but certain things are simply mapped to given gestures you learn
[15:30] Merov Linden: For the moment, we'll be focusing on head movements (nod, negate)
[15:31] Charlette Proto: UI operation is a part of the immersive experience as well eg walking turning
[15:31] Morgaine Dinova: Merov: controlling animations automatically with various cues is great, but not so great if that's the ONLY way those particular anims can be controlled. I hope that the triggers get coupled via an API that other things can drive too, and not be hardwired to the cues as triggers.
[15:31] Merov Linden: and face emotion (smile, frown, laugh)
[15:31] Mm Alder: But those are very hard to see in SL.
[15:32] Charlette Proto: that is the side that is purely cosmetic as far as immersion in Second Life™ goes
[15:32] Merov Linden: not harder than lip sync Mm Alder :)
[15:32] Mm Alder: :-)
[15:32] Merov Linden: cosmetic is important actually for the feeling of immersion...
[15:32] Mm Alder: They're also harder to recognize than for example moving your hands.
[15:32] Morgaine Dinova: Well everything here is kinda cosmetic, but that's what distinguishes 3D VWs from MUDs ;-)
[15:33] Charlette Proto: one is immersed more when the operation moves towards the background of interactions, a smile etc will not get one very immersed
[15:33] Merov Linden: ...or we'd be having this discussion in WebEx!
[15:33] Jacek Antonelli: hehe
[15:33] Jacek Antonelli: Hey Malbers :)
[15:33] Malbers Linden: hello all
[15:33] McCabe Maxsted: ahoy malbers
[15:33] Charlette Proto: hi malbers
[15:33] Morgaine Dinova: Blimey
[15:33] Mm Alder: How do you "mute" (I guess that would be "blind") the camera?
[15:33] Merov Linden: I agree that recognizing hands is easier
[15:33] Morgaine Dinova: Hiya Malbers, long time no see :-)
[15:34] Malbers Linden: yep, it's been a while
[15:34] Merov Linden: I really haven't thought of the details of the UI yet Mm Alder
[15:34] Merov Linden: but, sure, you should be able to turn it off somewhat
[15:34] Mm Alder: Oh, it's all about the UI :-)
[15:34] Morgaine Dinova: Merov: and the API?
[15:34] Charlette Proto: hand expressions (body language) and explicit gestures are some of the most interesting propositions in my view
[15:35] Merov Linden: I'd side with Morgaine for once: the API is (to me) more critical
[15:35] Mm Alder: Unless you "blind" it by voice, your movements would be interpreted as gestures.
[15:35] Morgaine Dinova: You always side with me Merov. It's the cookies I send you, I'm pretty sure :P
[15:36] Merov Linden: we should make it so that someone else could plug in a better mocap system and give the avatar better, more realistic movements
[15:36] Merov Linden: Mm Alder: why voice?
[15:36] Charlette Proto: but since there is a limited vocabulary of gestures being recognised and one of them could be the toggle (blind) gesture, this issue brings us back to mapping the UI to some of the gestures as well
[15:37] Mm Alder: So are you thinking about mocap to puppeteering or gesture reco to canned gestures?
[15:37] Charlette Proto: don't mix voice modality with gestures, that is very messy
[15:37] Mm Alder: Merov, because you can talk without moving.
[15:37] Merov Linden: that sounds very "Natal"!
[15:37] Mm Alder: But you can't click a button without gesturing.
[15:37] Charlette Proto: talking to the UI is a real nono for the interface
[15:37] Aimee Trescothick: Mm's a ventriloquist :D
[15:38] Mm Alder: :-)
[15:38] ATechwolf Foxclaw: Hi all.
[15:38] Jacek Antonelli: There could be a gesture for toggling gesture recognition. Like swiping your fingers across the screen or something
[15:38] McCabe Maxsted: hehe
[15:38] ATechwolf Foxclaw tosses a snack to Garn
[15:38] Morgaine Dinova: A talk without movement is pretty boring. While they should be decoupled for flexibility, in the vast majority of cases you'll couple them for a talk.
[15:38] Malbers Linden: Hi ATechwolf
[15:38] McCabe Maxsted: ahoy techwolf
[15:38] Merov Linden: the thing is: we think we better shoot for devices that people have on their existing machines
[15:38] Charlette Proto: yup I said the same thing as Jacek
[15:38] Charlette Proto: one gesture to toggle
[15:38] Merov Linden: i.e.: webcams integrated in their laptop or desktop
[15:39] Merov Linden: that usually have a rather small field of view
[15:39] Charlette Proto: shit you aren't looking at my cam now?
[15:39] Merov Linden: good view of the head and trunk but not much else
[15:39] Merov Linden: hands are even out of view most of the time
[15:39] Morgaine Dinova: Example: a wink is a very important gesture in many situations, but a person with a chronic eye twitch may want to decouple that particular cue during a talk.
[15:40] ATechwolf Foxclaw: I came in late here. I have a spacenav 3D joystick and it's nice for using in SL. But not everyone has one, and some folks I talked to know what it is and would like to get one, but can't afford it. So using what they have is a good thing.
[15:40] Mm Alder: Merov, but if the user knew that his hand movements were interpreted, he might use them.
[15:40] Charlette Proto: hehe mine is quite on the side of wide view at the distance I sit at and I wouldn't need to look wider than that
[15:40] Merov Linden: Mm Alder: sure, could be
[15:40] Charlette Proto: ATech we are limiting this to video capturing I think
[15:40] Mm Alder: Maybe even a gesture "sign language"
[15:41] Morgaine Dinova: ASL detection? That would be pretty advanced, no?
[15:41] Merov Linden: I did some experiments with that, remember; I can't tell you how many emails I received pointing at the fact that gesturing is too hard, etc...
[15:41] Mm Alder: I don't mean real sign language, I mean a language of gestures.
[15:41] Aimee Trescothick: hmm, I have a webcam on each monitor, could do stereo :D
[15:42] Morgaine Dinova: Hehe
[15:42] Garn Conover: lol
[15:42] Mm Alder: Merov, I hadn't seen that work
[15:42] Morgaine Dinova: Aimee: so you could twitch both your eyes :P
[15:42] Merov Linden: well, whatever you do, there are always naysayers for sure
[15:42] Charlette Proto: stereo is used to improve spatial recognition as one would expect
[15:42] ATechwolf Foxclaw: I've talked to one person who can sign. Told me that sign language can even have its own slang and dialect. So capturing sign language is about the same problem as voice to speech.
[15:42] ATechwolf Foxclaw: voice to text..
[15:42] Merov Linden: Mm Alder: you did, you debunked me on sldev remember? :)
[15:43] Merov Linden: the HandsFree3D demos
[15:43] Mm Alder: How about "hello", "good bye" "come here"?
[15:43] Aimee Trescothick thinks Merov should have been "Leaning Linden"
[15:43] Mm Alder: ?
[15:43] Jacek Antonelli chuckles at Aimee
[15:43] Merov Linden: hehe
[15:44] Mm Alder: How long ago was that?
[15:44] Merov Linden: the demo? that was last year (May 2008)
[15:44] Merov Linden digs the links
[15:46] Mm Alder: How good are the techniques to recognize smiles and frowns these days?
[15:46] Merov Linden: the demo video are here
[15:46] Merov Linden: http://www.handsfree3d.com/videos/
[15:46] Merov Linden: first is navigation
[15:46] Charlette Proto: I'm getting a feeling that this (like everything else in SL) will be thrown in without much foresight in the design of the greater system's capabilities, and in the long term will prove to be limited by this approach
[15:46] Merov Linden: second is gesture
[15:46] Charlette Proto: sry
[15:47] Aimee Trescothick: nice way to be defeatist
[15:47] Morgaine Dinova: So, API, and how the API might be controlled from the UI. Merov, do you think that the number of gestures will be beyond the capacity of a crossbar-type matrix of tickboxes, with detected cues on one edge and triggered anims on the other?
[15:47] Charlette Proto: gesture (physical) computing is one of my pet subjects, but somehow i feel like going to make toast
[15:47] Mm Alder: Ah, but that's about control, not communication. Big difference.
[15:48] Jacek Antonelli waves to Geneko
[15:48] Malbers Linden: hey geneko
[15:48] Geneko Nemeth: Hey!
[15:48] Charlette Proto: hi geneko
[15:48] Aimee Trescothick hands Charlette a loaf
[15:48] Morgaine Dinova: Hiya Gen :-)
[15:48] Merov Linden: Mm Alder: right, that was control (for gesture)
[15:48] Charlette Proto: thanks Aimee
[15:49] Mm Alder: Morgaine, the API from the gesture recognition should only say what was recognized, not what to do with it.
[15:49] Morgaine Dinova: Mm: that's only the cues
[15:49] Merov Linden: and it's not good unless you make a UI around it from the ground up
[15:49] Morgaine Dinova: Mm: I'm talking about the user linkage from cues to triggered anims
[15:50] Charlette Proto: the issue of MAPPINGS is what Morgaine is talking about
[15:50] Mm Alder: Morgaine, wouldn't that be a table lookup?
[15:50] Merov Linden: guys, I need to go: there's a snowglobe release going on and the Mac installer is busted...
[15:50] Morgaine Dinova: Merov: well sure, none of this exists, the UI would be from the ground up necessarily :-)
[15:50] Charlette Proto: byee Merov
[15:50] McCabe Maxsted: take care merov
[15:50] Charlette Proto: good luck
[15:50] Jacek Antonelli: Woops, good luck with that Merov!
[15:50] Charlette Proto: hehe
[15:50] Jacek Antonelli: Thanks for dropping by!
[15:50] Mm Alder: Thanks for dropping by Merov. Sorry to hear about the intern.
[15:50] Merov Linden: I'll be happy to come next week and chat more about this if you like
[15:51] Morgaine Dinova: Mm: no, it's as Charlette said, the mapping from detected cues to triggered anims. That has to be user-definable.
[15:51] Merov Linden: hopefully, I'll have more concrete things to say
[15:51] Aimee Trescothick: bye Merov :)
[15:51] Morgaine Dinova: Great Merov :-)
[15:51] Charlette Proto: precisely Morgaine
[15:52] Mm Alder: Yes, I agree Morgaine.
[15:52] Charlette Proto: not everyone will be comfortable with gesturing in the same way (eg cultural diffs etc)
[15:52] Morgaine Dinova: How are things with you Malbers? Working on anything UI-related?
[15:52] Malbers Linden: that's all I do
[15:52] Jacek Antonelli: hehe
[15:52] Geneko Nemeth: Hmm...
[15:52] Charlette Proto: suppose octopuses don't gesture much
[15:52] Morgaine Dinova: Indeed, and don't forget my example of disabling your eye twitch. Flexibility is essential.
[15:52] Malbers Linden: I was coming to see whether the discussion was about different kinds of UIs for different people
[15:52] Mm Alder: Malbers, want to say anything about viewer 2.0? :-)
[15:53] Malbers Linden: nope, there's enough excitement over on the blog
[15:53] Charlette Proto: yes please ver 2 Malbers
[15:53] Jacek Antonelli: Malbers: We put off that topic for another time, sorry. Mm had a special request
[15:53] ATechwolf Foxclaw checks the blog
[15:53] Geneko Nemeth: Actually, there's something I've seen the UX team working/has worked on that is not UI. I'm not sure if this would be a good place to ask about it though.
[15:53] Malbers Linden: yeah, cool that Merov came by to talk
[15:54] Malbers Linden: Geneko?
[15:54] Geneko Nemeth: There's a region next to Here (region, that is) that looks like an orientation island. Does not show up on the world map.
[15:54] Malbers Linden: interesting
[15:55] Geneko Nemeth: And the nominal owner is Benjamin Linden, I think...
[15:55] Morgaine Dinova: I think it would be useful to enumerate the potential detected cues (without any rocket science ones), and the likely list of anims that one might want to trigger with them.
[15:55] Malbers Linden: that's weird
[15:55] Charlette Proto: Merov seemed to be more of a YES person than an ideas/design person I would expect working on this sort of thing
[15:55] Geneko Nemeth: I had pics, but can't find them. >_<
[15:55] Charlette Proto: I've been there too Geneko
[15:55] Geneko Nemeth: Ah, so it's pretty old.
[15:56] Mm Alder: Charlette, what is a "YES person"?
[15:56] Malbers Linden: Merov's a good guy; have talked to him about Maps stuff
[15:56] Jacek Antonelli: Was the sim called "Dazzle Land", Geneko? :D
[15:56] McCabe Maxsted might have left a surprise there ages ago
[15:56] McCabe Maxsted: yeah, he seemed cool
[15:56] Aimee Trescothick: Merov is a hard worker
[15:56] Geneko Nemeth: ... I probably would have remembered if it was that ridiculous...
[15:56] Jacek Antonelli hopes McCabe didn't leave a bag of burning dog poo in Benjamin's sim. >_<
[15:56] McCabe Maxsted: haha :P
[15:57] Malbers Linden: are there burning bags of dog poo; that would be kinda fun
[15:57] Jacek Antonelli: Yeah, Merov is cool. I don't know what you're talking about, Charlette
[15:57] Charlette Proto: not saying that he is not a good guy, but this is a very difficult area and just listening to Philip will not do any good
[15:57] Jacek Antonelli: I don't think he's Philip's puppy dog, if that's what you're saying. They are both interested in the topic, and Merov definitely seems extremely well qualified.
[15:57] Morgaine Dinova: Mm, if you want to follow up your topic with a UXIG wiki page on the gesture issue, that might be pretty interesting.
[15:58] Mm Alder: Merov is the person who was working on puppeteering a while back.
[15:58] Jacek Antonelli: Are you sure, Mm? That doesn't sound right...
[15:58] Morgaine Dinova: Merov's dead keen on it.
[15:58] Charlette Proto: gesture computing is a very specific problem (eg modality etc) and yes/no nodding is only a corner of this world
[15:58] Aimee Trescothick: have you actually seen his hands free 3d work Charlette?
[15:58] Malbers Linden: well, I came late and have a 4pm meeting
[15:58] Malbers Linden: keep sending those emails with the topic
[15:59] McCabe Maxsted waves. Take care malbers
[15:59] Malbers Linden: for different UIs for different people, I was interested in different kinds of UIs for Land
[15:59] Jacek Antonelli: Take care, Malbers. Next week will probably be viewer modes for real
[15:59] McCabe Maxsted: for land?
[15:59] Malbers Linden: like different UIs for land owners
[15:59] Charlette Proto: no I haven't Aimee, just going by what I heard today
[15:59] Morgaine Dinova: Oh it goes way beyond yes/no nodding. Anything that current technology can detect using a consumer webcam should be usable as a cue by us. Not just nods and shakes! :-)
[15:59] Mm Alder: Ok, Morgaine, I'll write up something when I get a chance if you promise to add to it.
[15:59] Morgaine Dinova: Sure Mm
[15:59] Geneko Nemeth: Cuddles?
[15:59] Aimee Trescothick: well, I suggest you actually look at what he's done rather than guess
[16:00] Jacek Antonelli: I think you're being confused by the term "gesture", Charlette. There are a lot of different things called "gestures", and Merov is focusing on one specific area -- camera-based recognition of the controller's pose and expressions
[16:00] Malbers Linden: good seeing all of you again.
[16:00] Malbers Linden: bye
[16:00] Charlette Proto: hope we get to see you again soon Malbers
[16:00] McCabe Maxsted: ahh
[16:00] Morgaine Dinova: You too Malbers!
[16:00] Jacek Antonelli: Bye Malbers!
[16:00] Morgaine Dinova: Have fun Malb :-)
[16:00] Geneko Nemeth: See you next time!
[16:00] ATechwolf Foxclaw: I would think land owners would need a lot of text on the screen for managing land. Seeing all the info at once without cluttering up the limited screen space would, I think, be nice for land owners.
[16:00] Geneko Nemeth: Daww too slow.
[16:01] Jacek Antonelli: Yeah, land management is an interesting UI mode I hadn't considered
[16:01] Aimee Trescothick: time for bed
[16:01] Morgaine Dinova: Charlette: as you heard, Merov's keen on the API, so he wants it flexible.
[16:01] Aimee Trescothick: night
[16:01] Jacek Antonelli: Ni ni Aimee
[16:01] Morgaine Dinova: See you Aimee :-)
[16:01] Aimee Trescothick waves at both cameras
[16:01] Jacek Antonelli: hehe
[16:01] Morgaine Dinova: lol
[16:02] McCabe Maxsted: :)
[16:02] Charlette Proto: BT Jacek I don't think I'm being confused by the term, more like I realise the implications of using gestures as part of the interface
[16:02] Charlette Proto: BTW*
[16:02] Mm Alder: Which implications Charlette?
[16:03] Charlette Proto: like the issues of modality eg keyboard/mouse/gesture switching etc
[16:03] Morgaine Dinova: Well it's always *possible* that they'll limit it mercilessly to nods. But if it's in my earshot, that'll earn them a ton of woe, and they'd regret it. :P
[16:03] Charlette Proto: some of these things have to be considered in the spec even if a very limited scope is being implemented at first
[16:03] Mm Alder: Would you switch or use them concurrently?
[16:04] Morgaine Dinova: I'd just set up the trigger matrix, and allow anything to trigger anything or everything.
[16:04] Charlette Proto: one needs to include some of the critical UI functions into the gesture API
[16:04] Charlette Proto: agrees with morgaine
[16:05] Morgaine Dinova: The default mapping then simply becomes the diagonal tickboxes enabled.
[16:05] Jacek Antonelli: Morgaine: I think there's probably a better UI than a big grid/matrix, but yeah, definitely want to be able to mix and match cues with actions
[16:05] Mm Alder: How much detail would you put in the matrix?
[16:05] Mm Alder: And why a matrix rather than a map?
[16:05] Morgaine Dinova: It's just tickboxes, pretty noddy. Everyone's used to ticking boxes like in tic-tac-toe.
[16:06] Geneko Nemeth: Not when you have 2500 of them on one screen...
[16:06] Jacek Antonelli: hehe, exactly
[16:06] Charlette Proto: the main point is to design a system that can cater for future expansions well and not to be stuck with a subset of the possible interaction
[16:07] Mm Alder: You'd want a choice of "smile" gestures and a choice of "waving hello" gestures, so a matrix would probably be very big.
[16:07] Morgaine Dinova: I agree on overload, which is why my first question to Merov was "How many?", if you recall :-)
[16:07] Jacek Antonelli: A grid isn't scalable in the long run. It'd be fine if you only had, like, 5 possible cues and 5 possible actions. But a map (choose cue, choose action) is just as good in that situation, and scales much better
[16:07] Mm Alder: Yes, "how many" is an important question. Too few and it's boring. Too many and it's a mess.
[16:07] Jacek Antonelli: too
[16:07] Charlette Proto: 'mapping' is a general term but a multidimensional matrix is more likely what would be needed in the representation Mm
[16:08] Morgaine Dinova: Does anyone know how many gestures those open packages can detect using a consumer webcam? Typically?
[16:08] Morgaine Dinova: 1, 2, 5, 10, 20?
[16:08] Geneko Nemeth: Maybe we could use dropdowns. New, empty dropdowns that appear when you make a choice in the old one. That'll take care of multiple gesture triggering.
[16:08] Geneko Nemeth: I say zero?
[16:09] Morgaine Dinova: Hehehe
[16:09] Mm Alder: Geneko, what do you mean by multiple gesture triggering?
[16:10] Geneko Nemeth: One cue mapping to multple gestures, of course!
[16:10] Morgaine Dinova: Geneko: not only is that a lot more complex than a matrix, but you can't even see what you've selected if you have more than 1 selection. Don't like dropdowns.
[16:10] Geneko Nemeth: I'm not saying one dropdown, I'm saying a series of dropdowns as many as you need them.
[16:10] Jacek Antonelli nods to Geneko
[16:11] Morgaine Dinova: Well that's gonna be distressingly more complex.
[16:11] Geneko Nemeth: I'm not saying this is optimal either... just less confusing than checkboxes.
[16:11] Mm Alder: Would all of those be run together? Are they assemblies of separate hand, face, etc. animations?
[16:11] Geneko Nemeth: Details, details... ^_^
[16:11] Charlette Proto: need to tak nboard UI gestures not only animation triggers
[16:12] Morgaine Dinova: Mm: decoupled. So for example you could make your ears twitch and one eye blink when in RL you stick your tongue out.
[16:12] Charlette Proto: hehe Morg
[16:13] Morgaine Dinova: I'm thinking of furries here ;-)
[16:13] Mm Alder: Morgaine, do you think this would be a one time setup or something that is in constant use that is modified during a conversation for example?
[16:13] Morgaine Dinova: Mm: I don't think we can prejudge. People will always amaze us.
[16:14] Charlette Proto: maybe a 'context' based approach would suit the selection of gesture specific maps
[16:14] Mm Alder: Charlette, "tak nboard"?
[16:14] Charlette Proto: eep got stuck typing
[16:14] Morgaine Dinova: Charlette: oh, that's a damn good idea!
[16:14] Morgaine Dinova: I like that
[16:14] Mm Alder: Ah, a "mood" based approach?
[16:15] Morgaine Dinova: Charlette's just suggested that the whole map set can be reloadable, and you could have as many as you like, eg. one whole set per file.
[16:15] Charlette Proto: mood /attitude/situation - context in a general term for the logic involved
[16:16] Mm Alder: Come to think of it, that is probably not only nice, but necessary.
[16:16] Morgaine Dinova: Mm: good observation. Because no way can the maps be toggled by hand, it's just too much.
[16:16] Charlette Proto: otherwise the avie gestures would be as boring as AOs which do nothing to communicate
[16:17] Mm Alder: Well I wasn't thinking about how they'd be toggled, I was just noting that there has to be sets for different situations.
[16:17] Morgaine Dinova: Hmmm ... pity that we can't get the AO to do the mood switch, but one is server side and the other client-side.
[16:18] Jacek Antonelli: If only LL would implement script->viewer comms.... *dreams*
[16:18] Mm Alder: Actually, I'm looking at client side AOs. It may be possible.
[16:18] Morgaine Dinova: Mm: that's interesting!
[16:18] Charlette Proto: hehe we could use biometrics to switch moods and environmental sensors for situational context selection
[16:18] Jacek Antonelli: Definitely possible, Mm
[16:19] Mm Alder: The catch is that movement animations are sent by the sim.
[16:19] Mm Alder: Gesture animations are already sent by the client.
[16:19] Geneko Nemeth: Probably needs server cooperation but I don't know why it hasn't been implemented.
[16:19] Jacek Antonelli: Ah, drat.
[16:20] Jacek Antonelli hates LL's stupid, jagged server/viewer split.
[16:20] Mm Alder: But those server animations are triggered by an AgentUpdate message from the client.
[16:20] Charlette Proto: most likely because vehicles cause so much work on the server side, the avie misses out on the effort and time needed to develop it
[16:20] Mm Alder: So the client can immediately follow with an overriding animation.
[16:21] Charlette Proto: agrees with Jacek, the client/server boundary should bleed more
[16:21] Geneko Nemeth: Well, I think it's that SL was designed so that the sim and the client are geographically close, no delays, etc. Not for connecting from the other side of the world.
[16:21] Charlette Proto: the separation of projects is very limiting to the development of complex ideas atm
[16:22] Morgaine Dinova: Hey, I'm on Snowglobe, it bleeds quite enough thank you ;-)
[16:22] Geneko Nemeth freezes for one second when region crossing...
[16:22] Jacek Antonelli: heh. No bleeding please.
[16:22] Charlette Proto: hehe
[16:22] ATechwolf Foxclaw: Is there a wiki entry for this office hour? User Experience I think, correct?
[16:22] Charlette Proto: yes
[16:23] Geneko Nemeth: UXIG on the Linden-run wiki.
[16:23] Charlette Proto: http://wiki.secondlife.com/wiki/User_Experience_Interest_Group/Topics
[16:23] Jacek Antonelli: But it's obvious that designing a clean split between the server and the viewer was not on their priorities list early on
[16:23] Charlette Proto: and the Imprudence wiki is also part of this
[16:24] Charlette Proto: ATech must have walked over to the wiki machine
[16:24] McCabe Maxsted: okay, I need to go hop into the shower
[16:25] Charlette Proto: byee McCabe
[16:25] McCabe Maxsted waves. Take care everyone :)
[16:25] Morgaine Dinova: See ya McCabe :-)
[16:25] Mm Alder: I have to take off. Thanks for letting me hijack the meeting. I hope to attend soon, but I'll be reading the transcripts when I'm not here.
[16:26] Charlette Proto: yvw Mm, good subject anyway
[16:26] Jacek Antonelli: Take care Mm
[16:26] Mm Alder: Whoever posts the transcripts: Thank you very much.
[16:26] Jacek Antonelli: You're welcome. :)