Benchmarked? Or hard-coded?
@Mona Eberhard has posted quite different values on the new feedback portal:
| Walk mode (SL/RL) | Speed (m/s) | Speed (km/h) |
|---|---|---|
| SL default | 3.2 | 11.52 |
| RL average | 1.33 | 4.8 |
| RL brisk | 1.5 | 5.4 |
| RL fast | 1.75 | 6.3 |
| RL slow | 0.5 | 1.8 |
| Run mode (SL/RL) | Speed (m/s) | Speed (km/h) |
|---|---|---|
| SL default | 5.12 | 18.432 |
| RL jog | 1.8 | 6.48 |
| RL average | 2.5 | 9 |
| RL fast | 3.5 | 12.6 |
| RL sprint | 7.5 | 27 |
| Swim mode (SL/RL) | Speed (m/s) | Speed (km/h) |
|---|---|---|
| SL default | (unknown) | (unknown) |
| RL slow | 1 | 3.6 |
| RL brisk | 1.5 | 5.4 |
| RL fast | 2 | 7.2 |
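For reference, the km/h column is simply the m/s column multiplied by 3.6 (1 m/s = 3.6 km/h), so the two columns can be cross-checked mechanically. A quick sketch in Python, using a few figures from the tables above:

```python
# 1 m/s = 3600 m per hour = 3.6 km/h; this is the only conversion
# the tables above rely on.
def ms_to_kmh(ms: float) -> float:
    """Convert metres per second to kilometres per hour."""
    return ms * 3.6

# Spot-checks against figures from the tables above (the wider tolerance
# on the last line covers the rounding of 4.788 km/h to 4.8).
assert abs(ms_to_kmh(3.2) - 11.52) < 1e-9    # SL default walk
assert abs(ms_to_kmh(5.12) - 18.432) < 1e-9  # SL default run
assert abs(ms_to_kmh(1.33) - 4.8) < 0.02     # RL average walk, rounded
```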
My question, therefore, is how the data presented on this page was collated for the SL Wiki:
- Is it something hard-coded somewhere? (One presumes a snippet showing the built-in values could be extracted from the viewer's open-source code.)
- If not, what measurements were made, and using which techniques, to arrive at the numbers presented here?
- And, of course, are these still reproducible, within a small margin of error? There used to be quite a lot of QA templates here on the SL Wiki; was there ever a checklist establishing a procedure to verify that these speeds fall within the expected margin of error?
Otherwise, I cannot account for the huge discrepancy between Mona's tests and the values presented here...
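On the QA point: a reproducible procedure need not be elaborate. Timing an avatar over a known straight-line distance several times, averaging, and comparing the result against the published figure within a stated tolerance would suffice. Below is a minimal sketch of that arithmetic in Python; the 64 m course, the timings, and the 0.05 m/s tolerance are all invented for illustration, not actual SL measurements:

```python
import statistics

def measured_speed(distance_m: float, times_s: list[float]) -> tuple[float, float]:
    """Mean speed in m/s over repeated runs, plus the sample standard deviation."""
    speeds = [distance_m / t for t in times_s]
    return statistics.mean(speeds), statistics.stdev(speeds)

# Hypothetical example: walking a 64 m course five times at the claimed
# SL default walk speed. These timings are invented for illustration;
# real QA data would replace them.
mean, spread = measured_speed(64.0, [20.1, 19.9, 20.0, 20.2, 19.8])
expected = 3.2  # m/s, the wiki's claimed SL default walk speed
within = abs(mean - expected) <= 0.05  # tolerance would come from the QA checklist
```

A checklist built around this would only need to fix the course, the number of repetitions, and the tolerance, so anyone could re-run the test and compare results.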
— Gwyneth Llewelyn (talk) 18:57, 1 April 2024 (PDT)