User talk:Becky Pippen/Numeric Storage
I like this code; it shows you have put a decent amount of thought into it. I too take pleasure in writing algorithms and would like to give you my critique of the code. Before I start, though, I want to impress upon you that the logic of your implementation is not only sound, it is pretty efficient.
Your approach in '''decodeCharTo15Bits''' surprised me (in a good way); I had to stop and really think about why I would have done it another way. As it stands, you are using 3 string additions and 4 function calls that work with strings. The trouble is that this is a more expensive way of processing the string than just doing it all at once on a single integer. I'd rewrite it as:

<lsl>integer decodeCharTo15Bits(string ch)
{
    // Pack the character's UTF-8 bytes into the top bytes of an integer:
    integer val = llBase64ToInteger(llStringToBase64(ch));

    // The lead byte of a 3-byte UTF-8 sequence starts with bits 111:
    if ((val & 0xE0000000) ^ 0xE0000000)
        return -1; // The character is not 3 bytes.

    // Extract the payload bits from each byte and remove the 0x1000 bias:
    return ((val & 0x0f000000) >> 12) +
           ((val & 0x003f0000) >> 10) +
           ((val & 0x00003f00) >> 8) - 0x1000;
}</lsl>

There is more error detection that can be done before decoding, but I don't have time to write it now.
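To make the bit shuffling concrete, here is a quick sanity-check sketch of my own (the packed constant 0xE188B400 and the expected output are illustration values, not from the original post): the biased value 0x1234 packs into UTF-8 bytes 0xE1 0x88 0xB4, and the three masked shifts recover 0x1000 + 0x0200 + 0x0034 = 0x1234, i.e. 0x0234 once the bias is removed.

<lsl>// Sketch of a sanity check, assuming decodeCharTo15Bits above is
// pasted into the same script. 0xE188B400 is the packed form of the
// biased value 0x1234 (illustration value, my own choice).
default
{
    state_entry()
    {
        string ch = llGetSubString(llBase64ToString(llIntegerToBase64(0xE188B400)), 0, 0);
        llOwnerSay((string)decodeCharTo15Bits(ch)); // expect 564 (0x0234)
    }
}</lsl>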
Similarly, I would rewrite '''encode15BitsToChar''', which eliminates the need for hexChar2.
<lsl>string encode15BitsToChar(integer num)
{
    // Check the incoming range
    if (num < 0 || num >= 0x8000) {
        // illegal input -- do whatever is appropriate
        return "�";
    }

    // Bias the incoming numeric value by 0x1000 to avoid illegal Unicode codes:
    num += 0x1000;

    // Construct the UTF-8 layout, masking each field to its own byte:
    num = 0xE0808000 | ((num << 12) & 0x0f000000) |
                       ((num << 10) & 0x003f0000) |
                       ((num <<  8) & 0x00003f00);

    // Convert the UTF-8 into a 16-bit Unicode character:
    return llGetSubString(llBase64ToString(llIntegerToBase64(num)), 0, 0);
}</lsl>
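And a minimal round-trip sketch, assuming both functions are pasted into the same script (the test value 12345 is an arbitrary choice of mine, not from the code above): any value in the range [0, 0x7fff] should come back unchanged from encode15BitsToChar followed by decodeCharTo15Bits.

<lsl>// Minimal round-trip check, assuming encode15BitsToChar and
// decodeCharTo15Bits above are in the same script. 12345 is an
// arbitrary in-range test value.
default
{
    state_entry()
    {
        integer n = 12345;
        string ch = encode15BitsToChar(n);
        if (decodeCharTo15Bits(ch) == n)
            llOwnerSay("round trip ok");
        else
            llOwnerSay("round trip FAILED at " + (string)n);
    }
}</lsl>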
-- '''[[User:Strife_Onizuka|Strife]]''' <sup><small>([[User talk:Strife_Onizuka|talk]]|[[Special:Contributions/Strife_Onizuka|contribs]])</small></sup> 04:50, 2 January 2010 (UTC)
:Thanks, Strife! Much appreciated. I like how you are able to squeeze a few more bytes out of any code you see :-) For now I'll just leave both implementations available for comparison for educational purposes. [[User:Becky Pippen|Becky Pippen]] 18:17, 6 January 2010 (UTC)