@Travis, what you did before was premature optimization, which is "the source of all evil" ... But now that we have proper JavaScript, we can optimize.
var thtrim3 = typeof "".trim === "function" ?
    function( text ) {
        return !text ? "" : text.trim();
    } :
    function( text ) {
        return !text ? "" : text.replace( /^\s\s*/, "" ).replace( /\s\s*$/, "" );
    };
Above, you have replaced rtrim in "my" method and arrived at the faster version above, which is OK. I am wondering: will it trim out "\u00A0", as the current official regexp does?
rtrim = /^(\s|\u00A0)+|(\s|\u00A0)+$/g ; // current jQuery
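For what it is worth, a quick console check can answer this (assuming an ES5-conformant engine; per the ES5 spec both native trim() and \s cover \u00A0, but older JScript engines may differ):

```javascript
// Quick check whether a given trim removes NBSP (\u00A0).
// In an ES5-conformant engine both lines should print "x".
var padded = "\u00A0x\u00A0";
console.log( JSON.stringify( padded.trim() ) );                                      // native trim
console.log( JSON.stringify( padded.replace( /^\s\s*/, "" ).replace( /\s\s*$/, "" ) ) ); // regexp trim
```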
I am also wondering whether it will be able to cache the regular expressions used. Also, is this extreme optimization necessary? In real programs, you will never find string trimming called 5000 times in a tight loop, or strings longer than approximately 1024 characters. I am pretty sure that for string lengths < 1024, one will see very little speed difference between browsers using any of the trims from here, and a negligible difference versus all of that repeated with the current jQuery.trim().
Perhaps one could deliver a plugin/replacement for people who happen to deal with strings longer than 1024 characters? Maybe then this would be a better solution:
// replacement for trimming Very Large Strings
// for hosts which do not support ES5 "".trim()
jQuery.trim = typeof "".trim === "function" ?
    function( text ) { return !text ? "" : text.trim(); } :
    function( str ) {
        // http://blog.stevenlevithan.com/archives/faster-trim-javascript
        if ( !str ) { return ""; }
        str = str.replace( /^[\s\xA0]*/, "" );
        var ws = /[\s\xA0]/, i = str.length;
        while ( ws.test( str.charAt( --i ) ) ) {}
        return str.slice( 0, i + 1 );
    };
Only to be used when and if needed ...
--DBJ