My colleague Bram de Jong at splicemusic ran some tests with number types in Arrays to find out whether we can save memory somehow. His findings, however, were rather unexpected: numbers can lose their type in a seemingly unpredictable way.
So let's try:
trace( getQualifiedClassName( uint( 0 ) ) ); // int
trace( '28bits', getQualifiedClassName( uint( 0x0fffffff ) ) ); // int
trace( '29bits', getQualifiedClassName( uint( 0x1fffffff ) ) ); // Number
Above all, check this:
var num: uint;
for( var i: int = 0 ; i < 64 ; i++ )
{
num = 1 << i;
trace( i, getQualifiedClassName( num ) );
}
From 0 to 27 it is int.
From 28 to 31 it is Number.
From 32 to 59 it is int again.
From 60 to 63 it is Number again.
(The repetition from 32 onwards is just the shift count wrapping modulo 32, as ECMAScript bit shifts do, so 1 << 32 is 1 << 0 again; the interesting boundary is bit 28.)
The disadvantage of this is that Numbers are slower in computation as well as when reading from and writing to an Array (a quick benchmark sketch follows below). And it still doesn't seem to make any sense at all. Bram has already sent a note to Adobe. I'll keep you updated.
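If you want to verify the Array penalty yourself, here is a minimal benchmark sketch along the lines of Bram's test (the loop count and the two constants are my own picks, and absolute timings will vary per machine):
import flash.utils.getTimer;

var a: Array = new Array( 1000000 );
var i: int;

// 0x0fffffff fits in 28 bits and stays an int internally
var t0: int = getTimer();
for( i = 0 ; i < 1000000 ; i++ )
    a[ i ] = 0x0fffffff;
trace( 'int range:', getTimer() - t0, 'ms' );

// 0x1fffffff needs 29 bits and is promoted to Number
var t1: int = getTimer();
for( i = 0 ; i < 1000000 ; i++ )
    a[ i ] = 0x1fffffff;
trace( 'Number range:', getTimer() - t1, 'ms' );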
Someone @ Adobe wrote us back with the explanation:
Which, sadly enough, makes total sense and is a bit of a bummer for us…
Great post… but, hmm, Sho claims that his tests led him to believe that Numbers may actually be faster than ints in Flash:
Grant Skinner’s tests showed that ints had only a small improvement over Number, but uint was much worse than either:
Michael Baczynski claims that converting between types is the big killer (which makes sense; a sketch of this follows below):
It seems like the only consistent claim is that numerical data types in Flash are inconsistent ;)
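To illustrate the conversion cost Michael describes, here is a hypothetical micro-benchmark (the loop count and variable names are mine, and I haven't verified the exact numbers):
import flash.utils.getTimer;

var i: int;
var sumInt: int = 0;
var sumNum: Number = 0.0;

// pure int arithmetic, no type conversion involved
// (the int sum wraps around, which is fine for timing purposes)
var t0: int = getTimer();
for( i = 0 ; i < 10000000 ; i++ )
    sumInt += i;
trace( 'int += int:', getTimer() - t0, 'ms' );

// adding an int to a Number forces a conversion on every iteration
var t1: int = getTimer();
for( i = 0 ; i < 10000000 ; i++ )
    sumNum += i;
trace( 'Number += int:', getTimer() - t1, 'ms' );
If the conversion claim holds, the second loop should be noticeably slower even though both do the same additions.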
Speaking of weird number behavior… What is going on here?
var n:Number = 2374;
trace(n);
trace(n*.01);
Traces:
2374
23.740000000000002
Nikolaj, that’s a pretty standard IEEE round-off error. Have a look here:
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
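If you only need a fixed number of decimal places, a common workaround is to round the scaled result yourself (a small sketch, assuming two decimals are wanted):
var n: Number = 2374;

// scale up, round to an integer, then scale back down
trace( Math.round( n * .01 * 100 ) / 100 ); // 23.74

// or format the value for display only
trace( ( n * .01 ).toFixed( 2 ) ); // 23.74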
Hey, thanks for the article! I’m trying to get the decimal part out of a Number. I was trying to narrow down the bits that store the decimal part, so I did the following and got 0. Now I’m stuck.
trace( 1.23 & 2.23 ); // 0
Any suggestions?
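For what it's worth: the bitwise operators coerce both operands to int first, so 1.23 & 2.23 becomes 1 & 2, which is 0. Plain arithmetic keeps the fraction instead (a small sketch; the variable name is just for illustration):
// bitwise operators convert their operands to int, dropping the fraction
trace( int( 1.23 ), int( 2.23 ) ); // 1 2
trace( 1 & 2 ); // 0

// simple arithmetic preserves the fractional part
var n: Number = 1.23;
trace( n - int( n ) ); // roughly 0.23, give or take IEEE round-off
trace( n % 1 ); // same idea via modulo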
Hi guys,
I have posted this on Grant Skinner’s blog as well, but seeing as how he referred to this blog post, I might as well share my findings here:
Interestingly, I believe that the exact numeric type being used can be determined through a performance test, using an Object as a hash map:
var hashMap:Object = new Object();
var time:int = getTimer();
for( var i:int = 0; i < 1000000; i++ )
hashMap[ 1 << 27 ] = "blah";
trace( getTimer() - time );
If the key's value is between 0 (inclusive) and 1 << 27 (inclusive), this loop takes 35 ms on my hardware. Otherwise (e.g. change 1 << 27 to 1 << 28), it takes 150 ms.
This leads me to believe that, above 1 << 27 (and below 0), Numbers are used instead of integers.
What surprises me, though, is that 1 << 27 has only its highest bit set to 1, with the 27 bits below it set to 0. Why couldn't ActionScript set all these bits to 1, reaching a value almost twice as great, before switching to the bigger Number type?
Additionally, why are the integer's negative values not used? Why does ActionScript use a Number for valid negative integer values such as -17?
I realize that some (or even all?) of these occurrences may apply purely to Object keys rather than to integers in general.
Could anybody share more insight?
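One way to map the boundary more precisely would be to sweep the timing test across bit positions (a sketch only; the 31-bit range and the loop count are my assumptions):
import flash.utils.getTimer;

// time one million Object-key writes for each single-bit key value
for( var bit: int = 0 ; bit < 31 ; bit++ )
{
    var hashMap: Object = new Object();
    var key: int = 1 << bit;
    var time: int = getTimer();
    for( var i: int = 0 ; i < 1000000 ; i++ )
        hashMap[ key ] = "blah";
    trace( bit, getTimer() - time );
}
A jump in the trace output should show exactly where the fast integer path ends.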