Numbers: int and double

int is used to store a 32-bit whole number, ranging up to about 2.1 billion (2^31 - 1).
double can store a decimal (floating-point) value up to approximately 10^308, with about 15-16 significant digits of precision.
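
These limits can be checked directly. A minimal sketch using the output() calls from the exercise below, assuming Java, where Integer.MAX_VALUE, Integer.MIN_VALUE, and Double.MAX_VALUE name the limits:

   output( Integer.MAX_VALUE );   //  2147483647, about 2.1 billion
   output( Integer.MIN_VALUE );   // -2147483648
   output( Double.MAX_VALUE );    // approximately 1.8 * 10^308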

What will be printed by the following commands?  Explain any surprising results.

   int c = 1000;
   double d = 1e100;
   double f = 0.1;

   output( c / 3 );
   output( c * d );
   output( c * c * c );
   output( c * c * c * c );
   output( f + f + f );

Solution

   333    :   the fractional part (.333...) is thrown away, because both c and 3 are integers, so this is integer division and it truncates (see the first sketch after the solution)

    1.0E103     :    c is promoted to double, and such a big number displays in scientific notation

   1000000000    :   1 billion, which still fits within the int limit

   -727379968     :   10^12 is far over the roughly 2.1 billion int limit, so the calculation overflows and wraps around to a big negative number (see the long sketch below)

   0.30000000000000004   :
        0.1 cannot be stored EXACTLY in binary, so after a few calculations a tiny rounding error becomes visible (see the tolerance sketch below).
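
To keep the fractional part of c / 3, at least one operand must be a double. A minimal sketch in the style of the exercise, assuming Java-like conversion rules:

   output( (double) c / 3 );   // approximately 333.3333333333333
   output( c / 3.0 );          // same idea: a double literal forces double division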
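
The overflow in c*c*c*c can be avoided by doing the arithmetic in a 64-bit long, which holds whole numbers up to about 9 * 10^18. A sketch assuming Java-like promotion rules (the cast comes first, so no intermediate result overflows):

   output( (long) c * c * c * c );   // 1000000000000, the true value of 10^12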
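
Because 0.1 carries a tiny binary rounding error, floating-point results should be compared with a small tolerance rather than with ==. A sketch assuming Java's Math.abs is available:

   double sum = f + f + f;
   output( sum == 0.3 );                    // false: the stored sum is 0.30000000000000004
   output( Math.abs(sum - 0.3) < 1e-9 );    // true: equal within a small tolerance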