You always write a C program using the basic data types (char, int, float and double) and the optional specifiers (signed, unsigned, short, long).
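For example, here is a small sketch (the variable names are just made up for illustration) of how the specifiers combine with the basic types:

    #include <stdio.h>

    int main(void)
    {
        /* the specifiers combine with the basic types like this */
        unsigned char small = 255;            /* char + unsigned */
        short int     s     = -1234;          /* int + short (signed by default) */
        unsigned long big   = 4000000000UL;   /* int + long + unsigned */
        long double   pi    = 3.14159265358979323846L;   /* double + long */

        printf("%d %d %lu %Lf\n", small, s, big, pi);
        return 0;
    }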
So what is the difference between these four types?
THE MAIN DIFFERENCE IS SIZE, as the table and the sizeof sketch below show.
Type : Explanation

char : smallest addressable unit of the machine that can contain the basic character set. It is an integer type. The actual type can be either signed or unsigned depending on the implementation.
signed char : same as char, but guaranteed to be signed.
unsigned char : same as char, but guaranteed to be unsigned.
short, short int, signed short, signed short int : short signed integer type. At least 16 bits in size.
unsigned short, unsigned short int : same as short, but unsigned.
int, signed int : basic signed integer type. At least 16 bits in size.
unsigned, unsigned int : same as int, but unsigned.
long, long int, signed long, signed long int : long signed integer type. At least 32 bits in size.
unsigned long, unsigned long int : same as long, but unsigned.
long long, long long int, signed long long, signed long long int : long long signed integer type. At least 64 bits in size. Specified since the C99 version of the standard.
unsigned long long, unsigned long long int : same as long long, but unsigned. Specified since the C99 version of the standard.
float : single precision floating-point type. Actual properties unspecified, however on most systems this is the IEEE 754 single precision floating point format.
double : double precision floating-point type. Actual properties unspecified, however on most systems this is the IEEE 754 double precision floating point format.
long double : extended precision floating-point type. Actual properties unspecified. Unlike float and double, it can be the 80-bit floating point format, the non-IEEE "double-double" format, or the IEEE 754 quadruple precision format if a higher precision format is provided; otherwise it is the same as double.
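To see the sizes on your own machine, you can compile this little sketch (the exact numbers depend on your compiler and platform; the standard only guarantees the minimum widths listed in the table above):

    #include <stdio.h>

    int main(void)
    {
        /* sizeof reports the storage size of each type in bytes */
        printf("char        : %zu byte(s)\n", sizeof(char));
        printf("short       : %zu byte(s)\n", sizeof(short));
        printf("int         : %zu byte(s)\n", sizeof(int));
        printf("long        : %zu byte(s)\n", sizeof(long));
        printf("long long   : %zu byte(s)\n", sizeof(long long));
        printf("float       : %zu byte(s)\n", sizeof(float));
        printf("double      : %zu byte(s)\n", sizeof(double));
        printf("long double : %zu byte(s)\n", sizeof(long double));
        return 0;
    }

On a typical 64-bit desktop this prints 1, 2, 4, 8, 8, 4, 8 and 16, but other platforms can give different numbers.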
Difference between char, float, int and double
Written by Unknown on Saturday, 8 September 2012 at 18:13
FLOAT:
Single-precision floating-point format is a computer number format that occupies 4 bytes (32 bits) in computer memory and represents a wide dynamic range of values by using a floating radix point.
In IEEE 754-2008 the 32-bit base 2 format is officially referred to as binary32. It was called single in IEEE 754-1985. In older computers, other floating-point formats of 4 bytes were used.
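Because a float only keeps about 6-7 significant decimal digits, longer numbers get rounded. Here is a small sketch (the exact output can vary slightly between systems):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        float  f = 123456789.0f;   /* too many digits for a 32-bit float */
        double d = 123456789.0;

        printf("float  : %.1f\n", (double)f);   /* typically prints 123456792.0 */
        printf("double : %.1f\n", d);           /* prints 123456789.0 */
        printf("FLT_DIG = %d, DBL_DIG = %d\n", FLT_DIG, DBL_DIG);
        return 0;
    }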
DOUBLE:
In computing, double precision is a computer number format that occupies two adjacent storage locations in computer memory. A double-precision number, sometimes simply called a double, may be defined to be an integer, fixed point, or floating point (in which case it is often referred to as FP64).
Modern computers with 32-bit storage locations use two memory locations to store a 64-bit double-precision number (a single storage location can hold a single-precision number). Double-precision floating-point is an IEEE 754 standard for encoding binary or decimal floating-point numbers in 64 bits (8 bytes).
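Here is another small sketch showing why double is called "double precision": near the value 1.0 a float cannot see an increment of 1e-8, but a double can (output may differ on unusual hardware):

    #include <stdio.h>

    int main(void)
    {
        float  f = 1.0f;
        double d = 1.0;

        f += 1e-8f;   /* lost: below float's precision near 1.0 */
        d += 1e-8;    /* kept: well within double's precision near 1.0 */

        printf("float  result: %.10f\n", (double)f);   /* typically 1.0000000000 */
        printf("double result: %.10f\n", d);           /* 1.0000000100 */
        return 0;
    }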
Thank you so much, Osman, for asking me this question.
I really hope this explanation helps you.
All the best...