Thursday 17 July 2008

.Net Double vs. Decimal

One twist with floating-point values is that they cannot represent every possible number. In part this is because the types are of limited size, e.g. Double only has 64 bits of data to work with. The number base used to represent values also has an effect. The .Net float and Double types are encoded in base 2 (binary) format, and it is not possible to represent every base 10 decimal number exactly in base 2; 0.1, for example, becomes a recurring binary fraction that has to be rounded.
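
Because 0.1 and 0.2 each pick up a tiny rounding error when stored as Doubles, their sum is not exactly 0.3. A minimal C# sketch (the class name is just illustrative) shows this:

using System;

class BinaryRounding
{
    static void Main()
    {
        double sum = 0.1 + 0.2;               // neither operand is exact in base 2
        Console.WriteLine(sum == 0.3);        // False
        Console.WriteLine(sum.ToString("R")); // 0.30000000000000004
    }
}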

To increase accuracy, use the Decimal data type. The .Net Double data type is a 64-bit floating point value whereas the .Net Decimal data type is a 128-bit floating point value. These extra bits alone might be sufficient to make the Decimal type represent numbers more accurately than Double. However, unlike Double, Decimal is encoded in base 10. This means it can exactly represent base 10 numbers, i.e. the number system humans use, making it an ideal type for financial calculations. The way the type works allows 28 digits of accuracy, i.e. you can represent 28 decimal digits with the decimal point moved to any position amongst the digits.
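
The difference shows up as soon as a non-exact binary fraction like 0.1 is accumulated. Here is a short C# sketch comparing the two types (the names and the loop count are arbitrary, purely for illustration):

using System;

class DecimalVsDouble
{
    static void Main()
    {
        double d = 0;
        decimal m = 0;
        for (int i = 0; i < 100; i++)
        {
            d += 0.1;   // base 2: each addition carries a small rounding error
            m += 0.1m;  // base 10: 0.1 is represented exactly
        }
        Console.WriteLine(d == 10.0);   // False - the errors have accumulated
        Console.WriteLine(m == 10.0m);  // True  - exactly 10
    }
}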

So why use Double if Decimal is available? The downside with Decimal is performance. Decimal operations can take 20 to 40 times longer than Double calculations. Most, if not all, processors used in PCs these days have 64-bit floating-point support built in, so Double arithmetic runs directly in hardware. Decimal operations, however, have to be done in software and take much longer to execute.
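
A rough way to see the gap is to time the same arithmetic in a tight loop. The sketch below is only indicative; the exact ratio depends on the hardware, the runtime and the mix of operations, and the iteration count and values are arbitrary choices:

using System;
using System.Diagnostics;

class RoughTiming
{
    static void Main()
    {
        const int iterations = 10000000;

        double d = 1;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            d *= 1.000001;    // handled directly by the floating-point hardware
        }
        sw.Stop();
        Console.WriteLine("double:  " + sw.ElapsedMilliseconds + " ms");

        decimal m = 1;
        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            m *= 1.000001m;   // implemented in software by the runtime
        }
        sw.Stop();
        Console.WriteLine("decimal: " + sw.ElapsedMilliseconds + " ms");

        // Use the results so the loops are not optimised away.
        Console.WriteLine(d);
        Console.WriteLine(m);
    }
}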

5 comments:

Anonymous said...

This is by far the simplest explanation I have seen compared to other results returned by Googling "double vs decimal"

Good stuff!!

Robert said...

Good explanation. Thank you!!

Wynand said...

I agree. Plain and simple. Well done.

Anonymous said...

Great explanation!!!

Ohad Tsamir said...

+1
Short and to the (floating) point ;).

Thanks!
Ohad
