
What is the difference between Decimal, Float and Double in C#?


When would someone use one of these?


Source: Tips4all

Comments

  1. float and double are floating binary point types. In other words, they represent a number like this:

    10001.10010110011


    The binary number and the location of the binary point are both encoded within the value.

    decimal is a floating decimal point type. In other words, it represents a number like this:

    12345.65789


    Again, the number and the location of the decimal point are both encoded within the value - that's what makes decimal still a floating point type instead of a fixed point type.

    The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations. Not all decimal numbers are exactly representable in binary floating point - 0.1, for example - so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well - the result of dividing 1 by 3 can't be exactly represented, for example.

    As for what to use when:


    For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.
    For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.
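
    A minimal sketch of the approximation points above (0.1 in binary, 1 divided by 3 in decimal), assuming a plain console program:

    // 0.1 has no exact binary representation, so double carries a tiny error:
    Console.WriteLine(0.1 + 0.2 == 0.3);      // False - both operands are binary approximations
    Console.WriteLine(0.1m + 0.2m == 0.3m);   // True  - decimal stores these values exactly

    // decimal is still a floating point type, so non-terminating fractions are approximated:
    Console.WriteLine(1m / 3m);               // 0.3333333333333333333333333333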

  2. Precision is the main difference.

    Float - 7 digits (32 bit)

    Double - 15-16 digits (64 bit)

    Decimal - 28-29 significant digits (128 bit)

    Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Decimals are much slower (up to 20 times in some tests) than a double/float.

    Decimals and Floats/Doubles cannot be compared without a cast, whereas Floats and Doubles can. Decimals also allow the encoding of trailing zeros.
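
    A small sketch of those last two points (the comparison cast and the trailing zeros); illustrative only, not from the original comment:

    double d = 1.0;
    decimal m = 1.0m;
    // bool bad = d == m;             // does not compile: no implicit conversion between double and decimal
    bool ok = (decimal)d == m;        // fine once one side is cast

    // decimal keeps its scale, so trailing zeros are preserved:
    Console.WriteLine(1.0m);          // 1.0
    Console.WriteLine(1.00m);         // 1.00
    Console.WriteLine(1.0m == 1.00m); // True - equal in value, different in representation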

  3. float is a single precision (32 bit) floating point data type as defined by IEEE 754 (it is used mostly in graphic libraries).

    double is a double precision (64 bit) floating point data type as defined by IEEE 754 (probably the most commonly used data type for real values).

    decimal is a 128-bit floating point data type; it should be used where precision is of extreme importance (e.g. monetary calculations).
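
    The sizes are easy to confirm; a tiny sketch (assuming a console program):

    Console.WriteLine(sizeof(float));   // 4  bytes = 32 bits
    Console.WriteLine(sizeof(double));  // 8  bytes = 64 bits
    Console.WriteLine(sizeof(decimal)); // 16 bytes = 128 bits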

  4. The thing to keep in mind is that both float and double are "approximations": some values cannot be accurately represented by floats or doubles, and you can get weird rounding errors at extreme precisions.

    Decimal doesn't use the IEEE binary floating point representation; it uses a base-10 representation, so decimal fractions such as 0.1 are stored exactly because the math is done in base 10 rather than base 2.

    What this means is that you can trust math to within the accuracy of decimal precision whereas you can't fully trust floats or doubles unless you are very careful.
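
    One hedged illustration of that: repeated addition lets the binary error accumulate, while each 0.1m is stored exactly:

    double dSum = 0;
    decimal mSum = 0;
    for (int i = 0; i < 10; i++) { dSum += 0.1; mSum += 0.1m; }

    Console.WriteLine(dSum == 1.0);   // False - ten binary approximations of 0.1 don't sum to exactly 1.0
    Console.WriteLine(mSum == 1.0m);  // True  - each 0.1m is exact, so the sum is exact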

  5. The Decimal structure is strictly geared to financial calculations requiring accuracy, which are relatively intolerant of rounding. Decimals are not adequate for scientific applications, however, for several reasons:


    A certain loss of precision is acceptable in many scientific calculations because of the practical limits of the physical problem or artifact being measured. Loss of precision is not acceptable in finance.
    Decimal is much (much) slower than float and double for most operations, primarily because floating point operations are done in binary, whereas Decimal arithmetic is done in base 10 (i.e. floats and doubles are handled by the FPU hardware, such as SSE, whereas decimals are calculated in software).
    Decimal has an unacceptably smaller value range than double, despite the fact that it supports more digits of precision. Therefore, Decimal can't be used to represent many scientific values.
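
    A short sketch of the range point (printed values are approximate and runtime-dependent):

    Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335, roughly 7.9E+28
    Console.WriteLine(double.MaxValue);   // roughly 1.8E+308

    double big = 1e300;                   // fine as a double
    // decimal alsoBig = 1e300m;          // does not compile - outside the range of decimal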

  6. float has about 7 digits of precision

    double has about 15 digits of precision

    decimal has about 28 digits of precision

    If you need better accuracy (e.g. in accounting applications), use double instead of float.
    In modern CPUs both data types have almost the same performance. The only benefit of using float is that it takes up less space, which matters in practice only if you have a great many of them.

    I found this interesting: What Every Computer Scientist Should Know About Floating-Point Arithmetic
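
    For a rough feel of those precision figures, a small sketch (the exact output formatting depends on the runtime):

    float   f = 1.234567890123456789f;
    double  d = 1.234567890123456789;
    decimal m = 1.2345678901234567890123456789m;

    Console.WriteLine(f);  // ~1.2345679                       (about 7 significant digits survive)
    Console.WriteLine(d);  // ~1.2345678901234568              (about 15-16 significant digits)
    Console.WriteLine(m);  // 1.2345678901234567890123456789   (28-29 significant digits)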

  7. Double and float can be divided by zero without an exception, at both compile time and run time.
    Decimal cannot be divided by zero: if the divisor is a constant zero, compilation fails; otherwise the division throws a DivideByZeroException at run time.
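
    A minimal sketch of that behaviour:

    double zeroD = 0;
    decimal zeroM = 0;

    Console.WriteLine(1.0 / zeroD);   // Infinity - no exception for double/float
    // Console.WriteLine(1m / 0);     // does not compile: CS0020, division by constant zero
    Console.WriteLine(1m / zeroM);    // compiles, but throws DivideByZeroException at run time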

  8. This has been an interesting thread for me, as today we've just had a nasty little bug concerning "decimal" having less precision than a "float".

    In our C# code, we are reading numeric values from an Excel spreadsheet, converting them into a decimal, then sending this decimal back to a Service, to save into a SQL Server database.

    Microsoft.Office.Interop.Excel.Range cell = ...
    object cellValue = cell.Value2;
    if (cellValue != null)
    {
        decimal value = 0;
        Decimal.TryParse(cellValue.ToString(), out value);
    }


    Now, for almost all of our Excel values, this worked beautifully. But for some very small Excel values, "decimal.TryParse" lost the value completely. One such example:


    cellValue = 0.00006317592
    Decimal.TryParse(cellValue.ToString(), out value);   // value comes back as 0


    The solution, bizarrely, was to convert the Excel values into a double first, and then into a decimal.

    Microsoft.Office.Interop.Excel.Range cell = ...
    object cellValue = cell.Value2;
    if (cellValue != null)
    {
        double valueDouble = 0;
        double.TryParse(cellValue.ToString(), out valueDouble);
        decimal value = (decimal)valueDouble;
        ...
    }


    Even though double has less precision than a decimal, this actually ensured small numbers would still be recognised. For some reason, "double.TryParse" was actually able to retrieve such small numbers, whereas "decimal.TryParse" would set them to zero.

    Odd. Very odd.
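
    A likely explanation, for what it's worth (my reading, not stated above): Value2 hands back a boxed double, and ToString() renders such a small double in scientific notation ("6.317592E-05"). Decimal.TryParse(string, out decimal) uses NumberStyles.Number, which rejects an exponent, so the parse fails and the out value stays 0, whereas double.TryParse accepts exponents by default. A hedged sketch:

    using System.Globalization;

    object cellValue = 0.00006317592;           // roughly what Value2 hands back
    string s = cellValue.ToString();            // "6.317592E-05" - scientific notation for small doubles

    decimal plain;
    bool ok1 = Decimal.TryParse(s, out plain);  // false - default styles reject the exponent, plain stays 0

    decimal withExponent;
    bool ok2 = Decimal.TryParse(s, NumberStyles.Float, CultureInfo.InvariantCulture,
                                out withExponent);  // true - parsed as 0.00006317592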

