The C# decimal keyword denotes a 128-bit data type.  Compared to floating-point types, the decimal type has greater precision and a smaller range, which makes it suitable for financial and monetary calculations.

Approximate Range: ±1.0 × 10^−28 to ±7.9 × 10^28

Precision:  28-29 significant digits

.NET Type:  System.Decimal
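
As one illustration of why this matters for monetary values, a value such as 0.1 has no exact binary representation, so accumulating it in a double drifts away from the exact result, while a decimal stores it exactly.  The following is a minimal sketch (the variable names are illustrative):

double doubleSum = 0.0;
decimal decimalSum = 0.0m;

for (int i = 0; i < 10; i++)
{
    doubleSum += 0.1;     // 0.1 cannot be represented exactly as a double
    decimalSum += 0.1m;   // 0.1m is stored exactly
}

Console.WriteLine(doubleSum == 1.0);    // False: the double sum has drifted slightly
Console.WriteLine(decimalSum == 1.0m);  // True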

Decimal Literal

If you want a numeric literal to be treated as decimal, you must use the m or M suffix, for example:

decimal myMoney = 314.15M;

If you forget to add the M suffix, you will receive the following compiler error:

Literal of type double cannot be implicitly converted to type 'decimal'; use an 'M' suffix to create a literal of this type
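
For example, the following declaration omits the suffix and does not compile (the Microsoft C# compiler reports this as error CS0664):

decimal myMoney = 314.15;   // compiler error: the double literal lacks the M suffix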

Decimal Conversions

The integral types are implicitly converted to decimal, and the result of such an expression evaluates to decimal.  Therefore, you can initialize a decimal variable using an integer literal without the suffix.  For example, this statement is valid:

decimal myMoney = 300;
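
The same implicit conversion applies inside expressions that mix integral and decimal operands.  A small sketch (the variable names are illustrative):

int units = 17;
decimal pricePerUnit = 5.25m;

// The int value is implicitly converted to decimal before the multiplication,
// so the expression evaluates to decimal.
decimal total = units * pricePerUnit;   // 89.25m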

However, there is no implicit conversion between floating-point types and the decimal type.  Thus, you must use an explicit cast to convert between these two types.  For example:

decimal myMoney = 123.4m;
double yourMoney = (double)myMoney;
myMoney = (decimal)yourMoney;
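
Note also that the explicit conversion from double to decimal can fail at run time when the source value falls outside the decimal range (roughly ±7.9 × 10^28).  A short sketch:

double tooLarge = 1.0e30;
// decimal result = (decimal)tooLarge;   // throws System.OverflowException at run time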

Formatting Decimal Output

To format decimal output using Console.WriteLine, decimal.ToString, or String.Format, you can use the standard currency format specifier "C" (or "c"), for example:

decimal myMoney = 0.123m;
decimal yourMoney = 123456m;
Console.WriteLine( "My Cash = {0:C}", myMoney );
Console.WriteLine( "Your Cash = {0:C}", yourMoney );

The console output (on a system whose current culture is en-US) would be:

My Cash = $0.12

Your Cash = $123,456.00
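
The same formatting can be produced with decimal.ToString or string.Format.  A short sketch, reusing the variables above; the currency symbol and separators come from the current culture, so the comments assume en-US:

string mine = myMoney.ToString("C");                            // "$0.12"
string yours = string.Format("Your Cash = {0:C}", yourMoney);   // "Your Cash = $123,456.00"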