Fundamental Numeric Types

The basic numeric types in C# have keywords associated with them. These types include integer types, floating-point types, and a special floating-point type called decimal, which stores numbers within its range and precision with no representation error.

Integer Types

There are 10 C# integer types, as shown in Table 2.1. This variety allows you to select a data type large enough to hold its intended range of values without wasting resources.

Table 2.1: Integer Types

| Type | Size | Range (Inclusive) | BCL Name | Signed | Literal Suffix |
|------|------|-------------------|----------|--------|----------------|
| sbyte | 8 bits | –128 to 127 | System.SByte | Yes | none |
| byte | 8 bits | 0 to 255 | System.Byte | No | none |
| short | 16 bits | –32,768 to 32,767 | System.Int16 | Yes | none |
| ushort | 16 bits | 0 to 65,535 | System.UInt16 | No | none |
| int | 32 bits | –2,147,483,648 to 2,147,483,647 | System.Int32 | Yes | none |
| uint | 32 bits | 0 to 4,294,967,295 | System.UInt32 | No | U or u |
| long | 64 bits | –9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 | System.Int64 | Yes | L or l |
| ulong | 64 bits | 0 to 18,446,744,073,709,551,615 | System.UInt64 | No | UL or ul |
| nint | 32 or 64 bits, depending on the platform where the code is executing; use sizeof(nint) to retrieve the size | Depends on platform | System.IntPtr | Yes | none |
| nuint | 32 or 64 bits, depending on the platform where the code is executing; use sizeof(nuint) to retrieve the size | Depends on platform | System.UIntPtr | No | none |

Included in Table 2.1 (and all the type tables within this section) is a column for the full name of each type; we discuss the literal suffix later in the chapter. All the fundamental types in C# have both a short name and a full name. The full name corresponds to the type as it is named in the BCL. This name, which is the same across all languages, uniquely identifies the type within an assembly. Because of the fundamental nature of these types, C# also supplies keywords as short names or abbreviations for the full names of fundamental types. From the compiler’s perspective, both names refer to the same type, producing identical code. In fact, an examination of the resultant Common Intermediate Language (CIL) code would provide no indication of which name was used.
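For example, the following minimal snippet (an illustration, not one of the chapter’s numbered listings) shows that the keyword and the BCL name are interchangeable:

// int is simply the C# keyword for the BCL type System.Int32.
int keywordName = 42;
System.Int32 bclName = keywordName;  // Same type, so no conversion occurs
Console.WriteLine(keywordName.GetType() == bclName.GetType());  // Displays: True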

Language Contrast: C++—short Data Type

In C/C++, the short data type is an abbreviation for short int. In C#, short on its own is the actual data type.

Floating-Point Types (float, double)

Floating-point numbers have varying degrees of precision, and binary floating-point types can represent numbers exactly only if they are a fraction with a power of 2 as the denominator. If you were to set the value of a floating-point variable to 0.1, it could very easily be represented as 0.0999999999999999 or 0.10000000000000001 or some other number very close to 0.1. Similarly, setting a variable to a large number such as Avogadro’s number, 6.02 × 10²³, could lead to a representation error of approximately 10⁸, which after all is a tiny fraction of that number. The accuracy of a floating-point number is in proportion to the magnitude of the number it represents: a floating-point number is precise to a certain number of significant digits, not to a fixed value such as ±0.01. There are at most 17 significant digits for a double and 9 significant digits for a float¹ (assuming the number wasn’t converted from a string, as described in the Advanced Topic “Floating-Point Types Dissected”).²
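A quick way to observe this representation error is with a minimal sketch (not one of the numbered listings) that adds two binary fractions:

// Neither 0.1 nor 0.2 has an exact binary representation,
// so their sum is not exactly 0.3.
Console.WriteLine(0.1 + 0.2 == 0.3);  // Displays: False
Console.WriteLine($"{0.1 + 0.2:R}");  // Displays: 0.30000000000000004

(The R format specifier used here is described in the Advanced Topic “Round-Trip Formatting” later in this chapter.)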

C# supports the two binary floating-point number types listed in Table 2.2. Binary numbers appear as base 10 (denary) numbers for human readability. While there are additional floating-point types beyond the scope of this book (see https://learn.microsoft.com/dotnet/standard/numerics), they are not built-in types with associated keywords.

Table 2.2: Floating-Point Types

| Type | Size | Range (Inclusive) | BCL Name | Significant Digits | Literal Suffix |
|------|------|-------------------|----------|--------------------|----------------|
| float | 32 bits | ±1.5 × 10⁻⁴⁵ to ±3.4 × 10³⁸ | System.Single | 7 | F or f |
| double | 64 bits | ±5.0 × 10⁻³²⁴ to ±1.7 × 10³⁰⁸ | System.Double | 15–16 | D or d |

Advanced Topic
Floating-Point Types Dissected

Denary numbers within the range and precision limits of the decimal type are represented exactly. In contrast, the binary floating-point representation of many denary numbers introduces a rounding error. Just as ⅓ cannot be represented exactly in any finite number of decimal digits, so ¹¹⁄₁₀ cannot be represented exactly in any finite number of binary digits (the binary representation being 1.0001100110011001101…). In both cases, we end up with a rounding error of some kind.

A decimal is represented by ±N × 10ᵏ, where the following is true:

N, the mantissa, is a positive 96-bit integer.
k, the exponent, is given by −28 ≤ k ≤ 0.

In contrast, a binary float is any number ±N × 2ᵏ, where the following is true:

N is a positive 24-bit (for float) or 53-bit (for double) integer.
k is an integer ranging from −149 to +104 for float and from −1074 to +970 for double.
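To see the ±N × 2ᵏ decomposition concretely, the following sketch (an illustration assuming the standard IEEE 754 layout of a double: 1 sign bit, 11 biased exponent bits, and 52 explicit mantissa bits) extracts N and k using BitConverter.DoubleToInt64Bits:

double value = 0.1;
long bits = BitConverter.DoubleToInt64Bits(value);

int sign = (int)((bits >> 63) & 1);                // 1 sign bit
int biasedExponent = (int)((bits >> 52) & 0x7FF);  // 11 exponent bits
long mantissa = bits & 0xF_FFFF_FFFF_FFFF;         // 52 explicit mantissa bits

// A normalized double gains an implicit leading 1 bit (53 bits total),
// and k is the unbiased exponent shifted down by the mantissa width.
long n = mantissa | (1L << 52);
int k = biasedExponent - 1023 - 52;
Console.WriteLine($"0.1 == {(sign == 0 ? "+" : "-")}{n} * 2^{k}");
// Displays: 0.1 == +7205759403792794 * 2^-56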
Decimal Type

C# also provides a decimal floating-point type with 128-bit precision (see Table 2.3). This type is suitable for financial calculations.

Table 2.3: Decimal Type

| Type | Size | Range (Inclusive) | BCL Name | Significant Digits | Literal Suffix |
|------|------|-------------------|----------|--------------------|----------------|
| decimal | 128 bits | 1.0 × 10⁻²⁸ to approximately 7.9 × 10²⁸ | System.Decimal | 28–29 | M or m |

Unlike binary floating-point numbers, the decimal type maintains exact accuracy for all denary numbers within its range. With the decimal type, therefore, a value of 0.1 is exactly 0.1. However, while the decimal type has greater precision than the floating-point types, it has a smaller range. Thus, conversions from floating-point types to the decimal type may result in overflow errors. Also, calculations with decimal are slightly (generally imperceptibly) slower.
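A minimal comparison sketch (not one of the numbered listings) makes the difference visible:

double binarySum = 0;
decimal decimalSum = 0;
for (int i = 0; i < 10; i++)
{
    binarySum += 0.1;    // Accumulates binary rounding error
    decimalSum += 0.1M;  // Exact: decimal stores 0.1 precisely
}
Console.WriteLine($"{binarySum:R}");  // Displays: 0.9999999999999999
Console.WriteLine(decimalSum);        // Displays: 1.0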

Advanced Topic
Native-Sized Integers

C# 9.0 added contextual keywords to represent native-sized signed and unsigned integers: nint and nuint, respectively; starting with C# 11, these keywords are simply aliases for the underlying System.IntPtr and System.UIntPtr types. (See Table 2.4.) Unlike the other numeric types, which are the same size regardless of the underlying operating system, the native-sized integer types vary depending on the platform where the code is executing. A nint, for example, is 32 bits on a 32-bit platform and 64 bits on a 64-bit platform. These types are designed to match the size of a pointer on the system where the code executes. They are more advanced types because they are generally useful only when working with pointers and memory in the underlying operating system rather than with memory within the managed execution context of .NET.

Table 2.4: Native Integer Types3

| Type | Size | Range (Inclusive) | BCL Name | Significant Digits | Literal Suffix |
|------|------|-------------------|----------|--------------------|----------------|
| nint | Matches operating system (OS) | Varies, but available at runtime via nint.MinValue and nint.MaxValue | System.IntPtr | Matches OS | none |
| nuint | Matches operating system (OS) | Varies, but available at runtime via nuint.MinValue and nuint.MaxValue | System.UIntPtr | Matches OS | none |

See Chapter 23 for more information on nint and nuint.
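For illustration, a short sketch (assuming .NET 5 or later, where the MinValue and MaxValue members are available on the native integer types) reports the native integer size on the executing platform:

// IntPtr.Size reports the native pointer size in bytes.
Console.WriteLine($"nint: {IntPtr.Size * 8} bits, " +
    $"{nint.MinValue} to {nint.MaxValue}");
Console.WriteLine($"nuint: {UIntPtr.Size * 8} bits, " +
    $"{nuint.MinValue} to {nuint.MaxValue}");
// On a 64-bit platform, this displays 64 bits and ranges
// matching long and ulong.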

Literal Values

A literal value is a representation of a constant value within source code. For example, if you want to have Console.WriteLine() print out the integer value 42 and the double value 1.618034, you could use the code shown in Listing 2.1 with Output 2.1.

Listing 2.1: Specifying Literal Values
Console.WriteLine(42);

Console.WriteLine(1.618034);

Output 2.1
42
1.618034
Beginner Topic
Use Caution When Hardcoding Values

The practice of placing a value directly into source code is called hardcoding, because changing the values requires recompiling the code. Developers must carefully consider the choice between hardcoding values within their code and retrieving them from an external source, such as a configuration file, so that the values are modifiable without recompiling.

By default, when you specify a literal number with a decimal point, the compiler interprets it as a double type. Conversely, a literal value with no decimal point generally defaults to an int, assuming the value is not too large to be stored in a 32-bit integer. If the value is too large, the compiler interprets it as a long. Furthermore, the C# compiler allows assignment to a numeric type other than an int, assuming the literal value is appropriate for the target data type. short s = 42 and byte b = 77 are allowed, for example. However, this is appropriate only for constant values; b = s is not allowed without additional syntax, as discussed in the section “Conversions between Data Types” later in this chapter.
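These defaults are easy to confirm with a minimal sketch (not one of the numbered listings):

Console.WriteLine(42.GetType());          // Displays: System.Int32
Console.WriteLine(1.618.GetType());       // Displays: System.Double
Console.WriteLine(4294967296.GetType());  // Too large for a 32-bit integer,
                                          // so it displays: System.Int64
short s = 42;  // Allowed: the constant 42 fits in a short
byte b = 77;   // Allowed: the constant 77 fits in a byte
// b = s;      // Error: no implicit conversion from short to byte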

As previously discussed in this section, there are many different numeric types in C#. In Listing 2.2, a literal value is passed to the WriteLine method. Since numbers with a decimal point will default to the double data type, the output, shown in Output 2.2, is 1.618033988749895 (the last two digits, 48, are now 5), corresponding to the expected accuracy of a double.

Listing 2.2: Specifying a Literal double
Console.WriteLine(1.6180339887498948);
Output 2.2
1.618033988749895

To view the intended number with its full accuracy, you must explicitly declare the literal value as a decimal type by appending an M (or m) (see Listing 2.3 and Output 2.3).

Listing 2.3: Specifying a Literal decimal
Console.WriteLine(1.6180339887498948M);
Output 2.3
1.6180339887498948

Now the output of Listing 2.3 is as expected: 1.6180339887498948. Note that d is the abbreviation for double, not decimal. To remember that M should be used to identify a decimal, remember that “m is for monetary calculations.”

You can also add a suffix to a value to explicitly declare a literal as a float or double by using the F (or f) and D (or d) suffixes, respectively. For integer data types, the suffixes are U, L, LU, and UL. The type of an integer literal can be determined as follows:

Numeric literals with no suffix resolve to the first data type that can store the value, in this order: int, uint, long, and ulong.
Numeric literals with the suffix U resolve to the first data type that can store the value, in the order uint and then ulong.
Numeric literals with the suffix L resolve to the first data type that can store the value, in the order long and then ulong.
If the numeric literal has the suffix UL or LU, it is of type ulong.

Note that suffixes for literals are case insensitive. However, uppercase is generally preferred to avoid any ambiguity between the lowercase letter l and the digit 1.
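A minimal sketch confirms how suffixed and unsuffixed literals resolve:

Console.WriteLine(42.GetType());    // Displays: System.Int32
Console.WriteLine(42U.GetType());   // Displays: System.UInt32
Console.WriteLine(42L.GetType());   // Displays: System.Int64
Console.WriteLine(42UL.GetType());  // Displays: System.UInt64
// An unsuffixed value too large for long resolves to ulong:
Console.WriteLine(18446744073709551615.GetType());  // Displays: System.UInt64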

Guidelines
DO use uppercase literal suffixes (e.g., 1.618033988749895M).

On occasion, numbers can get quite large and difficult to read. To overcome the readability problem, C# 7.0 added support for a digit separator, an underscore (_), when expressing a numeric literal, as shown in Listing 2.4.

Listing 2.4: Specifying Digit Separator
Console.WriteLine(9_814_072_356M);

In this case, we separate the digits into thousands (threes), but this is not required by C#. You can use the digit separator to create whatever grouping you like as long as the underscore occurs between the first and last digits. In fact, you can even have multiple underscores side by side—with no digit between them.
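For example, each of the following statements prints the same value; only the grouping differs:

Console.WriteLine(9_814_072_356);     // Grouped in thousands
Console.WriteLine(98_1407_2356);      // Arbitrary grouping
Console.WriteLine(9__814__072__356);  // Adjacent underscores are allowed
// Each statement displays: 9814072356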

In addition, you may wish to use exponential notation instead of writing out several zeroes before or after the decimal point (whether using a digit separator or not). To use exponential notation, supply the e or E infix, follow the infix character with a positive or negative integer, and complete the literal with the appropriate data type suffix. For example, you could print out Avogadro’s number as a float, as shown in Listing 2.5 and Output 2.4.

Listing 2.5: Exponential Notation
Console.WriteLine(6.023E23F);
Output 2.4
6.023E+23
Beginner Topic
Hexadecimal Notation

Usually, you work with numbers that are represented with a base of 10, meaning there are 10 symbols (0–9) for each place value in the number. If a number is displayed with hexadecimal notation, it is displayed with a base of 16, meaning 16 symbols are used for each place value: 0–9 and A–F (or their lowercase equivalents). Therefore, 0x000A corresponds to the decimal value 10, and 0x002A corresponds to the decimal value 42 (2 × 16 + 10). The actual number is the same. Switching from hexadecimal to decimal, or vice versa, does not change the number itself, just the representation of the number.

Each hex digit occupies four bits, so a byte can be represented by two hex digits.

In all discussions of literal numeric values so far, we have covered only base 10 type values. C# also supports the ability to specify hexadecimal values. To specify a hexadecimal value, prefix the value with 0x and then use any hexadecimal series of digits, as shown in Listing 2.6.

Listing 2.6: Hexadecimal Literal Value
// Display the value 42 using a hexadecimal literal
Console.WriteLine(0x002A);

Output 2.5 shows the results of Listing 2.6. Note that this code still displays 42, not 0x002A.

Output 2.5
42

Starting with C# 7.0, you can also represent numbers as binary values (see Listing 2.7).

Listing 2.7: Binary Literal Value
// Display the value 42 using a binary literal
Console.WriteLine(0b101010);

The syntax is like the hexadecimal syntax except with 0b as the prefix (an uppercase B is also allowed). See “Beginner Topic: Bits and Bytes” in Chapter 4 for an explanation of binary notation and the conversion between binary and decimal.

Note that starting with C# 7.2, you can place the digit separator after the x for a hexadecimal literal or the b for a binary literal.
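For example (C# 7.2 or later):

// The digit separator may immediately follow the 0x or 0b prefix.
Console.WriteLine(0x_00_2A);    // Displays: 42
Console.WriteLine(0b_10_1010);  // Displays: 42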

Advanced Topic
Formatting Numbers as Hexadecimal

To display a numeric value in its hexadecimal format, it is necessary to use the x or X numeric formatting specifier. The casing determines whether the hexadecimal letters appear in lowercase or uppercase. Listing 2.8 with Output 2.6 shows an example of how to do this.

Listing 2.8: Example of a Hexadecimal Format Specifier
// Displays "0x2A"
Console.WriteLine($"0x{42:X}");
Output 2.6
0x2A

Note that the numeric literal (42) can be in decimal or hexadecimal form. The result will be the same. Also, to achieve the hexadecimal formatting, we rely on the formatting specifier, separated from the string interpolation expression with a colon.
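The specifier also accepts a minimum digit count, and a lowercase x produces lowercase hexadecimal letters, as this small sketch shows:

Console.WriteLine($"0x{42:X4}");  // Displays: 0x002A (padded to four digits)
Console.WriteLine($"0x{255:x}");  // Displays: 0xff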

Advanced Topic
Round-Trip Formatting

By default, Console.WriteLine(1.618033988749895); displays 1.61803398874989, with the last digit missing. To more accurately identify the string representation of the double value, it is possible to convert it using a format string and the round-trip format specifier, R (or r). For example, Console.WriteLine($"{1.618033988749895:R}") will display 1.6180339887498949.

The round-trip format specifier returns a string that, if converted back into a numeric value, will always result in the original value. Listing 2.9 with Output 2.7 shows the numbers are not equal without the use of the round-trip format.

Listing 2.9: Formatting Using the R Format Specifier
// ...
const double number = 1.618033988749895;
double result;
string text;

text = $"{number}";
result = double.Parse(text);
Console.WriteLine($"{result == number}: {result} == {number}");

text = $"{number:R}";
result = double.Parse(text);
Console.WriteLine($"{result == number}: {result} == {number}");

// ...
Output 2.7
False: 1.61803398874989 == 1.618033988749895
True: 1.618033988749895 == 1.618033988749895

When assigning text the first time, there is no round-trip format specifier; as a result, the value returned by double.Parse(text) is not the same as the original number value. In contrast, when the round-trip format specifier is used, double.Parse(text) returns the original value.

For those readers who are unfamiliar with the == syntax from C-based languages, result == number evaluates to true if result is equal to number, while result != number does the opposite. Both assignment and equality operators are discussed in the next chapter.

________________________________________

1. Starting with .NET Core 3.0.
2. Prior to .NET Core 3.0, the number of bits (binary digits) converts to 15 decimal digits, with a remainder that contributes to a sixteenth decimal digit as expressed in Table 2.2. Specifically, numbers from 1.7 × 10³⁰⁷ up to (but not including) 1 × 10³⁰⁸ have only 15 significant digits. However, numbers ranging from 1 × 10³⁰⁸ to 1.7 × 10³⁰⁸ will have 16 significant digits. A similar range of significant digits occurs with the decimal type as well.
3. The exact BCL type varies based on context.