1 Converting integers from decimal
Given an integer in decimal, convert it into X-bit (or variable-bit-length) signed (two's complement) or unsigned binary, and represent it in hexadecimal format.
1.1 General solution
1.1.1 Unsigned binary
For unsigned binary, we can only represent non-negative numbers, so just convert from decimal to binary. If X bits are required and the binary representation has fewer than X bits, pad it with 0's on the left until X bits are reached.
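As a quick sanity check, the convert-and-pad step can be sketched in Python (a hypothetical helper, not part of the course material):

```python
def to_unsigned_binary(n, bits):
    """Decimal to X-bit unsigned binary: convert, then zero-pad on the left."""
    if not 0 <= n < 2 ** bits:
        raise ValueError("number does not fit in {} unsigned bits".format(bits))
    return format(n, "0{}b".format(bits))

print(to_unsigned_binary(3012, 16))  # 0000101111000100
```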
1.1.2 Signed binary
For signed binary, we can represent positive numbers in much the same way: just make sure the most significant bit (the leftmost bit) is 0. This shouldn't be a problem. For example, 3012 in 16-bit signed binary is simply 2048 + 512 + 256 + 128 + 64 + 4 = $2^{11} + 2^9 + 2^8 + 2^7 + 2^6 + 2^2$ = 0000101111000100_{two}.
To represent negative numbers using the two's complement method, we first convert the magnitude of the number into binary, take the complement (switch all 0's into 1's and vice versa), and add 1. Then pad it with 1's on the left until X bits are reached. For example, to convert -435 into 16-bit signed binary:
- Convert the magnitude to binary as usual: 435 = 256 + 128 + 32 + 16 + 2 + 1 = $2^8 + 2^7 + 2^5 + 2^4 + 2^1 + 2^0$ = 110110011_{two}
- Take the complement: 001001100_{two}
- Add one: 001001101_{two}
- Pad it with 1's on the left until 16 bits are reached: 1111111001001101_{two}
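The manual procedure can be cross-checked in Python, where masking with 2**bits - 1 produces the same two's-complement pattern in one step (the helper name is made up):

```python
def to_signed_binary(n, bits):
    """X-bit two's complement of n. Masking with 2**bits - 1 is equivalent
    to the complement-and-add-one procedure described above."""
    if not -(2 ** (bits - 1)) <= n < 2 ** (bits - 1):
        raise ValueError("number does not fit in {} signed bits".format(bits))
    return format(n & (2 ** bits - 1), "0{}b".format(bits))

print(to_signed_binary(-435, 16))  # 1111111001001101
```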
1.1.3 Converting to hexadecimal
To convert the binary number into hexadecimal, first split the number into groups of four bits, starting from the right, so that if the number of bits is not a multiple of four, the leftover shorter group ends up on the left. For example, the bitstring 10011010 would be split as 1001 1010, and the bitstring 101011110 would be split as 1 0101 1110 (which is equivalent to 0001 0101 1110).
Then, just take each 4-bit group and convert it into decimal. If the number lies between 0 and 9 (inclusive), then the hexadecimal digit is the same; otherwise, if it's between 10 and 15 (inclusive), use the following mapping: 10 = a, 11 = b, 12 = c, 13 = d, 14 = e, 15 = f. Write the hexadecimal digits in the same order as the groups in the split bitstring and prefix the result with 0x^{1}.
For example, the bitstring 1111111001001101 would be split as 1111 1110 0100 1101. The decimal equivalents of the groups are: 15, 14, 4, 13. Thus, in hexadecimal, we have 0xfe4d (or 0xFE4D).
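The grouping-and-mapping procedure above can be sketched in Python (a made-up helper for checking answers):

```python
def binary_to_hex(bitstring):
    """Split into groups of four bits from the right and map each group
    to its hexadecimal digit."""
    padded = bitstring.zfill(-(-len(bitstring) // 4) * 4)  # pad to a multiple of 4
    digits = "0123456789abcdef"
    groups = [padded[i:i + 4] for i in range(0, len(padded), 4)]
    return "0x" + "".join(digits[int(g, 2)] for g in groups)

print(binary_to_hex("1111111001001101"))  # 0xfe4d
print(binary_to_hex("101011110"))         # 0x15e
```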
1.2 Examples
- Exercises 1, questions 1, 3, 5 (a) and 14
2 Converting non-integers from decimal
Given a real number in decimal, convert it first into a normalised binary number using scientific notation, then write it as a single-precision float (IEEE format), then represent it in hexadecimal format.
2.1 General solution
2.1.1 Getting the binary representation
First, figure out how to represent the floor of the given number in binary, the usual way. For example, if the number is 86.5625, first convert 86 into binary: 64 + 16 + 4 + 2 = $2^6 + 2^4 + 2^2 + 2^1$ = 1010110_{two}. Then, take the part to the right of the decimal point and convert that: 0.5625 = 0.5 + 0.0625 = $2^{-1} + 2^{-4}$ = 0.1001_{two}. Put the two together to get 86.5625 in binary: 1010110.1001.
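The fractional part can also be converted by repeated doubling: each time the doubled fraction carries past 1, the next bit after the binary point is 1. A sketch in Python (the helper name is my own):

```python
def fraction_to_binary(frac, max_bits=23):
    """Repeatedly double the fractional part; each carry past the
    binary point is the next bit. Stops when exact or at max_bits."""
    bits = []
    while frac and len(bits) < max_bits:
        frac *= 2
        carry, frac = divmod(frac, 1)
        bits.append(str(int(carry)))
    return "".join(bits)

print(fraction_to_binary(0.5625))  # 1001
```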
2.1.2 Scientific notation in base 2
Now we need to write it using scientific notation. If we moved the binary point^{2} 6 places to the left, we would have 1.0101101001, and so 1010110.1001 and 1.0101101001_{two} $\times 2^6$ are equivalent. We can drop the 1 to the left of the binary point to obtain the significand, which is 0101101001 (note the presence of the leading 0, which is required in this situation).
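The normalisation step can be cross-checked with Python's math.frexp, which returns a mantissa in [0.5, 1); shifting by one place gives the 1.f form used here:

```python
import math

m, e = math.frexp(86.5625)  # 86.5625 == m * 2**e with 0.5 <= m < 1
m, e = m * 2, e - 1         # rewrite as 1.f * 2**e
print(e)  # 6
print(m)  # 1.3525390625, i.e. 1.0101101001 in binary
```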
2.1.3 Converting into IEEE format
Since we need 23 bits for the significand, we pad it with zeroes on the right, resulting in 01011010010000000000000.
To get the exponent code, we add 127 to the exponent of 2 used (6 in this case), resulting in 133, then represent this as an 8-bit unsigned integer: 10000101.
Since the number is positive, the sign bit is 0.
Consequently, we can write the number in IEEE format: 01000010101011010010000000000000 (sign bit, then the exponent bitstring, then the significand bitstring).
To convert this into hexadecimal, we use the grouping 0100 0010 1010 1101 0010 0000 0000 0000 to obtain 0x42ad2000.
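The whole hand-conversion can be verified against CPython's struct module, which exposes the IEEE single-precision encoding directly (the function name is my own):

```python
import struct

def float_to_ieee_bits(x):
    """Pack x as a big-endian IEEE single-precision float and return
    the 32-bit pattern as a binary string."""
    (n,) = struct.unpack(">I", struct.pack(">f", x))
    return format(n, "032b")

bits = float_to_ieee_bits(86.5625)
print(bits)               # 01000010101011010010000000000000
print(hex(int(bits, 2)))  # 0x42ad2000
```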
2.2 Examples
- Exercises 1, questions 6, 9 and 11
3 Converting from binary
Given a binary number:
a. Convert it into decimal, treating it as unsigned
b. Convert it into decimal, treating it as signed
c. Convert it into hexadecimal
d. Write it as an IEEE single precision float
3.1 General solution
a. Just convert it from binary to decimal, the standard way. Example: 11101110_{two} is $128+64+32+8+4+2 = 238$.
b. Take the first bit as the sign bit. If it's 0, then it's a non-negative number, and you convert it the same way you would convert an unsigned number. On the other hand, if the sign bit is 1, then it's a negative number: take all the digits but the first one, take the complement, add one, convert that to decimal, and negate the result. Example: 1111101001 has a sign bit of 1, so we take the complement of 111101001 to get 000010110, add one to get 000010111, and finally convert that to decimal: 23. So the original number is -23.
c. See the Converting to hexadecimal section above.
d. See the Scientific notation in base 2 and Converting into IEEE format sections above. As an example, if the binary number is 0.10101010... (10 repeating on the right side of the binary point), then normalising gives 1.0101..._{two} $\times 2^{-1}$. The significand, truncated to 23 bits, would be 01010101010101010101010, the exponent code would be -1 + 127 = 126, i.e. 01111110, and the sign bit would be 0, resulting in 00111111001010101010101010101010.
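Parts (a) and (b) can be sketched in Python: int(s, 2) handles the unsigned case, and the signed case subtracts 2**bits when the sign bit is set (a hypothetical helper, equivalent to the complement-and-add-one-then-negate method):

```python
def signed_binary_to_int(bitstring):
    """Interpret a two's-complement bitstring: read it as unsigned,
    then subtract 2**bits if the sign bit is set."""
    n = int(bitstring, 2)
    if bitstring[0] == "1":
        n -= 2 ** len(bitstring)
    return n

print(int("11101110", 2))                  # 238 (treated as unsigned)
print(signed_binary_to_int("1111101001"))  # -23
```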
3.2 Examples
- Exercises 1, questions 2 and 13
4 Converting from hexadecimal
Given a hexadecimal number, convert it into signed (or unsigned) binary, then into decimal.
4.1 General solution
First, just convert the number from hexadecimal to binary. This is pretty straightforward: for example, to convert 0xff3e, write out the bit group corresponding to each hexadecimal digit: 1111 1111 0011 1110. See the Converting from binary section for how to proceed from there.
To continue the above example: if 1111111100111110 is treated as a signed binary integer, then we would first take the complement of 111111100111110 to get 000000011000001, then add one to get 000000011000010, then convert that to decimal to get 194 and negate. So the answer is -194.
If this number were treated as unsigned, then it would be the large and unwieldy 65342 in decimal.
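Both readings can be checked in Python; int with base 16 does the hex parsing, and the same two's-complement sign correction applies (a made-up helper):

```python
def hex_to_signed_int(hexstring, bits):
    """Parse hex, then apply the two's-complement sign correction."""
    n = int(hexstring, 16)
    if n >= 2 ** (bits - 1):
        n -= 2 ** bits
    return n

print(hex_to_signed_int("0xff3e", 16))  # -194
print(int("0xff3e", 16))                # 65342 (treated as unsigned)
```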
4.2 Examples
- Exercises 1, questions 4 and 5 (b)
5 Properties of binary numbers
What are the largest and smallest numbers that can be represented in X-bit signed and unsigned binary?
5.1 General solution
For X-bit unsigned binary, the smallest number that can be represented is 0, for obvious reasons, and the largest is $2^X - 1$.
For X-bit signed binary, the largest number that can be represented is $2^{X-1} - 1$, and the smallest is $-(2^{X-1})$^{3}.
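These bounds are easy to tabulate (hypothetical helpers, assuming two's complement):

```python
def unsigned_range(bits):
    """Smallest and largest values in X-bit unsigned binary."""
    return 0, 2 ** bits - 1

def signed_range(bits):
    """Smallest and largest values in X-bit two's complement."""
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

print(unsigned_range(8))   # (0, 255)
print(signed_range(8))     # (-128, 127)
print(signed_range(16))    # (-32768, 32767)
```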
5.2 Examples
- Exercises 1, question 8 (a) and (b)
6 Properties of IEEE single precision floats
a. How many different numbers represented in IEEE single precision float format are strictly between X and Y?
b. If the bitstrings for two different numbers represented in IEEE single precision float format were treated instead as signed (or unsigned) integers, what can we conclude about their relationship to each other?
c. What are the largest and smallest numbers that can be represented in this format?
d. What is the smallest difference between two distinct and normalised floating point numbers? In other words, what is the smallest unit of precision?
6.1 General solution
a. If X and Y are both powers of 2 of the form $2^{a}$ and $2^{b}$ with $a < b$ (for instance, $2^{-5}$ and $2^{-3}$, or $2^{17}$ and $2^{19}$), then, assuming both exponents stay within the normalised range, each binade $[2^{k}, 2^{k+1})$ contains exactly $2^{23}$ representable numbers, so there are $(b - a) \times 2^{23} - 1$ different numbers strictly between them (the $-1$ excludes $2^{a}$ itself).
b. Not much in general, although the first bit is still a sign bit (as EhsanKia notes), and for two positive floats the ordering of their bit patterns read as integers matches the ordering of the floats, since the exponent field sits in the higher-order bits.
c. $+\infty$ (01111111100000000000000000000000) and $-\infty$ (11111111100000000000000000000000) are both possible, although they probably don't count. The smallest positive normalised number is $2^{-126}$ (its negative counterpart, $-2^{-126}$, is the additive inverse), and the smallest positive denormalised number (with an exponent code of 00000000) is $2^{-149}$, which is represented as 00000000000000000000000000000001. The largest number is $1.11111111111111111111111_{two} \times 2^{127}$ (according to the answer key to question 7 in exercises 1), which is represented as 01111111011111111111111111111111.
d. $0.00000000000000000000001_{two} \times 2^{-126}$, which is $2^{-149}$: adjacent floats with the same exponent code differ by one unit in the last place of the significand, and that unit is smallest when the exponent is at its minimum.
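These extreme values can be checked by reinterpreting bit patterns with Python's struct module (a sketch; the helper name is my own):

```python
import struct

def bits_to_float(pattern):
    """Reinterpret a 32-bit pattern as an IEEE single-precision float."""
    return struct.unpack(">f", struct.pack(">I", int(pattern, 2)))[0]

# Positive infinity: exponent code all ones, significand all zeros
print(bits_to_float("0" + "1" * 8 + "0" * 23))  # inf
# Smallest positive denormalised number is 2**-149
print(bits_to_float("0" * 31 + "1") == 2 ** -149)  # True
# Largest finite number is (2 - 2**-23) * 2**127
print(bits_to_float("0" + "11111110" + "1" * 23) == (2 - 2 ** -23) * 2 ** 127)  # True
```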
6.2 Examples
- Exercises 1, questions 7, 8 (c), 10 and 12

1. In his notes for lecture 1, Professor Langer mentions that when hexadecimal numbers are written with capital letters (so A, B, etc instead of a, b etc), they are prefixed with 0X instead of 0x. Although this may be true in some specific context, it does not appear to be generally true, so whenever capital letters are used in hex codes, we use the 0x prefix instead of 0X. ↩
2. Like a decimal point, but for binary, lol ↩

3. For example, if we have 8 bits, the largest signed number is 127 and the smallest is -128. This is because 10000000 is -128, as is mentioned in the lecture notes. See the MySQL docs for a very application-specific example. ↩