Originally posted by Dannyx
I don't speak binary so I'm having a hard time following those numbers, what do they mean exactly ?
bit = Binary digIT = 0 or 1
octal digit = 0, 1, 2, 3, 4, 5, 6, 7
decimal digit = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
hexadecimal digit = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
An octal digit is a shorthand for a group of 3 contiguous bits (###). Three bits, each having one of two values, means you have 2*2*2=8 combinations {000,001,010,011,100,101,110,111}.
A hexadecimal (hexa = 6 + decimal = 10, hence base 16) digit is a shorthand for a group of 4 contiguous bits (####). Four bits, each having one of two values, means you have 2*2*2*2=16 combinations {0000,0001,0010,0011,0100,0101,0110,0111,1000,1001,1010,1011,1100,1101,1110,1111}.
Note that the rightmost bit is the least significant -- it changes most often, just like the rightmost digit in a decimal number changes most often {00,01,02,03,04,05,06,07,08,09,10,11,...}.
If we group bits into groups of 4 bits, then a byte (8 bits) can be conveniently represented by exactly two hexadecimal digits! If we tried to use octal digits instead, each one could only represent three bits, so we would need three octal digits to represent an eight-bit byte; hex is thus more terse.
The "0x" prefix is a way of indicating that the digits that follow are to be interpreted as a hexadecimal value -- otherwise, you wouldn't know that "77" was decimal, hexadecimal or even octal!
So, 0x40 is a hexadecimal value. The two nybbles (i.e., half bytes) are '4' and '0'. Being half a byte, a nybble is exactly 4 bits -- just like a hexadecimal digit!
To decode/encode between binary bits and hexadecimal digits:
0 = 0000
1 = 0001
2 = 0010
3 = 0011
4 = 0100
5 = 0101
...
9 = 1001
A = 1010
B = 1011
...
F = 1111
So, 0x40 is really 0100 0000 or 01000000, binary.
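If you want to see those two nybbles for yourself, a shift and a mask pull them apart; a minimal sketch:

#include <stdio.h>

int main(void)
{
    unsigned char b = 0x40;             /* 0100 0000 binary       */
    unsigned char hi = (b >> 4) & 0x0F; /* upper nybble: 0100 = 4 */
    unsigned char lo = b & 0x0F;        /* lower nybble: 0000 = 0 */
    printf("%X%X\n", hi, lo);           /* prints 40              */
    return 0;
}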
Originally posted by Dannyx
Since they follow a DEFINE tag, it would seem you assign CH_A/B/C those values (much like I tried to do when I wrote "define CH_A 2" which assigns pin 2 to that CH_A).
Not quite -- #define does not assign anything; it tells the preprocessor to perform a LITERAL text substitution. If, for example, you had
#define CH_A HAPPY
#define CH_B BIRTHDAY
in which case
CH_A CH_B
would be seen as
HAPPY BIRTHDAY
This may or may not make sense to the compiler. Clearly, if CH_A and CH_B were 0x40 and 0x08, then it would be seen as
0x40 0x08
which probably wouldn't make sense to the compiler (you typically need an operator -- like +, -, &, ^, ... -- between two numeric values!)
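For what it's worth, when two #define'd values like these appear side by side in real code, the operator between them is usually a bitwise OR, which merges their set bits into one value (I'm assuming here that CH_A/CH_B are meant as flag bits):

#include <stdio.h>

#define CH_A 0x40 /* 0100 0000 */
#define CH_B 0x08 /* 0000 1000 */

int main(void)
{
    unsigned char channels = CH_A | CH_B; /* 0100 1000 */
    printf("0x%02X\n", channels);         /* prints 0x48 */
    return 0;
}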
To adopt the syntax that you used in your example code, you would treat CH_A and CH_B as variables and explicitly assign values to them:
CH_A = 0x40;
CH_B = 0x08;
The difference is subtle but significant. If you later write (on purpose or by accident)
CH_A = 5;
then the value of CH_A will silently change from 0x40 (which is actually 64 in decimal) to 5.
With the #define approach, that would instead be an error, because the preprocessor would LITERALLY replace "CH_A" with "0x40" and your statement would appear as
0x40 = 5;
which makes no sense! It is equivalent to saying
64 = 5;
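You can let the compiler demonstrate the difference; a minimal sketch contrasting the two approaches (CH_V is just an illustrative name for the variable version):

#define CH_A 0x40          /* macro: literal text substitution */
unsigned char CH_V = 0x40; /* an ordinary variable             */

int main(void)
{
    CH_V = 5;     /* compiles fine -- the value silently changes */
 /* CH_A = 5; */  /* uncomment and the line becomes "0x40 = 5;",
                     which the compiler rejects (0x40 is not
                     something you can assign to)                */
    return 0;
}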