555 countdown timer design question


    Re: 555 countdown timer design question

    Originally posted by Curious.George View Post
    First, I assume "CH A" is supposed to be "CH_A", etc. -- spaces are not allowed inside identifiers.
    Yes, that is correct - my bad.

    #define CH_A 0x40
    #define CH_B 0x08
    #define CH_C 0x04
    #define CH_D 0x01
    I don't speak binary, so I'm having a hard time following those numbers - what do they mean, exactly? Gotta hand it to the people who can actually write code like that.

    Since they follow a #define directive, it would seem you assign CH_A/B/C/D those values (much like I tried to do when I wrote "#define CH_A 2", which assigns pin 2 to CH_A). This worked for me as well: the code still executed properly after using #define instead of int for my button pin (the same pin as before, just for testing purposes). I read that this is not only better as far as memory usage is concerned, but that it also prevents errors - something you mentioned way back when you said the compiler can detect errors. I may be wrong in these assumptions, though.
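    A minimal sketch of the two approaches, assuming an Arduino-style setup (BUTTON_PIN and pin 2 are just placeholders, not from the actual project):

    #define BUTTON_PIN 2      // preprocessor constant: uses no RAM, can't be reassigned
    // int buttonPin = 2;     // variable alternative: takes RAM, could be changed by accident

    void setup() {
      pinMode(BUTTON_PIN, INPUT_PULLUP);   // assumes button wired to ground, internal pull-up
    }

    void loop() {
      if (digitalRead(BUTTON_PIN) == LOW) {
        // button pressed (active low with the pull-up)
      }
    }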

    The problem was trying to read the bastards at the same time, since that resulted in the erratic behavior I mentioned. Seems I was on to something and got close enough - I just have to use an AND rather than an OR. Slightly counterintuitive, since logically your brain tells you "if this OR that is pressed, do something", but it's not quite like that for a micro.
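    A minimal sketch of that difference, assuming active-low buttons on hypothetical pins 2 and 3:

    #define BTN_A 2          // hypothetical pin assignments
    #define BTN_B 3

    void setup() {
      pinMode(BTN_A, INPUT_PULLUP);
      pinMode(BTN_B, INPUT_PULLUP);
    }

    void loop() {
      // "this AND that pressed": both must read LOW at the same moment
      if (digitalRead(BTN_A) == LOW && digitalRead(BTN_B) == LOW) {
        // combined action here
      }
      // with || instead, a single press already fires the branch -
      // which is the erratic behavior described above
    }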
    Wattevah...



      Re: 555 countdown timer design question

      it's not binary, it's hexadecimal.



        Re: 555 countdown timer design question

        Originally posted by stj View Post
        it's not binary, it's hexadecimal.
        Proof that I don't speak it
        Wattevah...



          Re: 555 countdown timer design question

          indeed.

          but you need to learn - it's the foundation of computer coding.

          so i'm gonna teach you.

          decimal digits go from 0-9.
          when they run out, they roll over into the next digit, creating 10.

          hexadecimal digits go from 0 to F (0123456789ABCDEF),
          then roll over to 10.
          so 10 in hexadecimal is actually equal to 16 in normal (decimal) numbers.

          why does this matter?
          because it maps directly onto binary.
          the 4 bits in a group have the weights 1-2-4-8.
          add them all up and you get 15 - which is one hex digit, F.

          two groups of 1-2-4-8 make 8 bits.
          all 8 bits set is FF in hex, and 255 in decimal.
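          a quick way to check that yourself, in plain C (a minimal sketch of mine, not code from this thread):

          #include <stdio.h>

          int main(void) {
              printf("%d\n", 1 + 2 + 4 + 8);            /* 15  -> one hex digit, F */
              printf("%d\n", 1+2+4+8 + 16+32+64+128);   /* 255 -> two hex digits   */
              printf("%X\n", 255);                      /* prints FF               */
              return 0;
          }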
          Last edited by stj; 12-23-2017, 11:47 AM.



            Re: 555 countdown timer design question

            Originally posted by Dannyx View Post
            Yes, that is correct - my bad.
            That's why letting your code speak directly is always best (i.e., cut and paste).

            I don't speak binary, so I'm having a hard time following those numbers - what do they mean, exactly?
            Revisit my earlier post. Please note that these were just example values that I pulled out of the air -- your actual values will depend on how your circuit is wired, etc.

            bit = Binary digIT = 0 or 1
            octal digit = 0, 1, 2, 3, 4, 5, 6, 7
            decimal digit = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
            hexadecimal digit = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F

            An octal digit is a shorthand for a group of 3 contiguous bits (###). Three bits, each having one of two values, means you have 2*2*2=8 combinations {000,001,010,011,100,101,110,111}.

            A hexadecimal (hex = 6, decimal = 10) digit is a shorthand for a group of 4 contiguous bits (####). Four bits, each having one of two values, means you have 2*2*2*2=16 combinations {0000,0001,0010,0011,0100,0101,0110,0111,1000,1001,1010,1011,1100,1101,1110,1111}.

            Note that the rightmost bit is the least significant -- it changes most often, just like the rightmost digit in a decimal number changes most often (00, 01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 11, ...).

            If we group bits into groups of 4, then a byte (8 bits) can be conveniently represented by exactly two hexadecimal digits! If we tried to use octal digits, we could only represent three bits with each one, so we would need three octal digits to represent an eight-bit byte; hex is thus more terse.

            The "0x" prefix is a way of indicating that the digits that follow are to be interpreted as a hexadecimal value -- otherwise, you wouldn't know that "77" was decimal, hexadecimal or even octal!

            So, 0x40 is a hexadecimal value. The two nybbles (i.e., half bytes) are '4' and '0'. Being half a byte, a nybble is exactly 4 bits -- just like a hexadecimal digit!

            To decode/encode between binary bits and hexadecimal digits:
            0 = 0000
            1 = 0001
            2 = 0010
            3 = 0011
            4 = 0100
            5 = 0101
            ...
            9 = 1001
            A = 1010
            B = 1011
            ...
            F = 1111

            So, 0x40 is really 0100 0000 or 01000000, binary.
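            As a sanity check, here is a minimal C sketch that pulls the two nybbles back out of 0x40 (the shift-and-mask idiom is standard C, not something from your code):

            #include <stdio.h>

            int main(void) {
                unsigned char value = 0x40;                /* binary 0100 0000 */
                unsigned char high  = (value >> 4) & 0x0F; /* upper nybble: 4  */
                unsigned char low   = value & 0x0F;        /* lower nybble: 0  */
                printf("%X %X\n", high, low);              /* prints "4 0"     */
                return 0;
            }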

            Since they follow a #define directive, it would seem you assign CH_A/B/C those values (much like I tried to do when I wrote "#define CH_A 2", which assigns pin 2 to CH_A).
            #define is a directive that tells the compiler "whenever you see the identifier CH_A, replace it with 0x40" -- LITERALLY! (Actually, this is handled by the preprocessor before the compiler ever sees it.) So, I could write something like

            #define CH_A HAPPY
            #define CH_B BIRTHDAY

            in which case

            CH_A CH_B

            would be seen as

            HAPPY BIRTHDAY

            This may or may not make sense to the compiler. Clearly, if CH_A and CH_B were 0x40 and 0x08, then it would be seen as

            0x40 0x08

            which probably wouldn't make sense to the compiler (you typically need an operator -- like +, -, &, ^, ... -- between two numeric values!)

            To adopt the syntax that you used in your example code, you would treat CH_A and CH_B as variables and explicitly assign values to them:

            CH_A = 0x40;
            CH_B = 0x08;

            The difference is subtle but significant. If you later write (on purpose or by accident)

            CH_A = 5;

            Then, the value of CH_A will silently change from 0x40 (which is actually 64 decimal) to 5.

            In my case, that would instead be seen as an error, because the preprocessor would LITERALLY replace the "CH_A" with "0x40", and your statement would appear as

            0x40 = 5;

            which makes no sense! It is equivalent to saying

            64 = 5;
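            To see both behaviors side by side, here is a minimal C sketch (the names loosely mirror the ones above; ch_b is mine, for illustration):

            #include <stdio.h>

            #define CH_A 0x40            /* preprocessor constant */

            int main(void) {
                int ch_b = 0x08;         /* ordinary variable */

                /* CH_A = 5; */          /* would expand to "0x40 = 5;" -- a compile-time error */
                ch_b = 5;                /* compiles fine; the old 0x08 is silently lost        */

                printf("%d %d\n", CH_A, ch_b);   /* prints "64 5" */
                return 0;
            }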
