

<plays taps> 500GB bites the dust...


    #21
    Re: <plays taps> 500GB bites the dust...

    Originally posted by Wester547 View Post
    So you don't think HDDs that run 24/7 are reliable?
    Well, I trust HDDs when they have a bit more power cycles, that's all I'm saying. That way, it gives me a bit more confidence that the next time I power-cycle the machine, the HDD isn't as likely to come out dead.

    Problem with running HDDs 24/7 for too long is that the bearings (be it oldschool ball bearings or the fluid bearings that most HDDs use now) on both the headstack and the spindle motor wear out a bit. And when they wear out, they can screw up the tolerances between the heads and the platters just enough to make the HDD not boot.

    When you power OFF the HDD when not in use, you save the HDD some power-on hours at the expense of power cycles. Both POH and power cycles are not good for the HDD when you reach a certain number, so I think it's good to balance them.
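
    Incidentally, both of those numbers are reported in SMART data (attribute 9 is Power-On Hours and attribute 12 is Power Cycle Count on most drives), so the balancing act can be scripted. A minimal Python sketch; the threshold values here are purely illustrative guesses, not manufacturer specs:

```python
# Rough HDD "wear budget" check based on SMART attribute 9 (Power-On
# Hours) and attribute 12 (Power Cycle Count). The limits below are
# illustrative assumptions, not manufacturer specifications.

def wear_check(power_on_hours, power_cycles,
               poh_limit=30_000, cycle_limit=10_000):
    """Return the fraction of each assumed wear budget already used."""
    poh_used = power_on_hours / poh_limit
    cycles_used = power_cycles / cycle_limit
    return {
        "poh_fraction": round(poh_used, 2),
        "cycle_fraction": round(cycles_used, 2),
        "dominant": "hours" if poh_used >= cycles_used else "cycles",
    }

# Example: a drive that ran 24/7 for about two years with few restarts
print(wear_check(power_on_hours=17_000, power_cycles=150))
```

    If the two fractions come out wildly lopsided, that's a hint to shift your power-cycling habits in the other direction.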

    Originally posted by Wester547 View Post
    For what it's worth, I have a ST340016A which conveniently decided to develop a bad head stack after 16,000 hours of use and 7,000 power cycles for no apparent reason, about three years ago.
    In my experience, both the Barracuda ATA IV and 7200.7 don't like high POH. In this case, 16,000 POH seems like very little, but it's actually quite a bit for these HDDs. From what I've seen of the 7200.7 line, after 10k to 15k hours they become much more likely to fail. I think it has something to do with the spindle motor bearings just not being able to handle it. Yours taking much longer to spin up does sound a bit like a spindle motor bearing problem. But of course, it could also be that the spindle motor driver on the circuit board is faulty on one phase.

    Would sure be interesting to investigate.

    What I like about these older HDDs like the ATA IV and 7200.7 is they still use the old parallel recording technology. So even if you don't use the HDD for a very long time, the chances of the data going corrupt are much smaller than on the high data density perpendicular technology HDDs of today.

    Originally posted by Wester547 View Post
    I don't trust HDDs, period.
    Yes, me neither, which is why I always keep at least several backups of my data.

    Also, I want to warn people that flash drives are not very good for long-term backups either. They can die out of the blue too. I had a 4 GB one die on me with barely any use. The interesting thing is that it has a Samsung flash IC, so one would think it should have been a reliable flash drive. Yet, it just quit without any prior signs. Stranger still, the OS could still see the flash drive and even the files on it. It just couldn't read anything from, or write anything to, the flash drive.

    So, as always, the most secure way to prevent data loss is to keep multiple backups. And if your most important data isn't too much and can fit on a standard 700 MB CD, then by all means burn it on a CD. I've yet to have a burned CD fail on me.
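
    One caveat with backups: they only protect you if you actually verify them once in a while. A small sketch (the manifest filename is made up) that records SHA-256 checksums for every file in a folder and later re-verifies them, so silent corruption on one copy is caught while the other copies are still good:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Hash a file in chunks so big backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(folder, manifest="checksums.json"):
    """Record a checksum for every file currently under `folder`."""
    sums = {str(p): sha256_of(p)
            for p in Path(folder).rglob("*") if p.is_file()}
    Path(manifest).write_text(json.dumps(sums, indent=2))

def verify(manifest="checksums.json"):
    """Return the files whose contents changed or disappeared."""
    sums = json.loads(Path(manifest).read_text())
    return [p for p, digest in sums.items()
            if not Path(p).is_file() or sha256_of(p) != digest]
```

    Run snapshot() right after making the backup and verify() every few months; an empty list means every file still hashes the same.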

    Originally posted by Wester547 View Post
    Brand doesn't matter.
    You are probably right, but I think there are particular brands and models that really do fail more than others. The Toshiba 2.5" laptop HDDs come to mind here.

    Comment


      #22
      Re: <plays taps> 500GB bites the dust...

      Originally posted by momaka View Post
      Problem with running HDDs 24/7 for too long is that the bearings (be it oldschool ball bearings or the fluid bearings that most HDDs use now) on both the headstack and the spindle motor wear out a bit. And when they wear out, they can screw up the tolerances between the heads and the platters just enough to make the HDD not boot.
      Well.... you're not going to believe this....

      Here is a ST340016A which braved over 100,000 power on hours and 65,000 start/stop counts (not power cycles though) with no bad sectors. It ran 24/7 for many years. It only has 153 power cycles, though. I don't think it would have lasted so long if it had thousands of cycles. And the bearings on the head stack? Are you making reference to the contact start/stop slider? I wasn't aware that the heads had "bearings", unless you mean the cushion of air that they sit on ("air bearing"), or the material the heads are made of, ferrite wrapped in a coil of wire?

      When you power OFF the HDD when not in use, you save the HDD some power-on hours at the expense of power cycles. Both POH and power cycles are not good for the HDD when you reach a certain number, so I think it's good to balance them.
      I suppose power cycles, or thermal cycles, aren't good for drives with metal platters because metal expands and contracts as it heats up and cools down. With glass platters, this isn't an issue.
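
      For a sense of scale, here's a back-of-envelope sketch of that expansion, assuming a 95 mm platter, typical handbook expansion coefficients (roughly 23 ppm/K for aluminum vs. about 3 ppm/K for glass), and a 30 °C swing between cold start and operating temperature:

```python
# Back-of-envelope thermal expansion of a 3.5" platter over one
# warm-up cycle. The coefficients are typical handbook values and
# the 30 K temperature swing is an assumption.
ALU_CTE = 23e-6     # aluminum alloy, per kelvin
GLASS_CTE = 3e-6    # glass / glass-ceramic, per kelvin
DIAMETER_MM = 95.0
DELTA_T = 30.0      # kelvin, cold start -> operating temperature

alu_growth_um = DIAMETER_MM * ALU_CTE * DELTA_T * 1000    # micrometers
glass_growth_um = DIAMETER_MM * GLASS_CTE * DELTA_T * 1000

print(f"aluminum platter diameter grows ~{alu_growth_um:.1f} um per cycle")
print(f"glass platter diameter grows ~{glass_growth_um:.1f} um per cycle")
```

      Tens of micrometers of movement per cycle on aluminum versus a few on glass, which is a rough illustration of why glass platters shift around less over repeated thermal cycles.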

      In my experience, both the Barracuda ATA IV and 7200.7 don't like high POH. In this case, 16,000 POH seems like very little, but it's actually quite a bit for these HDDs. From what I've seen of the 7200.7 line, after 10k to 15k hours they become much more likely to fail. I think it has something to do with the spindle motor bearings just not being able to handle it. Yours taking much longer to spin up does sound a bit like a spindle motor bearing problem. But of course, it could also be that the spindle motor driver on the circuit board is faulty on one phase.
      Well, I understand that the multiplatter Seagate drives tend to be susceptible to seized FDB spindle bearings, usually because of being mishandled. I don't think the spindle motor chip is the issue as it isn't the notorious SMOOTH chip but that doesn't preclude the window of failure, of course. I think it might take longer to spin up because of oxidized power pins, but that's a big if. It's true that not every drive is intended to run 24/7. As for my old dead ST340016A, before it made that clacking noise, it made a much quieter, rhythmic seeking noise, as if it was trying to read the service area but couldn't. It does not take much longer to spin up - only slightly longer.

      7200.7 - 7200.10 drives have another issue, where the coating of film used to protect the magnetic layer starts falling off and sticking to the read/write heads. You can imagine what happens next. I think I have a dead ST380011A (no bad sectors, 21,500 power on hours, 1,700 power cycles) that fell victim to that problem. It spins up normally but doesn't seem to be capable of reading the service area, and eventually the heads seem to be stuck in an endless loop of seek failure.

      What I like about these older HDDs like the ATA IV and 7200.7 is they still use the old parallel recording technology. So even if you don't use the HDD for a very long time, the chances of the data going corrupt are much smaller than on the high data density perpendicular technology HDDs of today.
      Longitudinal recording technology sounds better on paper, but most failures I see are because of bad sectors or head crashes, or PCB failures. Although I wouldn't entirely rule out the notion. Even on old longitudinal drives, you should occasionally refresh data so as to prevent it from going corrupt.
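
      "Refreshing" here just means reading the data and writing the same bytes back so the magnetization gets renewed. A crude file-level sketch of the idea (for a whole drive you'd use a sector-level tool instead, and only with a verified backup in hand first):

```python
import os

def refresh_file(path, chunk=1 << 20):
    """Read each chunk and write the identical bytes back in place,
    renewing the magnetic recording without changing the contents."""
    with open(path, "r+b") as f:
        while True:
            pos = f.tell()
            data = f.read(chunk)
            if not data:
                break
            f.seek(pos)           # rewind to where the chunk started
            f.write(data)         # rewrite the same bytes
            f.flush()
            os.fsync(f.fileno())  # make sure it actually hits the platters
```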

      Also, I want to warn people that flash drives are not very good for long-term backups either. They can die out of the blue too. I had a 4 GB one die on me with barely any use. The interesting thing is that it has a Samsung flash IC, so one would think it should have been a reliable flash drive. Yet, it just quit without any prior signs. Stranger still, the OS could still see the flash drive and even the files on it. It just couldn't read anything from, or write anything to, the flash drive.
      SSDs aren't impervious to failure either, but I will admit they have come a long way. I have an 8GB Sansa flash drive with major performance issues, but all the data is still intact for whatever reason.

      So, as always, the most secure way to prevent data loss is to keep multiple backups. And if your most important data isn't too much and can fit on a standard 700 MB CD, then by all means burn it on a CD. I've yet to have a burned CD fail on me.
      Mariushm, a while back, posted in a thread expressly stating that the CD-Rs and CD-RWs (and DVD-R/+RWs) available to the consumer aren't designed to last forever either. They have a thinner layer of chemical substrate and reflective material compared to those used for movies, music, older CD-Rs, and interactive entertainment. So there is really no surefire solution except to backup ad nauseam.

      You are probably right, but I think there are particular brands and models that really do fail more than others. The Toshiba 2.5" laptop HDDs come to mind here.
      Yeah, Toshiba drives are shit. I remember almost nine years ago that I had a MK802GAA from an 80GB Microsoft Zune die the second day out of the box. Of course it could have been dropped at the store or factory, but somehow I think it died because Toshiba is plain shit. I have a working MK8022GAA with no issues in an 80GB iPod Classic, but how much longer it will last is anyone's guess. I had a couple more of those, but the crap 80GB Samsung Spinpoint HS081HA drives in them developed tons of bad sectors. One after a whopping 30 hours of use, the other after about 300 hours of use. They were not mishandled.
      Last edited by Wester547; 10-04-2016, 11:16 AM.

      Comment


        #23
        Re: <plays taps> 500GB bites the dust...

        Yeah, I was also going to say...though 100K hours beat me by a "bit"...
        I still have my old RAID5 array with 4 120GB disks - two Maxtors and two Seagates. The two Maxtors exceeded 70K POH (after correcting the reporting "bug"), and the two Seagates are around 50K POH. All four disks still seem just fine; I ended up redistributing them to other machines (and hope that they don't die now that they're no longer RAIDed).

        It was just a pity that the 60K POH 500G bought the farm. The drive doesn't really sound much different than any other spinning up drive, I don't think...

        My worst POH drives?
        3. a 400MB Maxtor. The drive died in a month after I bought it. It was obviously dropped by the computer manufacturer (a mom&pop shop)
        2. A 500GB HGST. I didn't put many hours on it but was trying to qualify it for my RAID. Died after 5 power on cycles, no longer spins up.
        1. a 30GB Quantum. Died after 8 hours, bearing was toast.

        All of these were luckily RMA'ed.

        I haven't had much experience with Toshiba, I have a few right now but have not been pumping hours on them. Time will tell, but I have to say I've gotten drives from most manufacturers at this point and they all suck in one way or another, so just get the cheapest and backup, backup, backup (and RAID).

        Comment


          #24
          Re: <plays taps> 500GB bites the dust...

          7200.7s seem to be one of the worst! And I noticed that 99 percent of them are made in China when most HDDs aren't... I had to return one in a real short time in 2005, because it was making unusual squeaking sounds...

          Comment


            #25
            Re: <plays taps> 500GB bites the dust...

            Originally posted by RJARRRPCGP View Post
            7200.7s seem to be one of the worst! And I noticed that 99 percent of them are made in China when most HDDs aren't... I had to return one in a real short time in 2005, because it was making unusual squeaking sounds...
            The failed ST340016A in question was made in Singapore (as are all Barracuda ATA IV drives). I believe the noises you reference may actually be the routine short S.M.A.R.T. self-test that most 7200.7 drives are programmed to run within the first 8 hours of operation (there is no way to turn it off outside of user activity). The only caveat is that it causes the seek count (seek error rate) to go up quite a bit. I've seen 7200.7+ drives manufactured in Thailand, China, and Singapore with no detectable pattern among the rash of failures regarding country of origin.

            Comment


              #26
              Re: <plays taps> 500GB bites the dust...

              Originally posted by Wester547 View Post
              Well.... you're not going to believe this....

              Here is a ST340016A which braved over 100,000 power on hours and 65,000 start/stop counts (not power cycles though) with no bad sectors.
              I believe it.
              Not saying that it is impossible for the ATA IV and 7200.7 to get such a high POH count, but rather that they just start being more problematic past 10k-15k. Again, that is from my experience briefly helping the IT dept. of a small office complex of 40-50 desktop computers with mostly Seagate 7200.7 drives.

              Originally posted by Wester547 View Post
              And the bearings on the head stack? Are you making reference to the contact start/stop slider?
              Oops, sorry, I meant the actuator arm (to which the head stack is attached).

              Originally posted by Wester547 View Post
              I suppose power cycles, or thermal cycles, aren't good for drives with metal platters because metal expands and contracts as it heats up and cools down. With glass platters, this isn't an issue.
              Actually, early glass-platter IBM drives (particularly Deskstar, aka "Deathstar") had most of their problems exactly because of the glass platters - the magnetic coating on top of the platters shrunk and expanded much more than the platters themselves, which is why many IBM Deskstar HDDs ended up with so much "pixie dust" inside them when they really failed badly.

              Or perhaps one could view this as an improvement over the format C: command.

              Originally posted by Wester547 View Post
              Mariushm, a while back, posted in a thread expressly stating that the CD-Rs and CD-RWs (and DVD-R/+RWs) available to the consumer aren't designed to last forever either. They have a thinner layer of chemical substrate and reflective material compared to those used for movies, music, older CD-Rs, and interactive entertainment. So there is really no surefire solution except to backup ad nauseam.
              Well, there is no eternal data storage, period. Even stone inscriptions will eventually erode. But back on the topic of today's tech: CD-Rs have a guaranteed shelf life of up to 50 years. Same goes for DVD-Rs. But due to higher data density, I find DVDs are much more intolerant of big scratches, so I don't trust them as much.

              Originally posted by Wester547 View Post
              Yeah, Toshiba drives are shit. I remember almost nine years ago that I had a MK802GAA from an 80GB Microsoft Zune die the second day out of the box. Of course it could have been dropped at the store or factory, but somehow I think it died because Toshiba is plain shit.
              The worst ones, IMO, were the 120-300 GB 2.5" models found in laptops and the PS3. We had a big stack of them (the dead Toshiba HDDs) at the repair shop I worked at. In comparison, the Seagate Momentus HDDs rarely gave us problems. I still have a ST9402115AS (40 GB, 2.5") with 1700-odd bad sectors that still works. It was thought to be defective due to a bad SATA connector, and someone tossed it across the room into the junk pile at our shop. I decided to test it one day, and after a bit of clicking and trying to "fix itself", it eventually came online and started working. I took it home and even used it in a computer as the main HDD for about half a year before the computer developed BGA issues with the SB.

              Anyways, that aside, I think the 3.5" Toshiba HDDs are made by someone else nowadays (I can't remember who Toshiba merged with), so the 3.5" HDDs may actually be quite alright. Something tells me Hitachi, but I think someone should definitely check that before taking my word for it.

              Originally posted by RJARRRPCGP View Post
              7200.7s seem to be one of the worst!
              If you think the 7200.7 are the worst HDDs, then perhaps you should stop using your computer right now and turn it off, because it could fail any second.
              Last edited by momaka; 10-06-2016, 07:17 PM.

              Comment


                #27
                Re: <plays taps> 500GB bites the dust...

                Originally posted by momaka View Post
                I believe it.
                Not saying that it is impossible for the ATA IV and 7200.7 to get such a high POH count, but rather that they just start being more problematic past 10k-15k. Again, that is from my experience briefly helping the IT dept. of a small office complex of 40-50 desktop computers with mostly Seagate 7200.7 drives.
                Well, higher data density drives with more platters and heads are probably more prone to failure than the lower density drives with only one platter and one or two heads.

                Oops, sorry, I meant the actuator arm (to which the head stack is attached).
                Oh, failed actuator. I guess that makes sense. Although I find it interesting that the other (my) ST340016A's actuator stopped working with significantly less use than the one (formerly my brother's which was gifted to me beforehand) that's somehow still in working condition. I guess it really is the luck of the draw, which HDD fails first. I'm starting to think that start/stop counts or power cycles are bad for the actuator, at least relative to power on hours. Wait a sec.... THIS is why head parking is a BAD thing...

                Actually, early glass-platter IBM drives (particularly Deskstar, aka "Deathstar") had most of their problems exactly because of the glass platters - the magnetic coating on top of the platters shrunk and expanded much more than the platters themselves, which is why many IBM Deskstar HDDs ended up with so much "pixie dust" inside them when they really failed badly.
                Yes, the issue was with antiferromagnetically coupled media sprinkling off the platters and contaminating the head stack, resulting in many a head crash and bad sectors galore. And many, many failed 75GXP and 60GXP drives, and a few other models IIRC. The 120GXP and 180GXP didn't suffer as much because IBM introduced a form of wear-leveling (with those drives) which allowed the heads to periodically move their position over the platters so they didn't "dwell" over any one spot for too long (significantly minimizing the chance of a head crash). My guess is that enabling APM would work to the same effect. They were still somewhat failure prone, though. I have one 41GB 120GXP that still works (but it has a worn ceramic ball bearing which is very noisy - probably bad lube) with 15,000 power on hours and 6,000 power cycles on it. It has 7 reallocations but is completely clean otherwise. Some of those 75GXP drives failed so epically that upon disassembling the drives, it was found that the magnetic layer had totally fallen off the glass platters, leaving them completely transparent to the naked eye. Ghetto.

                Well, there is no eternal data storage, period. Even stone inscriptions will eventually erode. But back on the topic of today's tech: CD-Rs have a guaranteed shelf life of up to 50 years. Same goes for DVD-Rs. But due to higher data density, I find DVDs are much more intolerant of big scratches, so I don't trust them as much.
                So you don't think they exaggerate the shelf life, even a tad? I don't think scratches are very good for discs to begin with, and possibly even worse for the optical laser pickup assembly, as the laser has to work harder in order to read scratched discs and so will wear much faster that way.

                The worst ones, IMO, were the 120-300 GB 2.5" models found in laptops and the PS3. We had a big stack of them (the dead Toshiba HDDs) at the repair shop I worked at. In comparison, the Seagate Momentus HDDs rarely gave us problems. I still have a ST9402115AS (40 GB, 2.5") with 1700-odd bad sectors that still works. It was thought to be defective due to a bad SATA connector, and someone tossed it across the room into the junk pile at our shop. I decided to test it one day, and after a bit of clicking and trying to "fix itself", it eventually came online and started working. I took it home and even used it in a computer as the main HDD for about half a year before the computer developed BGA issues with the SB.
                Well, the newer Seagate Momentus drives are actually really problematic for a complex scad of reasons. Not quite as bad as Toshiba but in my experience, Western Digital, Hitachi, and Seagate mobile drives are actually very failure prone.

                Anyways, that aside, I think the 3.5" Toshiba HDDs are made by someone else nowadays (I can't remember who Toshiba merged with), so the 3.5" HDDs may actually be quite alright. Something tells me Hitachi, but I think someone should definitely check that before taking my word for it.
                Western Digital/Hitachi, yes.
                Last edited by Wester547; 10-06-2016, 07:58 PM.

                Comment


                  #28
                  Re: <plays taps> 500GB bites the dust...

                  Originally posted by Wester547 View Post
                  Well, the newer Seagate Momentus drives are actually really problematic for a complex scad of reasons. Not quite as bad as Toshiba but in my experience, Western Digital, Hitachi, and Seagate mobile drives are actually very failure prone.
                  Mobile HDDs in general are very failure prone. The combination of the shock and vibration they are subjected to and the much higher operating temperatures (due to their smaller size and the fact that they are often mounted in small, poorly ventilated compartments) shortens their life significantly compared to desktop drives (except maybe for series with known early-failure issues). SSDs are becoming far more common, and as far as I'm concerned, the sooner they fully replace mechanical HDDs in laptops the better; a laptop is not a good environment for a device with close tolerances spinning at 5,400-7,200 RPM.

                  Comment


                    #29
                    Re: <plays taps> 500GB bites the dust...

                    Originally posted by Wester547 View Post
                    Well, higher data density drives with more platters and heads are probably more prone to failure than the lower density drives with only one platter and one or two heads.
                    What do you mean by this, only use really slow, low capacity, outdated drives?

                    Given equicapacity platters, of course the one with the larger number of platters will hold more data (and potentially be faster).

                    Given two equicapacity drives, the one with more platters will have lower density platters, thus it will have lower data density.

                    So I'm confused what you mean here.

                    Comment


                      #30
                      Re: <plays taps> 500GB bites the dust...

                      Originally posted by eccerr0r View Post
                      What do you mean by this, only use really slow, low capacity, outdated drives?
                      No, of course not. It was just a general statement. But I'll explain since Pandora's Box has now been opened.

                      BTW, whether they are "slow" or not is somewhat debatable, seeing as how those older drives actually have excellent random access and seek performance. It is their data throughput that puts them far behind modern drives, and their very low capacity that makes them unusable for modern applications. But anyway...

                      Lower data density means read error rates are much lower than those of a larger drive. The drive can essentially correct errors so quickly that you wouldn't even notice them (unless of course the magnetic layer is bad and thus your data, then all bets are off...). The fact that the flying height needs to be lower and more precise with modern drives (especially to achieve higher data throughput) means that the chance of a head crash is higher and thereby failure is more likely. The advantage of a single-platter vs. multiplatter drive is that the target zones are smaller and there are fewer heads to position simultaneously on single-platter drives. Not to mention multiplatter drives generate more vibration and heat. It's true that a single 500GB hard drive with two 250GB platters will yield lower data density than a single 500GB drive with one 500GB platter. The kind of comparison I was thinking of was a very small drive (40GB, or a single-platter drive with 20GB per side) vs. a very large drive (1TB, two 500GB platters).
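
                      Putting that comparison into numbers (the capacities are the ones above; the per-surface figure just divides capacity across the read/write heads, assuming two heads per platter):

```python
def per_surface_gb(capacity_gb, platters, heads_per_platter=2):
    """Capacity carried by each recording surface (one head per surface)."""
    return capacity_gb / (platters * heads_per_platter)

old = per_surface_gb(40, platters=1)     # 40GB single-platter drive
new = per_surface_gb(1000, platters=2)   # 1TB drive with two 500GB platters
print(f"old: {old:.0f} GB/surface, new: {new:.0f} GB/surface, "
      f"ratio ~{new / old:.1f}x")
```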

                      In an ideal world, we wouldn't even need to use mechanical HDDs. Like electrolytic capacitors, they are prone to failure by way of their very construction. In an ideal world, we would have very cheap, high storage volume SSDs (comparable to HDDs in capacity) whose bugs and issues have been ironed out. But nothing is perfect, especially electronics (the debate of HDD vs. SSD reminds me a bit of the electrolytic capacitor vs. solid polymer debate...).

                      As for the reliability of mobile HDDs, the ones I've seen die were kept cool and weren't abused in any shape or form. I suppose they are more fragile though, seeing as how they are smaller, but more rigid at least (higher operating shock / non-operating shock spec than their 3.5" counterparts)...
                      Last edited by Wester547; 10-06-2016, 11:46 PM.

                      Comment


                        #31
                        Re: <plays taps> 500GB bites the dust...

                        this morning as i decided to reorganise my data storage, i just had two samsung f3 1tb hd103sj drives that i had in windows software raid 0 become undetectable upon power on. the culprit? dirty or tarnished sata goldfingers on the hard drive. wiped the goldfingers down with ipa and the drives worked fine after that.

                        almost gave me a scare and heart attack as i thought i had two bricked drives! i luv those samsung f3 1tb drives as they are the fastest 1tb drives ever made, even faster than the wd caviar black 1tb. because of that, they are perfectly suited for making a 2tb raid 0 array for pure throughput. i use them for heavy bittorrenting on my 250/100mbps fibre line.

                        i wonder how many ppl had supposedly bricked or undetectable hard drives and lost all their data and had to rma their drives because they didnt know any better about bad goldfinger sata contacts. fuck you rohs srsly...

                        Comment


                          #32
                          Re: <plays taps> 500GB bites the dust...

                          Originally posted by Wester547 View Post
                          In an ideal world, we wouldn't even need to use mechanical HDDs. Like electrolytic capacitors, they are prone to failure by way of their very construction.
                          Well, in the ideal world, even CapXon electrolytic capacitors wouldn't fail, because they would be made with ideal (pure) aluminum.

                          That said, SSDs/flash have their own set of problems. In particular, from what I've read, the "memory cells" that make up the data on your SSD have become extremely small now. The problem with that is they can store only a very limited amount of charge, which means data can get corrupted more easily.

                          Anyways, what needs to be realized here is that you can't fit unlimited data in an object that takes up limited space, regardless of the method used (i.e. be it mechanical or electrical). Currently, we are hard-limited by atomic sizes. So you can't have a data bit take up less space than an actual atom (in fact, not even space equal to one with current technology). But we are getting somewhat close, so don't expect drive capacities to continue to grow like they have in the past 20 years. (Just like with CPUs and the way we hit a barrier at a few GHz.)
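
                          As a crude sanity check of that atomic limit, here's one bit per atom in a single 2D layer, with ~0.3 nm atom spacing on one side of a 95 mm platter (all rough assumptions, purely for illustration):

```python
import math

# Crude upper bound: one bit per atom on a flat 2D surface.
ATOM_SPACING_M = 0.3e-9        # ~0.3 nm between atoms (assumption)
PLATTER_RADIUS_M = 95e-3 / 2   # one side of a 3.5" platter

area_m2 = math.pi * PLATTER_RADIUS_M ** 2
bits = area_m2 / ATOM_SPACING_M ** 2
terabytes = bits / 8 / 1e12

print(f"~{terabytes:,.0f} TB per platter side at one bit per atom")
```

                          That works out to thousands of TB per surface, so the literal one-atom-per-bit wall is still a long way off; in practice a stable magnetic bit needs many thousands of atoms' worth of grains, which is why real drives run into their (superparamagnetic) limit much sooner.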

                          Originally posted by Wester547 View Post
                          Yes, the issue was with antiferromagnetically coupled media sprinkling off the platters and contaminating the head stack, resulting in many a head crash and bad sectors galore.
                          ...
                          Some of those 75GXP drives failed so epically that upon disassembling the drives, it was found that the magnetic layer had totally fallen off the glass platters, leaving them completely transparent to the naked eye.
                          Yup, that's what I mean by "magic pixie dust"

                          Originally posted by Wester547 View Post
                          I have one 41GB 120GXP that still works (but it has a worn ceramic ball bearing which is very noisy - probably bad lube)
                          Not bad lube. Just nature of the design. Ball bearings get noisy when they wear out, just like car bearings... or if you have a somewhat recent LG washer (made in the past decade or so), then you can have bad bearings in that too... but I believe washers/laundry is a little too far off-topic to continue the discussion here and consider it OK.

                          Originally posted by Wester547 View Post
                          So you don't think they exaggerate the shelf life, even a tad?
                          No, I think it is possible for a burned CD to last 50 years. Just store it right (away from *any* light, high heat, and in a proper case so it doesn't warp), and it should last. I have CDs that I burned that are close to 15 years old now. Still a long way to go before they get to 50, but I've never had a problem with them so far, and unlike a HDD or flash drive (or any other electronic data device), I don't expect they will die a sudden death either. Most likely, CD/DVD drives will be long gone before that, and the old CD/DVD drives might be too old by then and could develop laser issues even if unused.
                          Last edited by momaka; 10-11-2016, 06:45 PM.

                          Comment


                            #33
                            Re: <plays taps> 500GB bites the dust...

                            Well, speaking of dirty SATA contacts, I wonder if that's what's wrong with the OP's hard drive... like kaboom said in another thread, this RoHS junk can "look" fine but be bad.

                            Originally posted by momaka View Post
                            Well, in the ideal world, even CapXon electrolytic capacitors wouldn't fail, because they would be made with ideal (pure) aluminum.
                            In an ideal world, Sacon FZ capacitors wouldn't have failed either (yes, that is a shameless plug at my avatar ).

                            I don't really think impurities in the aluminum are the issue anymore. Many of the Taiwanese brands are actually sourcing high purity aluminum foil through a Korea-Japan joint venture known as K-JCC (and you would be surprised which brands they list there on the "main customers" page.... Teapo, Lelon, Hermei.... some of the worst brands ever). My guess is that the issue could be narrowed down to either bad electrolyte (the same issue that KZG has) or just poor QC. Of course impurities in the aluminum, such as too much copper, magnesium, iron, and zinc, etc, can slowly penetrate through the native oxide barrier and porous anode oxide layer, and that would be bad news for the electrolyte as the aforesaid impurities may form couples with the aluminum and cause a galvanic, gaseous reaction with the ions in the solution. But that's a fairly slow reaction (although heat would certainly accelerate it).

What I also noticed is that the huge flurry of "bad" capacitors that started flooding in from the Chinese and Taiwanese brands in the late 1990s / early 2000s came right around the time the highly aqueous (high-H2O-content) electrolyte was adopted to increase conductivity and decrease price. That type of electrolyte (as well as the highly basic electrolyte that many Chinese brands use, into which aluminum easily dissolves) needs special additives and depolarizers to prevent excessive H2 generation, hydration, and foil corrosion. I think this is the reason for all those "high leakage current" capacitor failures, unless the shoddy electrolyte has simply broken down and can no longer repair defects in the oxide layer (or the oxide layer has dissolved into the solution, or has been corroded by incompatible cleaning agents or a toxic environment). All of the reasons above are also reason enough for unused electrolytics to bulge.

                            That said, SSDs/flash have their own set of problems. Particularly, from what I've read, the "memory cells" that make up the data on your SSD have become extremely small now. So the problem with that is they can store very limited amount of charge, which means data can get corrupt easier.
                            Yes, they have a limited number of read/write cycles. Mechanical HDDs don't have that problem (but at least you won't get the click of death with SSDs ).
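For a rough sense of how those write-cycle limits play out, here is a back-of-envelope endurance estimate. Every number in it is an assumed illustrative figure, not a spec from this thread:

```python
# Back-of-envelope SSD endurance estimate. All values below are
# illustrative assumptions, not figures from the thread.
capacity_gb = 500
pe_cycles = 3000           # assumed program/erase cycles per cell
write_amplification = 2.0  # assumed controller write overhead

# Total host data writable before the cells are nominally worn out, in TB:
tbw = capacity_gb * pe_cycles / write_amplification / 1000
print(tbw)  # 750.0 TB under these assumptions

# At an assumed 20GB of host writes per day:
years = (tbw * 1000) / 20 / 365
print(round(years, 1))  # 102.7 years of writes before nominal wear-out
```

Under assumptions like these, a typical desktop workload takes a very long time to wear the cells out, which suggests the sudden no-warning deaths discussed in this thread are more often the controller or firmware than exhausted write cycles.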

                            Anyways, what needs to be realized here is that you can't fit unlimited data on an object that takes on limited space, regardless of the method used (i.e. be it mechanical or electrical). Currently, we are hard-limited by atomic sizes. So you can't have data bit take less space than an actual atom (in fact, not even equal to with current technology). But we are getting somewhat close, so don't expect to see drive capacities continue to grow like they have in the past 20 years. (Just like with CPUs, the way we hit a barrier at a few GHz.)
                            Well, the issue isn't with unlimited data IMHO. It's with being able to store what data you have reliably in the long term. I'm not asking for it to last forever, I just wish there was a truly reliable storage medium.

                            Not bad lube. Just nature of the design. Ball bearings get noisy when they wear out, just like car bearings... or if you have a somewhat recent LG washer (made in the past decade or so), then you can have bad bearings in that too... but I believe washers/laundry is a little too far off-topic to continue the discussion here and consider it OK.
Well, to my knowledge, ball bearings that sound distinctly whiny are a byproduct of dried-out lubricant: the oil has gone from the grease, so the balls are rolling dry. As I understand it, the grease has a shelf life even without any use; it's only a matter of time (even if many years) before the oil gradually bleeds out of the grease (sealed ball bearings last longer than shielded ball bearings in this respect, and shielded bearings are what the DC 2BB fans we know and love use). That said, I do wonder which would last longer, a well-maintained sleeve bearing fan or a well-designed double ball bearing fan.

                            No, I think it is possible for a burned CD to last 50 years. Just store it right (away from *any* light, high heat, and in a proper case so it doesn't warp), and it should last. I have CDs that I burned that are close to 15 years old now. Still a long way to go before they get to 50, but I've never had problem with them so far, and unlike a HDD or flash drive (or any other electronic data device), I don't expect they will die a sudden death either. Most likely CD/DVD drives would be long gone before that, and the old CD/DVD drives might be too old then and possibly develop laser issues, even if unused.
                            Why would they develop laser issues even if unused? Unless it has something to do with the optical pickups "losing" their intensity as they sit in storage.
                            Last edited by Wester547; 10-11-2016, 08:26 PM.



                              #34
Re: <plays taps> 500GB bites the dust...

                              Originally posted by Wester547 View Post
                              Yes, they have a limited number of read/write cycles. Mechanical HDDs don't have that problem (but at least you won't get the click of death with SSDs ).
                              I love the click-of-death, though - at least you know the drive is toast and pretty much that's that. With an SSD/flash drive that suddenly dies without any signs, you start wondering if it's your computer, the cables, the connectors, the power supply... there's no telling until you check the drive on another machine.
                              With click-of-death, you clean all of the connectors for the pre-amps, and if that doesn't save the HDD... sorry! Chuck it or send it off to data recovery.

                              Originally posted by Wester547 View Post
                              Well, the issue isn't with unlimited data IMHO. It's with being able to store what data you have reliably in the long term. I'm not asking for it to last forever, I just wish there was a truly reliable storage medium.
                              That's what I was trying to get to, though (but failed ): we are getting to a point where we are starting to sacrifice reliability in order to gain more storage space.

                              Originally posted by Wester547 View Post
                              Well, it's to my knowledge that ball bearings that sound distinctively whiney are a byproduct of dried out lubricant. The oil has gone from the grease so the ball bearings are rolling dry.
                              That too. But when ball-bearings get very worn out, even if you put new grease/oil in them, they will dry out again rather quickly.

                              Originally posted by Wester547 View Post
                              That said, I do wonder which would last longer, a well maintained sleeve bearing fan or a well designed double ball bearing fan.
Double BB, for sure. Sleeve bearings wear out rather fast, especially if the fan shaft is oriented horizontally. That said, I rotate my sleeve bearing fans 90 degrees if they have been run in the same horizontal orientation for too long.

                              Originally posted by Wester547 View Post
                              Unless it has something to do with the optical pickups "losing" their intensity as they sit in storage.
                              From what I remember reading, I think that's exactly it. Lasers are consumable items, so it's only a matter of when they will fail/wear out (not IF).
                              Last edited by momaka; 10-11-2016, 08:49 PM.



                                #35
Re: <plays taps> 500GB bites the dust...

                                The click-of-death actually scares me.... the last thing I want to do is power up my computer and suddenly hear "wheeeeee...... cha-chunk cha-chunk cha-chunk cha-chunk", with no warning whatsoever.

                                Double BB, for sure. Sleeve bearings wear out rather fast, especially if the fan shaft is oriented horizontally. That said, I rotate my sleeve bearing fans 90 degrees if they have been operated for too long one way in a horizontal direction.
                                So, even if the sleeve bearing fan has an excellent amount of lubricant and is regularly disassembled for cleaning and maintenance (restoring the shaft to proper operation), and positioned vertically, 2BB fans will last longer?

                                From what I remember reading, I think that's exactly it. Lasers are consumable items, so it's only a matter of when they will fail/wear out (not IF).
                                I would think they would wear out much faster if you constantly read and burned discs, though, especially with a weak pick-up.



                                  #36
Re: <plays taps> 500GB bites the dust...

                                  Originally posted by Wester547 View Post
                                  Well, speaking of dirty SATA contacts, I wonder if that's what's wrong with the OP's hard drive... like kaboom said in another thread, this RoHS junk can "look" fine but be bad.
That's exactly it. On visual inspection, the gold fingers look clean, but when I wiped them with a tissue dampened with IPA, the tissue came away quite black and dirty. Probably a phenomenon where humid air forms a damp film on metallic surfaces, which then attracts dust and dirt. The price to pay for living in a warm and humid tropical climate, I guess.

Next, back on the topic of hard drives: after all is said and done, I still think hard drives are the most economical method of storage. Flash storage is still expensive on a cost-per-gigabyte basis. As for optical storage, a Blu-ray burner can cost as much as a used 2TB hard drive, and on top of that you need 80 single-layer Blu-ray discs to match the capacity of a 2TB hard drive. So as far as cost-effective/efficient backup media goes, I'm sticking to RAID 1-style hard drives for backing up my "prawn" collection...
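The disc-count arithmetic above can be sketched out like this. Only the capacities come from the post; the prices are made-up assumptions for illustration:

```python
# Cost-per-gigabyte comparison: 2TB HDD vs single-layer Blu-ray discs.
BD_R_CAPACITY_GB = 25     # single-layer BD-R disc
HDD_CAPACITY_GB = 2000    # 2TB hard drive

discs_needed = HDD_CAPACITY_GB // BD_R_CAPACITY_GB
print(discs_needed)  # 80 discs to match one 2TB drive

# Hypothetical prices (assumptions, not from the post):
hdd_price = 60.0      # used 2TB HDD
bd_r_price = 1.0      # per BD-R disc
burner_price = 60.0   # Blu-ray burner, roughly the price of a used 2TB HDD

hdd_cost_per_gb = hdd_price / HDD_CAPACITY_GB
bd_cost_per_gb = (burner_price + discs_needed * bd_r_price) / HDD_CAPACITY_GB
print(hdd_cost_per_gb, bd_cost_per_gb)  # 0.03 0.07 (dollars/GB) with these assumptions
```

Even with generous assumptions for the optical side, the one-time burner cost plus per-disc cost keeps it above the HDD on a per-gigabyte basis, before even counting the hassle of juggling 80 discs.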

                                  also, i discovered this post on a local forum a few days ago. it was dated august 14th:
                                  Originally posted by creatorzz
                                  Hey Guys,

                                  I've installed for my friend a NAS about a year or 2 back and 2 drives has returned bad sectors with the RAID-5 compromised. There's a total of 4 x RED Pro 4tb in the NAS with 1 of them throwing bad sectors/errors since friday.

                                  I started backing up their data but today morning, a 2nd drive started having errors.

                                  Does anyone know how to read the health info and any advise please. I'm still struggling to copy out terabytes of data.

                                  What i would like to learn is why 2 drives will have issues under a week? can both drive be RMAed?

                                  Thank you so much in advance!

                                  Drive 3 & 4 SMART info


I'm guessing NAS boxes often use crap caps and/or poorly built power supplies or power bricks, and the ripple then kills the hard drives. From my personal experience, WD hard drives seem sensitive to the quality of power from the power supply. Using a junk PSU is the fastest way to kill a high-data-density drive.

I had a WD Black 640GB drive develop bad sectors after 2.5 years of 12x7 use. I was a nub back then and ran it with a fuh-joo-yoo Thermaltake PSU. The replacement 750GB Black also died after 2 years of 12x7 use with the fuh-joo-yoo PSU.

Now, just FYI, the Red Pro in that guy's NAS isn't like the Green line of drives. The non-Pro Red drives are just Green 5400RPM drives rated for 24x7 NAS use, whereas the Red Pros are 7200RPM and are like Caviar Blacks rated for 24x7 usage. Red Pro drives are also more expensive than Black drives for the same capacity, so that rules out poor hard drive build quality.

So I guess modern high-density drives really are sensitive to ripple and noise from the power supplied to the drive. If the power is even a tiny bit noisy, it messes big time with the weak signals picked up by the heads before they reach the pre-amp, and if the drive reads bad or corrupted data about the drive geometry from the HPA during power-on, you get a bricked drive at power-on. I also guess that guy found out the hard way that RAID 5 isn't a reliable form of backup; a number of members here have complained of simultaneously failed drives in their RAID 5 arrays.
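As a rough illustration of why RAID 5 arrays so often lose a second drive during a rebuild, here is the standard unrecoverable-read-error (URE) estimate. The URE rate and drive sizes are assumptions; 1 error per 1e14 bits is a commonly quoted consumer-drive spec figure:

```python
# Probability that a RAID 5 rebuild hits at least one unrecoverable
# read error (URE) while reading every remaining disk end to end.
# All figures are assumptions for illustration.
ure_per_bit = 1e-14    # assumed URE rate: 1 error per 1e14 bits read
disk_bytes = 4e12      # 4TB drives, as in the quoted NAS
surviving_disks = 3    # 4-disk RAID 5 with one drive already failed

bits_to_read = surviving_disks * disk_bytes * 8
p_rebuild_hits_ure = 1 - (1 - ure_per_bit) ** bits_to_read
print(round(p_rebuild_hits_ure, 2))  # ~0.62 with these assumptions
```

Under these assumptions the rebuild has better-than-even odds of tripping over at least one unreadable sector even if no second disk mechanically fails, which is one more reason to treat RAID as redundancy rather than backup.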
                                  Last edited by ChaosLegionnaire; 10-13-2016, 05:28 AM.



                                    #37
Re: <plays taps> 500GB bites the dust...

                                    I've scraped the underboard contacts and reseated the SATA connectors (which "appears" to be gold plated) many times, the drive is still deader than a doornail.



                                      #38
Re: <plays taps> 500GB bites the dust...

                                      What about contacts between board and headstack amp pins on bottom of the HDD? Those often get very corroded too.



                                        #39
Re: <plays taps> 500GB bites the dust...

Yeah, those are the "underboard" contacts that connect to the heads and spindle motor (excuse the nontechnical term)... no change in behavior at all. One would imagine that removing and replacing the board is itself a kind of reseating and might change the behavior somewhat? Hmm... it's never something easy.

