Easier Hardware-based raid management


    PowerEdge 1800, dual 3 GHz Xeons with HT. Yes, it's old, but it serves my purpose (SMB, DLNA, and a dedicated timed print client via a cron job). Bought it for $200 off my old boss many years back.

    OK, so Linux RAID is good; I've been using it for quite some time. But this time it's just not working right. I tried adding a matching drive and partition to the array, and even though I made matching partitions, mdadm thinks the joining one is "too small" (despite the partition having the same number of blocks). If I use the number of blocks + 1, it adds, but then the second partition comes up one block short and refuses to add.
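    A sketch of the size check involved (device names and sector counts below are hypothetical, not taken from this box): mdadm needs the new member to cover the array's data area ("Avail Dev Size") plus its own superblock, so two partitions with the same raw block count can still come up short.

    ```shell
    # Why "same number of blocks" can still be "too small".
    # On the real machine (as root) the two numbers would come from:
    #   mdadm --examine /dev/sda2 | grep 'Avail Dev Size'   # existing member
    #   blockdev --getsz /dev/sdb2                          # new partition
    # Hypothetical sector counts (512-byte sectors) for illustration:
    avail_dev_size=976508928
    new_partition=976508927
    if [ "$new_partition" -ge "$avail_dev_size" ]; then
      echo "large enough"
    else
      echo "too small"
    fi
    ```

    Cloning the partition table sector-for-sector with `sfdisk -d /dev/sda | sfdisk /dev/sdb` avoids the off-by-one in the first place.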

    I have a Dell SAS RAID card (LSI-based), PCI-E x8, that I haven't been using in forever. It's completely hardware (it can't do RAID 5, but I just need RAID 1).

    This card is SATA II; onboard is SATA I. It's LSI-based and has a good amount of cache, which actually worries me because there is no battery option on the card and no UPS on the server. Mind you, I don't often write to it; I just keep my tools, software, ISO images (for Windows installs mostly), games, and Liquid Mind MP3s for sleeping (minidlna to a Roku 1 with RCA outs hooked to my receiver).

    Is there any way to manage this RAID card remotely with a Linux tool, like with a web interface? I know there are command-line ones. Mind you, this is Ubuntu Server 18.04, no GUI.

    It only has IPMI 1.5, which is limited, so the only other possibility is a DRAC card, and I don't want to pay for one.

    I did a bunch of research and came up with nothing. Anyone know of an alternative solution?
    Cap Datasheet Depot: http://www.paullinebarger.net/DS/
    ^If you have datasheets not listed PM me

    #2
    Re: Easier Hardware-based raid management

    I know there is for Windows for LSI/3ware stuff... but your curveball is Linux. Have you checked LSI's site for such packages? They tend to be well supported under Linux.
    <--- Badcaps.net Founder

    Badcaps.net Services:

    Motherboard Repair Services

    ----------------------------------------------
    Badcaps.net Forum Members Folding Team
    http://folding.stanford.edu/
    Team : 49813
    Join in!!
    Team Stats



      #3
      Re: Easier Hardware-based raid management

      The last support I see for it is 16.04. At least the Ubuntu docs do list support for PERC 5/6. Thanks, I'll see where this goes.


        #4
        Re: Easier Hardware-based raid management

        Originally posted by Uranium-235 View Post
        Poweredge 1800, dual xeon 3ghz HT.

        Ok so linux Raid is good, I've been using it for quite some time. But this time it's just not working right. I tried adding a matching drive and partition to the array and it said that even though I made matching partitions, mdadm seems to think the joining on is "too small" (albeit partition has same number of blocks). I use number of blocks + 1 and it adds it, but the second partition now has one less needed to add, and refuses to add.
        This is part of The Joys of RAID -- when things go south, they do so in unexpected ways (leaving you with no access or no protection).

        Is there any way to manage this raid card remotely with a linux tool to do the equivalent? like...with a web interface? I know there are command line ones. Mind you this is ubuntu server 18.04, no gui.

        it only has ipmi 1.5, which is limited, so, the only other possible is a DRAC card, and I don't want to pay for one
        How completely do you want to be able to manage it? E.g., LoM can probably let you talk to the RAID BIOS in those cases where the box won't even boot! Any Linux (Windows) tool would require the array to be in some sort of bootable state before you could even begin to access it.

        [Most RAID BIOS interfaces aren't really GUI but, rather, screen-based text interfaces]



          #5
          Re: Easier Hardware-based raid management

           Well, again, it does have IPMI 1.5 support, which is hardly useful for a GUI. A DRAC card adds IPMI 2.0 support, but I don't know if either of these extends to the Dell SAS management (I know it can be done on other, newer Dell systems).


            #6
            Re: Easier Hardware-based raid management

             3ware/AMCC has Linux remote support (and, if nothing else, CLI through SSH), as do any of the Dell PERC cards via OpenManage.
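             If the web interface fights you, the same ground is covered by the OMSA command line over SSH. A sketch of typical invocations (the controller and vdisk IDs below are hypothetical; enumerate yours first):

             ```shell
             # Dell OpenManage Server Administrator CLI, run over SSH.
             # Controller and vdisk IDs are hypothetical; list yours first.
             omreport storage controller                 # enumerate controllers
             omreport storage vdisk controller=0         # virtual disk status
             omreport storage pdisk controller=0         # physical disk status
             # Kick off a consistency check on a virtual disk:
             omconfig storage vdisk action=checkconsistency controller=0 vdisk=0
             ```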

            (Insert witty quote here)



              #7
              Re: Easier Hardware-based raid management

              Originally posted by Uranium-235 View Post
              Well again it does have ipmi 1.5 support which is hardly useful for gui. A drac card adds ipmi 2.0 support, but I don't know if either of these extend to the dell sas management (I know it can be done on other, newer dell systems)
              What I'm trying to get at is how "inconvenient" will you allow ALL aspects of the RAID to be.

              In other words, if the box is sited in the next town over, then you'd want to be able to remotely troubleshoot it even if it fails to boot. (You might see if you can find a PC Weasel 2000 on eBay.)

              On the other hand, if it's hiding in a closet in your basement and fails to boot, you might grumble a bit but you COULD drag a monitor and keyboard down there to sort out the problem (presumably, this being a rare event).

              I suspect I have PERC 5's or 6's hiding in my box of SCSI HBAs if you decide they can address your needs. (PM) I can't comment on Linux support as I don't run Linux.

              You might also consider upgrading the server to one with more advanced remote support capabilities. I see lots of 2850's going to the scrapper, lately.



                #8
                Re: Easier Hardware-based raid management

                Originally posted by Curious.George View Post
                What I'm trying to get at is how "inconvenient" will you allow ALL aspects of the RAID to be.

                In other words, if the box is sited in the next town over, then you'd want to be able to remotely troubleshoot it even if it fails to boot. (You might see if you can find a PC Weasel 2000 on eBay.)

                On the other hand, if it's hiding in a closet in your basement and fails to boot, you might grumble a bit but you COULD drag a monitor and keyboard down there to sort out the problem (presumably, this being a rare event).

                I suspect I have PERC 5's or 6's hiding in my box of SCSI HBAs if you decide they can address your needs. (PM) I can't comment on Linux support as I don't run Linux.

                You might also consider upgrading the server to one with more advanced remote support capabilities. I see lots of 2850's going to the scrapper, lately.
                I have a PERC 5 in my parents' server... It runs Debian. OpenManage works fine; just add the custom repository and install. Easy.


                  #9
                  Re: Easier Hardware-based raid management

                  Originally posted by ratdude747 View Post
                  I have a perc5 in my parents' server... It runs Debian. Openmanage works fine- just add the custom repository and install. Easy.
                  Yes, but if the box doesn't BOOT, can you remotely access it? That was the nature of my question to the OP (if the box is located someplace else, you want to be able to troubleshoot EVERYTHING remotely -- hence the PC Weasel suggestion)



                    #10
                    Re: Easier Hardware-based raid management

                    Originally posted by ratdude747 View Post
                    I have a perc5 in my parents' server... It runs Debian. Openmanage works fine- just add the custom repository and install. Easy.
                    I did! Dell has a (technically unsupported) Ubuntu OpenManage for 16.04.4. I already found that in my searching days ago but had no luck with 18.04.

                    So I put the card in and installed 16.04 on RAID 1, and it added fine. I can't log in, though. I can if I use "Manage Webserver". I'm curious: the manage webserver says something about no browser JRE; is that required? I just tried to install JRE 8, and even though it's in IE, it still says no JRE, and it still will not log in.

                    It doesn't say "wrong password" (it's obviously correct, since I can use it with the manage webserver). It just loads the login page again.


                      #11
                      Re: Easier Hardware-based raid management

                      ^ No idea. I never did get that part to work. I just SSH'd in and used the CLI. It's a cop-out, but it got the job done. The bonus is that, since you have a Dell server, now you have all the hardware sensors too. Nice stuff.

                      Edit: it's an old repo; it worked fine for Debian Jessie at least. For a server I don't want bleeding edge... too much breakage and other BS.
                      Last edited by ratdude747; 08-01-2018, 09:19 PM.


                        #12
                        Re: Easier Hardware-based raid management

                        I guess the best thing I can do now to prevent any RAID failure without Linux RAID is to get a UPS. Under Linux RAID it got power outages all the time, but it was mostly idle, only streaming a DLNA music stream to my Roku 1. The arrays never did a recheck, but I'm sure the hardware RAID would (although I read this card apparently has 512 MB of cache, it's disabled). I think I'm going to keep it disabled, since it only has one user and doesn't have any intensive workload. The only real load is EXT compression writing, and that's nothing.

                        OK, it used to have two 675 W Dell PSUs until both died on me. Now it has a Corsair 750 W (80 Plus Gold efficiency rating).

                        The two Irwindale 3 GHz CPUs are 110 W each, plus two WD Black drives. This was built for having five 15k SCSI drives.

                        Think a 400 W UPS is enough?
                        Last edited by Uranium-235; 08-01-2018, 10:33 PM.


                          #13
                          Re: Easier Hardware-based raid management

                          Originally posted by Uranium-235 View Post
                          I guess the best thing I can do now to prevent any RAID failure without Linux RAID is to get a UPS. Under Linux RAID it got power outages all the time, but it was mostly idle, only streaming a DLNA music stream to my Roku 1. The arrays never did a recheck, but I'm sure the hardware RAID would (although I read this card apparently has 512 MB of cache, it's disabled). I think I'm going to keep it disabled, since it only has one user and doesn't have any intensive workload. The only real load is EXT compression writing, and that's nothing.
                          You can add a sacrificial disk (including a RAM disk) to act as the temporary store for that purpose (silly to put that traffic through the regular array).

                          IIRC, the PERC*/e cards have a battery to backup the on-card cache. Not sure if that is also true on the PERC*/i cards (I think the space where the battery sits is occupied by connectors to the drive cage).

                          If the battery is missing or dead, the cache will automatically default to "write-through" -- so, nothing is "at risk" in the cache in the event of an unceremonious power loss.

                          Think 400w UPS is enough?
                          Note that Marketese for UPSs is to publish their VA ratings, not their watt ("power") ratings. E.g., a "400 W capable" UPS will probably be sold as a "750 (VA)".

                          Regardless, it can't hurt to measure the actual load. Or, use a UPS that has that capability available (either via front panel display or network interface).
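                          The VA-to-watts relationship above can be made concrete. Assuming a power factor of about 0.6 (typical of small consumer line-interactive units; check the actual datasheet):

                          ```shell
                          # Rough VA -> W conversion for UPS sizing. The 0.6 power
                          # factor is an assumption, not a spec; verify per model.
                          va=750
                          watts=$(( va * 6 / 10 ))   # 750 VA * 0.6 = 450 W
                          echo "${va} VA is roughly ${watts} W"
                          ```

                          So a "750 VA" unit in this ballpark comfortably covers a measured 400 W load, with little margin to spare.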



                            #14
                            Re: Easier Hardware-based raid management

                            How did it go with the data? Were you able to recover it?

                            As for the UPS, please read this: http://www.apc.com/us/en/faqs/FA158939/
                            The problem with PFC supplies is that they can draw their full rated power when charging the main capacitor, so it is best to size the UPS with that in mind.
                            "The one who says it cannot be done should never interrupt the one who is doing it."



                              #15
                              Re: Easier Hardware-based raid management

                              Thanks for the info, and no. I have to look at my laptop and other computers to try to find whatever data I can.

                              This isn't a PERC card; it's a SAS 5/iR. You can see the memory chip. In fact, I read there is an LSI program that can enable the caching policy, but since it's low usage I'm not going to do it.
                              Last edited by Uranium-235; 08-02-2018, 06:29 AM.


                                #16
                                Re: Easier Hardware-based raid management

                                I think this is a good one, though I'm not sure it can take a PFC power supply; I hear of conflicts.

                                https://www.amazon.com/APC-Battery-P...WX7Q4QVSSYQR4Z


                                  #17
                                  Re: Easier Hardware-based raid management

                                  It's generally recommended to use a sine-wave UPS unit with PFC supplies.
                                  But if you're going with APC, those are quite expensive.


                                    #18
                                    Re: Easier Hardware-based raid management

                                    Originally posted by Per Hansson View Post
                                    It's generally recommended to use a sine-wave UPS units with PFC supplies.
                                    But if going for APC those are quite expensive.
                                    Just don't "buy retail". If you can't find a business, hospital, school/university, or recycler that, TODAY, has 6 or 8 UPSs sitting waiting to be "processed", then you aren't looking very hard! Usually, they decide it isn't worth the RETAIL cost of replacing the batteries (businesses don't "shop around" like hobbyists).

                                    (I suspect I can find a dozen or more 1KVA units without looking too hard. And, next week, there will be a different dozen. I.e., keep looking until you find what you want. I've been accumulating SMT750s https://www.amazon.com/APC-Smart-UPS.../dp/B074P4NJWL as they are a nice small size with decent capacity to power 4-packs of monitors)

                                    Going rate, here, is $5 regardless of size. Esp if you pull the batteries and let them claim the "core cost" for recycling the lead therein.



                                      #19
                                      Re: Easier Hardware-based raid management

                                      Originally posted by Uranium-235 View Post
                                      Thanks for the info and no. I have to look at my laptop, and other computers to try to find whatever data I can
                                      Now, think about that... you're relying on ad hoc copies you may have casually created over time to recover data that the storage system you set up to ensure data integrity has lost. You've spent time tinkering with it, and you're still going to have to invest time chasing down those "copies".

                                      By contrast, I rely on being able to easily make as many copies of whatever bits of data I deem important, on as many different media that I find convenient (at the time). I.e., copy to another spindle in the same machine. Or a removable medium. Or, a network share exported from a second machine. Or, a NAS.

                                      And I don't have to replicate ALL of the data, just the stuff that I consider important enough NOW to copy over.

                                      Because performance isn't an issue (these are backups, not live copies), I can choose to use media that are normally offline. Or that are hosted by "low end" machines that conveniently consume very little power and may not even sport "user interfaces"! USFF Atoms, repurposed "thin clients", etc. No need for gigabytes of DDR3; just enough for the OS to boot and mount a generic file system.

                                      The downside of this is that you have to have some way of keeping track of what's where.

                                      You can do this by adopting a discipline of always copying entire volumes (labeling one Stuff_1 and the other Stuff_2). Or, any other mnemonic aid that helps you keep track of your data.

                                      I got tired of having to duplicate entire disks: Games_1A, Games_1B, Games_2A, ... WinApps_1A, WinApps_1B, WinApps_2A, WinApps_3A... So I just catalogued all of the files in a database. Note that this can be as simple as "ls -alR > $CATALOGS/WinApps_1A.filelist.txt". And, when you want to find something, just arrange for all of those *.filelist.txt files to reside in a common directory and "(cd $CATALOGS; grep nameoffile *)".

                                      Put the catalog in someplace easily accessible (from all hosts on your network) and it's relatively easy to find what you want and where it is hiding.

                                      The problem with this approach is that it requires you to maintain file names (/Taxes/2015 and /MyFiles/My2015Taxes could have the same content yet show up in different searches).

                                      And it gives you no way of verifying that the file is intact when you eventually try to access it. (You could write a little script that reads lines from WinApps_1A.filelist.txt, computes the hash of your choice on each file, and stores the hash alongside the pathname; then check this when you later access the file.)
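                                      The hash-alongside-pathname idea sketches out to just a few lines. (The directory names and the file below are hypothetical stand-ins for real volumes; `sha256sum` already emits the "hash  path" lines you'd want in the catalog.)

                                      ```shell
                                      #!/bin/sh
                                      # Catalog-with-hashes sketch: record "hash  path" for every
                                      # file on a volume, grep across catalogs, verify later.
                                      set -e
                                      vol=$(mktemp -d)        # stand-in for a volume like WinApps_1A
                                      catalogs=$(mktemp -d)   # stand-in for $CATALOGS
                                      echo "demo data" > "$vol/setup.exe"

                                      # Build the catalog: one "hash  ./path" line per file.
                                      ( cd "$vol" && find . -type f -exec sha256sum {} + ) \
                                          > "$catalogs/WinApps_1A.filelist.txt"

                                      # Locate a file across every catalog...
                                      grep -l "setup.exe" "$catalogs"/*.filelist.txt

                                      # ...and verify integrity against the recorded hashes.
                                      ( cd "$vol" && sha256sum -c --quiet "$catalogs/WinApps_1A.filelist.txt" ) \
                                          && echo "all files intact"
                                      ```

                                      The same catalog file then does double duty: grep answers "where is it?", and `sha256sum -c` answers "is it still good?".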

                                      Now, all you need to do is backup that "database" (text file, in this case) and you can locate any file you want (and verify its integrity). No need to keep EVERYTHING in a RAID array spinning "just for convenience".

                                      [How often does your array ACTUALLY recover data that might have been otherwise corrupted? If it is "often", then maybe you need to reconsider the media you're using...]



                                        #20
                                        Re: Easier Hardware-based raid management

                                        Originally posted by Curious.George View Post
                                        Just don't "buy retail". If you can't find a business, hospital, school/university or recycler that doesn't, TODAY, have 6 or 8 UPSs sitting waiting to be "processed", then you aren't looking very hard! Usually, they decide it isn't worth the RETAIL cost -- businesses don't "shop around" like hobbyists -- of replacing the batteries.

                                        (I suspect I can find a dozen or more 1KVA units without looking too hard. And, next week, there will be a different dozen. I.e., keep looking until you find what you want. I've been accumulating SMT750s https://www.amazon.com/APC-Smart-UPS.../dp/B074P4NJWL as they are a nice small size with decent capacity to power 4-packs of monitors)

                                        Going rate, here, is $5 regardless of size. Esp if you pull the batteries and let them claim the "core cost" for recycling the lead therein.
                                        Yeah, easy for you to say. Also, with used UPSs you never know the component and capacitor condition, especially if they've been running 24/7.


                                        I've been looking for a server PSU that has fans on the front and back (as opposed to the 750 that has it on top, which is not good for airflow).

                                        But Newegg has mostly modular PSUs. The few ATX ones I've seen are still APFC, even if Newegg doesn't say so; the manufacturer's website does.

                                        Not sure what to do here. Can a non-sine-wave UPS really cause that much harm?
