Anyone running SSD's yet?


  • Per Hansson
    replied
    Re: Anyone running SSD's yet?

    Originally posted by Per Hansson
    OMG, my Seagate Cheetah 15K.7 300GB 15K RPM drive connected to an LSI 8704ELP SAS RAID controller with 128MB cache gets slower the more I write to it.
    Obviously mechanical HDDs suck!!!



    PCBONEZ: In case this is before your morning coffee, the above was meant as sarcasm.
    And as I can't be bothered to write a whole book on how SSDs work again, why don't you read that entire article over at Anandtech that you linked?
    It's an absolutely incredible resource on SSD technology.
    It's actually an anthology series and is a fantastic read for anyone interested in how SSD's really work...
    Here are some numbers from two SSDs.
    The first is from my parents' HTPC; it uses WinXP and thus does not have TRIM support.
    It is a Kingston SSDNow V Series 40GB SSD that is now 2 years and 2 months old.
    It is connected to an Epox 8RDA3+ mainboard with a Silicon Image 3112 1.5Gbps SATA controller...


    Note the yellow "access time" graph?
    No?
    It's because it's below 0.1ms so it can't be plotted, it's just a flat line!

    Next up is my friend's Intel X25-M G2 80GB SSD; it sits in his laptop, which runs Win7 x64.
    Its nVidia GPU failed, so these numbers are with the drive connected to my own system with a 3.0Gbps ICH9R controller, using Windows AHCI drivers.


    Note the yellow "access time" graph?
    No?
    It's because it's below 0.1ms so it can't be plotted, it's just a flat line!

    Here is an Intel Solid State Drive Toolbox screenshot from it:



    He uses it for everything, including saving x264 movies to it that are 4-8GB in size and watching those on his TV, since the laptop has HDMI (well, it did until the nVidia GPU failed, anyway).
    Yup, only 1.01TB of host writes (E1). It actually increased to this from 920GB today,
    because I used the Intel Solid State Drive Toolbox to run a full diagnostic scan on it, which writes to all cells; no problems were reported...

    Since E4/60 = 2522 hours (the same as attribute 09, power-on hours), the counters have never been reset during the life of this drive.
    Thus over the entire life of the drive, the media wear (E2/1024) works out to only 0.596%.
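    For anyone who wants to redo that arithmetic: below is a minimal sketch of it. The raw values are illustrative ones chosen to reproduce the figures above, and the attribute meanings (E1 = host writes in 32 MiB units, E2 = timed-workload media wear in 1/1024 percent, E4 = timed-workload timer in minutes, 09 = power-on hours) are the commonly documented interpretation for Intel drives of that era; treat them as assumptions and read the real raw values with smartctl or the Intel SSD Toolbox.
    Code:
    # Sketch of the SMART arithmetic above (illustrative raw values).
    E1_RAW = 33_096          # example: host writes, in 32 MiB units (~1.01 TiB)
    E2_RAW = 610             # example: timed-workload media wear, in 1/1024 %
    E4_RAW = 2522 * 60       # example: timed-workload timer, in minutes

    host_writes_tib = E1_RAW * 32 / (1024 * 1024)   # 32 MiB units -> TiB
    media_wear_pct  = E2_RAW / 1024                 # -> percent of rated wear
    workload_hours  = E4_RAW / 60                   # -> hours; compare with attr 09

    print(f"Host writes:   {host_writes_tib:.2f} TiB")
    print(f"Media wear:    {media_wear_pct:.3f} %")
    print(f"Workload time: {workload_hours:.0f} h")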
    Last edited by Per Hansson; 02-26-2012, 02:56 PM.



  • Per Hansson
    replied
    Re: Anyone running SSD's yet?

    Originally posted by PCBONEZ
    I have no confidence that their controller can actually do lossless compression [of every kind of file] to a factor of 10 to 20:1 at 100-200MB/s
    - which is basically what they are claiming.
    .
    They can claim it all they want - that doesn't make it true.
    .
    The processor on your MOTHERBOARD would probably have trouble achieving that.
    .
    [Unlike some of you what I'm smoking is perfectly legal.]
    .
    If the data can't be compressed (i.e. it is already compressed), then the Sandforce technology will not be any faster than a normal SSD.
    That is reflected in the performance charts.
    If you write compressible data to it, like a Windows installation, then it can achieve ca. 550MB/sec sequential transfer speeds.
    But if you write JPGs, H.264/x264 movies or other data which is impossible to compress, then the sequential transfer speed drops to that of normal SSDs, around 200-300MB/sec.
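    A minimal sketch of why the data type matters so much for a compressing controller: OS/text-like data shrinks dramatically under a generic lossless compressor, while already-compressed media barely shrinks at all, so the controller has to write nearly every byte to NAND. zlib here is only a stand-in; the actual Sandforce algorithm is proprietary and unknown.
    Code:
    import os
    import zlib

    def ratio(data: bytes) -> float:
        """Compressed size as a fraction of the original size."""
        return len(zlib.compress(data, level=6)) / len(data)

    compressible   = b"[boot] service started, status=OK\r\n" * 20_000  # OS/log-like text
    incompressible = os.urandom(700_000)   # stands in for JPEG/x264 data (already compressed)

    print(f"text-like data: {ratio(compressible):.2%} of original size")
    print(f"random data   : {ratio(incompressible):.2%} of original size")
    # The first shrinks to a few percent, so far less NAND is written;
    # the second stays ~100%, so throughput falls back to plain-SSD levels.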

    Oh, and note that only ONE SSD controller uses this tech, the Sandforce one.
    And a small percentage of users DO have problems with random BSODs; other users have no problems.
    They are very fast, but I would not recommend them.
    Incidentally, however, Intel did release an SSD based on the Sandforce controller just weeks ago,
    so they have probably worked most of the kinks out.
    As it is, OCZ also released a firmware update a few months ago that curbed most of those BSOD issues (but not all).

    But the other "normal" SSDs that use Intel's own in-house controller, the Samsung controller or Marvell's
    don't use this data compression technology at all.
    And they are very, very reliable.



  • mariushm
    replied
    Re: Anyone running SSD's yet?

    The cheap, super-fast, low-CPU-usage NTFS compression that's built into your file system can reduce most text data to about 50% of its size. A controller with hardware compression can do a bit better.

    Just copy some text files and some random DLL files into a folder, right-click on it, choose Properties, and check the "Compress contents to save disk space" option. After the files are compressed you will see both the normal size and the size on disk of each file in its Properties panel.

    The controller can do compression and may also be able to do deduplication: if it spots that a 512-byte chunk has the same signature after compression, it will just reference the previous location and store a few bytes instead of the 200-512 bytes (or whatever size) of the compressed chunk.
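    A toy illustration of that deduplication idea: hash each fixed-size chunk and only store chunks that haven't been seen before, keeping a reference otherwise. Real controllers would do this (if at all) inside the flash translation layer and the details are proprietary; this just shows the principle.
    Code:
    import hashlib

    CHUNK = 512

    def dedup(data: bytes):
        store = {}       # hash -> chunk actually written
        layout = []      # logical order of chunk references
        for off in range(0, len(data), CHUNK):
            chunk = data[off:off + CHUNK]
            key = hashlib.sha256(chunk).hexdigest()
            if key not in store:
                store[key] = chunk      # new data: really stored
            layout.append(key)          # otherwise: just a reference
        return store, layout

    data = b"A" * 4096 + b"B" * 2048 + b"A" * 4096   # lots of repeated chunks
    store, layout = dedup(data)
    print(f"logical chunks: {len(layout)}, physically stored: {len(store)}")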



  • PCBONEZ
    replied
    Re: Anyone running SSD's yet?

    Originally posted by mariushm
    Here's the information about Write Amplification: http://en.wikipedia.org/wiki/Write_amplification



    The Intel X25-M is reported to have a WA as low as 1.1, and the Sandforce controllers can go as low as 0.5 with the right data (the one that can be losslessly compressed as it comes - obviously it will do worse than already compressed data such as mp3,avi etc)



    what have you been smoking today?

    The compression algorithms in controllers are lossless, just like the ones used by this website to compress the pages before they're sent to you.
    If the compression was similar to mp3 or movies the pages would be deteriorated when they arrive to your browser.
    I have no confidence that their controller can actually do lossless compression [of every kind of file] to a factor of 10 to 20:1 at 100-200MB/s
    - which is basically what they are claiming.
    .
    They can claim it all they want - that doesn't make it true.
    .
    The processor on your MOTHERBOARD would probably have trouble achieving that.
    .
    [Unlike some of you what I'm smoking is perfectly legal.]
    .
    Last edited by PCBONEZ; 02-26-2012, 09:55 AM.



  • mariushm
    replied
    Re: Anyone running SSD's yet?

    Originally posted by PCBONEZ
    You did pretty good up until you got the disk full.
    ~ Then you blew it.

    The situation that the PDF talks about is called Write Amplification.
    It shows Write Amplification at a factor of 5.
    Because many files are larger than one block it's actually larger than 5.
    It can be anywhere between 10 and 20.
    -
    So in 'real drives' that have been once through'ed
    - that 4Gb DVD becomes somewhere between 40Gb and 80Gb.
    [And that 20Gb a day you like becomes 200-400Gb/day.]
    -
    Here's the information about Write Amplification: http://en.wikipedia.org/wiki/Write_amplification



    The Intel X25-M is reported to have a WA as low as 1.1, and the Sandforce controllers can go as low as 0.5 with the right data (the one that can be losslessly compressed as it comes - obviously it will do worse than already compressed data such as mp3,avi etc)

    Those drives that don't have numbers that bad are using file compression routines in the controller, which means that movie you saved and any images you save to the SSD are degraded in quality simply by saving them to an SSD.
    Same goes for MP3s, which are -already- degraded in quality by the MP3 compression before you dumped them to an SSD.

    .
    what have you been smoking today?

    The compression algorithms in controllers are lossless, just like the ones used by this website to compress the pages before they're sent to you.
    If the compression were similar to MP3 or movies, the pages would be degraded by the time they arrived in your browser.
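    For anyone unsure what "lossless" buys you, a tiny sketch: compress, decompress, and get the identical bytes back. Lossy codecs like MP3 or JPEG throw information away by design, so no such round trip exists for them.
    Code:
    import zlib

    original = b"The compression algorithms in controllers are lossless. " * 100
    packed   = zlib.compress(original)
    restored = zlib.decompress(packed)

    assert restored == original      # bit-for-bit identical after the round trip
    print(len(original), "bytes ->", len(packed), "bytes, and back exactly")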



  • PCBONEZ
    replied
    Re: Anyone running SSD's yet?

    SSD tech in its current form looks like band-aid piled on band-aid to me.
    .
    Aren't you guys the same ones that piled off the PRM fan-boy bandwagon to tell me I was full of shit when I said PRM was going to suck?
    -
    That was a couple years ago and lately I've had NUMEROUS opportunities to say "I told you so" over that one.
    -
    I'm pretty confident this is going to go the same way.
    In 2-3 years we will see.
    .



  • Per Hansson
    replied
    Re: Anyone running SSD's yet?

    Originally posted by PCBONEZ
    Those drives that don't have numbers that bad are using file compression routines in the controller, which means that movie you saved and any images you save to the SSD are degraded in quality simply by saving them to an SSD.
    Same goes for MP3s, which are -already- degraded in quality by the MP3 compression before you dumped them to an SSD.
    .
    http://en.wikipedia.org/wiki/Write_amplification
    http://www.anandtech.com/show/2899/3
    .
    My god, do you actually think this?
    Obviously the Sandforce SSD tech you speak of uses a non-destructive, lossless data compression technique.
    You just come up with such unbelievable stuff that it's not even funny anymore!



  • PCBONEZ
    replied
    Re: Anyone running SSD's yet?

    Originally posted by mariushm
    Damn... how the hell do you still not get it yet...

    Let's do it again as per Anandtech's article (http://www.anandtech.com/show/2738/8) but expand the hypothetical SSD drive from one block to one that has 3 blocks of 5 pages each:

    [_|_|_|_|_]
    [_|_|_|_|_]
    [_|_|_|_|_]

    Page Size 4KB
    Block Size 5 Pages (20KB)
    Drive Size 3 Blocks (60KB)

    Initially they write the 4 KB doc file:

    [x|_|_|_|_]
    [_|_|_|_|_]
    [_|_|_|_|_]

    Then they write the dog picture that's 8 KB:

    [x|x|x|_|_]
    [_|_|_|_|_]
    [_|_|_|_|_]

    Now the doc file is deleted:

    [d|x|x|_|_]
    [_|_|_|_|_]
    [_|_|_|_|_]

    Now user needs to write the 12 KB wallpaper picture, which in their hypothetical example with 1 block, it would force the first block to be erased. With this drive that has 3 blocks, the first block DOES NOT GET ERASED, the 12 KB picture gets written somewhere in a random block:

    [d|x|x|_|_]
    [_|_|_|_|_]
    [x|x|x|_|_]

    So the difference is :

    case with 1 block : an erase cycle was forced
    case with 3 blocks : erase cycle didn't happen

    Wear leveling is done in the background and is basically something like this... Let's say user writes another 4 KB document and that goes in the first block

    [d|x|x|x|_]
    [_|_|_|_|_]
    [x|x|x|_|_]

    Now let's say the user wants to write an 8 KB wallpaper picture, which will go in the second block or the third block (but the controller will try to use empty blocks whenever possible, to spread data around):

    [d|x|x|x|_]
    [x|x|_|_|_]
    [x|x|x|_|_]

    When the drive is idle, and the drive notices blocks are starting to get all filled up, the controller moves the data from a block that has deleted pages so that it will be able to do an erase cycle, if needed:

    [d|d|d|d|_]
    [x|x|x|x|x]
    [x|x|x|_|_]

    The first block is still not erased, it's just marked as available for an erase cycle. If user now wants to write a file that's larger than 2 pages, the controller will be forced to do an erase cycle on the first block, as there's no block available with more than 2 empty pages.



    Wrong.

    A 64GB SSD drive has 64 GB of actual memory cells (64 GB x 1024 MB x 1024 KB x 1024 Bytes) just like DDR1/DDR2/DDR3 memory but exposed to the operating system is 64 x 1000 x 1000 x 1000 so there's a difference of about 4.5 GB that the controller uses as swap blocks and internal memory.

    If you fill the ~60GB of formatted disk space on the SSD, when you'll delete something and then try to fill the drive again, the controller will actually write the new incoming data in a part of that extra 4GB+ that's put aside.

    So if you have the drive absolutely full, delete everything and start writing again to it and fill it up again, that doesn't mean each memory block will be erased once as the controller will use at some degree those 4-5 GB of extra room hidden away.

    When the drive is idle, the controller will start to move pages from blocks with deleted pages (from the time when drive was full and you deleted data to make room for your new data) to some blocks in those 4-5 GB of extra hidden room and then mark the blocks with deleted pages as erasable.

    As per the wear leveling algorithm, it will also move pages from blocks that are full with regular data to other blocks, so that the erase cycles will be evenly spread throughout the memory cells.

    PS... If you read the last paragraph on that page, you'll see it says at the bottom of the page that the drive becomes slow. But this article was written before TRIM and other algorithms were implemented. If you had bothered to click on to the next pages, you would have seen that they explain how SSD drives manage to avoid the "getting slower" situation.
    You did pretty good up until you got the disk full.
    ~ Then you blew it.

    The situation that the PDF talks about is called Write Amplification.
    It shows Write Amplification at a factor of 5.
    Because many files are larger than one block, it's actually larger than 5.
    It can be anywhere between 10 and 20.
    -
    So in 'real drives' that have been once-throughed
    - that 4GB DVD becomes somewhere between 40GB and 80GB of writes.
    [And that 20GB a day you like becomes 200-400GB/day.]
    -
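    The arithmetic behind write amplification is simply host writes multiplied by the WA factor. The sketch below runs the figures being thrown around in this thread (the 5-20x claimed above, the ~1.1 reported for the Intel X25-M, and SandForce's claimed sub-1 value); these are the thread's numbers, not measurements or an endorsement of any of them.
    Code:
    # Write-amplification arithmetic only; WA factors are the ones quoted in
    # this thread, not measured values.
    host_write_gb = 4.0                 # e.g. one DVD image copied to the SSD

    for wa in (0.5, 1.1, 5, 10, 20):
        nand_write_gb = host_write_gb * wa
        print(f"WA {wa:>4}: {host_write_gb} GB from the host -> "
              f"{nand_write_gb:.1f} GB actually written to NAND")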
    Those drives that don't have numbers that bad are using file compression routines in the controller, which means that movie you saved and any images you save to the SSD are degraded in quality simply by saving them to an SSD.
    Same goes for MP3s, which are -already- degraded in quality by the MP3 compression before you dumped them to an SSD.

    .
    http://en.wikipedia.org/wiki/Write_amplification
    http://www.anandtech.com/show/2899/3
    .
    Last edited by PCBONEZ; 02-26-2012, 09:04 AM.



  • Per Hansson
    replied
    Re: Anyone running SSD's yet?

    OMG, my Seagate Cheetah 15K.7 300GB 15K RPM drive connected to an LSI 8704ELP SAS RAID controller with 128MB cache gets slower the more I write to it.
    Obviously mechanical HDDs suck!!!



    PCBONEZ: In case this is before your morning coffee, the above was meant as sarcasm.
    And as I can't be bothered to write a whole book on how SSDs work again, why don't you read that entire article over at Anandtech that you linked?
    It's an absolutely incredible resource on SSD technology.
    It's actually an anthology series and is a fantastic read for anyone interested in how SSD's really work...
    Last edited by Per Hansson; 02-26-2012, 05:31 AM.



  • shovenose
    replied
    Re: Anyone running SSD's yet?

    it's faster than normal for me today, at least it feels that way
    site is usually pretty slow, not different at all today



  • Scenic
    replied
    Re: Anyone running SSD's yet?

    Originally posted by PCBONEZ
    -- The point where MWI vs Host Writes goes flat and horizontal <=> when the gasket blew.
    After that the transfer rates on every one of those drives was no more than the ATA specs.
    Some of them -barely- beat ATA-33.
    THAT IS THE END OF THE DRIVE BEING FASTER THAN A DISC DRIVE.
    And that's when having an SSD Drive becomes POINTLESS. - It's NOT faster anymore. - ZERO advantage to SSD.
    -
    The drive you like was just over ATA-100 after the curve went horizontal.
    (Much better than the others but still sucky for SSD. OTOH: Looking at some of his posts I think that guy doesn't know how to find the correct drive speed and that's why his was so much different than the others.)
    BUT CONGRATS! - Of your 478 TB you moved ~ 60TB fairly fast,,, then 418TB at the speed of a 10 year old IDE interface.
    I'm so impressed.....
    ^ that alone shows how little you actually know about SSDs.
    What they're testing is the endurance. NOT the speed.

    The constant/linear write speed they're logging ISN'T EVEN WHERE AN SSD IS FASTER than a mechanical hard drive.
    Sure, a modern higher-end SSD manages ridiculously high linear write/read speeds of 200MB/s or more, but that's just an added bonus.

    Where mechanical hard drives totally suck balls is at small files, i.e. 4k or less.
    And that's where SSDs generally shine, with read/write speeds of around 15MB/s or more, where a regular HDD manages around 300KB/s at best.
    That's the main reason why the whole system feels much faster when using an SSD: the shitloads of small files and data queries in just about every system.
    And that's exactly what ISN'T of value for an endurance test like that, and therefore isn't even logged. They only log the write speed to see at which point (in relation to SMART health values etc.) the constant/linear write speed takes a nosedive, indicating shit hits the fan.
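    A back-of-the-envelope model of that point, using only the rough numbers in this post (sub-0.1 ms access for the SSD, a seek-dominated HDD that manages ~300 KB/s on 4 KB files). The latencies and throughputs below are illustrative assumptions, not benchmark results.
    Code:
    # Time to read 10,000 scattered 4 KB files when each one costs an access first.
    FILES     = 10_000
    FILE_KB   = 4
    SEQ_MBPS  = {"HDD": 100, "SSD": 200}    # assumed sequential throughput, MB/s
    ACCESS_MS = {"HDD": 12.0, "SSD": 0.1}   # assumed per-file access latency, ms

    for drive in ("HDD", "SSD"):
        transfer_s = FILES * FILE_KB / 1024 / SEQ_MBPS[drive]
        access_s   = FILES * ACCESS_MS[drive] / 1000
        print(f"{drive}: {transfer_s + access_s:6.1f} s total "
              f"(of which {access_s:.1f} s is access time)")
    # ~120 s vs ~1 s: access time, not sequential speed, dominates small files.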

    All I've addressed with my post is your BS statement that an SSD lasts 2-3 years at best and that you can't recover any data once it's "dead" / "worn out" (not usable anymore -> read only). Your post twists everything in any possible way to fit your opinion, stating pointless comparisons along the way. Great job.

    As for the 4KB speeds: I currently don't have any "real" SSDs at hand to do a comparison, but even a flash-based CF card (in my case a Sandisk Ultra II 1GB) already shows what I mean (attachments).
    The last screenshot shows the linear read speeds, just to show these don't have anything to do with the small file access speeds.

    If anyone who's still reading this (despite the shitstorm happening) has both, a real SSD and a HDD, feel free to grab HDTune and do an access time comparison between the two and post a screenshot.
    (You can run multiple instances of HDTune for a side-by-side comparison.. just BTW)

    Originally posted by PCBONEZ
    If what you are really after is performance for games or photo shop or whatever then you are better off setting up a system with a RAM drive.
    [Meaning -not- the Flash memory type.]
    .
    It's not going to start faster but once you are 'working' your transfer rates will be up in the Gb/s range.
    .
    There are programs that will allow you to do that with the RAM slots on the motherboard.
    Being as there are many chipsets that support 8Gb and more of RAM now days it should pretty easy.
    .
    Yeah... great idea... till you notice today's games are usually 10GB or more in size, generally being in the 20GB range when installed. High-res textures need lots of space.

    Show me just ONE consumer motherboard that supports 32GB RAM or more and doesn't cost more than a complete average gaming system. You won't find any.
    The flagship models (= too expensive to even consider already) top out at 32GB. Most "normal" boards have a 16GB limit. Totally moot point using a RAM disk for anything that involves gaming, unless you're talking about old games (well over 5 years), for which a RAM disk is totally overkill anyways if you have a modern system.

    -------

    PS: totally off-topic: is the badcaps forum ridiculously slow for anyone else today? I'm barely getting 30KB/s from the badcaps server :|
    Last edited by Scenic; 02-25-2012, 01:21 PM.



  • mariushm
    replied
    Re: Anyone running SSD's yet?

    Damn... how the hell do you still not get it yet...

    Let's do it again as per Anandtech's article (http://www.anandtech.com/show/2738/8) but expand the hypothetical SSD drive from one block to one that has 3 blocks of 5 pages each:

    [_|_|_|_|_]
    [_|_|_|_|_]
    [_|_|_|_|_]

    Page Size 4KB
    Block Size 5 Pages (20KB)
    Drive Size 3 Blocks (60KB)

    Initially they write the 4 KB doc file:

    [x|_|_|_|_]
    [_|_|_|_|_]
    [_|_|_|_|_]

    Then they write the dog picture that's 8 KB:

    [x|x|x|_|_]
    [_|_|_|_|_]
    [_|_|_|_|_]

    Now the doc file is deleted:

    [d|x|x|_|_]
    [_|_|_|_|_]
    [_|_|_|_|_]

    Now the user needs to write the 12 KB wallpaper picture, which in their hypothetical example with one block would force the first block to be erased. With this drive that has 3 blocks, the first block DOES NOT GET ERASED; the 12 KB picture gets written somewhere in another block:

    [d|x|x|_|_]
    [_|_|_|_|_]
    [x|x|x|_|_]

    So the difference is :

    case with 1 block : an erase cycle was forced
    case with 3 blocks : erase cycle didn't happen

    Wear leveling is done in the background and is basically something like this... Let's say the user writes another 4 KB document and that goes into the first block:

    [d|x|x|x|_]
    [_|_|_|_|_]
    [x|x|x|_|_]

    Now let's say the user wants to write an 8 KB wallpaper picture, which will go into the second block or the third block (but the controller will try to use empty blocks whenever possible, to spread data around):

    [d|x|x|x|_]
    [x|x|_|_|_]
    [x|x|x|_|_]

    When the drive is idle and the controller notices blocks are starting to fill up, it moves the data out of a block that has deleted pages so that it will be able to do an erase cycle on that block, if needed:

    [d|d|d|d|_]
    [x|x|x|x|x]
    [x|x|x|_|_]

    The first block is still not erased, it's just marked as available for an erase cycle. If the user now wants to write a file that's larger than 2 pages, the controller will be forced to do an erase cycle on the first block, as there's no block available with more than 2 empty pages.
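    The same rules can be written down as a toy model, which reproduces the walkthrough above: writes go into free pages of some block, deletes only mark pages as stale, and a whole-block erase is needed only when no block has enough free pages. This is just a sketch of the example (3 blocks of 5 x 4 KB pages), not a real flash translation layer.
    Code:
    PAGES_PER_BLOCK, PAGE_KB = 5, 4
    FREE, LIVE, STALE = "_", "x", "d"

    blocks = [[FREE] * PAGES_PER_BLOCK for _ in range(3)]
    erase_cycles = 0

    def show():
        print("   ".join("[" + "|".join(b) + "]" for b in blocks))

    def write(size_kb):
        global erase_cycles
        pages = -(-size_kb // PAGE_KB)               # ceil: pages needed
        for b in blocks:                             # any block with enough room?
            if b.count(FREE) >= pages:
                for _ in range(pages):
                    b[b.index(FREE)] = LIVE
                return
        # no room anywhere: erase the block with the most stale pages,
        # carrying its live pages over first (assumes the write then fits)
        victim = max(blocks, key=lambda b: b.count(STALE))
        live = [p for p in victim if p == LIVE]
        victim[:] = [FREE] * PAGES_PER_BLOCK
        erase_cycles += 1
        victim[:len(live)] = live
        for _ in range(pages):
            victim[victim.index(FREE)] = LIVE

    write(4)                 # doc file
    write(8)                 # dog picture
    blocks[0][0] = STALE     # delete the doc file: only marked, not erased
    write(12)                # wallpaper: fits in another block, no erase needed
    show()
    print("erase cycles so far:", erase_cycles)      # still 0 with 3 blocks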

    Originally posted by PCBONEZ
    #2: As soon as the capacity of the disc has been written once every block will have to be juggled.
    You might not like that but you can't get around it.
    That's just how the memory in those drives works.
    Wrong.

    A 64GB SSD has 64 GiB of actual memory cells (64 x 1024 x 1024 x 1024 bytes), just like DDR1/DDR2/DDR3 memory, but what is exposed to the operating system is 64 x 1000 x 1000 x 1000 bytes, so there's a difference of about 4.5 GB that the controller uses as spare blocks and internal memory.
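    The "about 4.5 GB" figure is just binary versus decimal capacity; the exact value depends on the drive, but the arithmetic is this:
    Code:
    physical = 64 * 1024**3     # 64 GiB of NAND actually on the drive
    exposed  = 64 * 1000**3     # 64 GB presented to the operating system
    spare    = physical - exposed
    print(f"{spare / 1e9:.2f} GB ({spare / 1024**3:.2f} GiB) left over for the controller")
    # ~4.72 GB (~4.40 GiB), i.e. roughly the 4.5 GB mentioned above.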

    If you fill the ~60GB of formatted disk space on the SSD, then delete something and try to fill the drive again, the controller will actually write the new incoming data into part of that extra 4GB+ that's set aside.

    So if you have the drive absolutely full, delete everything, then start writing to it again and fill it up again, that doesn't mean each memory block will be erased once, as the controller will use to some degree those 4-5 GB of extra room hidden away.

    When the drive is idle, the controller will start to move pages from blocks with deleted pages (from the time when the drive was full and you deleted data to make room for your new data) to some blocks in those 4-5 GB of extra hidden room, and then mark the blocks with deleted pages as erasable.

    As per the wear leveling algorithm, it will also move pages from blocks that are full with regular data to other blocks, so that the erase cycles will be evenly spread throughout the memory cells.

    PS... If you read the last paragraph on that page, you'll see it says at the bottom that the drive becomes slow. But this article was written before TRIM and other algorithms were implemented. If you had bothered to click through to the next pages, you would have seen that they explain how SSD drives manage to avoid the "getting slower" situation.
    Last edited by mariushm; 02-25-2012, 01:31 PM.



  • PCBONEZ
    replied
    Re: Anyone running SSD's yet?

    If what you are really after is performance for games or Photoshop or whatever, then you are better off setting up a system with a RAM drive.
    [Meaning -not- the Flash memory type.]
    .
    It's not going to start up any faster, but once you are 'working' your transfer rates will be up in the GB/s range.
    .
    There are programs that will allow you to do that with the RAM slots on the motherboard.
    Seeing as there are many chipsets that support 8GB and more of RAM nowadays, it should be pretty easy.
    .



  • PCBONEZ
    replied
    Re: Anyone running SSD's yet?

    Originally posted by mariushm
    No, you're wrong.
    #1, I didn't write it.
    #2, It's correct.

    Originally posted by mariushm
    Real world drives have hundreds of thousands of blocks with lots of pages, so the controller doesn't have to erase blocks, it just finds a random block with enough empty pages and writes there.
    Bullshit.
    Part of what these leveling algorithms you keep bringing up do is 'spread it around'.
    The result is every block gets filled once, and after that every single write requires juggling a block, every single time.
    If you have a 64GB drive that will be the case as soon as you've moved 64GB through it.
    You can't get around it.
    That's how NAND type memory works.

    Originally posted by mariushm
    Just like with the test with writing tons of data to the drive, that example is also an example taken to extreme to illustrate the worst case scenario.
    No, it illustrates what is going to happen no matter what you do as soon as the capacity of the drive has been written ONCE.

    Originally posted by mariushm
    Real world SSDs are more like 512 bytes per page, 64 KB per block - a 64 GB SSD drive would have 1,048,576 blocks, each with 128 pages.
    A 4-8 KB file will be easily placed in a block with a few empty pages, no need to read pages from a block and write them someplace else.
    #1: That doesn't change a thing about what the document explains. They used easy numbers so people don't have to whip out their calculator. "Real world" drives do exactly the same thing with different numbers.
    #2: As soon as the capacity of the disc has been written once every block will have to be juggled.
    You might not like that but you can't get around it.
    That's just how the memory in those drives works.
    .
    SSD might get better but currently it's still flaky technology AFAIC.
    .



  • mariushm
    replied
    Re: Anyone running SSD's yet?

    No, you're wrong.

    The example is with a hypothetical drive that has only one block with 5 pages, so if the drive has to write 12 KB of data, the hypothetical drive is FORCED to read the existing 8 KB of data from the memory block, erase the block, then write the new 12 KB of data plus the previous 8 KB of data. Real world drives have hundreds of thousands of blocks with lots of pages, so the controller doesn't have to erase blocks; it just finds a random block with enough empty pages and writes there.

    Just like the test of writing tons of data to the drive, that example is taken to an extreme to illustrate the worst-case scenario.

    Real world SSDs are more like 512 bytes per page, 64 KB per block - a 64 GB SSD drive would have 1,048,576 blocks, each with 128 pages.
    A 4-8 KB file will be easily placed in a block with a few empty pages, no need to read pages from a block and write them someplace else.


    PS: I've tried to explain precisely this in my previous post, #43.
    Last edited by mariushm; 02-25-2012, 10:19 AM.



  • PCBONEZ
    replied
    Re: Anyone running SSD's yet?

    Originally posted by acstech
    It matters because some of us are trying to make an informed decision on whether or not to buy an SSD, and / or recommend them to our customers.

    Hypothetical scenario:

    Say I have a 3TB Seagate HD where I keep all my data, but I want some extra speed to boot the OS and run programs. That's all I want to do; install Windows 7 and Linux to it, and run programs off it. All the data will be stored on the Seagate.

    How much gets written to the disk just booting it up and running typical programs? Would this be enough to noticeably slow down the SSD before, say, 5 years of use?

    The way I understand it, if you get a larger (higher capacity) drive, for a given usage (amount of data written) it will last longer than a smaller, lower capacity drive. If we knew usage, we could calculate the useful lifetime, and therefore buy the correct drive, right?
    Read through this.
    http://www.anandtech.com/show/2738/8
    .
    Using that 'drive' as the example:
    .
    If the drive has been 'once-throughed' and all the blocks have some old pages, then every 4k write sent from the PC can easily become 20k of writing on the SSD.
    .
    So let's say you copy a 4GB DVD to your drive to make a few copies or maybe to watch a movie later.
    Or maybe you download the latest version of some Linux distro to make an install DVD, and it's 4GB.
    It's entirely conceivable that the 4GB file turns into 20GB of writing.
    - And there goes a -whole day- of that 20GB/day allotment people keep talking about.
    .
    .
    MS TechNet gives 150 pages/sec as reasonable for page-file writes on a modern PC.
    - I'm not claiming that's accurate but it's all I could find in writing anywhere and it sounds reasonable.
    - The idea of a 5-fold increase holds for -any- kind of file though.
    That's 4KB pages x 150 = 600KB/s
    [It doesn't do the writes continuously, writes come in bursts; that number is how fast it builds up though.]
    If the drive has some hours (blocks with defunct files) on it, that 600KB/s can quickly become up to 5x that [or 3MB/s] of writing on the SSD.
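    That arithmetic, spelled out; the 150 pages/sec and the 5x amplification factor are the post's assumed figures, not measurements.
    Code:
    pages_per_s = 150                 # assumed page-file write rate
    page_kb     = 4
    wa          = 5                   # assumed worst-case write amplification

    host_rate_kbps = pages_per_s * page_kb        # 600 KB/s coming from the OS
    nand_rate_kbps = host_rate_kbps * wa          # 3000 KB/s hitting the NAND
    print(f"host: {host_rate_kbps} KB/s -> NAND: {nand_rate_kbps / 1000:.1f} MB/s")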
    .
    The same thing can happen when you save lots of small files - like a web page with lots of 4k GIF or whatever image files in it.
    Small files + SSD = bad.
    .
    The chips they use in SSDs might be high tech, but the logistics of moving the files around are still crude and clunky.
    (Admittedly, it's also crude and clunky on disc drives, but at least they don't have to write 20k to save 4k.)
    .
    Last edited by PCBONEZ; 02-25-2012, 09:54 AM.



  • PCBONEZ
    replied
    Re: Anyone running SSD's yet?

    The 'loophole'-type warranty suggests they don't have a lot of confidence in that design.
    .
    I don't either.
    .



  • mariushm
    replied
    Re: Anyone running SSD's yet?

    OK, maybe "warrantied" was a poor choice of words.

    What I wanted to say is that Intel's wear leveling algorithms are designed in such a way that even if you write 20 GB a day to your drive, the speed decrease will be minimal.

    20 GB a day is probably an erase of every memory block once every 4 days, or about 100 erase cycles per block in a year. In 5 years, that's 500-700 out of the 2000-3000 erase cycles the memory blocks support.

    The wear-out indicator would probably show a 65-70 out of 100, and the write speed will probably decrease from 150 MB/s to around 125 MB/s, in an attempt to minimize erase cycles over time and prolong the drive's life.
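    The rough version of that estimate, for anyone checking the numbers. The drive size is an assumption (the post doesn't state one) and write amplification is taken as ~1 for simplicity, which is why it lands slightly below the 500-700 range above.
    Code:
    drive_gb      = 80            # assumed drive size
    writes_gb_day = 20
    rated_cycles  = (2000, 3000)

    days_per_cycle  = drive_gb / writes_gb_day       # ~4 days per full-drive write
    cycles_per_year = 365 / days_per_cycle           # ~90-100 erase cycles a year
    cycles_5_years  = 5 * cycles_per_year

    print(f"one erase of every block every ~{days_per_cycle:.0f} days")
    print(f"~{cycles_per_year:.0f} cycles/year, ~{cycles_5_years:.0f} in 5 years "
          f"out of the {rated_cycles[0]}-{rated_cycles[1]} the blocks are rated for")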



  • PCBONEZ
    replied
    Re: Anyone running SSD's yet?

    .
    This is another outright lie.
    Originally posted by mariushm
    The Intel drives are warrantied to last for at least 5 years without signs of slowing down even when the user does about 20 GB of writes daily, which is sort of a heavy user.
    I linked to the warranty and specs (from the horse's mouth) in the post just before that one.
    Intel does NOT warranty the things to some minimum speed.
    They don't even -imply- they guarantee a minimum speed.

    .
    It's one of those deliberate 'loophole' warranties that lawyers dream up.
    .
    Intel's warranty on those only obligates them to meet 'advertised specs' for 5 years.
    The specs say "UP TO" whatever speed. There is NO minimum given.
    - Insofar as the drive's speeds go, it's the same BS warranty you get from ISPs about their connection speed: "UP TO."
    Pretty worthless....
    -
    Additionally, Intel's specs for the drives carry the standard disclaimer that they can be changed at any time without notice.
    That means the warranty can be changed at any time without notice.
    .
    Last edited by PCBONEZ; 02-25-2012, 07:24 AM.



  • mariushm
    replied
    Re: Anyone running SSD's yet?

    Originally posted by acstech
    It matters because some of us are trying to make an informed decision on whether or not to buy an SSD, and / or recommend them to our customers.

    Hypothetical scenario:

    Say I have a 3TB Seagate HD where I keep all my data, but I want some extra speed to boot the OS and run programs. That's all I want to do; install Windows 7 and Linux to it, and run programs off it. All the data will be stored on the Seagate.

    How much gets written to the disk just booting it up and running typical programs? Would this be enough to noticeably slow down the SSD before, say, 5 years of use?

    The way I understand it, if you get a larger (higher capacity) drive, for a given usage (amount of data written) it will last longer than a smaller, lower capacity drive. If we knew usage, we could calculate the useful lifetime, and therefore buy the correct drive, right?
    For any typical user, the scenario where the drive starts to slow down or becomes unable to accept writes will never happen.

    A typical Windows installation will write anything between about 10 MB and a few hundred MB a day to the C: drive (where the operating system is installed). This is the average; sometimes you want to install a 4-10 GB game and just for kicks see how it runs from an SSD, or maybe you watch Netflix and the Silverlight plugin caches 400 MB-1 GB of video per hour to the C: drive, which happens to be an SSD.

    The Intel drives are warrantied to last for at least 5 years without signs of slowing down even when the user does about 20 GB of writes daily, which is sort of a heavy user... As I said above, in real life on a 64-120 GB drive the average use is about 500 MB-1 GB a day.

    Basically here's the issue with the SSD drives.

    The data on them is grouped in small "pages" which are (for this example) 512 bytes. Several pages are grouped together into a block of memory, usually 16, 64, 128 or 512 KB.

    The memory is designed in such a way that it can only write a page at a time and mark a page as deleted; if it wants to write data into a deleted page, it has to read the whole block into memory, erase the block, and then write back the pages.

    So let's say this is a block of pages:
    [_|_|_|_|_|_|_|_] - 8 pages of 512 bytes = 1 block of 4 KB

    When you write a 1 KB file in it, you have this:

    [_|_|_|x|x|_|_|_] - 2 pages used

    Now let's say you edit that file and bring it down to 200 bytes... that means the old file is deleted and a new file is created. Only the memory chip doesn't work like that; the controller just marks the two old pages as deleted and writes a new page:

    [_|x|_|D|D|_|_|_] - 1 page used, 2 pages marked as deleted

    Now, if the drive had to write a 3 KB file to the disk, it couldn't write it into this block, because only 5 pages are available (5 x 512 = 2.5 KB available, and 2 more are marked as deleted).

    If this drive were full to the brim, the controller would have no choice but to read the single used page into memory, issue an erase of the whole block and then write back the original page plus the 3 KB you want, so you'd have something like this:

    [X|x|x|x|x|x|x|_] - 7 pages used; X = the previous single page, x = the 6 pages of the 3 KB file.

    This is in a way bad, because these memory blocks support only 2500-3000 erase cycles during their lives - when they reach that many cycles, the blocks become read-only.

    In the real world, SSD drives can never be full to the brim, because they hide away from you about 10% of the chips' memory and use that as a sort of swap area.

    So, let's go back to the case where your block looks like this:

    [_|x|_|D|D|_|_|_] - 1 page used, 2 pages marked as deleted

    If the drive were full and this was the last block available, the controller would simply pick one of the blocks in the 10% of memory it hid away and write your 3 KB (6 pages) to that hidden block instead. This block would still be available for when you wish to write anything between 1 and 5 pages of data.

    While the SSD drive is idle, for example when you watch a movie or you're away from the computer, the drive crawls the memory blocks and looks for blocks that are candidates to be erased, for example blocks that look like this:

    [D|_|x|D|D|D|D|D] - 1 page used, 6 pages marked as deleted, 1 free

    When a block like this is found, the drive reads the used page, writes it somewhere else and marks the whole block as a "candidate for erasing", but it doesn't actually erase it.

    In the extreme case where the drive is full or there are no more blocks with enough free pages to store the data, the controller picks a block that's a candidate for erasing, does an erase cycle on it and makes it available. So actual erase cycles are avoided as much as possible.

    All this is something called wear-leveling - the controller does its best to make sure each memory block is hit by erase cycles as little as possible.

    With a 64 GB drive, if you write 1 GB a day, each memory block will have to do an erase cycle only about once every 55 days. Keep in mind each memory block is good for anything between 2500 and 3500 erase cycles, and even when one memory block is done, the controller will transparently swap it with one of the blocks in the 10% reserved area that may still have about 500-1000 erase cycles left.
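    Putting those numbers together gives a sense of scale. This sketch assumes ideal wear leveling and ignores write amplification, so it's an upper bound on paper rather than a prediction, but it illustrates the point: at ~1 GB/day a typical user never gets anywhere near the rated erase cycles.
    Code:
    drive_gb     = 64
    daily_gb     = 1.0
    rated_cycles = 2500            # low end of the 2500-3500 quoted above

    days_per_erase = drive_gb / daily_gb          # ~64 days between erases per block
    years_to_wear  = rated_cycles * days_per_erase / 365

    print(f"each block erased about every {days_per_erase:.0f} days")
    print(f"rated cycles exhausted after roughly {years_to_wear:.0f} years on paper")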
    Last edited by mariushm; 02-24-2012, 09:01 PM.
