That's what I did most of the time. My "dedicated" RAM/HDD/PSU test rigs were not meant for testing every single system component that passed through the shop, but only the occasional component that I couldn't verify in any other PC/system. Otherwise, yeah, I see what you are saying now - you thought I was taking the RAM out of every system and testing it in that test PC. - Ha, no way I'd do that. Takes too much time and we had way too many systems per day. Indeed, the connectors wouldn't stand a chance with that kind of "traffic" through them.
We were looking for a way of testing "pulls" -- to decide what to hang onto (i.e., use to build future machines) vs. discard now (known to be defective).
Machines appear in batches of 100 - 2,000 (I'm looking at 3 pallets of Optiplex 990's, presently -- about 500 machines). We want to isolate those units that are worth restoring by quickly discarding those that are "less desirable": machines that have seen physical abuse; may be odd/one-of-a-kind (not worth the effort to build just one-off); machines that will require a bit of work to make cosmetically presentable (e.g., folks who like putting decorative stickers on their workstations).
So, we quickly triage the machines in a lot and pull off the known discards.
But, all will typically have some components that are easy to salvage, with little effort: disks, RAM, batteries (discard as hazardous waste), drive sleds, certain cables, etc.
But, you have no way of knowing if those components are operational -- without testing them! It's silly to find space to store defective parts!
We regularly use "spare" machines as test fixtures for other things (e.g., monitor test/burn-in, disk imaging, etc.). So, why not for RAM testing?
This seemed like a straightforward approach: let a flunky spend his time loading RAM SIMMs/DIMMs into slots for sizing and testing. Discard all failing units and label the "likely good" units (so we can have a consistent label on all memory modules instead of having to figure out how each manufacturer chose to label each particular memory module).
This approach works well for the test stations that are used for disks -- because we can easily discard/replace the cabling that connects the "disks-under-test" to the tester, thereby not "wearing out" the test fixture. But, there's no go-between for the memory: the SIMM/DIMM sockets are part of the motherboard, so you have to replace the motherboard when the sockets start to be unreliable.
So, we now keep RAM pulls in large tubs, crudely sorted by speed (based on physical dimensions of the module, location of key, etc.); bare disks stacked on shelves, roughly sorted by capacity; etc. Using any of these components becomes a crap-shoot: install in the system you're building, test the system as a whole, discard anything that looks suspect and replace with something "new".
I wrote an application that tabulates notable parts of each machine ("does it have serial ports? how many video interfaces? NICs? how much RAM? etc."), exercises them and tracks the results in a database. This is PXE-booted (so, no media to handle/misplace/worry about version number, etc.) and the "system" automatically has a unique identifier for each machine tested: its MAC!
So, while a single machine may take hours to "process" completely -- it will only take minutes of a person's time to tether it to the test system (via network cable). And, as each machine only needs a "user interface" for the few minutes it takes to configure its BIOS to PXE boot, I can have one keyboard/mouse/monitor that moves from machine to machine instead of having to attach a monitor, mouse and keyboard to each machine (and leave them sitting there, throwing off heat while each machine runs the test application).
When it comes time to build a machine, the "system" can serve up images (from the database) that the "application" simply copies onto each machine's hard disk (either immediately after testing or at a later date). Slap a label on the machine (so you don't have to power it on at a later date to figure out its status, OS, driver versions, etc.) So, I only have to build one image for an "Optiplex 990" to handle those ~500 machines!
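The bookkeeping described above -- one record per machine, keyed by its MAC, carrying its inventory, test result, and assigned disk image -- can be sketched roughly as follows. This is only an illustration of the idea; the schema, field names, and image names here are invented, not taken from the actual application.

```python
import sqlite3

# Hypothetical schema: one row per machine, keyed by its NIC's MAC address
# (the "free" unique identifier mentioned above). All names are made up.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE machines (
        mac          TEXT PRIMARY KEY,  -- unique ID straight from the NIC
        model        TEXT,              -- e.g. 'Optiplex 990'
        ram_mb       INTEGER,
        nics         INTEGER,
        serial_ports INTEGER,
        test_result  TEXT,              -- 'pass' / 'fail'
        image_name   TEXT               -- disk image to serve at build time
    )
""")

# A PXE-booted test client might report something like this after probing
# and exercising the hardware:
conn.execute(
    "INSERT INTO machines VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("00:21:9b:aa:bb:cc", "Optiplex 990", 4096, 1, 1, "pass",
     "optiplex990-image"),
)

# At build time, look up which image to copy onto the machine's disk:
row = conn.execute(
    "SELECT image_name FROM machines WHERE mac = ? AND test_result = 'pass'",
    ("00:21:9b:aa:bb:cc",),
).fetchone()
print(row[0])   # optiplex990-image
```

Keying on the MAC is what lets one image definition cover all ~500 identical machines: every Optiplex 990 row simply points at the same `image_name`.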
But, the effort (of developing the application and the "system") only makes sense when you have dozens/hundreds of machines to process -- and many identical configurations. Folks having to deal with "one of these and three of those" would find the effort to be overkill.
I "process" >30 machines in a single day! Attached are some photos of the "giveaways" headed out the door. The photos are ~10 years old, which explains the "vintage" of the machines. Typically, they are retired (given away) after 1.5 - 3 years. E.g., we're retiring i5's and i7's, now.
The initial obvious approach of setting up dedicated machines to test disks, memory, monitors, etc. quickly showed the flaw in relying on "consumer kit" for test fixtures (i.e., the "test fixtures" couldn't stand up to the sort of use -- connectors and cables -- to which they were subjected).
Instead, stuff components that you hope to be operational into a machine and test the machine as a "finished assembly" (since each cable/connector will eventually need to be mated prior to final assembly, mate them exactly once!). If something fails, toss it in the scrap pile and move on to the next subassembly/component.
Say what?
I'd want some of that.
Aren't i5 and i7 machines still pretty capable for work or home use?
Main Driver: Intel i7 3770 | Asus P8H61-MX | MSI GTS 450 | 8GB of NO NAME DDR3 RAM (2x4GB) | 1TB SATA HDD (W.D. Blue) | ASUS DVD-RW | 22" HP Compaq LE2202x (1920x1080) | Seasonic S12II-620 PSU | Antec 300 | Windows 7 Ultimate with SP1
But, the effort (of developing the application and the "system") only makes sense when you have dozens/hundreds of machines to process -- and many identical configurations. Folks having to deal with "one of these and three of those" would find the effort to be overkill.
Well, that's exactly the difference between the place where you work and where I worked - we were dealing with individual customers' computers. Thus, no way we could rip all the components out of every computer and dump it on a shelf, then use whatever is convenient later - most customers expect to get their computers back the way they were before, unless of course, a component needed to be changed to fix the machine.
Aren't i5 and i7 machines still pretty capable for work or home use?
They are. Heck, even the "old" Core 2 Duo/Quad CPUs are plenty for regular home/office use.
But in the eyes of upper management (particularly in big organizations and governments), newer is always better. So they swap equipment every 2-4 years, even if it doesn't make sense. From a financial planning standpoint, however, it does make sense: you know exactly how much funding will need to be raised per year to meet that goal. It also gives the IT dept. a steady workflow. Otherwise, they'd barely have anything to do (i.e., this swapping allows them to justify their paycheck.)
<sarcasm>
no they're not capable for home use, they don't fit in pocket...
</sarcasm>
or
<sarcasm>
my employer is so cheap they can't afford to upgrade this old core2duo with windows xp...
</sarcasm>
I need to replace my old Core 2 Duo for cheap; its board is dying, too many bad caps on it. Otherwise it works perfectly for my uses... :-(
Bought this job lot off eBay hoping to fix it up to go with the PS3 I "repaired" last week. Out of the 5 controllers, I got 3 working perfectly. Two needed batteries; one had a broken USB port and a dodgy analogue mechanism, but I managed to salvage all but the battery from the completely dead controller. Headset and mic both appear to be working.
Not really a score, but a decent enough deal; if I hadn't ruined the 4th controller (Battlefield 4 pde controller), it would have been a good deal better. It was working and just needed its pads/buttons cleaning, but it was a bit of a nightmare assembly-wise. I got fed up with it, tried brute-forcing it, tore some cables, and just binned it.
Anyhow, it was more of a test for me, as I come across loads of PS3 controllers at boot sales for peanuts but, having never worked on them, I never bothered. Perhaps worth picking up now, as they still fetch decent money and are fairly easily repairable.
Aren't i5 and i7 machines still pretty capable for work or home use?
IMnsHO, even machines older than i5's are "capable for work or home use". People seem to over-buy hardware and then burden it with poor (or poorly configured) software. Regardless, most machines are still faster than the user actually needs (excepting gamers, etc.)
I just (last night) retired a Dell XPS 600 (2.8GHz dual core w/8G and 1T spinning rust) that I'd been using for 3D CAD, modeling, etc. Yeah, I can render some of my models faster with newer iron but I can also WORK SMARTER and not need to render as often! In that case, my personal meatware is the limiting factor!
I write most of my software and formal (i.e., camera ready) documentation on far slower machines -- including laptops (e.g., I use an HP 8730w when away from the office).
Even rendering 3D animations (brutal on CPU) doesn't really require lots of muscle; just plan on letting a machine work on it while you're busy doing something else!
But, employers have IT-droids convincing them that they need to "clean house" every 18-36 months and upgrade everything. So, lots of kit ends up headed for the tip that could otherwise see continued use.
Rescued a Dell 3007wfp recently because it wasn't worth some IT guy's time to sort out how to make it work on <whatever-OS> (noting, of course, that said IT guy probably was instrumental in getting employer to move to <whatever-OS>!)
Well, that's exactly the difference between the place where you work and where I worked - we were dealing with individual customers' computers. Thus, no way we could rip all the components out of every computer and dump it on a shelf, then use whatever is convenient later - most customers expect to get their computers back the way they were before, unless of course, a component needed to be changed to fix the machine.
Ours is a hybrid situation. Machines are sourced by companies donating them (in large numbers) as they, typically, do their periodic upgrades. E.g., we've had 18-wheelers show up packed with kit gathered from "regional offices" around the country. We sort the units and repair/refurbish as required. We then distribute them to individuals -- who "expect to get their computers back the way they were before". Usually, because they have no skills (or other resources) on which to draw to maintain the machines.
As we are "sourcing" the computers to those clients, having their configurations/images available on the server means we can restore to "as delivered" condition with a couple of mouse clicks. Clients are encouraged to keep personal files separate from the OS/applications as we make only a modest effort to restore the stuff they've added after delivery (and NO effort to reinstall any applications they may have added).
So, a "repair" involves imaging the system disk "as received". Restoring the original "as delivered" image to see if that fixes their problem (most "problems" are software related). Fix any hardware that (rarely) has failed (no guarantee that you'll get the same make/model components -- or even PC! -- returned to you!). Then, grep the "as received" image for any obvious personal additions and try to put them back where they were.
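The "grep the as-received image for personal additions" step boils down to diffing two file trees: whatever exists in the as-received image but not in the as-delivered image is a candidate personal file. Here's a minimal sketch of that idea, assuming both images are mounted (or extracted) as ordinary directory trees; the directory names and demo files are made up for illustration.

```python
from pathlib import Path
import tempfile

def personal_additions(as_delivered: Path, as_received: Path) -> set:
    """Return relative paths present in the as-received image but absent
    from the as-delivered image -- a crude stand-in for 'grep the image
    for obvious personal additions'."""
    delivered = {p.relative_to(as_delivered).as_posix()
                 for p in as_delivered.rglob("*") if p.is_file()}
    received = {p.relative_to(as_received).as_posix()
                for p in as_received.rglob("*") if p.is_file()}
    return received - delivered

# Tiny demo with throwaway directories instead of real mounted images:
base = Path(tempfile.mkdtemp())
(base / "delivered").mkdir()
(base / "received").mkdir()
(base / "delivered" / "system.ini").write_text("stock")   # part of the image
(base / "received" / "system.ini").write_text("stock")    # unchanged
(base / "received" / "vacation.jpg").write_text("photo")  # user-added file

print(personal_additions(base / "delivered", base / "received"))
# {'vacation.jpg'}
```

A real pass would also have to notice *modified* stock files (compare hashes, not just names), but a name-level diff already catches the obvious "folder full of photos on the desktop" case.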
To be clear, we're not an IT department. The kit and the "repairs" are essentially "no charge" so the onus is on the recipient to minimize their personal exposure to loss!
If we see you too often, we start increasing the time it takes for you to have your machine returned to you -- until you learn not to be such a drain on our resources! As I (nor any of the other folks involved) am not paid for my time, I can put whatever conditions I want on the services that I donate, when I donate them, etc. No "boss" controlling my time/effort.
Piss me off and I'll innocently smile -- as I stiff you! ("Gee, I'm sorry I wasn't able to recover all those photographs you said were stored on the machine. The disk was damaged -- so I just replaced it...")
They are. Heck, even the "old" Core 2 Duo/Quad CPUs are plenty for regular home/office use.
But in the eyes of upper management (particularly in big organizations and governments), newer is always better. So they swap equipment every 2-4 years, even if it doesn't make sense. From a financial planning standpoint, however, it does make sense: you know exactly how much funding will need to be raised per year to meet that goal. It also gives the IT dept. a steady workflow. Otherwise, they'd barely have anything to do (i.e., this swapping allows them to justify their paycheck.)
In many cases there is good reason for "short" replacement cycles in the corporate world:
- Many companies have a crap ton of security/monitoring/remote management software that always runs in the background and bogs down the system, requiring more powerful hardware than a normal home user would need to run the same applications. For example, my work laptop uses over 3GB of RAM and around 30% CPU (i5-5300U) just sitting at the desktop with no programs running, thanks to all the crap they have running in the background.
- Most corporate desktops run 24/7, and laptops run at least 8 hrs. a day, 5 days a week, and often more (many companies want PCs always on and connected to the network so they can receive updates). They are also often handled carelessly; this, combined with the normal effect of age, means that after a few years reliability drops significantly, and the downtime/loss of productivity caused by broken equipment often costs more than the savings from keeping it longer (particularly in large companies, where only a few percent increase in failures means hundreds or thousands more people who can't do their work due to broken PCs).
- There are the tax implications of depreciating assets (in this case, IT equipment) over a certain period of time, depending on the industry and applicable state/federal taxes. If the tax code says they can depreciate equipment to 0 over X years and write off the value, it often makes sense to replace that equipment when that interval hits rather than keeping it and losing the tax savings.
- Power savings of newer equipment may also be a major factor (especially when it comes to data centers). A 25-50% reduction in power draw might not mean much to a home user, where it only amounts to a few dollars (or possibly cents, depending on electricity rates and amount of usage) on the monthly power bill; but to a major company with thousands of computers (which, as noted, often run 24/7), this can equal thousands or tens of thousands of dollars a month.
All these factors together often justify the intervals used, but it means that a lot of usable equipment ends up on the secondary market, which is great for people who want cheap PCs and don't care about having the "latest and greatest" (not to mention that even a few-years-old corporate-grade PC is often more reliable than much of the newer "consumer grade" crap).
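The power and depreciation points above are easy to put rough numbers on. The figures below (fleet size, watts saved, unit cost, electricity rate) are made-up but plausible assumptions, not numbers from this thread:

```python
# Back-of-envelope arithmetic for fleet-scale replacement economics.
# All input figures are illustrative assumptions.

# --- Power savings at fleet scale ---
fleet = 5000              # corporate PCs, assumed running 24/7
watts_saved = 40          # per machine, newer vs. older hardware
rate_per_kwh = 0.12       # dollars per kWh
hours_per_month = 24 * 30

kwh_saved = fleet * watts_saved / 1000 * hours_per_month
print(f"${kwh_saved * rate_per_kwh:,.0f}/month saved")   # $17,280/month

# --- Straight-line depreciation ---
unit_cost = 900           # dollars per PC
life_years = 4            # depreciation schedule assumed by the tax code
print(f"${unit_cost / life_years:,.0f}/year write-off per PC")  # $225/year
```

Even a modest 40 W per seat lands in the "tens of thousands of dollars a month" range for a large fleet, which is why these calculations can tip the decision toward scheduled replacement.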
In many cases there is good reason for "short" replacement cycles in the corporate world:
<snip>
- Many (most?) IT departments don't have much technical "depth". Many of the staff are effectively trained as "Windows supporters" and not "real" IT. Show them something that doesn't have an x86 inside -- or run Windows -- and they're at a complete loss. The few souls who may have the necessary technical depth to make informed decisions/implementations often aren't the ones who have the "ears" of management.
- The "No-one-ever-got-fired-for-buying-IBM" effect; it's safer to just keep moving forward with "the latest and greatest" (and blame any problems that your organization encounters on the provider(s)) than it is to take a deliberate "stand" and cling to a technology that "has been working well for us" (exposing yourself to a potential future problem which is then "blamed" on your decision NOT to upgrade).
- It's not YOUR money! So, why NOT upgrade all this kit? If the peripherals are no longer supported (drivers), replace them, as well! I've yet to see a business lump all IT needs into a single budget (i.e., money spent on equipment comes at the expense of hiring additional staff, etc.)
- Out-facing interfaces. I.e., if your key suppliers/customers have "moved forward", there is pressure on you to do likewise ("Gee, we can't read your OLD MSWord2010 documents; why don't you upgrade that software?" -- which usually means OS and hardware upgrades)
- Sheer ignorance. People THINK they need something without actually knowing their options. And, not having the skillsets to evaluate those options, they rely on folks who APPEAR to "know what they're talking about" for guidance (comfortable that they've abrogated responsibility for their decision to someone else!)
- Pop culture. All the cliche advice you see/hear in the media telling you what you "need" (e.g., antivirus software, automatic update services for your OS and apps, etc.). See above.
All these factors together often justify the intervals used, but it means that a lot of usable equipment ends up on the secondary market, which is great for people who want cheap PCs and don't care about having the "latest and greatest" (not to mention that even a few-years-old corporate-grade PC is often more reliable than much of the newer "consumer grade" crap).
Having dropped enough money into "computer stuff" over the past 40 years to buy a house, I've come to realize that "new" just means "yet another need to upgrade applications, reconfigure/relearn, etc.". As every DOLLAR spent comes out of MY pocket -- along with every HOUR -- I think really hard before upgrading (hardware OR software).
For most consumers, an "appliance" that lets them send mail and browse the web is probably more than they'll ever need! <frown> But, no one wants to "brag" about having JUST that!
This week, I got a Sony HCD-GRX8 for free, complete with speakers. Really funny, but it's the same problem as last week's one... all the belts are melted... A battle against asphalt, lol, and it's fully working.
Lol, funny to see a fairly modern system like this having problems with belts this early in its life. I have seen tape decks from the late 70's and early 80's still with their original belts and working (though slipping a bit). The rubber these newer systems use must be really low-quality stuff. Either that, or it was mishandled during installation (typically, contamination by grease and oil will eat belts away pretty quickly.)
Not a bad find otherwise. The speakers from some of these modern systems, especially Sony, seemed to be tuned pretty decently. For a small size room, they are perfect and offer plenty of deep bass. Though depending on the music you listen to, that may or may not be a good thing. For music with lots of real (non-synthetic) drum kicks, I find oldschool paper cone speakers to sound a little better.
Bought this job lot off eBay hoping to fix it up to go with the PS3 I "repaired" last week. Out of the 5 controllers, I got 3 working perfectly. Two needed batteries; one had a broken USB port and a dodgy analogue mechanism, but I managed to salvage all but the battery from the completely dead controller. Headset and mic both appear to be working.
Pretty good.
Considering there is still a good market for the genuine PS3 controllers. Just about a year ago when I found my PS3 and went to look for a controller at my local GameStop, I was told it'd cost me $60 (or something like that) for a USED PS3 controller! My eyes nearly fell out when the counter guy told me that. That's why I bought these two: https://www.badcaps.net/forum/showth...706#post771706
... and to say the least, they are nowhere near as good as a genuine PS3 controller. So what you found is a pretty good deal.
I got two PS2s, broken, for $16.90 (65RON). Apparently one has a broken laser ribbon (SCPH-70004b - v12) and both have the fan connector broken off. I'll see if soldering the fan directly to the motherboard will work.
Surprise though, the other is a SCPH-77007a, NTSC-J console from Taiwan. It was a pretty nice find, considering it's also modded with a Modbo 4 v1.99.
As for the 70004b, I'll probably fix the laser, the fan, and do the resistor fix on the focus and tracking coils, then mod it with a Modbo 750 (I know there's FMCB but I'm tired of it. )
I'll get another one today, for $9.10 (35RON). This one the seller says it works but won't read DVDs. Should be an easy fix considering I still have spare slim lasers and bulk DVD drives for them harvested from other dead slims (MB dead). Who knows, I might find a chip in it too!
That and I got a brand new Hama K210 keyboard as well, in box, w/ manual and everything for $9.62 (37RON). It's pretty nice - slim, white top and black bottom. Oh, and it's USB.
Lol, funny to see a fairly modern system like this having problems with belts this early in its life. I have seen tape decks from the late 70's and early 80's still with their original belts and working (though slipping a bit). The rubber these newer systems use must be really low-quality stuff. Either that, or it was mishandled during installation (typically, contamination by grease and oil will eat belts away pretty quickly.)
Haha.. Yes, very strange indeed. And to think anyone would give something away for free?, lol.. I bet this customer was scared off by the loud, heavy clicks of the 3-CD mechanism, as well as the single dynamo of the cassette decks running crazy, lol. Standby, of course, killed his last hopes!!
Seven belts exactly, 2 of them in the CD-ROMs, and yes, I'm really amazed to find such quality in a Sony... You should see me, all covered with... asphalt, lol...
Not a bad find otherwise. The speakers from some of these modern systems, especially Sony, seemed to be tuned pretty decently. For a small size room, they are perfect and offer plenty of deep bass. Though depending on the music you listen to, that may or may not be a good thing. For music with lots of real (non-synthetic) drum kicks, I find oldschool paper cone speakers to sound a little better.
Yes, their sound is pretty genuine and amazing. Not bad at all with the other two surrounding ones. I'll keep the system in the shop; it's very robust and tough inside.
Modern "rubber", if that's even what it is, is complete shit.
I have loads of examples of it turning into sticky slime.
The most annoying was the grips on my electric screwdriver!
Got a PSP 1000 and some games for £12 (working, just the joystick cover missing); some Technics SB-3110 speakers, though they look completely different to what I can see online (I'll post some pics shortly); and an SJ7000 action cam plus a 32GB microSD card for a quid (£1)!!
My brother-in-law, the twat, got the best deal, and for the silliest reason.
He got a ReadyNAS Duo with 2 x 1.5TB drives for a quid (£1). I picked it up, but he had already bought it. He didn't have a clue what it was - knows nowt about computers. His reason was that it had a USB port and he wanted to see if he could charge his phone off it. I was a bit salty
Well, that's exactly the difference between the place where you work and where I worked - we were dealing with individual customers' computers. Thus, no way we could rip all the components out of every computer and dump it on a shelf, then use whatever is convenient later - most customers expect to get their computers back the way they were before, unless of course, a component needed to be changed to fix the machine.
They are. Heck, even the "old" Core 2 Duo/Quad CPUs are plenty for regular home/office use.
But in the eyes of upper management (particularly in big organizations and governments), newer is always better. So they swap equipment every 2-4 years, even if it doesn't make sense. From a financial planning standpoint, however, it does make sense: you know exactly how much funding will need to be raised per year to meet that goal. It also gives the IT dept. a steady workflow. Otherwise, they'd barely have anything to do (i.e., this swapping allows them to justify their paycheck.)
In many cases there is good reason for "short" replacement cycles in the corporate world:
- Many companies have a crap ton of security/monitoring/remote management software that always runs in the background and bogs down the system, requiring more powerful hardware than a normal home user would need to run the same applications. For example, my work laptop uses over 3GB of RAM and around 30% CPU (i5-5300U) just sitting at the desktop with no programs running, thanks to all the crap they have running in the background.
- Most corporate desktops run 24/7, and laptops run at least 8 hrs. a day, 5 days a week, and often more (many companies want PCs always on and connected to the network so they can receive updates). They are also often handled carelessly; this, combined with the normal effect of age, means that after a few years reliability drops significantly, and the downtime/loss of productivity caused by broken equipment often costs more than the savings from keeping it longer (particularly in large companies, where only a few percent increase in failures means hundreds or thousands more people who can't do their work due to broken PCs).
- There are the tax implications of depreciating assets (in this case, IT equipment) over a certain period of time, depending on the industry and applicable state/federal taxes. If the tax code says they can depreciate equipment to 0 over X years and write off the value, it often makes sense to replace that equipment when that interval hits rather than keeping it and losing the tax savings.
- Power savings of newer equipment may also be a major factor (especially when it comes to data centers). A 25-50% reduction in power draw might not mean much to a home user, where it only amounts to a few dollars (or possibly cents, depending on electricity rates and amount of usage) on the monthly power bill; but to a major company with thousands of computers (which, as noted, often run 24/7), this can equal thousands or tens of thousands of dollars a month.
All these factors together often justify the intervals used, but it means that a lot of usable equipment ends up on the secondary market, which is great for people who want cheap PCs and don't care about having the "latest and greatest" (not to mention that even a few-years-old corporate-grade PC is often more reliable than much of the newer "consumer grade" crap).
Thanks for the info dump. Very informative.
I've been in the BPO sector for 8 years now. Not with I.T., mind you, just a humble agent for one of our clients, and it has always bothered me that since I started with the company, we jumped from C2D-based Dells to S1156 i5-based Dells, to S1155-based HPs, and to the current S1150-based Dells. Next thing I know, we're going to be on Kaby Lake or Coffee Lake CPUs. :P
The programs we use rarely change but our hardware gets upgraded in a short amount of time.
Too bad, though, that in my country the people who bid for the company's "old stuff" that gets disposed of resell it on the secondary market at near-brand-new prices or even higher.
I've tested over 30 pairs of RAM modules on it, and they all passed.
I can, because I had a few RAM modules that I tested extensively many times and they always passed. I kept them for this exact reason - if this machine started misbehaving, I could always verify if it was the slots or not.
That's why the slot insertion limit looks like bullshit. The only connectors where I've heard of a limit like that are Intel's LGA sockets, LOL.
Even then, there's a good chance that the socket 775 motherboard still will be stable even when I change the processor again...
"¡Me encanta "Me Encanta o Enlistarlo con Hilary Farr!" -Mí mismo
"There's nothing more unattractive than a chick smoking a cigarette" -Topcat
"Today's lesson in pissivity comes in the form of a ziplock baggie full of GPU extension brackets & hardware that for the last ~3 years have been on my bench, always in my way, getting moved around constantly....and yesterday I found myself in need of them....and the bastards are now nowhere to be found! Motherfracker!!" -Topcat
"did I see a chair fly? I think I did! Time for popcorn!" -ratdude747
now what CPU did you doubleclock from 600 to 1.2? I'm warming up the BS flag on that one.
Yep, the BS flag was already thrown here... A 600 MHz OC usually wasn't possible until the Core 2! (Except for the Pentium 4, of course) (Or at least an Athlon 64, at least with the 2004 or 2005 revision)
"¡Me encanta "Me Encanta o Enlistarlo con Hilary Farr!" -Mí mismo
"There's nothing more unattractive than a chick smoking a cigarette" -Topcat
"Today's lesson in pissivity comes in the form of a ziplock baggie full of GPU extension brackets & hardware that for the last ~3 years have been on my bench, always in my way, getting moved around constantly....and yesterday I found myself in need of them....and the bastards are now nowhere to be found! Motherfracker!!" -Topcat
"did I see a chair fly? I think I did! Time for popcorn!" -ratdude747
Yep, the BS flag was already thrown here... A 600 MHz OC usually wasn't possible until the Core 2! (Except for the Pentium 4, of course) (Or at least an Athlon 64, at least with the 2004 or 2005 revision)
And then we learn the CPU in question isn't even CISC. Intel
Things I've fixed: anything from semis to crappy Chinese $2 radios, and now an IoT Dildo....
"Dude, this is Wyoming, i hopped on and sent 'er. No fucking around." -- Me
Excuse me while i do something dangerous
You must have a sad, sad boring life if you hate on people harmlessly enjoying life with an animal costume.
Sometimes you need to break shit to fix it.... Thats why my lawnmower doesn't have a deadman switch or engine brake anymore
Three different manufacturers. And, these are "reputable" manufacturers. I'm sure folks making components geared specifically to the bottom feeders of the consumer kit market have worse performance (perhaps even using recycled/reconditioned components?).
If you consider the design of SIMM/DIMM sockets, the reason for the limit is pretty obvious: the connector relies on deforming metal "contacts" to make the electrical connection to the SIMM/DIMM. Each insertion cycle "exercises" the metal leading to eventual metal fatigue and failure.
Note that a contact need not "break" for the socket to have failed. Rather, if it loses a significant portion of its springiness, it may not be able to provide a solid enough connection to the finger on the module (i.e., higher mating ELECTRICAL resistance). Likewise, the manufacturer specifies the performance over a range of environmental factors: temperature variation (hint: things expand and contract), vibration, etc.
How many times would a PC manufacturer expect a consumer to replace (reseat) the memory modules in their PC over its "lifetime"?
IT folks are largely ignorant of these limits -- "SEEMS like it is working..." Would you want the controller in the DaVinci robot performing your surgery to have sockets with that (poor) sort of "durability"?
[I designed a bit of kit that required the "memory module" to be removable dozens of times daily -- over the course of many operational years. The connectors cost $30. Each!]
As I said up-thread: you can undoubtedly encounter cases with much higher cycle counts. Just like you can run a 1.8V core at 1.9V. Or overclock a CPU.
Those of us charged with actually designing that kit, however, have to meet specific numerical targets for a product's performance. And, those targets apply over a range of operating conditions -- not cherry-picking some particular set of conditions to be most conducive to increasing some other design limit.
[There's a reason cold aisle temperatures are so important in data centers!]