unknown (solid?) caps on MSI "military class" graphics card?


  • kevin!
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by CapPun View Post
    ok thx, but are you sure about it? (how did you figure out the brand when there's no logo???)


    btw I've never seen or read about a GPU failing (or an Intel CPU for that matter), hope that doesn't happen :/

    from 2011 to 2017 I used a GTX 580, these things heat up like crazy, like above 80°C in gaming (even in a well-cooled giant Cooler Master HAF case with 4 fans including a 200mm top fan)
    plus it was a single-fan design with a tiny fan, the brand was EVGA but the card looked like a basic generic card, yet it worked without a problem for all those years until I finally sold it after upgrading

    here for the MSI 1070 Aero (ITX version), reviews show temps go up to the mid-70s in Furmark at most & they say those are good temps for a 1070
    Nippon Chemi-Con caps are easy to identify even when there is no logo: instead of a logo, Nippon Chemi-Con leaves an unpainted line in the blue sleeve. Only Nippon Chemi-Con does this on its solid caps without a logo.
    This photo should resolve your doubts.
    MSI always mounts genuine Nippon Chemi-Con, so do not worry - they are 100% original Nippon Chemi-Con.
    Attached Files
    Last edited by kevin!; 04-23-2019, 10:35 PM.



  • momaka
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by CapPun View Post
    what if the manufacturing's 100% automated?
    It doesn't matter. Machines wear out, so something made at the beginning of the day/week/month/year could differ from one made at the end of the same period. Also, the quality of materials can vary a bit as well (even with very strict QC). Because of that, nothing made will be 100% identical and perfect. Ever.

    Originally posted by CapPun View Post
    like Asus claims they call it "Auto Extreme Super Alloy Power II" like here: https://www.asus.com/Graphics-Cards/DUAL-GTX1070-O8G/
    Oh, please, those things are just marketing names. Like "3D clean" or "3D white" toothpaste (seriously, try reading some of those cosmetics marketing names and ask yourself what any of those actually mean.)

    Originally posted by CapPun View Post
    With humans out of the equation does this guarantee a no-lemons card
    No.

    Originally posted by CapPun View Post
    so is there any I mean ANY way you can think of that a user can tell if they got lucky or unlucky when buying a new card?
    No.

    Originally posted by CapPun View Post
    say suspicious results from an app like Furmark. or unusual but subtle anomalies from a FLIR reading. or just close visual inspection of the card - anything?
    No.
    Last edited by momaka; 04-15-2019, 11:25 PM.



  • CapPun
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by momaka View Post
    Again, it has to do with variations in manufacturing.
    what if the manufacturing's 100% automated? (like Asus claims they call it "Auto Extreme Super Alloy Power II" like here: https://www.asus.com/Graphics-Cards/DUAL-GTX1070-O8G/ )

    with humans out of the equation does this guarantee a no-lemons card (in which case Asus would quickly put competitors out of business because why haven't MSI or Gigabyte also switched to full automation?) or is there still risk?

    That means, 90-95% of the stuff they make can be sold as a working product. The rest (5-10%) is usually discarded because they don't work properly even after "binning". Now, of those 90-95%, not all of the cards will be created equal. Perhaps only 50-60% are made close to "perfect" (i.e. how it was designed to be and last for 5-6 or more years), and 20-25% is "good" but not great (maybe lasting only 2-4 years). The last 10-15% are what you'd call the "lemons" - i.e. cards that function properly and pass QC, but just barely, and thus end up dying quickly after purchase (a few weeks... or months... or a year... or two... you name it.)

    And because there is no telling what you're going to get, that's why you may get a card that works very long even at high temperatures, or you may get a real "lemon" that dies rather quickly even if you kept the temperatures fairly reasonable.
    so is there any I mean ANY way you can think of that a user can tell if they got lucky or unlucky when buying a new card?

    say suspicious results from an app like Furmark. or unusual but subtle anomalies from a FLIR reading. or just close visual inspection of the card - anything?



  • momaka
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by CapPun View Post
    what I don't get is why it would take so long (1h?) for a card to reach temp equilibrium? the card itself ain't big, it's just a small mass less than 1kg (half that for mine), ain't much to heat up
    usually when I run Furmark the GPU temps stabilise around 4-5 min in, so doesn't that mean it has reached equilibrium?
    Not all PC cases have good ventilation. So the 1 hour test allows for everything inside to heat up to its proper temperature. Granted, since the reviews above were done open-air, 15 minutes should be enough. But still, depending on the heatsink design and how fast it removes heat, not all GPUs will come to thermal equilibrium in 4-5 minutes.

    The mass of the boards is small, sure. But when you have just a few watts here and there of heat generated on the PCB (i.e. not from the GPU chip), then it can take longer than that. Plus, 1 hour is a more realistic load if someone were to use the card for, say, a gaming session.
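    The point about different parts warming up at different rates can be sketched with a simple first-order (exponential) thermal model. The time constants below are made-up illustrative values, not measurements of any real card:

```python
import math

def temp_at(t_min, t_ambient, delta_t_steady, tau_min):
    """First-order thermal model: temperature approaches its
    steady-state value exponentially with time constant tau."""
    return t_ambient + delta_t_steady * (1 - math.exp(-t_min / tau_min))

# Illustrative (made-up) time constants: a GPU die behind a heatsink
# settles in minutes, but the air inside a poorly ventilated case
# can keep creeping up for tens of minutes.
for label, tau in [("GPU die", 1.5), ("case air", 20.0)]:
    for t in (5, 15, 60):
        temp = temp_at(t, t_ambient=25.0, delta_t_steady=50.0, tau_min=tau)
        print(f"{label}: after {t:3d} min -> {temp:.1f} C")
```

    A part with a small time constant looks "stable" after 5 minutes of Furmark, while slower-heating things (case air, the PCB) are still rising - which is why a 1-hour test catches what a 5-minute run does not.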

    Originally posted by CapPun View Post
    also you said the high temps will kill the GPU over time, but the VRM temps are even higher, so basically this means the GPU has a lower tolerance than the VRMs?
    Again, all silicon has the same temperature "tolerance".
    But the reason why GPUs are more sensitive has to do with mechanics - particularly how the silicon core (the black, shiny, rectangular part on which you apply thermal paste) of the chip is attached to the substrate (the square PCB that is then attached to the video card PCB.) All modern GPUs and CPUs are of the flip-chip type:
    https://en.wikipedia.org/wiki/Flip_chip
    And what often fails with flip-chip technology is the solder balls ("bumps") between the silicon core and the PCB substrate - mostly due to thermal stress and the different expansion rates of the silicon core and the substrate. Once that happens, you can't really do anything (permanent) about it. The underfill material will prevent a reflow from completely and properly re-bonding the silicon core to the PCB substrate. That's why once a chip fails, a "successful" reflow will usually only fix it temporarily.

    And then there is also the junction-to-case thermal resistance that I talked about earlier, which varies across silicon semiconductors due to their different case packages and configurations. That's why the maximum temperature for different GPUs and CPUs may also be different (for example, the Core 2 Quad Q6600 has a maximum allowable case temp of 66.2C, while for the Athlon II X4 640 it is 71C.)

    Originally posted by CapPun View Post
    speaking of GPU lifespan this is my previous card:
    https://www.bhphotovideo.com/images/...580_744401.jpg

    it has the EVGA brand on it but it's essentially a basic ("founder"?) edition with the typical mini-fan & shit cooling
    got it in 2012 & used it mainly for regular gaming & Aida64 said it easily went above 80°C in gaming *_* yet it lasted over 6 years, even the fan was intact (and those mini fans are supposed to be very flimsy), then I recently sold it for a cheap price, apparently the customer hasn't complained since
    so how could such a power-hungry card (a generic one at that) with shit cooling last that long?
    Again, it has to do with variations in manufacturing.
    Usually, the card will be designed to at least outlast the warranty period and maybe work for a few years after that... and more often, designed for around 3-4 years, as that's when most people upgrade their hardware these days. So for the card to work that long, they usually design it with a bit of "overhead"... i.e. most cards will probably make it to 5-7 years without issue. But I say "most", because manufacturing is not a perfect process. In general, every manufacturer tries to crank out a product with at least a 90% success rate, if not closer to 95%. That means, 90-95% of the stuff they make can be sold as a working product. The rest (5-10%) is usually discarded because they don't work properly even after "binning". Now, of those 90-95%, not all of the cards will be created equal. Perhaps only 50-60% are made close to "perfect" (i.e. how it was designed to be and last for 5-6 or more years), and 20-25% is "good" but not great (maybe lasting only 2-4 years). The last 10-15% are what you'd call the "lemons" - i.e. cards that function properly and pass QC, but just barely, and thus end up dying quickly after purchase (a few weeks... or months... or a year... or two... you name it.)

    And because there is no telling what you're going to get, that's why you may get a card that works very long even at high temperatures, or you may get a real "lemon" that dies rather quickly even if you kept the temperatures fairly reasonable.

    But it's a fact that at the end of the day, more thermal stress + more thermal cycling will kill a flip chip faster. So if your card lasted for 6 years running regularly at 60-80C, consider how much longer it could have lasted if you never let it overheat that much.



  • CapPun
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    that sucks, in summer it can go above 35°C in my room :/ but usually it's around 25°C

    what I don't get is why it would take so long (1h?) for a card to reach temp equilibrium? the card itself ain't big, it's just a small mass less than 1kg (half that for mine), ain't much to heat up
    usually when I run Furmark the GPU temps stabilise around 4-5 min in, so doesn't that mean it has reached equilibrium?

    also you said the high temps will kill the GPU over time, but the VRM temps are even higher, so basically this means the GPU has a lower tolerance than the VRMs?



    speaking of GPU lifespan this is my previous card:
    https://www.bhphotovideo.com/images/...580_744401.jpg

    it has the EVGA brand on it but it's essentially a basic ("founder"?) edition with the typical mini-fan & shit cooling
    got it in 2012 & used it mainly for regular gaming & Aida64 said it easily went above 80°C in gaming *_* yet it lasted over 6 years, even the fan was intact (and those mini fans are supposed to be very flimsy), then I recently sold it for a cheap price, apparently the customer hasn't complained since
    so how could such a power-hungry card (a generic one at that) with shit cooling last that long?



  • momaka
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by CapPun View Post
    so with such a difference in temps I reckon one of them 2 reviews don't know what they're doing, so which of the 2 do you think got it right?
    Well, it's not that one of them didn't do things right - it's rather just a difference in the methods used for measuring the temperature.

    The guru3d review used a thermal camera, which only measures temperatures at the surface of things. Thus, the GPU chip temperature will be higher than what they measured. But for things like the VRM, where they measured temps on the back of the card, those temperatures are probably more or less realistic, give or take 3-5C (because, as I mentioned, it all depends on PCB design - if there are lots of vias connecting the VRM copper tracks, then PCB temperatures would be pretty even on both sides of the PCB.)

    That said, what I don't like about the guru3d review is that they simply ran a test for at least 15 minutes (supposedly). I'd like to see the card tested for 1 hour at 100% load. That would give enough time for everything on the card to more or less reach thermal equilibrium. Also, the guru3d review doesn't mention what the room temperature was.

    Room temperature alone can impact things quite a bit. For example, on most of my cards that run with the fan set to one constant speed (typically my low and mid-range cards with a 50-70 mm fan running at 7V), the maximum temperature rises proportionately with the room temperature. One of my early modded cards, an old GeForce 6800 XT, runs at about 51C when my room temperature is around 20C in early winter. Late in the summer, when I get 30C room temperatures, that card runs at about 60-62C - i.e. proportionately hotter. Of course, this is because I have the fan set to run at a constant speed. With an active cooling profile on newer GPUs, the temperature may not change that much with a rise in ambient temperature. But then the fan would be turning faster and thus be louder.
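    The GeForce 6800 XT numbers above follow a simple rule of thumb: with a fixed fan speed, the temperature rise over ambient (delta-T) stays roughly constant, so GPU temperature tracks room temperature. A minimal sketch using those same figures:

```python
# With a fixed fan speed, a card's rise over ambient stays roughly
# constant. Delta-T below is taken from the 6800 XT example in the
# post: 51 C GPU at 20 C ambient.
DELTA_T = 51.0 - 20.0  # 31 C rise over ambient

def gpu_temp(ambient_c):
    """Predicted GPU temperature for a constant-delta-T card."""
    return ambient_c + DELTA_T

for ambient in (20, 25, 30, 35):
    print(f"ambient {ambient} C -> GPU ~{gpu_temp(ambient):.0f} C")
```

    At 30C ambient this predicts ~61C, matching the observed 60-62C.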

    This is why it is very important to also note and include room temperatures in reviews, like the realhardwarereviews website did. On that matter, I think their review is more realistic, because it shows what GPU temperatures you will get at what room temperature. Moreover, they "prime" the card by letting it sit for a good amount of time "at the desktop" before performing tests on it. This is good, because a "cold" card will otherwise yield lower temperatures.

    The only thing that I would say is not realistic about that review (or probably most reviews anyways) is that they always use a very cool ambient temperature. Some people will say 20C is on the warm side, but I beg to differ. Most people run their homes closer to 23-25C... though that also depends on culture and physical location. But all in all, 20C is still unrealistic, because most PCs (even those with lots of airflow) will still get hotter inside than the ambient room temperature. Thus, I'd say that 25C would be a more realistic test temperature.

    Better yet, I think 30C should be the norm for testing things, as not all cases have good airflow. Moreover, some people just live in hot climates, and a 30C ambient temperature for them is not unusual. So hardware really should be designed to work in at least 30 or 35C environments. Of course, for most of these video cards, that would mean even more massive coolers (if not some form of liquid-air hybrid cooling) - which would be expensive, and many manufacturers simply would rather not do that.

    Originally posted by CapPun View Post
    is that tiny card really as efficient in cooling as the 1st reviewer claims or just mediocre like the 2nd reviewer says?
    Given its small heatsink and PCB size... yes, it does excellent for what it is.
    But unfortunately, those temperatures are still way too high and will kill the GPU chip after some time. If you want a very long life out of a GPU, the mid-50s (in C) should be your target for the GPU... and in most cases, that would mean liquid cooling.

    Originally posted by CapPun View Post
    and do you reckon those VRM temps in the 1st review are for real? cause such low VRM temps are super rare, when bigger cards have VRMs heating up much more than the GPU itself
    For the measurements they show at the back of the card... yes, they do appear realistic. But again, given PCB construction, it's quite possible the actual components on the VRM are running 5-10C hotter.



  • CapPun
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by momaka View Post
    Well, that resolution certainly isn't great, but should be enough to see any hot components on the board - or at least the "hot spots" if the components are too small to see.
    gr8! I actually found a FLIR camera, it's low-res 80x60 but it's one of those mega-rare ones that don't run on proprietary batteries (it uses standard AAA batteries, which tbh every camera should use, but I know the only reason they use proprietary batteries for most of their products is profit/planned obsolescence)

    it's the FLIR TG130 might buy that one after all I prefer a way of measuring temps without touching anything :/

    So it goes both ways: a cooler can help cool the VRM better, but it could also make it hotter. It really depends if the cooler is adequate
    so what about the card in the OP then does it look like a good overall cooler (for an ITX card) or not?

    MSI 1070 Aero ITX
    https://www.guru3d.com/index.php?ct=...e171002a287bc1

    I ask cause when looking for reviews I saw two contradictory ones with regard to temps, & since you seem to know your stuff, maybe you can make some sense of it - that'd be awesome

    so the first review is from guru3D (same site I posted those hsf 2 pics from)

    they say the card's temps stay below 70°C when on load - not just the GPU but also the VRMs!!! (which is unusual cause usually VRM temps go over 80°C on much larger cards like the MSI Gaming series or Asus Strix, yet here's this little ITX card pulling off the impossible, hence why I'm suspicious)
    https://www.guru3d.com/articles_page..._review,9.html


    BUT
    on the other hand here's another review which says the GPU temp hits 76°C under load (that's 10°C hotter than the GPU temp in the other review)
    no mention of VRM temps but I reckon they're even hotter
    they didn't use an IR cam but the GPU already got a sensor anyway
    https://www.realhardwarereviews.com/...c-gtx-1070/10/


    so with such a difference in temps I reckon one of them 2 reviews don't know what they're doing, so which of the 2 do you think got it right?
    is that tiny card really as efficient in cooling as the 1st reviewer claims or just mediocre like the 2nd reviewer says?

    nb. and do you reckon those VRM temps in the 1st review are for real? cause such low VRM temps are super rare, when bigger cards have VRMs heating up much more than the GPU itself
    Last edited by CapPun; 04-04-2019, 09:13 PM.



  • momaka
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by CapPun View Post
    ? that sux - figure they could use that method at least on them super-long cards (more than a foot long), else what's the point in making them so dangerously long
    Well, that's exactly why they make them long - the whole board acts as a heatsink for the MOSFETs / VRM.

    Most PC PCBs nowadays are typically made of fiberglass or similar equivalents. While fiberglass is not a very good heat conductor, the copper traces on the board are. And since the GPU VRM needs to provide a lot of current, the traces going between the GPU VRM and the GPU chip are very thick. So essentially, those big copper traces can be thought of as heatsinks (and that, they are.) The larger they are, the more heat they can take away from the MOSFETs and VRM inductors.

    Originally posted by CapPun View Post
    I reckon soldering would be involved? (and no risk of using low quality solder I guess so I'd have to shell out enough $ to buy a Weller brand or something)
    Not at all (and actually, you wouldn't be able to solder the thermocouple wire even if you tried.)

    Type-K thermocouple thermometers basically measure temperature with a pair of wires that are welded together at the end. The weld, when heated, generates a small voltage, which the thermometer then senses and converts into a temperature reading.

    These days, you can find type-K thermometers pretty cheaply. Look up TM-902C on eBay or AliExpress - it's a very common meter. I am using one myself. I think they can be found for even less than $5 shipped now.
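    For the curious, here is roughly what a type-K meter like the TM-902C does internally. The ~41 uV/°C figure is an approximate type-K sensitivity near room temperature (real meters use the standard reference tables rather than a single linear factor, so treat this as an illustration only):

```python
# Approximate type-K thermocouple conversion: the weld junction
# produces about 41 uV per degree C of difference between the hot
# junction and the meter's own (cold) junction.
SEEBECK_UV_PER_C = 41.0  # approximate type-K sensitivity near room temp

def hot_junction_temp(measured_uv, cold_junction_c):
    """Convert a thermocouple reading in microvolts to the
    hot-junction temperature, with cold-junction compensation."""
    return cold_junction_c + measured_uv / SEEBECK_UV_PER_C

# e.g. 1.64 mV measured with the meter itself sitting at 25 C:
print(hot_junction_temp(1640.0, 25.0))  # -> 65.0
```

    This also shows why the meter needs to know its own (cold junction) temperature: the thermocouple only senses a temperature *difference*.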

    Originally posted by CapPun View Post
    I actually considered buying a thermal camera but the only affordable ones are resolution 80x60 tops dunno if that's enough to get a good 'IR snapshot' of something like a graphics card?
    Well, that resolution certainly isn't great, but should be enough to see any hot components on the board - or at least the "hot spots" if the components are too small to see.

    Originally posted by CapPun View Post
    ??? so for this you gotta touch the back of the card right? since touching the top of the VRMs directly is impossible cause the heatsink & fans are in the way.
    Not in my case, as my modded heatsink didn't cover the MOSFETs or associated coils at all - thus I could touch them. But the stock cooler covers everything. And since the entire stock cooler runs at more or less 60C at a minimum, I can be pretty sure the heatsink will be "dumping" heat onto any parts of the board that would usually run cooler. On the other hand, any parts that may run hotter will be clamped to the cooler's temperature, more or less.

    So it goes both ways: a cooler can help cool the VRM better, but it could also make it hotter. It really depends on whether the cooler is adequate for the video card and whether the fan profile isn't set to some silly curve that doesn't kick the fans harder until things are already cooking (but unfortunately, most are, so that video cards can get better scores in reviews for being silent/quiet.)

    Originally posted by CapPun View Post
    so is the temperature on the front & back of the VRM (or CPU or any other chip for that matter) the same? I mean graphics cards ain't exactly made of metal, the PCB's made of some weird superhard plastic thingie, so is the temp on either side of the PCB still (more or less) the same?
    It depends.
    As mentioned above after your first quote, PCBs do have copper traces, so those can act like heatsinks. That said, modern PC PCBs almost always have at least 4 layers (i.e. 4 levels with copper tracks/traces.) If the layers are connected well, with lots of vias (as is often done on VRM output traces), then the temperature between the two connected layers will be pretty close. If one layer is on one side of the card and the other is on the opposite side and they are connected with lots of vias, the temperature will be similar on both sides. If not, then there's no telling.
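    To see why "lots of vias" matters, here is a back-of-the-envelope estimate of the thermal resistance of via stitching between two layers. The via geometry (0.3 mm drill, 25 um plating, 1.6 mm board) is an assumed typical value, not taken from any particular card:

```python
import math

# Rough thermal resistance of plated through-hole vias connecting
# two copper layers. Dimensions are assumed typical values.
K_COPPER = 385.0          # thermal conductivity of copper, W/(m*K)
BOARD_THICKNESS = 1.6e-3  # m
DRILL_DIA = 0.3e-3        # m
PLATING = 25e-6           # m, copper barrel wall thickness

# Copper cross-section of one via barrel (thin annulus approximation)
area = math.pi * DRILL_DIA * PLATING
r_one_via = BOARD_THICKNESS / (K_COPPER * area)  # roughly 175 K/W

def r_via_array(n):
    """n identical vias conduct heat in parallel."""
    return r_one_via / n

print(f"1 via:   {r_one_via:.0f} K/W")
print(f"20 vias: {r_via_array(20):.1f} K/W")
```

    One via on its own is a poor heat path (~175 K/W), but stitching twenty of them in parallel brings it down to single-digit K/W - which is why VRM output planes are peppered with vias.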



  • CapPun
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by momaka View Post
    None that I know of. This technique takes too much board space.
    ? that sux - figure they could use that method at least on them super-long cards (more than a foot long), else what's the point in making them so dangerously long

    That said, if you have a card that doesn't show VRM temps (most don't), you can do as sjt suggested and attach a thermocouple.
    I'd have to look that up first lol

    I reckon soldering would be involved? (and no risk of using low quality solder I guess, so I'd have to shell out enough $ to buy a Weller brand or something)

    I actually considered buying a thermal camera but the only affordable ones are 80x60 resolution tops, dunno if that's enough to get a good 'IR snapshot' of something like a graphics card?

    I often simply use the "highly scientific" finger test - if I can hold my finger on it, it is running below 60C (which is the temperature that is considered instantly damaging to skin cells, and thus where most humans start to feel pain.) With that said, and going back to my HD4850 video card example...
    ??? so for this you gotta touch the back of the card right? since touching the top of the VRMs directly is impossible cause the heatsink & fans are in the way. so is the temperature on the front & back of the VRM (or CPU or any other chip for that matter) the same? I mean graphics cards ain't exactly made of metal, the PCB's made of some weird superhard plastic thingie, so is the temp on either side of the PCB still (more or less) the same?



  • momaka
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by CapPun View Post
    are there any known video cards so far that have done that? which brand? (preferably good brands like MSI / Asus / Gigabyte)
    None that I know of. This technique takes up too much board space. Manufacturers would rather just slap a heatsink on top of the FETs' cases / VRM to help them cool down a few degrees, just enough to keep things operational - that is, on the high-end GPUs anyways. On low and mid-range GPUs, the VRMs are rarely stressed much and thus often don't need any kind of cooling apart from what the PCB can heatsink.

    Originally posted by CapPun View Post
    and in the end does this mean the VRMs heat up less than the GPU? (under full load like Furmark)
    "heat up" can mean two things here:
    1) temperature: i.e. how hot a part runs
    -or-
    2) power dissipation: how much heat is given off by a part per unit of time

    For 1)... it doesn't matter if it's the GPU chip or the VRM or any other silicon/transistor part. All Si semiconductors are limited to about 125C before they start to sustain permanent damage. That 125C figure is for the core/die of a transistor, not the case temperature. Because there is thermal resistance between the case and the core/die, the overall case temperature should be kept much lower.

    For 2)... obviously the GPU chip is the part that should dissipate the majority of the power here (but that doesn't mean it should run hotter - if you have a good heatsink on it, at least ). The VRM just converts voltages, but also dissipates some heat due to inefficiency. Typical VRM efficiency, however, is more than 90%, and it's not uncommon to see it above 95% on high-end video cards. On that note, if you take a high-end high-power GPU (like an RTX 2080, or a VEGA 64... or even an R9 2/390 for example), where you have, let's say, 200-250W of max power draw, a VRM with 95% efficiency will dissipate close to 10-12.5W of power. This is quite a bit of heat for just the PCB to get rid of (if you don't think so, consider a small hot glue gun, which will typically reach above 100C at that same power draw). So for video cards like that, the VRM heatsink may actually be necessary... but again, unless it is soldered to the board to get rid of the heat directly, it won't be very efficient. It is probably just enough to keep things "operational" (but for how long, that's another question.) IMO, many high-end video cards today are designed to run right "on the edge" of their components' acceptable limits when under 100% load, which is why they fail so often when loaded that way (think: Bitcoin mining cards.)
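    That VRM loss arithmetic, spelled out: a converter delivering P_out at efficiency eta draws P_out/eta from the input and dissipates the difference as heat. (Note that defining efficiency strictly as P_out/P_in gives slightly higher losses than the "5% of output" shorthand.)

```python
# Heat dissipated by a VRM: input power minus output power.
def vrm_loss_w(p_out_w, efficiency):
    """Heat dissipated by a VRM delivering p_out_w at the given
    efficiency (efficiency = P_out / P_in)."""
    p_in = p_out_w / efficiency
    return p_in - p_out_w

for p_out in (200.0, 250.0):
    print(f"{p_out:.0f} W out @ 95% -> "
          f"{vrm_loss_w(p_out, 0.95):.1f} W of heat in the VRM")
```

    For a 200-250W card this works out to roughly 10.5-13.2W of heat spread across a handful of MOSFETs and inductors, in line with the hot-glue-gun comparison above.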

    Originally posted by CapPun View Post
    between VRM & GPU which is designed for higher heat tolerance?
    As mentioned above... all silicon semiconductors are limited to more or less 125C internal temperature. Depending on the case's thermal resistance, the maximum temperature a part is "designed for" will vary. This is why you often see GPU manufacturers say the maximum core temperature is, say, 90C. This means that at 90C, some parts of the GPU chip inside may already be close to 125C. Meanwhile, a MOSFET may be rated to dissipate something like 1W @ 50C max (just making up some numbers here). Thus, if this MOSFET is to go above 50C, then its power dissipation must be derated (by how much depends on the MOSFET itself - something that is in its datasheet.)
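    The derating described here is usually given in datasheets as a linear fall-off from the 25C rating down to zero at the maximum junction temperature. A sketch with made-up numbers, in the same spirit as the example above:

```python
# Linear power derating, as commonly drawn in MOSFET datasheets:
# full rated power at a 25 C case temperature, falling linearly to
# zero at Tj_max. All numbers are illustrative, not from a datasheet.
def derated_power(p_rated_25c, t_case_c, tj_max_c=125.0):
    """Allowed dissipation (W) at a given case temperature."""
    if t_case_c >= tj_max_c:
        return 0.0
    return p_rated_25c * (tj_max_c - t_case_c) / (tj_max_c - 25.0)

for t_case in (25, 50, 75, 100, 125):
    print(f"case {t_case:3d} C -> {derated_power(2.0, t_case):.2f} W allowed")
```

    So a part "rated" 2W on paper may only be allowed ~1W of dissipation once its case is sitting at 75C inside a hot video card - which is exactly why case temperature matters so much.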

    Originally posted by CapPun View Post
    that's another question I had actually, how do you measure VRM temps? so far I've never seen a video card that shows VRM temp, they only show GPU temp (so no sensors on the VRM), why is that?
    I think some newer video cards actually do have a VRM temperature sensor. Typically, it would only be the high-end high-power ones, as those are the most likely to run into trouble.

    That said, if you have a card that doesn't show VRM temps (most don't), you can do as sjt suggested and attach a thermocouple.

    I often simply use the "highly scientific" finger test - if I can hold my finger on it, it is running below 60C (which is the temperature that is considered instantly damaging to skin cells, and thus where most humans start to feel pain.) With that said, going back to my HD4850 video card example:
    Before: the whole video card would run too hot to touch with its stock cooler and fan cooling profile. GPU temps were regularly above 60C, even at idle (and 80-85C at full load), which means the entire PCB was running at around 60C at least, as the stock cooler is attached to everything.
    After my heatsink mod: GPU temps are around 40C idle and 55-65C max under load (depending on ambient temperature), but the VRM coils remain only warm - I'd guesstimate around 50-55C max under load.
    Last edited by momaka; 04-03-2019, 07:15 PM.



  • stj
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    tape or glue a k-probe to a fet or coil - coils can get hot too.

    but why?
    unless you're planning on replacing the fets with ones that have a lower on-resistance.



  • CapPun
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by momaka View Post
    If those MOSFETs really need extra cooling, then small heatsinks should be soldered to the PCB where the Drain on each MOSFET is soldered.
    are there any known video cards so far that have done that? which brand? (preferably good brands like MSI / Asus / Gigabyte)

    and in the end does this mean the VRMs heat up less than the GPU? (under full load like Furmark)

    between VRM & GPU which is designed for higher heat tolerance?

    I have a reflown HD4850 here that I fitted/modded with a large Xbox 360 CPU heatsink for the GPU chip. The RAM chips and MOSFETs are not covered with anything, like they normally would be with the stock cooler. Nevertheless, both the RAM and the MOSFETs in the VRM run cooler despite not having anything on them. Why? Simply because the GPU chip is dissipating all of the heat through its heatsink and not dumping it back on the RAM and VRM, which is what the stock cooler does when its tiny fan (and silly fan "cooling" profile) can't deal with removing the heat from the stock cooler.
    that's another question I had actually, how do you measure VRM temps? so far I've never seen a video card that shows VRM temp, they only show GPU temp (so no sensors on the VRM), why is that?

    and how do you measure VRM temps on a video card?



  • eccerr0r
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Yes "spacer" -- to reduce the amount of torque applied to the oversized heatsink and heat spreader of the GPU chip. If there was a gap around the center chip instead, you could squeeze one side and bend the board, applying stress to the spreader and the die underneath. The RAM chips already provide some of the height needed and reduce the amount of "stuff" needed to keep the board from flexing when the heatsink is depressed on an edge.



  • momaka
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by CapPun View Post
    ok what about the VRM's then - is it better for those to be cooled by the main cooler or by a small separate heatsink?
    Neither.

    They should be cooled through the PCB only. That's because MOSFETs have the lowest thermal resistance through their Drain tabs, which are soldered to the board. The epoxy case has rather lousy thermal conduction, and that's why it doesn't really matter much whether the MOSFETs have a heatsink on top of them or not. If those MOSFETs really need extra cooling, then small heatsinks should be soldered to the PCB where the Drain of each MOSFET is soldered. This will give the best cooling performance. Anything else is mostly just a gimmick.

    I have a reflowed HD4850 here that I fitted/modded with a large Xbox 360 CPU heatsink for the GPU chip. The RAM chips and MOSFETs are not covered with anything, like they normally would be with the stock cooler. Nevertheless, both the RAM and the MOSFETs in the VRM run cooler despite not having anything on them. Why? Simply because the GPU chip is dissipating all of its heat through its heatsink and not dumping it back onto the RAM and VRM, which is what the stock cooler does when its tiny fan (and silly fan "cooling" profile) can't keep up with removing the heat from the stock cooler.
    Last edited by momaka; 03-31-2019, 05:51 PM.

    Leave a comment:


  • CapPun
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by retardware View Post
    Do they get hot at all?
    Not all RAMs actually get hot.

    To me this pad looks more like a spacer.
    "spacer"?

    Originally posted by momaka View Post
    No, it would be fine.

    BGA memory chips like these are generally designed to cool through the solder balls on the under side. Thus, with a properly-designed PCB with large ground and Vcc planes, the memory chips should cool themselves through the PCB a lot better than through a thick thermal pad.

    With that said, I often find that if the GPU chip itself is cooled well, then this will also allow the whole card's PCB to run cooler, which in turn will cool the memory chips better. In fact, some of these "combined" coolers for both the GPU chip and memory sometimes do more harm than good in terms of cooling, as they tend to spread the heat from the GPU chip to the rest of the card rather than actually remove it into the air. But generally, this mostly applies to high-end and hot-running cards with under-sized coolers.
    ok what about the VRMs then - is it better for those to be cooled by the main cooler or by a small separate heatsink?

    Leave a comment:


  • momaka
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by CapPun View Post
    HOWEVER I also noticed that the mem chips are only PARTIALLY covered by the heatsink
    ...
    so my question is, will this create a dangerous "thermal imbalance" between the half of the memory chips that's in contact with the heatsink & the other half that's not in contact?
    No, it would be fine.

    BGA memory chips like these are generally designed to cool through the solder balls on the under side. Thus, with a properly-designed PCB with large ground and Vcc planes, the memory chips should cool themselves through the PCB a lot better than through a thick thermal pad.

    With that said, I often find that if the GPU chip itself is cooled well, then this will also allow the whole card's PCB to run cooler, which in turn will cool the memory chips better. In fact, some of these "combined" coolers for both the GPU chip and memory sometimes do more harm than good in terms of cooling, as they tend to spread the heat from the GPU chip to the rest of the card rather than actually remove it into the air. But generally, this mostly applies to high-end and hot-running cards with under-sized coolers.
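    A quick back-of-the-envelope sketch of why a half-covered memory chip doesn't split into a hot half and a cold half: silicon conducts heat sideways quite well, so the die stays close to isothermal. All the numbers below are illustrative assumptions, not datasheet values:

    ```python
    # Estimate the worst-case temperature difference across a BGA memory die
    # when heat generated in the uncovered half must conduct laterally to the
    # covered half. Numbers are illustrative assumptions, not datasheet values.

    k_si = 150.0  # W/(m*K), approximate thermal conductivity of silicon
    L = 0.005     # m, ~5 mm lateral path from uncovered half to covered half
    t = 0.0003    # m, die thickness ~0.3 mm
    w = 0.010     # m, die width ~10 mm
    P = 0.5       # W generated in the uncovered half

    # Lateral conduction happens through the thin cross-section (t * w):
    A = t * w
    dT = P * L / (k_si * A)
    print(f"temperature difference across the die: {dT:.1f} K")
    ```

    Only a few degrees, and in practice the package substrate and the copper planes in the PCB spread the heat even further, which is why partial pad coverage isn't the disaster it looks like.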
    Last edited by momaka; 03-25-2019, 09:10 PM.

    Leave a comment:


  • retardware
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by CapPun View Post
    look at the VRAM chips - each is only half covered by the thermal pad & heatsink
    Do they get hot at all?
    Not all RAMs actually get hot.

    To me this pad looks more like a spacer.
    If it is actually intended to cool hot RAM chips, I think it could be an epic failure.

    But that depends on perspective.
    Maybe the calculations/experiments showed that the onset of stress damage (thermal expansion; die, balls and bonds cracking, etc.) shortly after the end of the warranty period ensures that demand for new boards will be there?
    Wouldn't this be clever?

    Leave a comment:


  • CapPun
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by brethin View Post
    Looks good to me, nothing you're pointing out makes any difference in the cooling of that card.
    I meant the memory chips

    look at the VRAM chips - each is only half covered by the thermal pad & heatsink. What about the other half? It will be a lot hotter than the covered half, right?
    like a thermal imbalance or something


    and what about that 1 VRAM chip which is only 1/4 covered - what's the use of covering it at all?

    Leave a comment:


  • brethin
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    Originally posted by CapPun View Post
    alright then, I've another question - it's about the cooling system itself (of that card)

    I notice that not only the GPU but also the VRM & the memory chips are covered by the heatsink

    HOWEVER I also noticed that the mem chips are only PARTIALLY covered by the heatsink:




    looks like a stupid design, right?
    notice how the heatsink & thermal pads only touch HALF of each memory chip (and for one of them, only 1/4 of it!)


    so my question is, will this create a dangerous "thermal imbalance" between the half of the memory chips that's in contact with the heatsink & the other half that's not in contact?

    basically, will half the chip still run super hot while the other half is colder?

    or instead, will this still cool each entire chip even if only half of it is in contact with the heatsink?
    Looks good to me, nothing you're pointing out makes any difference in the cooling of that card.

    Leave a comment:


  • CapPun
    replied
    Re: unknown (solid?) caps on MSI "military class" graphics card?

    alright then, I've another question - it's about the cooling system itself (of that card)

    I notice that not only the GPU but also the VRM & the memory chips are covered by the heatsink

    HOWEVER I also noticed that the mem chips are only PARTIALLY covered by the heatsink:




    looks like a stupid design, right?
    notice how the heatsink & thermal pads only touch HALF of each memory chip (and for one of them, only 1/4 of it!)


    so my question is, will this create a dangerous "thermal imbalance" between the half of the memory chips that's in contact with the heatsink & the other half that's not in contact?

    basically, will half the chip still run super hot while the other half is colder?

    or instead, will this still cool each entire chip even if only half of it is in contact with the heatsink?

    Leave a comment:
