Re: Ratdude's main rig V3, maybe
See the bye-bye astro thread... and since you're VIP, the wheels V2 thread... A large chunk of my savings is going bye bye.
-
Re: Ratdude's main rig V3, maybe
Don't worry ratdude, I'm in the same boat as you. I got 12 cores and 8 gigs for $50; I don't really care what people have to say about it being sub-optimal. I'm aware it's not the fastest, but it was quite cheap to build, and with two 9800 GTs it will run Autodesk Inventor 2013 no problem and can run Far Cry 3 on high settings just fine.
And to answer a question earlier in the thread, I would much rather drive an old Mack truck every day than a 2010 Camaro.
-
-
Re: Ratdude's main rig V3, maybe
And closing the thread is my way of saying "I don't care what anybody has to say, screw it all". If you interpret such a move any other way, then you're mistaken.
-
Re: Ratdude's main rig V3, maybe
the thanks I get is a bunch of jeers telling me to sledgehammer it and build a slapped together POS. I'm fine with criticism, but only constructive criticism relevant to the build in question. Like "I personally would have routed that cable there" or "I wouldn't have used that card, but this card might have been a better choice", not "I think you messed up by building this rig, you should toss it and kill your hobby".
If you close/delete the thread, we will assume you agree with us and have realized what you're doing doesn't make sense.
-
Re: Ratdude's main rig V3, maybe
First it was my D630. Now it's this. Nobody gets me or my vision. Maybe I need to start putting black sheep stickers on my stuff since nobody likes them.
I'm misunderstood... always have been, looks like I always will be.
Heck, V2 never got this bad of a rap... Granted, most everything worked in it (unlike this rig that has had some hiccups).
I think the fundamental problem is that where you think I'm bitching, I'm really stating a challenge. I don't mind challenges... but apparently mentioning them = bitching.
OK, so some of the challenges have been kinda frustrating (where a company as mainstream as AMD can't even make a workstation chipset for a 64-bit CPU that completely works with any 64-bit OS). OK, no Windows Vista/7 support, I can see that. But no support for Linux (even when the board was new), which is very much a workstation/server OS? That blows my mind. 3ware and company had Linux support back then... why not AMD? Why? [/end rant]
This very well may be my last project thread. Heck, maybe even my last project post. If all that's going to happen is my threads get pissed on, then I'm not going to waste my time. Every lengthy post (esp. with pictures) I put a lot of work into... and the thanks I get is a bunch of jeers telling me to sledgehammer it and build a slapped together POS. I'm fine with criticism, but only constructive criticism relevant to the build in question. Like "I personally would have routed that cable there" or "I wouldn't have used that card, but this card might have been a better choice", not "I think you messed up by building this rig, you should toss it and kill your hobby". From what's been said, I've been getting a lot of the latter, and that doesn't do me any good.
Here's my current plan:
1. I'll leave the thread open for now, but if it gets bad, I'm closing it.
2. If I end up converting it to a K8WE rig, I'll put it in a new thread, this time being specific about what the rig is and isn't supposed to be.
-
Re: Ratdude's main rig V3, maybe
Maybe you should not be butthurt when like half a dozen people have tried to get you to understand that what you're doing DOES NOT MAKE SENSE!
-
Re: Ratdude's main rig V3, maybe
I couldn't care less about IPMI. It's all about BUILD QUALITY.
I take it nobody out here shares my vision... Maybe I should quit posting build threads.
-
Re: Ratdude's main rig V3, maybe
Atom is the new Celeron: same power & price as the good stuff, but a lot slower. Have you built a D525 system that idles at 17W at the wall?
The LGA1155 P8B-M has IPMI and iSCSI boot capability, which RD might want until he sees the price.
-
Re: Ratdude's main rig V3, maybe
You can do even better by getting a mini-ITX server board w/ an Atom D525
One thing that makes server boards so expensive isn't 'higher grade parts'... it's the remote management software. Built-in firmware connected to a specific NIC port lets someone with the right credentials reboot the machine, flash the BIOS, or, if supported, take PS/2-controller-emulation control over the system, without requiring any kind of remote software on the host.
But Ratdude747 doesn't need it.
-
Re: Ratdude's main rig V3, maybe
You can do even better power-wise by getting a mini-ITX server board w/ an Atom D525 (or a Celeron 870-something Ivy Bridge chip).
One thing that makes server boards so expensive isn't 'higher grade parts'... it's the remote management software. Built-in firmware connected to a specific NIC port lets someone with the right credentials reboot the machine, flash the BIOS, or, if supported, take PS/2-controller-emulation control over the system, without requiring any kind of remote software on the host.
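If anyone's curious what that remote management actually looks like in practice, it's usually poked at with the standard ipmitool utility. Here's a rough sketch (Python wrapping ipmitool) of checking and power-cycling a dead box over the BMC's NIC - the host, user, and password are made-up placeholders, not anything from this thread:
Code:
import subprocess

# Talk to the board's BMC over the dedicated management NIC.
# Host/user/password below are placeholders - substitute your own.
BMC = ["-I", "lanplus", "-H", "192.0.2.10", "-U", "admin", "-P", "changeme"]

def ipmi(*args):
    """Run one ipmitool command against the BMC and return its output."""
    result = subprocess.run(["ipmitool", *BMC, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
# ipmi("chassis", "power", "cycle")         # hard power-cycle, no OS involved
# ipmi("sel", "list")                       # dump the hardware event log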
-
Re: Ratdude's main rig V3, maybe
Old server boards are only good if you have free power. If power costs money, then go with anything 1155, such as the Asus P8B-M. No point in saving money on an aging server only to pay the difference to the power company.
The point of server boards is to prevent downtime. Of course this goes beyond just the board. You need power control, access control, storage control, service intervals, redundant connectivity, and staff. Come up short on any one of these and you're wasting your time buying a server board. You're not going to get the five nines the board promises, so why bother with old, slow junk when new, not-slow, not-junk hardware is dirt cheap?
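(For reference, "five nines" is a brutal target. A quick throwaway Python snippet shows how little downtime 99.999% actually allows:)
Code:
# Downtime allowed per year at each availability level.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: about {downtime:.1f} minutes of downtime per year")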
I run 3 servers with MSI H61 boards and Celeron G630s. Two servers take 17W at idle & 42W at full CPU and GPU. The other server has more hard drives, so it takes 36W at idle, and it idles all the time. That's 3 servers at about 100W average, and they have fabulous uptime, more uptime than I need because I can take them down any time I want.
The 3 Core 2 servers these replaced took 65W at idle and 130W at full CPU and GPU. That big difference caused an immediate and large drop in room heat. Summer's coming pplz!
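To put a rough dollar figure on "pay the difference to the power company", here's a quick sketch using the idle numbers above; the electricity rate is an assumed example, plug in your own tariff:
Code:
# Rough yearly cost of the idle-draw difference between the old Core 2
# servers (~65W each) and the three replacement builds (17W, 17W, 36W).
RATE_PER_KWH = 0.12          # assumed $/kWh - use your local rate
HOURS_PER_YEAR = 24 * 365

old_idle_w = 3 * 65
new_idle_w = 17 + 17 + 36

saved_kwh = (old_idle_w - new_idle_w) * HOURS_PER_YEAR / 1000
print(f"~{saved_kwh:.0f} kWh/year saved, roughly ${saved_kwh * RATE_PER_KWH:.0f}/year")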
Lesson: buy server hardware when the desktop hardware is not making the grade. My desktop hardware makes the grade.
-
Re: Ratdude's main rig V3, maybe
You'd think so, but not really.
There is a bit more attention paid to making them: due to all the extra hardware on the boards, there are more layers in the PCB, more data-integrity issues to take care of, and more signal-routing problems.
There's also more attention to how hot components (processors, VRM circuitry, etc.) are positioned on the board and to making sure the heatsink fins don't impede the airflow - the whole system is designed with the assumption that fast-moving air is blown over it 24/7 while the system runs.
Often, since it's known in advance which processors will work on the board, and the BIOS is stripped of anything allowing overclocking, they can size the heatsinks and the VRM to handle just a few percent above the maximum the board will ever see.
Again, the heatsinks can be a bit undersized because they know there's going to be forced cooling from large, noisy fans blowing over the board 24/7.
In contrast, desktop boards (not talking about the really budget stuff, just plain $80-150 boards) often have VRMs much beefier than server boards, simply because they have to support future processors, the makers expect users to overclock, and so on. Heatsinks are sized generously because you can't rely on the CPU fan blowing air over the VRM if the user goes with water cooling.
Server motherboards also aren't designed around ambient humidity and temperature, because it doesn't matter whether the user is in cold Canada, hot Australia, or humid India - the server is going to sit in a cold, air-conditioned datacenter.
It's actually expected for these motherboards to fail; they're not really designed for maximum reliability and long warranties. They're more or less designed for minimum downtime, so that a datacenter technician can easily swap the chassis (just eject the drives and plug them into another chassis prepared in advance).
Here's my Gigabyte motherboard: http://www.newegg.com/Product/Produc...irtualParent=1
Look at the warranty... 3 years.
Now look at server motherboards...
Intel server MB, 1-year warranty: http://www.newegg.com/Product/Produc...82E16813121410
Supermicro MB, 1 year parts / 3 years labor: http://www.newegg.com/Product/Produc...82E16813182240
Tyan server, 3 years: http://www.newegg.com/Product/Produc...82E16813151270
ASUS, 3 years: http://www.newegg.com/Product/Produc...82E16813131816
So what do you gain by going with server boards? Why don't they offer 5-10 years?
As a fun fact... on server hardware, what fails most often is actually the onboard network cards and SAS controllers... and that can happen to any board.
Like I said, if you compare old server hardware with current consumer hardware, consumer hardware wins. If you compare modern server hardware with modern consumer hardware, the server gear is a bit more reliable. Is it really worth the extra price?
Depends on what you do with the hardware, but for home/workstation use it's not really worth it.
Lastly... look at the server motherboards on newegg.com. Look how many are full of 1-2 star reviews.
They're full of incompatibilities: PCI Express slots not accepting video cards, onboard video with 2-4 MB of RAM that only does 1280x1024, RAM incompatibilities, or stupid stuff like the last two of the 8 memory slots not working until you do a BIOS update (really, quality control?!)... I could go on, but I've already written a ton of text.
Now if he was looking for a workstation board, that would be a different story. I built a single-Xeon Sandy Bridge WS system for my sister's ex-boyfriend and that thing was tits. It was an ASUS board with ICH10R RAID. But Intel ICH RAID 1 is actually not that bad, and the software that manages it is pretty good (it was Intel Matrix Storage before, now Intel Rapid Storage). It took ECC DDR3, though it would not go into ECC mode because the sticks didn't have a certain flag in the SPD profile. The board had no Enable/Disable ECC setting; it was 'automatic', and if your sticks did not have ECC flagged in the SPD, ECC would be disabled. That sucked, but the system had 16 GB of RAM, two 500 GB WD Blacks in RAID 1 for the OS, and two 1.5 TB WD Blacks as project storage (professional audio editing software), all in a Supermicro case with a 4-drive hot-swap bay. Thing was awesome.
Oh yeah, Win7 Pro 64-bit.
Go to Newegg, go to server boards, and look for anything with onboard audio - that's where things get closer to workstation boards than plain server boards.
-
Re: Ratdude's main rig V3, maybe
They are also aimed at critical environments - ones where the user expects the board not to fail, unlike a consumer who upgrades their board every few years. Thus, they are built to higher quality standards, and not with planned obsolescence in mind like all consumer gear.
There is a bit more attention paid to making them: due to all the extra hardware on the boards, there are more layers in the PCB, more data-integrity issues to take care of, and more signal-routing problems.
There's also more attention to how hot components (processors, VRM circuitry, etc.) are positioned on the board and to making sure the heatsink fins don't impede the airflow - the whole system is designed with the assumption that fast-moving air is blown over it 24/7 while the system runs.
Often, since it's known in advance which processors will work on the board, and the BIOS is stripped of anything allowing overclocking, they can size the heatsinks and the VRM to handle just a few percent above the maximum the board will ever see.
Again, the heatsinks can be a bit undersized because they know there's going to be forced cooling from large, noisy fans blowing over the board 24/7.
In contrast, desktop boards (not talking about the really budget stuff, just plain $80-150 boards) often have VRMs much beefier than server boards, simply because they have to support future processors, the makers expect users to overclock, and so on. Heatsinks are sized generously because you can't rely on the CPU fan blowing air over the VRM if the user goes with water cooling.
Server motherboards also aren't designed around ambient humidity and temperature, because it doesn't matter whether the user is in cold Canada, hot Australia, or humid India - the server is going to sit in a cold, air-conditioned datacenter.
It's actually expected for these motherboards to fail; they're not really designed for maximum reliability and long warranties. They're more or less designed for minimum downtime, so that a datacenter technician can easily swap the chassis (just eject the drives and plug them into another chassis prepared in advance).
Here's my Gigabyte motherboard: http://www.newegg.com/Product/Produc...irtualParent=1
Look at the warranty... 3 years.
Now look at server motherboards...
Intel server MB, 1-year warranty: http://www.newegg.com/Product/Produc...82E16813121410
Supermicro MB, 1 year parts / 3 years labor: http://www.newegg.com/Product/Produc...82E16813182240
Tyan server, 3 years: http://www.newegg.com/Product/Produc...82E16813151270
ASUS, 3 years: http://www.newegg.com/Product/Produc...82E16813131816
So what do you gain by going with server boards? Why don't they offer 5-10 years?
As a fun fact... on server hardware, what fails most often is actually the onboard network cards and SAS controllers... and that can happen to any board.
Like I said, if you compare old server hardware with current consumer hardware, consumer hardware wins. If you compare modern server hardware with modern consumer hardware, the server gear is a bit more reliable. Is it really worth the extra price?
Depends on what you do with the hardware, but for home/workstation use it's not really worth it.
Lastly... look at the server motherboards on newegg.com. Look how many are full of 1-2 star reviews.
They're full of incompatibilities: PCI Express slots not accepting video cards, onboard video with 2-4 MB of RAM that only does 1280x1024, RAM incompatibilities, or stupid stuff like the last two of the 8 memory slots not working until you do a BIOS update (really, quality control?!)... I could go on, but I've already written a ton of text.
-
Re: Ratdude's main rig V3, maybe
They are also aimed at critical environments - ones where the user expects the board not to fail, unlike a consumer who upgrades their board every few years. Thus, they are built to higher quality standards, and not with planned obsolescence in mind like all consumer gear.
-
Re: Ratdude's main rig V3, maybe
Servers also spend their lives in a chilled room with stable power and nice cooling. They're not designed to sit in a normal room.
-
Re: Ratdude's main rig V3, maybe
The only arguments for more stability on servers are that the memory is ECC and/or registered, which reduces the number of memory errors, and that the systems are theoretically better cooled (very noisy, high-power fans forcing lots of air through them).
The first argument is debatable nowadays. Transfer errors happen all the time, and ECC only catches corruption IN the memory modules themselves and repairs it. Data can also get corrupted on the PCB traces between the CPU and the memory slots (or between chipset and slots, or chipset and CPU in the case of this motherboard), in which case the CPU or chipset requests another transfer to get the proper data, or corrects the issue.
There's no question about it: data can become corrupted IN memory, so there's a benefit to ECC memory, but statistically such in-memory errors are rare - let's say 1 bit in 10^12 or something like that, so at best one flipped bit every 30-40 days of 24/7 operation.
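(To make that hand-waving a bit more concrete: a toy Python estimate of the interval between flipped bits. The per-bit error rate below is a made-up illustration, not a measured figure - swap in whatever rate you believe:)
Code:
# Toy estimate: mean time between single-bit memory errors.
# flips_per_bit_hour is an ASSUMED number for illustration only.
BITS_PER_GB = 8 * 1024**3

def hours_between_bit_flips(ram_gb, flips_per_bit_hour=1e-14):
    total_bits = ram_gb * BITS_PER_GB
    return 1.0 / (total_bits * flips_per_bit_hour)

hours = hours_between_bit_flips(16)   # 16 GB of RAM, made-up error rate
print(f"~{hours:.0f} hours, i.e. roughly {hours / 24:.0f} days between flipped bits")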
Data gets damaged far more often between the RAM and the CPU (don't ask me for white papers/app notes etc., I don't have them handy), and here we can start to discuss which is more likely to end up with corrupted data:
1. a system that has 16 memory slots all filled up with modules, with data going from RAM to chipset to CPU, and from there to the other CPU for cache synchronization and all that (so going through about 3-5 HyperTransport links and a fifth of the PCB), OR
2. a system that has 4 memory slots filled with 1 or 2 modules, moving data about 10-20 cm directly to the processor in a straight line, through high-speed differential pairs.
This is only the beginning. Keep in mind that comparing the old hardware to the new hardware is like comparing 10 Mbps to 1 Gbps network cards.
A modern processor, just like a modern network card, has much better "brains" for filtering noise and errors coming through the differential links.
To drive those 16 memory modules and to read the signals back, DDR1 needs something like 2.5-2.8V, and the frequency maxes out at 400 MHz (200 MHz x 2)... there has to be quite a lot of voltage swing for the chipset to recognize those bits of data, and if there's noise on the data path, the data can get corrupted more easily.
In contrast, modern processors have better signal-processing capabilities, just like 1 Gbps network cards have DSP units that engineers back then maybe didn't even imagine would be possible in consumer hardware.
A modern processor has the northbridge integrated, talks directly to the RAM, can handle even a 2133 MHz bus, and the RAM itself runs at 1.35-1.5V - I hope it's obvious that the memory controller in such a modern processor is much more capable of analyzing, filtering, and processing the signal coming in at those speeds.
If you use 1333 MHz or 1600 MHz modules with such modern processors, understanding the data is a piece of cake, and the processor can tolerate much more noise on the data lines before it needs to request a re-send from the memory.
We're not even going into how many individual memory chips are on a 1 GB DDR1 ECC module compared to a modern 2-4 GB DDR3 module.
1 GB DDR1 ECC is usually double-sided, 18-20 chips on a module... you can get 4 GB DDR3 modules that have only 8 chips on them, like this one for example: http://www.newegg.com/Product/Produc...82E16820301188
Do the math: what's going to have errors sooner - 20 chips x 16 modules = 320 failure points, or 8 chips x 4 modules = 32 failure points?
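(Same math as a short Python sketch; the per-chip failure probability is an arbitrary assumption, the point is only how the odds of at least one failure scale with chip count:)
Code:
# Chance that at least one memory chip fails in a year, assuming each chip
# fails independently with the same (made-up) probability.
def p_any_failure(chips_per_module, modules, p_chip=0.001):
    return 1 - (1 - p_chip) ** (chips_per_module * modules)

print(p_any_failure(20, 16))   # old server, 320 chips -> ~0.27
print(p_any_failure(8, 4))     # modern desktop, 32 chips -> ~0.03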
Sigh... this is only memory.
Next, on server boards we have tons of chips that all require 1.8V, 2.5V, 3.3V, 5V, or a mix of these, so you need power converters throughout the board... and you have chips made on older, larger process geometries, so they waste power and therefore generate a lot of heat (guess what happens to the capacitors around those heatsinks).
Modern motherboards use better DC-DC converters (newer, more efficient chips), and the power supplies themselves deliver better-regulated voltages.
Consumer hardware simply has fewer points of failure.
It's mostly designed to run from 12V, and everything is more integrated: the northbridge is in the CPU, the southbridge connects directly to the CPU, most devices you plug in are powered directly from 12V, and everything is much simpler, with less possibility of failure. You only have PCI Express (serial data through differential links), USB (again serial data through differential links), and SATA (again serial data through differential links), compared to ATA/SCSI/PCI/PCI-X, all but SCSI being parallel buses susceptible to noise.
Even cheaper boards come with polymer capacitors nowadays, and with solid-state DC-DC converters for the CPU that are more efficient and therefore generate less heat (in contrast to server hardware, which is designed with forced cooling in mind - high-RPM fans are supposed to blow air over the VRM circuitry, and this is painfully obvious in the motherboard layout).
-
Re: Ratdude's main rig V3, maybe
Whatever you say... But you don't know the history of a used board.
-
Re: Ratdude's main rig V3, maybe
I disagree. A used server board, like what has been used in these main rigs, will always be far more reliable than any brand new consumer board. So even if his rig isn't as modern as yours, it's gonna be far more reliable than any cheap Newegg build.
-
Re: Ratdude's main rig V3, maybe
That's fine, but if somebody is always complaining and going on about how they have no money and how they're a broke college student and how they want to move out and how they hate living with their parents blah blah blah... they can't make constant threads about the next overpriced used stuff they picked up off eBay/Topcat/Shovenose (yes, I'm being a hypocrite here, but hear me out)...
I would be interested to know how much has been spent on rigs V1/V2/V3... (not counting the price of the Windows 7 OS, and not counting that Supermicro case, because as a matter of personal preference ratdude747 wants his computer to look like a f*cking server, and that case could be reused for the new build)...
Compare that to the price of the Newegg build I threw together in five minutes a couple posts up.
I have a feeling my build would end up cheaper, in addition to being more reliable, faster, quieter, more environmentally friendly (if you give a shit), using less power so it's cheaper to run, and just overall better.