I purchased a bare motherboard to try out the 4th-gen Xeons I have lying around, plus a 1U barebones I bought from a reputable computer recycler in Carrollton, TX. I plan to replace my AMD PC that started with 6 cores and now only has 4 working, so either the motherboard is going out or something is going bad inside the CPU.
I plan to use one as my house router and the other for NAS and dev-level work on IPFire. I am waiting on parts to come in so I can put them in normal cases instead of the 1U hair-dryer-fan cases.
No, at least here you can post your complete thought and reply before they suspend you. Even though I did notice the unreasonable political censorship here, like on that spam social media site.
The case arrived (Thermaltake Tower 500). I installed a heat sink with fan, planned my static-pressure ventilation, and blocked off areas of the case to enforce a positive-pressure cooling strategy. I am using high-velocity fans for the primary cooling and some aRGB fans for support ventilation, which will also help light things up in the closet where this is going. I've been thinking of making them thermostatically controlled.
Got all the parts in for the 40W build and started putting it together in a Raidmax case. For the stock 80W board that is getting transferred out of the 1U, I am still waiting on its power supply, though I temporarily put the 40W build's power supply in the white tower. That was only because I was analysing the cooling zones I'm establishing with aluminium tape.
Airflow isn't much of an issue, but the fancy fans really can't do much in that regard compared to real cooling fans. The main cooling fans I picked are the Iceberg Thermal Icegale Xtra.
I have the 40W machine built up and 190 installed on it. The burn-in after 48 hours went well: all exhaust is cool and the fans are running at low speed and quiet. I still want to connect the IPMI so I can configure the fans; some of these Supermicro motherboards let you turn the fans completely off and schedule them to turn on at temperature set points (a rough sketch of what I mean is below).

The Raidmax case, though a little flimsy, seemed to do the job, even though the bottom of the board is hanging out in outer space. I am not that worried about warping, since there is no weight on that end and the standoff mounts are not too far from the edge. It might even be an advantage, since now I have some airflow on the backside cooling the board in addition to the primary job of moving air across the heat sinks. I installed the motherboard with the fish paper that sat underneath the board in the 1U case and sealed the backside with aluminium tape. You'll also notice the case top and bottom are blocked off.

I rotated the case so the front of the machine is now the base; it made more sense to me to put the board in the vertical position. The intake is on the bottom and the venting is on the top, which increases cooling efficiency by using convection. The Thermaltake white case has that layout too, but it's made inefficient by not establishing proper airflow zones; it appears they intended that case for someone running liquid cooling. The cheap case with the three fans in the front, on the side without the front drive bays, seems to be the best enclosure for these boards if you're not using a normal rack-mount case.
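For the fan scheduling, this is roughly what I have in mind: a small script that polls the BMC over IPMI and flips the fan mode when the CPU temperature crosses a set point. This is only a sketch under assumptions, not the final thing: the address, credentials, and sensor name are placeholders, and the raw 0x30 0x45 call is the commonly documented Supermicro OEM fan-mode command, which varies by board generation, so check your board's docs before using it.

```python
# Rough thermostatic fan sketch: poll the BMC over IPMI and switch the
# Supermicro fan mode when the CPU temperature crosses a set point.
# Address, credentials, sensor name, and the OEM raw command are
# board-specific placeholders.
import re
import subprocess
import time

BMC = ["ipmitool", "-I", "lanplus", "-H", "192.168.2.10", "-U", "ADMIN", "-P", "secret"]
SET_POINT_C = 55      # ramp the fans up above this
HYSTERESIS_C = 5      # and back down once this far below it

def cpu_temp():
    # Read the temperature sensors from the BMC and pick out the CPU one.
    out = subprocess.run(BMC + ["sdr", "type", "Temperature"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "CPU" in line:
            m = re.search(r"(\d+)\s*degrees", line)
            if m:
                return int(m.group(1))
    return None

def set_fan_mode(mode):
    # Supermicro OEM fan-mode call (commonly 0x00 = standard, 0x01 = full).
    subprocess.run(BMC + ["raw", "0x30", "0x45", "0x01", f"0x{mode:02x}"], check=True)

loud = False
while True:
    t = cpu_temp()
    if t is not None:
        if not loud and t >= SET_POINT_C:
            set_fan_mode(0x01)
            loud = True
        elif loud and t <= SET_POINT_C - HYSTERESIS_C:
            set_fan_mode(0x00)
            loud = False
    time.sleep(30)
```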
I'm thinking of putting the IPMI on my device network (which is the orange zone, with modified firewall rules for no internet access).
This board has two front-panel LEDs for LAN, but I wish it had three so I could connect all the internal networks. I think there are a few options out there I could use to make indicators for each network and display them behind the glass front (which was originally the side of the case).
This board has a 1Gb LAN as well as an RJ45 serial port. If I had run into a problem building, I would have connected it, since the voltage and temperature sensors are routed to this management platform instead of living in the BIOS like on a normal computer. The thermostatic fan controls are in the IPMI as well, not in the BIOS.
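Because those sensors live in the BMC rather than the BIOS, you can pull the whole readout from another machine without touching the host OS at all. A minimal sketch, again with placeholder address and credentials and whatever sensor names your board actually uses:

```python
# Dump the BMC's sensor repository (voltages, temps, fan RPM) remotely.
# Host and credentials are placeholders for whatever the IPMI port gets assigned.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "192.168.2.10", "-U", "ADMIN", "-P", "secret"]

out = subprocess.run(BMC + ["sensor"], capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    name = line.split("|")[0].strip()
    # Sensor names differ per board; these are just example filters.
    if any(key in name for key in ("Temp", "FAN", "Vcore", "12V")):
        print(line)
```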
There are a few options for hooking up the IPMI, but if I don't connect the dedicated IPMI LAN, it will auto-failover to the first 10G port, which I've assigned to the green network.
So I was thinking of assigning it an orange IP, since orange is already blocked from the internet and is what I use to run my printers and other non-internet static servers.
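Giving the BMC a static address on orange is just a handful of ipmitool calls run from the host itself over the local interface, so no network is needed yet. A sketch under assumptions: the addresses below are made-up placeholders, not my actual orange subnet, and LAN channel 1 is the usual BMC channel but not guaranteed on every board; whether the BMC uses the dedicated port or fails over to the shared LAN is set separately in the BMC's own network settings.

```python
# Set a static address on the BMC's LAN channel (assumed to be channel 1).
# Run locally on the host; addresses and subnet are placeholders.
import subprocess

def lan_set(*args):
    subprocess.run(["ipmitool", "lan", "set", "1", *args], check=True)

lan_set("ipsrc", "static")
lan_set("ipaddr", "192.168.2.10")
lan_set("netmask", "255.255.255.0")
lan_set("defgw", "ipaddr", "192.168.2.1")

# Print the channel back out to verify the settings took.
subprocess.run(["ipmitool", "lan", "print", "1"], check=True)
```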
I guess I could make a "silver" network (a network running inside another network like orange, but with all routing controlled by the firewall) if I thought orange could be compromised.
Since I'm running this at my house, I don't think I need to make a black, out-of-band network for two machines. But if I go there, I might as well add a 1Gb interface, route IPFire's web GUI exclusively to it, and run all the management on an isolated network.
For anyone wondering what IPMI is: it's an embedded microcontroller (the BMC) that runs independently of the rest of the computer, lets you set up the hardware remotely, and can be used to remotely install an OS on the system.
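As a concrete example of the "remotely install an OS" part, from another machine you can force the next boot device, power-cycle the box, and then watch the installer over serial-over-LAN, all through the BMC. Host and credentials below are placeholders, and mounting virtual media (an ISO) is usually done through the BMC's web interface rather than ipmitool:

```python
# Kick off a remote install through the BMC: one-time boot override to PXE,
# power-cycle, then attach the serial-over-LAN console to drive the installer.
# Host and credentials are placeholders.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "192.168.2.10", "-U", "ADMIN", "-P", "secret"]

subprocess.run(BMC + ["chassis", "bootdev", "pxe"], check=True)  # next boot only
subprocess.run(BMC + ["power", "cycle"], check=True)
subprocess.run(BMC + ["sol", "activate"])                        # interactive console
```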
The X540 is well documented: Intel specifies its power under high temperature, long cable runs, one port vs. two, and all speed modes.
Physically measuring the heat sinks, they are the taller 15W versions of the 5W BGA-outline heat sinks. These could technically run without a heat sink, but they would end up with a package temperature around 55C (130F).
They are 10-12W for both 10Gb ports, depending on temperature, network load, and wire run.
So this 6-port configuration takes about as much power to run as a 24-port 1Gb managed switch, not counting the computer part of this (CPU/RAM/video).
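Back-of-the-envelope numbers behind that comparison, with the assumptions spelled out: I'm taking two dual-port X540 controllers at the 10-12W figure above plus a couple of watts for the 1GbE side (my reading of the port count), and a rough 15-30W range for a typical 24-port 1Gb managed switch. These are datasheet-level estimates, not measurements.

```python
# Rough NIC power estimate vs. a typical 24-port 1GbE managed switch.
# Per-part numbers are rough datasheet-level figures, not measurements.
x540_low, x540_high = 10, 12   # one dual-port X540 controller, both ports loaded (W)
num_x540 = 2                   # assumed: two controllers = 4x 10GBASE-T ports
gbe_w = 2                      # assumed: ~2 W total for the remaining 1GbE ports

nic_low = num_x540 * x540_low + gbe_w
nic_high = num_x540 * x540_high + gbe_w
switch_low, switch_high = 15, 30   # typical 24-port 1Gb managed switch range (W)

print(f"6-port NIC complement: ~{nic_low}-{nic_high} W")
print(f"24-port 1Gb managed switch: ~{switch_low}-{switch_high} W")
```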
It's not going to be any better than getting 10Gb cards and adding them to a normal PC. If I wanted something energy-efficient, I would find an Atom C3708 motherboard with the SoC onboard. That would be a 17W TDP with four 10Gb LAN ports built in.
Even if this system pulled 2kW, it's nothing compared to the bitcoin miners and vacuum-tube audio I run all the time, not to mention the custom tube amps I build, stress test, and burn in continuously for a week. Like I really care that it costs 30-50% more to run than a 1Gb network.