Rybka Chess Community Forum
Well, yes, human multitasking still has some bugs ;)
I wonder what the power draw is in that room, in kilowatts? :) :)
That is cool. :)
I meant to write one keyboard, mouse and monitor for every two computers, but it looks like that would have been just as wrong.
Yes, this is the switch I have, and it works for up to 8 computers using only one mouse, keyboard and monitor. Of course it can be confusing to shuffle all that around. I also managed to build an enclosure so that I can stack the motherboards on top of each other, each with its own power supply, VGA card and memory, connected by an Ethernet switch.

The plan was to use a 2 or 2.5 ton water cooling unit to cool water and channel it to the enclosure and then to the CPUs and MCH. I have found that chilled water works extremely well for cooling quad cores, and probably the future 6- and 8-core chips as well. Unlike vapor cooling, chilled water has an excellent delta curve when it comes to temperature spikes. The plan was to chill the water to around 12 °C, or just above the dew point (which depends on local humidity).

The problem is that hardware changes so fast that by the time you have designed and configured a setup, all the motherboards you bought are already obsolete. Still, I might go ahead and build this, since the approach is hardware independent and CPUs will always benefit from chilled water cooling.

I am still waiting for the Skulltrail-equivalent Nehalem, but haven't yet seen an overclockable motherboard. If that happens, I already have the vapor system to push it to the limit. It seems these new cores can easily go over 5 GHz, to 5.5 GHz, with good vapor cooling. Imagine 6 or 8 cores per socket running at 5.2 GHz! What would then be fun is to have that system play Lukas's cluster. It would definitely be an interesting fight, and probably the strongest chess ever played ... although I do think the cluster would come out on top.
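For the curious, the dew point mentioned above can be estimated from air temperature and relative humidity with the Magnus approximation. Here is a quick sketch (the coefficients are the standard Magnus-Tetens values; the function name is just mine):

import math

def dew_point_c(temp_c, rel_humidity_pct):
    # Magnus approximation; a and b are the usual Magnus-Tetens
    # coefficients, valid for roughly 0-60 deg C air temperature.
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Example: 28 deg C room air at 55% relative humidity
print(round(dew_point_c(28.0, 55.0), 1))  # ~18.1 deg C

Note that at those example conditions, 12 °C water would already sweat - which is exactly why the target water temperature has to track the local dew point.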
>The plan was to use a 2 or 2.5 ton water cooling unit to cool water and channel it to the enclosure and then to the CPUs and MCH.
That sounds pretty cool indeed :)
>The plan was to chill the water to around 12 °C, or just above the dew point (which depends on local humidity).
If you take a closer look at the photo of my cluster, you'll find (on the left of the desk) a device that indicates humidity, temperature and dew point. 2 years ago I used a chiller of the kind usually used to cool fish tanks for CPU cooling. It worked well, but here in Germany humidity is sometimes very high, so I had to keep adjusting the water temperature to the dew point, which wasn't fun for everyday usage. In your country this might be more constant - so I hope it works for you.
>Imagine 6 or 8 cores per socket running at 5.2 GHz! What would then be fun is to have that system play Lukas's cluster. It would definitely be an interesting fight, and probably the strongest chess ever played ... although I do think the cluster would come out on top.
Yes, you are right - this would be very interesting. According to my very rough calculation, a 2x8-core Nehalem computer at 5.2 GHz should be 95 Elo points stronger than a 2xW5580 at stock clock speed. My estimate is that the cluster is around 120 Elo points stronger than a 2xW5580. But I'll test that.
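For reference, the arithmetic behind such an estimate can be sketched with the usual rule of thumb that each doubling of engine speed is worth roughly 50-70 Elo (the exact Elo-per-doubling value and the near-linear scaling with cores and clock are simplifying assumptions):

import math

# Hypothetical 2x8 cores @ 5.2 GHz vs. a stock 2xW5580 (2x4 cores @ 3.2 GHz)
speedup = (16 / 8) * (5.2 / 3.2)                      # = 3.25

ELO_PER_DOUBLING = 55                                  # rule of thumb: ~50-70
print(round(math.log2(speedup) * ELO_PER_DOUBLING))    # ~94 Elo - close to the 95 above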
If you build that 2.5 ton cooling unit, it would be very nice to get some photos of it :)
Hi Lukas ... the water cooling unit here is quite common and is used in many homes. Our freshwater supply is distributed via black CPVC pipes, and in some places they are exposed ... so in the summer the water supplied to homes can be as hot as 52 °C+. These units look like a normal mini-split AC unit, but instead of an air fan coil, they simply have stainless steel (or copper) pipes that run through a water tank.

Basically my plan was to put the noisy compressor on the roof and have the cooling coils chill a 25 to 50 litre tank of cooling solution (these chillers are usually used to keep the water in a 1200 gallon insulated tank at 21 °C when the outside temperature is 48 °C). This would be connected to piping which I would then distribute to the "farm". The great thing about that is that noise would be minimal, you could run everything in one IN and OUT loop, and you would have more cooling capacity than you will ever need.

The problem with water chilling is that unless you want to put the motherboards in a hermetically sealed, zero-moisture compartment (very difficult to do), you have to take care of the dew point. With slight modifications you could make this unit chill the solution to even -10 °C or -20 °C, but without the aforementioned airtight box your system would look like a nice popsicle ... so it is always best to chill only as far as your dew point will allow.
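As a sanity check on the capacity: one ton of refrigeration is about 3.517 kW of heat removal, so a 2-2.5 ton unit has far more cooling power than even a heavily overclocked multi-socket box dissipates (the load figures below are illustrative guesses, not measurements):

KW_PER_TON = 3.517                  # 1 ton of refrigeration ~ 3.517 kW

capacity_kw = 2.5 * KW_PER_TON      # ~8.8 kW for the 2.5 ton unit
# Hypothetical load: four heavily overclocked quad cores at ~250 W each,
# plus ~300 W for MCH, memory and conversion losses.
load_kw = (4 * 250 + 300) / 1000    # = 1.3 kW
print(capacity_kw, load_kw)         # ~8.8 kW of cooling vs. ~1.3 kW of heat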
At the moment I plan to wait for the new Skulltrail entry with Nehalem cores. I already have a dual-compressor vapor system ready which I had used on my older Skulltrail (8 cores @ 4.8 GHz). This time I will add some water cooling to the MCH and memory as well, as I think they were limiting the overclocks on my older Skulltrail. My compressors are 1 hp each, so they can easily deal with any heat a Nehalem can throw at them, maybe even the heat of 4 sockets. It should be an interesting project, but I have yet to hear of an overclockable Nehalem dual-socket board.
The Nehalem EPs should be coming out in short order, within the next week or two. :)
We already used Nehalem EPs at the WCCC.
Maybe I'm thinking of Nehalem EX, but I guess that won't come out until later this year or early next.
No. 4 from the right is my surf computer; it didn't play. And the upper right computer is a test computer; it also didn't play in the Olympiad. But both of them were used for testing/debugging - so up to 60 cores were active.
Lukas, you should consider starting to use stickers, really.
Well, if you take a closer look, you will find that in fact there are labels on several computers telling me the computer name and the number for my KVM switch.
Impressive setup, Lukas! Does the cluster still work when it gets 30 degrees centigrade outside!?
Yes, Jeroen, it does. In fact I've got an AC in this room with a cooling capacity of 5.15 kW. The cluster only draws ~3.7-4 kW, so it should be enough. I've tested the cluster at a room temperature of 32 °C, which is no problem. By the way, during the games the room temperature was only around 28 °C, and on 4 days I could even switch my AC off.
That's what the water bottle is for. :)
FWIW - Frederic and I spent maybe 3 minutes yesterday talking about the 8-core tournament and 30 minutes talking about the cluster. The big-hardware thing seems to be a lot sexier from the media point of view.
I think people always want to see the strongest possible hardware ... at least to watch it play, and not necessarily to have it in their homes. We have all been very aware for a long time that chess computers will kick our butts on our home PCs, so the WCCC doesn't have to be about hardware we have or can have in our homes. This holds true for almost every sport or event.
It's also a matter of novelty. Computer chess is not a mainstream sport, so you can really only hope to interest a larger group of people for a short period of time. For this, having something new and huge and cool really helps.
size matters, boys will be boys ;)
Wow Lukas, that is very impressive!
Thou shalt not covet thy neighbor's 52-core cluster :-P
If that's a commandment, then I'm going to hell.
Which secret weapon is the blue laser-like gun in the bottom right corner?
I think it's Lukas' screwdriver. Of course he doesn't build computers in an amateurish way ;)
That's an infrared thermometer.
Seems way past time for Lukas to get a rack...
There are various advantages to having a scattering of normal desktops.
I can't think of any off hand, but there must be one or two.
Alan, there are several - but I'm not going to discuss them with someone who tells me that ducted airflow is good for overclocking. :)
There are not many things in this life that I am sure of. The superiority of ducted air cooling to random air cooling is one of them. A few courses in CFD would straighten you out! :-)
>The superiority of ducted air cooling to random air cooling is one of them.
In theory you are right. But please show me a ducted airflow solution that cools an overclocked So. 1366 CPU running at 250 W in reality.
I use IBM blade servers that hold 14 dual-socket 1366 blades in a 7U chassis and dissipate approximately 3 kW. Of course these run at the standard 2.93 GHz and are not overclockable (for liability reasons). This is serious hardware, so ducted cooling is standard. These units would melt down if they relied on random airflow.
There are a couple of reasons that the equipment you are buying uses random, rather than ducted, airflow:
1) There is not a large enough market for this application, certainly not the 30 or so people buying this stuff for computer chess. The gaming market is larger, but is more focused on graphics card performance.
2) There are not enough knowledgeable people in this realm to justify the design effort and additional cost. I would guess there are several years of CFD effort in each IBM blade server chassis (~$1M in engineering effort). Most cases probably have a few hours of equivalent effort, and cooling isn't nearly as high a priority as having a good-looking case.
Finally, FWIW, there used to be overclocking competitions associated with CES and, years back, with COMDEX. For those that had an air-cooled category, you would see carefully thought-out ducting.
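To put the blade chassis density in perspective, here is the simple arithmetic on the figures quoted above:

chassis_watts, blades, chassis_u = 3000, 14, 7    # figures from the post above
print(round(chassis_watts / blades))              # ~214 W per dual-socket blade
print(round(chassis_watts / chassis_u))           # ~429 W per rack unit of height

Roughly 430 W per U of chassis height is the kind of density where airflow has to be engineered, not left to chance.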
>I use IBM blade servers that hold 14 dual-socket 1366 blades in a 7U chassis and dissipate approximately 3 kW.
Oh I see, the next freestyle tournament can come to you now! :-) I hope the rack is properly filled.
PS: What's CFD?
"Use" means specify and use for sale in products. I'm not sure the Royal Netherlands Navy would want their blades playing chess...
CFD is shorthand for Computational Fluid Dynamics - basically numerical (finite element/volume) analysis routines for calculating flows of compressible or incompressible fluids.
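Just to give a flavor of what such codes compute, here is a toy 1D heat-diffusion loop using explicit finite differences - a vastly simplified stand-in for the 3D airflow models real CFD tools solve; all parameters are made up for the demo:

n, dx, dt = 50, 0.01, 1.0        # 50 cells of 1 cm, 1 s time step
alpha = 2e-5                     # thermal diffusivity, roughly that of air (m^2/s)
temps = [20.0] * n               # 20 deg C everywhere...
temps[0] = 80.0                  # ...with a hot wall at one end (the CPU side)

for _ in range(5000):
    prev = temps[:]
    for i in range(1, n - 1):
        # discrete heat equation: dT/dt = alpha * d2T/dx2
        temps[i] = prev[i] + alpha * dt / dx**2 * (prev[i-1] - 2*prev[i] + prev[i+1])

print(round(temps[n // 2], 1))   # midpoint nears the steady linear profile (~50 deg C)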
>I'm not sure the Royal Netherlands Navy would want their blades playing chess...
I'm sure playing chess is much better than torturing whales with sonar! :-)
PS2: Maybe you can test there whether 1. b3!! is the best opening!
>I'm sure playing chess is much better than torturing whales with sonar! :-)
I am just listening. It is the whales that are torturing me with false alarms...
>Maybe you can test there whether 1. b3!! is the best opening!
They might want a chess-playing supercomputer on board, but they would probably be afraid it would take over the boat and kill the crew...
>and kill the crew...
In the past there was an accident during a rock concert, and I asked my 12-year-old niece about it. Not bad, only one dead Dutchman! :-)
PS: Please note, this isn't my view!
It seems that the Dutch don't always get much respect from the Germans. A German guy I work with, who is usually pretty well informed, didn't know that the Dutch put up a good fight in WWII until they were persuaded to end their resistance or have their cities bombed.
>that the Dutch put up a good fight in WWII ...
I'm sure nobody in Germany knows that (besides historians). For the Germans, the Netherlands in WWII just means Blitzkrieg.
I only saw the bitterness of older Dutch people toward Germans when I visited that country as a student around 1975. But at that time I was a historian bastard! :-)
PS: It's amazing. When you get older, you learn to look at both (all?) sides. Maybe a real handicap (in the economy)! :-)
Yes, your ancestors were rather mischievous, weren't they? Singing "Ran an den Feind, ran an den Feind, Bomben auf Engeland" ("at the enemy, at the enemy, bombs on England") just like a bunch of rowdy frat boys with Heinkels, who come around sheepishly later and apologize to the survivors after they've leveled the town! Oh well, it's history. We know you guys are super, super-pacifists now (except at football matches).
> I can't think of any off hand, but there must be one or two.
1) Probably cheaper
2) Easier to overclock & maintain (if you have the floor space)
3) Easier to mix and match different types of hardware
1) If you were going to go out and buy 12 computers, I am not sure that buying them in individual cases would be less expensive than buying a single rack and a single PDU. I suspect the cost would be pretty close between the two.
2a) Overclocking - I don't see any difference here. If you use exotic cooling approaches, having the cards closer together should be an advantage. If you are air cooling, a good rack is engineered to support this, while a good case is primarily designed to look good. Racks also offer some innovative cooling techniques. I think spray cooling will catch on in the next few years.
2b) Maintenance - I can't agree with this. Having the motherboards in roll-out drawers is much more convenient for accessing anything on them.
3) Nah, the rack won't be offended if you fill it with different hardware vintages. Worst case, you'll have to get new cable sets for the PDU to support changing MB standards.
The problem is - all you write is theory - you never tried that in practice.
If I decided to use a 19" rack with the cooling concept I like, that rack would have to be 330 cm high for my hardware. In that case, I'd have to make a hole in my house's roof - but I don't think it's good if it rains on my computers. But maybe you have a waterproof computer version for this scenario ;)
>The problem is - all you write is theory - you never tried that in practice.
Hardly. Every job I've ever worked on, including a supercomputer for weather prediction 27 years ago, has used rack-mounted equipment. Anything else would be unprofessional. Any reasonable cooling methodology should fit into a 3U drawer (13.3 cm per drawer), and 2U is a lot more common. The great majority of server motherboards end up in racks, not cases.
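For anyone checking the math, a rack unit is standardized at 1.75 inches:

U_CM = 1.75 * 2.54      # one rack unit = 1.75 in = 4.445 cm
print(3 * U_CM)         # 13.335 - the "13.3 cm per drawer" figure above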
I never claimed to be professional. Remember, I earn my money as a physician, not by selling hot and noisy 7U, 3 kW computers to the Dutch navy. I only claim to be effective - much more cost-effective, and much more effective regarding noise, than you will ever be. My coolers would require 5U cases for all of my comps - and my UPSs already take 20U. And you're badly informed: there are several hot and noisy 2U cooling concepts for So. 1366, but there are no suitable 3U concepts. There are, however, several coolers that fit in 4U and 5U cases. I prefer the 5U stuff.
The Dutch Navy is one of many customers. Navy applications are, more often than not, cooled by chilled water, which is neither hot nor noisy. Most of these applications have gone to blades, which are denser than 1U motherboard designs, which were also hard to cool.
Socket 1366 is pretty new in the applications I work on, because there is an extensive testing period before a product is qualified and a good vendor will stand behind it. Intel is well aware of these requirements and makes sure that the major vendors get engineering samples long before they go public. But there is still usually a 3-month wait for server boards from high-end vendors.
As far as costs go, everyone at some level is interested in cost, but they generally have a more expansive view than how much it costs to buy the parts. They are interested in manageability, availability, efficiency, maintainability, and for some applications redundancy.
I also have quite a few computers. Not as many as Lukas, and not nearly as new, and I am hardly an expert, but my task was the same and I came to the same conclusion.
First - as far as I am aware, the desktop solution is cheaper and easier. Racks are actually quite expensive, and harder to get. Any rack problems will need a complicated solution, while a desktop case problem can be solved with a walk to the nearest computer store.
Second - I like to cool my computers using normal house fans. As you know, this is in my view the safest, cheapest, most robust solution. :)
Finally - there is quite a lot of variance between different desktop machines. I have no idea what sort of issues might come up if you tried to mix and match everything in one rack, but this seems like a potential headache. For example, could you mix Intel and AMD? What about ATX vs eATX motherboards? Etc.
If you build the computers yourself, you can do just about anything in a rackmount case. I'm actually using micro ATX right now. Rackmount cases aren't that much more expensive, but the actual rack and rails add a bit. Of course, they can just sit in a stack on the floor.