Charles Compton came up with this when s/he headbutted the keyboard a moment
ago in comp.os.linux.advocacy:
> Jim wrote:
>> Charles Compton came up with this when s/he headbutted the keyboard a
>> moment ago in comp.os.linux.advocacy:
>>
>>> B Gruff wrote:
>>>> On Monday 13 November 2006 23:23 Roy Schestowitz wrote:
>>>>
>>>>> Top 500 Supercomputer Sites - November 2006
>>>>>
>>>>> ,----[ Quote ]
>>>>> | On the new list, the IBM BlueGene/L system, installed at DOE's
>>>>> | Lawrence Livermore National Laboratory (LLNL), retains the No.
>>>>> | 1 spot with a Linpack performance of 280.6 teraflops (trillions
>>>>> | of calculations per second, or Tflop/s).
>>>>> `----
>>>>>
>>>>> http://top500.org/lists/2006/11
>>>>>
>>>>> ,----[ Some stats ]
>>>>> | Operating system Family: Linux
>>>>> | Count: 376
>>>>> | Share %: 75.20%
>>>>> `----
>>>> There seems to be one striking omission in fact:-
>>>>
>>>> Linux                              326   65.20 %
>>>> SuSE Linux Enterprise Server 8       3    0.60 %
>>>> Redhat Enterprise 3                  1    0.20 %
>>>> HP Unix (HP-UX)                     27    5.40 %
>>>> MacOS X                              3    0.60 %
>>>> Solaris                              5    1.00 %
>>>> UNICOS                               8    1.60 %
>>>> Super-UX                             3    0.60 %
>>>> AIX                                 43    8.60 %
>>>> Tru64 UNIX                           3    0.60 %
>>>> SuSE Linux Enterprise Server 9      25    5.00 %
>>>> UNICOS/Linux                         2    0.40 %
>>>> CNK/SLES 9                          27    5.40 %
>>>> SUSE Linux                           3    0.60 %
>>>> Redhat Linux                         4    0.80 %
>>>> RedHat Enterprise 4                  7    1.40 %
>>>> UNICOS/SUSE Linux                    3    0.60 %
>>>> SUSE Linux Enterprise Server 10      2    0.40 %
>>>> SLES10 + SGI ProPack 5               5    1.00 %
>>>> Totals                             500  100.00 %
>>>>
>>>>
>>>>
>>> So either Windows has 0% market share across the top 500 supercomputers,
>>> or the survey is biased not to include Windows.
>>>
>>> Anyone know which is the case?
>>
>> IIRC last year there were two or three Microsoft-based systems in the
>> list, none above #300 or so. The notable absence of Microsoft software
>> running any of the top 500 this year comes as no surprise to me.
>>
>
> These are mostly clusters, I would assume; no one other than Microsoft is
> going to pay for a cluster of Windows Server 2003. I don't think this
> speaks ill of Microsoft's software per se, but I do believe it speaks
> volumes for people capable of comparing two different numbers: freely
> licensed software is much less expensive than three- and four-digit
> licenses.
>
> This really only surprises me on the basis of "I thought there were more
> rich idiots out there."
>
> Charles~
Clusters are the big thing in HPC right now, not least because they're less
expensive to set up than bespoke supercomputers. They use off-the-shelf
hardware, so you're not necessarily tied to one hardware vendor. OK, mixing
vendors incurs a performance hit, but that's for another thread.

I'm currently specifying hardware for a low-power-consumption cluster, which
currently (on paper) uses VIA C3 processors - specifically the Eden and
Nehemiah chips as embedded on the EPIA and EPIA-M boards. These things draw
something like 20 watts, if that, without storage. So for the power cost of
a P4 3.06 (130W just for the processor, ~350W all told with one HDD, GPU,
and one optical drive), you could have a cluster of 15 individually
switchable, on-demand 1GHz processors, booting from a single CD-ROM (or
bootable flash) with a premastered ISO image and storing onto a flash drive
on the head node - with a 24-port switch thrown in. Best bit about the
deal: the cluster can be completely silent save for the whir of the optical
drive.
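
For what it's worth, here's the back-of-the-envelope behind that claim as a
little Python sketch - the ~20W per EPIA node and ~350W for the P4 box are
the rough estimates above, not measured figures:

,----[ Python sketch - power budget ]
| # Rough power-budget comparison for the proposed EPIA cluster, using the
| # ballpark figures quoted above (estimates, not measurements).
| EPIA_NODE_WATTS = 20    # VIA Eden/Nehemiah board, no local storage
| P4_SYSTEM_WATTS = 350   # P4 3.06 box: CPU + HDD + GPU + optical, all told
| NODES = 15
|
| cluster_watts = NODES * EPIA_NODE_WATTS
| print("%d EPIA nodes:           ~%d W" % (NODES, cluster_watts))
| print("One P4 3.06 workstation: ~%d W" % P4_SYSTEM_WATTS)
| print("Headroom left over:      ~%d W" % (P4_SYSTEM_WATTS - cluster_watts))
| # -> ~300 W for 15 nodes, i.e. the whole cluster fits inside the power
| #    envelope of one P4 desktop, with ~50 W spare for the switch.
`----
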
Only problem at the minute is the initial cost of the hardware: £150/node,
including 512MB RAM per core. Multiply that by 15, and, well... stuff starts
gettin' expensive when you're talking about scaling up to 30, 50, 100+
processors. The great thing is, it doesn't have to be a lump purchase.
Scalable means you can add to it as (a) finances allow and (b) demand for
processing power justifies it. Plug it in, switch it on, and a properly
configured cluster management setup'll just add the node to the mix.
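
To be clear, there's nothing magic in "just add the node to the mix" - any
half-decent cluster manager does it. As a purely illustrative sketch (the
file paths, and the assumption that dnsmasq hands out the leases and MPI
reads a plain hostfile, are mine, not a description of any particular
product):

,----[ Python sketch - auto-adding nodes ]
| # Watch the DHCP lease file and append any newly booted node to an
| # MPI-style hostfile.  Paths and the lease-line layout are assumptions
| # (typical dnsmasq defaults), purely to illustrate the idea.
| LEASES = "/var/lib/misc/dnsmasq.leases"   # assumed dnsmasq lease file
| HOSTFILE = "/etc/cluster/mpi_hostfile"    # hypothetical hostfile location
|
| def known_hosts():
|     try:
|         with open(HOSTFILE) as f:
|             return set(line.split()[0] for line in f if line.strip())
|     except IOError:
|         return set()
|
| def add_new_nodes():
|     known = known_hosts()
|     with open(LEASES) as leases, open(HOSTFILE, "a") as out:
|         for line in leases:
|             # dnsmasq lease lines: <expiry> <mac> <ip> <hostname> <id>
|             fields = line.split()
|             if len(fields) >= 3 and fields[2] not in known:
|                 out.write("%s slots=1\n" % fields[2])  # one C3 per node
|                 known.add(fields[2])
|
| if __name__ == "__main__":
|     add_new_nodes()
`----
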
Incidentally, I built a quad-core (dual) Xeon system last year, specced with
4GB RAM and 400GB SATA RAID. System cost: £3500.

Imagine how many Nehemiah blades that could buy... Clue: 22, with 512MB RAM
each (that's 11GB total, less overheads), plus a 24-port switch, plus a 500W*
power bus, and you'd still have change!

*Yes, that's all 22 of those puppies would require. Between them.
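
Worked through in the same vein (the £150/node, 512MB and ~20W numbers are
the same rough estimates as above):

,----[ Python sketch - Xeon money spent on blades ]
| # The same ~3500 quid spent on EPIA nodes instead, using the rough
| # per-node figures above.
| NODE_COST_GBP = 150
| NODE_RAM_MB = 512
| NODE_WATTS = 20
| NODES = 22   # 22 x 150 = 3300, leaving change for the switch and PSU
|
| print("Cost of nodes:  £%d" % (NODES * NODE_COST_GBP))         # £3300
| print("Aggregate RAM:  %d GB" % (NODES * NODE_RAM_MB / 1024))  # 11 GB
| print("Aggregate draw: ~%d W" % (NODES * NODE_WATTS))          # ~440 W
| # -> comfortably inside the 500W bus mentioned above.
`----
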
Going to the extreme end of things, I would estimate that a standard 72"
high rack with a 19" wide payload bay would have a capacity of probably 216
blades, leaving room for power and switchgear. That's 108GB of RAM
available using the above spec, and a total power consumption of 4.3kW,
which is slightly below average for a fully populated IBM BladeCenter rack
(an exaggeration, really, since the PSU for a BladeCenter rack would be
housed in a separate rack and draw more like 8kW twin-redundant = 16kW).
It runs cheaper, too: £33,000 (with change) for the VIA rack, as opposed to
the same price for ONE ceiling-configured BladeCenter HS21 node (without
HDD storage) - a rack full of those would set you back 2.4 million USD, or
a little shy of one and a half million Sterling, and you'd only get 60
processors.
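
Same arithmetic again at rack scale, purely from the estimates above (the
216-blade capacity is my own guess for the rack, not a vendor figure):

,----[ Python sketch - rack-scale totals ]
| # Rack-scale totals from the same rough per-node numbers; the 216-blade
| # capacity is an estimate, not a vendor spec.
| BLADES = 216
| NODE_COST_GBP = 150
| NODE_RAM_MB = 512
| NODE_WATTS = 20
|
| print("RAM:   %d GB" % (BLADES * NODE_RAM_MB / 1024))        # 108 GB
| print("Power: %.2f kW" % (BLADES * NODE_WATTS / 1000.0))     # ~4.32 kW
| print("Cost:  £%d for the nodes" % (BLADES * NODE_COST_GBP)) # £32,400
| # -> 108 GB RAM, ~4.3 kW, and £32,400 for the nodes alone - hence the
| #    "£33,000 with change" above.
`----
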
Oh, and don't forget to add the shipping and installation for the IBM one -
their racks are /heavy/ - a full rack of Blades would weigh like two tons.
--
-*- Linux Desktops & Clustering Solutions -*- http://dotware.co.uk
-*- Registered Linux user #426308 -*- http://counter.li.org
-*- Linux is like a wigwam: no Windows, no Gates, and Apache inside.
-*- Disclaimer:
By sending an email to ANY of my addresses you are agreeing that:
1. I am by definition, "the intended recipient"
2. All information in the email is mine to do with as I see fit and make
such financial profit, political mileage, or good joke as it lends itself
to. In particular, I may quote it on usenet.
3. I may take the contents as representing the views of your company.
4. This overrides any disclaimer or statement of confidentiality that may
be included on your message.