Saturday, June 30, 2012

The history of supercomputers

CDC 6600 -- the world's first supercomputer




Have you ever wondered why a supercomputer is called a supercomputer? Is it the number of processors or the amount of RAM? Must a supercomputer occupy a certain amount of space, or consume a specific amount of power?
The first supercomputer, the Control Data Corporation (CDC) 6600, only had a single CPU. Released in 1964, the CDC 6600 was actually fairly small — about the size of four filing cabinets. It cost $8 million — around $60 million in today’s money — and operated at up to 40MHz, squeezing out a peak performance of 3 million floating point operations per second (flops).
In comparison, the CDC 6600 was up to 10 times faster than the fastest computer at the time, the $13-million ($91m today!), 2000-square-foot-occupying IBM 7030 Stretch — thus earning the title of supercomputer. At this point, Intel was still seven years away from releasing the 740KHz 4004 CPU. (For a bit of fun, definitely read the original 1960 IBM 7030 press release.)
The CDC 6600 was super for other reasons, too. It was cooled with Freon that circulated in pipes around the four cabinets and then exchanged heat with a chilled external water supply (you can see some of the pipework in the bottom right corner of the image above). While there was only one CPU (which in those days was constructed from multiple circuit boards, not a single chip!), the CDC 6600 had 10 Peripheral Processors, each dedicated to managing I/O and keeping the CPU’s queue full. The CPU itself contained 10 parallel functional units, each dedicated to a different task: floating point add, floating point divide, boolean logic, and so on. The architecture was, in other words, superscalar (though that word didn’t exist at the time).
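To make the idea of parallel functional units concrete, here is a minimal, purely illustrative Python sketch of superscalar issue: independent operations are dispatched to separate units in the same cycle, while dependent ones wait. The three-instruction program, the unit names and the single-cycle latencies are invented for the example and are not the CDC 6600's actual instruction set.

    # Toy model of superscalar issue. Independent operations go to separate
    # functional units in the same cycle; an operation that needs a result
    # from an earlier one must wait. Every unit has a one-cycle latency here,
    # which is a simplification.
    program = [
        ("fp_add",    "r1", ("r2", "r3")),  # r1 = r2 + r3
        ("fp_divide", "r4", ("r5", "r6")),  # r4 = r5 / r6  (independent of the add)
        ("boolean",   "r7", ("r1", "r4")),  # r7 = r1 & r4  (needs both results)
    ]

    def schedule(ops):
        written = {dest for _, dest, _ in ops}   # registers produced by the program
        ready = set()                            # results computed so far
        pending = list(ops)
        cycle = 0
        while pending:
            cycle += 1
            units_in_use, issued = set(), []
            for unit, dest, srcs in pending:
                operands_ready = all(s in ready or s not in written for s in srcs)
                if operands_ready and unit not in units_in_use:
                    units_in_use.add(unit)
                    issued.append((unit, dest, srcs))
            for op in issued:
                pending.remove(op)
                ready.add(op[1])
            print(f"cycle {cycle}: issued {[op[0] for op in issued]}")

    schedule(program)  # the add and the divide issue together; the boolean op waits a cycle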
The CPU had a 60-bit word length and 60-bit registers, but a very small instruction set, because it only dealt with information that had been pre-processed by the Peripheral Processors. It is this simplicity that allowed the CDC 6600’s CPU to be clocked so high. By today’s standards, we would call the CDC 6600 the first RISC system.
The CDC 6600, incidentally, was designed by Seymour Cray — a name that would soon become synonymous with supercomputing.
  Cray 1 supercomputer


Cray X-MP supercomputer

Cray X-MP

It’s important to note that, up to this point, an entire supercomputer still consisted of a single CPU. The Cray X-MP, released in 1982, supported up to four CPUs, all housed inside the same style of chassis as the Cray 1. The X-MP’s CPUs were very similar to the Cray 1’s, but with a clock speed bump from 80 to 105MHz and a more-than-doubling of memory bandwidth, each CPU pushed up to 200 megaflops. For $15 million ($32 million today), you could get your hands on a grand total of 800 megaflops.
Cray DD49 1.2GB disk drive unit
By the end of the Cray X-MP’s run it could support up to 16 million 64-bit words of memory — in SRAM! — which is equivalent to around 128MB of today’s RAM. It’s also worth noting that none of the costs mentioned so far include permanent storage — just the computer itself. The Cray X-MP, for example, supported up to 32 disk storage units, each about the size of a filing cabinet (pictured above) and capable of storing 1.2 gigabytes. Each unit cost $270,000 in today’s money — about $225k per gig — but with an impressive transfer rate of around 10MB/sec, they were probably worth it.
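For the curious, the storage arithmetic above is easy to reproduce (a quick sketch using only the inflation-adjusted figures quoted in this article):

    # Cray DD49 storage math, using the inflation-adjusted figures quoted above.
    unit_cost_usd = 270_000        # one DD49 drive, in today's money
    unit_capacity_gb = 1.2         # gigabytes per drive
    max_units = 32                 # maximum drives on a Cray X-MP

    cost_per_gb = unit_cost_usd / unit_capacity_gb    # = $225,000 per gigabyte
    max_capacity_gb = max_units * unit_capacity_gb    # = 38.4 GB in total
    full_set_cost = max_units * unit_cost_usd         # = $8.64 million

    print(f"${cost_per_gb:,.0f}/GB, {max_capacity_gb:.1f} GB max, ${full_set_cost:,} for all 32 drives")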

A Cray 2 supercomputer

Cray 2

By now you’re probably a bit bored of Cray computers — but the fact is, the company dominated supercomputing from its inception in the ’70s through until the early ’90s. In 1985, the Cray 2 was released. The technology used was fairly similar to the Cray 1 and Cray X-MP — ICs packed together on logic boards — and again it had a similar horseshoe-shaped chassis.
To boost performance, though, the logic boards were crammed together incredibly tightly (pictured below), meaning air cooling and Freon heat exchanging were no good — instead, the entire computer was submerged in Fluorinert. In the picture above, the device at the back is a Fluorinert “waterfall” radiator. (There are some more awesome photos in the original Cray 2 brochure [PDF].)
Cray 2 logic module -- lots of ICs
With increased performance (and up to eight CPUs), Cray Research also had to overcome a memory bottleneck. Basically, the Cray 2 used “foreground” processors to load data from main memory into local memory (similar to a cache, but not quite) via a very fast gigabit-per-second bus, and then passed instructions off to “background” processors which actually performed the computation. In today’s nomenclature, the foreground processors would be similar to a modern CPU’s load/store units. The peak performance of the Cray 2 was 1.9 gigaflops — about twice that of the Cray X-MP, and fast enough to retain the title of world’s fastest supercomputer until 1990.
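The Cray 2's actual foreground/background protocol isn't documented here, but the general technique it relies on, staging the next block of data into fast local memory while earlier blocks are being processed, can be sketched in a few lines. The block sizes, the data and the sum-of-squares workload are all invented for the illustration.

    import threading, queue

    # Double-buffering sketch of the foreground/background split described above:
    # a "foreground" thread stages blocks from (slow) main memory into a small,
    # fast local buffer while a "background" thread computes on earlier blocks.
    main_memory = [list(range(i, i + 4)) for i in range(0, 32, 4)]  # 8 blocks of data
    local_memory = queue.Queue(maxsize=2)    # tiny staging area, like local memory

    def foreground():                        # handles the load/store traffic
        for block in main_memory:
            local_memory.put(block)
        local_memory.put(None)               # signal that there is no more data

    def background():                        # performs the actual computation
        total = 0
        while (block := local_memory.get()) is not None:
            total += sum(x * x for x in block)
        print("sum of squares:", total)

    loader = threading.Thread(target=foreground)
    worker = threading.Thread(target=background)
    loader.start(); worker.start()
    loader.join(); worker.join()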
The Cray 2 is notable for being the first supercomputer to run “mainstream” software, thanks to UNICOS, a Unix System V derivative with some BSD features. Until this point, Cray supercomputers had only really been used by US governmental agencies like the DoE and DoD (for nuclear modeling — what else?), but the Cray 2 found a home in many universities and corporations.

Hitachi SR2201 supercomputer

Here come the Japanese

After some 20 years of American dominance, the early ’90s saw the emergence of a new king of supercomputing: Japan. These computers, such as the NEC SX-3 (pictured below), the Fujitsu Numerical Wind Tunnel, and the Hitachi SR2201, used very similar architectures to Cray’s — i.e. highly parallel arrays of vector processors attached to fast memory — and each in turn became the fastest supercomputer in the world. The SR2201 (pictured above — check out the self-congratulatory “H” chassis!), released in 1996, had 2048 processors and a peak performance of 600 gigaflops — by comparison, a modern Sandy Bridge Core i5 or i7 CPU can perform around 100-200 gigaflops.
NEC SX-3
During this period there was a shift away from a single shared bus towards massive parallelism, where 2D and 3D networks (such as Cray’s torus interconnect) connected together hundreds of CPUs. This cemented MIMD — multiple instruction, multiple data — as the dominant approach, one that eventually led to multi-core CPUs.
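As a concrete illustration of what such an interconnect looks like (and not Cray's actual routing hardware), here is a tiny sketch of neighbour addressing on a 2D torus, i.e. a mesh whose edges wrap around; the 4x4 size is arbitrary.

    # Neighbour addressing on a 2D torus: a wrap-around mesh in which every
    # node has exactly four directly connected neighbours. Grid size is arbitrary.
    ROWS, COLS = 4, 4

    def neighbours(row, col):
        return {
            "north": ((row - 1) % ROWS, col),
            "south": ((row + 1) % ROWS, col),
            "west":  (row, (col - 1) % COLS),
            "east":  (row, (col + 1) % COLS),
        }

    # Even a "corner" node has four neighbours, because the edges wrap around:
    print(neighbours(0, 0))   # {'north': (3, 0), 'south': (1, 0), 'west': (0, 3), 'east': (0, 1)}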
Meanwhile, Seymour Cray had broken away from Cray Research to form Cray Computer Corporation (CCC) and build the Cray 3, the first computer built with gallium arsenide chips. The project failed, and CCC went bankrupt during production of the Cray 4. As you’re probably aware, though, Cray Research most definitely lives on — but more on that later.

ASCI Red supercomputer, at Sandia National Labs

But what about Intel?

We’re now up to the mid-’90s, and yet Intel — the king of microprocessors since the ’70s — hasn’t been mentioned once. The main reason for this is that supercomputers and PCs are generally at odds with each other: where supers want as much processing power as possible, PCs have lots of cost and heat constraints. For the most part, it just didn’t make sense to use Intel chips in early supercomputers.
Throughout history, Intel has occasionally tried to launch chips based on a non-x86 architecture, usually without success. In 1989 it released the i860, a 32- and 64-bit RISC chip designed for use in large computers. The i860 would become the basis for the Intel Paragon, a supercomputer that supported up to 4,000 processors in a 2D MIMD topology. Paragon was a commercial failure, but it led to the creation of ASCI Red in 1996 (pictured above), which was the first supercomputer made from off-the-shelf CPUs — Pentium Pros, and then Pentium II Xeons — and other readily-available commercial components.
ASCI Red, with over 6,000 200MHz Pentium Pros and a cost of $46 million ($67 million today), was the first supercomputer to break the 1 teraflop barrier. Later upgraded to 9,298 Pentium II Xeons, ASCI Red reached 3.1 teraflops. It was the fastest supercomputer in the world for four years, and also the first supercomputer installation to use more than 1 megawatt of power. It was only decommissioned in 2006, after 10 years of use by the Sandia National Laboratories.

A Beowulf cluster of beige box PCs

Everyman supercomputing

Once supercomputers could be built with off-the-shelf components, it was only a matter of time until everyone started building supercomputers. Beowulf clusters — networks with any number of commodity PCs, generally running Linux — quickly emerged, and Linux soon replaced Unix as the supercomputing OS of choice.
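In practice, software on such clusters is usually written against a message-passing library like MPI. A minimal sketch using the mpi4py bindings (assuming an MPI implementation and mpi4py are installed on the cluster) looks like this:

    # hello_mpi.py -- minimal message-passing sketch for a commodity cluster.
    # Run with something like:  mpirun -np 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # this process's ID within the job
    size = comm.Get_size()      # total number of processes across the nodes

    # Each process computes a partial sum; rank 0 gathers and combines them.
    partial = sum(range(rank * 1000, (rank + 1) * 1000))
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} processes computed a grand total of {total}")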
The commoditization of supercomputers (and compute clusters) almost certainly played a key role in computer animated films like Toy Story, and the increasing use of CGI in cinema and TV throughout the ’90s.


An IBM Blue Gene/L supercomputer rack (each heatsink is a CPU)

Petascale

While continued improvements to CPUs obviously helped supercomputers break new records, high-performance computing (HPC) in the 2000s mostly focused on squeezing more and more CPUs into a single system. This involved the development of ever-more-complex interconnects, and reducing power usage (and thus heat production).
Japan retook the crown from the US ASCI Red and ASCI White in 2002 with the 35-teraflops NEC Earth Simulator (which cost $900 million!), but then in 2004 IBM released Blue Gene/L, the first of a series of supercomputers that would blow the doors off the competition until 2008. The first version of Blue Gene/L, located at Lawrence Livermore National Laboratory, had 16,000 compute nodes (each with two CPUs) and was capable of 70 teraflops — but the final iteration in 2007 had more than 100,000 compute nodes and peak performance of 600 teraflops. The exact price of the project is unknown, but it’s in the hundreds-of-millions department.
Blue Gene/L was exceptional for two main reasons: instead of fast, power-hungry chips, it used low-power RISC PowerPC cores — and, except for the RAM, each compute node was entirely integrated into a single SoC (system-on-a-chip). The image above shows the incredible density of a 2U Blue Gene/L rack — each heatsink is a CPU, and you’ll notice that there are no fans or water-cooling blocks.
Blue Gene/L would lead the pack until it was succeeded by the $130-million IBM Roadrunner, a 20,000-CPU PowerPC/AMD Opteron hybrid that was the first computer to break the 1-petaflop barrier.


Tianhe-1A supercomputer

Don’t forget the Chinese

It took them a while, but in 2010 China eventually topped the supercomputing charts (the TOP500) with the 2.5-petaflops, $88 million Tianhe-1A. Tianhe-1A is notable for being one of the few heterogeneous supercomputers in operation — it houses 14,336 Intel Xeon X5670 CPUs and 7,168 Nvidia Tesla GPUs — apparently saving lots of power in the process.
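To give a flavour of what heterogeneous computing means for the programmer (this is a generic sketch, not Tianhe-1A's actual software stack), the same dense math can run on the host CPU with NumPy or be offloaded to an attached GPU with a library such as CuPy, assuming a CUDA-capable GPU and CuPy are available.

    # Generic CPU-versus-GPU offload sketch; not Tianhe-1A's actual software.
    import numpy as np

    a = np.random.rand(2048, 2048)
    b = np.random.rand(2048, 2048)
    cpu_result = a @ b                      # matrix multiply on the CPU

    try:
        import cupy as cp                   # optional GPU path (needs CUDA + CuPy)
        gpu_result = cp.asnumpy(cp.asarray(a) @ cp.asarray(b))
        print("max CPU/GPU difference:", float(np.abs(cpu_result - gpu_result).max()))
    except ImportError:
        print("CuPy not installed; CPU result only")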
More importantly, though, China recently unveiled Sunway, a 1-petaflops supercomputer built entirely out of homegrown ShenWei CPUs. China has repeatedly stated that it wants to lessen its reliance on Western high technology, and Sunway is a very important step in that direction. Russia has also stated that it would like to build its own homegrown supercomputers, but so far it lacks China’s manufacturing prowess.


K supercomputer, water cooled innards

The return of Cray, and the Japanese

The current undisputed champion of the high-performance computing world is Fujitsu’s K, housed at the RIKEN institute in Japan, which clocks in at 10 petaflops — some four times faster than Tianhe-1A. K does away with the low-power approach pioneered by Blue Gene and simply throws 88,128 8-core SPARC64 processors into the mix. Each CPU has 16GB of local RAM, for a total of 1,377 terabytes of memory. K draws almost 10 megawatts of power — about the same as 10,000 suburban homes — and the whole thing (some 864 cabinets!) is, understandably, water cooled. At 100 billion Yen ($1.25 billion), K is the most expensive supercomputer ever built, too.
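The headline figures quoted above are straightforward to sanity-check:

    # Sanity-checking the K computer figures quoted above.
    cpus = 88_128
    cores_per_cpu = 8
    ram_per_cpu_gb = 16

    total_cores = cpus * cores_per_cpu                 # 705,024 cores
    total_ram_tb = cpus * ram_per_cpu_gb / 1024        # ~1,377 terabytes (binary)
    flops_per_core = 10e15 / total_cores               # ~14 gigaflops per core at 10 petaflops

    print(f"{total_cores:,} cores, {total_ram_tb:,.0f} TB of RAM, {flops_per_core/1e9:.1f} GFLOPS/core")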
Looking forward, the next target is exaflops — 1,000 petaflops. Realistically, we should hit 100 petaflops in the next few years, and exaflops a few years after that (2018-2020). The USA’s fastest supercomputer, the 1.7-petaflops Cray Jaguar at Oak Ridge National Laboratory, is currently being upgraded to become the 20-petaflops Cray Titan. Titan will be built with Cray XK6 blades, which marry AMD Opteron CPUs with Nvidia Kepler GPUs, for a theoretical peak of up to 35 petaflops.
Jaguar supercomputer at ORNL
Meanwhile, DARPA, recognizing that current silicon technology might not even be capable of exaflops, has summoned researchers to reinvent computing. IBM, on the other hand, is building an exascale supercomputer to process the exabytes of astronomical data produced by the world’s largest telescope, the Square Kilometre Array. The telescope goes online in 2024, which will hopefully give IBM enough time to work out how to multiply the performance of current computers by more than 100.
So there you have it: From 3 megaflops to 10 petaflops in 48 years. The world’s fastest supercomputer is 3.3 billion times faster than the first.
Riken K computer prototype

Cray XK6 Supercomputer


TOP500 SUPERCOMPUTERS IN THE WORLD June 2011 (1-100)



Rank | Site | Country | Computer / Year | Vendor | Cores | Rmax (TFlop/s) | Rpeak (TFlop/s) | Power (kW)
1 | RIKEN Advanced Institute for Computational Science (AICS) | Japan | K computer, SPARC64 VIIIfx 2.0GHz, Tofu interconnect / 2011 | Fujitsu | 548352 | 8162.00 | 8773.63 | 9898.56
2 | National Supercomputing Center in Tianjin | China | Tianhe-1A - NUDT TH MPP, X5670 2.93Ghz 6C, NVIDIA GPU, FT-1000 8C / 2010 | NUDT | 186368 | 2566.00 | 4701.00 | 4040.00
3 | DOE/SC/Oak Ridge National Laboratory | United States | Jaguar - Cray XT5-HE Opteron 6-core 2.6 GHz / 2009 | Cray Inc. | 224162 | 1759.00 | 2331.00 | 6950.60
4 | National Supercomputing Centre in Shenzhen (NSCS) | China | Nebulae - Dawning TC3600 Blade, Intel X5650, NVidia Tesla C2050 GPU / 2010 | Dawning | 120640 | 1271.00 | 2984.30 | 2580.00
5 | GSIC Center, Tokyo Institute of Technology | Japan | TSUBAME 2.0 - HP ProLiant SL390s G7 Xeon 6C X5670, Nvidia GPU, Linux/Windows / 2010 | NEC/HP | 73278 | 1192.00 | 2287.63 | 1398.61
6 | DOE/NNSA/LANL/SNL | United States | Cielo - Cray XE6 8-core 2.4 GHz / 2011 | Cray Inc. | 142272 | 1110.00 | 1365.81 | 3980.00
7 | NASA/Ames Research Center/NAS | United States | Pleiades - SGI Altix ICE 8200EX/8400EX, Xeon HT QC 3.0/Xeon 5570/5670 2.93 Ghz, Infiniband / 2011 | SGI | 111104 | 1088.00 | 1315.33 | 4102.00
8 | DOE/SC/LBNL/NERSC | United States | Hopper - Cray XE6 12-core 2.1 GHz / 2010 | Cray Inc. | 153408 | 1054.00 | 1288.63 | 2910.00
9 | Commissariat a l'Energie Atomique (CEA) | France | Tera-100 - Bull bullx super-node S6010/S6030 / 2010 | Bull SA | 138368 | 1050.00 | 1254.55 | 4590.00
10 | DOE/NNSA/LANL | United States | Roadrunner - BladeCenter QS22/LS21 Cluster, PowerXCell 8i 3.2 Ghz / Opteron DC 1.8 GHz, Voltaire Infiniband / 2009 | IBM | 122400 | 1042.00 | 1375.78 | 2345.50
11 | National Institute for Computational Sciences/University of Tennessee | United States | Kraken XT5 - Cray XT5-HE Opteron Six Core 2.6 GHz / 2011 | Cray Inc. | 112800 | 919.10 | 1173.00 | 3090.00
12 | Forschungszentrum Juelich (FZJ) | Germany | JUGENE - Blue Gene/P Solution / 2009 | IBM | 294912 | 825.50 | 1002.70 | 2268.00
13 | Moscow State University - Research Computing Center | Russia | Lomonosov - T-Platforms T-Blade2/1.1, Xeon X5570/X5670 2.93 GHz, Nvidia 2070 GPU, Infiniband QDR / 2011 | T-Platforms | 33072 | 674.11 | 1373.06 |
14 | DOE/NNSA/LLNL | United States | BlueGene/L - eServer Blue Gene Solution / 2007 | IBM | 212992 | 478.20 | 596.38 | 2329.60
15 | DOE/SC/Argonne National Laboratory | United States | Intrepid - Blue Gene/P Solution / 2007 | IBM | 163840 | 458.61 | 557.06 | 1260.00
16 | Sandia National Laboratories / National Renewable Energy Laboratory | United States | Red Sky - Sun Blade x6275, Xeon X55xx 2.93 Ghz, Infiniband / 2010 | Oracle | 42440 | 433.50 | 497.40 |
17 | Texas Advanced Computing Center/Univ. of Texas | United States | Ranger - SunBlade x6420, Opteron QC 2.3 Ghz, Infiniband / 2008 | Oracle | 62976 | 433.20 | 579.38 | 2000.00
18 | DOE/NNSA/LLNL | United States | Dawn - Blue Gene/P Solution / 2009 | IBM | 147456 | 415.70 | 501.35 | 1134.00
19 | Air Force Research Laboratory | United States | Raptor - Cray XE6 8-core 2.4 GHz / 2010 | Cray Inc. | 42712 | 336.30 | 410.04 |
20 | Korea Meteorological Administration | Korea, South | Haeon - Cray XE6 12-core 2.1 GHz / 2010 | Cray Inc. | 45120 | 316.40 | 379.01 |
21 | Korea Meteorological Administration | Korea, South | Haedam - Cray XE6 12-core 2.1 GHz / 2010 | Cray Inc. | 45120 | 316.40 | 379.01 |
22 | Universitaet Frankfurt | Germany | LOEWE-CSC - Supermicro Cluster, QC Opteron 2.1 GHz, ATI Radeon GPU, Infiniband / 2011 | Clustervision/Supermicro | 16368 | 299.30 | 508.50 |
23 | Government | United States | Cray XE6 12-core 2.2 GHz / 2010 | Cray Inc. | 45504 | 295.50 | 400.44 |
24 | University of Edinburgh | United Kingdom | HECToR - Cray XE6 12-core 2.1 GHz / 2011 | Cray Inc. | 44376 | 279.64 | 372.76 |
25 | Forschungszentrum Juelich (FZJ) | Germany | JUROPA - Sun Constellation, NovaScale R422-E2, Intel Xeon X5570, 2.93 GHz, Sun M9/Mellanox QDR Infiniband/Partec Parastation / 2009 | Bull SA | 26304 | 274.80 | 308.28 | 1549.00
26 | KISTI Supercomputing Center | Korea, South | TachyonII - Sun Blade x6048, X6275, IB QDR M9 switch, Sun HPC stack Linux edition / 2009 | Oracle | 26232 | 274.80 | 307.44 | 1275.96
27 | DOE/SC/LBNL/NERSC | United States | Franklin - Cray XT4 QuadCore 2.3 GHz / 2008 | Cray Inc. | 38642 | 266.30 | 355.51 | 1150.00
28 | Texas Advanced Computing Center/Univ. of Texas | United States | Lonestar 4 - Dell PowerEdge M610 Cluster, Xeon 5680 3.3Ghz, Infiniband QDR / 2011 | Dell | 22656 | 251.80 | 301.78 |
29 | Airbus | France | HP POD - Cluster Platform 3000 BL260c G6, X5675 3.06 GHz, Infiniband / 2011 | Hewlett-Packard | 24192 | 243.90 | 296.11 | 643.00
30 | Grand Equipement National de Calcul Intensif - Centre Informatique National de l'Enseignement Supérieur (GENCI-CINES) | France | Jade - SGI Altix ICE 8200EX, Xeon E5472 3.0/X5560 2.8 GHz / 2010 | SGI | 23040 | 237.80 | 267.88 | 1064.00
31 | KTH - Royal Institute of Technology | Sweden | Lindgren - Cray XT6m 12-Core 2.1 GHz / 2011 | Cray Inc. | 36384 | 237.20 | 305.63 | 658.35
32 | Universitaet Aachen/RWTH | Germany | Bullx B500 Cluster, Xeon X56xx 3.06Ghz, QDR Infiniband / 2011 | Bull SA | 25448 | 219.84 | 270.54 |
33 | Institute of Process Engineering, Chinese Academy of Sciences | China | Mole-8.5 - Mole-8.5 Cluster Xeon L5520 2.26 Ghz, nVidia Tesla, Infiniband / 2010 | IPE, Nvidia, Tyan | 33120 | 207.30 | 1138.44 |
34 | INPE (National Institute for Space Research) | Brazil | Tupã - Cray XT6 12-core 2.1 GHz / 2010 | Cray Inc. | 30720 | 205.10 | 258.05 |
35 | DOE/SC/Oak Ridge National Laboratory | United States | Jaguar - Cray XT4 QuadCore 2.1 GHz / 2008 | Cray Inc. | 30976 | 205.00 | 260.20 | 1580.71
36 | Sandia National Laboratories | United States | Sandia/Cray Red Storm - Cray XT3/XT4 / 2009 | Cray Inc. | 38208 | 204.20 | 284.00 | 2506.00
37 | NOAA/Oak Ridge National Laboratory | United States | Gaea - Cray XT6-HE, Opteron 6100 12C 2.1GHz / 2010 | Cray Inc. | 30912 | 194.40 | 259.66 | 610.70
38 | Japan Atomic Energy Agency (JAEA) | Japan | BX900 Xeon X5570 2.93GHz, Infiniband QDR / 2009 | Fujitsu | 17072 | 191.40 | 200.08 | 831.23
39 | King Abdullah University of Science and Technology | Saudi Arabia | Shaheen - Blue Gene/P Solution / 2009 | IBM | 65536 | 190.90 | 222.82 | 504.00
40 | Shanghai Supercomputer Center | China | Magic Cube - Dawning 5000A, QC Opteron 1.9 Ghz, Infiniband, Windows HPC 2008 / 2008 | Dawning | 30720 | 180.60 | 233.47 |
41 | Government | France | Cluster Platform 3000 BL2x220, L54xx 2.5 Ghz, Infiniband / 2009 | Hewlett-Packard | 24704 | 179.63 | 247.04 |
42 | Taiwan National Center for High-performance Computing | Taiwan | ALPS - Acer AR585 F1 Cluster, Opteron 12C 2.2GHz, QDR infiniband / 2011 | Acer Group | 26244 | 177.10 | 231.86 | 800.00
43 | EDF R&D | France | Ivanhoe - iDataPlex, Xeon X56xx 6C 2.93 GHz, Infiniband / 2010 | IBM | 16320 | 168.80 | 191.27 | 510.00
44 | Swiss Scientific Computing Center (CSCS) | Switzerland | Monte Rosa - Cray XT5 SixCore 2.4 GHz / 2009 | Cray Inc. | 22032 | 168.70 | 211.51 | 713.00
45 | SciNet/University of Toronto | Canada | GPC - iDataPlex, Xeon E55xx QC 2.53 GHz, GigE / 2009 | IBM | 30240 | 168.60 | 306.03 | 869.40
46 | Lawrence Livermore National Laboratory | United States | Sierra - Dell Xanadu 3 Cluster, Xeon X5660 2.8 Ghz, QLogic InfiniBand QDR / 2010 | Dell | 21756 | 166.70 | 243.67 |
47 | Government | United States | Cray XT5 QC 2.4 GHz / 2009 | Cray Inc. | 20960 | 165.60 | 201.22 |
48 | University of Tokyo/Institute for Solid State Physics | Japan | SGI Altix ICE 8400EX Xeon X5570 4-core 2.93 GHz, Infiniband / 2010 | SGI | 15360 | 161.80 | 180.02 | 719.30
49 | ERDC DSRC | United States | Diamond - SGI Altix ICE 8200 Enh. LX, Xeon X5560 2.8Ghz / 2009 | SGI | 15360 | 160.20 | 172.03 | 774.50
50 | IBM Poughkeepsie Benchmarking Center | United States | Power 775, Power7 3.836 GHz / 2011 | IBM | 6912 | 159.60 | 212.12 | 423.12
51 | ERDC DSRC | United States | Garnet - Cray XE6 8-core 2.4 GHz / 2010 | Cray Inc. | 20176 | 153.00 | 193.69 |
52 | University of Colorado | United States | MRI - PowerEdge C6100 Cluster, Xeon X5660 2.8 Ghz, Infiniband / 2010 | Dell | 15648 | 152.20 | 175.26 |
53 | Vestas Wind Systems A/S | Denmark | iDataPlex DX360M3, Xeon 2.93, Infiniband / 2011 | IBM | 14664 | 151.67 | 171.86 | 458.25
54 | CINECA / SCS - SuperComputing Solution | Italy | iDataPlex DX360M3, Xeon 2.4, nVidia GPU, Infiniband / 2011 | IBM | 3072 | 142.70 | 293.27 | 160.00
55 | CLUMEQ - McGill University | Canada | Guillimin - iDataPlex DX360M3, Xeon 2.66, Infiniband / 2010 | IBM | 14400 | 136.30 | 153.22 | 376.75
56 | Vienna Scientific Cluster | Austria | VSC-2 - Megware Saxonid 6100, Opteron 8C 2.2 GHz, Infiniband QDR / 2011 | Megware | 20700 | 135.60 | 185.01 | 430.00
57 | New Mexico Computing Applications Center (NMCAC) | United States | Encanto - SGI Altix ICE 8200, Xeon quad core 3.0 GHz / 2007 | SGI | 14336 | 133.20 | 172.03 | 861.63
58 | Computational Research Laboratories, TATA SONS | India | EKA - Cluster Platform 3000 BL460c, Xeon 53xx 3GHz, Infiniband / 2008 | Hewlett-Packard | 14384 | 132.80 | 172.61 | 786.00
59 | Lawrence Livermore National Laboratory | United States | Juno - Appro XtremeServer 1143H, Opteron QC 2.2Ghz, Infiniband / 2008 | Appro International | 18224 | 131.60 | 162.20 |
60 | eni | Italy | HP ProLiant SL390s G7 Xeon 6C X5650, Infiniband / 2011 | Hewlett-Packard | 15360 | 131.20 | 163.43 |
61 | DOE/NNSA/LANL | United States | Cerrillos - BladeCenter QS22/LS21 Cluster, PowerXCell 8i 3.2 Ghz / Opteron DC 1.8 GHz, Infiniband / 2009 | IBM | 14400 | 126.50 | 161.86 | 276.00
62 | NOAA/ESRL/GSD | United States | Jet - Raytheon/Aspen Cluster, Xeon X5560/X5650 2.8/2.66 Ghz, QDR Infinband / 2010 | Raytheon/Aspen Systems | 13732 | 126.50 | 148.12 |
63 | University of Southern California | United States | HPC - PowerEdge 1950/SunFire X2200/IBM dx340/dx360/HP SL160, Xeon/Opteron 2.3-2.67GHz, Myrinet 10G / 2011 | Dell/Sun/IBM | 17280 | 126.40 | 175.99 |
64 | National Computational Infrastructure National Facility (NCI-NF) | Australia | Vayu - Sun Blade x6048, Xeon X5570 2.93 Ghz, Infiniband QDR / 2010 | Oracle | 11936 | 126.40 | 139.89 |
65 | University of Chicago | United States | Cray XE6 12-core 2.1 GHz / 2010 | Cray Inc. | 17856 | 125.80 | 149.99 |
66 | National Institute for Computational Sciences/University of Tennessee | United States | Athena - Cray XT4 QuadCore 2.3 GHz / 2008 | Cray Inc. | 17956 | 125.13 | 165.20 | 888.82
67 | Atomic Weapons Establishment | United Kingdom | Blackthorn - Bullx B500 Cluster, Xeon X56xx 2.8Ghz, QDR Infiniband / 2010 | Bull SA | 12936 | 124.60 | 145.15 |
68 | Japan Agency for Marine-Earth Science and Technology | Japan | Earth Simulator - SX-9/E/1280M160 / 2009 | NEC | 1280 | 122.40 | 131.07 |
69 | IDRIS | France | Blue Gene/P Solution / 2008 | IBM | 40960 | 119.31 | 139.26 | 315.00
70 | ECMWF | United Kingdom | Power 575, p6 4.7 GHz, Infiniband / 2008 | IBM | 8320 | 115.90 | 156.42 | 1329.70
71 | ECMWF | United Kingdom | Power 575, p6 4.7 GHz, Infiniband / 2009 | IBM | 8320 | 115.90 | 156.42 | 1329.70
72 | DKRZ - Deutsches Klimarechenzentrum | Germany | Power 575, p6 4.7 GHz, Infiniband / 2008 | IBM | 8064 | 115.90 | 151.60 | 1288.69
73 | JAXA | Japan | Fujitsu FX1, Quadcore SPARC64 VII 2.52 GHz, Infiniband DDR / 2009 | Fujitsu | 12032 | 110.60 | 121.28 | 1020.50
74 | US Army Research Laboratory (ARL) | United States | SGI Altix ICE 8200 Enhanced LX, Xeon Nehalem quad core 2.8 GHz / 2009 | SGI | 10752 | 109.30 | 120.42 | 475.00
75 | Commissariat a l'Energie Atomique (CEA)/CCRT | France | GENCI-CCRT-Titane - BULL Novascale R422-E2 / 2010 | Bull SA | 11520 | 108.50 | 130.00 | 477.00
76 | Joint Supercomputer Center | Russia | MVS-100K - Cluster Platform 3000 BL460c/BL2x220, Xeon 54xx 3 Ghz, Infiniband / 2009 | Hewlett-Packard | 11680 | 107.45 | 140.16 |
77 | HLRN at Universitaet Hannover / RRZN | Germany | SGI Altix ICE 8200EX, Xeon QC E5472 3.0 GHz/X5570 2.93 GHz / 2009 | SGI | 10240 | 107.10 | 120.73 |
78 | HLRN at ZIB/Konrad Zuse-Zentrum fuer Informationstechnik | Germany | SGI Altix ICE 8200EX, Xeon QC E5472 3.0 GHz/X5570 2.93 GHz / 2009 | SGI | 10240 | 107.10 | 120.73 |
79 | Total Exploration Production | France | SGI Altix ICE 8200EX, Xeon quad core 3.0 GHz / 2008 | SGI | 10240 | 106.10 | 122.88 | 442.00
80 | Lawrence Livermore National Laboratory | United States | Muir - Dell Xanadu 3 Cluster, Xeon X5660 2.8 Ghz, QLogic InfiniBand QDR / 2010 | Dell | 15000 | 105.90 | 168.00 |
81 | Cyfronet | Poland | Zeus - Cluster Platform 3000 BL2x220, L56xx 2.26 Ghz, Infiniband / 2011 | Hewlett-Packard | 11694 | 104.77 | 124.42 |
82 | Computer Network Information Center, Chinese Academy of Science | China | DeepComp 7000, HS21/x3950 Cluster, Xeon QC HT 3 GHz/2.93 GHz, Infiniband / 2008 | Lenovo | 12216 | 102.80 | 145.97 |
83 | Lawrence Livermore National Laboratory | United States | Hera - Appro Xtreme-X3 Server - Quad Opteron Quad Core 2.3 GHz, Infiniband / 2009 | Appro International | 13552 | 102.20 | 127.20 |
84 | Information Technology Center, The University of Tokyo | Japan | T2K Open Supercomputer (Todai Combined Cluster) - Hitachi opteron QC 2.3 GHz Myrinet 10G / 2009 | Hitachi | 15104 | 101.74 | 138.96 | 831.50
85 | Kurchatov Institute Moscow | Russia | Cluster Platform 3000 BL2x220, E54xx 3.0 Ghz, Infiniband / 2010 | Hewlett-Packard | 10304 | 101.21 | 123.65 |
86 | Lawrence Livermore National Laboratory | United States | Edge - Appro GreenBlade Cluster Xeon X5660 2.8Ghz, nVIDIA M2050, Infiniband / 2010 | Appro International | 8240 | 100.50 | 239.87 | 745.00
87 | South Ural State University | Russia | SKIF Aurora - SKIF Aurora Platform - Intel Xeon X5680, Infiniband QDR / 2011 | RSC SKIF | 8832 | 100.40 | 117.00 |
88 | Max-Planck-Gesellschaft MPI/IPP | Germany | VIP - Power 575, p6 4.7 GHz, Infiniband / 2009 | IBM | 6848 | 98.42 | 128.74 | 1095.00
89 | Institute of Physical and Chemical Res. (RIKEN) | Japan | RIKEN Integrated Cluster of Clusters, Xeon X5570 2.93GHz, Infiniband DDR / 2009 | Fujitsu | 9048 | 97.94 | 106.04 |
90 | Government | Sweden | Cluster Platform 3000 BL2x220, L56xx 2.26 Ghz, Infiniband / 2011 | Hewlett-Packard | 17280 | 97.50 | 156.21 |
91 | DOE/SC/Pacific Northwest National Laboratory | United States | Chinook - Cluster Platform 4000 DL185G5, Opteron QC 2.2 GHz, Infiniband DDR / 2008 | Hewlett-Packard | 18176 | 97.07 | 159.95 |
92 | Naval Oceanographic Office - NAVO DSRC | United States | Cray XT5 QC 2.4 GHz / 2011 | Cray Inc. | 12720 | 96.55 | 122.11 | 588.90
93 | EDF R&D | France | Frontier2 BG/L - Blue Gene/P Solution / 2008 | IBM | 32768 | 95.45 | 111.41 | 252.00
94 | University of Edinburgh | United Kingdom | HECToR - Cray XT4, 2.3 GHz / 2009 | Cray Inc. | 12288 | 95.08 | 113.05 |
95 | IT Service Provider | Germany | Cluster Platform 3000 BL2x220, E54xx 3.0 Ghz, Infiniband / 2009 | Hewlett-Packard | 10240 | 94.74 | 122.88 |
96 | Clemson University | United States | Palmetto - PowerEdge 1950/SunFire X2200/iDataPlex/IBM dx340 Intel 53xx/54xx 2.33Ghz, Opteron 2.3 Ghz, E5410 2.33GHz Myrinet 10G / 2011 | Dell/Sun/IBM | 12528 | 92.48 | 115.26 |
97 | Tsinghua University | China | Inspur TS10000 HPC Server, Xeon X56xx 2.93 Ghz, QDR Infiniband / 2011 | Inspur | 9216 | 92.42 | 107.30 |
98 | IBM Thomas J. Watson Research Center | United States | BGW - eServer Blue Gene Solution / 2005 | IBM | 40960 | 91.29 | 114.69 | 448.00
99 | Idaho National Laboratory | United States | Fission - Appro Xtreme-X3 Opteron 2.4GHz, Infiniband QDR / 2011 | Appro International | 12480 | 91.03 | 119.81 | 360.00
100 | University of Alaska - Arctic Region Supercomputing Center | United States | Cray XE6 8-core 2.3 GHz / 2010 | Cray Inc. | 11648 | 88.92 | 107.16 |