Facilities available to our group members
[[File:cowboy.png|thumb|The Cowboy supercomputer cluster, funded by an NSF MRI grant and housed at OSU Stillwater.]]

Computation facilities/capabilities: Our group has sole access to a 120-core GP-GPU cluster housed in the Borunda computational lab. This general-purpose GPU (GP-GPU) cluster consists of ten compute nodes, each equipped with the following:
* Intel Core i7-3930K "Sandy Bridge" 3.2 GHz processor (6 cores, 12 MB cache)
* 1 TB hard drive + 256 GB solid-state drive
* 32 GB SDRAM
* 1 NVIDIA Tesla C2075 GPU card (448 CUDA cores running at 575 MHz, 6 GB RAM)
* 1 NVIDIA GeForce GTX 660 Ti GPU card (1344 CUDA cores running at 915 MHz, 3 GB RAM)
Since each node has two stand-alone GPUs, the cluster has a total of 17,920 CUDA cores optimized for highly parallelized calculations.
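For reference, the total CUDA core count quoted above follows directly from the per-node hardware listed in the previous paragraph:

<math>10 \times (448 + 1344) = 10 \times 1792 = 17{,}920 \text{ CUDA cores.}</math>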

OSU's HPCC facilities currently consist of 'Cowboy', which is equipped with the following:
* 252 compute nodes with Intel Xeon E5-2620 "Sandy Bridge" hex-core 2.0 GHz CPUs
* 2 "fat" nodes, each with 256 GB RAM and an NVIDIA Tesla C2075 card (otherwise the same configuration as above)
* 92 TB of globally accessible high-performance disk provided by three shelves of Panasas ActivStor12
* Infiniband for message passing (see the example below), Gigabit Ethernet for I/O, and an Ethernet management network
* 15 compute nodes (same configuration as above), each with two Xeon Phi cards (64 cores running at 1.30 GHz, 16 GB RAM), available only to the Fennell and Borunda groups at OSU (condominium equipment)
Dr. Borunda is a Co-PI on an NSF-funded MRI project that will keep the supercomputer up to date for the next five years.
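To illustrate how jobs on such a cluster typically use the Infiniband fabric for message passing, here is a minimal MPI sketch in Python. It assumes that the mpi4py package and an MPI launcher are available in the cluster's software environment (module names, scheduler directives, and the exact submission procedure are not shown); it is a sketch of the general pattern, not a description of Cowboy's actual setup.

 # Minimal MPI example (mpi4py assumed to be available on the cluster).
 # Each rank sums a disjoint block of integers, and the partial sums are
 # combined on rank 0 via a reduction that travels over the interconnect.
 from mpi4py import MPI
 
 comm = MPI.COMM_WORLD
 rank = comm.Get_rank()   # this process's ID within the MPI job
 size = comm.Get_size()   # total number of MPI processes
 
 # Toy workload: each rank handles its own slice of 0 .. 10*size - 1
 local_sum = sum(range(rank * 10, (rank + 1) * 10))
 
 # Combine the partial results on rank 0
 total = comm.reduce(local_sum, op=MPI.SUM, root=0)
 
 if rank == 0:
     print(f"{size} ranks computed a total of {total}")

Such a script would normally be launched across nodes with an MPI launcher (for example, mpirun -np 24 python example.py) from inside a batch job submitted through the cluster's scheduler.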

Group members also have access to the OU Supercomputing Center for Education and Research (OSCER) and the NASA Advanced Supercomputing systems. In 2015, OSCER deployed 'Schooner', the largest academic supercomputer in Oklahoma history. This Dell cluster supercomputer has a peak speed of 346.9 TFLOPs (trillions of calculations per second) and consists of 499 compute nodes, 10,180 CPU cores, ~23 TB RAM, ~450 TB of globally accessible user storage, and two networks. Its computing capacity consists of:
* PowerEdge R430 compute nodes (servers): 266 × dual Intel Xeon "Haswell" E5-2650v3 10-core 2.3 GHz, 32 GB RAM (all owned by OSCER)
* PowerEdge R730 compute/accelerator-capable nodes:
** 4 × dual Haswell E5-2650v3 10-core 2.3 GHz, 32 GB RAM, dual NVIDIA K20M accelerator cards (3 OSCER, 1 condominium)
** 12 × dual Haswell E5-2650v3 10-core 2.3 GHz, 32 GB RAM, dual Intel Xeon Phi MIC 31S1P accelerator cards (all OSCER)
** 13 × dual Haswell E5-2650v3 10-core 2.3 GHz, 32 GB RAM, no accelerator cards (all OSCER)
** 5 × dual Haswell E5-2670v3 12-core 2.3 GHz, 64 GB RAM, no accelerator cards (all OSCER)
Storage consists of a large-scale, high-performance parallel filesystem that is globally user-accessible (DataDirect Networks Exascaler SFX7700X with 70 SATA 6 TB disk drives, ~305 TB usable) and lower-performance servers full of disk drives that are also globally user-accessible (~150 TB usable). The nodes are networked with Infiniband and Ethernet.