Co-financed

Co-financed from various projects and third-party funds over different time periods:

  • 2010-2021: Transregional Collaborative Research Center 80 (TRR 80),
  • Augsburg Centre for Innovative Technologies (ACIT, Kompetenzzentrum für Innovative Technologien),
  • 2010-2017: Research Unit (Forschergruppe) FOR 1346,
  • Economic Stimulus Packages (Konjunkturprogramm) I / II,
  • 2000-2009: Collaborative Research Center (Sonderforschungsbereich) SFB 484,
  • WAP funds of the Institute of Physics (WAP-Mitteln der Physik).

Timeline


Update of Storage System

  • 2 NetApp 2860 (> 1PB)

First GPU Node

  • 1 node: 2x64 core AMD Epyc 7742, 2.25 GHz, 1TByte RAM, Hewlett Packard Enterprise (HPE) DL385 Gen10 Server (alcc136)
    • 1 Nvidia Tesla V100S (PCIe, 32GB)

Extra AMD Epyc Nodes

  • 2 nodes: 2x32 core AMD Epyc 7452, 2.35 GHz, 512GByte RAM, Hewlett Packard Enterprise (HPE) DL385 Gen10 Servers (alcc140-141)
  • 3 nodes: 2x64 core AMD Epyc 7742, 2.25 GHz, 1TByte RAM, Hewlett Packard Enterprise (HPE) DL385 Gen10 Servers (alcc137-139)
  • Networking: 25GBit/s Ethernet.

First AMD Epyc Nodes

  • 2 nodes: 2x32 core AMD Epyc 7452, 2.35 GHz, 512GByte RAM, Hewlett Packard Enterprise (HPE) DL385 Gen10 Servers (alcc132-135)
  • Networking: 25GBit/s Ethernet.

First Skylake Nodes

  • 3 nodes: 2x18 core Intel Xeon Gold 6140 (Skylake) 2.3 GHz, 256GByte RAM, Hewlett Packard Enterprise (HPE) DL360 Gen10 Servers (alcc128,alcc130-alcc131)
  • Networking: 25GBit/s Ethernet.

New Login Node

  • 1 node: 2x18 core Intel Xeon Gold 6140 (Skylake) 2.3 GHz, 384GByte RAM, Hewlett Packard Enterprise (HPE) DL360 Gen10 Server
  • Networking: 25GBit/s Ethernet.

Network Modernization

  • Networking: 25GBit/s Ethernet.

Extra Broadwell Nodes

  • 3 nodes: 2x14 core Intel Xeon E5-2680v4 (Broadwell) 2.4 GHz, 256GByte RAM, Hewlett Packard Enterprise (HPE) DL360 Gen9 Servers (alcc123-alcc125)
  • Networking: Dual 1GBit/s Ethernet.

Extra Storage Nodes

  • 2 nodes: 2x10 core Intel Xeon E5-2640v4 (Broadwell) 2.4 GHz, 128GByte RAM, Hewlett Packard Enterprise (HPE) DL360 Gen9 Servers (secondary GPFS nodes)
  • Networking: Dual 1GBit/s Ethernet.

New Storage

  • NetApp E2760 (276TB)
  • New primary GPFS node: 2x12 core Intel Xeon E5-2650v4 (Broadwell) 2.2 GHz, 128GByte RAM, Hewlett Packard Enterprise (HPE) DL360 Gen9 Server
  • Networking: Dual 1GBit/s Ethernet.

First Broadwell Nodes

  • 9 nodes: 2x14 core Intel Xeon E5-2680v4 (Broadwell) 2.4 GHz, 256GByte RAM, Hewlett Packard Enterprise (HPE) DL360 Gen9 Servers (alcc114-alcc122)
  • Networking: Dual 1GBit/s Ethernet.

First HPE Servers, First Haswell EP Nodes

  • 2 nodes: 2x12 core Intel Xeon E5-2680v3 (Haswell EP) 2.6 GHz, 512GByte RAM, Hewlett Packard Enterprise (HPE) DL360 Gen9 Servers (alcc109-alcc110)
  • Networking: Dual 1GBit/s Ethernet.

Extra Ivy Bridge EP Nodes

  • 3 nodes: 2x10 core Intel Xeon E5-2680v2 (Ivy Bridge EP) 2.8 GHz, 256GByte RAM, Fujitsu Primergy RX200S8 alcc(106-108),
  • Networking: Dual 1GBit/s Ethernet.

First Ivy Bridge EP Nodes

  • 3 nodes: 2x10 core Intel Xeon E5-2680v2 (Ivy Bridge EP) 2.8 GHz, 128GByte RAM, Fujitsu Primergy RX200S8 alcc(103-105),
  • Networking: Dual 1GBit/s Ethernet.

Extension of Storage

  • GPFS1 (100TB)

First Sandy Bridge EP Nodes

  • 4 nodes: 2x8 core Intel Xeon E5-2670 (Sandy Bridge EP) 2.6 GHz, 128GByte RAM, Fujitsu Primergy RX200S7 alcc(97-99,102),
  • 2 nodes: 2x8 core Intel Xeon E5-2670 (Sandy Bridge EP) 2.6 GHz, 256GByte RAM, Fujitsu Primergy RX200S7 alcc(100-101),
  • Networking: Dual 1GBit/s Ethernet.

New Storage System

  • Migration from the Lustre file system to IBM's General Parallel File System (GPFS) (60TB)

Workload Managing System

  • Migration from SGE (Sun Grid Engine) to Slurm (Simple Linux Utility for Resource Management)

Additional Fujitsu Water-cooled Rack


Extra Westmere Nodes

  • 14 nodes: 2x6 core Intel Xeon X5670 (Westmere) 2.93 GHz, 48GByte RAM, Fujitsu Siemens Computers Primergy RX200S6 alcc(80-93),
  • Networking: Dual 1GBit/s Ethernet.

First Westmere Nodes

  • 9 nodes: 2x6 core Intel Xeon X5670 (Westmere) 2.93 GHz, 48GByte RAM, Fujitsu Siemens Computers Primergy RX200S6 alcc(70-78),
  • Networking: Dual 1GBit/s Ethernet.

First Gainestown Nodes, Konjunkturprogramm II

  • 16 nodes: 2x4 core Intel Xeon X5570 (Gainestown) 2.93 GHz, 64GByte RAM, Fujitsu Siemens Computers Primergy RX200S5 alcc(54-69),
  • Networking: Dual 1GBit/s Ethernet.

First Water-Cooled Rack

  • Fujitsu Primecenter LC Rack, Cooltherm Rack

UPS (USV)

  • Uninterruptible power supply (Unterbrechungsfreie Stromversorgung, USV), D6306/60kVA.

Extra Compute Nodes

  • 4 nodes: 2x4 core Intel Xeon E5440 (Harpertown) 2.83 GHz, 32GByte RAM, Fujitsu Siemens Computers RX200S4 alcc(49-52).

Extension of Data Storage System

  • Sun StorageTek,
  • Lustre parallel file system,
  • 2 nodes, data storage: first x86_64 64bit 2x Intel Xeon DP E5440 (Harpertown) 2.83 GHz, 16GByte RAM, Fujitsu Siemens Computers RX300S4.

Extra Compute Nodes

  • 3 nodes: 2x4 core Intel Xeon E5440 (Harpertown) 2.83 GHz, 12GByte RAM, Fujitsu Siemens Computers Primergy RX200S4.

RAM Upgrade

  • Upgrade of each node to 16GB RAM.

First Harpertown Nodes

  • 8 nodes: 2x4 core Intel Xeon E5440 (Harpertown) 2.83 GHz, 12GByte RAM, Fujitsu Siemens Computers Primergy RX200S4 (7x SFB-484, 1x WAP funds of the Physics department (21 00 00 99)) alcc(37-44).

Data Servers

  • 4 nodes: x86_64 64bit 2x2 core Intel Xeon 5150 (Woodcrest) 2.66 GHz, 4GByte RAM, Fujitsu Siemens Computers Primergy RX300S3/X.

Extra Compute Nodes

  • 5 extra nodes: x86_64 64bit 2x2 core Intel Xeon 5160 (Woodcrest) 3.0 GHz, 16GByte RAM, Fujitsu Siemens Computers Primergy RX200S3/X.

Beginning of cooperation between the Institute for Physics and the chair of "Physische Geographie und Quantitative Methode" (Physical Geography and Quantitative Methods)

First Intel x86_64 Nodes

  • 12 nodes: x86_64 64bit 2x2 core Intel Xeon 5160 (Woodcrest) 3.0 GHz, 12GByte RAM, Fujitsu Siemens Computers Primergy RX200S3/X,
  • Networking: 1GBit/s Ethernet.

Storage Oxygen RAID 416E

  • Storage Oxygen RAID 416E, 16x 750 GB.

First x86_64 nodes

  • 3 nodes: 64bit AMD 2x 2.2 GHz Dual Core Opteron, 8GB RAM, DELTA Computer Products GmbH (alcc118-alcc120).

Birth of ALCC (Augsburg Linux Compute Cluster)

The first x86 cluster at the Institute for Physics

  • 17 nodes: Intel 32bit Xeon (Paxville) 2.8GHz, 1GB RAM, IBM xSeries 335/345, (alcc01-alcc17),
  • IBM FAStT600 Storage,
  • Networking: 1GBit/s Ethernet,
  • Workload managing system: Sun Grid Engine (SGE).

First Nodes Ordered including Data Servers


Preparation, Racks

