By Dan Olds, Gabriel Consulting
Posted in HPC, 15th November 2012 15:36 GMT
SC12 Salt Lake City is abuzz with news that China's NUDT team has once again snared the LINPACK benchmark crown at a student cluster-building competition. The team's record-breaking score of 3.014 TFLOPS topped all other competitors and marked the first time a student cluster team has broken through the 3 TFLOPS barrier.
This is the second LINPACK win for NUDT (China's National University of Defense Technology) in less than a year: the group's HPC system also scored highest in the benchmark at the ISC'12 competition in Hamburg in June.
Teams from China took both of the top LINPACK slots at SC12 last night, the other being USTC (University of Science and Technology of China).
Team Longhorn came third with 2.488 TFLOPS. Student clustering buffs may recall that Team Longhorn was one of the first teams to achieve a TFLOP in the 2010 competition in New Orleans, so it's not surprising to see them posting a score in the top echelon.
I was surprised to see Team Venus land in the number four spot, beating other more experienced competitors. It's a great result given their relative inexperience with HPC and clustering in general. Team Taiwan and Team Boilermaker (from Purdue University) came fifth and sixth, respectively.
Under the SCC rules, teams have almost no ability to make changes to their hardware once they settle on their final configuration. In other words, they can't run a whole bunch of power-hungry GPUs for a certain workload and then physically detach them or put them to sleep in order to save power when running other less GPU-friendly workloads.
This makes sense - it ensures that the 26 amp power cap actually has teeth. As a result, some teams may configure their systems to perform better on the scientific applications (which make up the majority of their overall score) rather than to capture the highest LINPACK score.
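For a sense of what that cap means in watts, here is a rough Python sketch. The 120 V circuit voltage and the per-node and per-GPU power draws are illustrative assumptions, not figures from the competition rules.

# Rough arithmetic behind the 26 amp cap (a sketch; the voltage and
# component draws below are assumptions for illustration only).
AMP_CAP = 26            # SCC power cap, in amps
VOLTS = 120             # assumed venue circuit voltage

budget_watts = AMP_CAP * VOLTS   # roughly 3,120 W for the whole cluster

node_watts = 350        # hypothetical draw of one CPU node under load
gpu_watts = 225         # hypothetical draw of one GPU during LINPACK
nodes, gpus = 6, 4      # a hypothetical configuration to check

total = nodes * node_watts + gpus * gpu_watts
print(f"Budget: {budget_watts} W, configuration draws {total} W "
      f"({'within cap' if total <= budget_watts else 'over cap'})")

The point of the exercise: a fixed amp cap is what forces the trade-off described above, since every GPU left powered on eats into the headroom available to the rest of the cluster.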
The LINPACK line-up
This rule turned out to have an effect on Team Chowdah's (Boston) LINPACK score. Because the group wasn't allowed to put certain components and nodes to sleep, it had to power down two nodes completely. This means it was two nodes short of a full system on both the LINPACK and application runs - which will definitely throw some sand in their chowder.
While this definitely had an impact on LINPACK, we're not sure exactly what this means for Chowdah's scientific application runs. In other words, the team still has a chance at the overall SCC crown.
More SCC History
I plotted out the history of LINPACK scores since the inaugural SCC back in November 2007 in Reno. After the first few years, the assembled clusters' performance has ramped up satisfyingly, from 0.7 TFLOPS at SC09 to today's score of more than 3 TFLOPS.
Here's a test question for the readers: is the growth in LINPACK scores by the SCC teams larger or smaller than what we'd expect to see with Moore's Law? ®
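For anyone who wants to check, here is a quick back-of-the-envelope sketch using the figures quoted above; the 18-month doubling period is the usual Moore's Law shorthand, not a number from the article.

# Growth check (a sketch; the 18-month doubling period is an assumption).
sc09_tflops = 0.7        # top SCC LINPACK at SC09, quoted above
sc12_tflops = 3.014      # NUDT's record at SC12, quoted above
years = 3.0              # November 2009 to November 2012

actual_growth = sc12_tflops / sc09_tflops      # about 4.3x
moore_growth = 2 ** (years / 1.5)              # about 4.0x at one doubling per 18 months

print(f"SCC LINPACK grew {actual_growth:.1f}x in {years:.0f} years; "
      f"an 18-month doubling predicts roughly {moore_growth:.1f}x")

On those assumptions, the student clusters have slightly outpaced a straight 18-month doubling over the SC09-to-SC12 span.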