Is Tianhe-2 Overrated?

.
Some people here seem to have the IQ of a 12-year-old (strange, given how they all say how high the Chinese IQ is) and do not understand why Tianhe-2 is wasting its bandwidth. Let me explain one more time.

The process-to-power ratio is not the "processor"-to-power ratio. What does that mean? Assume Tianhe-2 has only 10 processes per year to run. The economic value of Tianhe-2 then depends on the tasks it performs: only if almost all of those processes occupied Tianhe-2's full processing power could the machine be said to deliver exactly the power needed. Because its power consumption stays roughly constant, it would lose value whenever it was used to process work requiring only half of its computational power. However, under a statistical sampling model, the standard deviation of the tasks means that the majority of processes will require only a fraction of Tianhe-2's processing power.


Process 1 - requires 15% of Tianhe-2's processing power
Process 2 - requires 30% of Tianhe-2's processing power
Process 3 - requires 35% of Tianhe-2's processing power
Process 4 - requires 40% of Tianhe-2's processing power
Process 5 - requires 45% of Tianhe-2's processing power
Process 6 - requires 50% of Tianhe-2's processing power
Process 7 - requires 50% of Tianhe-2's processing power
Process 8 - requires 55% of Tianhe-2's processing power
Process 9 - requires 75% of Tianhe-2's processing power
Process 10 - requires 95% of Tianhe-2's processing power

These 10 processes fit the standard deviation (SD) model. So, theoretically, Tianhe-2 would spend 65% of its time processing something between slightly less than half and slightly more than half of its capacity. But it still has to consume the power required for the whole machine to function.

Nine times out of ten, Tianhe-2's power is wasted, because essentially you pay full price for something you use at half capacity most of the time.
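To make this concrete, here is a minimal Python sketch of the claim, using the ten utilization figures listed above and my premise that the machine draws roughly constant power regardless of load (an assumption for illustration, not a measured fact about Tianhe-2):

```python
# Minimal sketch of the "SD model" argument. The ten utilization figures come
# from the list above; the constant-power premise is assumed, not measured.

utilization = [0.15, 0.30, 0.35, 0.40, 0.45, 0.50, 0.50, 0.55, 0.75, 0.95]

mean_util = sum(utilization) / len(utilization)
# Fraction of processes that use at most half the machine
at_most_half = sum(1 for u in utilization if u <= 0.50) / len(utilization)

print(f"mean utilization:          {mean_util:.0%}")     # 49%
print(f"processes at <= 50% load:  {at_most_half:.0%}")  # 70%
# Under the constant-power premise, the rest of the bill buys idle capacity:
print(f"capacity paid for but unused: {1 - mean_util:.0%}")  # 51%
```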

Now, you can still say you need Tianhe-2 to run the one process that needs the majority of its power (Process 10). But in essence, no, because processes and computing power can be arranged either serially or in parallel. A process can be broken into two or more parts that are processed separately for the same result, taking longer but requiring less processing power; alternatively, you can connect two or more supercomputers to process the task in parallel. Hence, in effect, you don't actually need a supercomputer like Tianhe-2.
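As a rough illustration of that serial/parallel trade-off (all capacity numbers below are made up for the sketch, not real machine figures):

```python
# Rough sketch of the serial-vs-parallel decomposition described above.
# All capacity numbers are illustrative assumptions, not measurements.
import math

big_job_load = 0.95           # Process 10: 95% of Tianhe-2's capacity
half_machine_capacity = 0.50  # a hypothetical machine half Tianhe-2's size

# Serial: split the job into chunks the smaller machine can hold,
# and run them one after another (same work, longer wall-clock time).
chunks = math.ceil(big_job_load / half_machine_capacity)  # 2

# Parallel: gang together enough half-size machines to hold the whole job.
machines_needed = chunks  # 2, at roughly the original run time

print(f"serial on one half-size machine: ~{chunks}x the run time")
print(f"parallel on {machines_needed} half-size machines: ~1x the run time")
```

(Real decompositions are not this clean; communication overhead, and whether the job even splits, are ignored here.)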



Is this a joke or something?
I think that you are the joke, while you call other people low-IQ.
1. Your standard deviation model should apply to the IBM machine too, if it works.
2. Normally supercomputers run multiple tasks at the same time, so your model doesn't work.

Usually, if your tasks run at low CPU usage, you can choose a slower supercomputer to save cost, or share the CPUs with other tasks.
 
.
I think that you are the joke, while you call other people low-IQ.
1. Your standard deviation model should apply to the IBM machine too, if it works.
2. Normally supercomputers run multiple tasks at the same time, so your model doesn't work.

Usually, if your tasks run at low CPU usage, you can choose a slower supercomputer to save cost, or share the CPUs with other tasks.

That's why you don't understand my point. Look at the last point I raised.

You do not use a new set of samples for a new machine. Just because the US has a lower-powered machine does not mean the median task it runs is also halved. They build the machine to fit the majority of the tasks in this model.

A computer process can be broken down either serially or in parallel (which is what your "multitasking" amounts to): you can either break down your process, or put two supercomputers together and perform the one process that exceeds your standalone machine's capacity, hence making a single machine with massive power obsolete.
 
.
That's why you don't understand my point. Look at the last point I raised.

A computer process can be broken down either serially or in parallel (which is what your "multitasking" amounts to): you can either break down your process, or put two supercomputers together and perform the one process that exceeds your standalone machine's capacity, hence making a single machine with massive power obsolete.
For multi-core computers, you can manage the CPU cores and memory and use them independently for independent tasks. The machine behaves like many independent supercomputers working in parallel on independent tasks. This may not fully utilize the speed of the supercomputer, but it improves cost management.
 
.
For multi-core computers, you can manage the CPU cores and memory and use them independently for independent tasks. The machine behaves like many independent supercomputers working in parallel on independent tasks. This may not fully utilize the speed of the supercomputer, but it improves cost management.

Dude, I am aware of that, but you are still missing my point.

It does not matter, because the model is based on two assumptions:

1.) You have a finite number of tasks.
2.) Your job list falls into the SD model.

The problem is that while you can scale back individual cores (in this case, nodes) one by one, the task load remains, and you still need to power the essentials in order to run the rest of the machine; that cost is constant as long as you have the same interconnect architecture. So whatever is saved by controlling nodes individually in Tianhe-2, the same percentage would have been saved by Titan. And given the task load at hand, Tianhe-2 would still be running at half capacity 65-80% of the time, whereas Titan, with the same sample set, would have been running at 75%+ constantly.
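Here is a toy power model of what I mean; every number is invented for illustration, and the fixed-baseline shape itself is my assumption, not a published figure for either machine:

```python
# Toy power model: total draw = fixed baseline (interconnect, cooling,
# storage) + active nodes at full draw + parked nodes still leaking a little.
# All figures are hypothetical, not measurements of Tianhe-2 or Titan.

def draw_mw(base_mw, node_kw, idle_kw, active, total):
    idle_nodes = total - active
    return base_mw + (node_kw * active + idle_kw * idle_nodes) / 1000.0

job_nodes = 8_000  # a job needing half of the bigger machine

big   = draw_mw(base_mw=4.0, node_kw=1.0, idle_kw=0.1, active=job_nodes, total=16_000)
small = draw_mw(base_mw=4.0, node_kw=1.0, idle_kw=0.1, active=job_nodes, total=8_000)

print(f"bigger machine at 50% load:   {big:.1f} MW")    # 12.8 MW
print(f"smaller machine at 100% load: {small:.1f} MW")  # 12.0 MW
# The gap is only the idle nodes' leakage; the baseline is paid either way,
# and node-level power management saves the same percentage on both machines.
```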
 
.
I don't know why this is so hard to understand, but multiple users can log in to supercomputers simultaneously. You can book the entire supercomputer's capacity for a timeframe, but this is very, very unusual. Usually you book a certain processing capacity, measured as a number of cores. You book 100 GB of memory and 2,000 cores and run a simulation, and in the meantime there are other users crunching their own numbers.

This is called a multi-user operating system, and it is the foundation of supercomputing. If this weren't the case, supercomputers would never have been feasible to begin with.
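As a toy illustration of that booking model (the allocator below is invented for the example; real schedulers such as SLURM or PBS are far more sophisticated, and the capacity figures are only rough Tianhe-2-scale numbers):

```python
# Toy multi-user allocator: users book cores and memory, and jobs run side by
# side as long as capacity remains. Invented for illustration only.

class Cluster:
    def __init__(self, cores, mem_gb):
        self.free_cores, self.free_mem_gb = cores, mem_gb

    def book(self, user, cores, mem_gb):
        if cores <= self.free_cores and mem_gb <= self.free_mem_gb:
            self.free_cores -= cores
            self.free_mem_gb -= mem_gb
            print(f"{user}: granted {cores:,} cores / {mem_gb:,} GB")
        else:
            print(f"{user}: queued until resources free up")

machine = Cluster(cores=3_120_000, mem_gb=1_400_000)  # rough Tianhe-2 scale
machine.book("alice", cores=2_000, mem_gb=100)   # the booking described above
machine.book("bob", cores=50_000, mem_gb=8_000)  # runs at the same time
```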
 
.
I think that you do not understand the difference between single-core and multi-core computers. All supercomputers have thousands of CPU cores, and core and memory management is critical.
For a task with low CPU usage, only a limited number of cores and a limited amount of memory are involved, while the other cores and memory rest with little power consumption. Only for tasks needing high CPU usage do more and more cores and memory get involved.
 
.
I think that you do not understand the difference between single-core and multi-core computers. All supercomputers have thousands of CPU cores, and core and memory management is critical.

For a task with low CPU usage, only a limited number of cores and a limited amount of memory are involved, while the other cores and memory rest with little power consumption. Only for tasks needing high CPU usage do more and more cores and memory get involved.

And I think you don't understand my point.

What you are saying is indeed correct. However, just as this happens on Tianhe, THIS WILL ALSO HAPPEN ON TITAN. Unless you are claiming the IBM computer does not have node management, or that Chinese node-management technology is superior to IBM's, what you are saying does not make sense...

I understand how node management and core management help to save time and power consumption, but do you actually think only Tianhe has them and the IBM does not?

Again, only two parameters will change this model.

1.) You have an infinite number of tasks; then the model favours Tianhe-2, but that is statistically and physically impossible.

2.) The job list falls outside the SD model: if more than half of the Chinese job list actually needs over 75% of capacity, then that would be another story.
 
.
Some people here seem to have the IQ of a 12-year-old (strange, given how they all say how high the Chinese IQ is) and do not understand why Tianhe-2 is wasting its bandwidth. Let me explain one more time.

The process-to-power ratio is not the "processor"-to-power ratio. What does that mean? Assume Tianhe-2 has only 10 processes per year to run. The economic value of Tianhe-2 then depends on the tasks it performs: only if almost all of those processes occupied Tianhe-2's full processing power could the machine be said to deliver exactly the power needed. Because its power consumption stays roughly constant, it would lose value whenever it was used to process work requiring only half of its computational power. However, under a statistical sampling model, the standard deviation of the tasks means that the majority of processes will require only a fraction of Tianhe-2's processing power.


Process 1 - requires 15% of Tianhe-2's processing power
Process 2 - requires 30% of Tianhe-2's processing power
Process 3 - requires 35% of Tianhe-2's processing power
Process 4 - requires 40% of Tianhe-2's processing power
Process 5 - requires 45% of Tianhe-2's processing power
Process 6 - requires 50% of Tianhe-2's processing power
Process 7 - requires 50% of Tianhe-2's processing power
Process 8 - requires 55% of Tianhe-2's processing power
Process 9 - requires 75% of Tianhe-2's processing power
Process 10 - requires 95% of Tianhe-2's processing power

These 10 processes fit the standard deviation (SD) model. So, theoretically, Tianhe-2 would spend 65% of its time processing something between slightly less than half and slightly more than half of its capacity. But it still has to consume the power required for the whole machine to function.

Nine times out of ten, Tianhe-2's power is wasted, because essentially you pay full price for something you use at half capacity most of the time.

Now, you can still say you need Tianhe-2 to run the one process that needs the majority of its power (Process 10). But in essence, no, because processes and computing power can be arranged either serially or in parallel. A process can be broken into two or more parts that are processed separately for the same result, taking longer but requiring less processing power; alternatively, you can connect two or more supercomputers to process the task in parallel. Hence, in effect, you don't actually need a supercomputer like Tianhe-2.



Is this a joke or something?

This is exactly what I'm trying to say: why do they need Tianhe-2's dozens of cabinets for word processing?
I do it on my iPad just fine; even my iPad is overkill for it, as it barely gets hot while I type 100 WPM.
And if I want to watch YouTube I just bring out my Galaxy; anyone who buys a 70-inch 4K TV is stupid.
Let alone China building a supercomputer: every time it powers on it must cause a blackout in Guangzhou.
Speaking of processing power being wasted, I need to go play some 3D games on my iPad now.
 
.
Your SD model doesn't apply, because all of those processes run simultaneously. Dude, I'm a computer scientist and I used a 40-CPU system for my MSc thesis. You talk nonsense.
 
.
I don't know why this is so hard to understand, but multiple users can log in to supercomputers simultaneously. You can book the entire supercomputer's capacity for a timeframe, but this is very, very unusual. Usually you book a certain processing capacity, measured as a number of cores. You book 100 GB of memory and 2,000 cores and run a simulation, and in the meantime there are other users crunching their own numbers.

This is called a multi-user operating system, and it is the foundation of supercomputing. If this weren't the case, supercomputers would never have been feasible to begin with.
Yes. This is how supercomputers run. Supercomputers have core and memory management. For tasks requiring few resources, the other cores and memory just rest with low power consumption. But
And I think you don't understand my point.

What you are saying is indeed correct. However, just as this happens on Tianhe, THIS WILL ALSO HAPPEN ON TITAN. Unless you are claiming the IBM computer does not have node management, or that Chinese node-management technology is superior to IBM's, what you are saying does not make sense...

I understand how node management and core management help to save time and power consumption, but do you actually think only Tianhe has them and the IBM does not?
Both have it. Tianhe-2 may just have more cores and memory than the IBM. Technically, if Tianhe-2 reduced its running cores and memory, it might just become another Titan. That's why you can scale power consumption. Normally, more cores will lower energy efficiency.
 
.
Yes. This is how supercomputers run. Supercomputers have core and memory management. For tasks requiring few resources, the other cores and memory just rest with low power consumption. But

Both have it. Tianhe-2 may just have more cores and memory than the IBM. Technically, if Tianhe-2 reduced its running cores and memory, it might just become another Titan. That's why you can scale power consumption. Normally, more cores will lower energy efficiency.

Yeah, exactly. Three different systems would be much less energy efficient. Besides, while the idle capacity is sleeping, it consumes much less energy than when it is utilized. At each clock tick the CPU only asks "do I have any work to do?", which consumes very little energy. The ALU (the most energy-consuming part of the CPU) consumes virtually nothing while idle.
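A back-of-the-envelope for that idle-versus-active gap; the watt figures are assumptions for illustration, not datasheet values for any particular CPU:

```python
# Back-of-the-envelope: energy from half the machine's nodes sitting idle for
# a day, hot vs. power-gated. Watt figures are illustrative assumptions.

ACTIVE_W = 300.0   # assumed full-load draw of one node
IDLE_W   = 30.0    # assumed draw while clock/power gated

idle_nodes, hours = 8_000, 24.0

hot_kwh   = ACTIVE_W * idle_nodes * hours / 1000.0
gated_kwh = IDLE_W   * idle_nodes * hours / 1000.0

print(f"if the idle half ran hot: {hot_kwh:,.0f} kWh/day")    # 57,600
print(f"gated instead:            {gated_kwh:,.0f} kWh/day")  # 5,760
print(f"idle draw is {IDLE_W / ACTIVE_W:.0%} of active under these assumptions")
```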
 
.
Tianhe-2 has 16,000 compute nodes, each comprising two Intel Ivy Bridge Xeon processors and three Xeon Phi coprocessor chips, so it too is based on Intel cores. Tianhe-2 just means that China has the leading technology for connecting and managing these nodes. To stop Chinese supercomputer technology, the US government forbade Intel from exporting the related CPUs. But China developed home-grown CPUs after this ban; a US ban usually boosts Chinese technology, because project leaders otherwise tend to choose the easiest solution available. Tianhe-3 will be based on home-grown CPUs and is expected in mid-2016.
China reveals home-grown supercomputer chips after Intel x86 ban • The Register

If we see Tianhe-3 released this year, China will be a full and independent master of supercomputer technology, without any Intel help.
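Worked arithmetic from that node description (the per-chip core counts below are the commonly cited figures for Tianhe-2's parts, the 12-core Xeon E5-2692 v2 and the 57-core Xeon Phi 31S1P):

```python
# Worked arithmetic from the node description above. Per-chip core counts are
# the commonly cited figures for Tianhe-2's processors.

nodes = 16_000
xeons_per_node, phis_per_node = 2, 3
xeon_cores, phi_cores = 12, 57   # Xeon E5-2692 v2, Xeon Phi 31S1P

total_chips = nodes * (xeons_per_node + phis_per_node)
total_cores = nodes * (xeons_per_node * xeon_cores + phis_per_node * phi_cores)

print(f"{total_chips:,} processor chips")  # 80,000
print(f"{total_cores:,} cores")            # 3,120,000 -- the headline figure
```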
 
.
Lol, this time I am gonna ask: is this actually a joke??

Oh, by the way, it may interest you to know that a USAF supercomputer was built from 1,760 PS3s...

US Air Force connects 1,760 PlayStation 3's to build supercomputer

So maybe one day you can connect 10,000 iPads and build your own supercomputer. I don't know.

Yes, that was a joke; now this is a serious question.
You are saying that instead of learning to type faster, you'd just type with your toes too when you need to, right?

A distributed supercomputer can't handle a centralized task like a real supercomputer can.

According to your theory, all supercomputers are a waste.

Besides, I thought this thread was about a waste of power, not a waste of processing power.

 
.
Yes, that was a joke; now this is a serious question.
You are saying that instead of learning to type faster, you'd just type with your toes too when you need to, right?

A distributed supercomputer can't handle a centralized task like a real supercomputer can.

According to your theory, all supercomputers are a waste.

Besides, I thought this thread was about a waste of power, not a waste of processing power.


I am saying: why learn to type at all, when you can use voice commands?

And yes, in a way, EVERY SUPERCOMPUTER in this world is a waste of time, money and resources. Ask any computer scientist, not the self-proclaimed ones here on PDF.

Do you know how one node is connected to another in a supercomputer? If you do, then you will understand why connecting two supercomputers in parallel is basically the same as putting two nodes together.

And that last sentence has just shown how ignorant you are regarding the topic at hand. Why? Well... again, if you have to ask, you don't belong here.
 