
Is Tianhe-2 Overrated?

A distributed supercomputer can't handle a centralized task like a real supercomputer.

Yes, exactly. You don't have shared memory in distributed computing, which creates a lot of problems to begin with. You need to virtualize and manage the distributed memory as if it were a single gigantic memory (source: Single system image - Wikipedia, the free encyclopedia). You use frameworks like Hazelcast to create that "illusion". Such systems have a lot of downsides compared to a supercomputer. For starters, they use the internet to communicate, and an internet connection is never reliable; even a few seconds of downtime can ruin the entire calculation. They also use data duplication for fail-safe purposes, which by itself wastes 20-25% of the memory.
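To make that "single gigantic memory" illusion concrete, here is a minimal, purely illustrative Python sketch (not Hazelcast's actual API) of a key-value map partitioned across nodes, with one backup copy per entry; those backup copies are exactly the duplication overhead mentioned above.

```python
# Toy sketch of a distributed map: each key has an "owner" node plus a backup
# copy on another node, so the caller sees one big map even if a node dies.
class DistributedMap:
    def __init__(self, num_nodes=4, backups=1):
        self.nodes = [dict() for _ in range(num_nodes)]
        self.backups = backups

    def _owner(self, key):
        return hash(key) % len(self.nodes)            # node that "owns" this key

    def put(self, key, value):
        owner = self._owner(key)
        self.nodes[owner][key] = value                # primary copy
        for i in range(1, self.backups + 1):          # backup copies = memory overhead
            self.nodes[(owner + i) % len(self.nodes)][key] = value

    def get(self, key):
        owner = self._owner(key)
        if key in self.nodes[owner]:
            return self.nodes[owner][key]
        for i in range(1, self.backups + 1):          # fall back to a replica if the owner is gone
            node = self.nodes[(owner + i) % len(self.nodes)]
            if key in node:
                return node[key]
        raise KeyError(key)

m = DistributedMap()
m.put("temperature", 42.0)
print(m.get("temperature"))   # looks like one big map to the caller
```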

But let's say memory is cheap. What about CPU power? Processes need to communicate while they are running. Normally in supercomputers, processes communicate over a closed network, and it's very fast. Tianhe-2A uses InfiniBand (source: InfiniBand - Wikipedia, the free encyclopedia), which is an industry standard. For managing the network communication between CPUs, Tianhe-2A uses the FeiTeng processor (source: FeiTeng (processor) - Wikipedia, the free encyclopedia), which handles smart network management and load balancing.
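The programmer-facing side of that closed-network communication is usually MPI. A minimal, generic mpi4py example (nothing here is specific to Tianhe-2 or its interconnect; the filename is just for illustration):

```python
# Run with e.g.: mpiexec -n 2 python ping.py
# Standard mpi4py point-to-point messaging; on a real cluster the bytes travel
# over the fast interconnect (InfiniBand, TH Express, Tofu, ...), not the internet.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {"step": 1, "payload": [1.0, 2.0, 3.0]}
    comm.send(data, dest=1, tag=11)        # blocking send to rank 1
elif rank == 1:
    data = comm.recv(source=0, tag=11)     # blocking receive from rank 0
    print("rank 1 received:", data)
```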

However, there was a huge Japanese success with the K computer. As you may know, the K computer still leads the Graph500 list, which is a specific benchmark for data-intensive computing. Japanese engineers tried to strike a balance between a centralized approach and distributed computing. They built it on a computer clustering architecture (source: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5993668). That architecture creates a distributed memory, which has a scalability advantage. However, unlike peer-to-peer systems or grid computing, inter-node communication is managed over a closed network with the MPI protocol (source: Message passing in computer clusters - Wikipedia, the free encyclopedia). The K computer uses Tofu (source: http://www.fujitsu.com/downloads/TC/sc10/programming-on-k-computer.pdf), with an MPI implementation tuned specifically for the K computer, and a 6-dimensional torus interconnect (source: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5331902) for its network topology. As far as I know, the 6D torus interconnect is unique; Cray uses a 3D torus interconnect. You can't even visualise a 6D torus interconnect. I don't know how the Japanese engineers did it; probably they used another supercomputer to design the 6D torus interconnect scheme. Brilliant.
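You can't draw a 6D torus, but it is easy to describe in code: each node gets six coordinates, each coordinate wraps around, and every node has two neighbours per axis, so 12 links in total. A purely illustrative sketch with made-up dimensions (the real Tofu geometry and routing are far more sophisticated):

```python
# Hypothetical 6D torus with 4 nodes along each axis (4**6 = 4096 nodes).
# This only shows the wrap-around neighbour structure that makes it a "torus".
DIMS = (4, 4, 4, 4, 4, 4)

def neighbours(node, dims=DIMS):
    """Return the 12 direct neighbours of a node given by its 6 coordinates."""
    result = []
    for axis in range(len(dims)):
        for step in (-1, +1):
            coords = list(node)
            coords[axis] = (coords[axis] + step) % dims[axis]  # wrap around
            result.append(tuple(coords))
    return result

origin = (0, 0, 0, 0, 0, 0)
print(len(neighbours(origin)))   # 12 links per node
print(neighbours(origin)[:3])    # e.g. (3,0,0,0,0,0), (1,0,0,0,0,0), (0,3,0,0,0,0)
```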

Now, all of that creates extra overhead for the K computer. However, it is very easily scalable and cost-effective. It also forces the programmer to think about data distribution while programming, and programmers have much more flexibility in such systems. I guess that's why the K computer is very good at data-intensive computing, because with poorly written code a distributed-memory architecture turns into a nightmare. @Nihonjin1051, Japanese people always manage to amaze me. This is a good system.
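As a small illustration of the "think about data distribution" point, here is a generic mpi4py sketch (made-up problem size, nothing K-computer-specific) where each rank owns a block of a global array and partial results are combined explicitly:

```python
# Run with e.g.: mpiexec -n 4 python distribute.py
# Each MPI rank works out which slice of a big array it owns, instead of
# relying on one shared memory, and the ranks combine partial results at the end.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1_000_000                      # global problem size (arbitrary)
counts = [N // size + (1 if r < N % size else 0) for r in range(size)]
start = sum(counts[:rank])         # this rank's block of the global index range
local = np.arange(start, start + counts[rank], dtype=np.float64)

local_sum = local.sum()
total = comm.allreduce(local_sum, op=MPI.SUM)   # combine partial results
if rank == 0:
    print("sum over all ranks:", total)
```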
 
Now, all of that creates extra overhead for the K computer. However, it is very easily scalable and cost-effective. It also forces the programmer to think about data distribution while programming, and programmers have much more flexibility in such systems. I guess that's why the K computer is very good at data-intensive computing, because with poorly written code a distributed-memory architecture turns into a nightmare. @Nihonjin1051, Japanese people always manage to amaze me. This is a good system.

Thank you my friend, and on a more neuroscience-oriented note (an area of specialization for me during my MS in Neuroscience days), the K computer was actually utilized to simulate brain activity, or at least to code that paradigm, by asking the question:

  • Is it really possible to simulate the human brain on a computer?

AI researchers have been investigating that question for decades, but Japanese and German scientists have run what they say is the largest-ever simulation of brain activity using a machine. The simulation involved 1.73 billion virtual nerve cells connected by 10.4 trillion synapses.

Each synapse between excitatory neurons had 24 bytes of memory for greater accuracy. The simulation ran on open-source NEST software and had about 1 petabyte of main memory, which is roughly equal to the memory of 250,000 PCs. The synapses were randomly connected and the process was meant only to "test the limits of the simulation technology developed in the project and the capabilities of K," RIKEN said in a release.

The K computer is housed at RIKEN's Advanced Institute for Computational Science in Kobe, Japan, and has a rated performance of 10.51 petaflops using 705,024 SPARC64 processing cores.
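Those figures are easy to sanity-check with back-of-envelope arithmetic; the numbers below come straight from the article above, everything else is simple division:

```python
# Back-of-envelope check of the figures quoted above.
synapses          = 10.4e12          # 10.4 trillion synapses
bytes_per_synapse = 24               # per the RIKEN release
neurons           = 1.73e9
total_memory      = 1e15             # ~1 PB of main memory used
cores             = 705_024
peak_flops        = 10.51e15         # 10.51 petaflops rated performance

synapse_bytes = synapses * bytes_per_synapse
print(f"synapse state alone: {synapse_bytes / 1e12:.0f} TB "
      f"({synapse_bytes / total_memory:.0%} of the ~1 PB used)")   # ~250 TB, ~25%
print(f"synapses per neuron: {synapses / neurons:,.0f}")           # ~6,000
print(f"peak per core: {peak_flops / cores / 1e9:.1f} GFLOPS")     # ~14.9 GFLOPS/core
```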

"If petascale computers like the K computer are capable of representing 1 percent of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exascale computers hopefully available within the next decade," Markus Diesmann of the Institute of Neuroscience and Medicine at Germany's Forschungszentrum Julich said in the release.

So we can see, @Lure, @Abacin, @bbccdd1470, how supercomputers, as in the case of the K computer, can be utilized not just for data processing and coding, but to better unravel the neuronal synaptic network in the human central nervous system. Juxtapose that data onto the paradigm of AI (artificial intelligence). :)


The fact that China, Japan, Germany, and Turkey, among other countries, are interested in this through recent developments means that we are in for an exciting future in neuroscience, computer science, and the combination of both in the AI paradigm.



Ganbare!!!!!!!! brothers!!!!!!!

However, there was a huge Japanese success with the K computer. As you may know, the K computer still leads the Graph500 list, which is a specific benchmark for data-intensive computing.

Thank you my friend, and I believe Japan is not alone in this development. We must juxtapose any development in the AI paradigm with robotics, a field that is currently being explored by both Japan and China. An exciting time, my friend. :)
 
The simulation involved 1.73 billion virtual nerve cells connected by 10.4 trillion synapses.

Each synapse between excitatory neurons had 24 bytes of memory for greater accuracy. The simulation ran on open-source NEST software and had about 1 petabyte of main memory, which is roughly equal to the memory of 250,000 PCs. The synapses were randomly connected and the process was meant only to "test the limits of the simulation technology developed in the project and the capabilities of K," RIKEN said in a release.

That explains all the trouble of building a 6D torus interconnect topology between the CPUs. That's really exciting news. I'm not sure how one can simulate synaptic plasticity, though. Clearly the connections between some processing units need to be diminished while being enhanced between others. That should be an evolving network, an evolving network that adapts to the problem at hand. So it should be able to change its network connections to mimic synaptic plasticity. Do you have any other idea?
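As a purely illustrative sketch of "strengthen some connections, weaken others", here is a toy Hebbian-style update on a weight matrix in Python; it is not what NEST or the RIKEN simulation actually does, just the basic idea of an evolving network:

```python
import numpy as np

# Toy plasticity rule on a small network: connections between units that are
# active together get stronger, all others slowly decay.
rng = np.random.default_rng(0)
n_units = 8
weights = rng.uniform(0.0, 0.1, size=(n_units, n_units))
np.fill_diagonal(weights, 0.0)            # no self-connections

learning_rate = 0.05
decay = 0.01

for step in range(100):
    activity = (rng.random(n_units) < 0.3).astype(float)   # which units fired this step
    coactive = np.outer(activity, activity)                 # pairs that fired together
    weights += learning_rate * coactive                     # strengthen co-active pairs
    weights -= decay * weights                               # let unused links fade
    np.fill_diagonal(weights, 0.0)

print("strongest connection:", weights.max())
print("weakest connection:",  weights.min())
```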
 
American members question the cost and efficiency of Chinese supercomputers, but Chinese and Japanese members say it's worth it regardless. That pretty much sums up this thread.
:cheers:
 

Positive ratings!
 
That should be an evolving network, an evolving network that adapts to the problem at hand. So it should be able to change its network connections to mimic synaptic plasticity. Do you have any other idea?

Absolutely, that is the beauty of synaptic plasticity! Currently we (global neuroscience institutions) lack that technology, but through supercomputers we can explore and redesign this paradigm. We need to understand the complex aspects of the interactions in the brain, particularly the relations of enzymes, genes, and ion channels. Right now we, to a limited degree, utilize computer models that enable exploration of the information processing done by the signalling pathways in synaptic activity in neurons of the striatum, hippocampus and cerebellum. I suppose new developments such as the K computer and others are used to combine computational neuroscience techniques with systems biology techniques to realize kinetic models of the molecular mechanisms that underlie new research.
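For a sense of what the simplest such kinetic models look like in practice, here is an illustrative sketch (made-up rate constants, plain mass-action kinetics for a transmitter binding a receptor, integrated with SciPy), not any specific published pathway model:

```python
import numpy as np
from scipy.integrate import odeint

# Minimal mass-action kinetic model of a single signalling step:
#   transmitter (T) + receptor (R)  <->  bound complex (C)
# Rate constants and initial concentrations are made up for illustration.
k_on, k_off = 5.0, 1.0            # binding / unbinding rates (arbitrary units)

def kinetics(state, t):
    T, R, C = state
    bind   = k_on * T * R
    unbind = k_off * C
    return [-bind + unbind, -bind + unbind, bind - unbind]

t = np.linspace(0.0, 2.0, 200)                  # 2 "seconds" of simulated time
trajectory = odeint(kinetics, [1.0, 0.5, 0.0], t)
print("bound receptor fraction at the end:", trajectory[-1, 2] / 0.5)
```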

We are currently, my friend, at a cusp: a cusp of a new area of exploratory neuroscience, and the K computer has shown that supercomputers are integral to this area of medical science research.


Regards,
Kenji
 
Right now we, to a limited degree, utilize computer models that enable exploration of the information processing done by the signalling pathways in synaptic activity in neurons of the striatum, hippocampus and cerebellum.

Yeah, artificial neural networks mimic the hippocampus. The algorithm is based on a very simple rule, Hebbian learning, as you know. One can approximate essentially any continuous function with a multilayer feedforward neural network. It has a huge learning potential.
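As a tiny illustration of that approximation ability, here is a two-layer feedforward network fitted to a quadratic with plain NumPy gradient descent (layer sizes, learning rate, and step count are arbitrary):

```python
import numpy as np

# Fit y = x**2 on [-1, 1] with one hidden layer of tanh units.
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200).reshape(-1, 1)
y = x ** 2

hidden = 16
W1 = rng.normal(0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    h = np.tanh(x @ W1 + b1)              # hidden layer
    pred = h @ W2 + b2                    # linear output
    err = pred - y
    # backpropagation by hand
    grad_W2 = h.T @ err / len(x);  grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    grad_W1 = x.T @ dh / len(x);   grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("mean squared error:", float(np.mean((pred - y) ** 2)))   # should end up small
```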

However, the prefrontal cortex is the real mystery. It is the place that makes us who we are, yet synaptic plasticity is very chaotic in that region. As far as I know, we don't have any algorithm at hand that mimics how the prefrontal cortex works.

Besides, there are still some receptors which are very mysterious. There are a lot of events happening inside a synaptic cleft within milliseconds. Hence the communication between neurons is still largely an unknown phenomenon.

If we can find out how the prefrontal cortex works and identify the mysteries of the synaptic cleft, we will have a much better chance to mimic it. If those issues are solved, we can mimic that activity in computers. Besides, we will have many targeted medicines for common and uncommon mental diseases like depression, schizophrenia, or even dementia and Parkinson's, with very few side effects, unlike the current medications. So we will hit two birds with one stone. Nope, actually three. Chemical imbalance in the brain can also create a lot of discomfort in perfectly healthy people. I mean, sometimes you procrastinate for no obvious reason. Or when you don't get a good rest, your memory doesn't work that well. Or maybe we can be perfectly healthy but genetically angry. We could optimize these chemical imbalances with drugs that would be used by healthy people. I mean, everyone could really experience life fully as the perfect version of themselves.

What's your opinion on this? I'm really curious.

We could even modify the BDNF gene to create people like in the movie Limitless. Since BDNF triggers neurogenesis, we could enhance our learning capability enormously. We could have a memory like a recorder. Do I sound like a mad scientist?
 
Besides, there are still some receptors which are very mysterious. There are a lot of events happening inside a synaptic cleft within milliseconds. Hence the communication between neurons is still largely an unknown phenomenon.

If we can find out how the prefrontal cortex works and identify the mysteries of the synaptic cleft, we will have a much better chance to mimic it. If those issues are solved, we can mimic that activity in computers. Besides, we will have many targeted medicines for common and uncommon mental diseases like depression, schizophrenia, or even dementia and Parkinson's, with very few side effects, unlike the current medications. So we will hit two birds with one stone. Nope, actually three. Chemical imbalance in the brain can also create a lot of discomfort in perfectly healthy people. I mean, sometimes you procrastinate for no obvious reason. Or when you don't get a good rest, your memory doesn't work that well. Or maybe we can be perfectly healthy but genetically angry. We could optimize these chemical imbalances with drugs that would be used by healthy people. I mean, everyone could really experience life fully as the perfect version of themselves.

What's your opinion on this? I'm really curious.

Well, yes, you hit it right. There are inherent connections in the frontal lobe that form the vital feedforward and feedback circuits at the center of prefrontal information processing. In fact the PFC, through its extensive association connections, is linked with distant and broadly dispersed parts of the association and limbic cortices. Specifically, we see important subcortical linkages that extend prefrontal neural systems into the pons, midbrain, hypothalamus and amygdala. This is important in the integration of higher-order brain functions such as visceral, autonomic, and emotional functions. I suppose, @Lure, this represents the multimodal association of the PFC. Hahaha, an area that was considered an 'unknown' phenomenon, but with new technologies, I suppose we can begin to unravel it.

Mad scientist? No! Definitely not, but an intellectual no doubt! I am so glad to have you back here, my friend. Being able to engage in academic dialogue with you here, among many others, is a great testament to PDF.

Btw, interesting that you mentioned depression among other behavioral-cognitive disorders. Perhaps we can discuss this in another thread. I don't know if there is a Psychology, Neuroscience, Psychiatry thread here on PDF, but we should consider creating one. :)

Regards,
Kenji
 
Btw, interesting that you mentioned depression among other behavioral-cognitive disorders. Perhaps we can discuss this in another thread. I don't know if there is a Psychology, Neuroscience, Psychiatry thread here on PDF, but we should consider creating one.

Yep, we can discuss it. I agree with you; we definitely need a thread for psychological issues. Also for neuroscience, as you have said, it's like Christmas: every day there is a new and very exciting development. We can definitely find a lot of content.
 
Ah, that is an interesting point, @Lure! Are you subscribed to PubMed? If so, I think you will like this:

Conditional gene targeting in the mouse nervous system: Insights into brain function and diseases. - PubMed - NCBI

Yeah, I've been registered for some time now. When I was at university I was interested in artificial intelligence; my graduation project was on the topic. That's when I discovered PubMed. Since then I take a look at it whenever I use a drug, etc. It's a very good source.

What you've posted is quite useful for genetic research, especially this part: "Here we review Cre mouse lines that have been developed to target either the entire brain, selected brain areas, or specific neuronal populations". That kind of flexibility is surely desired in research. I hope in the future we will have such flexibility in pharmacology. Clearly, drugs that are active on certain parts of the brain and inactive on other parts would be great.

For the genetic research part, I guess they will target the entire 5000 genes. However, there are some very good targets that could be prioritized.

p11 gene : p11 mediates the BDNF-protective effects in dendritic outgrowth and spine formation in B27-deprived primary hippocampal cells. - PubMed - NCBI

You could delete certain traumatic memories from the hippocampus. This might be a very good antidepressant. Besides, the reverse is also true: if you stop the expression of this gene in certain neural populations, you will lose neural plasticity in those areas of the hippocampus, meaning you don't forget what you've learned.

GDF11 protein : Vascular and Neurogenic Rejuvenation of the Aging Mouse Brain by Young Systemic Factors | Science

"Further, we show that GDF11 alone can improve the cerebral vasculature and enhance neurogenesis."

Dihexa : Prospective Alzheimer’s drug builds new brain cell connections - WSU News WSU News | Washington State University
Evaluation of metabolically stabilized angiotensin IV analogs as pro-cognitive/anti-dementia agents

A very interesting molecule. It triggers neurogenesis 7-fold more than BDNF.

"As can been seen in Figures 4A, B, &C, dihexa at its highest dose significantly (p<.001) increased the time spent in the target quadrant when compared to the scopolamine impaired groups regardless of the delivery method employed and was not different from non-scopolamine treated controls (p>.05)."


"Typically about 50% of rats exhibit impaired performance in the water maze when compared to 3 month old rats (Zeng et al., 2012). As such, we evaluated the ability of orally delivered dihexa (2mg/kg/daily) to impact water maze learning in 24 month old Sprague Dawley rats of mixed gender. As expected the results shown in Figure 5 indicate that dihexa significantly improved performance (p<.05) on most of the test days. It should be noted that because these aged rats were not pre-screened for cognitive deficits the results substantially underestimate the effect of dihexa. The expectation that only half of the untreated rats would be effective learners even without dihexa treatment likely contributed to the high variability in escape latencies seen with the untreated group."
 
Well, Turkey is a secular country by definition. But if you mean a "Muslim-majority" country, then ITU (Istanbul Technical University) has a supercomputer for academic purposes which is on the list.

Istanbul Technical University (ITU) | TOP500 Supercomputer Sites
Did you make it, or did you buy it?

If we're talking about buying, my friend, Iraq was the first country to buy and use supercomputers in the seventies! Look where we were and where we are now because of dictators.
 
