The growing need for efficient data processing and raw computational power has given rise to supercomputers.
What is a supercomputer?
A supercomputer is a computer that performs at or near the highest operational rate currently achievable. Traditionally, supercomputers have been used for scientific and engineering applications that must handle very large databases, perform a great amount of computation, or both.
Traditional supercomputers consist of large numbers of interconnected processors installed at a single site and tasked with solving specific computational problems. Well-known examples include:
- Jaguar, which is located at the Department of Energy’s Oak Ridge Leadership Computing Facility in Tennessee.
- Nebulae, located at the newly built National Supercomputing Centre in Shenzhen, China.
- Kraken, situated at the National Institute for Computational Sciences (NICS). The NICS is a partnership between the University of Tennessee and Oak Ridge National Lab.
With the emergence of decentralized technology, the design and installation of supercomputers appear to have shifted gears.
However, according to Sergey Ponomarev, the founder of Supercomputer Organized by Network Mining (SONM), from a technical point of view the term "decentralized computer" does not exist; it is just a marketing ploy.
Ponomarev notes that the correct technical term would be a global, decentralized operating system. The first such system, he says, was Multics, whose development began in 1964. It allowed a supercomputer to be assembled incrementally - new parts could be attached to an already running system without a restart.
Ponomarev claims that his company does essentially the same thing, except that instead of manually installing computer hardware, it organizes a network cluster.
“SONM is, so to say, a global network such as the Internet, but one actually managed by some global operating system,” he says.
Ponomarev further describes the decentralized supercomputer as fog computing, where the fog exists as a single whole. SONM - like Golem, Ayeks, BOINC and others - is building a global operating system that would help users get close to this fog.
Ponomarev tells Cointelegraph:
“Fog computing can solve some of the most challenging tasks of humanity by joining the powers of personal computers, laptops and even smartphones. Scientific calculations of any difficulty can be performed quite fast due to the opportunities fog computing provides.”
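The idea behind joining the power of many personal devices is that a large calculation can be split into independent chunks, each handed to whatever machine is available. The sketch below illustrates this with Python's `multiprocessing` pool standing in for a network of heterogeneous devices; the prime-counting task and the chunking scheme are illustrative assumptions, not SONM's actual protocol.

```python
from multiprocessing import Pool

def check_range(bounds):
    """Count primes in [lo, hi) by trial division -- a stand-in for
    any embarrassingly parallel scientific workload."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def fog_count_primes(limit, workers=4):
    """Split the problem into independent chunks, one per 'device',
    then merge the partial results."""
    step = limit // workers
    chunks = [(i * step, limit if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(check_range, chunks))

if __name__ == "__main__":
    print(fog_count_primes(10_000))  # 1229 primes below 10,000
```

In a real fog network the workers would be remote phones and laptops reached over the Internet rather than local processes, but the decomposition-and-merge pattern is the same.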
Solutions offered by decentralized supercomputers include abundant processing power, uninterrupted uptime and economic incentives, among other features.
Christopher Franko, founder at Expanse.tech, tells Cointelegraph:
“The one peculiar solution that really stands out is uptime. Decentralized systems have 100 percent uptime. So imagine an always accessible supercomputer that you could input data to and get the output at any time all the time.”
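"100 percent uptime" is an idealization, but the mechanism behind the claim is redundancy: a task submitted to a decentralized network can be routed to any of many replicas, so the failure of individual nodes does not take the service down. A minimal failover sketch, with simulated node outages (the node model and failure rate are assumptions for illustration):

```python
import random

random.seed(1)  # fixed seed so the simulated outages are reproducible

def run_on_node(node_id, task, failure_rate=0.3):
    """Simulated network node: may be offline, otherwise runs the task."""
    if random.random() < failure_rate:
        raise ConnectionError(f"node {node_id} unreachable")
    return task()

def submit_with_failover(task, nodes):
    """Try replicas in turn; the service stays up as long as any node does."""
    for node_id in nodes:
        try:
            return run_on_node(node_id, task)
        except ConnectionError:
            continue  # this replica is down -- fall through to the next
    raise RuntimeError("all replicas unreachable")

result = submit_with_failover(lambda: sum(range(100)), nodes=[1, 2, 3, 4, 5])
print(result)  # 4950 -- node 1 happens to fail, node 2 answers
```

The probability that every replica is down at once shrinks exponentially with the number of nodes, which is what makes the "always accessible" claim plausible in practice.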
Features and benefits
Franko acknowledges incentivization as a very important feature of decentralized supercomputers. He notes that people with idle machines could use their machines' downtime to contribute computational power to the decentralized computer and earn money for it.
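The accounting behind such an incentive can be sketched as a ledger that credits each contributor in proportion to the work units their idle machines complete. This is a toy model under assumed names and rates, not the actual pricing mechanism of SONM, Golem or Expanse:

```python
from collections import defaultdict

class ContributionLedger:
    """Toy accounting for a compute marketplace: contributors earn
    credits in proportion to the work units they complete.
    (Illustrative only -- not any real network's pricing model.)"""

    def __init__(self, price_per_unit=0.05):
        self.price_per_unit = price_per_unit  # hypothetical rate, in tokens
        self.units = defaultdict(int)

    def record_work(self, contributor, work_units):
        self.units[contributor] += work_units

    def payout(self, contributor):
        earned = self.units[contributor] * self.price_per_unit
        self.units[contributor] = 0  # settle the balance
        return earned

ledger = ContributionLedger()
ledger.record_work("alice", 120)  # e.g. 120 task-hours of idle CPU time
ledger.record_work("alice", 30)
print(ledger.payout("alice"))  # 7.5 tokens for 150 units at 0.05 each
```

In a blockchain-based network this bookkeeping would live in a smart contract rather than an in-memory dictionary, but the pay-per-work-unit structure is the same.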
He also identifies Bitcoin as a decentralized computer that specializes in one type of computation - value transfer - while Ethereum and Expanse try to take it a step further and give people the ability to do more complex computations. So logically, super-efficient, super-fast, decentralized computation machines are the next step and, for now, are a little out of reach.
Vadim Budaev, co-founder at scorch.ai, expects supercomputers to become a more intrinsic part of the digital technology ecosystem in large-scale data solutions.
“I hope, in a while, decentralized supercomputers will be able to solve not only some tasks for cryptocurrency mining, but also process data sets for AI services, such as photo and video processing and voice recognition. The AI community and our startup need it very much. Unfortunately, for now, there are no efficient algorithms to parallelize these processes, but we hope they will be created.”