Dr. Zoltán Juhász, Associate Professor at the University of Pannonia, supervisor at the Information Technology Sciences Doctoral School of the University of Pannonia, and HPC expert of the Government Agency for IT Development. Fields of research: GPU computing, parallel and distributed systems, parallel algorithms, performance prediction methods, bioelectric signal processing, EEG analysis, brain connectivity.

“These algorithms are driven by human thought.” – interview with Dr. Zoltán Juhász, Associate Professor at the University of Pannonia.

When and how did you get to know supercomputing?

I graduated from the Technical University and then went to work in England, and that is where I got to know parallel computers more closely. They were not yet referred to as supercomputers, but the technology was the same. We used computers with 64 to 128 processors there, while here in Hungary the university had at most one or two processors in simultaneous use, so this was a rather big leap. I worked there as a researcher for two years. The technology we used was referred to as the 'transputer'.

At the end of the 1980s, the information technology of the future was expected to be built on transputers – processors with on-chip memory and serial communication interfaces – manufactured by Inmos, a British microelectronics company based in Bristol. Although transputers did not live up to the hopes attached to them, they had a great impact on information technology.
It was a semiconductor manufacturer that released the chip, but the theory behind it came from a department where I worked as a researcher. It is worth noting that when they started manufacturing these processors, great emphasis was placed on getting the technology across to industry, and Transputer Centres were therefore created at some universities – partly to support related research at the university, and partly to familiarise local businesses and industrial players with this novel technology. This was 30 years ago, and they already knew that manufacturing processors would not be enough by itself. It was a clear objective to enable market players to use them for developing new products as soon as possible.

Has this technology also arrived in Hungary?

In 1992, I tried to build something similar – a transputer-based supercomputer – at my university in Hungary, but the limitations of that technology were clear by then. Cluster computers – many interconnected computers – emerged, and we tried to use them for very fast calculations. For 10 to 15 years after that, I worked on similar research projects: how to build such large computing systems, how to simplify their programming, and how to find ways of using them for scientific and industrial tasks.

What was your first supercomputer work and experience?

I am not a traditional user. In the true sense of the word, a supercomputer user is someone with very large computing tasks that cannot be done in a traditional computing environment. They are mostly representatives of the natural sciences – biologists, physicists, chemists, and pharmaceutical researchers. I am an IT professional working on ever faster calculation methods and algorithms that will help these traditional users.

Is there an alternative to supercomputers?

There were, some 15 years ago. Not for all tasks, but definitely for certain ones. As long as we were working with cluster-based systems, our very objective was to replace the highly expensive supercomputers with cheaper alternatives. We intended to democratise this technology – to move from tightly locked-up supercomputers to computing systems accessible to many. Today, this is cloud computing, accessible to anyone.

What has changed in the meantime?

A big leap came in 2006, when the first programmable NVIDIA graphics cards emerged. This induced very fast development in processor manufacturing. It very soon turned out that these graphics cards provide computing performance that exceeds that of earlier supercomputers or cluster-based distributed environments by orders of magnitude. From that point, there was no doubt that we had to deal with them. This has been so successful that today's supercomputers are fundamentally built on graphics processors – because that is what we need if we want to achieve really high performance. In the past decade, I mostly worked on how to use these graphics cards efficiently – essentially for scientific calculations – and how to program them.

Does this technology need new approaches?

In short: this is a completely different programming methodology, and these tools function very differently from what we are used to. Using a graphics card itself – and especially a supercomputer built from them – requires a different way of thinking.

How would you explain it to a layperson?

In today's supercomputers, the high speed comes from the fact that the machine does many things at once and can perform a lot of partial calculations simultaneously. This requires a program with many subtasks. In the past, a traditional supercomputer had these in the hundreds. In today's machines, a single graphics card contains 5 to 10 thousand compute units, that is, processors. And there will be a lot of these cards – 140 in Komondor. If I assume 1,000 compute units in each, that makes 140,000. Very large supercomputers have several tens of thousands of cards and run several tens of millions of jobs simultaneously. To make use of this performance, this many subtasks must be ready for execution at the same time. This is not easy – writing program code in which millions of processes run simultaneously. Of course, there are repeated patterns and typical jobs that are similar in their data structure or algorithm. Once we get to know one of them, we can reuse the same algorithms and methods when a similar one comes along.
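
As a rough illustration of this decomposition idea – my own sketch, not something from the interview – the Python fragment below splits one large calculation into many independent subtasks that can run at the same time. The function names and the number of subtasks are arbitrary assumptions; on a GPU the same pattern is pushed much further, with thousands of threads per card.

```python
# Minimal sketch of splitting one big job into many independent subtasks
# (illustrative only; the chunking scheme and numbers are assumptions).
from multiprocessing import Pool

def partial_sum(chunk):
    # One independent subtask: each worker handles its own slice of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_subtasks=1000):
    # Split the input into many independent chunks that can be computed
    # simultaneously; combining the partial results gives the final answer.
    chunk_size = max(1, len(data) // n_subtasks)
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool() as pool:                  # workers execute subtasks concurrently
        partials = pool.map(partial_sum, chunks)
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1_000_000))))
```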

Aren’t supercomputers dehumanising science this way?

No. Programmers will keep the task of deciding which jobs are worth parallelising and when it is not worth the effort. Software tools may help and can generate parts of the code automatically, but they cannot tell you which parts. These algorithms are driven by human thought. In many cases, known methods are designed to be executed one step after another and may not be suitable for parallelisation. Then we need to rethink the entire task to find an alternative solution that can be parallelised.

Are there experiments with novel solutions going on?

As long as our thinking is bound by something – whether experience or a preconception – we may not be sure that we are using a new technology properly. There are mathematical theories and methods behind the algorithms. In a number of recent cases, the original basic concepts have turned out to be unsuitable for this degree of parallelism. Mathematicians and physicists are seeking alternative solutions, and when they find them we will be able to increase speed by orders of magnitude.

How did you get involved in Hungary's supercomputer development work?

When the first supercomputer, the Sun, arrived, I was already following developments in Hungary. On the one hand, I knew Dr. Tamás Máray from my university years; on the other hand, I had always worked with parallel programming. From 2005, though, I was predominantly interested in distributed systems. My curiosity about supercomputers was reinforced after 2006, when graphics cards emerged. As far as I know, I was the first at the university to have a GPU-based server with 4 cards. It was clear by then that very big things were coming, and that is why I started to use, research, and develop this technology very intensively.

To what extent do you think the use of supercomputers is challenging?

There are levels to this. If you are lucky, you can solve your task using program code written by others. For example, you need a molecular simulation program. If one exists, you just enter the data and start the program. Anyone with basic computing skills can do this. The difficulties come when such software does not exist or cannot do the thing we need. In such cases, the question is whether the software can be extended with additional functions or whether a brand-new program is needed for the task. That is a serious challenge.

What is the essence of this challenge?

The fundamental mission of a supercomputer is to calculate faster than a traditional computer. Yet at the level of basic principles, a supercomputer is made up of the same components as a laptop. We can only become faster if we are able to use the myriad components of a supercomputer simultaneously and efficiently. When we write bad program code, the losses during execution can be huge, because the machine waits for something instead of computing – for data, or for someone else to finish what they are doing. If internal coordination and communication between the subtasks are not well organised, it may happen that the machine quickly calculates something and then just waits. The challenge is to keep the supercomputer busy constantly, with zero losses, at 100% capacity.
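
The interview does not go into formulas, but a standard way to quantify this kind of loss is Amdahl's law (not mentioned in the original text). The short Python sketch below, with made-up numbers, shows how even a small fraction of waiting or serial work caps the achievable speedup.

```python
# Amdahl's law (standard textbook formula, used here only as an illustration):
# if a fraction s of the work stays serial (the machine waits or computes alone),
# the best speedup on n processors is 1 / (s + (1 - s) / n).
# The fractions and processor count below are illustrative assumptions.

def amdahl_speedup(serial_fraction: float, n_processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

for s in (0.0, 0.01, 0.05):
    print(f"serial fraction {s:.0%}: speedup on 1000 processors "
          f"= {amdahl_speedup(s, 1000):.1f}x")
# Even 1% of serial/waiting time limits 1000 processors to roughly a 91x speedup.
```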

How would you describe the evolution of supercomputers?

Uninterrupted – performance keeps rising constantly. First these machines did a hundred thousand operations per second, then we reached one million, one billion... The latest level is supercomputers in the exaflops range, and the whole world is trying to break through this performance barrier. But technological leaps also occur along the way: one technology prevails for about a decade, then comes a leap, and the supercomputer industry evolves in that direction until the next one. The last such leap was the emergence of graphics processors; this development direction has been with us since 2010 and will remain predominant for a good while yet.

Does it also affect you?

When such leaps occur, software written earlier often cannot be used any more. These programs must be rewritten, which takes a lot of money and a lot of time. But it is also true that researchers and industrial developers will then use the rewritten software for the next decade, so the return on investment is guaranteed.

How difficult is it to follow these leaps?

Following it closely is difficult even for a professional. This technology develops at such a speed that even we can hardly keep up with it. But most users are not developers and are not much affected by this.

What is the question you are asked most frequently as an HPC expert?

If someone has never worked with a supercomputer, the basic question is whether they should use one at all, or whether they should instead build an infrastructure within their own research or corporate environment to run their jobs on. Another practical question arises when someone has a calculation job, wants it to run faster, and would therefore move it to a supercomputer. How should they start? If they need to write a program, which programming language should they write it in? And within a language there are a number of alternative technologies, so which is the best one to use? A plethora of technical questions has to be discussed with a potential user, and everything always depends on the user and the task in question. What we can help with in any case is identifying which directions are worth following and which are not, when there are fifty potential directions. We can spare them a rather tiresome period that is not free from disappointments either.

What are the most important tasks of the Competence Centre?

One of them is being able to give advice on such problems to existing and future users. Another is the hinterland – raising the general level of HPC skills among researchers and users and in education. Coordinating all this is a key task of the Competence Centre. It is indispensable to establish cooperation among Hungarian professionals and to develop this field so that there is a continued supply of professionals – so that new generations also master this technology and do it better than we do.

Should supercomputing be integrated into higher education?

By all means, yes. In the past, supercomputer centres were hosted by universities – but in the United States, most of them are hosted by public research institutions. These machines have become so expensive that universities cannot be expected to operate them. Funding and operation are government duties, so the current operational model is good. However, the use of supercomputers should be brought to the universities. This is partly happening already – scores of research teams and research institutes use supercomputers on a daily basis, but students do not do so that much yet. Here is an opportunity for a big step forward: bringing supercomputers into the classroom to ensure that they are actively used by as many engineering and science students as possible.

How can we achieve this?

Students should be granted easier access. Theirs is a fundamentally different type of use: researchers use supercomputers for relatively long periods and for big tasks, whereas students want to use them in class, so they need immediate access and will run small jobs. The goal is to teach them how to use these technologies as users, not to make them solve problems of a size nobody could before. At the very least, they should have a theoretical picture of why supercomputers are so fast – for example, they should learn the basic principles of parallelisation.

Do you sometimes also keep in touch after the joint work?

By all means, yes. It has happened that I was asked to give advice when we already had two years of joint research work behind us.

How do you see the future of supercomputers?

The future is doing fine, thank you. It has been a long time since the field was this exciting – the speed of development is amazing. Fields of application that are taking the evolution of supercomputers in new directions have emerged; these include artificial intelligence, materials science, and data analysis. This shift away from classical scientific simulations brings these systems closer to average users. This evolution will not stop, we will see and hear wonderful things in the next decade, and whoever misses out will inevitably fall behind. Global competition is sharp, and the question of who will have the leading role in supercomputing is still open. There are aspirants in Asia, there is the US, and the EU is also planning to install two exascale supercomputers soon. But I don't expect a technological leap in the near future – semiconductor manufacturing has just reached its limits, and this brings a slowdown. Of course, there are definitely a few more tricks up people's sleeves. I don't see revolutionary leaps coming, though – but of course I would be happy to be proved wrong, and let it all evolve as quickly as possible.