Dr. Imre Szeberényi, Associate Professor at the Budapest University of Technology and Economics, Deputy R&D Director of the IT Centre for Public Administration. Supervisor at the Information Technology Sciences Doctoral School, lecturer at the Electrical Engineering Sciences Doctoral School, and HPC expert of the Government Agency for IT Development. Independent impact factor of all his scientific publications and works: 134. Field of research: distributed and parallel systems, Grid & Cloud.

“... great capacities may inspire methods we didn’t even dare to dream of before.” – interview with Dr. Imre Szeberényi, Associate Professor at the Budapest University of Technology and Economics.

When and how did you get to know supercomputing?

I graduated from the Budapest University of Technology and Economics as an electrical engineer in 1983 and started to work there as a researcher. As a fresh postgraduate research fellow, I could immediately join a number of research and education development projects in which I could put my skills with the then-novel UNIX systems to use. In 1992, a project to build a supercomputing system on the existing inter-university network was launched under the framework of a FEFA (Catching Up with European Higher Education Fund) higher education development programme. I had my first encounter with a real supercomputer in 1993 at Cornell University, during the trip and subsequent training course supported by this project; at the time, that supercomputer was ranked No. 80 on the TOP500 list. But six years later I had the honour of seeing up close, and using, a 512-processor SP2 machine which had been No. 6 on the list in 1995. As a curiosity, I would also mention that in 2010 I got to see the machine then ranked fourth, in Tokyo.

How did you become a supercomputing expert?

The above-mentioned FEFA project was a great push because I could work with the experts of the participating universities (“Eötvös Loránd” University of Sciences, Budapest University of Economics, Technical University of Budapest, and “József Attila” University of Sciences). Although the IBM SP1 purchased under the project only became operational in late 1994, I had already started, as my opportunities allowed, to use and popularise at the Technical University of Budapest the PVM (Parallel Virtual Machine) system I had come to know at Cornell. In this period, around 1993, an increasing number of UNIX workstations appeared at the departments of the university. Almost all departments had one or two, and they were mostly used only during the daytime. Thanks to my knowledge of UNIX systems, I often helped with the operation of these machines, so there was hardly a UNIX computer at the university whose “owner” I didn’t know. It was therefore only natural for me to try the PVM system on them, as it is superb for distributed, heterogeneous environments – it is no coincidence that PVM clusters are still known as the “poor man’s supercomputer”. By the autumn of 1993, I had a “system” of 10 to 12 machines, which included a VAX-11/750 supermini and HP and SUN graphics workstations. It was a very exciting period. I started to look for tasks that could be solved with the SP1 machine that was on its way.
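Purely as an illustration – not a reconstruction of any specific program from that period – a minimal master/worker sketch using the classic PVM 3 C interface might look like the following; the “worker” executable name, the message tags, and the toy squaring job are all hypothetical.

```c
/* master.c -- minimal PVM 3 master/worker sketch (illustrative only).
 * Spawns a few "worker" tasks on whatever hosts are in the virtual
 * machine, sends each an integer, and collects the results.
 * Compile against libpvm3, e.g.: cc master.c -lpvm3 -o master
 */
#include <stdio.h>
#include "pvm3.h"

#define NTASK   4
#define TAG_JOB 1
#define TAG_RES 2

int main(void)
{
    int tids[NTASK];
    int i, n, value, result;

    /* Enroll this process in the PVM virtual machine. */
    pvm_mytid();

    /* Start NTASK copies of the (hypothetical) "worker" executable
       anywhere in the heterogeneous virtual machine. */
    n = pvm_spawn("worker", (char **)0, PvmTaskDefault, "", NTASK, tids);

    for (i = 0; i < n; i++) {           /* hand out one job per worker */
        value = i + 1;
        pvm_initsend(PvmDataDefault);   /* portable encoding, so it works
                                           across different architectures */
        pvm_pkint(&value, 1, 1);
        pvm_send(tids[i], TAG_JOB);
    }

    for (i = 0; i < n; i++) {           /* gather results in any order */
        pvm_recv(-1, TAG_RES);
        pvm_upkint(&result, 1, 1);
        printf("got result: %d\n", result);
    }

    pvm_exit();                         /* leave the virtual machine */
    return 0;
}
```

The matching worker would simply receive the integer from its parent (pvm_parent()), compute its result, pack it, and send it back with TAG_RES before calling pvm_exit(). The appeal in a mixed departmental environment is that the default encoding lets, say, a VAX and a SUN workstation exchange data despite their different architectures.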

In 1995, I had the opportunity to spend a month at the University of Westminster. There, I learnt about parallelisation methods and the key associated algorithms, partly from my colleagues there and partly by “burying myself” in the university library.

What was your first supercomputer work or project?

Around 1994, Gábor Domokos – who later became famous as the father of the Gömböc – contacted me in connection with my letter reporting on the results of the FEFA project, and told me that the performance of his laptop had proved insufficient for a research task; let’s see what can be done. Of course, I first needed to understand the problem. After many consultations, the PVM program delivering the first solution was ready, and it was subsequently used many times for new tasks and journal articles. The relationship turned out so well that Gábor later became my PhD supervisor.

How did you join the community of Hungarian supercomputer users?

I worked with Gábor Domokos on several projects, so I became a user too. The SP1 installed at “Eötvös Loránd” University of Sciences became obsolete, and there was no funding for extending the 8-processor system, but luckily the first real supercomputer of the academic community arrived in 2001, and I myself also ran certain jobs on it. In the early 2000s, grid development projects aimed at combining computing capacities were also launched in Hungary, and these were further strengthened by the EU-supported CERN projects. I took part in a number of such projects, in which I tried to make use of my experience with supercomputers.

In 2010, the Institute for National Information Infrastructure Development (NIIFI) – the legal predecessor of KIFU – was able to allocate a considerable budget for the purchase of supercomputers thanks to the TIOP and KMOP projects. If I remember correctly, more than 1 billion HUF was available. Dr. Tamás Máray, who had been a student of mine and later became a colleague, was the Deputy Director of NIIFI back then, and he asked me to join the project as a technical expert. I gathered a lot of experience in this role, which I could put to good use when specifying and procuring Superman – the supercomputer of the Technical University of Budapest – in 2012. This machine, a 360-core cluster, is still operational at the university. We created its software environment with the help of a highly enthusiastic group of students.

Do supercomputers have a community-building effect?

Of course! The Technical University of Budapest has a user community, as almost all faculties have used the university’s supercomputers for various research purposes. The so-called responsible persons of each faculty keep in touch via a dedicated mailing list, and we also had regular meetings between 2012 and 2019. We met in person for the last time in 2019 – one reason is that the machine has become obsolete beyond measure and no further extension of the warranty could be obtained from the following year onwards, and the other is the pandemic. To pass on the information related to KIFU’s new supercomputer, I wish to revive and shake up this community, because exchanging experiences can be exceptionally inspiring.

Are you currently working on some supercomputer projects?

R&D projects to which I could also allocate resources – I don’t have any at the moment. My main research area is distributed and parallel systems, which also covers supercomputers. In connection with supercomputers, I am the person responsible for, and a lecturer in, one MSc specialisation. I strive to give my students an up-to-date picture of the associated technologies and of the software and hardware tools.

What is the reason behind the amazing speed at which supercomputing advances?

Supercomputers are tools which enable us to carry out calculations and simulations of ever-increasing complexity. Performance needs are constantly rising for two reasons: on the one hand, researchers wish to tackle tasks of increasing size and complexity; on the other hand, they need solutions that are richer in detail. Increasing performance, however, is something of a vicious circle, as greater performance opens up further opportunities, which in turn drive further needs. A good example of this: when we experimented with our algorithm in the environment of the Technical University of Budapest, we were happy to compute systems with 3 to 4 degrees of freedom within a couple of days. When the same algorithm was run at Cornell University, where a 512-processor IBM SP2 was made available to us, we already grew impatient after a couple of hours when running it for systems with 9 to 11 independent variables.

How can your results be utilised?

Chiefly I would mention research projects related to solving boundary value problems. Several engineering problems can be described by ordinary differential equations; the traditional way of solving them is the so-called path planning approach, which is simple but will not necessarily find all solutions, unlike the parallel scanning method we developed. The latter, however, is highly computation-intensive, which makes solving such tasks hopeless without supercomputers. Among other things, this method was used in designing the manufacturing process of flexible fibres in connection with my research at Princeton University. One curiosity of that project is that the researchers there were able to confirm our theoretical results through experimental work. Another exciting task was a research project with Cornell University, in which we performed calculations related to the equilibrium states of liquid bridges and to mechanical models of DNA molecules.
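As a purely illustrative sketch of the scanning idea – with an assumed toy equation, not the actual algorithm or equations from those projects – one can sample the unknown initial slope of a two-point boundary value problem on a grid, integrate each candidate trajectory independently, and flag sign changes of the boundary residual. Unlike a single run of a traditional solver, the scan can bracket every solution in the sampled range, and because each sample is independent, the loop parallelises trivially across supercomputer nodes.

```c
/* scan_bvp.c -- illustrative "scanning" sketch for a two-point boundary
 * value problem (not the authors' actual algorithm).
 *
 * Assumed toy problem:  y'' = -sin(y),  y(0) = 0,  y(10) = 0.
 * The unknown initial slope s = y'(0) is sampled on a grid; each sample
 * is integrated independently with RK4 and the residual r(s) = y(10; s)
 * is recorded. A sign change of r between neighbouring samples brackets
 * a solution.
 */
#include <stdio.h>
#include <math.h>

#define XEND   10.0     /* length of the interval           */
#define NSTEP  2000     /* RK4 steps per trajectory         */
#define NSCAN  4000     /* number of sampled initial slopes */
#define SMIN  -3.0
#define SMAX   3.0

/* Integrate y'' = -sin(y) from x = 0 to x = XEND with y(0)=0, y'(0)=s;
   return y(XEND). */
static double shoot(double s)
{
    double h = XEND / NSTEP, y = 0.0, v = s;
    for (int i = 0; i < NSTEP; i++) {
        /* classical RK4 for the system y' = v, v' = -sin(y) */
        double k1y = v,             k1v = -sin(y);
        double k2y = v + 0.5*h*k1v, k2v = -sin(y + 0.5*h*k1y);
        double k3y = v + 0.5*h*k2v, k3v = -sin(y + 0.5*h*k2y);
        double k4y = v + h*k3v,     k4v = -sin(y + h*k3y);
        y += h/6.0 * (k1y + 2*k2y + 2*k3y + k4y);
        v += h/6.0 * (k1v + 2*k2v + 2*k3v + k4v);
    }
    return y;
}

int main(void)
{
    double prev = shoot(SMIN);
    for (int i = 1; i <= NSCAN; i++) {      /* embarrassingly parallel loop */
        double s = SMIN + (SMAX - SMIN) * i / NSCAN;
        double cur = shoot(s);
        if (prev * cur < 0.0)               /* sign change: solution nearby */
            printf("solution bracketed near y'(0) = %g\n", s);
        prev = cur;
    }
    return 0;
}
```

In practice, the outer loop is what would be distributed across many processors (for instance with PVM or MPI) and the bracketed intervals would then be refined; with more independent variables, the number of samples grows combinatorially, which is exactly why such scanning is hopeless without a supercomputer.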

What is the greatest benefit of supercomputing?

Directly, the fact that we can compute more accurate or larger models. And indirectly, the fact that great capacities may inspire methods we didn’t even dare to dream of before. Each new tool creates new methods and opportunities, and this is especially true for supercomputers.

To what extent do you notice the development of supercomputers?

Very much. Just look at the TOP500 list – progress is almost uninterrupted, and the maximum performance nearly doubles every 13 months. When Superman arrived at the university in 2012, what surprised everyone was that the new machine’s performance would still have earned a place at the bottom of the TOP500 list only a few years earlier. We were indeed very proud of it, but by now it has become completely obsolete. Another example of technological development: a germanium transistor from the 1960s was visible to the naked eye, whereas today we have manufacturing processes with a 5-nanometre resolution, which is about one twenty-thousandth of the diameter of a human hair.

Isn’t it difficult to keep up with this pace?

I would rather say that it’s exciting and labour-intensive. Training materials need to be updated after every semester, and staying up to date means a lot of work.

To what extent do you think the use of supercomputers is challenging?

For those accustomed to command-line applications, it isn’t challenging. However, it is a bigger task for those who “grew up” in a graphical environment and lack such skills. That is why the support tools and services we can provide around the supercomputers are so important. There are several types of supercomputers, so you need some kind of systematisation: you need a theoretical basis, then you need to get to know the tools, methods, and algorithms, and the most common programming languages also need to be presented.

From which fields of science do people most often ask for your advice?

Physicists and chemical engineers are the most active users, but the number of fields of application grows faster than the available capacities. People are increasingly discovering and making use of the opportunities supercomputers may offer.

What experiences have you gathered as an external HPC expert of KIFU?

Until 2022 I worked as an expert for KIFU. During this time I had the opportunity to get to know the most advanced tools available in Hungary and to show them to others.

How can supercomputers be used effectively?

The most important and most difficult thing is to identify a problem which genuinely requires a supercomputer and then to make the experts of that problem aware of the supercomputer’s capabilities. It may be a complex industrial design job, or a multi-variable problem such as weather forecasting, which is highly computation-intensive and must be completed within a short period of time. To stick with this example: if we can only predict a few seconds ahead, and that takes hours to compute, then it makes no sense. British mathematician Lewis Fry Richardson first proposed numerical weather forecasting in 1922 and devised a scheme by which the calculations could be carried out faster than the weather itself changes. He would only have needed 64,000 people and a building to accommodate them all, close to one another, so that the couriers handling internal communication could deliver the partial results to the chief meteorologist in time. His estimates suggested that 12 hours would have been sufficient to calculate the data for a one-day forecast. Naturally, Richardson’s dream was not feasible, but he would definitely have used a supercomputer if he had had one.

What do you think is the most important task of the Competence Centre?

To identify tasks that fit the supercomputer and to “attract” users. Dissemination is indispensable and of key importance; we need to advertise and explain – however, we should focus not only on supercomputing but on the complex set of services KIFU will provide once the new machine is part of its infrastructure. After all, one can imagine a task which doesn’t necessarily need a supercomputer but can be solved by other services (e.g. cloud). From the academic perspective, it is very important to ensure that these services are readily and freely available to researchers. At the same time, the administration should be simplified and access made easier.

In what fields of science do you think integrating basic supercomputing skills into the curriculum is important?

Everywhere when it comes to use, and in the field of IT when it comes to programming – more precisely, parallel programming, with supercomputer use “smuggled” into it. There are initiatives, but a lot of work still has to be done.

What do you think of the future of supercomputers?

It’s quite easy to foresee who will manufacture supercomputers and with what architecture – the market will shrink to one or two players. Quantum computing is only a “toy” now, but it is becoming increasingly serious; it will take about another twenty years for it to have real practical value. From the user side, I cannot define the limits, since almost all of us carry very serious computing capacity in our pockets in the form of our mobile phones. Twenty years ago, the performance a smartphone has today would have been impressive even for a supercomputer. What is certain is that the succession of new technologies will induce a succession of new methods that we never even thought of because of the lower capacities. Supercomputing is very complicated and inaccessible for everyday people, and they don’t need it even if they get access. However, it is very important for researchers and engineers, because serious computing capacity also sits in the background of, say, an architectural design program. Yet it would never have occurred to anyone to develop such a program if the tools for it had not existed.