Dr. Örs Legeza

Scientific Advisor of the Institute of Solid-State Physics and Optical Sciences, founder of the “Lendület” (‘Impetus’) Research Group of Strongly Correlated Systems at the Wigner Research Centre for Physics, Doctor of the Hungarian Academy of Sciences, and recipient of the Award of the Hungarian Academy of Sciences (2021). A visiting researcher at the Institute of Physics of the Ludwig-Maximilians-Universität München and at the Freie Universität Berlin, a visiting professor at the Philipps-Universität Marburg, a Hans Fischer Senior Fellow at the Technische Universität München, and an HPC expert at KIFÜ. His main research interests focus on the development of tensor network state methods via concepts of quantum information theory, and on their application to strongly correlated quantum many-body systems.

How did you get to know supercomputing?

For me, the decisive moment was when I first saw a computer that was switched on. When I was a child, we had very few computers in Hungary. I was in fifth grade when I came into contact with information technology, and I fell in love with it very quickly – a year later I was leading the relevant study group at school. The road to supercomputers is long because the scope of uses is extremely wide. Ready-made applications exist for certain tasks, while others allow development and extension because the software lets you make different combinations – you can unite functions and generate new ones. The next step is that you start developing them yourself – this is where programming comes in. There is a wide range of options here too – there are low-level and high-level programming languages, and everything depends on how deep you want to dig. The question is whether you reach the point of becoming interested in the hardware too. This is related to the types of tasks you want to solve and to their resource needs – that is to say, the scaling of the tasks to be solved is a very important question. You’ll become an HPC user if you have tasks involving huge and steadily growing amounts of data, or where the evaluation time must be very short because decisions have to be made all the time and cannot wait hours, weeks, or even a year for a solution.

What was your first supercomputer experience?

I attended four years of high school in the US, and also spent my university years there – I came into contact with supercomputers as a student at Ohio State University. There were lectures, and you could see the machines – they were a different size back then; the Crays had a bench, and you could sit on it. But it is one thing to have a supercomputer – the question is what we do with it, what program we use. You need entirely different approaches when programming a machine with one or two cores and one with 20 to 40 – and that is still only a single machine! When these are combined and communicate with each other, a plethora of questions arises. Supercomputers are therefore a very good thing, but you also need applications tailored to them in order to make efficient use of the opportunities provided by the several tens of thousands of computing cores.

What do you use supercomputers for?

I am a quantum physicist, so I focus on the simulation of quantum systems. We would like to understand the mathematical and physical properties of quantum systems, and to invent and design further systems by combining them. These are very serious mathematical models on the boundary between mathematics and physics – applied mathematics on one side, theoretical physics on the other. The simulation of quantum systems requires supercomputers because the associated resource needs show exponential scaling. We want to simulate systems as large as possible in order to understand experimental results or to suggest experiments. When the experimental results are in line with theory, that’s when you arrive at applied research, and then at products that will ultimately be used in everyday life.
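
To see why the scaling is exponential, consider a minimal sketch (an editorial illustration under the standard state-vector picture, not the interviewee’s code): storing the full quantum state of N spin-1/2 particles requires one complex amplitude per basis state, i.e. 2^N numbers.

```python
# Minimal sketch (illustrative, not from the interview): memory required to
# store the full state vector of N spin-1/2 particles, with one complex
# double (16 bytes) per basis state. The 2**N growth is what pushes these
# simulations onto supercomputers.

BYTES_PER_AMPLITUDE = 16  # complex128: two 8-byte floats

for n in (30, 40, 50):
    dim = 2 ** n                               # Hilbert space dimension
    gib = dim * BYTES_PER_AMPLITUDE / 2 ** 30  # memory in GiB
    print(f"N = {n}: {dim:.2e} amplitudes, ~{gib:,.0f} GiB")
```

Already at N = 50 the state vector alone would need roughly 16 million GiB – far beyond any single machine, which is why well-scaling approximate algorithms (discussed below) matter so much.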

How would you sum it up for a layperson?

The point is that particles also behave as waves, and at very small dimensions they overlap, that is, they are no longer independent of each other, and quantum physics approaches become necessary to describe them. Due to the interactions between the particles, you can confer very different properties on them by modifying and adjusting certain conditions. Electrons behave very differently in such an interacting medium – these behaviours include superconductivity, something I have wished to understand for a very long time. Superconductors conduct current without resistance below a certain temperature, and the basic objective is to raise that temperature.

How can your results be utilised?

I develop supercomputer algorithms that can simulate quantum systems. The goal is to make ourselves capable of recommending the production of materials that can be used in everyday life – such as high-temperature superconductors. Another key direction is the realisation of quantum computers – among other things, we use supercomputer-based simulations to understand and improve the operation of quantum computers.

Do you have a project or result that would have been unfeasible without a supercomputer?

All of them. We work on algorithms in which exponential scaling is reduced to polynomial scaling in order to enable classical computers to run quantum system simulations. There are results for which a conventional computer is sufficient, but for the general goal we almost always use supercomputers due to the size and complexity of the problem. I also use my laptop for the calculations and the development work, and then transfer my work to a bigger machine such as the institute’s cluster; if it works fine on that cluster, that’s the time to move to HPC. However, from the small systems I can handle on my own machine, reliable and universal conclusions cannot be drawn, and that’s why the enormous calculations run on supercomputers are necessary.
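
To make the “exponential reduced to polynomial” point concrete, here is a hedged sketch (illustrative numbers and helper names of my own, not the interviewee’s code): a matrix product state, a common tensor network format, with bond dimension D stores roughly N·d·D² parameters instead of the d^N amplitudes of the full state vector.

```python
# Illustrative comparison (helper names are my own, not from the interview):
# parameter count of the full state vector versus a matrix product state
# (MPS) for N spin-1/2 sites (local dimension d = 2) with bond dimension D.

def full_state_params(n: int, d: int = 2) -> int:
    """Amplitudes in the exact state vector: exponential in N."""
    return d ** n

def mps_params(n: int, bond_dim: int, d: int = 2) -> int:
    """Rough MPS parameter count N * d * D**2: linear in N."""
    return n * d * bond_dim ** 2

for n in (50, 100, 1000):
    print(f"N = {n:4d}: full = {full_state_params(n):.2e}, "
          f"MPS (D = 1000) = {mps_params(n, 1000):.2e}")
```

The polynomial side still grows quickly with the bond dimension needed for accuracy, which is exactly where the supercomputer comes in.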

Is there an alternative to supercomputers? Can your tasks be solved in other ways?

Current research focuses on quantum computers, which could provide a similar simulation environment. This is one of our research projects – programming quantum computers – but reaching the exascale range is the most important thing to do now. Although Google achieved quantum supremacy two years ago, this only holds for a very special task that currently has no practical use otherwise. In addition, the stability of these systems is an unresolved problem: they are noisy, and error correction often needs more resources than performing the calculation itself.

Have supercomputers ever surprised you?

No. It is a very long path until one gets to supercomputers. You’ll have learnt a lot of things by then, and this is only another, more specialised field of application, which of course you still need to study. It does not happen overnight; you can only start doing it when you are prepared.

To what extent do you think the use of supercomputers is challenging?

Very challenging. If the software is already written, then it’s relatively easy, but supercomputer-based applications may involve very complex parameterisation. A whole lot of things must be adjusted to use the supercomputer – as a resource – as effectively as possible. This is where you need user experience, which is handed down from one researcher to another. It is also very important to monitor how the application actually made use of the capacities. Someone may use only 20% of the requested resources, or may move so much data that it takes almost as much time as the calculations themselves. These should obviously be improved. If the program can be calibrated, the user can adjust it and exploit the resource more efficiently. This is a big challenge in any case – you need to adapt the task to the resources, and the resources to the task.
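
A hedged sketch of the kind of utilisation check described here, assuming a Slurm-based scheduler (the job ID below is hypothetical): Slurm’s accounting command sacct can report the CPU time a finished job actually consumed (TotalCPU) alongside the CPU time it reserved (CPUTimeRAW), and their ratio is precisely the “used only 20% of the requested resources” figure.

```python
# Hedged sketch, assuming a Slurm-based scheduler: compute the CPU efficiency
# of a finished job from sacct's TotalCPU (CPU time actually consumed) and
# CPUTimeRAW (allocated CPUs x elapsed seconds). The job ID is hypothetical.

import subprocess

def _to_seconds(t: str) -> float:
    """Parse Slurm's [DD-]HH:MM:SS[.fff] time format into seconds."""
    days, _, clock = t.rpartition("-")
    parts = [float(x) for x in clock.split(":")]
    seconds = sum(p * 60 ** i for i, p in enumerate(reversed(parts)))
    return seconds + (int(days) * 86400 if days else 0.0)

def cpu_efficiency(job_id: str) -> float:
    """Fraction of the allocated CPU time the job actually used."""
    line = subprocess.check_output(
        ["sacct", "-j", job_id, "-X", "--noheader", "--parsable2",
         "--format=TotalCPU,CPUTimeRAW"],
        text=True,
    ).splitlines()[0]
    total_cpu, cputime_raw = line.split("|")
    return _to_seconds(total_cpu) / float(cputime_raw)

# Example (hypothetical job ID):
# print(f"{cpu_efficiency('123456'):.0%}")  # -> e.g. "20%"
```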

How fast can someone acquire the necessary skills?

It depends on the type of use. There are simple programs and there are sophisticated ones, and there are those where you can adjust all kinds of parameters – and what’s more, all of them are interrelated in some way. Programming is a separate profession, which you may study for years, even decades, and you also need to stay up to date. New hardware architectures keep coming all the time, as do new pieces of software for them. This is a constantly advancing world.

If your only goal is to acquire user-level skills, how much time will it take?

There are different levels of application, and the question is what the goal is. Those using the applications do not need to know how communication between the processors occurs. But you may always learn new things, and indeed you always have to learn new things. There may be existing algorithms that you can’t directly use for a new set of tasks, so you need to reorganise the way the program runs. The way we subdivide the tasks does matter – it is important to have as little data migration as possible. There are lots of different solutions for optimised data communication: in some places multiple redundancy makes things faster, while elsewhere less emphasis is put on that and data storage and access are optimised in a different manner. Furthermore, different pieces of hardware need different software solutions, so if you want to enter a given system, you will need prior training on that system. And of course, you don’t have to start with the most complicated task, which would have huge resource needs – start with something simpler. Step by step: test yourself on a minor task, fine-tune the settings, and then scale up.
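
As a minimal illustration of subdividing work to minimise data migration (a sketch using mpi4py, not code from the interview): each MPI rank sums only its own slice of a range, and the only communication is a single collective reduction at the end.

```python
# Minimal sketch (illustrative, using mpi4py; not code from the interview):
# each rank owns one contiguous slice of the problem, does purely local work,
# and communicates only once, via a collective reduction.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 10_000_000                 # total problem size
chunk = N // size              # slice owned by each rank
start = rank * chunk
stop = N if rank == size - 1 else start + chunk  # last rank takes remainder

# Local work: no communication while summing the owned slice.
local_sum = np.arange(start, stop, dtype=np.float64).sum()

# One collective operation instead of constant data migration.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum of 0..{N - 1} = {total:.6e}")  # expect N*(N-1)/2

# Run with, e.g.:  mpirun -n 4 python partition_sum.py
```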

For whom would you recommend the use of supercomputers?

For those working on tasks requiring large computing capacities, and for those needing as much computing capacity as possible in as short a time as possible. To use a practical, everyday example: serious cost savings can be achieved by optimising energy flows, and you can simulate that too.

What makes an efficient supercomputer user?

Prior education and training. Supercomputers are an opportunity, but they also come with some responsibility on the part of the user. HPC is extremely expensive, and therefore you need to use it as efficiently as possible. It is very important for users to get acquainted with the documentation and information released about the system, so that they can calibrate their tasks to ensure the most efficient use of the resources. This doesn’t mean they will succeed at once, which is why the monitoring mentioned above is so important. Based on the feedback, they can improve, develop, or reparameterise the task. This is a process that requires continuous learning.

Why did you apply to join the expert teams of the HPC Competence Centre?

In addition to research, a long time ago I also worked as an IT security advisor in industry for about ten years. Then I focused only on research, alongside raising my four children, but now that they are somewhat more grown up, I have free capacity again. I work on supercomputers, I develop algorithms, and I am highly interested in this field. I hope I can make use of my skills and help with the launch of Komondor by adapting international experience – you don’t need to invent everything, good solutions can be adapted, and then you can progress more efficiently and more quickly.

What will be the most interesting task for you?

The scheduling profiles we can create in KIFÜ’s new system, Komondor, and the ways we can make HPC culture and opportunities more attractive to industrial users. I think this is very important, and it requires interactivity between the user and the service provider. In addition, I wish to be an active user of the system and use Komondor to run some of the calculations required for our research projects.

What do you expect from the Competence Centre?

Science today is extremely specialised, but interconnecting complementary skills to solve a problem can create immense potential and opportunities. I have already had discussions with colleagues with HPC expertise that became a great learning opportunity for everyone involved, even though we are active in different fields and have different subtasks. The greatest strength of the Competence Centre may therefore be to develop approaches that shed light on a task and the related opportunities from a great number of angles. This may bring about synergies that result in a much more efficient solution method.

How do you perceive the development of supercomputers?

At our institution, the Wigner Research Centre for Physics, professors of a more advanced age still had their punch-card programs and even punched tapes in their cabinets. In the ’80s, the literature reported on the behaviour of quantum systems of 4, 6, or 8 units – and in highly sophisticated cases 12 units – and predicted from those what would happen in an infinite system. This suddenly jumped to 100 with the algorithm I began working with in ’93–’94. That is, we went from 4 to 12 in 15 years, then from 12 to 100 in 3 years, and later to 1,000 – now I think we are around 10,000 in some cases. When I started off, the research institute’s huge supercomputer had 64 MB of memory. Later, the 500 MB machine was a huge leap, and today’s machines have terabytes. And then there is the number of cores! At first, we worked on computers with a single CPU, which had a much lower frequency than today’s. Now you have several tens of thousands of cores in a single supercomputer – the progress is enormous.

How do you see the future of supercomputers?

The problem with today’s conventional computers is that you need ever smaller dimensions to raise processor speeds further, and that is the scale where quantum physics effects emerge that cannot be eliminated. There is a physical limit below which we cannot go – and parallelisation is the answer to this. I can imagine a technological shift, the use of new materials – graphene-based processors are highly promising, for example. Another of the most important fields of research is spintronics, where you perform operations by manipulating and controlling electron spins. But these are developments in the basic or applied research phase. Many believe that quantum computers will knock out supercomputers, but there is no universal quantum computer. You may develop quantum computers for specific tasks – that is, these computers will be very good in certain cases, but they will not be universal tools like HPC. Parallelisation, however, is hard-coded into quantum computers. At the moment it is not predictable which of them will provide the practical solution, but I don’t believe quantum computers will suddenly pop up everywhere, because there is still a lot of work to be done on them.