Doctoral study program in the field of Computer Science
Supervisors and topics for admission to study
in the academic year 2023/24
Doctoral degree studies at the Faculty of Informatics and Information Technologies STU in Bratislava
prof. Ing. Vanda Benešová, PhD. (vanda_benesova[at]stuba.sk)
Research of new methods of computer vision processing in medical applications using artificial intelligence
Computer vision is becoming increasingly important in the automatic clinical processing of medical visual data, particularly in the processing of radiological and microscopic images.
The most important current research on medical applications of computer vision concerns the diagnosis of various diseases, with the aim of giving the doctor more relevant information as a result of automatic processing or of reducing the doctor's time-consuming tasks.
To meet these objectives, it is necessary to develop new, robust segmentation methods that could be applied, for example, to the segmentation of anomalies, organs, or tumors, and to monitoring their development over time. It is also important to obtain additional information from medical visual data, such as the assessment of malignant tumors in mammographic images, or to extract information from various types of microscopic images.
In recent years, this area of research has been dominated by applied methods of artificial intelligence, especially the use of deep neural networks.
The research in the doctoral studies will be focused on one of these areas.
Keywords: deep learning, computer vision, medical imaging
Research of methods of using deep neural networks in computer vision
Artificial intelligence methods, especially deep learning methods, have brought a significant shift in the success of creating computer vision applications and enabled their use in many areas of real life.
In addition to research aimed at designing new deep neural network architectures and their use in various application domains, it is also important to address their explainability and interpretability (XAI). Related to this is the question of how to incorporate different approaches to interpretability appropriately so that they benefit the target group for which they are relevant, e.g., system developers, domain experts, or end users. Another challenge for the use of deep neural networks in specific applications is the need to create annotations, which is often very time-consuming and requires domain expertise, e.g., in medicine. Therefore, the research focuses on solutions that work with imperfect datasets with limited annotations. Semi-supervised learning (SSL), unsupervised domain adaptation (UDA), and noisy label learning (NLL) are three of the most common problem settings.
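As an illustration of one such setting, the following minimal Python sketch (using scikit-learn; the synthetic data, model, and confidence threshold are illustrative assumptions) shows the pseudo-labelling loop behind many semi-supervised methods:

    # Minimal pseudo-labelling sketch (semi-supervised learning).
    # Assumptions: synthetic data, logistic regression, fixed confidence threshold.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    labeled = np.zeros(len(y), dtype=bool)
    labeled[:100] = True                        # only 100 samples carry annotations

    model = LogisticRegression(max_iter=1000)
    for _ in range(5):                          # a few self-training rounds
        model.fit(X[labeled], y[labeled])
        proba = model.predict_proba(X[~labeled])
        confident = proba.max(axis=1) > 0.95    # keep only confident pseudo-labels
        idx = np.where(~labeled)[0][confident]
        y[idx] = proba[confident].argmax(axis=1)  # assign pseudo-labels
        labeled[idx] = True

    print("samples used for training:", labeled.sum())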
Research during the doctoral study will focus on one of the above areas.
Keywords: deep learning, computer vision, medical imaging, explainable AI (XAI)
Research of Human-centered AI in computer vision applications
Recently, we have been observing massive technological progress in the field of artificial intelligence, especially deep learning. A new research challenge is the task of embedding and integrating the user or domain expert in the iterative design and development of an AI application, similar to the concept of user-centered design (UCD) known from UX. Under the term human in the loop of AI, there are various proposals that reflect this approach.
The creation of annotations and active learning methods are also related to the problem of integrating the user into the AI application design process. Another open research area is AI explainability methods that are appropriate from a user or domain expert perspective. Related to this is the issue of trust in AI and the calibration of trust.
Research during the doctoral studies will focus on one of the mentioned areas.
Keywords: deep learning, UX in AI, Human in the Loop of AI, Human-centered AI
doc. Mgr. Gabriela Czanner, MA, MA, PhD. (gabriela.czanner[at]stuba.sk)
Methodology development for Artificial Intelligence for automated detection of corneal diseases
Keratoconus is a significant cause of visual loss in young adults. It is a progressive, usually bilateral corneal disease, accounting for more than 25% of corneal transplants undertaken in Europe and the United States. The introduction of corneal topography and tomography has improved the ability to diagnose keratoconus by increasing the ability to identify ectatic change at an earlier stage than has previously been possible (Belin et al., Eye Contact Lens. 2014;40(6):326-330). Despite advances in the field of imaging technology, however, measurement imprecision remains an important issue in identifying and discriminating change as a marker of disease progression. The aim of this project is to characterize the shape of the cornea in health and disease, quantify the agreement of measurements, and find the best approach to detect and monitor the disease. Methods used will include statistical predictive modelling, hierarchical modelling (also known as mixed-effect modelling), empirical Bayes, quantification of the uncertainty of AI, and the use of that uncertainty toward more accurate decision making.
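A flavour of the hierarchical (mixed-effect) modelling mentioned above, as a minimal Python sketch with statsmodels; the simulated corneal-curvature data and the model formula are purely illustrative assumptions:

    # Minimal mixed-effect (hierarchical) model sketch for repeated corneal measurements.
    # Assumptions: simulated data; in practice these would be topography/tomography readings.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_eyes, n_visits = 40, 5
    eye = np.repeat(np.arange(n_eyes), n_visits)
    visit = np.tile(np.arange(n_visits), n_eyes)
    eye_effect = rng.normal(0, 1.0, n_eyes)[eye]        # between-eye variability
    curvature = 44 + 0.3 * visit + eye_effect + rng.normal(0, 0.5, len(eye))  # measurement noise

    df = pd.DataFrame({"eye": eye, "visit": visit, "curvature": curvature})

    # A random intercept per eye separates true progression from measurement imprecision.
    model = smf.mixedlm("curvature ~ visit", df, groups=df["eye"])
    result = model.fit()
    print(result.summary())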
Keywords: artificial intelligence, statistical modelling, ophthalmology, explainable AI, clinical decision support
Methodology development of Artificial Intelligence for automated monitoring of progression to cognitive impairment in patients with Alzheimer disease
Alzheimer's disease (AD) causes a decline in cognitive abilities such as memory and concentration. Such cognitive changes contribute to the loss of the ability to perform basic activities of everyday life independently. AD causes dementia symptoms that become worse over time and are irreversible. Alzheimer patients face the risk of progressing to mild cognitive impairment (MCI) and later to dementia. While dementia is not treatable, there are new promising therapies that can slow down the progression of patients who have not yet reached mild cognitive impairment. Therefore, for cognitively normal Alzheimer patients it is important to predict the time of progression to mild cognitive impairment, as well as to express the AI's certainty or uncertainty. This project will use longitudinal data and face several challenges, such as many potential predictors, unequal time between visits, and missing data. The project will propose and develop methods at the intersection of deep neural networks and longitudinal statistical predictive modelling to create automated monitoring that is explainable. The uncertainty of AI will be investigated and quantified. Methods will then be developed to use the AI uncertainty toward more accurate decision making.
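One possible framing of the prediction task, shown as a minimal Python sketch with the lifelines package (an assumption, as are the simulated data and covariates): a Cox proportional hazards model of time to progression, reporting hazard ratios with confidence intervals as a simple expression of uncertainty:

    # Minimal time-to-progression sketch with a Cox proportional hazards model.
    # Assumptions: simulated baseline covariates; real data would be longitudinal clinical visits.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 300
    age = rng.normal(72, 6, n)
    memory_score = rng.normal(0, 1, n)
    risk = 0.04 * (age - 72) - 0.8 * memory_score
    time_to_mci = rng.exponential(scale=np.exp(-risk) * 5)     # years until progression
    observed = rng.uniform(0, 1, n) > 0.3                       # some patients are censored
    time = np.where(observed, time_to_mci, time_to_mci * rng.uniform(0.2, 1.0, n))

    df = pd.DataFrame({"age": age, "memory_score": memory_score,
                       "time": time, "progressed": observed.astype(int)})

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="progressed")
    cph.print_summary()                    # hazard ratios with confidence intervals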
Keywords: artificial intelligence, statistical modelling, ophthalmology, explainable AI, clinical decision support
prof. Ing. Pavel Čičák, PhD. (pavel.cicak[at]stuba.sk)
Communication in the environment of Internet of Things
The Internet of Things represents a complicated structure of non-homogeneous components with various levels of intelligence. For that reason, their interaction and communication constitute a separate level of complexity and a specific architecture. The topic deals with research in a chosen part of this interaction, including communication in computer networks. This covers formal description methods for communication devices and protocols. The study is mainly focused on mobile communication devices and systems. Security aspects at various levels are included.
Keywords: Internet of Things, communication in the Internet of Things, communication methods, protocol properties, mobile communications
Control systems design in the environment of Internet of Things
The Internet of Things (IoT) represents a complicated structure of non-homogeneous components with various levels of intelligence and different types of application assignments. The controlling elements of these systems require various approaches to the design of hardware as well as software architecture and technical implementation, including low-power elements. The topic deals with research in a selected domain of methods to support the automated design of controlling systems and subsystems of this type. An integral part of the design is also the solution of security, reliability, diagnostics, and communication in the IoT environment.
Keywords: architecture of control units and systems, reliability and diagnostics, security of communication
Increasing the Internet of Things efficiency
The Internet of Things (IoT) represents an environment consisting of a great number of intercommunicating heterogeneous nodes, which are often limited in terms of their physical reachability, performance, or energy. This is especially true for the end nodes of sensor networks, which are an integral part of the Internet of Things. These nodes are usually powered by batteries or by harvesting energy from the environment. Therefore, it is still necessary to increase their energy efficiency, which prolongs the battery-replacement interval (or the device lifetime itself) or enables more functions to be integrated into a single device without increasing its energy requirements. In this area, it is possible to focus on IoT device design, on increasing the efficiency of a node's operation during data processing, or on energy-efficient secure communication between various IoT nodes.
Keywords: energy efficiency, low power consumption, Internet of Things, wireless sensor networks
doc. Ing. Ladislav Hudec, PhD. (ladislav.hudec[at]stuba.sk)
- Analysis of security data using machine learning methods
Due to the significant development of digitization and the Internet of Things, the present electronic world contains a large amount of cyber security data. Efficiently resolving cyber anomalies and attacks is becoming a growing concern in today's cyber security industry all over the world. Traditional security solutions are insufficient to address contemporary security issues due to the rapid proliferation of many sorts of cyber attacks and threats. Utilizing artificial intelligence, especially machine learning technology, is essential to providing a dynamically enhanced, automated, and up-to-date security system through the analysis of security data. The goal is, for a selected area, to propose new procedures and methods for the application of machine learning methods in the analysis of security events, especially with regard to the detection and prediction of cyber attacks.
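As a small illustration, the following Python sketch applies unsupervised anomaly detection to synthetic security-event features with scikit-learn; the features and the choice of Isolation Forest are assumptions for demonstration only:

    # Minimal anomaly-detection sketch over security-event features (e.g., network flows).
    # Assumptions: synthetic feature vectors; a real system would use logs, flows, or alerts.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = rng.normal(loc=[500, 60, 3], scale=[100, 10, 1], size=(2000, 3))   # bytes, duration, ports
    attacks = rng.normal(loc=[5000, 5, 40], scale=[500, 2, 5], size=(20, 3))    # bursty scanning traffic
    events = np.vstack([normal, attacks])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(events)
    scores = detector.decision_function(events)          # lower score = more anomalous
    flagged = np.where(detector.predict(events) == -1)[0]
    print("flagged events:", len(flagged), "lowest score:", round(scores.min(), 3))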
Keywords: cybersecurity, machine learning
Adversarial Attacks on Machine Learning Models
Supervisor-specialist: Ing. Jakub Breier, PhD., TTControl, Vienna, Austria
It has been shown that neural networks are vulnerable to adversarial attacks. The majority of the academic community focuses on neural networks because they are currently the most popular research topic within machine learning. However, there are other algorithms that are widely used in industry, such as support vector machines, tree-based models, etc. The goal of this project is to analyze the security of these algorithms and to propose novel adversarial attacks and defenses.
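A minimal Python sketch of such an evasion attack against a linear SVM (scikit-learn); the synthetic data and perturbation budgets are illustrative assumptions:

    # Minimal evasion-attack sketch against a linear SVM: move a sample across the
    # decision boundary along the weight vector (a gradient-based perturbation).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    clf = LinearSVC(max_iter=5000).fit(X, y)

    x = X[0]
    w = clf.coef_[0]
    direction = -np.sign(clf.decision_function([x])[0]) * w / np.linalg.norm(w)

    for eps in (0.1, 0.5, 1.0, 2.0):
        x_adv = x + eps * direction            # small perturbation toward the boundary
        print(eps, clf.predict([x])[0], "->", clf.predict([x_adv])[0])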
Keywords: machine learning, adversarial attacks, security
Automating scenario generation for a cyber range
A cyber range is an environment used for training security experts and for testing attack and defence tools and procedures. Usually, a cyber range simulates one or more critical infrastructures that attacking and defending teams must compromise and protect, respectively. The infrastructure can be physically assembled, but it is much more convenient to virtualize it in a cloud environment with the IaaS (Infrastructure as a Service) service model. Although some modern technologies support IaaS, the design and deployment of scenarios of interest is mostly a manual operation. As a consequence, it is common practice to have a cyber range hosting a few (sometimes only one) consolidated scenarios. However, reusing the same scenario may significantly reduce the effectiveness of the training and testing sessions. The goal of this dissertation topic is to propose a framework for automating the definition and deployment of arbitrarily complex cyber range scenarios.
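A minimal Python sketch of the intended direction: a scenario described declaratively and turned into a deployment plan. All class and field names are hypothetical; a real framework would emit IaaS templates for a cloud orchestrator rather than printed steps:

    # Minimal sketch: declarative cyber range scenario turned into a deployment plan.
    # All names (Host, Scenario, render_plan) are hypothetical illustrations.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Host:
        name: str
        image: str
        network: str
        vulnerable_services: List[str] = field(default_factory=list)

    @dataclass
    class Scenario:
        name: str
        networks: List[str]
        hosts: List[Host]

    def render_plan(s: Scenario) -> List[str]:
        plan = [f"create network {n}" for n in s.networks]
        for h in s.hosts:
            plan.append(f"boot {h.name} from {h.image} on {h.network}")
            plan += [f"install {svc} on {h.name}" for svc in h.vulnerable_services]
        return plan

    demo = Scenario("web-dmz-training", ["dmz", "internal"],
                    [Host("web01", "ubuntu-22.04", "dmz", ["outdated-cms"]),
                     Host("db01", "debian-12", "internal", [])])
    for step in render_plan(demo):
        print(step)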
Keywords: games for cyber ranges, automated scenario generation
prof. Ing. Volodymyr Khylenko, PhD. (volodymyr.khylenko[at]stuba.sk)
Development of algorithms for optimizing the management of the financial and economic system using blockchain and AI technologies
In the context of the globalization of the modern world economy, the digitalization of financial services, and the emergence of new information technologies, more opportunities are opening up to improve the quality of management of the global financial and economic system. The issues of optimizing such management, improving the efficiency of management decisions, and protecting against the negative impact of the human factor are becoming increasingly relevant. The research topic covers both the analysis and development of tools for assessing the current state and forecasting the dynamics of financial and economic processes in the context of globalization, as well as the creation of software and algorithmic support for specialized narrow-profile decision support systems. The focus of the work should be on the use of new information technologies to solve the problem of optimal (optimized) control. It is necessary to consider the opportunities provided by blockchain, Big Data technologies, data mining, AI, and others, and to justify the chosen solutions.
Keywords: Financial and economic system, optimal control, information technology, Big Data, data mining, blockchain technology, artificial intelligence, decentralized finance
Increasing the resilience of cybersecurity systems using neural networks and AI in the post-quantum era
The development of quantum technologies and the expected release of industrial quantum computers pose a threat to the confidentiality of information and to its protection from distortion, unauthorized redirection, and so on. The topic of the dissertation research should be the study of how real the threats posed by quantum computers are to the encryption algorithms used in modern cybersecurity systems, and the development of recommendations for making changes to cybersecurity systems. It is necessary to develop general requirements and recommendations to improve the level of security of cybersecurity systems in the post-quantum era. An important issue should be the study of the dangers to cybersecurity systems arising from new information technologies, in particular the use of neural networks and AI, and the development of recommendations for protection against this type of threat.
Keywords: Quantum computers, post-quantum era, cybersecurity, encryption algorithms, cybersecurity enhancement, neural network, AI
prof. Ing. Ivan Kotuliak, PhD. (ivan.kotuliak[at]stuba.sk)
Improving the security of communication networks
This topic is concerned with the issue of ensuring the quality of service required by an application over a communication network. The required quality includes the requirement for security. It mainly focuses on the design of new approaches to network management, either in the area of routing protocols or in the area of data traffic management. The solution can use statistical properties of traffic, intelligent routing in software-defined networks, or enhanced content delivery networks.
Use of quantum computing in increasing computational efficiency
Quantum computing has matured from an experimental technology to an everyday usable technology. The focus of this thesis is to understand the capabilities of quantum computing, its advantages, and the possible efficiency of computation, and to validate these assumptions using available resources. Subsequently, it will be necessary to apply these techniques in new fields, such as artificial intelligence or security.
AI-assisted Side-Channel Attack
Supervisor-specialist: Bc. Xiaolu Hou, PhD. (xialu.hou[at]stuba.sk)
Physical attacks on cryptographic implementations have been studied for a few decades. In this project, we focus on two main attack types: side-channel attacks and fault attacks. A side-channel attack (SCA) [1] targets implementations of cryptographic primitives passively. It exploits the possibility of observing the physical characteristics of a device during the encryption/decryption process. The observed information is then used to reveal the secret key used during the computation. A fault attack (FA) on a cryptographic implementation exploits a situation when the attacker has access to the target device that is carrying out cryptographic computations. The attacker injects a fault into the computation and utilizes the resulting faulty ciphertext (and the corresponding correct ciphertext) to derive information related to the secret key.
In the past few years, AI has been adopted for different aspects of SCA [2,3,4]. Recently, a new direction of adopting AI for FA has also been proposed [5]. For this project, the student is expected to analyze state-of-the-art AI-assisted physical attack methods, then propose novel AI algorithms for SCA or FA, and finally implement the proposed algorithms and evaluate them on modern symmetric block ciphers, e.g., AES [6] and PRESENT [7].
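As a purely illustrative Python sketch of a profiled, AI-assisted SCA, the following code trains a small neural network on simulated Hamming-weight leakage of a toy target (a key byte XORed with a plaintext byte instead of a cipher S-box) and ranks key guesses by accumulated log-likelihood; all parameters are assumptions:

    # Minimal profiled (AI-assisted) side-channel sketch on simulated leakage.
    # Toy target: leakage ~ HammingWeight(plaintext ^ key) + noise; real attacks would
    # target the S-box output of AES/PRESENT.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    HW = np.array([bin(v).count("1") for v in range(256)])

    def traces(n, key):
        pts = rng.integers(0, 256, n)
        leak = HW[pts ^ key].astype(float)[:, None] + rng.normal(0, 1.0, (n, 1))
        return pts, leak

    # Profiling phase: device with a known key, labels are the leaking intermediate values.
    prof_key = 0x3C
    pts_p, X_p = traces(5000, prof_key)
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    model.fit(X_p, HW[pts_p ^ prof_key])

    # Attack phase: unknown key, accumulate log-likelihoods over all key guesses.
    true_key = 0xA7
    pts_a, X_a = traces(400, true_key)
    log_p = np.log(model.predict_proba(X_a) + 1e-12)      # columns follow model.classes_
    scores = np.zeros(256)
    for guess in range(256):
        hyp = HW[pts_a ^ guess]
        cols = np.searchsorted(model.classes_, hyp)
        scores[guess] = log_p[np.arange(len(pts_a)), cols].sum()

    print("best-ranked key guess:", hex(int(scores.argmax())), "true key:", hex(true_key))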
References:
[1] Mangard, Stefan, Elisabeth Oswald, Thomas Popp. Power analysis attacks: Revealing the secrets of smart cards. Vol. 31. Springer Science & Business Media, 2008.
[2] Moos, Thorben, Felix Wegener, and Amir Moradi. "DL-LA: Deep Learning Leakage Assessment: A modern roadmap for SCA evaluations." IACR Transactions on Cryptographic Hardware and Embedded Systems (2021): 552-598.
[3] Rijsdijk, Jorai, et al. "Reinforcement learning for hyperparameter tuning in deep learning-based side-channel analysis." IACR Transactions on Cryptographic Hardware and Embedded Systems (2021): 677-707.
[4] Wu, L., Perin, G., & Picek, S. (2022). The best of two worlds: Deep learning-assisted template attack. IACR Transactions on Cryptographic Hardware and Embedded Systems, 413-437.
[5] Cheng, Y., Ou, C., Zhang, F., & Zheng, S. (2023). DLPFA: Deep Learning based Persistent Fault Analysis against Block Ciphers. Cryptology ePrint Archive.
[6] Daemen, J., & Rijmen, V. (1999). AES proposal: Rijndael.
[7] Bogdanov, A., Knudsen, L. R., Leander, G., Paar, C., Poschmann, A., Robshaw, M. J., ... & Vikkelsoe, C. (2007). PRESENT: An ultra-lightweight block cipher. In Cryptographic Hardware and Embedded Systems-CHES 2007: 9th International Workshop, Vienna, Austria, September 10-13, 2007. Proceedings 9 (pp. 450-466). Springer Berlin Heidelberg.
doc. Mgr. Michal Kováč, MSc., PhD. (michal_kovac[at]stuba.sk)
Explainable Artificial Intelligence in Precision Oncology
Twenty years of technology research are more than enough to transform oncology from an experimental science into a data science. The reasons are many, though the introduction of massively parallel sequencing holds the lion's share. We now stand much closer to harnessing the full potential of cancer genome variability to know where a tumor came from and how it is going to evolve. The research aims at explainable AI methodology for timing druggable mutations and their cells through cancer evolution and, ultimately, at using that information to build a recommendation system for clinical decision making. The key technologies to explore are dedicated to XAI and big data analysis with distributed computing, with a special focus on genomic data generated through large international sequencing projects, such as The Cancer Genome Atlas.
Keywords: big data, precision medicine, genomics, distributed computing, explainable AI, clinical decision support
doc. Mgr. Monika Kováčová, PhD. (monika.kovacova[at]stuba.sk)
Research in the field of Mixed models based on XAI methods for determining tumor process heterogeneity
Tumor genomes are often highly heterogeneous and consist of genomes of multiple subclonal types. Complete characterization of all subclonal types is a fundamental need in tumor genome analysis in order to determine the optimal and most accurate therapeutic approaches for the type of tumor process. With the advances in NGS, we are now able to develop XAI-based computational methods to determine the subclonal structure of tumors as accurately as possible. Most of these methods are based on sequence information derived from somatic point mutation analysis. The accuracy of these algorithms depends critically on the quality of the information obtained by classical algorithms for determining the copy number and allelic frequencies of individual mutations, and usually requires deep genome coverage to achieve a reasonable level of accuracy. The aim of this research is to develop new mixed nonlinear models that can significantly improve the accuracy of subclonal tumor structure determination and also allow the evolution of the tumor to be analyzed on simulated and real cancer sequencing data, and to evaluate their contribution.
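A minimal Python sketch of the underlying idea: clustering variant allele frequencies with a one-dimensional Gaussian mixture and selecting the number of populations by BIC; the simulated frequencies are illustrative and ignore purity and copy-number effects that the proposed models would address:

    # Minimal sketch: inferring subclonal structure from variant allele frequencies (VAFs).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    vaf = np.concatenate([
        rng.normal(0.50, 0.03, 120),   # clonal mutations (present in all tumor cells)
        rng.normal(0.25, 0.03, 80),    # subclone A
        rng.normal(0.10, 0.02, 60),    # subclone B
    ]).reshape(-1, 1)

    best = min((GaussianMixture(k, random_state=0).fit(vaf) for k in range(1, 6)),
               key=lambda m: m.bic(vaf))                 # model selection by BIC
    print("estimated number of populations:", best.n_components)
    print("estimated cluster means (VAF):", np.sort(best.means_.ravel()))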
Keywords: tumor heterogeneity, subclonal tumor structure, accuracy of allelic frequency determination, monitoring of temporal evolution of tumor
XAI methods in research of mutational signatures for determining the evolution of the tumor process
Mutational signatures are key to understanding the processes that shape cancer genomes, yet their analysis requires relatively rich whole-genome or whole-exome mutation data. Recently, gene-panel sequencing data that are orders of magnitude sparser have become increasingly available in the clinic. To deal with such sparse data, we will work on a novel probabilistic mixture model of the data so as to maximize the model's likelihood. We will compare classical models based on NMF with a Bayesian approach and assess their usability for sparser data. Recent evidence has also unveiled strong correlations between replication timing and various forms of genetic mutation. We will explore whether our probabilistic models allow us to find a connection between mutational signatures and the monitoring of the temporal evolution of the tumor via mixture models. Thus, understanding the signatures of mutational processes may lead to the development of many effective diagnostic and treatment strategies.
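As a small illustration of the classical baseline, the following Python sketch factorizes a simulated mutation catalogue with NMF (scikit-learn); the synthetic signatures and exposures are assumptions for demonstration only:

    # Minimal sketch: extracting mutational signatures from a mutation-count matrix with NMF.
    # Assumptions: a simulated catalogue (samples x 96 trinucleotide contexts) built from
    # two synthetic signatures; real analyses would compare NMF with Bayesian mixture models.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_samples, n_contexts, n_sig = 50, 96, 2
    true_sigs = rng.dirichlet(np.ones(n_contexts), size=n_sig)     # signature profiles
    exposures = rng.gamma(2.0, 50.0, size=(n_samples, n_sig))      # activity per sample
    counts = rng.poisson(exposures @ true_sigs)                    # observed catalogue

    model = NMF(n_components=n_sig, init="nndsvda", max_iter=1000, random_state=0)
    W = model.fit_transform(counts)     # per-sample exposures
    H = model.components_               # recovered signature profiles
    print("reconstruction error:", round(model.reconstruction_err_, 2))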
Keywords: mutational signatures, evolution tumor process, sparse gene-panel sequencing data, probabilistic mixture models
doc. Ing. Tibor Krajčovič, PhD. (tibor.krajcovic[at]stuba.sk)
Hard Real-Time Fault-Tolerant Embedded Systems
Intermittent and transient faults constitute a substantial part of all faults in embedded systems. Determining their impact on the flow of the program requires that the system be in its working environment and that the check be performed concurrently with the running program. Additional control processes and control means must not reduce the computational power of the system to such an extent that response deadlines are no longer met.
Keywords: embedded systems, real-time, fault-tolerance
Safety-Critical Embedded Systems
For safety-critical embedded systems, it is essential to use safety mechanisms so that a failure of the embedded system does not endanger the operator. A failure of the system can be caused not only by a fault in the embedded system itself, but also by an attack from the outside world. An analysis of these possibilities and a proposal of new firmware development methods for such systems are needed.
Keywords: embedded systems, safety-critical, endangerment of the operator
Pro-active protection of the program against malicious software
Currently, the most important means of protection against malicious software is an antivirus program. However, its capabilities are limited by the detection algorithms themselves, the speed of updating the database, etc. Malicious software can affect a program's behavior in many ways. There is a need to analyze these possibilities and propose new methods for developing programs that will have the ability to protect themselves from malicious software, even from malware that was not detected by the antivirus program.
Keywords: pro-active program protection, malware, antivirus
doc. Ing. Ján Lang, PhD. (jan.lang[at]stuba.sk)
Open systems knowledge interlinking
Within software development, it is relatively difficult, yet necessary, to navigate among a wide range of potentially available software knowledge (including, for example, source code fragments) and to make decisions based on its context for its effective use. Standard procedures, such as the decomposition of a complex problem into less complex or simple ones (applied, e.g., within agile approaches in software development), only multiply the number of potential connections and indirectly associate the necessary knowledge, skills, and competences in the context of software development itself. This also concerns the level of education, experience, or professional training that an individual must have in order to be considered competent or qualified for the task in a measurable way. Interconnection at the level of knowledge must also be investigated in connection with open systems and existing multi-platform recommenders based on artificial intelligence, and expressed in a relevant form, e.g., through models or visualization. The premise of reusing successful solutions available in the form of question-and-answer (Q&A) pairs, by appropriately connecting elements from the problem domain to elements from the solution domain, is significantly motivating here.
Keywords: knowledge interlinking, knowledge, skills, competences, reusability, visualization, modeling, agility, open systems, Q&A
Modeling as a documentation procedure
Modeling is an interpretation technique that enables the expression of ideas, thoughts, or simply an intention at different levels of abstraction. The model itself, as a product, can thus be perceived as a framework, but also as a significantly detailed and executable model supported by invasive or non-invasive mapping of its elements. In software development, it serves not only for the communication of interested parties, the verification of selected properties, and the very expression of variability, but also as potential support for the protection of public investments. As an analytical or design model, it supports the readability of otherwise complex constructions even over time and represents a certain form of documentation artifact. The importance of automated documentation generation, in the form of non-trivial views of the structure and even the behavior of the system being developed, is also interesting retrospectively, especially with regard to the limitations of existing approaches, where the only measure of progress is the creation of functioning software. Where, and whether, to use modeling in agile and lean software development offers a number of open problems that a doctoral student can address in their research.
Keywords: modeling, model, documentation, idea, intention, artifact, structure, behavior, retrospective, agility, savings
Decision support systems in clinical practice
Supervisor-specialist: Ing. Fedor Lehocki, PhD. (fedor.lehocki[at]stuba.sk)
Clinical decision support systems can support medical personnel in decisions regarding diagnosis and therapy. With recent progress in data-based AI, several challenges emerge, especially in the explainability of the decisions derived by such systems. The analysis of existing decision processes in data-based and knowledge-based AI systems would be an interesting approach to developing a methodology that uses the best characteristics of both approaches to improve their acceptance in clinical practice.
Keywords: clinical decision support, data, knowledge, AI, telemedicine
Optimization of digital systems development
Supervisor-specialist: Ing. Lukáš Kohútka, PhD. (lukas.kohutka[at]stuba.sk)
The development of digital systems in the form of microchips is an extensive and complex process that requires a significant amount of effort and time, and such development is carried out by several teams of people with different roles, so collaboration within the team of digital system developers also plays an important role. The efficiency of the work of digital system developers largely depends on the software resources that these developers use. While software engineers have relatively extensive resources for efficient software development, digital system developers currently develop digital systems less efficiently, which results in high development costs and a longer time to market.
This topic is devoted to the search for new approaches to the efficient development of digital systems using software resources; it is possible to focus on optimizing or automating selected parts of the digital systems development process, such as the design, description, verification, synthesis, or documentation of a digital system.
Keywords: optimization, digital systems, team collaboration, ASIC/FPGA, CAD (computer-aided design), EDA (electronic design automation), IDE (integrated development environment)
Design and optimization of digital systems based on RISC-V
Supervisor-specialist: Ing. Lukáš Kohútka, PhD. (lukas.kohutka[at]stuba.sk)
RISC-V is an open instruction set for RISC-type processors, which was created in 2010 and is gradually gaining popularity due to its openness, modularity, and universality. Therefore, research into new microprocessors is largely focused on RISC-V. The development of increasingly efficient digital systems, especially microprocessors, is a long-term research problem and challenge: the software requirements on digital systems, which must execute software in an acceptable time and at an acceptable cost, including energy consumption, are constantly increasing, creating pressure for the development of ever more efficient and more energy-efficient digital system architectures.
This topic is devoted to the design and optimization of digital systems based on the open RISC-V instruction set. Within this topic, one can focus on designing a new RISC-V microprocessor architecture or a selected part of the processor, or on expanding an existing RISC-V microprocessor with new hardware acceleration of a selected algorithm, for example in the form of a coprocessor. The main goal is to achieve higher efficiency of a digital system based on RISC-V, where the important criteria for evaluating such a system are chip area (including ASIC), performance (clock frequency, throughput, and latency), energy consumption, and, last but not least, the reliability of the digital system.
Keywords: optimization, digital systems, microprocessor, RISC-V, ASIC/FPGA
Ing. Giang Nguyen, PhD. (giang.nguyen[at]stuba.sk)
- Soft computing for complex solutions
The topic is focused on soft computing, which constructs computationally intelligent methods by combining edge technologies such as machine learning, neural networks, and deep learning to solve complex domain problems like network monitoring and resource management. The various methods used in soft computing are neither independent of each other nor in competition; rather, they work in a cooperative and complementary way. Soft computing aims for tolerance of imprecision, uncertainty, partial truth, and approximation to achieve effectiveness and low solution cost. As today's data has large-scale potential with many V's characteristics, wider collaboration between smart data-centric AI methods, scalable data processing, and high-performance computing is needed to face challenges in many areas. All of these advanced technologies do not always have to merge together, but the alliance is essential.
- Machine learning and sensitive data protection
Many modern-world problems, which we want to solve with the help of AI, require direct access to sensitive data such as personal, medical, business, or security information. In many cases, getting access to such information is nearly impossible due to protection requirements. From this point, several questions are raised, such as how to avoid the disclosure of sensitive information in the data science process or, moreover, how we can model sensitive data without direct access to them. The topic of the work is focused on collaborative (federated) learning, where it is necessary to ensure knowledge sharing between partners while ensuring the protection of sensitive data at the same time.
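The basic mechanism of federated learning can be illustrated by a minimal Python sketch of federated averaging; the synthetic linear-regression data and plain weight averaging are illustrative assumptions, and real deployments would add secure aggregation or differential privacy:

    # Minimal federated averaging sketch: each partner trains locally on its private data
    # and only model weights are shared and averaged; raw data never leaves the partner.
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0, 0.5])

    def make_partner(n):
        X = rng.normal(size=(n, 3))
        y = X @ true_w + rng.normal(0, 0.1, n)
        return X, y

    partners = [make_partner(n) for n in (200, 150, 300)]
    w = np.zeros(3)

    for round_ in range(30):
        local_ws = []
        for X, y in partners:                       # local training, data stays on site
            w_local = w.copy()
            for _ in range(5):
                grad = 2 * X.T @ (X @ w_local - y) / len(y)
                w_local -= 0.05 * grad
            local_ws.append(w_local)
        sizes = np.array([len(y) for _, y in partners])
        w = np.average(local_ws, axis=0, weights=sizes)   # FedAvg aggregation

    print("federated estimate:", np.round(w, 3), "true weights:", true_w)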
doc. Dr. Ing. Michal Ries (michal.ries[at]stuba.sk)
Architecture for blockchain services in the domain of decentralized finances
Supervisor-specialist: Ing. Kristián Košťál, PhD. (kristian.kostal[at]stuba.sk)
The digitalization of financial services is advancing with the announcements of central bank projects to introduce digital currencies based on blockchain technology. The topic of the thesis is to design a blockchain architecture, including all actors, devices, and parameters, for an industry ecosystem for the credible exchange of digital assets. The focus of the work is the proposal of architecture and consensus algorithms in order to identify a new model that will allow a gradual transformation from today's Internet of content and centralized finance to the future Internet of values with applications of decentralized finance. In the Ph.D. study, it will also be important to look at aspects of safety, possible regulations, and anti-money-laundering techniques, so the field of research can be extended to the application of machine learning approaches and artificial intelligence. The results of the research could also lead to the definition, and possible standardization, of how online content could be a bearer of value, so that network participants would exchange these values rather than the content itself. The use of blockchain technology is also gaining momentum in public service sectors, where there is often a problem with the non-transparency of data, the exact opposite of blockchain technology, where everything is transparent. The future market is set up as a multi-way system consisting of billions of interaction endpoints. In this vast heterogeneous environment, the secure exchange of data and values and seamless transactions will be essential for the efficient and credible functioning of the financial sector ecosystem.
Keywords: Blockchain technology, artificial intelligence, decentralized finance
Research challenges for the application of blockchain technology in the domain of cryptoeconomics
Blockchain technology today provides a sufficient technological background for enhancing cybersecurity in applications in the crypto-economics domain. The main advantage of blockchain technologies applied to security systems is their decentralized, self-verifying nature in the public domain. Without a single entry point, and with the need to compromise hundreds or thousands of nodes at once, disrupting blockchain-based security systems is technologically very complex. The application of blockchain technology in the crypto-economy will have a major impact on ensuring a high level of cyber security, which brings major research challenges such as a more secure DNS system, a more secure P2P (peer-to-peer) messaging system, decentralized data storage, a more secure "privacy-first" web browser, or more secure decentralized data platforms for the IoT (Internet of Things).
Keywords: Blockchain technology, crypto-economics, cyber security
Research in the domain of using AI techniques in Distributed Ledger Technology
Distributed ledger technology (DLT) is a digital system for recording transactions between parties, in which transactions and their details are recorded in several places simultaneously. Unlike traditional databases, distributed ledgers have no centralized data storage or management functions. The basic idea of DLT is to decentralize data storage so that it cannot be owned or managed by only one specific actor. The ledger is updated with batches of transactions, and a transaction can no longer be changed after it has been recorded. An incoming transaction must first be verified by a trusted party before it is entered into the ledger. One of the best-known examples of DLT is blockchain technology. The difference between various DLTs, such as blockchain, is how data are stored and how consensus on validity is reached.
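The ledger mechanics described above can be illustrated by a minimal Python sketch of a hash-linked ledger; it deliberately omits consensus and networking, which a real DLT would provide:

    # Minimal sketch: records are appended in batches (blocks), each block commits to the
    # previous one by a hash, so recorded transactions cannot be changed without breaking
    # the chain.
    import hashlib, json, time

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    chain = [{"index": 0, "prev": "0" * 64, "txs": [], "time": time.time()}]

    def append_block(txs):
        prev = chain[-1]
        chain.append({"index": prev["index"] + 1, "prev": block_hash(prev),
                      "txs": txs, "time": time.time()})

    def verify():
        return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

    append_block([{"from": "alice", "to": "bob", "amount": 5}])
    append_block([{"from": "bob", "to": "carol", "amount": 2}])
    print("chain valid:", verify())
    chain[1]["txs"][0]["amount"] = 500          # tampering with a recorded transaction
    print("chain valid after tampering:", verify())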
In addition to the isolated use cases and capabilities of artificial intelligence, it can also help overcome some of the limitations of DLT-based systems, such as implementing learning rules applied to transactions. Combining these two technologies (AI and DLT) can provide successful, efficient, and valuable results in the financial field and in the broader application of DLT. The doctoral study will aim to define and model how artificial intelligence capabilities can be integrated into systems based on distributed ledger technology. The research will also cover machine learning models that can use data stored in the DLT network to predict behavior or perform real-time analytics.
doc. Ing. Peter Trúchly, PhD. (peter.truchly[at]stuba.sk)
Utilization of 5G and network control in an interconnected transport environment
Cooperative intelligent transport systems have been rapidly developing recently and are based on an environment of connected (or autonomous) vehicles. These vehicles are connected not only to each other, but also actively communicate with external services or devices, such as the transport infrastructure itself. This connection makes it possible to increase road safety, but also to effectively manage the entire traffic situation. Communication across the different networks is usually heterogeneous: different access technologies (wireless, mobile, satellite) and different protocols are used. Effective management and control of such an environment is not a trivial task. It opens up a number of problems that a doctoral student can address in their research.
Keywords: Cooperative intelligent transport system, connected vehicles, autonomous transport, network management, wireless technologies, 5G, road safety
Research in the field of sensory data fusion for connected and automated vehicles
Supervisor-specialist: Ing. Marek Galiński, PhD. (marek.galinski[at]stuba.sk)
Vehicles that aim to provide autonomous driving at level 3 and above are equipped with a number of different sensors, thanks to which they collect huge volumes of data in real time. In order for the vehicle to be able to make informed decisions without compromising safety, it is important to interpret this information correctly and to interconnect it appropriately. The vehicles themselves are also being interconnected, and with the rising deployment of standalone 5G networks this trend will accelerate. In the field of sensory data fusion, there are a number of open research problems today focused on how to correctly interpret the collected data in the context of autonomous vehicle driving. Due to the huge amount of this data, the topic is especially suitable for those who want to deal with optimization problems and data science, or with mobile networks, during their doctoral research.
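The following minimal Python sketch illustrates the basic idea of fusing noisy sensor readings, here with a one-dimensional constant-velocity Kalman filter; the two simulated position sensors and all noise parameters are illustrative assumptions, whereas real vehicles fuse camera, radar, and lidar streams:

    # Minimal sensor-fusion sketch: a 1-D constant-velocity Kalman filter fusing two
    # position sensors with different noise levels.
    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])           # state transition (position, velocity)
    Q = np.diag([0.01, 0.1])                   # process noise
    H = np.array([[1.0, 0.0]])                 # both sensors observe position only
    R = {"gnss": 4.0, "radar": 1.0}           # measurement noise variances

    x = np.array([0.0, 0.0])
    P = np.eye(2)
    rng = np.random.default_rng(0)
    true_pos, true_vel = 0.0, 5.0

    for step in range(100):
        true_pos += true_vel * dt
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with each available sensor reading
        for name, var in R.items():
            z = true_pos + rng.normal(0, np.sqrt(var))
            S = H @ P @ H.T + var
            K = P @ H.T / S                    # Kalman gain
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P

    print("estimated position/velocity:", np.round(x, 2), "true:", [round(true_pos, 2), true_vel])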
Keywords: autonomous transport, sensor data, data optimization, data science, road safety
Cooperative control research for autonomous vehicles
Supervisor-specialist: Ing. Rastislav Bencel, PhD. (rastislav.bencel[at]stuba.sk)
The topic is focused on autonomous transport in the context of cooperative control, which includes platooning or group start. An effective solution needs to be proposed to increase the flow and safety of traffic. Improved traffic flow results in a reduction in energy consumption and emissions. The topic itself represents a complex problem, within which it is necessary to consider that, in the future, there will be vehicles that will not meet the level of autonomous driving required to ensure platooning or group start. Additionally, it is possible to use different types of communication for autonomous vehicles; communication can occur directly between vehicles or between the infrastructure and a vehicle. The presented topic provides the possibility of research in the areas of algorithm optimization, architecture design, or solving network problems at the Vehicle-to-Infrastructure or Vehicle-to-Vehicle level.
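As a toy illustration of cooperative control, the following Python sketch simulates a platoon in which each follower keeps a constant time gap to its predecessor using a simple proportional controller; the gains, time gap, and braking manoeuvre are arbitrary assumptions:

    # Minimal platooning sketch: constant-time-gap spacing with a proportional controller.
    import numpy as np

    dt, time_gap, standstill = 0.1, 1.2, 5.0          # s, s, m
    kp, kv = 0.45, 0.8                                 # controller gains
    n = 4                                              # leader + 3 followers
    pos = np.array([60.0, 40.0, 20.0, 0.0])            # leader first
    vel = np.full(n, 20.0)

    for step in range(600):                            # 60 s of simulated driving
        t = step * dt
        lead_acc = -2.0 if 20 < t < 25 else 0.0        # leader brakes briefly
        acc = np.zeros(n)
        acc[0] = lead_acc
        for i in range(1, n):
            desired_gap = standstill + time_gap * vel[i]
            gap_error = (pos[i - 1] - pos[i]) - desired_gap
            speed_error = vel[i - 1] - vel[i]
            acc[i] = kp * gap_error + kv * speed_error
        vel = np.clip(vel + acc * dt, 0, None)
        pos = pos + vel * dt

    print("final gaps (m):", np.round(pos[:-1] - pos[1:], 1))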
Keywords: road safety, autonomous transport, platooning, group start, intelligent infrastructure, 5G networks
doc. Ing. Valentino Vranić, PhD. (valentino.vranic[at]stuba.sk)
Patterns in Agile Software Development and Beyond
Living structures tend to develop in a pattern-like way: by generating elements that balance the conflicts of contradicting forces. In software development, this is so with code, where we talk about design patterns, but also with people, where we talk about organizational patterns. Patterns do not come in isolation. On the contrary, they form complex linguistic systems. This raises a number of research questions, such as how to make patterns and pattern languages more accessible and comprehensible, how patterns form pattern languages and how they are composed with each other, how organizational patterns are related to design patterns and software modularization, how to recognize patterns in software development artifacts, how organizational patterns can contribute to collaboration in distributed settings, or how the understanding of patterns in software development can contribute to the understanding of patterns in other areas of human life (e.g., in drama) or vice versa.
Keywords: agile, lean, patterns, pattern languages, modularization, collaboration, distributed software development
Software Knowledge Comprehension and Reuse
Software development operates on versatile knowledge, such as code, patterns, use cases, graphical models, specifications, modularization, running software, execution logs, tests, and even the organization of people. Using and reusing such software knowledge requires understanding the intent behind it and how it is interlinked. Connections and possible transformations between software development artifacts need to be explored and supported by new kinds of models and visualization in order to improve their comprehensibility and reusability. Agile software development, as a human-centered way of software development, and software product lines, as an organized approach to feasible software reuse, are of particular interest in this sense.
Keywords: comprehension, reuse, intent, modularization, agile, lean, software product lines, visualization, modeling
Doctoral degree studies at the Institute of Informatics SAS
The Institute of Informatics SAS is an external educational institution in the accredited doctoral degree study program Applied Informatics.
Ing. Zoltán Balogh, PhD. (balogh.ui[at]savba.sk)
Design and Research of Intelligent Software Agents for an Auction-Based Distributed Market Platform
The topic focuses on the design, research, and development of intelligent software agent methods and a distributed platform for use in auction-based resource allocation in complex and dynamic environments. Software agents compete or cooperate with each other to achieve a common goal. These agents can also represent different entities, such as machines, people, or organizations, on whose behalf they act. Research can be focused on the design and optimization of the agents themselves, as well as on interactions between agents (communication, negotiation, coordination, competition, or cooperation). The topic also covers the development of new auction mechanisms that can efficiently process and analyze information from different types of sources, including physical, digital, and virtual. Research can also be directed at ways to improve the scalability, adaptability, and robustness of software agent auctions in different environments and scenarios. The results of this research will provide valuable insights and recommendations for the design and optimization of auction software agents, with potential applications in various fields including e-commerce, logistics, energy, and telecommunications.
Mgr. Martin Bobák, PhD. (martin.bobak[at]savba.sk)
Intelligent software platforms for the virtualized computing continuum
The volume of data is constantly growing, which makes its processing and storage using standard approaches more demanding. There is a growing demand to use the entire (heterogeneous) computing continuum, i.e. from less powerful edge devices (e.g. various edge/IoT devices) to high-performance clusters available in the form of cloud resources. Such an approach increases the scalability, reliability and efficiency of the proposed solution. On the other hand, this approach brings new challenges and open problems such as elasticity (automatic scalability), effective access to data and/or computing resources, effective management and sharing of data in a distributed environment, integration of decentralized cognition, and others.
RNDr. Ján Glasa, CSc. (jan.glasa[at]savba.sk)
Computer modelling of fires
Consulting Co-Supervisor: Ing. Lukáš Valášek, PhD. (lukas.valasek[at]savba.sk)
Computer modelling of fires currently achieves a high level of accuracy and reliability. Existing simulators allow a wide range of fire-related physical processes to be included in the calculation and can model the course of a fire and its effects even in large structures. However, such simulations are generally computationally demanding and require high-performance computers. The research will focus on modelling and visualizing a fire in a selected structure using the FDS program system and on its effective realization on powerful computers. The simulation results will be compared with experimentally obtained data.
Keywords: computer modelling, CFD, fire, FDS, parallel calculation, HPC
doc. Ing. Ladislav Hluchý, CSc. (Ladislav.Hluchy[at]savba.sk)
Distributed large data processing
Rapidly increasing volumes of diverse data from distributed sources create challenges for extracting valuable knowledge. Applications include modelling, simulation, pattern recognition, visualization, etc., in areas such as biomedicine, astrophysics, environmental sciences, aeronautics, automotive, energy, and material sciences. Due to the size of the data, which are often referred to as large or extreme, it is necessary to design a methodology, robust methods, and tools for extreme-scale analytics in synergy with distributed architectures for collecting and managing vast amounts of data, such as cloud technologies and the IoT. The dissertation project will be focused on the analysis and design of a methodology, methods, and algorithms for the processing of large data for selected applications, which are currently being solved at UI SAV. The research project will also include the research and development of appropriate tools and services for the distributed execution of these methods and algorithms.
Artificial Intelligence Methods in Cyber Security
Most current approaches to computer security focus on specific aspects of information and communication technology systems, such as access control, cryptography, anonymization, virus protection, intrusion detection, and anomaly detection. However, they lack an overall view of the many aspects of cyber threats and do not pay due attention to one of the most important elements of cyber security: the human aspect. In addition, they often fail to address the dynamic nature of cyber attacks, which evolve rapidly and become more sophisticated by using new vulnerabilities and combining various attack channels (network, physical, human, etc.). To address these constraints and to increase our detection and response capabilities, we need a systematic and holistic approach to cyber security that takes into account technological and human factors. The dissertation project will focus on the design of a methodology and methods for the analysis of anomalies and abnormalities using techniques of data acquisition and machine learning (data mining and process mining), with the possibility of detecting hitherto unknown threats and vulnerabilities.
Ing. Peter Malík, PhD. (p.malik[at]savba.sk)
Deep neural networks for applications in image processing and computer vision
Artificial intelligence has become ubiquitous. Most people do not realize that they use various forms of applied artificial intelligence on a daily basis. And that is the goal of a successful application of artificial intelligence in general: the user should perceive it as a seamless improvement in the quality of the service or industrial process. Currently, such a result requires a high level of expertise in the field of artificial intelligence, data analysis, and the application domain itself. Transfer learning helps to reduce the time needed to apply an existing intelligent model to a new application task, but it is not sufficient by itself to achieve the desired results. Considerable effort is still necessary to make the new application not only accurate, but also reliable, robust, and non-discriminatory. It is still an open research question which model and which method to choose, and which individual steps to use, to achieve the best required parameters in a specific application domain.
This topic specializes in the field of image data, where the input space is typically multidimensional with a large number of points in space (pixels), time (frames), and the electromagnetic spectrum (channels). Large-volume data contains a large amount of redundant information capable of not only overwhelming but also completely confusing an intelligent system. The right choice and correct application of algorithms are key here. Deep neural network models achieve the best results in this area. Convolutional neural network models are gradually being replaced by image transformer models due to the better results they achieve on sufficiently large data sets. Work with both types of models is expected, but the emphasis is on modern image transformer algorithms and on the creation of the algorithms and methods necessary for their application in specific domain tasks. The specific application domain and specific work tasks will be determined in consultation with the student. The student will be actively involved in ongoing research projects, and thus will solve research tasks of the practical application of artificial intelligence in industrial practice and will work with real industrial data. Research tasks will be solved on the premises of the Institute of Informatics of the Slovak Academy of Sciences. The student will present their results at international conferences or in renowned foreign journals. Participation in international competitions focused on the application of intelligent models is supported.
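A minimal transfer-learning sketch in Python illustrates the starting point described above (assuming PyTorch and a recent torchvision are available); the number of classes, the dummy batch, and the frozen-backbone strategy are illustrative assumptions, not a prescribed procedure:

    # Minimal transfer-learning sketch: reuse a pretrained backbone, retrain only the head.
    import torch
    import torch.nn as nn
    from torchvision import models

    num_classes = 4                                    # e.g., defect categories in an industrial task
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():                   # freeze the pretrained feature extractor
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new task-specific head

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch (replace with a real DataLoader).
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, num_classes, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print("loss on dummy batch:", float(loss))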
Keywords: artificial intelligence, deep learning, transfer learning, image transformers, convolutional neural networks, neural network architecture, detection, instance segmentation, image processing, computer vision, industry applications
Ing. Milan Rusko, PhD. (rusko.ui[at]savba.sk)
Automatic detection of Alzheimer's disease by patient's speech analysis
The object of this work is to design and implement a system for the automatic detection of symptoms of Alzheimer's disease via automatic analysis of the patient's speech. The student will review the state of the art in this non-invasive screening and diagnostic method worldwide and will analyze approaches that use acoustic and linguistic characteristics as well as machine learning techniques. In the practical part, the student will design, implement, and evaluate a program for the automatic detection of Alzheimer's disease by analyzing the patient's speech.
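As an illustration of the acoustic part of such a pipeline, the following Python sketch (assuming the librosa and scikit-learn libraries) extracts MFCC statistics per recording and feeds them to a standard classifier; the synthesized signals and random labels are placeholders for real patient recordings and clinical diagnoses:

    # Minimal sketch of an acoustic pipeline: per-recording MFCC statistics + a classifier.
    import numpy as np
    import librosa
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    sr = 16000
    rng = np.random.default_rng(0)

    def features(signal):
        mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])   # 26-dim summary

    # Placeholder "recordings": noisy harmonic signals instead of real speech.
    recordings = [np.sin(2 * np.pi * rng.uniform(100, 300) * np.arange(sr * 2) / sr)
                  + 0.1 * rng.normal(size=sr * 2) for _ in range(40)]
    X = np.array([features(s.astype(np.float32)) for s in recordings])
    y = rng.integers(0, 2, len(X))            # placeholder labels (patient vs. control)

    print("cross-validated accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())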
Automatic measurement of stress in human voice
The aim of the work is to prove the concept of identifying the stress level of a speaker by analyzing their speech. The doctoral student will give an overview of state-of-the-art solutions and analyze the most commonly used methods that use acoustic and linguistic cues as well as machine learning techniques. The student will design, implement, and evaluate a system for the automatic detection of current emotions from speech.
High-end expressive speech synthesis in Slovak
The aim of the thesis is to record a speech database and create a speech synthesizer in Slovak using the latest machine learning technologies, which will be able to generate a voice with a higher level of emotional activation, i.e., arousal (excited, urgent, warning), as well as a voice with a lower level of arousal (calm, soothing). The student will also create a voice expressing negative emotions and a voice expressing positive emotions. The voice will be implemented in a voice assistant.
prof. Ing. Ivan Štich, DrSc. (ivan.stich[at]savba.sk)
Trial wave function optimization for QMC via artificial neural networks
Goals: Construction of many-body trial wave functions for use in quantum Monte Carlo (QMC) using different kinds of neural network architectures, such as FermiNet [2] or Boltzmann machines [1], as well as custom-made architectures. The study will start from lattice systems, such as the Heisenberg model, and will systematically be developed towards more complicated systems, such as multi-electron atoms.
Annotation: A fundamental problem of many-body QMC methods is the clever construction of trial wave functions, which can dramatically reduce the huge computational costs of QMC techniques as well as hugely improve their accuracy [1]. To this end, we propose the use of neural network architectures that will incorporate the antisymmetry property of the fermionic state. The proposed techniques will be applied to a host of different electronic structure problems of systematically increasing complexity. Starting from the Heisenberg model, the study will proceed towards simple lattice fermionic models and finally towards small atoms. The main objective is the generalization of the proposed techniques to general many-body condensed matter systems.
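As a purely illustrative toy, and not the method of the project, the following Python sketch variationally optimizes a complex-parameter restricted-Boltzmann-machine trial wave function for a four-site Heisenberg chain; exact enumeration of the 16 basis states replaces Monte Carlo sampling, and the system size, ansatz, and optimizer are arbitrary assumptions:

    # Toy sketch: a complex-parameter RBM trial wave function for a 4-site Heisenberg chain,
    # optimized variationally. The variational energy is an upper bound on the exact
    # ground-state energy.
    import numpy as np
    from scipy.optimize import minimize

    N, M = 4, 8                                     # spins, hidden units
    sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
    I2 = np.eye(2)

    def site_op(op, i):
        out = np.array([[1.0 + 0j]])
        for k in range(N):
            out = np.kron(out, op if k == i else I2)
        return out

    H = sum(site_op(s, i) @ site_op(s, i + 1)
            for i in range(N - 1) for s in (sx, sy, sz))
    E_exact = np.linalg.eigvalsh(H).min().real

    # All spin configurations s in {+1,-1}^N, matching the Kronecker basis ordering.
    configs = np.array([[1 - 2 * ((k >> (N - 1 - i)) & 1) for i in range(N)]
                        for k in range(2 ** N)], dtype=float)

    n_c = N + M + M * N                              # complex parameters: a, b, W

    def amplitudes(theta):
        c = theta[:n_c] + 1j * theta[n_c:]
        a, b, W = c[:N], c[N:N + M], c[N + M:].reshape(M, N)
        return np.exp(configs @ a) * np.prod(2 * np.cosh(configs @ W.T + b), axis=1)

    def energy(theta):
        v = amplitudes(theta)
        return float(np.real(np.vdot(v, H @ v) / np.vdot(v, v)))

    theta0 = 0.05 * np.random.default_rng(0).standard_normal(2 * n_c)
    res = minimize(energy, theta0, method="L-BFGS-B")
    print("variational energy:", round(res.fun, 4), " exact ground state:", round(E_exact, 4))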
[1] G. Carleo and M. Troyer, Solving the Quantum Many-Body Problem with Artificial Neural Networks, Science 355, 602-606 (2017).
[2] D. Pfau, J. S. Spencer, A. G. D. G. Matthews and W. M. C. Foulkes, Ab-Initio Solution of the Many-Electron Schrödinger Equation with Deep Neural Networks, Phys. Rev. Res. 2, 033429 (2020).
Ing. Dinh Viet Tran, PhD. (viet.tran[at]savba.sk)
New methods for development, deployment and orchestration of cloud services
Applications that utilize machine learning and deep learning require significant computing power, as well as specialized accelerators such as GPUs. Additionally, distributed learning and federated learning require resources that are distributed across different data centers. The goal of this thesis is to propose a flexible and resource-efficient cloud platform for artificial intelligence that addresses these challenges. The proposed platform should automate the provisioning of resources required by applications, freeing developers from the need to manipulate underlying infrastructures, while also optimizing performance and resource utilization.
Mgr. Peter Weisenpacher, PhD. (weisenpacher.ui[at]savba.sk)
Computer modelling of flow during a road tunnel fire
Consulting Co-Supervisor: Ing. Lukáš Valášek, PhD. (lukas.valasek[at]savba.sk)
Road tunnels are an important part of international transport systems; therefore, increased attention is paid to the fire safety of tunnels. The research will focus on problems related to computer modelling of flows in a highway tunnel using the FDS software system, which allows realistic modelling and visualization of flows generated by fire and simulates the operation of tunnel safety systems. The research will also include issues related to the parallel implementation of the simulation. The calculations will be performed on an efficient computing infrastructure at the Institute of Informatics of the Slovak Academy of Sciences in Bratislava.
Keywords: computer modelling of fire, highway tunnel, fire, FDS, parallelization, HPC