PhD course in
Computer Science and Artificial Intelligence
The Department of Mathematics, Computer Science and Physics of the University of Udine hosts the PhD course in Computer Science and Artificial Intelligence in agreement with Fondazione Bruno Kessler. The course continues an outstanding tradition of computer science teaching and research at the University of Udine, building on master-level science education programmes ranked among the best in Italy by the official 2020/21 CENSIS ranking. This tradition is further enriched by the dynamic, project-oriented research carried out at Fondazione Bruno Kessler, creating an ideal environment where top students can meet excellence in both theoretical and applied research fields.
The course has been active since the XXXVII cycle (2021/22) and originates from the splitting of the PhD course in Computer Science, Mathematics and Physics. It resumes the tradition of the earlier PhD course in Computer Science, which ran for thirty years, from the first national cycle (1983/84) to the XXX cycle (2012/13).
As part of its agreement with the PhD course, Fondazione Bruno Kessler funds scholarships this year on the following topics, described by the abstracts below:
Integrating formal verification and testing for parametric software product lines
The increasing complexity and configurability of systems require new methods and tools to design and test parametric software systems and product lines. Such systems exhibit variability along several dimensions: the space of possible functional configurations, the space of deployment architectures, and their dynamic reconfiguration at run-time. This Ph.D. thesis aims at defining novel approaches to the testing, verification and validation of this class of systems by integrating formal verification techniques with software testing approaches. On the one hand, formal techniques have the potential to analyze and cover all the behaviors of a software system; on the other hand, testing techniques can ensure adequate levels of coverage even for large and complex systems. The challenge is to find new directions and methods to interleave the two approaches in the specific case of large parametric systems, in order to optimize the balance between the adequacy of behavioral coverage and the effort of the overall verification activity.
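As a toy illustration of this interplay (the feature names and the constraint below are invented for the example, not taken from any real product line), the sketch contrasts the "formal" side, which exhaustively checks an invariant over every valid configuration of a small feature model, with the "testing" side, which runs a concrete test only on a subset of configurations:

```python
from itertools import product

# Hypothetical boolean feature model of a tiny product line.
FEATURES = ["encryption", "compression", "logging"]

def valid(cfg):
    # Assumed feature-model constraint: compression requires logging.
    return not (cfg["compression"] and not cfg["logging"])

def all_valid_configs():
    """Enumerate every configuration allowed by the feature model."""
    for bits in product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, bits))
        if valid(cfg):
            yield cfg

def check_invariant(invariant):
    """'Formal' side: verify a property on ALL valid configurations."""
    return all(invariant(cfg) for cfg in all_valid_configs())

def run_tests(test, sample):
    """'Testing' side: execute a concrete test only on sampled configurations."""
    return [(cfg, test(cfg)) for cfg in sample]

# Example invariant: compressed products always keep logging enabled.
invariant = lambda cfg: not cfg["compression"] or cfg["logging"]
print(check_invariant(invariant))  # → True (holds on all 6 valid configurations)
```

In a realistic setting the exhaustive loop is replaced by a model checker over a symbolic encoding of the configuration space, while the sampled tests target the configurations the formal analysis could not discharge.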
Reconfigurable and trustworthy pandemic simulation
Simulation tools are fundamental to predict the evolution of a pandemic and to assess the quality of counter-measures, e.g. the effect of travel restrictions on the spread of the coronavirus. However, they come with two fundamental requirements. The first is the need for fast reconfiguration of the simulation, in order to describe the evolving scenarios of a pandemic. The second is the ability to produce correct and explainable results, so that they can be trusted and independently validated. The topic of this research is to devise a model-based approach able to represent at a high level the features of a generic pandemic, from which an efficient simulator can be produced. By means of formal methods, the results of the simulation are guaranteed to be correct by construction, with proofs that can be properly visualized and independently checked. The activity will be carried out as a collaboration between the Center for Health Emergencies (https://www.fbk.eu/it/health-emergencies/), which played a major role during the ongoing pandemic, and the Center for Digital Industry (https://dicenter.fbk.eu/), a leading center in model-based design.
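To make the reconfiguration requirement concrete, a minimal sketch (not the project's actual simulator) using the classic SIR compartmental model: changing a counter-measure, such as travel restrictions, amounts to re-running the same model with a different contact-rate parameter.

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One explicit-Euler step of the SIR model (S: susceptible,
    I: infected, R: recovered; beta: contact rate, gamma: recovery rate)."""
    n = s + i + r
    new_inf = beta * s * i / n * dt  # new infections in this step
    new_rec = gamma * i * dt         # new recoveries in this step
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate(s0, i0, r0, beta, gamma, days):
    state = (s0, i0, r0)
    history = [state]
    for _ in range(days):
        state = sir_step(*state, beta, gamma)
        history.append(state)
    return history

# Reconfiguration = swapping parameters: restrictions modelled (crudely)
# as a halved contact rate beta.
baseline = simulate(9990, 10, 0, beta=0.3, gamma=0.1, days=120)
restricted = simulate(9990, 10, 0, beta=0.15, gamma=0.1, days=120)
peak = lambda hist: max(i for _, i, _ in hist)
print(peak(restricted) < peak(baseline))  # → True: restrictions flatten the peak
```

A "correct by construction" version of such a simulator would additionally carry machine-checkable guarantees, e.g. that the total population is conserved at every step.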
Condition monitoring and predictive maintenance of complex industrial systems: Model-based reasoning meets Data Science
The advent of Industry 4.0 has made it possible to collect huge quantities of data on the operation of complex systems and components, such as production plants, power stations, engines and bearings. Based on such information, deep learning techniques can be applied to assess the state of the equipment under observation, to detect whether anomalous conditions have arisen, and to predict the remaining useful lifetime, so that suitable maintenance actions can be planned. Unfortunately, data-driven approaches often require very expensive training sessions, and may struggle to learn very rare conditions such as faults. Interestingly, the systems under inspection often come with substantial background knowledge on the structure of the design, the operating conditions, and the typical malfunctions. The goal of this PhD thesis is to empower machine learning algorithms to exploit such background knowledge, thus achieving higher levels of accuracy with less training data.
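One simple way background knowledge can substitute for scarce fault data is model-based residual monitoring: a physical model predicts the nominal behavior, and anomalies are flagged when measurements deviate from the prediction, with no fault examples needed for training. The sketch below uses an invented first-order thermal model of a motor purely for illustration:

```python
# Assumed background knowledge: a toy first-order thermal model of a motor,
#   temp' = -k * (temp - ambient) + c * load
# The constants and the model itself are illustrative, not a real specification.
def model_temp(prev_temp, load, ambient=20.0, k=0.1, c=0.5):
    return prev_temp + (-k * (prev_temp - ambient) + c * load)

def residual_alarms(temps, loads, threshold=5.0):
    """Flag time steps where the measured temperature deviates from the
    model's one-step prediction by more than `threshold` degrees."""
    alarms = []
    for t in range(1, len(temps)):
        predicted = model_temp(temps[t - 1], loads[t - 1])
        if abs(temps[t] - predicted) > threshold:
            alarms.append(t)
    return alarms

# Generate nominal data from the model, then inject a fault at t = 60.
loads = [1.0] * 100
temps = [20.0]
for t in range(1, 100):
    temps.append(model_temp(temps[-1], loads[t - 1]))
temps[60] += 8.0  # injected sensor/equipment fault

print(residual_alarms(temps, loads))  # → [60, 61]: the fault is detected
```

The thesis direction goes further: instead of a hand-written residual check, the learned model itself is constrained or augmented by such physical knowledge.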
Planning and scheduling with time and resource constraints for flexible manufacturing
Many application domains require the ability to automatically generate a suitable course of actions that will achieve the desired objectives. Notable examples include the control of truck fleets for logistics problems, the organization of activities in automated production sites, and the synthesis of missions carried out by unmanned, autonomous robots. Planning and scheduling (P&S) are fundamental research topics in Artificial Intelligence, and increasing attention is being devoted to the problem of dealing with time and resources. In fact, plans and schedules need to satisfy complex constraints in terms of timing and resource consumption, and must be optimal or quasi-optimal with respect to given cost functions. The Ph.D. activity will concentrate on the definition of an expressive, formal framework for planning with durative actions and continuous resource consumption, and on devising efficient algorithms for resource-optimal planning. The activity will explore the application of formal methods such as model checking for infinite-state transition systems, and Satisfiability and Optimization Modulo Theories, and will focus on practical problems emerging from the flexible manufacturing domain.
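A minimal sketch of the kind of problem involved (with invented task data, and brute-force search standing in for the SMT-based techniques the thesis would actually use): tasks with durations compete for a shared resource of bounded capacity, and the goal is a schedule of minimum makespan.

```python
from itertools import product as cartesian

# Illustrative tasks as (duration, resource demand) pairs, sharing one
# resource of bounded capacity. All numbers are made up for the example.
TASKS = [(3, 2), (2, 1), (2, 1), (1, 2)]
CAPACITY = 2
HORIZON = 8  # upper bound on start times for the brute-force search

def feasible(starts):
    """Check that at every time unit the summed demand of the tasks
    running at that instant does not exceed the resource capacity."""
    end = max(s + d for s, (d, _) in zip(starts, TASKS))
    for t in range(end):
        used = sum(r for s, (d, r) in zip(starts, TASKS) if s <= t < s + d)
        if used > CAPACITY:
            return False
    return True

def optimal_makespan():
    """Exhaustively search integer start times for the shortest schedule."""
    best = None
    for starts in cartesian(range(HORIZON), repeat=len(TASKS)):
        if feasible(starts):
            mk = max(s + d for s, (d, _) in zip(starts, TASKS))
            if best is None or mk < best:
                best = mk
    return best

print(optimal_makespan())  # → 6, matching the lower bound 12 units / capacity 2
```

Exhaustive search explodes immediately on realistic instances, which is precisely why the thesis targets symbolic methods such as Optimization Modulo Theories, where the same constraints are stated declaratively and solved by an optimizing SMT engine.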
Meta-learning for efficient 3D representations
Learning-based algorithms for 3D object description, recognition and retrieval suffer from a lack of annotated data, computationally inefficient processing pipelines, and poor generalisation ability across different application domains, such as robotic manipulation, automotive, and augmented and virtual reality. Together, these factors often hinder the adoption of 3D processing pipelines in real-world applications. The goal of this Ph.D. position is to conduct research on novel, efficient deep learning algorithms for 3D feature representation that can effectively replace traditional hand-crafted modules, ultimately improving performance, easing deployment and fostering scalability.
The PhD course in Computer Science and Artificial Intelligence will graduate students with top-level skills in the topics listed below, with links to the involved scientists on the PhD Board, also in the context of multi-disciplinary research plans: