PhD course in
Computer Science and Artificial Intelligence
The Department of Mathematics, Computer Science and Physics of the University of Udine hosts the PhD course in Computer Science and Artificial Intelligence in agreement with Fondazione Bruno Kessler. The course continues an outstanding tradition of computer science teaching and research at the University of Udine, and builds on master-level science education rated among the best in Italy by the official 2020/21 CENSIS ranking. This tradition is further enriched by the dynamic, project-oriented research carried out at Fondazione Bruno Kessler, creating an ideal environment where top students can meet excellence in both theoretical and applied research fields.
The course has been active since the XXXVII cycle (2021/22), and originates from the splitting of the PhD course in Computer Science, Mathematics and Physics. It resumes the tradition of the earlier PhD course in Computer Science, which ran for thirty years, from the first national cycle (1983/84) to the XXX cycle (2012/13).
The PhD course in Computer Science and Artificial Intelligence graduates students with top-level skills in the topics listed below, each accompanied by links to the involved members of the PhD Board, also in the context of a multi-disciplinary research plan:
Reconfigurable and trustworthy pandemic simulation
Simulation tools are fundamental to predict the evolution of a pandemic and to assess the quality of counter-measures, e.g. the effect of travel restrictions on the spread of the coronavirus. However, they come with two fundamental requirements. The first is the need for fast reconfiguration of the simulation, in order to describe the mutating scenarios of the pandemic. The second is the ability to produce correct and explainable results, so that they can be trusted and independently validated. The topic of this research is to devise a model-based approach able to represent, at a high level, the features of a generic pandemic, from which an efficient simulator can be produced. Using formal methods, the results of the simulation are guaranteed to be correct by construction, with proofs that can be properly visualized and independently checked. The activity will be carried out as a collaboration between the Center for Health Emergencies (https://www.fbk.eu/it/health-emergencies/), which played a major role during the ongoing pandemic, and the Center for Digital Industry (https://dicenter.fbk.eu/), a leading center in model-based design.
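As an illustrative sketch only (not the project's actual tooling), the kind of dynamics a high-level pandemic description could be compiled into is a compartmental model; here a minimal discrete-time SIR simulation, with a travel restriction modelled simply as a reduced contact rate:

```python
# Minimal discrete-time SIR model (illustrative; all parameter values
# are hypothetical, not calibrated to any real pandemic).
def simulate_sir(population, infected, beta, gamma, days):
    """Daily time steps over S (susceptible), I (infected), R (recovered)."""
    s, i, r = population - infected, float(infected), 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# A counter-measure such as a travel restriction can be modelled as a
# lower contact rate beta; comparing the two runs shows its effect.
baseline = simulate_sir(1_000_000, 10, beta=0.3, gamma=0.1, days=120)
restricted = simulate_sir(1_000_000, 10, beta=0.15, gamma=0.1, days=120)
peak = lambda hist: max(i for _, i, _ in hist)
```

A model-based approach would generate such a simulator automatically from a high-level description and attach correctness guarantees to it; the sketch shows only the underlying computation.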
Reverse Engineering via Abstraction
Many artifacts in the development process (requirements, specifications, code) tend to become legacy, hard to understand and to modify. This results in a lack of reuse and in additional development costs. A reverse engineering activity is necessary to understand what the system is doing. The goal of the thesis is to provide automated techniques to analyse the inherent behavior of legacy artifacts, extract interface specifications, and support re-engineering activities. The thesis will combine techniques from language learning, applicable to black-box artifacts, with formal techniques for the automated construction of abstractions in the form of extended finite state machines.
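As a hedged sketch of the trace-based flavor of such techniques (event names and traces are hypothetical), a first step many passive language-learning algorithms take is to build a prefix-tree automaton from observed executions of a black-box artifact, which state merging or active learning would then generalize:

```python
# Illustrative sketch: build a prefix-tree acceptor from observed traces.
# Real learning algorithms (e.g. state merging, Angluin-style L*) would
# generalize this automaton; here we only show the trace-to-automaton step.
def build_prefix_tree(traces):
    """Return (transitions, accepting) for a prefix-tree acceptor."""
    transitions = {}           # (state, event) -> successor state
    accepting = set()
    next_state = 1             # state 0 is the initial state
    for trace in traces:
        state = 0
        for event in trace:
            key = (state, event)
            if key not in transitions:
                transitions[key] = next_state
                next_state += 1
            state = transitions[key]
        accepting.add(state)   # each observed trace ends in an accepting state
    return transitions, accepting

def accepts(transitions, accepting, trace):
    """Check whether a trace is accepted by the learned automaton."""
    state = 0
    for event in trace:
        if (state, event) not in transitions:
            return False
        state = transitions[(state, event)]
    return state in accepting
```

For instance, from the traces `["open", "read", "close"]` and `["open", "write", "close"]` the acceptor recognizes exactly those two behaviors; an extended finite state machine would additionally carry data guards on the transitions.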
Epistemic Runtime Verification
Runtime verification is a lightweight verification technique based on the analysis of system logs. A key factor is that the internal state of the system is not observable, although partial knowledge of its behaviour may be available. The thesis will investigate the use of temporal epistemic logics (i.e. logics of knowledge and belief over time) to specify and verify hyperproperties for runtime verification. Different logical aspects, such as distributed knowledge, common knowledge, and the communication between reasoning agents, will be used to model hierarchical architectures for fault detection, identification, and prognosis. Techniques for planning in belief space will be used for the design of fault reconfiguration policies.
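To give a flavor of log-based monitoring (a deliberately simple sketch; event names are hypothetical, and a real monitor would be compiled from a temporal, possibly epistemic, formula), consider checking the response property "every request is eventually acknowledged" over a finished log:

```python
# Illustrative sketch: a monitor that inspects only the log, never the
# system's internal state, and reports requests that were never matched
# by a later "ack" (acks are matched to the oldest pending request).
def monitor_response(log):
    """Return the indices of 'request' events never acknowledged."""
    pending = []                      # indices of requests awaiting an ack
    for i, event in enumerate(log):
        if event == "request":
            pending.append(i)
        elif event == "ack" and pending:
            pending.pop(0)
    return list(pending)              # unmatched at end of log = violations
```

Epistemic extensions would reason not just about what happened, but about what each monitoring agent can *know* happened given its partial view of the log.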
Condition monitoring and predictive maintenance of complex industrial systems: Model-based reasoning meets Data Science
The advent of Industry 4.0 has made it possible to collect huge quantities of data on the operation of complex systems and components, such as production plants, power stations, engines and bearings. Based on such information, deep learning techniques can be applied to assess the state of the equipment under observation, to detect whether anomalous conditions have arisen, and to predict the remaining useful lifetime, so that suitable maintenance actions can be planned. Unfortunately, data-driven approaches often require very expensive training sessions, and may have problems in learning very rare conditions such as faults. Interestingly, the systems under inspection often come with substantial background knowledge on the structure of the design, the operating conditions, and the typical malfunctions. The goal of this PhD thesis is to empower machine learning algorithms to exploit such background knowledge, thus achieving higher levels of accuracy with less training data.
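A toy example of combining the two ingredients (the nominal model, the sensor, and all numbers below are hypothetical placeholders): background design knowledge supplies a nominal model of the equipment, and the data-driven part flags measurements whose residual from that model is a statistical outlier:

```python
import statistics

# Hypothetical nominal model from design knowledge: expected bearing
# temperature as a linear function of applied load.
def expected_temperature(load):
    return 40.0 + 0.5 * load

def detect_anomalies(samples, threshold=3.0):
    """samples: list of (load, measured_temperature) pairs.
    Returns indices whose model residual is a z-score outlier."""
    residuals = [temp - expected_temperature(load) for load, temp in samples]
    mu = statistics.mean(residuals)
    sigma = statistics.pstdev(residuals) or 1.0   # avoid division by zero
    return [i for i, r in enumerate(residuals)
            if abs(r - mu) / sigma > threshold]
```

Because the detector scores deviations from the physical model rather than raw readings, it needs no examples of the (rare) faulty behavior in its training data, which is precisely the benefit the paragraph above describes.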
Planning and scheduling with time and resource constraints for flexible manufacturing
Many application domains require the ability to automatically generate a suitable course of actions that will achieve the desired objectives. Notable examples include the control of truck fleets for logistics problems, the organization of activities of automated production sites, or the synthesis of the missions carried out by unmanned, autonomous robots. Planning and scheduling (P&S) are fundamental research topics in Artificial Intelligence, and increasing interest is being devoted to the problem of dealing with time and resources. In fact, plans and schedules need to satisfy complex constraints in terms of timing and resource consumption, and must be optimal or quasi-optimal with respect to given cost functions. The PhD activity will concentrate on the definition of an expressive, formal framework for planning with durative actions and continuous resource consumption, and on devising efficient algorithms for resource-optimal planning. The activity will explore the application of formal methods such as model checking for infinite-state transition systems, and Satisfiability and Optimization Modulo Theories, and will focus on practical problems emerging from the flexible manufacturing domain.
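One small, classical building block behind temporal planning can illustrate the flavor of the constraints involved (the events and bounds below are hypothetical): a Simple Temporal Network encodes constraints of the form l ≤ t_j − t_i ≤ u as distance-graph edges, and the network is consistent exactly when that graph has no negative cycle, which Bellman-Ford relaxation detects:

```python
# Illustrative sketch: STN consistency via negative-cycle detection.
# Each constraint lo <= t_j - t_i <= hi yields two edges:
#   (i -> j, hi)  for  t_j - t_i <= hi
#   (j -> i, -lo) for  t_i - t_j <= -lo
def stn_consistent(events, constraints):
    """constraints: list of (i, j, lo, hi) meaning lo <= t_j - t_i <= hi."""
    edges = []
    for i, j, lo, hi in constraints:
        edges.append((i, j, hi))
        edges.append((j, i, -lo))
    dist = {e: 0 for e in events}     # implicit 0-weight virtual source
    for _ in range(len(events) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # if any edge can still be relaxed, there is a negative cycle
    return all(dist[u] + w >= dist[v] for u, v, w in edges)

events = ["A_end", "B_start", "B_end"]
# B starts 5-10 min after A ends; B lasts exactly 3 min.
ok = stn_consistent(events, [("A_end", "B_start", 5, 10),
                             ("B_start", "B_end", 3, 3)])
# Adding "B must end within 6 min of A's end" is contradictory
# (B_end - A_end is forced into [8, 13]).
bad = stn_consistent(events, [("A_end", "B_start", 5, 10),
                              ("B_start", "B_end", 3, 3),
                              ("A_end", "B_end", 0, 6)])
```

Satisfiability and Optimization Modulo Theories generalize this picture to richer theories (continuous resources, cost functions) than pure difference constraints.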
AI-based multimodal geospatial data processing for large-scale scene understanding
The EU promotes the creation of data spaces in order to maximize the impact that gathered data can have on society and the environment. At the same time, municipalities and regional governments push for new acquisitions of multimodal geospatial data on their territories (cities, forests, rural areas, etc.). However, the integration of such data, coming at different resolutions, with different radiometries and from different acquisition systems, is one of the biggest obstacles to proper data exploitation. Therefore, the goals of the proposed PhD are: i) to study, develop and validate innovative solutions to fuse geospatial data (such as 3D point clouds, multi-/hyperspectral orthoimages, etc.) of the built/vegetated environment and extract metric information; ii) to conduct research on novel and efficient algorithms for 3D data semantic segmentation and classification using integrative AI approaches that can effectively replace traditional hand-crafted methods to ultimately improve performance and interpretability, handle unbalanced classes, ease deployment and foster scalability; iii) to analyze, realize and demonstrate new methods to create digital twins of our environment using multimodal geospatial data.
Pareto-based optimization methods to support one-click deployments of EdgeAI application flows
Applications that rely on the most modern sensing devices and technologies and combine complex artificial intelligence tasks are now mainstream. It is sufficient to say, “OK Google/Alexa/Siri, switch on the heating system when the temperature is below 18 °C” to appreciate the power of the IoT in combination with an Artificial Intelligence engine. However, the typical approach to enabling intelligent applications is cloud-centric: the intelligence (a home assistant) is hosted in the cloud infrastructure, and the sensor data collected by IoT devices (a microphone array and a temperature sensor) flow from the cyber-physical system to a remote endpoint to be processed. Finally, the correct command is transmitted to the IoT actuator (a radiator thermostat). Alternative approaches are possible, for instance by introducing a more dynamic and configurable intermediate layer between the IoT and the Cloud sides, usually dubbed the Edge layer. Generally, a configurable edge layer reduces the required bandwidth and latency and improves users’ privacy. Moreover, if portions of the application intelligence could be hosted in this layer, the lifetime of IoT devices would be extended. However, reconfiguring and deploying an end-to-end processing flow that involves the three aforementioned architectural layers poses major challenges. Selecting a more efficient detection algorithm from a rich library of machine learning algorithms, pushing the “deploy” button of an application dashboard, and seeing the selected algorithm up and running more effectively (according to a given metric) on one’s smart home devices is, in most cases, still a dream. Moreover, depending on the hardware capabilities, the application requirements in terms of bandwidth and latency, and the accuracy required of the machine learning task, different end-to-end configurations are possible, all sub-optimal and possibly non-dominated in the Pareto sense.
The subject of this PhD is to investigate and propose novel optimization and assessment methodologies to efficiently sample such a complex design space in target application sectors such as home, industry, manufacturing and farming. The reference technological environment covers (but is not limited to) embedded device software engineering (MicroPython, Mbed OS, C languages and dialects, etc.), machine learning frameworks deployable on tiny devices (TinyML, TensorFlow Lite, etc.), edge-based frameworks (Eclipse Kura, EdgeX Foundry, etc.) and cloud-based IoT platforms and services with AI support.
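The notion of Pareto non-dominance mentioned above can be made concrete with a small sketch (configuration names and objective values are invented for illustration): a deployment candidate is kept only if no other candidate is at least as good on every objective and strictly better on one:

```python
# Illustrative sketch: extracting the Pareto front from a sampled design
# space. Objectives, all minimized here: latency (ms), device energy per
# inference (mJ), and model error rate. All values are hypothetical.
def dominates(a, b):
    """a dominates b: no worse on every objective, strictly better on one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    """candidates: dict name -> tuple of objective values."""
    return {name for name, score in candidates.items()
            if not any(dominates(other, score)
                       for other_name, other in candidates.items()
                       if other_name != name)}

configs = {
    "cloud_only":  (120.0, 5.0, 0.02),   # high latency, low device energy
    "edge_hybrid": (30.0, 12.0, 0.03),
    "device_only": (10.0, 40.0, 0.08),   # low latency, high device energy
    "bad_config":  (130.0, 45.0, 0.09),  # dominated by every other option
}
```

The three trade-off configurations survive as non-dominated, while the dominated one is pruned; a real design-space exploration would additionally decide *how* to sample candidates efficiently, which is the core of the proposed research.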
Artificial intelligence and big data analysis in cancer research
In recent years, oncology has seen improved treatment options and patient outcomes. This shift is mainly due to the rise of precision and personalized medicine in daily patient care. Precision and personalized oncology are based on the discovery of the molecular mechanisms underlying tumor onset and progression, and consequently on the design and discovery of new drugs. These novel drugs are prescribed according to a wide range of molecular signatures of tumor cells or tumor DNA collected from patients.
This novel scenario sets new challenges and opportunities for the collection and analysis of larger amounts of non-homogeneous data across different time points in a patient's history (diagnosis, treatments and follow-ups). This fellowship will offer several alternative topics, including the analysis of genomic and clinical data and the development of IT tools and platforms in the cancer research field.
Artificial Intelligence and Historical Documents: Character Recognition, Style Classification, Manuscript Dating
The research project, which will also have a Humanities scholar with expertise in ancient scripts as a co-supervisor, will focus on experimenting with innovative deep learning techniques for the analysis and recognition of ancient manuscript writings, in order to identify the most suitable features and workflows for the automatic detection of hand and place similarity.
See the official PhD page on the University of Udine website for calls and admission.
A list of the research topics of the PhD members is detailed here.