PhD course in
Computer Science and Artificial Intelligence
The Department of Mathematics, Computer Science and Physics of the University of Udine hosts the PhD course in Computer Science and Artificial Intelligence in agreement with Fondazione Bruno Kessler. The course continues an outstanding tradition of computer science teaching and research at the University of Udine, building on master-level science education programmes ranked among the best in Italy by the 2020/21 official CENSIS ranking. This tradition is further enriched by the dynamic, project-oriented research carried out at Fondazione Bruno Kessler, creating an ideal environment where top students can meet excellence in both theoretical and applied research fields.
The course has been active since the XXXVII cycle (2021/22) and originates from the splitting of the PhD course in Computer Science, Mathematics and Physics. It resumes the tradition of the earlier PhD course in Computer Science, which ran for thirty years, from the first national cycle (1983/84) to the XXX cycle (2012/13).
The PhD course in Computer Science and Artificial Intelligence will train students to the highest level in the topics listed below, each with links to the involved scientists belonging to the PhD Board, also in the context of a multi-disciplinary research plan:
External scholarships (FSE Call)
1) Extended reality for the cultural heritage and the promotion of the Friuli Venezia Giulia area
The research activity will explore the different technologies that fall under extended reality (virtual, augmented and mixed reality) and design solutions for the enhancement of cultural heritage and the promotion of the Friuli Venezia Giulia territory.
Advisor: Fabio Buttussi
2) Diagnosis of dysphony and laryngeal pathologies using advanced numerical models of phonation and AI techniques
The research program aims to study numerical models of the phonation organs, and in particular of the vocal cords, for the design of medical devices aimed at the diagnosis, treatment and rehabilitation of laryngeal pathologies and vocal cord dysfunctions. The study will develop new integrated solutions drawing on digital imaging diagnostics and acoustic analysis of the voice, supported by numerical models of the phonation organs. As part of the study of investigation methods, Artificial Intelligence will be integrated with phonation models and analysis algorithms, both to improve the fit of the models to clinical observations and to produce diagnoses based on the analysis results and on the interpretation of the data that the models can in turn provide. From a methodological point of view, the project will benefit from access to Big Medical Data databases and repositories, both public and private and mainly regional, and from the possibility of exploiting the computing resources of the Cloud High Performance Computing (HPC) Data Center for the development of complex numerical models. The project will also analyze how such a diagnostic system could be integrated within regional healthcare and research structures.
Advisor: Carlo Drioli
3) Computer Vision for Environmental River Analysis (COVER)
Friuli Venezia Giulia is crossed by several rivers, including the Tagliamento, the Isonzo and the Natisone. These rivers are important for the maritime sector and for the strategic position of the region. The Tagliamento provides irrigation for agriculture and habitat for various fish species. The Isonzo (Soča in Slovenian) forms part of the border between Italy and Slovenia, and its valley contains historic battle sites from World War I. The Natisone valley contains archaeological sites of ancient Roman settlements. These rivers and their valleys are essential for the transport, trade and development of the Friuli Venezia Giulia region. Discovering and preserving the delicate balance of riverine fish habitats while driving regional development is a paramount challenge in our region. To achieve this, we rely on riverine fish habitat models that predict suitable locations for different species based on their relationship with environmental conditions. These models integrate a comprehensive understanding of the river's hydromorphology, the biological needs of the target species, and the intricate connection between habitat availability and flow conditions.
However, traditional methods of surveying these habitats are slow, complex, and costly. Even with extensive efforts, they only provide a limited understanding of the upstream and lateral river processes at play. The study aims to exploit drones and computer vision techniques to gather and transform data relevant to habitat modeling, analyzing sediment distribution and other crucial environmental factors. Despite their potential, these techniques have yet to be fully exploited for assessing river channel dynamics, particularly for planning and management purposes.
The project will explore and introduce deep learning algorithms to extract valuable information from remote sensing data, from classifying landscapes to mapping plant communities and floods. This will allow researchers from other fields to swiftly collect and analyze river data in a semi-automated, non-invasive manner. The project will develop techniques to exploit vast volumes of images, so that the algorithms can unveil intricate details about habitat types, physical parameters, and environmental conditions.
This knowledge will enable the creation of digital river twins and detailed habitat maps, the identification of critical areas, and the assessment of the impact of changes in water conditions. The project proposal actively responds to trajectory 2, SMART MOBILITY (intelligent technologies, systems and solutions for ships, shipyards, ports and their land connections), by acting on three relevant points:
1) Data-driven life cycle design: The project aims to monitor and analyze riverine fish habitats to maintain their suitability and contribute to regional development. This involves the use of riverine habitat models, which predict the suitability of a location for a species or group of species based on its observed relationship to environmental conditions. These models integrate a hydromorphological description of the river, a biological model of the target species and a description of the relationship between habitat availability and flow conditions. The use of remote sensing techniques and computer vision algorithms can help analyze and extract valuable information from large volumes of images, contributing to the development of digital river twins. This data-driven approach can improve safety, efficiency and functionality while reducing environmental impact and material use.
2) Development of digital twins integrating river models with ports/resorts/etc.: Recent project advances in the use of drones and image analysis for data acquisition and transformation are critical for the development of digital twins of the regional port/interport system and its connections. These digital twins, integrated with the maritime environmental system and its monitoring network, can optimize the management of the entire system, also reducing the environmental impact.
3) Sharing: The project's attention to the monitoring and analysis of riverine fish habitats can also contribute to the development of boats for shared tourism purposes. Information extracted by computer vision algorithms can be used to identify critical habitat areas and assess the impact of changes in water conditions. This can help in the design of nautical vehicles, mainly electric, suitable for shared use, accelerating the transition towards the MaaS (Mobility as a Service) paradigm for nautical tourism. The project therefore combines technological innovation and environmental conservation, contributing to the sustainable development of the Friuli Venezia Giulia region.
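As a concrete illustration of the habitat suitability models mentioned above, the sketch below computes a composite suitability index from species preference curves; the curve shapes, parameter values and grid of river cells are invented for illustration and are not taken from the project.

```python
import numpy as np

# Hypothetical preference curves for a target fish species: each maps an
# environmental variable to a suitability score in [0, 1]. In a real model
# these curves would be fitted to field observations of the species.
def depth_suitability(d_m):
    # Assumed preference peaking around 0.6 m of water depth.
    return np.exp(-((d_m - 0.6) / 0.4) ** 2)

def velocity_suitability(v_ms):
    # Assumed preference peaking around 0.3 m/s of flow velocity.
    return np.exp(-((v_ms - 0.3) / 0.25) ** 2)

def habitat_suitability(depth_m, velocity_ms):
    """Composite habitat suitability index: geometric mean of the curves."""
    return np.sqrt(depth_suitability(depth_m) * velocity_suitability(velocity_ms))

# Evaluate a few river cells, each described by (depth, velocity) at a flow.
depths = np.array([0.2, 0.6, 1.1])
velocities = np.array([0.1, 0.3, 0.9])
hsi = habitat_suitability(depths, velocities)
# Cells with an index close to 1 are predicted to be suitable for the species.
```

Mapping such an index over depth and velocity fields extracted from drone imagery is what links the computer vision output to the habitat models described above.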
Advisor: Niki Martinel
4) The role of new technologies in the Green Deal: more efficient models for Artificial Intelligence and Deep Learning
Artificial Intelligence and Deep Learning systems are now a fundamental component for companies creating new technologies and implementing services and products for the market. These systems are currently based on extremely complex models, characterized by an enormous number of parameters, and require a significant amount of energy to operate.
To address this issue, the project aims to study and develop new, lighter and less energy-demanding models without compromising the performance and accuracy of the results. In particular, the project will analyze and develop techniques for model optimization, data compression algorithms, quantization and precision-reduction techniques, as well as intelligent resource-management strategies. The primary objective of these initiatives is to mitigate environmental impact, offering direct support for achieving the objectives of the Green Deal promoted by the FVG region.
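As one concrete example of the quantization and precision-reduction techniques mentioned above, the sketch below applies simple post-training 8-bit affine quantization to a weight matrix; the random weights and the exact quantization scheme are illustrative assumptions, not the project's actual method.

```python
import numpy as np

# Quantize a float32 weight matrix to uint8 plus a scale and zero point,
# cutting storage by 4x at the cost of a bounded rounding error.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(256, 256)).astype(np.float32)

def quantize_uint8(w):
    """Affine quantization: w is approximated by scale * (q - zero_point)."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0
    zero_point = round(-w_min / scale)
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

q, scale, zp = quantize_uint8(weights)
reconstructed = dequantize(q, scale, zp)
max_err = float(np.abs(weights - reconstructed).max())
# Storage drops from 4 bytes to 1 byte per weight; the worst-case
# reconstruction error is half a quantization step (scale / 2).
```

The same idea, applied per layer and combined with pruning or distillation, is what makes large models cheaper to store and run.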
Advisor: Giuseppe Serra
External scholarships (PNRR call 117)
1) Integration of artificial intelligence systems based on large language models, code interpreters and generative AI with data analysis solutions
The research will focus on the study and analysis of Artificial Intelligence techniques and methods, with particular reference to generative AI and to techniques that make use of large language models and code interpreters applied to data analysis solutions. In particular, generative AI techniques will be studied for the creation of new learning models capable of generating new content, such as text, images, music or video. Furthermore, the research will study large language models (such as ChatGPT) which, trained on large amounts of textual data, are able to understand and generate human-like language. Training a large language model involves exposing it to a large body of text: the model learns to predict the next word in a sentence based on the context provided by the previous words. This process allows the model to capture grammar, syntax and even semantic relationships between words.
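The next-word prediction objective described above can be illustrated, in a vastly simplified form, by a bigram model that "learns" by counting word co-occurrences; the toy corpus below is invented for illustration and real language models replace the counting with a neural network over long contexts.

```python
from collections import Counter, defaultdict

# Toy corpus, already tokenized into words.
corpus = (
    "the model learns to predict the next word . "
    "the model learns from data . "
    "the next word depends on the context ."
).split()

# "Training": accumulate next-word statistics for each preceding word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word following `word`."""
    return counts[word].most_common(1)[0][0]

# After "training", the model has picked up simple distributional patterns:
print(predict_next("model"))  # "learns"
print(predict_next("next"))   # "word"
```

A large language model does the same job with a context of thousands of tokens and billions of learned parameters instead of a count table.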
Company: beanTech s.r.l.
Advisor: Gian Luca Foresti
2) Industrial application of “copilot” for the purpose of parameterizing recipes of industrial systems (e.g. tuning algorithms on new productions)
The research project aims to study and implement a co-pilot based on AI algorithms capable of guiding an operator in the design and development of industrial solutions for the analysis and processing of complex images. The goal is to develop a method for adapting a neural network, trained on the data of a particular industrial process, to a new process, automatically modifying the parameters of the network based on the characteristics and performance of the new process. The developed system can be tested for the study and analysis of new approaches to existing problems, for the definition and choice of parameters in new application contexts, and for the possible extension of functionality in existing application systems. The project includes: a phase of collection and analysis of data relating to various industrial processes; an analysis of the state of the art on the main algorithms for transfer learning and for the automatic tuning of the intrinsic parameters of a neural network; the development of a neural network model capable of learning from different types of data and processes, starting from a structure and a set of initial parameters fixed a priori; and the development of a training algorithm that tunes the developed network, adapting it to the new process starting from the base model.
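The adaptation idea described above can be sketched, under strong simplifying assumptions, as warm-started fine-tuning: a model fitted on data from one process is adjusted with a few gradient steps on a small sample from a new process. The linear model and the synthetic datasets below are illustrative stand-ins for the neural networks and industrial data the project targets.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear(X, y):
    # Least-squares fit: stand-in for full training on the original process.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def fine_tune(w, X, y, lr=0.05, steps=200):
    # Warm-started gradient descent on the new process's small dataset.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Process A: plenty of data; process B: same structure, shifted parameters.
X_a = rng.normal(size=(500, 3))
y_a = X_a @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.01, 500)
X_b = rng.normal(size=(20, 3))          # only a few samples from process B
y_b = X_b @ np.array([1.2, -1.8, 0.4]) + rng.normal(0, 0.01, 20)

w_a = fit_linear(X_a, y_a)              # "pretrained" parameters
w_b = fine_tune(w_a.copy(), X_b, y_b)   # adapted to the new process
```

Starting from the pretrained parameters rather than from scratch is what lets the small process-B sample suffice, which is the core of the transfer learning setting the project studies.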
Company: beanTech s.r.l.
Advisor: Gian Luca Foresti
3) Study and design of artificial intelligence algorithms for quality control of production processes
The research project aims to study and design an intelligent system based on machine vision algorithms able to recognize anomalies and defects in specific components made by industries operating in the furniture and automotive sectors, where the identification of “defective” products, even in small percentages, currently involves discarding the entire production batch, with very high economic repercussions for the company. The research activities will follow a path divided into several phases, starting from the study and analysis of the state of the art, to identify the best artificial intelligence algorithms in the field of vision to be used or adapted for the set purposes, up to the prototype development of algorithms capable of processing data acquired by heterogeneous sensors (traditional cameras, depth sensors, 3D scanners, etc.), also supported by robotic automation systems. The expected results will have to demonstrate the effectiveness and robustness of the developed algorithms for their subsequent introduction and use in industrial operating environments, where quality control increasingly needs support from innovative artificial intelligence systems.
Company: Eye-Tech s.r.l.
Advisor: Niki Martinel
4) Artificial intelligence for decision support in Pathological Anatomy
The main objective concerns the study of methodologies and techniques for decision support in Pathological Anatomy, with particular but not exclusive reference to machine learning techniques that help simplify the work of pathologists, for example by prioritizing work lists, identifying suspicious areas in tissue, simplifying the quantification of immunohistochemical markers, and supporting the traceability of samples in the laboratory workflow.
Advisor: Vincenzo Della Mea
External scholarships (PNRR call 118)
1) Computer vision for tracking wildlife in uncontrolled environments
Wildlife monitoring has become increasingly important for conservation purposes, particularly in remote or hard-to-access areas where traditional methods, such as manual counting or camera traps, are not practical. Computer vision techniques have emerged as a powerful tool for automating this process by identifying and tracking animals in video footage. However, these approaches still face significant challenges due to the variability of lighting conditions, pose changes and backgrounds present in realistic scenarios. To overcome these limitations, the primary objective is to develop new techniques to identify animals in images and videos captured in natural habitats with minimal supervision. To achieve this, we propose a two-step approach: we first introduce large-scale self-supervised learning solutions that use label-free images to learn generic features applicable to different scenarios, followed by a possible fine-tuning step on a limited number of labeled datasets adapted to particular species or environments. Our overall goal is to reduce the reliance on expensive manual labeling while enabling efficient deployment of state-of-the-art models in real-world use cases. We expect the proposed solutions to outperform current approaches that rely solely on fully supervised training or on unimodal feature extraction alone. Additionally, we plan to evaluate robustness against common sources of uncertainty faced by field operators or autonomous systems that collect media assets, such as variable lighting, occlusion and motion blur. Finally, by sharing the knowledge gained in the course of this endeavor with a wider audience spanning multiple disciplines (particularly biology), we hope to stimulate a thoughtful dialogue about the potential ethical implications of the widespread adoption of intelligent monitoring equipment in unregulated environments.
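A minimal sketch of the two-step approach on synthetic data, with PCA standing in for the self-supervised feature learner and a nearest-centroid classifier playing the role of the label-efficient adaptation step; all data, dimensions and design choices here are illustrative assumptions, not the project's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images" from two animal classes; labels exist but step 1
# never uses them.
n_per_class, dim = 200, 50
mu0 = rng.normal(0, 2, dim)
mu1 = rng.normal(0, 2, dim)
class0 = mu0 + rng.normal(0, 1, (n_per_class, dim))
class1 = mu1 + rng.normal(0, 1, (n_per_class, dim))
unlabeled = np.vstack([class0, class1])

# Step 1: "pretraining" - learn a low-dimensional feature space from the
# unlabeled pool (PCA via SVD stands in for self-supervised learning).
mean = unlabeled.mean(axis=0)
_, _, vt = np.linalg.svd(unlabeled - mean, full_matrices=False)

def features(x):
    return (x - mean) @ vt[:5].T  # project onto the top 5 components

# Step 2: adaptation with only 3 labeled examples per class:
# nearest-centroid classification in the pretrained feature space.
c0 = features(class0[:3]).mean(axis=0)
c1 = features(class1[:3]).mean(axis=0)

def classify(x):
    f = features(x)
    d0 = np.linalg.norm(f - c0, axis=-1)
    d1 = np.linalg.norm(f - c1, axis=-1)
    return (d1 < d0).astype(int)  # 0 -> class0, 1 -> class1

# Accuracy on held-out samples that were never labeled.
acc = np.mean(np.concatenate([classify(class0[3:]) == 0,
                              classify(class1[3:]) == 1]))
```

The point of the sketch is the division of labor: almost all of the discriminative structure is learned without labels, so only a handful of labeled examples per species is needed afterwards.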
Advisor: Niki Martinel
2) Machine Learning for decision support in the interpretation of images in anatomy
Microscope images can be acquired with dedicated scanners, which produce images – called digital slides or whole-slide images (WSI) – at typical resolutions of 0.2-0.5 micron/pixel, on samples on the order of several square millimeters to centimeters. The result is gigapixel images, rich in information that, precisely because of the size of the images, is still not fully exploited today. For the same reason, systematic digitization is still rarely carried out, even if some laboratories, or entire regional networks of laboratories, are starting full digitization processes. The sector began its strong development late compared to other medical specialties, precisely because the size of the images made their processing too complex for a long time, but it is now starting to produce results of scientific interest from both the informatics and the clinical point of view.
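A quick back-of-the-envelope computation shows why such images reach gigapixel sizes; the 20 mm × 15 mm sample size is assumed here only for illustration, while the 0.25 micron/pixel resolution falls in the range quoted above.

```python
# Whole-slide image of an assumed 20 mm x 15 mm tissue sample,
# scanned at 0.25 micron/pixel.
width_px = round(20e-3 / 0.25e-6)     # 80,000 pixels across
height_px = round(15e-3 / 0.25e-6)    # 60,000 pixels down
gigapixels = width_px * height_px / 1e9
bytes_rgb = width_px * height_px * 3  # uncompressed 8-bit RGB
print(f"{width_px} x {height_px} px = {gigapixels:.1f} Gpixel, "
      f"~{bytes_rgb / 1e9:.1f} GB uncompressed")
```

At these sizes a single slide cannot be processed whole; pipelines typically work on small tiles, which is part of what made the field's development lag behind other imaging specialties.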
Advisor: Vincenzo Della Mea