The following project opportunities are open for Scottish businesses or public sector organisations interested in collaborating with Scottish Universities and benefiting from an applied research project with a doctoral student. Funding is available through our current call for funding. If any of the following projects are of interest to you or your organisation, please get in touch.
Economic feasibility and environmental impacts of bioenergy in supporting net-zero energy building (NZEB+Bio) in the UK
A net-zero energy building (NZEB) consumes no more energy than the renewable energy generated onsite or elsewhere. NZEBs are expected to play an important role in mitigating greenhouse gas (GHG) emissions and have received significant attention in recent years. Biomass accounts for around 12% of the world’s renewable energy resources, and distributed bioenergy production serves as a potential way of fulfilling the NZEB target.
It is important to understand the economic feasibility and environmental impacts of bioenergy in the design of NZEBs. This project will design a novel configuration of bioenergy-supported NZEB and will determine the profitability and carbon footprint of that configuration using big-data-supported cost-benefit analysis and life cycle assessment. The results will enable policymakers to make informed decisions for the fulfilment of NZEBs in the UK.
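The NZEB definition above reduces to a simple annual energy balance. As a minimal sketch, with purely illustrative consumption and generation figures (all values are invented for the example):

```python
# Hypothetical annual figures (kWh/year), for illustration only.
consumption = 42_000   # building's total annual energy use
onsite_pv = 18_000     # renewable generation on site
bioenergy = 26_000     # contribution from distributed bioenergy

# NZEB condition: net annual balance must not exceed zero.
net = consumption - (onsite_pv + bioenergy)
print(f"Net annual balance: {net} kWh")
print("NZEB target met" if net <= 0 else "NZEB target not met")
```

A cost-benefit analysis of the kind the project proposes would attach prices and emission factors to each of these terms.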
The project is looking for an industry sponsor in sectors including (but not limited to) sustainable/green building development and design, bioenergy technology development, or distributed bioenergy application, that could potentially provide input data on the design of a bioenergy-supported net-zero energy building (NZEB). The partnership will enable the PhD candidate to receive training from an industry supervisor and to design a bioenergy-supported NZEB configuration driven by future building industry standards and market demands.
Recovering losses: transfer and dictionary learning for restoring damaged radar/RF data
Radar and RF data can be damaged or partially lost during acquisition. Novel techniques from the machine learning community can potentially help restore these missing data by manipulating data and knowledge previously acquired (much as people can recognise a known or expected face even if part of it is covered by a hat), and so maintain acceptable performance.
The student will perform a mix of software development and experimental validation work in this project. Novel implementation and adaptation of these techniques to the specific format and characteristics of radar data will be needed, and these will have to be validated with experimental work using the radar and software defined radio platforms available in the research group.
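As a toy sketch of the dictionary-learning idea, the fragment below restores missing samples of a 1-D signal by fitting coefficients of a fixed cosine dictionary using only the observed samples; real radar data and learned dictionaries would be far richer, and the signal here is synthetic:

```python
import numpy as np

# Toy restoration: fit a fixed cosine (DCT-like) dictionary to the observed
# samples only, then evaluate it everywhere to fill in the missing samples.
rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
signal = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n)

mask = rng.random(n) > 0.3            # ~30% of samples "lost"
observed = signal[mask]

k = 16                                # low-frequency dictionary atoms
D = np.cos(2 * np.pi * np.outer(t, np.arange(k)) / n)   # n x k dictionary

# Least-squares fit on observed rows; reconstruct on the full index set.
coef, *_ = np.linalg.lstsq(D[mask], observed, rcond=None)
restored = D @ coef

err = np.max(np.abs(restored - signal))
print(f"max reconstruction error: {err:.3e}")
```

Because the synthetic signal lies exactly in the span of the dictionary, recovery here is essentially exact; the research challenge is adapting such schemes to the format and statistics of real radar/RF data.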
Mining Arguments from Natural Language Text
Giving machines the ability to understand natural language has been an AI goal for decades. A recent research direction in this area has focussed on “Argument Mining”. This is the automatic identification, extraction, and reuse of arguments from textual resources.
This project will involve a detailed study of the structure of natural language arguments from the industrial partner's domain, with the aim of devising new and effective computational mining techniques. The successful candidate will be expected to further focus their project, and may choose, for example, to concentrate on the effective application or extension of existing natural language processing or machine learning techniques within the argument mining domain of the industrial partner.
The core research themes of this project would be to:
- develop & evaluate automated argument mining techniques that can be applied to real-world problems
- extend extant tools for manual argument analysis through the addition of automated mining features so that they can be applied at scale to the creation of training data for supervised machine learning approaches
- research novel techniques for visualising and presenting mined argumentative data to support sense-making of the target domain.
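To make the identification step concrete, here is a deliberately naive sketch that spots argument components via discourse markers ("because" often signals a premise, "therefore" a conclusion). Real argument mining uses trained models; this only illustrates the input/output shape of the task, and the example sentences are invented:

```python
# Naive marker-based argument component spotting (illustration only).
PREMISE_MARKERS = ("because", "since", "given that")
CONCLUSION_MARKERS = ("therefore", "thus", "hence")

def label_sentence(sentence: str) -> str:
    """Label a sentence as premise, conclusion, or none via marker lookup."""
    s = sentence.lower()
    if any(m in s for m in CONCLUSION_MARKERS):
        return "conclusion"
    if any(m in s for m in PREMISE_MARKERS):
        return "premise"
    return "none"

text = [
    "The product is reliable because it passed every stress test.",
    "Therefore, we should ship this release.",
    "The office is closed on Sundays.",
]
for s in text:
    print(label_sentence(s), "-", s)
```

A supervised approach trained on the manually analysed data described above would replace the marker lists with learned features.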
Contact the project supervisor Dr Simon Wells for more information.
A unified approach based on semantic models and continuous deep learning to data uncertainty and inconsistency in smart IoT systems
Smart IoT-based applications, such as the smart city and the smart factory, are characterized by sensor-driven technology that tends to produce huge volumes of data at increasing velocity. The data produced by these applications are mostly used to support organisation, planning, interpretation and decision-making activities. However, these data come with a number of quality issues that collectively result in uncertainties and inconsistencies.
In this project, we aim to innovatively integrate semantics-based data modelling and analysis with continuous deep learning to provide a novel effective solution to the above problem.
The semantic data model will provide a machine-understandable foundation for the IoT data and its analysis, and will be able to produce near-real-time solutions for the detection and correction of IoT data uncertainties. However, such a model may be too static and imprecise to cope with the highly dynamic nature of IoT systems and the data they generate. Therefore, we propose to use deep learning to support the continuous evolution of the semantic model and its data analysis algorithms.
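As a minimal sketch of the static-rule-plus-continuous-adaptation idea: a plausibility check starts from fixed bounds supplied by the semantic model and tightens itself with online statistics (Welford's algorithm) as data arrives. The sensor, bounds, and readings below are all invented for illustration:

```python
# Sketch: a semantic range check whose bounds adapt as data arrives.
class AdaptiveRangeCheck:
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi      # static bounds from the semantic model
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> bool:
        """Return True if x is plausible; fold valid readings into the stats."""
        ok = self.lo <= x <= self.hi
        if self.n >= 10:               # once warmed up, also apply a 3-sigma test
            std = (self.m2 / (self.n - 1)) ** 0.5
            ok = ok and abs(x - self.mean) <= 3 * std
        if ok:                         # learn only from readings judged valid
            self.n += 1
            d = x - self.mean
            self.mean += d / self.n
            self.m2 += d * (x - self.mean)
        return ok

check = AdaptiveRangeCheck(lo=-20.0, hi=50.0)   # e.g. an outdoor temperature sensor
readings = [18.2, 18.4, 18.1, 18.3, 18.5, 18.2, 18.4, 18.3, 18.1, 18.2, 90.0]
flags = [check.update(x) for x in readings]
print(flags)   # the final 90.0 reading fails the static bound
```

In the project, a deep learning component would play the role of the adaptive statistics, evolving the semantic model itself rather than a single threshold.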
Collaboration sought: We are looking for an industrial partner in the following area(s):
- 1. Provider of smart IoT applications, e.g. smart city, smart building, smart factory, smart transport, smart vehicle, etc;
- 2. Developer of smart IoT Applications, e.g. smart city, smart building, smart factory, smart transport, smart vehicle, etc;
- 3. Company specializing in data modeling and analysis;
- 4. Company specializing in smart sensors, IoT networks and devices;
Contact the project supervisor Prof Xiaodong Liu for more information.
Artificial Intelligence Based Communication System for Collaborative and Fault-tolerant Multicast Music Distribution
Advances in data communication make it easy today for a group of users to work remotely and collaboratively, producing rich multimedia content and distributing it to a large audience over the Internet. IP multicast constitutes an effective communication method that saves both network bandwidth and processing overhead, especially when different sources are involved. However, real-time communications are highly sensitive to packet loss. This is especially the case for live music concerts. To address this issue, several traffic engineering and fault-tolerance approaches have been devised. These techniques include, but are not limited to, audio/video compression, network buffer management, queuing algorithms, and traffic classification and prioritisation.
Artificial intelligence could bring another level of improvement to the reliability of the transmission, especially when multiple sources are involved in a single broadcast application. By monitoring the communication pattern and the network performance, artificial intelligence processes could be introduced into the communication framework to address any audio/video quality degradation or loss. A possible solution consists of creating virtual packets inside the network infrastructure, or injecting artificially made ones at the user's end, to replace missing critical data packets.
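A minimal sketch of the replacement-packet idea: at the receiver, a lost audio packet is synthesised from its neighbours (here by simple linear interpolation; a learned model would be far more capable). The packet format and values are invented for the example:

```python
import numpy as np

# Conceal a lost packet (None) by averaging its two neighbours.
def conceal(packets: list) -> list:
    out = list(packets)
    for i, p in enumerate(out):
        if (p is None and 0 < i < len(out) - 1
                and out[i - 1] is not None and out[i + 1] is not None):
            out[i] = (np.asarray(out[i - 1]) + np.asarray(out[i + 1])) / 2.0
    return out

stream = [np.array([0.0, 0.1]), None, np.array([0.4, 0.5])]   # middle packet lost
repaired = conceal(stream)
print(repaired[1])
```

The AI component the project envisages would, in effect, replace the averaging step with a model trained on the musical content and observed network behaviour.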
This thesis project therefore aims to explore how artificial intelligence and deep learning techniques could be applied to improve the reliability and quality of multicast distributed concerts, where a set of musicians collaborate remotely to record or play an album. The newly proposed techniques could be embedded into the advanced audio-visual streaming technology LOLA, which has been developed by Edinburgh Napier University and tested with musicians in Edinburgh, London, and Boston [https://www.napier.ac.uk/about-us/news/word-first-for-transatlantic-real-time-album-recording].
Collaboration sought: A multimedia content publisher or distributor is required. Their expertise in audio/video compression, transmission over IP network will help to develop new reliability and quality of service techniques to broadcast real time multimedia content to a large audience.
Contact the project supervisor Dr Imed Romdhani for further information.
Smart algorithms to solve large-scale optimisation problems
Optimisation problems can be found everywhere. Examples include: finding good parameters for a model or process, scheduling and logistics, resource allocation, or finding the shortest paths for a vehicle. Sometimes there is more than one goal (such as improving both monetary cost and efficiency), in which case the optimisation problem is about finding the trade-off between these goals so that an informed choice of solution can be made. These problems are usually also rooted in an underlying data set capturing the specifics of the application (e.g. databases of orders that need to be satisfied, resource demand over time, or regional-scale maps of locations).
The core research themes of this project would be to:
(1) devise methods to intelligently search through possible answers to an optimisation problem, exploiting what human experts already know about the problem and, specifically for large-scale problems, breaking the space of possibilities down to make it easier to solve
(2) develop approaches for communicating the answers to large-scale problems in an intuitive way
(3) research ways of explaining why particular solutions were chosen.
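Theme (1) can be illustrated with the simplest possible search method. The sketch below hill-climbs over bit strings on a stand-in objective (count the ones); a real application would substitute its own evaluation function and a smarter search:

```python
import random

# Stand-in objective: maximise the number of ones in a bit string.
def evaluate(bits):
    return sum(bits)

def hill_climb(n_bits=20, steps=500, seed=1):
    """Repeatedly propose single-bit flips, keeping non-worsening moves."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        i = rng.randrange(n_bits)          # propose flipping one bit
        neighbour = list(current)
        neighbour[i] ^= 1
        if evaluate(neighbour) >= evaluate(current):
            current = neighbour            # accept non-worsening moves
    return current

best = hill_climb()
print(evaluate(best), "ones out of 20")
```

Breaking a large problem down, as the theme describes, would amount to running searches like this over smaller subspaces and recombining the results.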
Contact the project supervisor Sandy Brownlee firstname.lastname@example.org for more information.
A Framework for Generating Explanations of Large-Volumes of Data to assist Decision Making
Decision making based on large volumes of heterogeneous data, such as financial data, customer reviews in textual format, images, graphs, etc., is essential for businesses and the public sector. This data is normally presented through graphical representations and relies on analysts' attention to spot irregularities such as anomalies or patterns in the data. When handling large volumes of data, spotting irregularities with the naked eye is hard. This project aims to develop a framework which automatically identifies interesting patterns or anomalies in the data and then produces descriptions of the data in natural language.
Our previous research has shown that people make better decisions when they view descriptions of data together with their graphical representations (Gkatzia et al., 2017), compared to when they are shown just graphs of data.
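As a hedged sketch of the identify-then-describe pipeline (the threshold, wording, and sales figures are all illustrative inventions):

```python
import statistics

# Spot outliers by z-score, then verbalise each one as a sentence.
def describe_anomalies(label, values, z_threshold=2.5):
    mean = statistics.mean(values)
    std = statistics.stdev(values)
    sentences = []
    for i, v in enumerate(values):
        if std > 0 and abs(v - mean) / std > z_threshold:
            direction = "above" if v > mean else "below"
            sentences.append(
                f"{label} at position {i} is unusually {direction} average "
                f"({v} vs. a mean of {mean:.1f})."
            )
    return sentences or [f"{label} shows no notable irregularities."]

sales = [102, 98, 101, 99, 100, 97, 250, 103, 98]
for s in describe_anomalies("Weekly sales", sales):
    print(s)
```

The framework envisaged by the project would replace both the simple z-score detector and the template sentence with learned components, producing descriptions suitable for pairing with the graphical representation.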
Contact the academic supervisor Dr Dimitra Gkatzia – D.Gkatzia@napier.ac.uk for further information.
Machine Learning For Security of Underwater Wireless Sensor Networks
The core theme of this research is machine learning for the security of underwater wireless sensor networks, applied to intrusion detection. The target business domain is underwater surveillance and monitoring products. The innovation in the project derives from combining a machine learning framework with multi-objective optimisation. The intrusion detection application is based on data analysis of an underwater noise database. The technical outcome of the project is a sonar-based real-time surveillance system.
This project could be useful for the surveillance and monitoring of Scotland's tidal energy farms and of Microsoft's underwater data centre.
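To illustrate the detection step on acoustic data, here is a toy sketch that flags windows of a synthetic hydrophone signal whose energy rises well above the learned background noise level; a real system would use trained classifiers over a genuine underwater noise database:

```python
import math
import random

# Synthetic ambient noise with an "intruder" signature injected.
random.seed(0)
background = [random.gauss(0, 1) for _ in range(1000)]
signal = background[:]
for i in range(600, 650):                 # intruder present in samples 600-649
    signal[i] += 6 * math.sin(0.3 * i)

def window_energy(x, size=50):
    """Mean energy per non-overlapping window."""
    return [sum(v * v for v in x[i:i + size]) / size
            for i in range(0, len(x), size)]

# Learn a noise floor from intruder-free data, then flag loud windows.
energies = window_energy(background)
noise_floor = sum(energies) / len(energies)
alerts = [i for i, e in enumerate(window_energy(signal)) if e > 3 * noise_floor]
print("alert windows:", alerts)
```

The multi-objective aspect of the project would then trade detection rate against false alarms and energy use, which a fixed threshold like the one above cannot do.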
Contact the Academic Supervisor, Dr Mohammad Hamdan, M.Hamdan@hw.ac.uk, for further information.
Realising a Flexible Quality Framework for Managing Data Assets
This project will explore the following questions:
– How can data veracity measures (metrics) be encoded and enacted within a data ecosystem?
– How can data provenance be used to support new forms of veracity checking and anomaly detection?
– How can data policies be framed to reason about data veracity, and recommend appropriate decision-making actions?
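The first question can be made concrete with a sketch of veracity metrics encoded as small, declarative checks enacted over records in a data ecosystem. The field names, thresholds, and example records are invented for the illustration:

```python
# Declarative veracity checks: each metric is a named predicate over a record.
CHECKS = {
    "completeness": lambda r: all(r.get(k) is not None
                                  for k in ("id", "value", "source")),
    "freshness":    lambda r: r.get("age_hours", float("inf")) <= 24,
    "provenance":   lambda r: bool(r.get("source")),
}

def veracity_score(record: dict) -> float:
    """Fraction of encoded veracity checks the record passes."""
    passed = sum(1 for check in CHECKS.values() if check(record))
    return passed / len(CHECKS)

good = {"id": 1, "value": 3.2, "source": "sensor-17", "age_hours": 2}
stale = {"id": 2, "value": 1.1, "source": None, "age_hours": 90}
print(veracity_score(good), veracity_score(stale))
```

Provenance-aware anomaly detection and policy-driven decision making, as in the second and third questions, would reason over richer lineage records than the single `source` field used here.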
Transparent & Accountable Data Management for the Internet of Things
Building on an existing portfolio of research into data transparency and provenance, the proposed project will examine the following questions: What characteristics of IoT devices and their behaviours are necessary to formulate a model of transparency? How do we represent norms against which devices (and the ecosystems of which they are a part) can be held to account?
Big-data and AI-based personalised flood preparation and evacuation system
This project aims to couple a coastal flood model and a transportation network model with the big-data flood monitoring platform that we developed in Wang et al. (2018). The coupling will allow us to use the flood model to predict flood development and the transportation model to optimise flood evacuation routes.
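A minimal sketch of the coupling: the flood model marks road segments as impassable, and the transport model routes around them with a shortest-path search. The road graph and flooded edges below are invented for the example:

```python
import heapq

# Dijkstra shortest path over a road network, skipping flooded segments.
def shortest_route(edges, flooded, start, goal):
    graph = {}
    for a, b, w in edges:
        if (a, b) in flooded or (b, a) in flooded:
            continue                       # drop roads the flood model closed
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    dist, queue = {start: 0}, [(0, start, [start])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == goal:
            return d, path
        for nxt, w in graph.get(node, []):
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(queue, (d + w, nxt, path + [nxt]))
    return None

roads = [("A", "B", 1), ("B", "D", 1), ("A", "C", 2), ("C", "D", 2)]
print(shortest_route(roads, flooded={("B", "D")}, start="A", goal="D"))
```

In the full system, the predicted flood development would update the `flooded` set over time, so evacuation routes would be re-optimised as the flood evolves.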
Using data science to understand the food system
In this project I propose to look at available data on some aspect of the food system and see whether we can use it to inform modelling and to make predictions about how to improve or optimise the system. The area could be at the food-production end of the industry or at the retail end; it depends on who has data that they think they could make more of, in terms of using it to make forecasts about yield, or sales, for example.