RESEARCH (during PhD, 2021 - PRESENT)
(All research is supported by the National Science and Technology Council, Taiwan)
I. ONGOING RESEARCH
1.1 Generative AI-based Quantum Semantic Communications for the Metaverse
My ongoing research centers on the development of Generative AI-based Quantum Semantic Communications for the Metaverse. The Metaverse, an emerging interconnected virtual space, offers rich human interaction and immersive experiences, but realizing them seamlessly demands robust solutions for computation latency, communication bandwidth, data privacy, and transmission delay. My research leverages machine learning and quantum computing to drive a paradigm shift toward semantics-centric communication. By incorporating quantum anonymous communication and variational quantum computing, my work aims to create a quantum semantic communication framework that enhances the reliability and security of Metaverse interactions. This framework employs quantum embedding and quantum machine learning to encode semantic information into quantum states, ensuring privacy and efficient data processing. The ultimate goal is to capitalize on the unique properties of quantum resources to pave the way for a more advanced and secure Metaverse.
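As a concrete illustration of the quantum-embedding idea, the sketch below encodes a classical semantic feature vector into qubit rotation angles and passes it through a trainable variational circuit. This is a minimal sketch written with PennyLane purely for illustration; the toolkit, circuit structure, and names (semantic_encoder, n_qubits) are assumptions, not the actual framework developed in this research.

    import pennylane as qml
    from pennylane import numpy as np

    n_qubits = 4
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def semantic_encoder(features, weights):
        # Encode a classical semantic feature vector into qubit rotation angles,
        # then apply a trainable variational (entangling) circuit.
        qml.AngleEmbedding(features, wires=range(n_qubits))
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        # The measured expectation values act as the quantum semantic representation.
        return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

    # Hypothetical usage: two variational layers, a four-dimensional semantic feature.
    weights = np.random.uniform(0, np.pi, size=(2, n_qubits, 3))
    features = np.array([0.1, 0.5, 0.9, 0.3])
    print(semantic_encoder(features, weights))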
II. SIGNIFICANT RESEARCH CONTRIBUTIONS
2.1 Resource Allocation in Digital Twin-Driven UAV-Aided IoV Networks
The research focuses on optimizing resource allocation strategies in Internet of Vehicles (IoV) networks by combining Digital Twin technology with Unmanned Aerial Vehicles (UAVs). In this context, the IoV network incorporates mobile edge computing (MEC) servers at roadside units (RSUs) to ensure seamless connectivity even in areas with limited RSU coverage. UAVs play a crucial role as intermediaries between RSUs and vehicles, and a virtual representation of the IoV network is established as a Digital Twin (DT) in the aerial network. This DT continually captures real-time dynamics to enable efficient resource allocation, particularly for delay-sensitive tasks.
A key contribution of this research is the introduction of an intelligent task offloading scheme tailored to the dynamic nature of vehicular environments. This scheme offers multiple task execution modes, including local execution, vehicle-to-vehicle (V2V) offloading, and vehicle-to-infrastructure (V2I) offloading to RSUs, with the choice of mode based on energy consumption considerations. The research also proposes a novel resource allocation algorithm based on multi-network deep reinforcement learning (DRL), termed RADiT. This algorithm is designed to optimize the utility of the IoV network while simultaneously refining task offloading strategies.
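To make the energy-based mode selection concrete, the sketch below estimates the energy cost of each execution mode and picks the cheapest one. It is an illustrative sketch under common modeling assumptions (local energy proportional to kappa * f^2 * cycles, offloading energy equal to transmit power times upload time); the function and parameter names are hypothetical, and the actual RADiT formulation is given in the paper.

    def choose_offloading_mode(task_cycles, task_bits, f_local,
                               kappa, p_tx, r_v2v, r_v2i):
        """Pick the execution mode with the lowest estimated energy cost."""
        e_local = kappa * (f_local ** 2) * task_cycles      # local CPU energy
        e_v2v = p_tx * (task_bits / r_v2v)                  # upload to a nearby vehicle
        e_v2i = p_tx * (task_bits / r_v2i)                  # upload to an RSU via V2I
        costs = {"local": e_local, "V2V": e_v2v, "V2I": e_v2i}
        return min(costs, key=costs.get), costs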
The research investigates the performance of RADiT under different scenarios, including the presence of a V2V computation mode, and compares it against other algorithms such as soft actor-critic (SAC) and a non-DRL greedy approach. Extensive simulations evaluate the efficiency of the proposed RADiT algorithm, showcasing its superior utility, energy efficiency, and reduced network delay. Additionally, the research explores the benefits of UAV relay assistance, showing how the network's efficiency can be further improved by leveraging UAVs for task completion.
For further details, including equations and in-depth insights, please refer to the published paper:
"RADiT:Resource allocation in digital twin-driven UAV-aided internet of vehicle networks"
Publications from this research:
- “RADiT: Resource Allocation in Digital Twin Assisted and UAV aided IoV Networks,” accepted in IEEE Journal on Selected Areas in Communications.
- “Digital Twin-Assisted Resource Allocation in UAV-Aided Internet of Vehicles Networks,” in Proc. IEEE ICC, Rome, Italy, 2023.
2.2 Hybrid Machine Learning Approach for Digital Twin-Driven IoV Networks
This study builds upon the findings outlined in the publication titled "RADiT: Resource Allocation in Digital Twin Assisted and UAV Aided IoV Networks." It investigates a hybrid asynchronous federated learning methodology for training a newly conceived multi-agent DRL algorithm known as MARS. The primary objective of MARS is to enhance resource allocation within the IoV network while mitigating both system delay and energy consumption. Notably, the MARS algorithm operates on two distinct and independent types of reward functions.
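The asynchronous-aggregation step can be pictured as follows: each vehicle pushes its locally trained model whenever it finishes, and the server blends it into the global model with a weight that shrinks as the update becomes more stale. The sketch below shows a common asynchronous federated learning heuristic, not the exact hybrid rule used to train MARS; the staleness-decayed mixing weight and the parameter names are assumptions.

    import copy

    def async_aggregate(global_params, local_params, staleness, base_mix=0.5):
        """Merge one vehicle's local update into the global model.
        Models are assumed to be dicts mapping parameter names to arrays;
        staler updates receive a smaller mixing weight."""
        mix = base_mix / (1.0 + staleness)
        merged = copy.deepcopy(global_params)
        for name in merged:
            merged[name] = (1.0 - mix) * merged[name] + mix * local_params[name]
        return merged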
This work has been submitted for publication and is currently under review:
- “Hybrid Machine Learning Approach for Resource Allocation of Digital Twin in UAV-aided Internet-of-Vehicles Networks,” under review in IEEE Internet of Things Journal.
- “Hybrid Federated and Multi-agent DRL-based Resource Allocation in Digital Twin-IoV Networks,” under review in IEEE Globecom Workshop, 2023.
2.3 Resource Management for URLLC-IoV Networks
The research proposes a novel approach for optimal resource management and caching in ultra-reliable low-latency communication (URLLC)-enabled IoV networks. The proposed framework integrates MEC servers into RSUs, UAVs, and base stations (BSs) to facilitate hybrid vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. This approach aims to enhance the accuracy of the global model while accounting for vehicle mobility characteristics. An asynchronous federated learning (AFL) algorithm is leveraged for accurate modeling, considering the dynamic nature of vehicle movements.
The research addresses two main optimization problems. The first problem focuses on joint optimization of frequency, computation, and caching resources across BSs, RSUs, and UAVs to maximize the number of offloaded tasks while meeting quality-of-service (QoS) requirements. The second problem centers on optimizing caching policies to minimize transmission delays during the caching process. To address the non-convex nature of these problems, a multi-agent actor-critic type deep reinforcement learning (DMAAC) algorithm is introduced.
Furthermore, a cooperative caching scheme known as Co-Ca is proposed, employing an AFL framework to predict frequently accessed contents efficiently. A Dueling Deep-Q-Network (DDQN) algorithm is utilized to minimize transmission delay by caching frequently accessed contents.
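For intuition, a dueling Q-network splits the value estimate into a state value and per-action advantages and recombines them, which stabilizes learning when many caching actions have similar value. The sketch below is a generic PyTorch dueling head in which the action space is which content item to cache; the layer sizes and names are illustrative assumptions, not the configuration used in the paper.

    import torch
    import torch.nn as nn

    class DuelingDQN(nn.Module):
        """Dueling Q-network: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
        def __init__(self, state_dim, n_contents, hidden=128):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
            self.value = nn.Linear(hidden, 1)                # state-value head
            self.advantage = nn.Linear(hidden, n_contents)   # per-content advantage head

        def forward(self, state):
            x = self.trunk(state)
            v = self.value(x)
            a = self.advantage(x)
            return v + a - a.mean(dim=-1, keepdim=True)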
The research provides a unified framework that combines decentralized multi-agent DRL with distributed AFL for optimal resource management and an efficient cooperative caching scheme in URLLC-enabled IoV networks. The primary contributions include exploring UAV-assisted URLLC-IoV networks, formulating optimization problems for resource allocation and caching, leveraging AFL for accurate modeling, and introducing DMAAC and Co-Ca algorithms for efficient resource management and caching.
The system architecture encompasses various components, including MEC servers, vehicles (both task vehicles and service vehicles), UAVs, RSUs, and BSs. Vehicles are selected based on mobility traits for AFL training, and the asynchronous aggregation of local models is used to update the global model. Moreover, the caching scheme involves predicting frequently accessed contents, optimizing content transmission rates, and considering content location. Extensive simulations demonstrate the effectiveness of the proposed framework and algorithms compared to existing approaches. The research contributes to improving resource allocation, caching efficiency, and overall network performance in URLLC-enabled IoV networks.
For further details, including equations and in-depth insights, please refer to the published paper:
"AFL-DMAAC: Integrated resource management and cooperative caching for URLLC-IoV networks"
Publications from this research:
- “AFL-DMAAC: Integrated Resource Management and Cooperative Caching for URLLC-IoV Networks,” IEEE Transactions on Intelligent Vehicles, doi: 10.1109/TIV.
- “Asynchronous Federated Learning Based Resource Management in URLLC IoV Networks,” accepted in IEEE Globecom, Kuala Lumpur, Malaysia, 2023.
2.4 Reconfigurable Intelligent Surface-Aided and DRL-Based Task Offloading in IoV Networks
This research focuses on improving the performance of IoV networks through the strategic integration of Reconfigurable Intelligent Surfaces (RISs). In this paradigm, multi-access edge computing (MEC) servers are co-located with base stations, which are further enhanced by multiple RISs. The RISs act as intelligent reflectors, manipulating radio waves to optimize both uplink and downlink transmissions. The overarching goal is to design and implement an intelligent task offloading methodology that optimizes resource allocation within the IoV network, guided by network criticality, task priority, and task size.
Central to this research is a Multi-Agent Deep Reinforcement Learning (MA-DRL) framework formulated as a Markov game, in which agents jointly optimize the task offloading decision strategy. The MA-DRL algorithm is designed to maximize the mean utility of the IoV network, thereby improving communication quality across vehicles and base stations. Extensive numerical simulations evaluate the algorithm's capabilities, benchmarking it against networks without RIS assistance as well as against baseline DRL algorithms such as Soft Actor-Critic (SAC), Deep Deterministic Policy Gradient (DDPG), and Twin Delayed DDPG (TD3).
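To see why RIS phase control matters, consider a single reflected path: each RIS element applies a unit-modulus phase shift, and choosing the phases so that the reflected path adds coherently with the direct link maximizes the effective channel gain. The NumPy sketch below illustrates this textbook model; it is not the system model or optimization method of the paper, and the function names are hypothetical.

    import numpy as np

    def effective_channel(h_direct, h_ris_rx, h_tx_ris, phases):
        """Direct path plus the RIS-reflected path with per-element phase shifts."""
        reflection = np.exp(1j * phases)                      # unit-modulus phase shifts
        ris_path = np.sum(h_ris_rx * reflection * h_tx_ris)   # cascaded reflected channel
        return h_direct + ris_path

    def align_phases(h_direct, h_ris_rx, h_tx_ris):
        """Choose phases so each reflected component aligns with the direct link."""
        return np.angle(h_direct) - np.angle(h_ris_rx * h_tx_ris)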
For further details, including equations and in-depth insights, please refer to the published paper:
"Multi-agent DRL-based task offloading in multiple RIS-aided IoV networks"
Publications from this research:
- “Multi-agent DRL-based Task Offloading in Multiple RIS-aided IoV Networks,” accepted in IEEE Transactions on Vehicular Technology.
- “Multi-Agent DRL-Based Computation Offloading in Multiple RIS-Aided IoV Networks,” in Proc. IEEE MILCOM, Maryland, USA, 2022.
2.5 DRL-Based Computation Offloading in IoV Networks
The study addresses the challenge of real-time resource allocation in the dynamic environment of IoV networks. It introduces a priority-sensitive task offloading and resource allocation scheme that leverages vehicle-to-vehicle communication facilitated by beacon messages. This communication framework allows vehicles to share information about available services and the critical data needed for informed task offloading decisions. The proposed methodology employs DRL algorithms, specifically SAC, DDPG, and TD3, to manage the resource allocation process effectively.
Central to the research is the creation of a comprehensive framework that integrates various vehicular entities, including both stationary and moving vehicles, as potential service providers. The scheme intelligently categorizes tasks based on their priority, computation size, and network criticality, thereby enabling adaptive resource allocation decisions. Tasks can follow different execution modes: local execution within the vehicle, offloading to other vehicles (V2V), or offloading to the edge/cloud infrastructure. Utility functions are tailored to capture the distinct characteristics of these tasks, considering parameters such as delay tolerance and computation resources.
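A minimal example of such a utility function is sketched below: the reward grows with task priority, decays as the experienced delay approaches the tolerance, and is penalized by energy consumption, with a negative utility for deadline misses. The weights and functional form are illustrative assumptions, not the utilities defined in the study.

    def task_utility(priority, delay, delay_tolerance, energy,
                     w_delay=1.0, w_energy=0.5):
        """Illustrative priority- and delay-aware task utility."""
        if delay > delay_tolerance:
            return -priority                         # deadline violated: negative utility
        slack = 1.0 - delay / delay_tolerance        # remaining delay budget in [0, 1]
        return priority * w_delay * slack - w_energy * energy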
The DRL algorithms, SAC, DDPG, and TD3, form the core of the proposed optimization strategy. These algorithms work cohesively to determine optimal policies for task offloading by maximizing the overall utility of the network. This is achieved through dynamic decision-making processes that consider the real-time status of the network, the priority of tasks, and the available computation resources. The study provides in-depth insights into the design and implementation of these algorithms, highlighting their potential to significantly enhance resource allocation efficiency in IoV networks.
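At the heart of these algorithms is an entropy-regularized (soft) Bellman target of the kind used by SAC-style critics, shown below as a minimal PyTorch sketch; this is a generic formulation of the technique, not the exact update used in the study, and the variable names are assumptions.

    import torch

    def soft_q_target(reward, next_q1, next_q2, next_log_prob,
                      gamma=0.99, alpha=0.2, done=0.0):
        """SAC-style critic target:
        y = r + gamma * (1 - done) * (min(Q1', Q2') - alpha * log pi(a'|s'))."""
        next_value = torch.min(next_q1, next_q2) - alpha * next_log_prob
        return reward + gamma * (1.0 - done) * next_value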
To validate the effectiveness of the proposed approach, the research conducts extensive simulations under various network conditions. Thorough comparisons demonstrate the DRL-based framework's superior efficiency and adaptability over conventional methods. The study advances the understanding of distributed reinforcement learning's potential for resource allocation in future IoV networks.
For further details, including equations and in-depth insights, please refer to the published paper:
"DRL-based resource allocation for computation offloading in IoV networks"
Publications from this research:
- “DRL-Based Resource Allocation for Computation Offloading in IoV Networks,” IEEE Transactions on Industrial Informatics, vol. 18, no. 11, pp. 8027-8038, Nov. 2022, doi: 10.1109/TII.2022.3168292.
- “SAC-Based Resource Allocation for Computation Offloading in IoV Networks,” in Proc. IEEE EuCNC/6G Summit, Grenoble, France, 2022.