Finished PhD Theses

Private 5G and its Suitability for Industrial Networking

by Dr.-Ing. Justus Rischke, 2023, available at tud.qucosa.de

Abstract

5G was and is still surrounded by many promises and buzzwords, such as the famous 1 ms latency, real-time, and Ultra-Reliable and Low-Latency Communications (URLLC). This was partly intended to attract vertical industries as new customers for mobile networks, which are to be deployed in their factories. With the approval of federal agencies, companies deployed their own private 5G networks to test new use cases enabled by 5G. But what has been missing, apart from all the marketing, is the knowledge of what 5G can really do.

Private 5G networks are envisioned to enable new use cases with strict latency requirements, such as robot control. This work has examined in great detail the capabilities of the current 5G Release 15 as a private network, and in particular its suitability for time-critical communications. For that, a testbed was designed to measure One-Way Delays (OWDs) and Round-Trip Times (RTTs) with high accuracy. The measurements were conducted in 5G Non-Standalone (NSA) and Standalone (SA) networks and are the first such published results.

The evaluation revealed results that were not obvious or identified by previous work. For example, a strong impact of the packet rate on the resulting OWD and RTT was found. It was also found that typically 95% of the SA downlink end-to-end packet delays lie in the range of 4 ms to 10 ms, indicating a fairly wide spread of packet delays, with the Inter-Packet Delay Variation (IPDV) between consecutive packets distributed in the millisecond range. Surprisingly, it also seems to matter for the RTT from which direction, i.e. Downlink (DL) or Uplink (UL), a round-trip communication is initiated. The Inter-Arrival Time (IAT) of packets also plays an especially important role in the RTT distribution. These example results demonstrate the need to critically examine 5G and any successors in terms of their real-time capabilities.
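
As an illustration of how such metrics are derived from a packet trace, the following minimal Python sketch computes the median and 95th-percentile OWD and the IPDV of consecutive packets. The synthetic delay samples are illustrative only; in the thesis testbed these values come from precisely synchronized timestamps on sender and receiver.

    import random
    import statistics

    # Synthetic one-way delays in ms (illustrative stand-in for
    # hardware-timestamped captures on sender and receiver).
    random.seed(1)
    owd = [4.0 + random.expovariate(1 / 2.0) for _ in range(10_000)]

    # 95% of samples fall below this value (cf. the 4-10 ms SA downlink range).
    p95 = statistics.quantiles(owd, n=100)[94]

    # Inter-Packet Delay Variation: difference of consecutive one-way delays.
    ipdv = [b - a for a, b in zip(owd, owd[1:])]

    print(f"median OWD  : {statistics.median(owd):.2f} ms")
    print(f"95th pct OWD: {p95:.2f} ms")
    print(f"IPDV stdev  : {statistics.stdev(ipdv):.2f} ms")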

In addition to the end-to-end OWD and RTT, the delays caused by 4G and 5G Core processing have been investigated as well. Current state-of-the-art 4G and 5G Core implementations exhibit long-tailed delay distributions. To overcome such limitations, modern packet-processing technologies have been evaluated in terms of their respective tail latency. The hardware-based solution was able to process packets with deterministic delay, but the software-based solutions also achieved soft real-time results. These results allow selecting the right technology for a use case depending on its tail-latency requirements.

In summary, many insights into the suitability of 5G for time-critical communications were gained from the study of the current 5G Release 15. The measurement framework, analysis methods, and results will inform the further development and refinement of private 5G campus networks for industrial use cases.

Accelerating Audio Data Analysis with In-Network Computing

by Dr.-Ing. Huanzhuo Wu, 2023, available at tud.qucosa.de

Abstract

Digital transformation will bring massive connectivity and massive data handling. This implies a growing demand for computing in communication networks, driven by network softwarization. Moreover, digital transformation will host very sensitive verticals requiring high end-to-end reliability and low latency. Accordingly, the concept of “in-network computing” has emerged. This means integrating the network communications with computing and performing computations on the transport path of the network. This can be used to deliver actionable information directly to end users instead of raw data.

However, this paradigm shift to in-network computing raises disruptive challenges for current communication networks. In-network computing (i) expects the network to host general-purpose softwarized network functions and (ii) encourages the packet payload to be modified. Yet today’s networks are designed to focus on packet-forwarding functions, and under the current end-to-end transport mechanisms, packet payloads must not be touched on the forwarding path. This dissertation presents full-stack in-network computing solutions, jointly designed from the network and computing perspectives, to accelerate data analysis applications, specifically acoustic data analysis.

In the computing domain, two design paradigms of computational logic, namely progressive computing and traffic filtering, are proposed in this dissertation for data reconstruction and feature extraction tasks. Two widely used practical use cases, Blind Source Separation (BSS) and anomaly detection, are selected to demonstrate the design of computing modules for data reconstruction and feature extraction tasks in the in-network computing scheme, respectively. Following these two design paradigms, this dissertation designs two computing modules: progressive ICA (pICA) for BSS and You only hear once (Yoho) for anomaly detection. These lightweight computing modules can cooperatively perform computational tasks along the forwarding path. In this way, computational virtual functions can be introduced into the network, addressing the first challenge mentioned above, namely that the network should be able to host general-purpose softwarized network functions. In this dissertation, quantitative simulations have shown that the computing time of pICA and Yoho in in-network computing scenarios is significantly reduced, since pICA and Yoho are performed simultaneously with the data forwarding. At the same time, pICA guarantees the same computing accuracy, and Yoho’s computing accuracy is even improved.
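
The core idea of progressive computing can be sketched in a few lines: instead of buffering the whole payload and computing once it has fully arrived (store-and-compute), an intermediate state is refined chunk by chunk while forwarding is still in progress. The toy running-mean task below only illustrates this structure; it is not the pICA algorithm itself.

    def progressive(chunks, partial_step, finalize):
        # Refine an intermediate result while chunks are still being
        # forwarded; only the cheap finalize step remains at the end.
        state = None
        for chunk in chunks:          # this loop overlaps with forwarding
            state = partial_step(state, chunk)
        return finalize(state)

    # Toy task: a running mean computed progressively.
    def partial_step(state, chunk):
        s, n = state or (0.0, 0)
        return (s + sum(chunk), n + len(chunk))

    def finalize(state):
        s, n = state
        return s / n

    assert progressive([[1, 2], [3, 4], [5, 6]], partial_step, finalize) == 3.5

The service-time reduction reported above comes precisely from the fact that the partial steps hide behind the transmission time of the remaining chunks.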

Furthermore, this dissertation proposes a stateful transport module in the network domain to support in-network computing under the end-to-end transport architecture. The stateful transport module extends the IP packet header, so that network packets carry message-related metadata (message-based packaging). Additionally, the forwarding layer of the network device is optimized to be able to process the packet payload based on the computational state (state-based transport component). The second challenge posed by in-network computing has been tackled by supporting the modification of packet payloads.
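
A minimal sketch of message-based packaging is given below. The field names and layout are invented for illustration and are not the exact header extension of the thesis; the point is only that each packet carries enough message-related metadata for an on-path node to process payloads statefully.

    import struct

    # Hypothetical per-packet metadata (illustrative field names):
    # message_id : which application message the packet belongs to
    # seq, total : position of this packet within that message
    # comp_state : how far on-path computation has progressed
    FMT = "!IHHB"   # network byte order

    def pack_meta(message_id, seq, total, comp_state):
        return struct.pack(FMT, message_id, seq, total, comp_state)

    def unpack_meta(blob):
        return struct.unpack(FMT, blob[:struct.calcsize(FMT)])

    hdr = pack_meta(42, 3, 10, 1)
    assert unpack_meta(hdr) == (42, 3, 10, 1)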

The two computing modules mentioned above and the stateful transport module form the designed in-network computing solutions. By merging pICA and Yoho, respectively, with the stateful transport module, two emulation systems, i.e., in-network pICA and in-network Yoho, have been implemented in the Communication Networks Emulator (ComNetsEmu). Through quantitative emulations, the experimental results showed that in-network pICA accelerates the overall service time of BSS by up to 32.18%. Using in-network Yoho, on the other hand, accelerates the overall service time of anomaly detection by up to 30.51%. These are promising results for the design and actual realization of future communication networks.

Computational Complexity and Delay Reduction for RLNC Single and Multi-hop Communications

by Dr.-Ing. Elif Tasdemir, 2023, available at tud.qucosa.de

Abstract

Today’s communication networks are changing rapidly and radically. Demand for low latency, high reliability, and low energy consumption is increasing, as is the variety of characteristics of the connected devices. It is also expected that the number of connected devices will be massive in the coming years. Some devices will be connected to the new-generation base stations directly, while others will be connected through other devices via multiple hops. Reliable communication between these massive numbers of devices can be achieved via re-transmission, via repetition of packets several times, or via Forward Error Correction (FEC). In the re-transmission method, packets are re-transmitted when they are negatively acknowledged or the sender’s acknowledgment timer expires. In the repetition method, every packet can be sent several times. Both aforementioned methods can cause a huge delay, particularly in multi-hop networks. In contrast, FEC methods are preferred for low-latency applications: source information is transmitted together with redundant information. Hence, the number of transmissions is reduced compared to the methods mentioned above.
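
A back-of-the-envelope comparison shows why FEC is attractive for latency. Assuming, purely for illustration, independent packet loss with probability p on a link:

    Re-transmission: E[transmissions per packet] = 1/(1-p), and every
    loss additionally costs at least one feedback round-trip time.

    FEC (k source + r redundant packets): decoding succeeds in a single
    round if at most r of the k+r packets are lost, i.e. with probability
        P = sum_{i=0}^{r} C(k+r, i) * p^i * (1-p)^(k+r-i).

    Example: p = 0.1, k = 10, r = 3  =>  P ≈ 0.97, with no feedback
    round trips at all.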

Random Linear Network Coding (RLNC) is a packet-level erasure-correcting code that aims to reduce latency. Specifically, source packets are combined, and these combinations, or coded packets, are sent to the destination. Lost packets do not need to be re-sent, since another coded packet can substitute for the lost one. Hence, the feedback mechanism and the re-sending process become unnecessary. There are many variations of RLNC. One variation, called sliding-window RLNC, applies an FEC mechanism. This coding scheme achieves low latency via coded packets interleaved between source packets. Another variation of RLNC is Fulcrum, which is a versatile code. Fulcrum provides three different decoding options: received coded packets can be decoded with low, middle, or high complexity. This is a very important feature, since connected devices will have different computation capabilities, and providing a versatile code allows them flexibility.
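
The following self-contained Python sketch illustrates the RLNC principle on a toy scale: coded packets are random linear combinations of the source packets over GF(2^8), and any sufficient set of linearly independent coded packets can be decoded by Gaussian elimination, so an individual loss needs no dedicated re-send. Field size, packet sizes, and the brute-force inverse are chosen for brevity, not performance; practical implementations use optimized libraries such as Kodo.

    import random
    random.seed(0)

    # Minimal GF(2^8) arithmetic (AES reduction polynomial 0x11B).
    def gf_mul(a, b):
        p = 0
        while b:
            if b & 1:
                p ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11B
            b >>= 1
        return p

    def gf_inv(a):
        return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

    # Encoding: a coded packet is a random linear combination of sources.
    def encode(sources):
        coeffs = [random.randrange(256) for _ in sources]
        payload = [0] * len(sources[0])
        for c, s in zip(coeffs, sources):
            for i, byte in enumerate(s):
                payload[i] ^= gf_mul(c, byte)
        return coeffs, payload

    # Decoding: Gauss-Jordan elimination on the augmented matrix.
    def decode(coded, k):
        rows = [list(c) + list(p) for c, p in coded]
        for col in range(k):
            piv = next(r for r in range(col, len(rows)) if rows[r][col])
            rows[col], rows[piv] = rows[piv], rows[col]
            inv = gf_inv(rows[col][col])
            rows[col] = [gf_mul(inv, v) for v in rows[col]]
            for r in range(len(rows)):
                if r != col and rows[r][col]:
                    f = rows[r][col]
                    rows[r] = [v ^ gf_mul(f, w)
                               for v, w in zip(rows[r], rows[col])]
        return [bytes(rows[i][k:]) for i in range(k)]

    k = 4
    sources = [bytes(random.randrange(256) for _ in range(8)) for _ in range(k)]
    coded = [encode(sources) for _ in range(k + 2)]  # a little redundancy
    coded.pop(2)          # one coded packet is "lost" -- no re-send needed
    assert decode(coded, k) == sources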

Although the aforementioned coding schemes are well suited to error-prone networks, there are still remaining challenges that need to be studied. For instance, Fulcrum RLNC has high encoding and decoding complexity, which increases computation time and energy consumption. Moreover, although the original Fulcrum RLNC strengthens reliability, it needs to be improved for low-latency applications. Another remaining challenge is that the recoding strategy of RLNC is not optimal for low latency. Allowing the intermediate nodes to combine received packets is referred to as recoding. As described earlier, data packets will pass many hops until they reach the destination. Therefore, a compute-and-forward paradigm will be preferred over store-and-forward. Although the recoding capability of RLNC distinguishes it from other coding schemes (Raptor, LT), the conventional way of recoding is not efficient for low latency. Hence, the aim of this thesis is to address the aforementioned remaining challenges.

One way to address the remaining challenges is to employ sparsity; in other words, to combine a few source packets rather than a large set of source packets when generating coded packets. In particular, a dynamic sparse mechanism is proposed for Fulcrum RLNC that varies the number of combined source packets during encoding, without signaling between sender and receiver, to speed up the encoding and decoding process without increasing the overhead. Then, two different sliding-window schemes were integrated into Fulcrum RLNC to give it the low-latency property. Sending source packets systematically and then spreading sparse coded packets in between the systematic source packets can be referred to as systematic sparsity. Moreover, different sparse and systematic recoding strategies are proposed in this thesis to lower the delay and computation time at the intermediate nodes and the destination. Finally, one of the proposed recoding strategies is applied to a vehicle platooning scenario to increase reliability. All proposed coding schemes were analyzed and implemented on KODO, a well-known network coding library.
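
To illustrate the sparsity and systematic ideas, the sketch below generates coding vectors for a systematic phase (source packets sent uncoded) followed by sparse coded packets that each combine only a few sources. Fewer nonzero coefficients directly mean fewer multiply-accumulate operations per coded packet, which is where the encoding and decoding speed-up comes from; the rule for varying the density dynamically is the subject of the thesis and is not modeled here.

    import random

    def systematic_sparse_vectors(k, num_coded, density):
        vectors = []
        for i in range(k):                       # systematic: unit vectors
            v = [0] * k
            v[i] = 1
            vectors.append(v)
        for _ in range(num_coded):               # sparse coded packets
            v = [0] * k
            for j in random.sample(range(k), density):
                v[j] = random.randrange(1, 256)  # nonzero GF(2^8) coefficient
            vectors.append(v)
        return vectors

    for v in systematic_sparse_vectors(k=8, num_coded=3, density=2):
        print(v)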

Low-latency and Resource-efficient Orchestration for Applications in Mobile Edge Cloud

by Dr.-Ing. Tung Van Doan, 2023, available at tud.qucosa.de

Abstract

Recent years have witnessed an increasing number of mobile devices, such as smartphones and tablets, characterized by low computing and storage capabilities. Meanwhile, there is explosive growth of applications on mobile devices that require high computing and storage capabilities. These challenges led to the introduction of cloud computing, empowering mobile devices with remote computing and storage resources. However, cloud computing is centrally designed and thus encounters noticeable issues such as high communication latency and potential vulnerability. To tackle the problems posed by central cloud computing, Mobile Edge Cloud (MEC) has recently been introduced to bring computing and storage resources into proximity to mobile devices, for example at base stations or shopping centers. Therefore, MEC has become a key enabling technology for various emerging use cases such as autonomous driving and the tactile internet.

Despite such potential benefits, the design of MEC makes the deployment of applications challenging. First, as MEC aims to bring computation and storage resources closer to mobile devices, the MEC servers that provide those resources become highly diverse in the network. Moreover, MEC servers typically have a small-footprint design so that they can be flexibly placed at various locations, and thus provide limited resources. The challenge is to deploy applications in a cost-efficient manner. Second, applications have stringent requirements such as high mobility or low latency. The challenge is to deploy applications in MEC such that their needs are satisfied.

Considering the above challenges, this thesis aims to study the orchestration of MEC applications. In particular, for computation offloading, we propose offloading schemes for immersive applications in MEC, such as Augmented Reality or Virtual Reality (AR/VR), by employing application characteristics. For resource optimization, since many MEC applications such as gaming and streaming require the support of network functions such as encoders and decoders, we first present placement schemes that allow efficiently sharing network functions between multiple MEC applications. We then introduce the design of the proposed MANO framework in MEC, advocating the joint orchestration of MEC applications and network functions. For mobility support, low-latency applications for use cases such as autonomous driving have to seamlessly migrate from one MEC server to another, following the mobility of the device, to guarantee low-latency communication. Traditional migration approaches based on virtual machine (VM) or container migration suspend the application at one MEC server and then recover it at another. These approaches require transferring the entire VM or container state and consequently lead to service interruption due to the high migration time. Therefore, we advocate migration techniques that take advantage of application states.

Reliable Packet Streams with Multipath Network Coding

by Dr.-Ing. Frank Gabriel, 2022

Abstract

With increasing computational capabilities and advances in robotics, technology is on the verge of the next industrial revolution. A growing number of tasks can be performed by artificial intelligence and agile robots. This impacts almost every part of the economy, including agriculture, transportation, industrial manufacturing, and even social interactions. In all applications of automated machines, communication is a critical component to enable cooperation between machines and the exchange of sensor and control signals.

The mobility and scale at which these automated machines are deployed also challenge today’s communication systems. These complex cyber-physical systems, consisting of up to hundreds of mobile machines, require highly reliable connectivity to operate safely and efficiently. Current automation systems use wired communication to guarantee low-latency connectivity. But wired connections cannot be used to connect mobile robots and are also problematic to deploy at scale. Therefore, wireless connectivity is a necessity. On the other hand, it is subject to many external influences and cannot reach the same level of reliability as wired communication systems.

This thesis aims to address this problem by proposing methods to combine multiple unreliable wireless connections into a stable channel. The foundation for this work is Caterpillar Random Linear Network Coding (CRLNC), a new network-coding variant designed to achieve low latency. CRLNC performs similarly to block codes in the recovery of lost packets, but with significantly decreased latency. CRLNC with Feedback (CRLNC-FB) integrates a Selective-Repeat ARQ (SR-ARQ) to optimize the trade-off between delay and throughput of reliable communication. The proposed protocol allows slightly increasing the overhead to reduce the packet delay at the receiver. With CRLNC, delay can be reduced by more than 50% with only a 10% reduction in throughput. Finally, CRLNC is combined with a statistical multipath scheduler to optimize the reliability and service availability in wireless networks with multiple unreliable paths. This multipath CRLNC scheme improves the reliability of a fixed-rate packet stream by 10% in a system model based on real-world measurements of LTE and WiFi.
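
The flavor of such a statistical multipath scheduler can be sketched as follows: given per-path loss estimates, coded packets are split proportionally to path reliability, and the total is inflated so that the expected number of arriving packets still covers the generation. The proportional rule and the numbers are illustrative assumptions, not the scheduler evaluated in the thesis.

    import math

    def schedule(k, paths):
        # paths: {name: estimated loss rate}; returns packets per path so
        # that the expected number of arrivals is at least k.
        rel = {name: 1.0 - p for name, p in paths.items()}
        total_rel = sum(rel.values())
        n_total = math.ceil(k * total_rel / sum(r * r for r in rel.values()))
        return {name: math.ceil(n_total * r / total_rel)
                for name, r in rel.items()}

    # Example: LTE and WiFi paths with different measured loss rates.
    print(schedule(k=16, paths={"lte": 0.05, "wifi": 0.20}))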

All the proposed protocols have been implemented in the software library NCKernel. With NCKernel, these protocols could be evaluated in simulated and emulated networks, and were also deployed in several real-world testbeds and demonstrators.

Ultra-reliable Low-latency, Energy-efficient and Computing-centric Software Data Plane for Network Softwarization

by Dr.-Ing. Zuo Xiang, 2022, available at tud.qucosa.de

Abstract

Network softwarization plays a significantly important role in the development and deployment of the latest communication systems for 5G and beyond. A more flexible and intelligent network architecture can be enabled to provide support for agile network management and the rapid launch of innovative network services, with much reduction in Capital Expense (CAPEX) and Operating Expense (OPEX). Despite these benefits, the 5G system also raises unprecedented challenges, as emerging machine-to-machine and human-to-machine communication use cases require Ultra-Reliable Low Latency Communication (URLLC). According to empirical measurements performed by the author of this dissertation on a practical testbed, State of the Art (STOA) technologies and systems are not able to achieve the one-millisecond end-to-end latency requirement of the 5G standard on Commercial Off-The-Shelf (COTS) servers. This dissertation provides a comprehensive introduction to three innovative approaches that can be used to improve different aspects of the current software-driven network data plane. All three approaches are carefully designed, professionally implemented, and rigorously evaluated. According to the measurement results, these novel approaches advance the research in the design and implementation of an ultra-reliable low-latency, energy-efficient and computing-first software data plane for 5G communication systems and beyond.

On the Design of Future Communication Systems with Coded Transport, Storage, and Computing

by Dr.-Ing. Juan Alberto Cabrera Guerrero, 2022, available at tud.qucosa.de

Abstract

Communication systems are experiencing a fundamental change. Novel applications require increased performance not only in throughput but also in latency, reliability, security, and heterogeneity support. To fulfil these requirements, future systems understand communication not only as the transport of bits but also as their storage, processing, and relation. In these systems, every network node has transport, storage, and computing resources that the network operator and its users can exploit through virtualisation and softwarisation of the resources. It is within this context that this work presents its results. We propose distributed coded approaches to improve communication systems. Our results improve the reliability and latency performance of the transport of information. They also increase the reliability, flexibility, and throughput of storage applications. Furthermore, based on the lesson that coded approaches improve the transport and storage performance of communication systems, we propose a distributed coded approach for the computing of novel in-network applications such as the steering and control of cyber-physical systems. Our proposed approach can increase the reliability and latency performance of distributed in-network computing in the presence of errors, erasures, and attackers.

Computing on the Edge of the Network

by Dr.-Ing. Mahshid Mehrabi, 2022, available at tud.qucosa.de

Abstract

Enabling Fifth Generation (5G) cellular communication systems requires energy-efficient architectures that can provide a reliable service platform to deliver 5G services and beyond. Device-enhanced edge computing is a derivation of Multi-access Edge Computing (MEC) that provides computing and storage resources directly on the end-devices. The importance of this concept has been proven by the rising demands of computation-intensive and ultra-low-latency applications, which overwhelm the MEC server and the wireless channel. This dissertation presents a computation offloading framework with energy-, mobility-, and incentive-awareness in a multiple-user, multiple-task, device-enhanced MEC system, which considers the interdependency of tasks as well as the latency requirements of the applications.

Zero-padding Network Coding and Compressed Sensing for Optimized Packets Transmission

by Dr.-Ing. Maroua Taghouti, 2021, available at tud.qucosa.de

Abstract

Ubiquitous Internet of Things (IoT) is destined to connect everybody and everything on a never-before-seen scale. Such networks, however, have to tackle the inherent issues created by the presence of very heterogeneous data transmissions over the same shared network. This very diverse communication, in turn, produces network packets of various sizes, ranging from very small sensory readings to comparatively humongous video frames. Such a massive amount of data, as in the case of sensory networks, is also continuously captured at varying rates and contributes to increasing the load on the network itself, which can hinder transmission efficiency. However, the sheer number of transmissions also opens up possibilities to exploit various correlations in the transmitted data. Reductions based on this also enable the networks to keep up with the new wave of big-data-driven communications by investing in select techniques that efficiently utilize the resources of the communication systems.

One of the solutions to tackle the erroneous transmission of data employs linear coding techniques, which are, however, ill-equipped to handle the processing of packets of differing sizes. Random Linear Network Coding (RLNC), for instance, generates unreasonable amounts of padding overhead to compensate for the different message lengths, thereby suppressing the pervasive benefits of the coding itself. We propose a set of approaches that overcome such issues while also reducing decoding delays. Specifically, we introduce and elaborate on the concept of macro-symbols and the design of different coding schemes. Due to the heterogeneity of the packet sizes, our progressive shortening scheme is the first RLNC-based approach that generates and recodes unequal-sized coded packets. Another of our solutions is deterministic shifting, which reduces the overall number of transmitted packets. Moreover, the RaSOR scheme employs coding using XOR operations on shifted packets, without the need for coding coefficients, thus favoring linear encoding and decoding complexities.

Another facet of IoT applications can be found in sensory data, which is known to be highly correlated and where compressed sensing is a potential approach to reduce the overall transmissions. In such scenarios, network coding can also help. Our proposed joint compressed sensing and real network coding design fully exploits the correlations in cluster-based wireless sensor networks, such as the ones advocated by Industry 4.0. This design focuses on performing one-step decoding to reduce the computational complexities and delays of the reconstruction process at the receiver, and investigates the effectiveness of combined compressed sensing and network coding.
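
The padding problem that motivates these schemes is easy to quantify. In conventional RLNC, every packet in a generation must be zero-padded to the largest packet size before coding; for a realistic mix of small sensor readings and large frames, most transmitted bytes are padding. The sizes below are illustrative.

    sizes = [60, 120, 400, 1500, 90, 1500, 300]      # packet sizes in bytes
    padded = max(sizes) * len(sizes)                 # bytes after zero-padding
    useful = sum(sizes)
    print(f"padding overhead: {(padded - useful) / useful:.0%}")
    # -> about 164% extra bytes just to equalize packet lengths;
    # shifting/XOR-based schemes such as RaSOR avoid this padding (their
    # shift selection and decoding are described in the thesis itself).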

Fulcrum: Versatile Network Codes for Heterogeneous Communication Networks

by Dr.-Ing. Vu Nguyen, 2021, available at tud.qucosa.de

Abstract

Two main approaches to achieving reliable data transfer over error-prone networks are retransmission and Forward Error Correction (FEC). Retransmission techniques retransmit packets when they are lost or corrupted, causing significant delays, especially on multi-hop connections. In contrast, to reduce latency, FEC sends redundant data together with the original data. In particular, FEC through Random Linear Network Coding (RLNC) reduces the number of distinct packet transmissions in a network and minimizes packet transmissions due to poor network conditions. Consequently, RLNC has the potential to both improve energy efficiency and reduce the overall latency in a network.

Fulcrum network coding (FNC), proposed as a variation of RLNC, has partly solved the challenge of heterogeneous communication networks by providing two-layer coding, enabling the destinations to decode packets based on their computing capabilities. However, the coding parameters are statically chosen before data transmission, while using feedback or retransmission is impractical in rapidly changing network conditions. FNC cannot adapt to the available capabilities of the nodes in a network, which negatively impacts the coding performance.

The main objective of this thesis is to design a versatile network coding scheme supporting heterogeneous communication networks, which allows a node to adjust and adapt the coding process depending on the network conditions and the computing capabilities of the node. The research also focuses on reducing the computational complexity in nodes while maintaining a high probability of successful decoding and employing operations that are as simple as possible in intermediate nodes.

Particularly, three main approaches are investigated, at the source, intermediate, and destination nodes, to achieve these objectives. First, the research examines both static and dynamic combinations of original packets in the encoding process by proposing dynamic sparsity and expansion packets (DSEP). This scheme significantly increases the coding throughput at both the source and the destination. Second, a new recoding scheme is proposed to manage the number of packets stored and recoded. This recoding scheme reduces memory usage and computing complexity at intermediate nodes, which process huge amounts of traffic. Finally, the research proposes adaptive decoding algorithms that allow the destinations to choose the proper decoder depending on the network conditions. These algorithms improve the decoding probability in an unreliable network while reducing the computational complexity in a reliable network. For each proposed approach, both mathematical analysis and practical implementation were performed. In particular, the implementation leverages Kodo, a well-known network coding library used for simulation and real-time implementation during the last decade.

Network Coding Strategies for Multi-Core Architectures

by Dr.-Ing. Simon Wunderlich, 2021, available at tud.qucosa.de

Abstract

Random Linear Network Coding (RLNC) is a new coding technique that can provide higher reliability and efficiency in wireless networks. Applying it to the fifth generation of cellular networks (5G) is now possible due to the softwarization approach of the 5G architecture. However, the complex computations necessary to encode and decode symbols in RLNC limit the achievable throughput and energy efficiency on today’s mobile computers.

Most computers, phones, TVs, and network equipment nowadays come with multiple, possibly heterogeneous (i.e. slow low-power and fast high-power) processing cores. Previous multi-core research focused on RLNC optimization for big data chunks, which are useful for storage; network operations, however, tend to use smaller packets (e.g. Ethernet MTUs of 1500 bytes) and code over smaller generations of packets. Latency is also an increasingly important performance aspect in the upcoming Tactile Internet, yet it has received only little attention in RLNC optimization so far. The primary research question of my thesis is therefore how to optimize throughput and delay of RLNC on today’s most common computing architectures. By fully leveraging the resources of today’s consumer electronics hardware, RLNC can be practically adopted in today’s wireless systems with just a software update and improve network efficiency and user experience.

I am generally following a constructive approach by introducing algorithms and methods, and then demonstrating their performance by benchmarking actual implementations on common consumer electronics hardware against the state of the art. Inspired by linear algebra parallelization methods used in high-performance computing (HPC), I’ve developed an RLNC encoder/decoder which schedules matrix block tasks for multiple cores using a directed acyclic graph (DAG) based on the data dependencies between the tasks. A non-progressive variant works with pre-computed DAG schedules, which can be re-used to push throughput even higher. I’ve also developed a progressive variant which can be used to minimize latency. Both variants achieve higher throughput than the fastest previously known RLNC decoder, with up to three times the throughput for small generation sizes and short packets. Unlike previous approaches, they can utilize all cores, including on heterogeneous architectures. The progressive decoder greatly reduces latency while keeping a high throughput, reducing the latency by up to a factor of ten compared to the non-progressive variant.
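
The scheduling idea can be sketched independently of the GF(2^8) block kernels: tasks over matrix blocks are connected by data dependencies, and a worker pool executes every task whose dependencies are satisfied. The four-task toy DAG below is a simplification (a real decoder has many more blocks, and a production scheduler releases each task as soon as its own inputs are ready rather than in waves).

    from concurrent.futures import ThreadPoolExecutor

    tasks = {                            # task -> data dependencies
        "inv_A": [],                     # e.g. eliminate a diagonal block
        "mul_B": ["inv_A"],              # apply it to off-diagonal blocks
        "mul_C": ["inv_A"],
        "upd_D": ["mul_B", "mul_C"],     # Schur-complement-style update
    }

    def run(name):
        print("running", name)           # stand-in for the block math

    done = set()
    with ThreadPoolExecutor(max_workers=4) as pool:
        while len(done) < len(tasks):
            ready = [t for t, deps in tasks.items()
                     if t not in done and all(d in done for d in deps)]
            futures = [(t, pool.submit(run, t)) for t in ready]  # one wave
            for t, fut in futures:
                fut.result()
                done.add(t)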

Progressive decoders need special low-delay codes to release packets early instead of waiting for more dependent packets from the network. I introduce Caterpillar RLNC (CRLNC), a sliding-window code using a fixed-size sliding window over a stream of packets. CRLNC can be implemented on top of a conventional generation-based RLNC decoder. CRLNC combines the resilience against packet loss and the fixed resource bounds (number of computations and memory) of conventional generation-based RLNC decoders with the low delay of an infinite sliding-window decoder.

The DAG RLNC coders and the Caterpillar RLNC method together provide a powerful toolset to practically enable RLNC in 5G and other wireless systems while achieving the high throughput and low delay required by upcoming immersive and machine-control applications.

High-Throughput Air-to-Ground Connectivity for Aircraft

by Dr.-Ing. Sandra Hoppe, 2021, available at tud.qucosa.de

Abstract

Permanent connectivity to the Internet has become the de facto standard in the second decade of the 21st century. However, on-board aircraft connectivity is still limited. While the number of airlines offering in-flight connectivity increases, the current performance is insufficient to satisfy several hundred passengers simultaneously. There are several options to connect aircraft to the ground, i.e. direct air-to-ground links, satellites, and relaying via air-to-air links. However, each single solution is insufficient. Direct air-to-ground coverage is limited to the continent and coastal regions, the satellite links are limited by the minimum size of the spot beams, and air-to-air links need to be combined with a link to the ground. Moreover, even if a direct air-to-ground or satellite link is available, the peak throughput offered on each link is rarely achieved, as the capacity needs to be shared with other aircraft flying in the same coverage area.

The main challenge in achieving a high throughput per aircraft lies in the throughput allocation. All aircraft should receive a fair share of the available throughput. More specifically, as an aircraft contains a network itself, a weighted share according to the aircraft size should be provided. To address this problem, an integrated air-to-ground network, which is able to provide a high throughput to aircraft, is proposed here, and this work introduces a weighted-fair throughput allocation scheme to provide the desired allocation. While various aspects of aircraft connectivity have been studied in the literature, this work is the first to address an integrated air-to-ground network providing high-throughput connectivity to aircraft. This work models the problem of throughput allocation as a mixed-integer linear program. Two throughput allocation schemes are proposed: a centralized optimal solution and a distributed heuristic solution. For the optimal solution, two different objectives are introduced: a max-min-based and a threshold-based objective. The optimal solution is used as a benchmark for the achievable throughput in small scenarios, while the heuristic solution offers a distributed approach and can process scenarios with a higher number of aircraft. Additionally, an option for weighted-fair throughput allocation is included. Hence, large aircraft obtain a larger share of the throughput than smaller ones, leading to a throughput allocation that is fair with respect to aircraft size.

To analyze the performance of throughput allocation in the air-to-ground network, this work introduces an air-to-ground network model. It models the network realistically, but independently of specific network implementations such as 5G or WiFi, and it is adaptable to different scenarios. The aircraft network is studied based on captured flight traces. Extensive and representative parameter studies are conducted, including, among others, different link setups, geographic scenarios, aircraft capabilities, link distances, and link capacities.

The results show that the throughput can be distributed optimally during high-aircraft-density times using the optimal solution, and close to optimally using the heuristic solution. The mean throughput during these times in the optimal reference scenario with low Earth orbit satellites is 20 Mbps via direct air-to-ground links and 4 Mbps via satellite links, which corresponds to 10.7% and 1.9% of the maximum link throughput, respectively. Nevertheless, during low-aircraft-density times, which are less challenging, the throughput can reach more than 200 Mbps. The challenge therefore lies in providing a high throughput during high-aircraft-density times. In the larger central European scenario, using the heuristic scheme, a minimum of 22.9 Mbps, i.e. 3.2% of the maximum capacity, can be provided to all aircraft during high-aircraft-density times. Moreover, the critical parameters for obtaining a high throughput are identified. For instance, this work shows that multi-hop air-to-air links are dispensable for aircraft within direct air-to-ground coverage. While the computation time of the optimal solution limits the number of aircraft in the scenario, larger scenarios can be studied using the heuristic scheme. The results using weighted-fair throughput allocation show that the introduction of weights enables a user-fair throughput allocation instead of an aircraft-fair one. In conclusion, using the air-to-ground model and the two introduced throughput allocation schemes, the achievable weighted-fair throughput per aircraft and the respective link choices can be quantified.
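
The weighted-fairness idea itself reduces to a simple proportional rule once the link choices are fixed; the sketch below splits one link's capacity among the aircraft using it, proportionally to a size weight such as the seat count. This is a toy stand-in for the mixed-integer program and heuristic studied in the thesis, and all numbers are illustrative.

    def weighted_share(capacity_mbps, weights):
        # Larger weight (larger aircraft) -> proportionally larger share.
        total = sum(weights.values())
        return {ac: capacity_mbps * w / total for ac, w in weights.items()}

    # Example: a widebody weighted 5x a regional jet on a 200 Mbps link.
    print(weighted_share(200.0, {"widebody": 5, "narrowbody": 2, "regional": 1}))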

Agile Mobile Edge Computing and Network-coded Cooperation in 5G

by Dr.-Ing. Roberto Torre Arranz, 2021, available at tud.qucosa.de

Abstract

The architecture of the network is undergoing a series of structural changes, from the core network to the user, to pave the way for 5G. New infrastructure elements are being massively deployed, making 5G more heterogeneous. This emerging paradigm, along with new services and handheld devices, creates a massive, highly mobile, heterogeneous environment with hard constraints on throughput, latency, resilience, and power consumption. This dissertation presents Agile MEC (AMEC), a shift in the concept of MEC to support user mobility through the rapid relocation of services, and Network-coded Cooperation (NCC), a new system for massive content distribution in cellular networks. In summary, AMEC provides a mobility framework that reliably reduces the latency and power consumption in the system, and NCC improves network throughput, network resilience, and power consumption by offloading cellular traffic to underlay networks.

Next Generation Header Compression

by Dr.-Ing. Máté Tömösközi, 2021, available at tud.qucosa.de

Abstract

Header compression is one of the technologies that enable packet-switched computer networks to operate with higher efficiency even if the underlying physical link is limited. Originally meant to improve dial-up Telnet connections, it has since evolved into a complex, multi-faceted compression library that has been integrated into the third and fourth generations of cellular networks, among others. Beyond the promised benefit of decreased bandwidth usage, header compression has shown that it is capable of improving the quality of already existing services, such as real-time audio calls, and is a hot topic in development to this day, realising, for example, Internet Protocol (IP) version 6 support on resource-constrained low-power devices.
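
The core mechanism is simple to sketch: the compressor and decompressor share a context, static fields are transmitted once, and subsequent packets carry only the fields that changed. The dictionary-based toy below illustrates the principle only; real schemes such as RObust Header Compression (ROHC) define precise profiles, encodings, and robustness modes.

    tx_context = {}

    def compress(header):
        delta = {k: v for k, v in header.items() if tx_context.get(k) != v}
        tx_context.update(header)
        return delta                       # often just one or two fields

    def decompress(delta, rx_context):
        rx_context.update(delta)
        return dict(rx_context)

    # RTP-like flow: addresses stay static, sequence/timestamp tick upward.
    h1 = {"src": "10.0.0.1", "dst": "10.0.0.2", "seq": 1, "ts": 160}
    h2 = {"src": "10.0.0.1", "dst": "10.0.0.2", "seq": 2, "ts": 320}
    d1, d2 = compress(h1), compress(h2)    # d2 == {'seq': 2, 'ts': 320}
    rx = {}
    assert decompress(d1, rx) == h1 and decompress(d2, rx) == h2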

However, header compression is ill-equipped to handle the stringent requirements and challenges posed by the coming fifth generation of wireless and cellular networks (5G) and its applications. Even though it can be considered an already well-developed area of computer networking that can compress protocol headers with unparalleled efficiency, header compression still operates under some assumptions and restrictions that could, to a certain degree, prevent its employment outside of cellular Voice over IP transmissions. Albeit some improvements in the latency domain can be achieved with its help, the application of header compression in both largely interconnected networks and very dynamic ones — such as wireless mesh and vehicular networks — is not yet assured, as the topic, in this respect, is still relatively new and unexplored.

The main goal of my thesis is the presentation and evaluation of novel ideas that support the application of header compression concepts to future wireless use cases, as this holds alluring benefits for the coming network generations if applied correctly. The dissertation provides a detailed treatment of my contributions in the specific research areas of header compression and network coding, which encompass novel proposals for their enhancement in 5G uses, such as broadcastability and online optimisation, as well as their subsequent analysis from various perspectives, including the achievable compression gains, delay reduction, transmission efficiency, and energy consumption, to name a few. Besides the focus on enabling header compression in 5G, the development of traffic-agnostic and various network-coded compression concepts is also introduced, to attain the benefits of both techniques at the same time, namely reduced bandwidth usage and high reliability in latency-sensitive, heterogeneous, and error-prone mesh networks. The generalisation of compression is achieved by employing various machine learning concepts that can approximate the compression characteristics of any packet-based communication flow, while network coding facilitates the exploitation of the low-latency benefits of error-correcting codes in heavily interconnected wireless networks.

Opportunistic Routing with Network Coding in Powerline Communications

 

Abstract

Opportunistic Routing (OR) can be used as an alternative to legacy routing (LR) protocols in networks with a broadcast lossy channel and the possibility of overhearing the signal. The power line medium creates such an environment. OR can exploit the channel better than LR because it allows the cooperation of all nodes that receive any data. With LR, only a chain of nodes is selected for communication; the other nodes drop the received information. We investigate OR for the one-source one-destination scenario with a single traffic flow.

First, we evaluate the upper bound on the achievable data rate and advocate a decentralized algorithm for its calculation. This knowledge is used in the design of the Basic Routing Rules (BRR). They use a link-quality metric that equals the upper bound on the achievable data rate between the given node and the destination; we call it the node priority. It considers the possibility of multi-path communication and the packet loss correlation. BRR achieves the optimal data rate under certain theoretical assumptions. The Extended BRR (BRR-E) is free of them. The major difference between BRR and BRR-E lies in the usage of Network Coding (NC) for predicting the feedback. In this way, the protocol overhead can be severely reduced. We also study the Automatic Repeat-reQuest (ARQ) mechanism that is applicable with OR. It differs from ARQ with LR in that each sender has several sinks and none of the sinks except the destination requires full recovery of the original message.

Using BRR-E, ARQ, and other services like network initialization and link state control, we design the Advanced Network Coding based Opportunistic Routing protocol (ANChOR). With analytic and simulation results, we demonstrate the near-optimum performance of ANChOR. For the triangular topology, the achievable data rate is just 2% away from the theoretical maximum, and it is up to 90% higher than what is possible with LR. Using the G.hn standard, we also show full protocol stack simulation results (including IP/UDP and a realistic channel model). In this simulation, we reveal that the gain of OR over LR can be increased even further by reducing the head-of-line problem in ARQ. Even considering the ANChOR overhead of additional headers and feedback, it outperforms the original G.hn setup by up to 40% in data rate and up to 60% in latency.
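
The recursive character of such a node-priority metric can be illustrated with a textbook anypath cost computation. The sketch below uses the simpler expected-transmission-count metric under independent losses; the thesis metric is a data-rate bound that additionally accounts for packet loss correlation, so this is only a structural analogy.

    def anypath_cost(p, D):
        # p[j]: delivery probability to forwarding candidate j
        # D[j]: that candidate's own cost; lower cost = higher priority
        order = sorted(range(len(D)), key=lambda j: D[j])
        q_all = 1.0
        for j in order:
            q_all *= 1.0 - p[j]
        hit = 1.0 - q_all              # P(at least one candidate receives)
        rest, q_prefix = 0.0, 1.0
        for j in order:                # j forwards iff no better node heard it
            rest += p[j] * q_prefix * D[j]
            q_prefix *= 1.0 - p[j]
        return 1.0 / hit + rest / hit

    # Two candidates: a cheap one (cost 1) heard 60% of the time and a
    # costlier one (cost 2) heard 80% of the time.
    print(anypath_cost(p=[0.6, 0.8], D=[1.0, 2.0]))   # ~2.43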

Adaptive MAC-Layer Protocol Switching for Power Line Communications (PLC)

by Dr.-Ing. Stanislav Mudriievskyi, 2017, available at dr.hut-verlag.de

Abstract

The ongoing change from conventional electrical grids towards smart grids brings new use cases to communication technologies. These influence the selection of appropriate communication systems according to the requirements of smart grid applications. Power Line Communications (PLC) is the most native communication technology for the smart grid, as it uses the same medium for energy and data transmission. In order to meet the requirements of smart grid applications perfectly, the corresponding PLC system should be adapted dynamically. In this thesis, adaptation mechanisms at the Medium Access Control (MAC) layer are studied using the example of a G.hn PLC system.

At first, the Physical (PHY) layer of G.hn is studied, and its model is implemented in the network simulator 3 (ns-3) according to the standard. At the MAC layer, two channel access schemes can be used: Time Division Multiple Access (TDMA) or Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). The CSMA/CA access scheme has numerous variations; in this thesis, an adaptive version of CSMA/CA from another PLC system is analyzed in detail and improved. For the TDMA scheme, equal slot assignment is selected and used for symmetric uplink traffic. The assignment of slots for scenarios with downlink traffic and for special cases with repeaters is studied as well.

It is known that the two MAC access schemes under consideration have their advantages and disadvantages depending on the traffic load. Therefore, in this thesis the Adaptive Layer Switching (ALS) mechanism is proposed, which switches the MAC layer from CSMA/CA to TDMA or vice versa. The switching occurs for the whole network.
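
Conceptually, the switching decision can be as simple as a load threshold with hysteresis, since CSMA/CA wins at light, bursty load (no wasted slots) while TDMA wins near saturation (no collisions). The thresholds below are illustrative; the actual ALS decision logic in the thesis is more elaborate.

    def choose_mac(offered_load, switch_up=0.6, switch_down=0.4):
        # offered_load: fraction of channel capacity currently demanded
        if offered_load >= switch_up:
            return "TDMA"                 # saturation: avoid collisions
        if offered_load <= switch_down:
            return "CSMA/CA"              # light load: avoid empty slots
        return None                       # hysteresis band: keep current MAC

    for load in (0.2, 0.5, 0.9):
        print(load, choose_mac(load))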

The novel ALS algorithm is described and implemented in the discrete-event simulator ns-3. The operation of networks with and without repeaters under varying offered traffic is investigated. Three realistic network topologies are used: a small one without repeaters, a medium-sized one with one repeater, and a large one with three repeaters. These are studied with User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) traffic in the uplink and downlink directions. The influence of impulsive noise is studied as well. Finally, the performance of CSMA/CA, TDMA, and ALS in terms of delay and throughput is evaluated.

Analytische Modellierung zur Systemdimensionierung von 4G WiMAX Mobilfunknetzen

by Dr.-Ing. Volker Richter, 2017, available at dr.hut-verlag.de

Abstract

The increasing proliferation of smartphones is leading to exponential growth in mobile data traffic. This requires further expansion of fourth-generation mobile networks such as WiMAX and LTE. At the same time, fierce competition forces network operators to reduce deployment and operating costs considerably. The mean cell utilization therefore has to be increased, and QoS support has to be introduced to guarantee service quality. This requires planning tools to be developed further to take QoS effects into account.

This thesis describes analytical models for the system dimensioning of the 4G mobile network WiMAX.

The underlying IEEE 802.16-2012 standard specifies the frame format, the messages, and the QoS concept with traffic policing and scheduling. Concrete algorithms, however, are not described.

No existing traffic-policing approach fulfills the standard's requirement of a strictly bounded integration interval for the traffic measurement. Therefore, two new algorithms, ACC and APP, are presented. Their performance evaluation, together with an existing approach, shows that only ACC fulfills the requirements of the standard.

With TRBS, a QoS-supporting hybrid scheduler is presented. Subordinate to the QoS requirements, it achieves equal treatment of the subscribers with respect to radio resources. In contrast to other approaches, TRBS does not distinguish between service classes but between guaranteed and permitted transmission requests. A comparison with a widespread PF scheduler shows a significant performance gain, in particular for real-time connections.

To develop the analytical models, the influence of the frame format and the overhead is first investigated under ideal channel conditions. With an accuracy of 10^-3, the models for guaranteed and permitted traffic outperform known approaches by a factor of 10. In addition, a more efficient variant of the downlink subframe with burst aggregation is modeled.

Finally, the analytical models are extended to describe realistic channel conditions. Under these conditions, the accuracy drops to 10^-1. The presented analytical models can be used in commercial planning tools and thus contribute to a more cost-efficient deployment and operation of the 4G mobile network WiMAX.

Energy-Efficient Indoor Localization Based on Wireless Sensor Networks

by Dr.-Ing. Jorge Juan Robles, 2015, available at dr.hut-verlag.de

Abstract

This thesis deals with improving the performance of WSN-based localization systems. In particular, our focus is on increasing the energy efficiency of the battery-powered nodes. For this, we investigate the main features of three areas of a localization system: the measurement process, the position estimation, and the communication protocols.

Experiments were conducted to evaluate the accuracy of the Received Signal Strength Indicator (RSSI) and Phase of Arrival (POA) ranging methods. Based on the measurements, novel distance error models are derived for the POA ranging method.
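
For RSSI-based ranging, distances are typically obtained by inverting a log-distance path-loss model. The sketch below shows this inversion; the reference power and path-loss exponent are environment-dependent calibration values, and the numbers here are illustrative rather than taken from the thesis measurements.

    def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.5):
        # p0_dbm: expected RSSI at 1 m; n: path-loss exponent.
        return 10 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

    print(f"{rssi_to_distance(-60.0):.1f} m")   # ~6.3 m for these parameters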

The position accuracy of several localization algorithms is evaluated in different scenarios by using simulated and real data.

We designed a communication protocol called the Highly Configurable Protocol (HCP) for RSSI-based localization systems. In this thesis we also present HCP version 2 (HCPv2), in which the nodes can use both RSSI and POA measurements for the position estimation.