The upcoming deployment of 5G/6G networks, online services such as 4K/8K video streaming and online gaming, the development of the Internet of Things connecting billions of active devices, and high-speed optical access networks impose progressively higher requirements on the underlying optical network infrastructure. With current network infrastructures approaching unsustainable levels of bandwidth utilization and data traffic, and with the electrical power consumption of communication systems becoming a serious concern in view of global carbon-footprint targets, network operators and system suppliers are now looking for ways to meet these demands while also maximizing the return on their investments. The search for a solution to this predicted "capacity crunch" has led to renewed interest in alternative approaches to system design, including the use of high-order modulation formats and high symbol rates enabled by coherent detection, the development of wideband transmission tools, new fiber types (such as multi-mode and multi-core fibers), and, finally, the implementation of advanced digital signal processing (DSP) to mitigate optical channel nonlinearities and improve the received signal-to-noise ratio (SNR). All of these options are intended to boost the capacity of optical systems to meet the new traffic demands.

This thesis focuses on the last of these possible solutions to the "capacity crunch", answering the question: "How can machine learning improve existing optical communications by minimizing the quality penalties introduced by transceiver components and fiber nonlinearity?" Ultimately, by identifying a suitable machine learning solution (or family of solutions) to act as a nonlinear channel equalizer for optical transmission, we can improve the system's throughput and even reduce the signal processing complexity, meaning that more can be transmitted over the optical infrastructure already in place. In this thesis, the problem is broken into four parts: i) the development of new machine learning architectures that achieve appealing levels of performance; ii) the correct assessment of computational complexity and hardware realization; iii) the application of AI techniques to achieve fast, reconfigurable solutions; and iv) the creation of a theoretical foundation, with studies demonstrating the caveats and pitfalls of machine learning methods used for optical channel equalization.

Common measures such as the bit error rate, quality factor, and mutual information are used to assess the systems studied in this thesis. Based on simulation and experimental results, we conclude that neural-network-based equalization can indeed improve the quality of transmission while keeping the computational complexity close to that of classical DSP algorithms.
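As an illustration of the general idea of neural-network-based channel equalization, the following is a minimal sketch, not the architecture or the system studied in the thesis: a small feed-forward network maps a sliding window of received symbols back to the transmitted symbols. The toy channel model and all names and parameters (`make_toy_channel`, `WINDOW`, `HIDDEN`, the distortion coefficients) are assumptions introduced only for this example.

```python
# Illustrative sketch only: a minimal feed-forward neural-network equalizer that
# maps a sliding window of received symbols back to the transmitted symbols.
# The toy channel, window size, and hidden width are assumptions made for this
# example; they are not the architectures or system parameters of the thesis.
import numpy as np

rng = np.random.default_rng(0)

def make_toy_channel(n_symbols=5000):
    """QPSK symbols through a toy channel: short memory (ISI), a cubic
    nonlinearity as a crude stand-in for fiber nonlinearity, and AWGN."""
    bits = rng.integers(0, 2, size=(n_symbols, 2))
    tx = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
    isi = np.convolve(tx, [0.05, 1.0, 0.05], mode="same")
    rx = isi + 0.08 * np.abs(isi) ** 2 * isi
    rx += 0.05 * (rng.standard_normal(n_symbols) + 1j * rng.standard_normal(n_symbols))
    return rx, tx

def frame(rx, window):
    """Stack real and imaginary parts of a sliding window of received symbols."""
    pad = window // 2
    rx_padded = np.pad(rx, pad)
    idx = np.arange(len(rx))[:, None] + np.arange(window)[None, :]
    win = rx_padded[idx]
    return np.concatenate([win.real, win.imag], axis=1)

WINDOW, HIDDEN, LR, EPOCHS = 11, 32, 0.05, 500

rx, tx = make_toy_channel()
X = frame(rx, WINDOW)                       # (N, 2*WINDOW) real-valued features
Y = np.stack([tx.real, tx.imag], axis=1)    # (N, 2) regression targets

# One-hidden-layer MLP trained with full-batch gradient descent on an MSE loss.
W1 = 0.1 * rng.standard_normal((X.shape[1], HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = 0.1 * rng.standard_normal((HIDDEN, 2));          b2 = np.zeros(2)

for epoch in range(EPOCHS):
    H = np.tanh(X @ W1 + b1)                # forward pass through the hidden layer
    out = H @ W2 + b2                       # equalized (real, imag) estimates
    err = out - Y
    loss = np.mean(err ** 2)
    # backward pass: gradients of the mean-squared error
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= LR * gW1; b1 -= LR * gb1; W2 -= LR * gW2; b2 -= LR * gb2
    if epoch % 100 == 0:
        print(f"epoch {epoch:4d}  training MSE {loss:.4f}")
```

In practice, such an equalizer would be judged by the measures mentioned above, for example the bit error rate or Q-factor of the recovered symbols before and after equalization, together with a count of real multiplications per recovered symbol to compare its complexity against classical DSP blocks.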
- Coherent Optical Communications
- Nonlinear Equalization
- Machine Learning
- Computational Complexity
Machine Learning Techniques To Mitigate Nonlinear Impairments In Optical Fiber System
Freire de Carvalho Souza, P. J. (Author). Dec 2022
Student thesis: Doctoral Thesis › Doctor of Philosophy