> Moiré patterns, caused by frequency aliasing between fine repetitive structures and a camera sensor's sampling process, have been a significant obstacle in various real-world applications, such as consumer photography and industrial defect inspection. With the advancements in deep learning algorithms, numerous studies, predominantly based on convolutional neural networks (CNNs), have suggested various solutions to address this issue. Despite these efforts, existing approaches still struggle to effectively eliminate artifacts due to the diverse scales, orientations, and color shifts of moiré patterns, primarily because the constrained receptive field of CNN-based architectures limits their ability to capture the complex characteristics of moiré patterns. In this paper, we propose MZNet, a U-shaped network designed to bring images closer to a 'Moiré-Zero' state by effectively removing moiré patterns. It integrates three specialized components: the Multi-Scale Dual Attention Block (MSDAB) for extracting and refining multi-scale features, the Multi-Shape Large Kernel Convolution Block (MSLKB) for capturing diverse moiré structures, and a Feature Fusion-Based Skip Connection for enhancing information flow. Together, these components enhance local texture restoration and large-scale artifact suppression. Experiments on benchmark datasets demonstrate that MZNet achieves state-of-the-art performance on high-resolution datasets and delivers competitive results on a lower-resolution dataset, while maintaining a low computational cost, suggesting that it is an efficient and practical solution for real-world applications.
The model follows a U-Net architecture with four levels of encoders and decoders. MSDABs are used in both the encoder and decoder, while the middle block incorporates a single MSLKB. FFSC modules in the skip connections facilitate feature fusion across scales. (For more details, please refer to our paper!)
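The overall data flow can be illustrated with a minimal, framework-free sketch. All block internals are identity stubs, the fusion is a plain addition, and channel widths are held fixed (the real network refines features in each MSDAB/MSLKB and widens channels per level); only the four-level U-shaped routing described above is shown.

```python
import numpy as np

def down(x):
    """Stride-2 downsampling stub: halve H and W by 2x2 average pooling."""
    n, c, h, w = x.shape
    return x.reshape(n, c, h // 2, 2, w // 2, 2).mean(axis=(3, 5))

def up(x):
    """Nearest-neighbour upsampling stub: double H and W."""
    return x.repeat(2, axis=2).repeat(2, axis=3)

def block(x):
    """Placeholder for MSDAB / MSLKB processing (identity here)."""
    return x

def mznet_shape_flow(x):
    """Trace the four-level U-shaped routing with skip connections."""
    skips = []
    for _ in range(4):            # encoder: MSDAB at each level
        x = block(x)
        skips.append(x)
        x = down(x)
    x = block(x)                  # middle block: a single MSLKB
    for skip in reversed(skips):  # decoder: skips fused via FFSC
        x = up(x)
        x = block(x + skip)       # fusion stub: simple addition
    return x

out = mznet_shape_flow(np.zeros((1, 3, 64, 64)))
print(out.shape)  # (1, 3, 64, 64)
```

The sketch confirms that each decoder level receives an encoder feature map of matching resolution, which is the property the FFSC modules rely on.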
We propose a novel architecture composed of several specialized modules to address moiré patterns effectively. First, the Multi-Scale Dual Attention Block (MSDAB) captures moiré patterns at various scales using a combination of a Multi-Dilation Convolution Module (MDCM) and a Dual Attention Module (DAM). MDCM expands the receptive field through depth-wise dilated convolutions, while DAM applies attention mechanisms (Simplified Channel Attention, SCA, and Large Kernel Attention, LKA) to suppress both fine textures and large artifacts. To handle directional diversity, the Multi-Shape Large Kernel Convolution Block (MSLKB) uses depth-wise convolutions with square, horizontal, and vertical kernels at the network bottleneck. Finally, the Feature Fusion-Based Skip Connection (FFSC) enhances detail reconstruction by injecting aggregated encoder features into each decoder level, ensuring rich multi-scale context.
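The multi-shape idea behind MSLKB can be sketched in plain numpy: three parallel depth-wise branches with square, horizontal, and vertical kernels whose responses are summed. The kernel size (7) and the uniform averaging weights are illustrative assumptions; in the actual block the large kernels are learned per channel.

```python
import numpy as np

def depthwise_conv2d(x, kernel):
    """Naive depth-wise 2D convolution with zero 'same' padding.
    x: (C, H, W); kernel: (kh, kw), shared across channels for brevity."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))
    c, h, w = x.shape
    out = np.zeros((c, h, w))
    for i in range(h):
        for j in range(w):
            out[:, i, j] = (xp[:, i:i + kh, j:j + kw] * kernel).sum(axis=(1, 2))
    return out

def mslkb_branches(x, k=7):
    """Three parallel depth-wise branches with square (k x k),
    horizontal (1 x k), and vertical (k x 1) kernels, summed.
    Uniform averaging kernels stand in for learned weights."""
    square = np.ones((k, k)) / (k * k)
    horiz  = np.ones((1, k)) / k
    vert   = np.ones((k, 1)) / k
    return sum(depthwise_conv2d(x, w) for w in (square, horiz, vert))

y = mslkb_branches(np.ones((4, 16, 16)))
print(y.shape)  # (4, 16, 16)
```

The elongated 1×k and k×1 branches respond strongly to horizontal and vertical stripe structures respectively, which is why mixing the three shapes helps cover the directional diversity of moiré patterns.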
| Dataset | Metric | Input | DMCNN | MDDM | WDNet | MopNet | MBCNN | FHDe²Net | ESDNet | ESDNet-L | MCFNet | P-BiC | MZNet (Ours) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TIP2018 | PSNR↑ | 20.30 | 26.77 | - | 28.08 | 27.75 | 30.03 | 27.78 | 29.81 | 30.11 | 30.13 | 30.56 | 30.18 |
| | SSIM↑ | 0.738 | 0.871 | - | 0.904 | 0.895 | 0.893 | 0.896 | 0.916 | 0.920 | 0.920 | 0.925 | 0.921 |
| FHDMi | PSNR↑ | 17.974 | 21.538 | 20.831 | - | 22.756 | 22.309 | 22.930 | 24.500 | 24.882 | 24.823 | 25.450 | 26.120 |
| | SSIM↑ | 0.7033 | 0.7727 | 0.7343 | - | 0.7958 | 0.8095 | 0.7885 | 0.8351 | 0.8440 | 0.8426 | 0.8473 | 0.8624 |
| | LPIPS↓ | 0.2837 | 0.2477 | 0.2515 | - | 0.1794 | 0.1980 | 0.1688 | 0.1354 | 0.1301 | 0.1288 | 0.1493 | 0.1042 |
| UHDM | PSNR↑ | 17.117 | 19.914 | 20.088 | 20.364 | 19.489 | 21.414 | 20.388 | 22.119 | 22.422 | 22.484 | 23.30 | 23.632 |
| | SSIM↑ | 0.5089 | 0.7575 | 0.7441 | 0.6497 | 0.7572 | 0.7932 | 0.7496 | 0.7956 | 0.7985 | 0.8001 | 0.8007 | 0.8096 |
| | LPIPS↓ | 0.5314 | 0.3764 | 0.3409 | 0.4882 | 0.3857 | 0.3318 | 0.3519 | 0.2551 | 0.2454 | 0.2536 | 0.2324 | 0.2237 |
| | Params↓ (M) | - | 1.426 | 7.637 | 3.360 | 58.565 | 14.192 | 13.571 | 5.934 | 10.623 | 6.181 | 4.922 | 14.824 |
| | MACs↓ (T) | - | 2.258 | 3.679 | 1.757 | - | 8.522 | 33.23 | 2.247 | 3.689 | 6.903 | 1.223 | 1.190 |
```bibtex
@misc{lee2025moirezeroefficienthighperformance,
  title={Moir\'e Zero: An Efficient and High-Performance Neural Architecture for Moir\'e Removal},
  author={Seungryong Lee and Woojeong Baek and Younghyun Kim and Eunwoo Kim and Haru Moon and Donggon Yoo and Eunbyung Park},
  year={2025},
  eprint={2507.22407},
  archivePrefix={arXiv}
}
```