Self-Supervised Learning for Low-Light Image Enhancement
Main Article Content
Abstract
This paper proposes an end-to-end emotion recognition algorithm based on deep learning to address two problems in emotion recognition tasks: insufficient semantic modeling and interference from redundant information. The method employs a multi-layer Transformer architecture to model global dependencies within input sequences and integrates a gating mechanism to selectively enhance emotion-related features, significantly improving the model's ability to capture complex emotional expressions. In the overall framework, the model first converts raw input into high-dimensional embeddings. It then uses stacked encoders to capture contextual information and applies a gating mechanism to filter the core emotional signals. Finally, pooling and a classifier determine the emotion category. To systematically validate the proposed method, a comprehensive evaluation scheme is constructed, including multiple comparative experiments and sensitivity analyses, with model performance assessed from multiple perspectives such as accuracy, F1 score, and AUC. Experimental results show that the method remains stable and robust under different learning rates, input perturbations, and training-data ratios. It also outperforms existing mainstream methods across multiple metrics, demonstrating clear structural advantages and expressive capability in emotion recognition tasks.
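The pipeline described in the abstract (embedding, stacked Transformer encoders, feature gating, pooling, classification) could be sketched roughly as follows. The paper does not publish code, so the layer sizes, the sigmoid form of the gate, and the use of mean pooling are all assumptions for illustration:

```python
import torch
import torch.nn as nn

class GatedTransformerClassifier(nn.Module):
    """Illustrative sketch: embed input tokens, encode context with
    stacked Transformer layers, gate emotion-related features, then
    pool over the sequence and classify. All sizes are hypothetical."""

    def __init__(self, vocab_size=1000, d_model=64, nhead=4,
                 num_layers=2, num_classes=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        # Assumed gating mechanism: a learned per-token sigmoid mask
        # that amplifies or suppresses each feature channel.
        self.gate = nn.Sequential(nn.Linear(d_model, d_model), nn.Sigmoid())
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))  # (batch, seq, d_model)
        h = h * self.gate(h)                     # element-wise gating
        pooled = h.mean(dim=1)                   # mean pooling over time
        return self.classifier(pooled)           # (batch, num_classes)

model = GatedTransformerClassifier()
logits = model(torch.randint(0, 1000, (8, 16)))  # 8 sequences of length 16
```

Here `logits` has shape `(8, 6)`, one score per assumed emotion class; a softmax over the last dimension would yield class probabilities.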
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
Mind forge Academia also operates under the Creative Commons CC-BY 4.0 licence. This allows you to copy and redistribute the material in any medium or format for any purpose, even commercially, provided that you give appropriate attribution.