2023 Computer Science & Technology
The authenticity of information is a key security factor for systems in time-sensitive networking (TSN). However, directly introducing a traditional security authentication mechanism leads to a significant reduction in the schedulability of the system, and existing methods still suffer from limited application scenarios and high resource consumption. To address this problem, a security-aware scheduling method for TSN was proposed. Firstly, based on the traffic characteristics of TSN, a time-efficient one-time signature security mechanism was designed to provide efficient multicast source authentication for messages. Secondly, a corresponding security model was proposed to evaluate the mechanism and to describe its impact on tasks and traffic. Finally, the proposed security-aware scheduling method was modeled mathematically: on top of traditional scheduling constraints, constraints related to the security mechanism were added, the optimization objective was set to minimize the end-to-end delay of applications, and constraint programming was used to solve the problem. Simulation results show that the improved one-time signature mechanism can effectively protect the authenticity of key information in TSN with limited impact on scheduling. In multiple test cases of different sizes generated from real industrial scenarios, the average end-to-end delay and bandwidth consumption of the generated applications increased by only 13.3% and 5.8%, respectively. Compared with similar methods, this method consumes less bandwidth and is thus more suitable for TSN networks with strict bandwidth restrictions.
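The abstract does not specify the signature construction; as a rough illustration of the hash-based one-time signature family such a mechanism typically builds on (a simplified HORS-style sketch, not the paper's scheme; the function names and the parameters t, n, k are all hypothetical):

```python
import hashlib
import secrets

def keygen(t=16, n=32):
    """Generate t random secret values; the public key is their hashes."""
    sk = [secrets.token_bytes(n) for _ in range(t)]
    pk = [hashlib.sha256(s).digest() for s in sk]
    return sk, pk

def _indices(msg, t, k=4):
    # Derive k key indices deterministically from the message digest.
    d = hashlib.sha256(msg).digest()
    return [d[i] % t for i in range(k)]

def sign(sk, msg, k=4):
    """Signing reveals the secrets at the message-derived indices."""
    return [sk[i] for i in _indices(msg, len(sk), k)]

def verify(pk, msg, sig, k=4):
    """Verification only needs hashing, which is why such schemes are fast."""
    idx = _indices(msg, len(pk), k)
    return all(hashlib.sha256(s).digest() == pk[i] for s, i in zip(sig, idx))
```

As with any one-time signature, a key pair must not be reused across messages, which is why scheduling has to account for frequent key distribution.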
In addition to Gaussian noise, sparse noise with impulsive properties arises in the signal acquisition process. Common robust sparse signal recovery models can recover the original sparse signal in a sparse-noise environment. However, in many practical applications the structural sparsity of the original signal, for example gradient sparsity, also needs to be considered. In order to recover the sparse structure of the original high-dimensional signal when sparse noise and Gaussian noise coexist, this paper proposed two nonconvex, nonsmooth optimization models based on truncated L1-L2 total variation (TV) and 3D truncated L1-L2 TV, respectively. These models were solved by the proximal alternating linearized minimization algorithm with extrapolation, and the subproblems involved were solved by the proximal difference-of-convex algorithm with extrapolation. Under the assumption that the potential function has the Kurdyka-Lojasiewicz (KL) property, a convergence analysis of these algorithms was given. The numerical experiments cover grey images with Gaussian noise, color images with mixed noise, grey video with mixed noise, and so on, with the peak signal-to-noise ratio (PSNR) used as the criterion for recovery quality. The experimental results show that the new models can correctly recover the original structured sparse signal and achieve better PSNR values in the same noisy environment.
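For intuition, the L1-L2 metric of a vector is ||v||_1 - ||v||_2, which is zero exactly when v has at most one nonzero entry, and a truncated variant excludes the largest-magnitude entries so genuine jumps are not penalized. A minimal 1-D sketch, assuming the truncation drops the t largest forward differences (the exact truncation rule is our assumption, not a detail from the abstract):

```python
import math

def l1_minus_l2(v):
    """L1-L2 sparsity metric: ||v||_1 - ||v||_2 (zero iff v is at most 1-sparse)."""
    return sum(abs(x) for x in v) - math.sqrt(sum(x * x for x in v))

def truncated_l1_l2_tv(signal, t=1):
    """Sketch of a 1-D truncated L1-L2 TV penalty: apply L1-L2 to the
    forward differences, after dropping the t largest-magnitude
    differences so that true edges of the signal are not penalized."""
    d = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    d.sort(key=abs, reverse=True)
    return l1_minus_l2(d[t:])
```

A piecewise-constant signal with a single jump thus incurs zero penalty once the jump is truncated, while small oscillations caused by noise remain penalized.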
In recent years, with the growing number of smart contracts and the increasing economic losses caused by contract vulnerabilities, the security of smart contracts has attracted more and more attention. Vulnerability detection methods based on deep learning can solve the low detection efficiency and insufficient accuracy of early traditional smart contract vulnerability detection methods. However, most existing deep-learning-based methods directly use the smart contract source code, opcode sequence, or bytecode sequence as the input of the model, which dilutes the effective information by introducing too much invalid information. To solve this problem, this paper proposed a smart contract vulnerability detection method based on a capsule network and an attention mechanism. Considering the execution timing information of the program, the method extracts the key opcode sequence of the smart contract as the source code feature, and then trains a hybrid network combining a capsule network with an attention mechanism. The capsule network extracts the contextual information of the smart contract and the connection between the parts and the whole, while the attention mechanism assigns different weights to opcodes according to their importance. The experimental results show that the F1 score and accuracy of the proposed algorithm on the smart contract dataset are 94.48% and 97.15%, respectively, indicating that it outperforms other detection methods.
Because of its important role in model compression, knowledge distillation has attracted much attention in the field of deep learning. However, the classical knowledge distillation algorithm only uses the information of a single sample and neglects the relationships between samples, which limits its performance. To improve the efficiency and performance of knowledge transfer, this paper proposed a feature-space-embedding based contrastive knowledge distillation (FSECD) algorithm. The algorithm adopts an efficient batch construction strategy that embeds each student feature into the teacher feature space, so that each student feature forms N contrastive pairs with the N teacher features in the batch. In each pair, the teacher feature is already optimized and fixed, while the student feature is tunable. During training, the distance between positive pairs is narrowed and the distance between negative pairs is expanded, so that the student model can perceive and learn the inter-sample relations of the teacher model, realizing the transfer of knowledge from teacher to student. Extensive experiments with different teacher/student architecture settings on the CIFAR-100 and ImageNet datasets show that FSECD achieves significant performance improvement without additional network structures or data when compared with other cutting-edge distillation methods, which further proves the importance of inter-sample relations in knowledge distillation.
In order to adapt to the limited computing performance and energy of the numerous lightweight sensor nodes involved in encrypted transmission in the IoT (Internet of Things), this paper proposed a fast modulus algorithm (the CZ-Mod algorithm) based on Mersenne-like numbers, to solve the bottlenecks in computing speed and power consumption that arise when sensors run PKI (Public Key Infrastructure) encryption algorithms such as RSA (Rivest-Shamir-Adleman), DHM (Diffie-Hellman-Merkle), and ElGamal, and to simplify the corresponding hardware encryption circuit logic design. The CZ-Mod algorithm uses the mathematical properties of Mersenne numbers to lower the time complexity of the essential mod (modulo) operation to O(n). Firstly, a fast modulus algorithm, mod1, using Mersenne-like numbers as the modulus was presented, turning the complex mod operation into simple binary shift/add operations. Secondly, a fast modulus algorithm, mod2, using any positive integer near a Mersenne-like number as the modulus was presented, expanding the range of modulus values while keeping the mod operation simple. Logic circuits for the mod1 and mod2 operations were then designed, simplifying the mod operation hardware circuit. Finally, the above work was applied to the key exchange of IoT nodes, so as to lower the computational complexity and improve the speed of PKI encryption algorithms. The experimental results indicate that DHM key exchange with the CZ-Mod algorithm can reach 2.5 to 4 times the speed of the conventional algorithm; the CZ-Mod algorithm is concise and fits the hardware circuit design of IoT sensors.
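The shift/add idea behind mod1 and mod2 can be sketched as repeatedly folding the high bits of the operand back onto the low bits, since 2^k ≡ 1 (mod 2^k - 1) and 2^k ≡ c (mod 2^k - c). A minimal software sketch (function names are ours; the paper's contribution is the hardware logic design):

```python
def mod_mersenne(x, k):
    """x mod (2**k - 1) using only shifts, masks and adds (the mod1 idea)."""
    m = (1 << k) - 1
    while x > m:
        x = (x & m) + (x >> k)  # fold the high bits onto the low bits
    return 0 if x == m else x

def mod_near_mersenne(x, k, c):
    """x mod (2**k - c) for a small positive c (the mod2 idea): each
    fold replaces the high bits with c times their value, since
    2**k is congruent to c modulo 2**k - c."""
    p = (1 << k) - c
    while x >> k:
        x = (x & ((1 << k) - 1)) + c * (x >> k)
    while x >= p:  # final correction into [0, p)
        x -= p
    return x
```

Both routines avoid division entirely, which is what makes them cheap in sensor hardware; mod2 trades one small constant multiply for a much wider choice of moduli.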
Intelligent education is a key research direction of artificial intelligence, and its core is to describe students' cognitive processes using the knowledge points in test questions. Aiming at the problem that existing cognitive diagnosis models insufficiently mine information about students, test questions, and their interactions, this study proposed a cognitive diagnosis model integrating forgetting and the importance of knowledge points. Based on the historical interactions between test questions and knowledge points, the model introduces forgetting factors combined with the difficulty information of knowledge points, thereby alleviating the insufficient mining of student information. Through an attention mechanism, the importance of each knowledge point to a test question is obtained, alleviating the insufficient mining of question information. Learning the interaction between students and test questions through a Transformer alleviates the insufficiency of student-question interaction information. Experiments on classic datasets show that the accuracy (Acc), root mean square error (RMSE), and area under the curve (AUC) of this method are 0.716, 0.445, and 0.776 on Math1; 0.725, 0.432, and 0.807 on Math2; and 0.741, 0.427, and 0.779 on Assistment, respectively, outperforming other existing models. The proposed method illustrates the importance of knowledge-point importance and timeliness for cognitive modeling.
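The abstract does not give the functional form of the forgetting factor; a hypothetical Ebbinghaus-style sketch in which older interactions contribute less evidence and harder knowledge points decay faster (the base strength s0 and the overall shape are our assumptions, not the paper's formula):

```python
import math

def forgetting_weight(delta_t, difficulty, s0=5.0):
    """Hypothetical forgetting factor: the weight of mastery evidence
    from an interaction delta_t time steps ago decays exponentially,
    and a larger knowledge-point difficulty shrinks the memory
    strength, so harder knowledge points are forgotten faster."""
    strength = s0 / (1.0 + difficulty)
    return math.exp(-delta_t / strength)
```

Such a weight would multiply each historical response before it is aggregated into the student's mastery estimate, so recent practice on easy knowledge points counts almost fully while distant practice on hard ones counts little.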
To effectively solve the low feature utilization and poor structural coherence that occur when existing algorithms repair images with large irregular missing regions, this study proposed an image inpainting algorithm based on dense feature reasoning (DFR) and a hybrid loss function. The repair network consists of multiple densely connected feature reasoning modules (FRs). Firstly, after the image to be restored is fed into the first reasoning module, the output feature map channels are merged and sent to the next module; the input of each subsequent module is the set of inferred features from all previous modules, and so on, so as to make full use of the feature information captured by each reasoning module. Subsequently, a propagation consistent attention (PCA) mechanism was proposed to improve the overall consistency between the patched regions and the known regions. Finally, a hybrid loss function (ML) was proposed to optimize the structural coherence of the repair results. The whole DFR network adopts group normalization (GN), so excellent repair results can be achieved even with small training batches. The performance of the proposed algorithm was verified on the internationally recognized Paris StreetView and CelebA face datasets. The objective and subjective experimental results show that the proposed algorithm can effectively repair large irregular missing regions and improve feature utilization and structural coherence; its average peak signal-to-noise ratio (PSNR), average structural similarity (SSIM), mean square error (MSE), Fréchet inception distance (FID), and learned perceptual image patch similarity (LPIPS) metrics all outperform those of the comparison algorithms.
In the field of multi-view clustering, many methods learn the similarity matrix directly from the original data, ignoring the effect of noise in that data. In addition, some methods must perform an eigendecomposition of the graph Laplacian matrix, which reduces interpretability and requires post-processing such as k-means. To address these issues, this paper proposed a fast multi-view clustering method based on a unified label matrix. Firstly, a non-negativity constraint was added to the objective function from the unified viewpoint of the relaxed normalized cut and ratio cut. Then, structured graph reconstruction was performed on the similarity matrix via the indicator matrix to ensure that the obtained graph has strong intra-cluster connections and weak inter-cluster connections. In addition, the number of iterations was reduced by the unified label matrix, further improving the speed of the method. Finally, the problem was solved based on an alternating direction method of multipliers strategy. The algorithm aligns the multi-view dataset by randomly selecting anchor addresses, and aligning the views significantly improves clustering accuracy. The high computational complexity of traditional spectral clustering algorithms was effectively avoided by using singular value decomposition instead of eigendecomposition in the iterative process. Labels were obtained directly from the indicator matrix by taking, for each row, the column index of its largest element. Experimental results on four real datasets demonstrate the effectiveness of the algorithm and show that its clustering performance outperforms nine existing benchmark algorithms.
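The final read-off step is simple enough to state directly: each sample's cluster label is the column of the largest entry in its row of the indicator matrix, so no k-means post-processing is needed. A minimal sketch of that step only:

```python
def labels_from_indicator(F):
    """Cluster label of each sample = column index of the largest
    entry in its row of the indicator (label) matrix F."""
    return [max(range(len(row)), key=row.__getitem__) for row in F]
```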
Bearings are important components in much production equipment, and the study of their remaining useful life is of great value. A prediction method for the remaining useful life of bearings based on a spatial-temporal dual-cell state self-adaptive network (ST-DCSN) was proposed to address the prediction error caused by degradation-state changes and temporal correlations, which are not fully considered by traditional bearing remaining-life prediction in different environments. The method adopts an embedded convolution over coexisting temporal and spatial states to operate a dual-state recurrent network, and introduces a spatio-temporal dual-cell state with a sub-cell state differential mechanism to achieve adaptive perception of bearing attenuation states. It effectively captures the feature state of bearing monitoring data in both the temporal and spatial dimensions, mitigating the influence of environmental and timing issues on prediction performance. To investigate the effectiveness of the proposed method and compare it with other state-of-the-art approaches, two real accelerated bearing degradation datasets, FEMTO-ST and XJTU-SY, were used for validation. Both ablation and comparative experiments were conducted, and four evaluation metrics were employed to assess prediction performance. The ablation results demonstrate that the complete version of ST-DCSN outperforms the groups with the spatial cell or the dynamic and static sub-cells removed in terms of stability and performance metrics. Compared with other methods, the proposed method achieves superior prediction performance, with higher fitness and better stability in the predictions near the end of bearing life. This demonstrates that ST-DCSN can effectively improve the accuracy of bearing remaining-useful-life prediction.
Although pre-trained language models such as BERT/RoBERTa/MacBERT learn the grammatical, semantic, and contextual features of characters and words well through the masked language model (MLM) pre-training task, they lack the ability to detect and correct spelling errors. Moreover, they face an inconsistency between the pre-training stage and the downstream fine-tuning stage in the Chinese spelling correction (CSC) task. In order to further improve the spelling error detection and correction ability of BERT/RoBERTa/MacBERT, this paper proposed a self-supervised pre-training method for CSC, MASC, which converts the prediction of masked words into the recognition and correction of misspelled words on the basis of MLM. First of all, MASC expands the normal token masking in MLM to whole-word masking, aiming to improve BERT's ability to learn word-level semantic representations. Then, masked words are replaced with candidate words that have the same pronunciation, a similar pronunciation, or a similar shape, with the help of an external confusion set, and the training target is changed to recognizing the correct words, thus enhancing BERT's ability to detect and correct spelling errors. Finally, experimental results on three open CSC corpora, sighan13, sighan14, and sighan15, show that MASC can further improve the performance of pre-trained language models, i.e. BERT/RoBERTa/MacBERT, on downstream CSC tasks without changing their structures. Ablation experiments also confirm the importance of whole-word masking, phonetic information, and glyph information.
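The confusion-set replacement step can be sketched as follows; the toy confusion set and the corruption probability p are illustrative assumptions, and the whole-word masking and model training are omitted:

```python
import random

# Hypothetical toy confusion set: each character maps to confusable
# candidates with the same/similar pronunciation or a similar shape.
CONFUSION = {"的": ["地", "得"], "在": ["再"], "他": ["她", "它"]}

def masc_corrupt(chars, rng, p=0.15):
    """MASC-style corruption sketch: instead of replacing a selected
    character with a [MASK] token, replace it with a confusable
    character; the training target at that position is to recover
    the original character, matching the downstream CSC task."""
    corrupted, targets = [], []
    for ch in chars:
        if ch in CONFUSION and rng.random() < p:
            corrupted.append(rng.choice(CONFUSION[ch]))
            targets.append(ch)      # supervised: restore the original
        else:
            corrupted.append(ch)
            targets.append(None)    # position not supervised
    return corrupted, targets
```

Because the corrupted input looks like a real misspelled sentence rather than containing [MASK] tokens, pre-training and CSC fine-tuning see the same kind of input, which is the inconsistency the method removes.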
The harm caused by video tampering endangers people's lives, so deepfake detection technology has gradually gained widespread attention and development. However, current detection methods cannot effectively capture noise residuals because of their inflexible constraints, and they ignore both the correlation between texture and semantic features and the contribution of temporal features to detection performance. To solve these problems, this paper proposed an adaptive network with diverse features (AdfNet) for deepfake detection, which helps the classifier judge authenticity by extracting semantic, texture, and temporal features. The paper explored an adaptive texture noise extraction mechanism (ATNEM), which flexibly captures noise residuals in non-fixed frequency bands through unpooled feature mapping and a frequency-based channel attention mechanism. A deep semantic analysis guidance strategy (DSAGS) was designed to highlight tampering traces through a spatial attention mechanism and to guide the feature extractor to focus on the deep features of the focal region. The paper also studied multi-scale temporal feature processing (MTFPM), which uses a temporal attention mechanism to assign weights to different video frames and capture differences in the time series of tampered videos. The experimental results show that the ACC score of the proposed network in the HQ mode of the FaceForensics++ (FF++) dataset is 97.41%, which is significantly better than that of existing mainstream algorithms. Moreover, while maintaining an AUC of 99.80% on FF++, the network reaches an AUC of 76.41% on Celeb-DF, reflecting strong generalization.