
Table of Contents

    30 July 2025, Volume 43 Issue 4
    CBCC2024
    Smart Contract Vulnerability Detection Technology Based on Machine Learning
    LIU Lili, SHI Yijie, QIN Sujuan
    2025, 43(4):  541-558.  doi:10.3969/j.issn.0255-8297.2025.04.001
    To address the limitations of existing smart contract vulnerability detection technology, including low detection efficiency, inadequate automation, and the inability to detect smart contract samples at scale, this study proposed a machine learning-based smart contract vulnerability detection method. The method first preprocessed the smart contract dataset, converting the source code of each contract into an opcode sequence and simplifying it according to a set of opcode abstraction rules. On this basis, 2025-dimensional bigram features were extracted from the simplified opcode sequences using an N-gram model, and three feature representations were constructed using an embedding method for feature selection and principal component analysis for dimensionality reduction. Then, Borderline SMOTE, an improved variant of SMOTE, was applied to balance the dataset of imbalanced positive and negative samples. Finally, four algorithms, namely decision tree, support vector machine, random forest, and XGBoost, were used to build vulnerability detection models. Experimental results show that the random forest detection model achieves an average accuracy of 93.60% and an overall Macro-F1 of 93.91%, and can efficiently detect multiple types of vulnerabilities.
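    As a rough illustration of the pipeline described above (opcode bigram features, Borderline SMOTE balancing, and a random forest classifier), the following Python sketch uses scikit-learn and imbalanced-learn; the opcode list, synthetic labels, and parameter choices are placeholders, not the authors' actual configuration.
```python
"""Minimal sketch: opcode-bigram features, Borderline-SMOTE, random forest."""
import random

from imblearn.over_sampling import BorderlineSMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

OPCODES = ["PUSH", "DUP", "SWAP", "ADD", "MSTORE", "SSTORE", "CALL", "JUMP", "RETURN"]

# Synthetic stand-in for the simplified opcode-sequence dataset (hypothetical).
random.seed(0)
sequences = [" ".join(random.choices(OPCODES, k=40)) for _ in range(300)]
labels = [1 if "CALL" in s.split()[:10] else 0 for s in sequences]  # toy labels

# Bigram (2-gram) features over opcode tokens.
vectorizer = CountVectorizer(ngram_range=(2, 2), token_pattern=r"\S+")
X = vectorizer.fit_transform(sequences)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels)

# Balance positive/negative samples with Borderline-SMOTE before training.
X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
print("Macro-F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```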
    Formal Definition and Instance Verification Analysis of General Blockchain Cross-Chain Transactions
    ZHANG Zhuang, ZOU Yilin, LIN Zepeng, LIU Jiayuan, ZI Zongqing, TAN Liang, SHE Kun
    2025, 43(4):  559-585.  doi:10.3969/j.issn.0255-8297.2025.04.002
    Blockchain has experienced rapid development, giving rise to a variety of underlying platforms. However, whether applications are built on the same or different platforms, achieving seamless cross-platform interoperability and collaboration remains a significant challenge. Against this backdrop, cross-chain technology for building trusted inter-chain interaction channels has gradually become a focus of attention in the industry. Despite this growing interest, both academia and industry lack a unified and accurate definition of cross-chain transactions. Existing cross-chain technologies mainly focus on asset exchange and swap scenarios but suffer from insufficient universality, fragmented key technologies, and inconsistent implementation methods. To address these problems, this paper analyzes current mainstream cross-chain technologies, proposes a formal definition of general blockchain cross-chain transactions, and designs a cross-chain algorithm based on this definition, taking the asset exchange cross-chain mode as an example. Finally, the algorithm is validated on existing classic cross-chain platforms such as BitXHub and Polkadot. Experimental results show that the proposed formal definition and cross-chain algorithm not only provide guidance for improving existing platforms, but also offer effective references for future data-oriented cross-chain platforms that have not yet emerged or been applied in practice.
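    Purely for illustration of the kind of object such a definition describes, the sketch below models a cross-chain asset-exchange transaction as a phased record; the field names and phase set are hypothetical and are not the formal definition given in the paper.
```python
"""Illustrative model of a cross-chain asset-exchange transaction (hypothetical)."""
from dataclasses import dataclass, field
from enum import Enum, auto

class Phase(Enum):
    INITIATED = auto()
    LOCKED_ON_SOURCE = auto()
    VERIFIED_BY_RELAY = auto()
    COMMITTED_ON_TARGET = auto()
    ROLLED_BACK = auto()

@dataclass
class CrossChainTx:
    tx_id: str
    source_chain: str
    target_chain: str
    asset: str
    amount: float
    phase: Phase = Phase.INITIATED
    log: list = field(default_factory=list)  # phase transitions, for auditability

    def advance(self, phase: Phase):
        self.log.append((self.phase, phase))
        self.phase = phase

tx = CrossChainTx("tx-001", "chain-A", "chain-B", "tokenX", 10.0)
for p in (Phase.LOCKED_ON_SOURCE, Phase.VERIFIED_BY_RELAY, Phase.COMMITTED_ON_TARGET):
    tx.advance(p)
print(tx.phase, len(tx.log))
```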
    RV_IOTA Consensus Algorithm Based on Reputation Value
    WANG Chengxiang, ZHAO Jindong, LIU Weiqi, LIU Minghao, SHAN Jia
    2025, 43(4):  586-599.  doi:10.3969/j.issn.0255-8297.2025.04.003
    A consensus mechanism based on node reputation values is proposed to address malicious attacks on nodes in the IOTA network. In RV_IOTA, a dynamic reputation value system with time decay is introduced. The reputation value of a node is adjusted according to its historical transaction performance: valid transactions increase the reputation value, while conflicting transactions (such as double-spending attacks) decrease it, thereby limiting the influence of malicious nodes. RV_IOTA also optimizes the Tips selection algorithm based on node reputation values, adjusting the probability of a tip being referenced according to reputation values and cumulative transaction weights, so that transactions issued by high-reputation nodes are more likely to be verified. The proposed mechanism effectively suppresses double-spending attacks in the early stage of the network, reduces the success rate of attacks, and restricts the transaction submission capabilities of malicious nodes, allowing honest nodes to dominate the consensus process and ensuring the robustness and security of the network. Experimental results show that, at a scale of 500 nodes, RV_IOTA achieves a throughput of 39 TPS, a 15% improvement over traditional IOTA. Meanwhile, the transaction confirmation delay for high-reputation nodes is reduced to 1.2 seconds. By reducing the verification scope of Tips selection from global to neighborhood, the algorithm complexity is decreased. With only a 25% increase in memory overhead, RV_IOTA provides an efficient and reliable decentralized solution for IoT applications.
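    The sketch below illustrates the two ideas named in the abstract, time-decayed reputation updates and reputation-weighted tip selection; the decay constant, reward/penalty values, and scoring formula are placeholders, not the paper's exact parameters.
```python
"""Illustrative sketch of decayed reputation and reputation-weighted tip choice."""
import math
import random
import time

DECAY_RATE = 0.01  # hypothetical decay constant per second

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.reputation = 1.0
        self.last_update = time.time()

    def _decay(self):
        now = time.time()
        self.reputation *= math.exp(-DECAY_RATE * (now - self.last_update))
        self.last_update = now

    def record_transaction(self, valid):
        """Valid transactions raise reputation; conflicting ones lower it."""
        self._decay()
        self.reputation = max(self.reputation + (0.1 if valid else -0.5), 0.0)

def select_tip(tips, issuer_reputation, cumulative_weight):
    """Pick a tip with probability proportional to issuer reputation x weight."""
    scores = [issuer_reputation[t] * cumulative_weight[t] for t in tips]
    return random.choices(tips, weights=scores, k=1)[0]

alice = Node("alice")
alice.record_transaction(valid=True)
print(select_tip(["tip1", "tip2"], {"tip1": 2.0, "tip2": 0.5}, {"tip1": 3, "tip2": 4}))
```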
    A Blockchain-Based Precise Authorization Mechanism for Data Elements
    PAN Xuan, ZHANG Kangkang, CHENG Ao
    2025, 43(4):  600-616.  doi:10.3969/j.issn.0255-8297.2025.04.004
    In the circulation of data elements, different trust domains often adopt independent identity authentication systems and access control standards, making precise authorization for cross-domain access challenging. To address this issue, a blockchain-based precise authorization mechanism for data elements is proposed. This mechanism adopts a collaborative on-chain and off-chain architecture. On-chain, a smart contract-driven dynamic metadata update mechanism for non-fungible tokens (NFTs) is designed, mapping user identities and roles into codable NFTs to enable real-time updates of identity and permissions. Off-chain, a trust evaluation model and a dynamic parsing cache mechanism are deployed to convert user trust values into dynamic authorization evaluation factors, enabling hierarchical and automated permission mapping in heterogeneous trust domains. Experimental results show that the proposed mechanism achieves finer-grained access control, accelerates policy updates, and effectively isolates potential risks.
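    One simple way to picture the off-chain step described above, turning a trust value into a dynamic authorization factor that drives NFT metadata updates, is sketched below; the tier thresholds, metadata fields, and the authorize helper are hypothetical and not the paper's mechanism.
```python
"""Hypothetical sketch: mapping a trust score to a tiered permission level."""
from dataclasses import dataclass

# Hypothetical tier thresholds: (minimum trust, permission level).
TIERS = [(0.9, "full"), (0.7, "read-write"), (0.4, "read-only"), (0.0, "denied")]

@dataclass
class NFTMetadata:
    token_id: int
    role: str
    permission: str  # updated on-chain when the trust evaluation changes

def authorize(trust_value: float) -> str:
    """Map a trust value in [0, 1] to a permission level."""
    for threshold, level in TIERS:
        if trust_value >= threshold:
            return level
    return "denied"

# Example: a user whose trust value drops is downgraded automatically.
meta = NFTMetadata(token_id=42, role="data-analyst", permission=authorize(0.95))
meta.permission = authorize(0.55)  # re-evaluated -> "read-only"
print(meta)
```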
    Key Technologies for a Dual-Chain Structure-Based University Financial Reimbursement System
    YANG Yaoke, WEI Yabin, WANG Wenqi, YANG Duxiang, HONG Feiyang
    2025, 43(4):  617-629.  doi:10.3969/j.issn.0255-8297.2025.04.005
    This paper introduces a project-oriented blockchain-based reimbursement platform model for universities, aimed at addressing the trust deficiency and inefficiency issues inherent in the traditional financial reimbursement process. To accommodate the dynamic roles of participants in a multi-project environment, an attribute-based access control (ABAC) model is employed to facilitate fine-grained permission management. The traditional single-chain structure is insufficient for managing the complex logical relationships between projects and invoices. To overcome this limitation, a dual-chain storage structure, comprising a main chain and sub-chains, along with a corresponding logical transaction algorithm, is devised. This methodology effectively resolves the intricate correspondence between various reimbursement statuses and projects. Furthermore, a Merkle tree index table query algorithm (MTIT) is designed to enhance query efficiency. Experimental results demonstrate that the proposed design exhibits robust performance stability across various transaction volumes and meets the daily financial management requirements of universities.
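    For readers unfamiliar with the underlying structure, the sketch below shows a plain Merkle-root computation over reimbursement-style records; it is generic and illustrative only, not the paper's MTIT query algorithm, and the record format is made up.
```python
"""Simplified Merkle-tree root over hypothetical reimbursement records."""
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if the level is odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical reimbursement records stored on a sub-chain.
records = [b"project:A|invoice:001|status:approved",
           b"project:A|invoice:002|status:pending",
           b"project:B|invoice:003|status:approved"]
print(merkle_root(records).hex())
```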
    Research and Implementation of Cross Chain System Based on Oracle Component Technology
    HE Xurong, FANG Youxuan, ZHENG Xuxiao, XIN Yanshuang
    2025, 43(4):  630-642.  doi:10.3969/j.issn.0255-8297.2025.04.006
    Existing cross-chain solutions face significant challenges in interoperability, transaction integrity, and security, which limit the widespread application of blockchain technology. To address these issues, this paper proposes a bi-directional cross-chain solution based on oracles. The proposed approach aggregates data from multiple trustworthy sources and employs composite Chinese national cryptographic (SM-series) techniques to enhance the integrity and security of cross-chain transactions. Experimental results demonstrate that the proposed solution significantly outperforms traditional cross-chain methods in terms of adaptability, reliability, and security. This work provides a novel framework for achieving cross-chain interoperability among heterogeneous blockchain systems and lays a foundation for the application of blockchain technology in various fields.
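    A minimal sketch of the multi-source aggregation idea follows; it uses a median over independent oracle reports and attaches a SHA-256 digest as a stand-in for integrity protection (the paper uses national cryptographic algorithms, which are not reproduced here), and the report format is hypothetical.
```python
"""Illustrative multi-source oracle aggregation with an integrity digest."""
import hashlib
import json
import statistics

def aggregate(source_reports):
    """Take the median of independent source values to tolerate outliers."""
    return statistics.median(r["value"] for r in source_reports)

def integrity_digest(payload: dict) -> str:
    """Digest attached to the cross-chain message so the target chain can
    check that the relayed data has not been tampered with."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

reports = [{"source": "node-1", "value": 101.2},
           {"source": "node-2", "value": 100.9},
           {"source": "node-3", "value": 250.0}]   # outlier is ignored by the median
payload = {"asset": "X", "price": aggregate(reports)}
message = {"payload": payload, "digest": integrity_digest(payload)}
print(message)
```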
    Computer Science and Applications
    Seismic Phase Clustering Analysis Technology Based on Multi-granularity Ensemble Learning
    LUO Hongmei, WANG Changjiang, YANG Peijie, GUAN Xiaoyan, ZHOU Xiaojie, YU Hang
    2025, 43(4):  643-655.  doi:10.3969/j.issn.0255-8297.2025.04.007
    To mitigate the impact of geological structural variations on reservoir prediction, this study proposes a novel seismic phase clustering analysis technique based on multi-granularity ensemble learning. The technique first extracts features at three scales: coarse-grained, fine-grained, and micro-grained. Coarse-grained features are derived using the Spearman correlation coefficient to reflect the macroscopic relationships between strata. Fine-grained features are extracted via long short-term memory (LSTM) networks to capture detailed characteristics among waveforms. Micro-grained features are obtained from dynamic time warping (DTW) distances to capture the microscopic differences within individual waveforms. Clustering results are then obtained for each granularity using self-organizing map methods, and a soft alignment-based ensemble learning technique is applied to integrate the clustering results from the different granularities, effectively addressing the limitations of single-granularity approaches influenced by geological structural variations. Experimental results demonstrate that the proposed multi-granularity ensemble learning algorithm significantly enhances seismic clustering accuracy and provides a valuable reference for reservoir prediction across different regions.
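    To make the micro-grained feature concrete, the sketch below computes a basic DTW distance between two toy waveform traces; it is a generic O(n*m) implementation on synthetic signals, not the authors' feature pipeline.
```python
"""Dynamic time warping (DTW) distance between two toy waveform traces."""
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

# Two synthetic traces, one slightly time-shifted relative to the other.
t = np.linspace(0, 4 * np.pi, 200)
trace_a, trace_b = np.sin(t), np.sin(t + 0.3)
print(dtw_distance(trace_a, trace_b))
```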
    Scene-Level Building Change Detection Based on Dense Connection and Multiple Instance
    SHAO Zilong, QI Lin, CHEN Kun, XU Yubin, QIN Kun, YU Changhui
    2025, 43(4):  656-671.  doi:10.3969/j.issn.0255-8297.2025.04.008
    This paper proposes a lightweight scene-level change detection network, MIDF-Net, designed to address the issues of high background noise and low detection efficiency in large-scale building change detection within complex airport clearance zones. MIDF-Net consists of three parts: a dense connection feature extractor, a difference feature extractor, and a multi-instance classifier. The dense connection feature extractor uses a Siamese densely connected network to extract dual-temporal image features, while the difference feature extractor generates change features by combining the dual-temporal image features. The multi-instance classifier obtains scene classification results from key local semantic features. Using image data from airport clearance zones across seven different cities, this study constructs a building change detection dataset. Experimental results show that MIDF-Net achieves high detection performance on this dataset, and ablation experiments verify the efficacy of each module.
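    The rough PyTorch sketch below shows the general pattern described above, a shared (Siamese) encoder, a difference feature, and max-pooling over local instances to produce a scene score; the layer sizes and the SceneChangeNet class are placeholders, not MIDF-Net itself.
```python
"""Generic Siamese difference + multi-instance pooling sketch (not MIDF-Net)."""
import torch
import torch.nn as nn

class SceneChangeNet(nn.Module):
    def __init__(self, channels=3, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(            # shared weights for both dates
            nn.Conv2d(channels, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU())
        self.instance_head = nn.Conv2d(feat, 1, kernel_size=1)  # per-location score

    def forward(self, img_t1, img_t2):
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        diff = torch.abs(f1 - f2)                      # difference feature
        scores = self.instance_head(diff)              # instance-level scores
        return scores.flatten(1).max(dim=1).values     # scene score = strongest instance

net = SceneChangeNet()
t1, t2 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(net(t1, t2).shape)  # torch.Size([2])
```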
    Signal and Information Processing
    Lightweight Cerebral Angiography Quality Assessment Model Based on Segmentation-Classification Multi-task Learning
    HUANG Yifan, LU Xiaofeng, SUN Jun, TANG Jialü, LIU Xuefeng
    2025, 43(4):  672-683.  doi:10.3969/j.issn.0255-8297.2025.04.009
    To address the instability and lack of real-time capability of manual quality control in cerebral angiography and to realize real-time quality assessment, a lightweight segmentation-classification multi-task learning (MTL) model is proposed. The model comprises three main components: a feature extraction backbone module, a blood vessel segmentation module, and an angiography quality classification module. Depthwise separable convolution is employed instead of traditional convolution to reduce the number of parameters. Additionally, a local-global self-attention module (L-GSAM) is proposed to enhance the model's ability to extract global information. A feature aggregation module (FAM) is introduced in the vessel segmentation module to optimize feature connections. The segmentation results are then combined with backbone features to assess angiography quality in the classification module, and a joint loss function is designed for model training. Experimental results show that the proposed model achieves good segmentation and classification performance with only 3.4342×10^6 parameters and a quality assessment accuracy of 0.8182, while exhibiting high real-time performance.
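    The sketch below shows the general depthwise separable convolution technique the abstract refers to and compares its parameter count against a standard convolution; the layer sizes are arbitrary and unrelated to the paper's architecture.
```python
"""Depthwise separable convolution block and a parameter-count comparison."""
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def count(module):
    return sum(p.numel() for p in module.parameters())

standard = nn.Conv2d(64, 128, 3, padding=1)
separable = DepthwiseSeparableConv(64, 128)
print(count(standard), count(separable))  # the separable block uses far fewer parameters
```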
    Coverless Steganography Based on Character Recognition
    LU Zhen, WU Jianbin
    2025, 43(4):  684-693.  doi:10.3969/j.issn.0255-8297.2025.04.010
    To enhance the hiding capacity of coverless steganography, this paper proposes and implements a coverless steganography method that uses small icons of Chinese characters as construction elements. Inspired by the semi-constructive approach and the principle of English-Chinese proverb translation, the method integrates a deep learning framework to achieve effective information hiding. First, a carrier library of small Chinese character icons is constructed, and a one-to-one mapping between the icons and binary streams is designed. At the sender, the input secret message is divided into 12-bit groups, and the corresponding Chinese character icons are retrieved from the carrier library and stitched into the secret carrier image. At the receiver, the secret carrier image is first segmented, the Chinese characters in the carrier image are recognized using a deep learning method, and the secret message is extracted according to the mapping between the Chinese characters and the binary stream. In addition, to improve the robustness of the scheme, a data augmentation strategy is introduced to synthesize text image datasets. Experimental results demonstrate that, compared to existing coverless steganography methods, the proposed method significantly improves hiding capacity while maintaining strong robustness.
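    The toy sketch below illustrates just the 12-bit grouping step: each 12-bit group indexes one of 4096 icons in the carrier library and is recovered at the receiver. The helper names are hypothetical, and icon rendering and recognition are omitted.
```python
"""Toy 12-bit grouping and inverse mapping for the coverless scheme."""

def bits_to_icon_indices(data: bytes):
    """Split the secret message into 12-bit groups; each group indexes one icon."""
    bitstring = "".join(f"{byte:08b}" for byte in data)
    bitstring += "0" * (-len(bitstring) % 12)          # pad to a multiple of 12
    return [int(bitstring[i:i + 12], 2) for i in range(0, len(bitstring), 12)]

def icon_indices_to_bits(indices, n_bytes):
    """Inverse mapping used at the receiver after icon recognition."""
    bitstring = "".join(f"{idx:012b}" for idx in indices)
    return bytes(int(bitstring[i:i + 8], 2) for i in range(0, n_bytes * 8, 8))

secret = "hello".encode()
indices = bits_to_icon_indices(secret)        # each index selects one character icon
print(indices, icon_indices_to_bits(indices, len(secret)))
```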
    Forest Canopy Height Inversion Method Based on GEDI, Sentinel-2 and Airborne LiDAR
    JI Cuicui, YUE Lianggaoke, LI Xiaosong, SUN Bin
    2025, 43(4):  694-708.  doi:10.3969/j.issn.0255-8297.2025.04.011
    Large-scale monitoring of forest canopy height is essential for accurate estimation of forest carbon emissions and analysis of forest growth. In this study, we integrate canopy height metrics obtained from the global ecosystem dynamics investigation (GEDI), Sentinel-2 image spectral information, and ASTER GDEM terrain data to estimate canopy height in a forest area dominated by trees and shrubs in Xichang City, Sichuan Province. Random forest, gradient boosting decision tree, and multivariable linear regression algorithms are employed for canopy height inversion. Validation demonstrates that the combination of spectral information, vegetation indices, and terrain information yields the highest inversion accuracy. The random forest algorithm performs best, achieving a coefficient of determination R² of 0.58, a root mean square error RMSE of 4.78 m, and an estimation accuracy EA of 56%. We then use the random forest algorithm to invert canopy height and verify its accuracy against laser point cloud data acquired by a DJI UAV, obtaining an R² of 0.52, an RMSE of 2.71 m, and an EA of 85%. Overall, this study confirms that the small footprint diameter and high footprint density of GEDI spaceborne full-waveform LiDAR data offer potential for spatially continuous forest canopy height mapping. The findings provide a theoretical basis for accurate analysis of forest growth, degradation, and restoration.
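    A minimal sketch of the inversion step follows: a random forest regressor trained on spectral, vegetation-index, and terrain features against canopy heights, evaluated with R² and RMSE. The data here are synthetic placeholders, not the study's GEDI or UAV measurements.
```python
"""Random forest canopy-height regression on synthetic placeholder features."""
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 6))                                     # e.g. bands, NDVI, elevation, slope
height = 20 * X[:, 0] + 5 * X[:, 3] + rng.normal(0, 2, 500)  # toy canopy height (m)

X_tr, X_te, y_tr, y_te = train_test_split(X, height, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

print("R2  :", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5, "m")
```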
    A Method of Building Segmentation in Remote Sensing Image Based on Contour Measurement of Convolutional Neural Network
    XIONG Jun, LIU Shouquan, AN Xu, GUO Tian, TAI Baoyu
    2025, 43(4):  709-720.  doi:10.3969/j.issn.0255-8297.2025.04.012
    Accurate building segmentation in remote sensing images remains a significant challenge due to varying building sizes, occlusion by trees, and unstable illumination, and convolutional neural network (CNN) models often lose high-frequency details such as target boundaries and fine structures. To address this problem, this paper proposes a deep convolutional neural network model based on contour measurement. By introducing the Sobel edge detector, the network obtains additional edges to enhance segmentation boundaries in an unsupervised manner. In addition, a denoising module is incorporated to suppress noise hidden in low-level features. During training, in addition to the commonly used Dice coefficient and cross-entropy losses, a contour constraint loss function is introduced to further enhance edge information and preserve the geometric topology of buildings. The method is evaluated on building remote sensing images from the Inria Aerial Image Labeling dataset and the Massachusetts Buildings dataset. Experimental results show that the proposed model effectively captures the edge details of weakly lit and occluded targets, thereby improving the accuracy of building segmentation. It achieves an average intersection over union (IoU) of 0.7860 and 0.7655 and a boundary IoU of 0.7359 and 0.7168 on the two datasets, respectively, indicating enhanced accuracy in both region-level and edge-level evaluation.
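    To illustrate the unsupervised edge cue mentioned above, the sketch below applies fixed Sobel kernels as a convolution to obtain a gradient-magnitude boundary map; it covers only the generic Sobel operation, not the paper's denoising module or contour constraint loss, and the input is random placeholder data.
```python
"""Sobel edge extraction as a fixed convolution (PyTorch, illustrative only)."""
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
SOBEL_Y = SOBEL_X.t()

def sobel_edges(gray: torch.Tensor) -> torch.Tensor:
    """gray: (N, 1, H, W) image; returns the gradient magnitude (N, 1, H, W)."""
    kx = SOBEL_X.view(1, 1, 3, 3)
    ky = SOBEL_Y.view(1, 1, 3, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

img = torch.rand(1, 1, 128, 128)
edges = sobel_edges(img)       # boundary prior supplied alongside learned features
print(edges.shape)
```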