
		<paper>
			<loc>https://jjcit.org/paper/282</loc>
			<title>GEA-COPE: AN EFFECTIVE MODEL FOR CROSS-DOMAIN GRAPH PRE-TRAINING</title>
			<doi>10.5455/jjcit.71-1757505898</doi>
			<authors>Yiming Zhao,Yongqing Wu</authors>
			<keywords>Graph neural networks,Graph pre-training,Transfer learning,External attention</keywords>
			<views>612</views>
			<downloads>134</downloads>
			<received_date>10-Sep.-2025</received_date>
			<revised_date>21-Nov.-2025, 10-Dec.-2025 and 13-Jan.-2026</revised_date>
			<accepted_date>14-Jan.-2026</accepted_date>
			<abstract>This paper addresses the negative transfer problem in cross-domain graph pre-training under few-shot learning scenarios by proposing a multi-component pre-training framework called Graph External Attention-enhanced Coordinators for Pre-training (GEA-CoPe), which integrates multi-head external attention with a graph coordinator. Tackling the structural and semantic discrepancies between cross-domain graphs is crucial for mitigating negative transfer; however, conventional methods often lack adaptability to complex, dynamic inter-domain variations and explicit constraints for intermediate feature-distribution consistency. The proposed framework leverages an external attention-based coordinator to mediate between different graph datasets, dynamically generating cross-graph semantic-alignment strategies that alleviate negative transfer induced by structural heterogeneity. It employs a dual-feature normalization strategy that adds a cross-layer distribution-alignment loss on top of intra-layer node-similarity constraints, effectively suppressing feature drift. Furthermore, Kolmogorov-Arnold Networks (KANs) are introduced; their parameter-adaptive activation functions better capture non-linear topological dependencies and enhance model interpretability. Experiments on ten real-world graph datasets demonstrate that GEA-CoPe exhibits superior cross-domain generalization and significantly improves performance on few-shot node classification tasks, with an average improvement of about 13.3% over competing methods. The model focuses more accurately on critical graph structures, providing a theoretical foundation and practical paradigms for deploying graph neural networks in complex scenarios.</abstract>
		</paper>


