I am currently a Master's student in the School of Computing, National University of Singapore. My research interests lie in graph neural networks, social networks, and spatio-temporal data mining.

Prior to my Master's studies, I worked as a research intern at the School of Computing, NUS, supervised by Prof. David S. Rosenblum. Before that, I worked as a research intern and an algorithm engineer at JD Intelligent Cities Research, Beijing, China, supervised by Prof. Yu Zheng. I obtained my B.Eng. degree from the School of Computer Science and Technology, Xidian University, in 2019.

> My Curriculum Vitae <
  • [new] Yan Xiao, Ivan Beschastnikh, David S. Rosenblum, Changsheng Sun, Sebastian Elbaum, Yun Lin, Jin Song Dong.
    Self-Checking Deep Neural Networks in Deployment. ICSE 2021. [paper]

    The widespread adoption of Deep Neural Networks (DNNs) in important domains raises questions about the trustworthiness of DNN outputs. Even a highly accurate DNN will make mistakes some of the time, and in settings like self-driving vehicles these mistakes must be quickly detected and properly dealt with in deployment. Just as our community has developed effective techniques and mechanisms to monitor and check programmed components, we believe it is now necessary to do the same for DNNs. In this paper we present DNN self-checking as a process by which internal DNN layer features are used to check DNN predictions. We detail SelfChecker, a self-checking system that monitors DNN outputs and triggers an alarm if the internal layer features of the model are inconsistent with the final prediction. SelfChecker also provides advice in the form of an alternative prediction. We evaluated SelfChecker on four popular image datasets and three DNN models and found that SelfChecker triggers correct alarms on 60.56% of wrong DNN predictions, and false alarms on 2.04% of correct DNN predictions. This is a substantial improvement over prior work (SELFORACLE, DISSECTOR, and ConfidNet). In experiments with self-driving car scenarios, SelfChecker triggers more correct alarms than SELFORACLE for two DNN models (DAVE-2 and Chauffeur) with comparable false alarms. Our implementation is available as open source.
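To illustrate the consistency-checking idea in the abstract above: a much simpler stand-in for SelfChecker's layer-feature analysis (not the paper's actual method) is to let an internal layer "vote" via nearest class centroid and alarm when that vote disagrees with the model's final prediction. The synthetic features, dimensions, and class setup below are placeholders of my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "internal layer features" for two classes (placeholder data)
feats_a = rng.normal(0.0, 1.0, size=(100, 8))  # class 0 cluster near 0
feats_b = rng.normal(4.0, 1.0, size=(100, 8))  # class 1 cluster near 4
centroids = {0: feats_a.mean(axis=0), 1: feats_b.mean(axis=0)}

def layer_vote(feature):
    """Class whose training centroid is closest to this layer's feature."""
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

def self_check(feature, final_prediction):
    """Alarm when the layer's vote disagrees with the model output,
    and offer the layer's vote as the alternative prediction."""
    vote = layer_vote(feature)
    return {"alarm": vote != final_prediction, "advice": vote}

# A feature sitting in class 1's cluster but predicted as class 0
# triggers an alarm with advice 1.
print(self_check(np.full(8, 4.0), final_prediction=0))
```

SelfChecker itself operates on multiple layers and uses density estimates rather than centroids; this sketch only conveys the alarm-plus-advice pattern.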

  • Zekun Tong, Yuxuan Liang, Changsheng Sun, David Rosenblum, Andrew Lim.
    Digraph Inception Convolutional Networks. NeurIPS 2020. [paper] [poster] [code]

    Graph Convolutional Networks (GCNs) have shown promising results in modeling graph-structured data. However, they have difficulty processing digraphs for two reasons: 1) transforming a directed graph into an undirected one to guarantee the symmetry of the graph Laplacian is not reasonable, since it not only misleads the message-passing scheme into aggregating incorrect weights but also discards the unique characteristics of the digraph structure; 2) due to the fixed receptive field in each layer, GCNs fail to obtain multi-scale features that can boost their performance. In this paper, we theoretically extend spectral-based graph convolution to digraphs and derive a simplified form using personalized PageRank. Specifically, we present the Digraph Inception Convolutional Networks (DiGCN), which utilizes digraph convolution and k-th order proximity to achieve larger receptive fields and learn multi-scale features in digraphs. We empirically show that DiGCN can encode more structural information from digraphs than GCNs and help achieve better performance when generalized to other models. Moreover, experiments on various benchmarks demonstrate its superiority against the state-of-the-art methods.
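The personalized-PageRank idea mentioned in the abstract above can be sketched as follows: iterate a teleporting random walk on the digraph's transition matrix until it reaches a stationary distribution, which can then be used to weight a symmetric digraph Laplacian. This is a minimal illustration, not DiGCN's exact propagation rule; the toy graph and the teleport probability alpha are my own choices.

```python
import numpy as np

# Toy directed graph: 0 -> 1, 1 -> 2, 2 -> 0, 0 -> 2
A = np.zeros((3, 3))
for u, v in [(0, 1), (1, 2), (2, 0), (0, 2)]:
    A[u, v] = 1.0

# Row-normalized transition matrix of the digraph
P = A / A.sum(axis=1, keepdims=True)

# Personalized PageRank fixed point with teleport probability alpha:
# pi = (1 - alpha) * P^T pi + alpha * uniform
alpha = 0.1
n = A.shape[0]
pi = np.full(n, 1.0 / n)
for _ in range(200):
    pi = (1 - alpha) * (P.T @ pi) + alpha / n
pi /= pi.sum()

print(pi)  # stationary weights over nodes, all positive, summing to 1
```

Because every node gets a positive stationary weight, this distribution can normalize a digraph Laplacian without symmetrizing the adjacency matrix first, which is the difficulty the abstract points out.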

  • Hui Li, Mengting Xu, Sourav S Bhowmick, Changsheng Sun, Zhongyuan Jiang, Jiangtao Cui.
    DISCO: Influence Maximization Meets Network Embedding and Deep Learning. Submitted to KDD 2021. [arXiv:1906.07378]

    Since its introduction in 2003, the influence maximization (IM) problem has drawn significant research attention in the literature. The aim of IM is to select a set of k users who can influence the most individuals in the social network. The problem is proven to be NP-hard. A large number of approximate algorithms have been proposed to address this problem. The state-of-the-art algorithms estimate the expected influence of nodes based on sampled diffusion paths. As the number of required samples has recently been proven to be lower-bounded by a particular threshold that presets the tradeoff between accuracy and efficiency, the result quality of these traditional solutions is hard to improve further without sacrificing efficiency. In this paper, we present an orthogonal and novel paradigm to address the IM problem by leveraging deep learning models to estimate the expected influence. Specifically, we present a novel framework called DISCO that incorporates network embedding and deep reinforcement learning techniques to address this problem. Experimental study on real-world networks demonstrates that DISCO achieves the best performance w.r.t. efficiency and influence spread quality compared to state-of-the-art classical solutions. Besides, we also show that the learning model exhibits good generality.
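The classical sampling-based paradigm that the abstract above contrasts DISCO with can be sketched as a greedy seed selection over Monte Carlo independent-cascade simulations. This is not DISCO itself; the toy graph, activation probability, and sample count are placeholders of my own.

```python
import random

# Toy directed social graph as adjacency lists (hypothetical example)
graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
p = 0.5          # uniform edge activation probability (my choice)
samples = 500    # Monte Carlo diffusion samples per estimate

def simulate_ic(seeds):
    """One independent-cascade diffusion; returns number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(seeds):
    """Estimate expected influence by averaging sampled diffusions."""
    return sum(simulate_ic(seeds) for _ in range(samples)) / samples

def greedy_im(k):
    """Classical greedy IM: repeatedly add the node with the largest
    estimated marginal gain in expected spread."""
    seeds = set()
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: expected_spread(seeds | {v}))
        seeds.add(best)
    return seeds

random.seed(0)
print(greedy_im(2))  # a size-2 seed set chosen by sampled-spread estimates
```

The cost of this approach scales with the number of diffusion samples, which is exactly the accuracy/efficiency tradeoff the abstract describes and that a learned influence estimator aims to sidestep.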



LinkedIn | Google scholar | Instagram | GitHub | Zhihu | Twitter


Computing 2,
15 Computing Drive, National University of Singapore,
Singapore, 117418

Email: changsheng_sun[at]outlook[dot]com (preferred) | cssun[at]u[dot]nus[dot]edu
Mobile: See my cv.
Skype: Same as Outlook E-mail

© 2021 SUN Changsheng | Powered by Skeleton and Jiarui Gan's template | Updated on 2021-01-26