Computer Science Research / Projects
Abstract: Automatic seizure type classification from electroencephalogram (EEG) data can help clinicians better diagnose epilepsy. Although many previous studies have addressed the classification of seizure EEG data, most assume there is no distribution shift between training and test data, which greatly limits their applicability in real-world scenarios. In this paper, we propose an invariant spatiotemporal representation learning method for cross-patient seizure classification. Specifically, we first split the spatiotemporal EEG data into different environments based on heterogeneous risk minimization to reflect the spurious correlations. We then learn invariant spatiotemporal representations and train a seizure classification model on the learned representations to accomplish accurate seizure-type classification across environments. Experiments are conducted on the largest public EEG dataset, the Temple University Hospital Seizure Corpus (TUSZ), and the results demonstrate the effectiveness of our method.
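The core idea of penalizing representations whose optimal classifier differs across environments can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the scalar "dummy classifier" and the use of the mean feature as a stand-in score are simplifying assumptions, in the spirit of invariant-risk-minimization-style penalties:

```python
import numpy as np

def irm_penalty(features, labels, envs):
    """Sum over environments of the squared gradient of the per-environment
    risk with respect to a fixed scalar classifier w = 1.0.

    features: (n, d) learned representations phi(x)
    labels:   (n,) regression/classification targets
    envs:     (n,) integer environment assignments
    """
    penalty = 0.0
    for e in np.unique(envs):
        m = envs == e
        # illustrative scalar score per sample: the mean feature value
        logits = features[m].mean(axis=1)
        # squared-error risk R_e(w) = mean((w * logits - y)^2); grad at w = 1:
        grad = 2.0 * np.mean((logits - labels[m]) * logits)
        penalty += grad ** 2
    return penalty
```

A representation that is already predictive in the same way in every environment yields a penalty near zero, while environment-dependent (spurious) correlations inflate it; the penalty would be added to the usual classification loss during training.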
Abstract: Utilizing electroencephalogram (EEG) signals for epilepsy classification is of significant importance. However, traditional classification methods often struggle to classify EEG signals from unseen patients. We therefore propose a graph neural network (GNN)-based method that builds EEG representations from electrode distance and signal correlation. We employ adversarial training on user IDs via cohomology clustering to enhance the generalization of epilepsy classification across different patients. Our study is the first to consider epilepsy classification in cross-patient scenarios, achieving state-of-the-art results on large-scale publicly available datasets and significantly improving the accuracy of seizure classification. By improving performance on cross-patient epilepsy classification, our work lays a foundation for personalized medicine and facilitates the rational allocation and efficient use of healthcare resources.
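A distance- and correlation-based EEG graph of the kind described can be sketched as follows. This is a hedged illustration, not the paper's model: the Gaussian distance kernel, the mixing weight `alpha`, and the single symmetric-normalized graph-convolution step are all assumptions chosen for brevity:

```python
import numpy as np

def build_eeg_graph(signals, positions, sigma=1.0, alpha=0.5):
    """Adjacency combining electrode geometry and signal statistics.

    signals:   (n_channels, n_samples) raw EEG per channel
    positions: (n_channels, 3) electrode coordinates
    """
    # Gaussian kernel on squared inter-electrode distances
    d2 = ((positions[:, None] - positions[None, :]) ** 2).sum(-1)
    a_dist = np.exp(-d2 / (2.0 * sigma ** 2))
    # absolute Pearson correlation between channel signals
    a_corr = np.abs(np.corrcoef(signals))
    return alpha * a_dist + (1.0 - alpha) * a_corr

def gcn_layer(adj, x, w):
    """One graph convolution: relu(D^-1/2 (A + I) D^-1/2 X W)."""
    a = adj + np.eye(adj.shape[0])
    d = np.power(a.sum(axis=1), -0.5)
    a_hat = a * d[:, None] * d[None, :]
    return np.maximum(a_hat @ x @ w, 0.0)
```

In a full model, stacked layers of this form would produce node embeddings that are pooled into a patient-invariant seizure representation.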
Abstract: Colonoscopy is recognized as the primary standard for detecting colorectal cancer and its precursory symptoms. With hardware advances in recent years, contemporary methods using deep learning models have made great progress, but they still suffer from relatively high miss rates for abnormalities. Some current models are also highly biased toward local features and fail to capture the global aspects of their input. To address this, we propose GloFF-Net, a convolutional neural network architecture. Our model uses an encoder-decoder structure combined with custom attention mechanisms that fuse global and local features. We achieve strong results and validate the improvements on several publicly available benchmark datasets. Furthermore, we compare our model with other state-of-the-art methods. Our approach generalizes well, maintaining strong performance even with limited training data.
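One common way such a global-local fusion block can be realized is channel attention driven by a globally pooled descriptor. The sketch below is a minimal numpy illustration under that assumption; the weight matrices `w_g` and `w_l` and the residual formulation are hypothetical, not GloFF-Net's actual design:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse_global_local(local_feat, w_g, w_l):
    """Reweight a local feature map with attention from global context.

    local_feat: (c, h, w) encoder feature map
    w_g, w_l:   (c, c) illustrative learned projection matrices
    """
    # global descriptor via global average pooling
    g = local_feat.mean(axis=(1, 2))                      # (c,)
    # per-channel attention weights computed from global context
    attn = sigmoid(w_g @ g)                               # (c,)
    # attended local features plus a broadcast global residual
    return local_feat * attn[:, None, None] + (w_l @ g)[:, None, None]
```

The point of the design is that every spatial location is modulated by information pooled from the whole image, countering the bias toward purely local features.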
Abstract: In recent years, deep learning has demonstrated great capability in classifying the labels and severity grades of different diseases, and some methods attempt to explain how their predictions are made. Inspired by Koch's postulates, a cornerstone of evidence-based medicine (EBM) for identifying pathogens, our work aims to harness the interpretability of deep learning for medical diagnosis. We elucidate the decision-making process of a diabetic retinopathy (DR) detector by identifying and isolating the neuron activation patterns it relies upon, thereby establishing a connection between these patterns and the presence of lesions for a pathologically informed explanation. Specifically, building on prior work that introduced pathological descriptors derived from activated neurons within the DR detector, which encapsulate both the spatial and visual characteristics of lesions, we present Patho-GAN2, an innovative network designed to generate medically accurate retinal images and thereby visualize the symptoms encoded by these descriptors. The images produced by our method surpass those generated by earlier approaches on quantitative measures.
Abstract: With the continuous development of technology, the long-term accumulation of data has gradually enabled models to overcome the overfitting caused by data scarcity. However, the sheer volume of data makes it difficult for models to acquire effective features within a limited time, resulting in significant performance gaps between training and test sets. Atrous convolution enhances a model's feature extraction ability by expanding the receptive field of convolution kernels, but poorly chosen atrous (dilation) rates can reduce its effectiveness. To address this issue, this paper proposes an adaptive atrous convolutional neural network based on an online inference strategy, which enables convolution kernels to adjust their atrous rates at the pixel level according to image content. Experiments show that the proposed network effectively improves the feature extraction ability of models, can be flexibly embedded into various convolutional neural networks, and can be applied to a range of computer vision tasks.
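The pixel-level rate adaptation described above can be approximated by computing the convolution at several candidate dilation rates and mixing them with per-pixel softmax weights. The numpy sketch below makes that assumption explicit; the candidate rate set and the soft mixture (rather than a hard per-pixel choice) are simplifications, not the paper's exact mechanism:

```python
import numpy as np

def dilated_conv2d(img, kernel, rate):
    """'Same'-padded single-channel dilated cross-correlation (odd kernel)."""
    k = kernel.shape[0]
    pad = rate * (k // 2)
    p = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for i in range(k):
        for j in range(k):
            di, dj = i * rate, j * rate
            out += kernel[i, j] * p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def adaptive_atrous(img, kernel, rate_logits, rates=(1, 2, 3)):
    """Per-pixel soft selection among candidate dilation rates.

    rate_logits: (len(rates), H, W) content-dependent scores, e.g. predicted
    by a small auxiliary branch of the network (assumed here).
    """
    outs = np.stack([dilated_conv2d(img, kernel, r) for r in rates])
    w = np.exp(rate_logits - rate_logits.max(axis=0))
    w /= w.sum(axis=0)
    # each pixel blends the responses of the candidate receptive-field sizes
    return (w * outs).sum(axis=0)
```

With a hard argmax instead of the softmax, each pixel would commit to a single dilation rate; the soft mixture keeps the operation differentiable for end-to-end training.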