ArXiv cs.CV -- Mon, 28 Sep 2020

1.SuPEr-SAM: Using the Supervision Signal from a Pose Estimator to Train a Spatial Attention Module for Personal Protective Equipment Recognition ⬇️

We propose a deep learning method to automatically detect personal protective equipment (PPE), such as helmets, surgical masks, reflective vests, boots and so on, in images of people. Typical approaches for PPE detection based on deep learning are (i) to train an object detector for items such as those listed above or (ii) to train a person detector and a classifier that takes the bounding boxes predicted by the detector and discriminates between people wearing and people not wearing the corresponding PPE items. We propose a novel and accurate approach that uses three components: a person detector, a body pose estimator and a classifier. Our novelty consists in using the pose estimator only at training time, to improve the prediction performance of the classifier. We modify the neural architecture of the classifier by adding a spatial attention mechanism, which is trained using the supervision signal from the pose estimator. In this way, the classifier learns to focus on PPE items, using knowledge from the pose estimator, with almost no computational overhead during inference.
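
A minimal sketch of the core idea, assuming PyTorch and made-up tensor shapes: a sigmoid-gated spatial attention map over the classifier's features receives an auxiliary MSE loss against a pose-estimator keypoint heatmap, which is only needed at training time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Single-channel spatial attention head (illustrative, not the paper's exact module)."""
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, features):
        attn = torch.sigmoid(self.conv(features))   # (B, 1, H, W) attention map
        return features * attn, attn                # reweighted features + map

# Hypothetical training step: the pose estimator's heatmap supervises the map.
features = torch.randn(4, 256, 32, 32)              # classifier features (assumed shape)
pose_heatmap = torch.rand(4, 1, 32, 32)             # from the pose estimator (placeholder)
attended, attn = SpatialAttention(256)(features)
attn_loss = F.mse_loss(attn, pose_heatmap)          # auxiliary supervision signal
```

At inference the pose estimator is dropped and only the cheap attention head remains, which is how the near-zero overhead claim would materialize.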

2.Are scene graphs good enough to improve Image Captioning? ⬇️

Many top-performing image captioning models rely solely on object features computed with an object detection model to generate image descriptions. However, recent studies propose to directly use scene graphs to introduce information about object relations into captioning, hoping to better describe interactions between objects. In this work, we thoroughly investigate the use of scene graphs in image captioning. We empirically study whether using additional scene graph encoders can lead to better image descriptions and propose a conditional graph attention network (C-GAT), where the image captioning decoder state is used to condition the graph updates. Finally, we determine to what extent noise in the predicted scene graphs influences caption quality. Overall, we find no significant difference between models that use scene graph features and models that only use object detection features across different captioning metrics, which suggests that existing scene graph generation models are still too noisy to be useful in image captioning. Moreover, although the quality of predicted scene graphs is very low in general, when using high quality scene graphs we obtain gains of up to 3.3 CIDEr compared to a strong Bottom-Up Top-Down baseline.
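
To make the conditioning idea concrete, here is a hedged sketch (PyTorch, toy shapes, not the authors' code) of a graph attention update whose queries are mixed with the captioning decoder state:

```python
import torch
import torch.nn.functional as F

def conditioned_graph_attention(nodes, adj, dec_state, Wq, Wk, Wv):
    """Scaled dot-product attention over graph neighbours, with queries
    conditioned on the decoder state (illustrative only)."""
    q = (nodes + dec_state) @ Wq                          # decoder-conditioned queries
    k, v = nodes @ Wk, nodes @ Wv
    scores = (q @ k.T) / k.shape[-1] ** 0.5
    scores = scores.masked_fill(adj == 0, float('-inf')) # restrict to graph edges
    return F.softmax(scores, dim=-1) @ v                  # updated node features

N, D = 6, 32                                              # toy graph size
out = conditioned_graph_attention(torch.randn(N, D),
                                  torch.eye(N),           # toy adjacency (self-loops)
                                  torch.randn(D),
                                  *(torch.randn(D, D) for _ in range(3)))
```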

3.CAD2Real: Deep learning with domain randomization of CAD data for 3D pose estimation of electronic control unit housings ⬇️

Electronic control units (ECUs) are essential for many automobile components, e.g. engine, anti-lock braking system (ABS), steering and airbags. For some products, the 3D pose of each single ECU needs to be determined during series production. Deep learning approaches cannot easily be applied to this problem, because labeled training data is not available in sufficient numbers. Thus, we train state-of-the-art artificial neural networks (ANNs) on purely synthetic training data, which is automatically created from a single CAD file. By randomizing parameters during rendering of training images, we enable inference on RGB images of a real sample part. In contrast to classic image processing approaches, this data-driven approach poses only a few requirements on the measurement setup and transfers to related use cases with little development effort.

4.Locally orderless tensor networks for classifying two- and three-dimensional medical images ⬇️

Tensor networks are factorisations of high-rank tensors into networks of lower-rank tensors and have primarily been used to analyse quantum many-body problems. Tensor networks have seen a recent surge of interest in relation to supervised learning tasks, with a focus on image classification. In this work, we extend the matrix product state (MPS) tensor networks, which operate on one-dimensional vectors, to work with 2D and 3D medical images. We treat small image regions as orderless, squeeze their spatial information into feature dimensions and then perform MPS operations on these locally orderless regions. These local representations are then aggregated in a hierarchical manner to retain global structure. The proposed locally orderless tensor network (LoTeNet) is compared with relevant methods on three datasets. The architecture of LoTeNet is fixed in all experiments, and we show it requires fewer computational resources to attain performance on par with or superior to the compared methods.
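
The "squeeze" step can be illustrated in a few lines, assuming PyTorch: pixels of each k x k region are folded into the channel dimension (e.g. via `pixel_unshuffle`), so each spatial position carries an orderless feature vector that a 1D MPS layer can contract.

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 1, 64, 64)       # (batch, channels, H, W), e.g. 2D slices
k = 4                                # local region size (an assumption)
squeezed = F.pixel_unshuffle(x, k)   # -> (2, 1 * k * k, 16, 16)

# Each of the 16x16 positions now holds a 16-dim orderless local feature
# vector; MPS contractions run over these, and stacking such layers
# hierarchically would recover global structure.
print(squeezed.shape)                # torch.Size([2, 16, 16, 16])
```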

5.Database Annotation with few Examples: An Atlas-based Framework using Diffeomorphic Registration of 3D Trees ⬇️

Automatic annotation of anatomical structures can help simplify workflow during interventions in numerous clinical applications but usually involves a large amount of annotated data. The complexity of the labeling task, together with the lack of representative data, slows down the development of robust solutions. In this paper, we propose a solution requiring very few annotated cases to label 3D pelvic arterial trees of patients with benign prostatic hyperplasia. We take advantage of Large Deformation Diffeomorphic Metric Mapping (LDDMM) to perform registration based on meaningful deformations from which we build an atlas. Branch pairing is then computed from the atlas to new cases using optimal transport to ensure one-to-one correspondence during the labeling process. To tackle topological variations in the tree, which usually degrades the performance of atlas-based techniques, we propose a simple bottom-up label assignment adapted to the pelvic anatomy. The proposed method achieves 97.6% labeling precision with only 5 cases for training, while in comparison learning-based methods only reach 82.2% on such small training sets.

6.From Pixel to Patch: Synthesize Context-aware Features for Zero-shot Semantic Segmentation ⬇️

Zero-shot learning has been actively studied for the image classification task to relieve the burden of annotating image labels. Interestingly, although the semantic segmentation task requires even more labor-intensive pixel-wise annotation, zero-shot semantic segmentation has attracted only limited research interest. Thus, we focus on zero-shot semantic segmentation, which aims to segment unseen objects with only category-level semantic representations provided for unseen categories. In this paper, we propose a novel Context-aware feature Generation Network (CaGNet), which can synthesize context-aware pixel-wise visual features for unseen categories based on category-level semantic representations and pixel-wise contextual information. The synthesized features are used to finetune the classifier to enable segmenting unseen objects. Furthermore, we extend pixel-wise feature generation and finetuning to patch-wise feature generation and finetuning, which additionally considers the inter-pixel relationship. Experimental results on Pascal-VOC, Pascal-Context, and COCO-stuff show that our method significantly outperforms the existing zero-shot semantic segmentation methods.

7.Improved Dimensionality Reduction of various Datasets using Novel Multiplicative Factoring Principal Component Analysis (MPCA) ⬇️

Principal Component Analysis (PCA) is known to be the most widely applied dimensionality reduction approach. Many improvements have been made to traditional PCA in order to obtain optimal results in the dimensionality reduction of various datasets. In this paper, we present an improvement to the traditional PCA approach called Multiplicative factoring Principal Component Analysis (MPCA). The advantage of MPCA over traditional PCA is that a penalty is imposed on the occurrence space through a multiplier, to make the effect of outliers negligible when seeking out projections. Here we apply two multiplier approaches: a total distance metric and a cosine similarity metric. These two approaches can learn the relationship that exists between each of the data points and the principal projections in the feature space. As a result, improved low-rank projections are obtained by iteratively reweighting the data to make the effect of corrupt data in the training set negligible. Experiments were carried out on the YaleB, MNIST, AR, and Isolet datasets, and the results were compared to those obtained from popular dimensionality reduction methods such as traditional PCA and RPCA-OM, as well as some recently published methods such as IFPCA-1 and IFPCA-2.
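
A hedged NumPy sketch of the multiplicative idea using the cosine-similarity multiplier (the total-distance variant and the authors' exact update rule may differ): samples that align poorly with the current principal subspace get down-weighted before the next decomposition.

```python
import numpy as np

def weighted_pca_sketch(X, n_components=2, n_iter=5):
    """Iteratively reweighted PCA: cosine similarity between each sample and
    its subspace reconstruction acts as the multiplier (illustrative only)."""
    w = np.ones(len(X))
    Xc = X - X.mean(axis=0)
    for _ in range(n_iter):
        _, _, Vt = np.linalg.svd(Xc * w[:, None], full_matrices=False)
        P = Vt[:n_components]                        # principal directions
        proj = Xc @ P.T @ P                          # reconstruction in subspace
        num = np.einsum('ij,ij->i', Xc, proj)
        den = np.linalg.norm(Xc, axis=1) * np.linalg.norm(proj, axis=1) + 1e-12
        w = num / den                                # cosine similarity multiplier
    return P

P = weighted_pca_sketch(np.random.randn(100, 20))    # synthetic data
```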

8.Tarsier: Evolving Noise Injection in Super-Resolution GANs ⬇️

Super-resolution aims at increasing the resolution and level of detail within an image. The current state of the art in general single-image super-resolution is held by NESRGAN+, which injects a Gaussian noise after each residual layer at training time. In this paper, we harness evolutionary methods to improve NESRGAN+ by optimizing the noise injection at inference time. More precisely, we use Diagonal CMA to optimize the injected noise according to a novel criterion combining quality assessment and realism. Our results are validated by the PIRM perceptual score and a human study. Our method outperforms NESRGAN+ on several standard super-resolution datasets. More generally, our approach can be used to optimize any method based on noise injection.
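
A minimal sketch of the optimization loop with the `cma` package, under heavy assumptions: `generator` and `perceptual_score` are dummy stand-ins for NESRGAN+ and the paper's combined quality/realism criterion.

```python
import numpy as np
import cma  # pip install cma

def perceptual_score(image):
    return float(np.mean(image))            # dummy criterion (placeholder)

def generator(lr_image, noise):
    return lr_image + 0.01 * noise.reshape(lr_image.shape)  # dummy "network"

lr_image = np.random.rand(8, 8)
es = cma.CMAEvolutionStrategy(np.zeros(lr_image.size), 0.5,
                              {'CMA_diagonal': True, 'maxiter': 20})
while not es.stop():
    candidates = es.ask()
    # CMA-ES minimizes, so negate the score we want to maximize
    es.tell(candidates, [-perceptual_score(generator(lr_image, np.asarray(z)))
                         for z in candidates])
best_noise = es.result.xbest                # injected at inference time
```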

9.Training CNNs in Presence of JPEG Compression: Multimedia Forensics vs Computer Vision ⬇️

Convolutional Neural Networks (CNNs) have proved very accurate in multiple computer vision image classification tasks that required visual inspection in the past (e.g., object recognition, face detection, etc.). Motivated by these astonishing results, researchers have also started using CNNs to cope with image forensic problems (e.g., camera model identification, tampering detection, etc.). However, in computer vision, image classification methods typically rely on visual cues easily detectable by human eyes. Conversely, forensic solutions rely on almost invisible traces that are often very subtle and lie in the fine details of the image under analysis. For this reason, training a CNN to solve a forensic task requires some special care, as common processing operations (e.g., resampling, compression, etc.) can strongly hinder forensic traces. In this work, we focus on the effect that JPEG has on CNN training, considering different computer vision and forensic image classification problems. Specifically, we consider the issues that arise from JPEG compression and misalignment of the JPEG grid. We show that it is necessary to consider these effects when generating a training dataset in order to properly train a forensic detector without losing generalization capability, whereas these effects can almost be ignored for computer vision tasks.

10.AIM 2020 Challenge on Real Image Super-Resolution: Methods and Results ⬇️

This paper introduces the real image Super-Resolution (SR) challenge that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2020. This challenge involves three tracks to super-resolve an input image for $\times$2, $\times$3 and $\times$4 scaling factors, respectively. The goal is to attract more attention to realistic image degradation for the SR task, which is much more complicated and challenging, and contributes to real-world image super-resolution applications. In total, 452 participants registered for the three tracks, and 24 teams submitted their results. These submissions gauge the state-of-the-art approaches for real image SR in terms of PSNR and SSIM.

11.In-sample Contrastive Learning and Consistent Attention for Weakly Supervised Object Localization ⬇️

Weakly supervised object localization (WSOL) aims to localize the target object using only image-level supervision. Recent methods encourage the model to activate feature maps over the entire object by dropping the most discriminative parts. However, they are likely to induce excessive extension into the background, which leads to over-estimated localization. In this paper, we consider the background as an important cue that guides the feature activation to cover the sophisticated object region, and propose a contrastive attention loss. The loss promotes similarity between the foreground and its dropped version, and dissimilarity between the dropped version and the background. Furthermore, we propose a foreground consistency loss that penalizes earlier layers for producing noisy attention, taking the later layer as a reference to provide them with a sense of backgroundness. It guides the early layers to activate on objects rather than on locally distinctive backgrounds, so that their attention maps become similar to that of the later layer. For better optimization of the above losses, we use non-local attention blocks to replace channel-pooled attention, leading to enhanced attention maps that account for spatial similarity. Last but not least, we propose to drop background regions in addition to the most discriminative region. Our method achieves state-of-the-art performance on the CUB-200-2011 and ImageNet benchmark datasets in terms of top-1 localization accuracy and MaxBoxAccV2, and we provide a detailed analysis of our individual components. The code will be publicly available online for reproducibility.
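
A hedged sketch of the contrastive attention loss on pooled feature vectors (shapes and the exact similarity measure are assumptions; the paper's formulation may differ):

```python
import torch
import torch.nn.functional as F

def contrastive_attention_loss(fg, fg_dropped, bg):
    """Pull the foreground and its dropped version together; push the
    dropped version away from the background (illustrative only)."""
    pos = F.cosine_similarity(fg, fg_dropped, dim=1)
    neg = F.cosine_similarity(fg_dropped, bg, dim=1)
    return (1 - pos).mean() + neg.clamp(min=0).mean()

loss = contrastive_attention_loss(torch.randn(4, 128),   # pooled foreground
                                  torch.randn(4, 128),   # pooled dropped version
                                  torch.randn(4, 128))   # pooled background
```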

12.Deep Adversarial Transition Learning using Cross-Grafted Generative Stacks ⬇️

Current deep domain adaptation methods used in computer vision have mainly focused on learning discriminative and domain-invariant features across different domains. In this paper, we present a novel "deep adversarial transition learning" (DATL) framework that bridges the domain gap by projecting the source and target domains into intermediate, transitional spaces through the employment of adjustable, cross-grafted generative network stacks and effective adversarial learning between transitions. Specifically, we construct variational auto-encoders (VAE) for the two domains, and form bidirectional transitions by cross-grafting the VAEs' decoder stacks. Furthermore, generative adversarial networks (GAN) are employed for domain adaptation, mapping the target domain data to the known label space of the source domain. The overall adaptation process hence consists of three phases: feature representation learning by VAEs, transition generation, and transition alignment by GANs. Experimental results demonstrate that our method outperforms the state-of-the-art on a number of unsupervised domain adaptation benchmarks.

13.Tied Block Convolution: Leaner and Better CNNs with Shared Thinner Filters ⬇️

Convolution is the main building block of convolutional neural networks (CNNs). We observe that an optimized CNN often has highly correlated filters as the number of channels increases with depth, reducing the expressive power of feature representations. We propose Tied Block Convolution (TBC), which shares the same thinner filters over equal blocks of channels and produces multiple responses with a single filter. The concept of TBC can also be extended to group convolution and fully connected layers, and can be applied to various backbone networks and attention modules. Our extensive experimentation on classification, detection, instance segmentation, and attention demonstrates TBC's significant across-the-board gain over standard convolution and group convolution. The proposed TiedSE attention module can even use 64 times fewer parameters than the SE module to achieve comparable performance. In particular, standard CNNs often fail to accurately aggregate information in the presence of occlusion and produce multiple redundant partial object proposals. By sharing filters across channels, TBC reduces correlation and can effectively handle highly overlapping instances. TBC increases the average precision for object detection on MS-COCO by 6% when the occlusion ratio is 80%. Our code will be released.
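
The channel-block sharing is easy to sketch in PyTorch (a minimal reading of the idea, not the released implementation): reshape the equal channel blocks into the batch dimension so one thin filter bank serves all of them.

```python
import torch
import torch.nn as nn

class TiedBlockConv2d(nn.Module):
    """One thin conv shared across `blocks` equal channel blocks (sketch)."""
    def __init__(self, in_channels, out_channels, kernel_size, blocks=2):
        super().__init__()
        assert in_channels % blocks == 0 and out_channels % blocks == 0
        self.blocks = blocks
        self.conv = nn.Conv2d(in_channels // blocks, out_channels // blocks,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        n, c, h, w = x.shape
        x = x.reshape(n * self.blocks, c // self.blocks, h, w)  # split blocks
        x = self.conv(x)                                        # shared filters
        return x.reshape(n, -1, h, w)                           # merge blocks

y = TiedBlockConv2d(64, 64, 3)(torch.randn(2, 64, 32, 32))      # -> (2, 64, 32, 32)
```

The parameter count of the convolution drops by a factor of blocks² relative to a standard convolution with the same channel widths, which is where the savings come from.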

14.Going to Extremes: Weakly Supervised Medical Image Segmentation ⬇️

Medical image annotation is a major hurdle for developing precise and robust machine learning models. Annotation is expensive, time-consuming, and often requires expert knowledge, particularly in the medical field. Here, we suggest using minimal user interaction in the form of extreme point clicks to train a segmentation model which, in effect, can be used to speed up medical image annotation. An initial segmentation is generated based on the extreme points utilizing the random walker algorithm. This initial segmentation is then used as a noisy supervision signal to train a fully convolutional network that can segment the organ of interest, based on the provided user clicks. Through experimentation on several medical imaging datasets, we show that the predictions of the network can be refined using several rounds of training with the prediction from the same weakly annotated data. Further improvements are shown utilizing the clicked points within a custom-designed loss and attention mechanism. Our approach has the potential to speed up the process of generating new training datasets for the development of new machine learning and deep learning-based models for, but not exclusively, medical image analysis.
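
A hedged sketch of the initial-segmentation step with scikit-image's random walker; the click coordinates and the background seeding below are invented for illustration.

```python
import numpy as np
from skimage.segmentation import random_walker

image = np.random.rand(128, 128)                 # stand-in for a medical slice
labels = np.zeros_like(image, dtype=int)         # 0 = unlabeled
for r, c in [(20, 64), (108, 64), (64, 15), (64, 110)]:   # extreme point clicks
    labels[r, c] = 1                             # foreground seeds
labels[0, :] = labels[-1, :] = labels[:, 0] = labels[:, -1] = 2  # background seeds
init_mask = random_walker(image, labels, beta=130) == 1
# `init_mask` would then serve as the noisy supervision for the FCN.
```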

15.An original framework for Wheat Head Detection using Deep, Semi-supervised and Ensemble Learning within Global Wheat Head Detection (GWHD) Dataset ⬇️

In this paper, we propose an original object detection methodology applied to the Global Wheat Head Detection (GWHD) Dataset. We worked with two major object detection architectures, Faster R-CNN and EfficientDet, in order to design a novel and robust wheat head detection model, and we emphasize optimizing the performance of our proposed final architectures. Furthermore, we conducted an extensive exploratory data analysis and adapted the best data augmentation techniques to our context. We use semi-supervised learning to boost previously supervised object detection models. Moreover, we put much effort into ensembling to achieve higher performance. Finally, we use specific post-processing techniques to optimize our wheat head detection results. Our results were submitted to a research challenge launched on the GWHD Dataset, led by nine research institutes from seven countries. Our proposed method ranked within the top 6% in this challenge.

16.CoFF: Cooperative Spatial Feature Fusion for 3D Object Detection on Autonomous Vehicles ⬇️

To reduce the amount of transmitted data, feature map based fusion has recently been proposed as a practical solution to cooperative 3D object detection by autonomous vehicles. The precision of object detection, however, may require significant improvement, especially for objects that are far away or occluded. To address this critical issue for the safety of autonomous vehicles and human beings, we propose a cooperative spatial feature fusion (CoFF) method for autonomous vehicles to effectively fuse feature maps for achieving higher 3D object detection performance. Specifically, CoFF differentiates weights among feature maps for a more guided fusion, based on how much new semantic information is provided by the received feature maps. It also enhances the inconspicuous features corresponding to far/occluded objects to improve their detection precision. Experimental results show that CoFF achieves a significant improvement in terms of both detection precision and effective detection range for autonomous vehicles, compared to previous feature fusion solutions.

17.Deep Multi-Scale Feature Learning for Defocus Blur Estimation ⬇️

This paper presents an edge-based defocus blur estimation method from a single defocused image. We first distinguish edges that lie at depth discontinuities (called depth edges, for which the blur estimate is ambiguous) from edges that lie at approximately constant depth regions (called pattern edges, for which the blur estimate is well-defined). Then, we estimate the defocus blur amount at pattern edges only, and explore an interpolation scheme based on guided filters that prevents data propagation across the detected depth edges to obtain a dense blur map with well-defined object boundaries. Both tasks (edge classification and blur estimation) are performed by deep convolutional neural networks (CNNs) that share weights to learn meaningful local features from multi-scale patches centered at edge locations. Experiments on naturally defocused images show that the proposed method presents qualitative and quantitative results that outperform state-of-the-art (SOTA) methods, with a good compromise between running time and accuracy.

18.daVinciNet: Joint Prediction of Motion and Surgical State in Robot-Assisted Surgery ⬇️

This paper presents a technique to concurrently and jointly predict the future trajectories of surgical instruments and the future state(s) of surgical subtasks in robot-assisted surgeries (RAS) using multiple input sources. Such predictions are a necessary first step towards shared control and supervised autonomy of surgical subtasks. Minute-long surgical subtasks, such as suturing or ultrasound scanning, often have distinguishable tool kinematics and visual features, and can be described as a series of fine-grained states with transition schematics. We propose daVinciNet - an end-to-end dual-task model for robot motion and surgical state predictions. daVinciNet performs concurrent end-effector trajectory and surgical state predictions using features extracted from multiple data streams, including robot kinematics, endoscopic vision, and system events. We evaluate our proposed model on an extended Robotic Intra-Operative Ultrasound (RIOUS+) imaging dataset collected on a da Vinci Xi surgical system and the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our model achieves up to 93.85% short-term (0.5s) and 82.11% long-term (2s) state prediction accuracy, as well as 1.07mm short-term and 5.62mm long-term trajectory prediction error.

19.A Computer Vision Approach to Combat Lyme Disease ⬇️

Lyme disease is an infectious disease transmitted to humans by a bite from an infected Ixodes species (blacklegged ticks). It is one of the fastest-growing vector-borne illnesses in North America and is expanding its geographic footprint. Lyme disease treatment is time-sensitive, and the disease can be cured by administering an antibiotic (prophylaxis) to the patient within 72 hours after a tick bite by the Ixodes species. However, the laboratory-based identification of each tick that might carry the bacteria is time-consuming and labour-intensive and cannot meet the maximum turn-around time of 72 hours for an effective treatment. Early identification of blacklegged ticks using computer vision technologies is a potential solution for promptly identifying a tick and administering prophylaxis within the crucial window period. In this work, we build an automated detection tool that can differentiate blacklegged ticks from other tick species using advanced deep learning and computer vision approaches. We demonstrate the classification of tick species using Convolutional Neural Network (CNN) models, trained end-to-end from tick images directly. Advanced knowledge transfer techniques within teacher-student learning frameworks are adopted to improve the performance of tick species classification. Our best CNN model achieves 92% accuracy on the test set. The tool can be integrated with the geography of exposure to determine the risk of Lyme disease infection and the need for prophylaxis treatment.
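
The teacher-student transfer can be sketched with the standard distillation loss (a common recipe, not necessarily the exact scheme used for the tick classifier):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft targets from the teacher (temperature T) mixed with the usual
    hard-label cross-entropy (illustrative hyperparameters)."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(8, 2), torch.randn(8, 2),
                         torch.randint(0, 2, (8,)))   # 2 classes: blacklegged or not
```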

20.Image-Based Sorghum Head Counting When You Only Look Once ⬇️

Modern trends in digital agriculture have seen a shift towards artificial intelligence for crop quality assessment and yield estimation. In this work, we document how a parameter-tuned single-shot object detection algorithm can be used to identify and count sorghum heads from aerial drone images. Our approach involves a novel exploratory analysis that identified key structural elements of the sorghum images and motivated the selection of parameter-tuned anchor boxes that contributed significantly to performance. These insights led to the development of a deep learning model that outperformed the baseline model and achieved an out-of-sample mean average precision of 0.95.

21.Exposing GAN-generated Faces Using Inconsistent Corneal Specular Highlights ⬇️

Sophisticated generative adversarial network (GAN) models are now able to synthesize highly realistic human faces that are difficult to discern from real ones visually. GAN-synthesized faces have become a new form of online disinformation. In this work, we show that GAN-synthesized faces can be exposed via the inconsistent corneal specular highlights between the two eyes. We show that such artifacts exist widely, and we further describe a method to extract and compare the corneal specular highlights from the two eyes. Qualitative and quantitative evaluations of our method suggest its simplicity and effectiveness in distinguishing GAN-synthesized faces.
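
One plausible way to quantify the inconsistency, sketched under strong assumptions (grayscale, pre-aligned eye crops; the paper's extraction and comparison pipeline is more involved): binarize the brightest pixels in each cornea and compare the two highlight masks with IoU.

```python
import numpy as np

def highlight_iou(left_eye, right_eye, thresh=0.9):
    """IoU of the specular-highlight masks of two aligned eye crops;
    a low value would flag inconsistent highlights (illustrative only)."""
    l = left_eye > thresh * left_eye.max()
    r = right_eye > thresh * right_eye.max()
    inter, union = np.logical_and(l, r).sum(), np.logical_or(l, r).sum()
    return inter / union if union else 1.0

score = highlight_iou(np.random.rand(32, 32), np.random.rand(32, 32))
```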

22.PK-GCN: Prior Knowledge Assisted Image Classification using Graph Convolution Networks ⬇️

Deep learning has achieved great success in various classification tasks. Typically, deep learning models learn underlying features directly from data, and no underlying relationships between classes are included. Similarity between classes can influence classification performance. In this article, we propose a method that incorporates class similarity knowledge into convolutional neural network models using a graph convolution layer. We evaluate our method on two benchmark image datasets, MNIST and CIFAR10, and analyze the results for different data and model sizes. Experimental results show that our model can improve classification accuracy, especially when the amount of available data is small.
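
A minimal sketch of the graph-convolution ingredient (a generic Kipf-and-Welling-style layer over class nodes; the paper's exact integration with the CNN is not reproduced here):

```python
import torch

def class_similarity_gcn_layer(H, S, W):
    """H: (C, D) class-node features, S: (C, C) class-similarity adjacency,
    W: (D, D') weights. Symmetrically normalized propagation (illustrative)."""
    deg = S.sum(dim=1)
    S_hat = S / torch.sqrt(deg[:, None] * deg[None, :])  # symmetric normalization
    return torch.relu(S_hat @ H @ W)

C, D = 10, 64                              # e.g. the 10 CIFAR10 classes (assumed)
S = 0.1 * torch.rand(C, C)
S = torch.eye(C) + (S + S.T) / 2           # assumed symmetric similarity matrix
out = class_similarity_gcn_layer(torch.randn(C, D), S, torch.randn(D, 32))
```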

23.Predicting galaxy spectra from images with hybrid convolutional neural networks ⬇️

Galaxies can be described by features of their optical spectra such as oxygen emission lines, or morphological features such as spiral arms. Although spectroscopy provides a rich description of the physical processes that govern galaxy evolution, spectroscopic data are observationally expensive to obtain. We are able to robustly predict and reconstruct galaxy spectra directly from broad-band imaging. We present a powerful new approach using a hybrid convolutional neural network with deconvolution instead of batch normalization; this hybrid CNN outperforms other models in our tests. The learned mapping between galaxy imaging and spectra will be transformative for future wide-field surveys, such as with the Vera C. Rubin Observatory and *Nancy Grace Roman Space Telescope*, by multiplying the scientific returns for spectroscopically-limited galaxy samples.

24.SemanticVoxels: Sequential Fusion for 3D Pedestrian Detection using LiDAR Point Cloud and Semantic Segmentation ⬇️

3D pedestrian detection is a challenging task in automated driving because pedestrians are relatively small, frequently occluded and easily confused with narrow vertical objects. LiDAR and camera are two commonly used sensor modalities for this task, which should provide complementary information. Unexpectedly, LiDAR-only detection methods tend to outperform multisensor fusion methods in public benchmarks. Recently, PointPainting has been presented to eliminate this performance drop by effectively fusing the output of a semantic segmentation network instead of the raw image information. In this paper, we propose a generalization of PointPainting to be able to apply fusion at different levels. After the semantic augmentation of the point cloud, we encode raw point data in pillars to get geometric features and semantic point data in voxels to get semantic features and fuse them in an effective way. Experimental results on the KITTI test set show that SemanticVoxels achieves state-of-the-art performance in both 3D and bird's eye view pedestrian detection benchmarks. In particular, our approach demonstrates its strength in detecting challenging pedestrian cases and outperforms current state-of-the-art approaches.

25.Style-invariant Cardiac Image Segmentation with Test-time Augmentation ⬇️

Deep models often suffer from severe performance drops due to the appearance shift in real clinical settings. Most of the existing learning-based methods rely on images from multiple sites/vendors or even the corresponding labels. However, collecting enough data to robustly model segmentation is not always feasible, given the complex appearance shifts caused by imaging factors in daily application. In this paper, we propose a novel style-invariant method for cardiac image segmentation. Based on zero-shot style transfer to remove the appearance shift and test-time augmentation to explore the diverse underlying anatomy, our proposed method is effective in combating the appearance shift. Our contribution is three-fold. First, inspired by the spirit of universal style transfer, we develop a zero-shot stylization for content images that generates stylized images similar in appearance to the style images. Second, we build a robust cardiac segmentation model based on the U-Net structure. Our framework mainly consists of two networks during testing: the ST network for removing the appearance shift and the segmentation network. Third, we investigate test-time augmentation to explore transformed versions of the stylized image for prediction, and the results are merged. Notably, our proposed framework performs adaptation fully at test time. Experimental results demonstrate that our method is promising and generic for generalizing deep segmentation models.

26.Brain Tumor Segmentation using 3D-CNNs with Uncertainty Estimation ⬇️

Automated segmentation of brain tumors in 3D magnetic resonance images (MRIs) is key to assessing the diagnosis and treatment of the disease. In recent years, convolutional neural networks (CNNs) have shown improved results on this task. However, high memory consumption is still a problem in 3D-CNNs. Moreover, most methods do not include uncertainty information, which is especially critical in medical diagnosis. This work proposes a 3D encoder-decoder architecture based on V-Net, which is trained with patching techniques to reduce memory consumption and decrease the effect of unbalanced data. We also introduce voxel-wise uncertainty, both epistemic and aleatoric, using test-time dropout and test-time data augmentation, respectively. Uncertainty maps can provide extra information to expert neurologists, useful for detecting when the model is not confident in the provided segmentation.
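
The epistemic half can be sketched with generic Monte Carlo dropout (the network below is a placeholder, not the paper's V-Net):

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_samples=20):
    """Keep dropout stochastic at inference, average several forward passes,
    and return the mean segmentation plus a per-voxel variance map."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout3d)):
            m.train()                                   # re-enable dropout only
    with torch.no_grad():
        preds = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)          # prediction, uncertainty

net = nn.Sequential(nn.Conv3d(1, 1, 3, padding=1), nn.Dropout3d(0.2))  # toy model
seg, unc = mc_dropout_predict(net, torch.randn(1, 1, 16, 16, 16))
```

The aleatoric half would follow the same pattern, with test-time data augmentation in place of dropout.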

27.Goal-Directed Occupancy Prediction for Lane-Following Actors ⬇️

Predicting the possible future behaviors of vehicles that drive on shared roads is a crucial task for safe autonomous driving. Many existing approaches to this problem strive to distill all possible vehicle behaviors into a simplified set of high-level actions. However, these action categories do not suffice to describe the full range of maneuvers possible in the complex road networks we encounter in the real world. To combat this deficiency, we propose a new method that leverages the mapped road topology to reason over possible goals and predict the future spatial occupancy of dynamic road actors. We show that our approach is able to accurately predict future occupancy that remains consistent with the mapped lane geometry and naturally captures multi-modality based on the local scene context while also not suffering from the mode collapse problem observed in prior work.

28.Enhancing MRI Brain Tumor Segmentation with an Additional Classification Network ⬇️

Brain tumor segmentation plays an essential role in medical image analysis. In recent studies, deep convolutional neural networks (DCNNs) have proved extremely powerful at tackling tumor segmentation tasks. In this paper, we propose a novel training method that enhances the segmentation results by adding an additional classification branch to the network. The whole network was trained end-to-end on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. On the BraTS 2020 validation set, it achieved an average Dice score of 78.43%, 89.99%, and 84.22% for the enhancing tumor, the whole tumor, and the tumor core, respectively.

29.DPN: Detail-Preserving Network with High Resolution Representation for Efficient Segmentation of Retinal Vessels ⬇️

Retinal vessels are important biomarkers for many ophthalmological and cardiovascular diseases. It is of great significance to develop an accurate and fast vessel segmentation model for computer-aided diagnosis. Existing methods such as U-Net follow the encoder-decoder pipeline, where detailed information is lost in the encoder in order to achieve a large field of view. Although detailed information can be recovered in the decoder via multi-scale fusion, it still contains noise. In this paper, we propose a deep segmentation model called the detail-preserving network (DPN) for efficient vessel segmentation. To preserve detailed spatial information and learn structural information at the same time, we designed the detail-preserving block (DP-Block). We then stacked eight DP-Blocks together to form the DPN. More importantly, there are no down-sampling operations among these blocks. As a result, the DPN can maintain a high resolution during processing, which is helpful for locating the boundaries of thin vessels. To illustrate the effectiveness of our method, we conducted experiments on three public datasets. Experimental results show that, compared to state-of-the-art methods, our method achieves competitive or better performance in terms of segmentation accuracy, segmentation speed, extensibility and the number of parameters. Specifically, 1) the AUC of our method ranks first/second/third on the STARE/CHASE_DB1/DRIVE datasets, respectively; 2) our method requires only one forward pass to generate a vessel segmentation map, and its segmentation speed is 20-160x faster than other methods on the DRIVE dataset; 3) we conducted cross-training experiments to demonstrate the extensibility of our method, and the results revealed that it shows superior performance; 4) the number of parameters of our method is only around 96k, fewer than in all comparison methods.

30.A Unified Plug-and-Play Framework for Effective Data Denoising and Robust Abstention ⬇️

The success of Deep Neural Networks (DNNs) highly depends on data quality. Moreover, predictive uncertainty makes high-performing DNNs risky for real-world deployment. In this paper, we aim to address these two issues by proposing a unified filtering framework leveraging the underlying data density, which can effectively denoise training data as well as avoid predicting on uncertain test data points. Our proposed framework leverages the underlying data distribution to differentiate between noise and clean data samples without requiring any modification to existing DNN architectures or loss functions. Extensive experiments on multiple image classification datasets and multiple CNN architectures demonstrate that our simple yet effective framework can outperform state-of-the-art techniques in denoising training data and abstaining on uncertain test data.
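
A hedged sketch of the density idea with a k-NN density estimate (the framework's actual density model and thresholds are not specified here, so everything below is an assumption):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def density_filter(features, k=10, quantile=0.1):
    """Score each sample by inverse mean k-NN distance in feature space and
    flag the lowest-density fraction as noisy/uncertain (illustrative only)."""
    knn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    dists, _ = knn.kneighbors(features)
    density = 1.0 / (dists[:, 1:].mean(axis=1) + 1e-12)   # skip self-distance
    return density >= np.quantile(density, quantile)      # True = keep/predict

keep = density_filter(np.random.randn(200, 64))           # synthetic features
```

The same score can serve both roles: drop low-density training samples as label noise, and abstain on low-density test samples as uncertain.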

31.Influence of segmentation accuracy in structural MR head scans on electric field computation for TMS and tES ⬇️

In several diagnosis and therapy procedures based on the electrostimulation effect, the internal physical quantity related to the stimulation is the induced electric field. To estimate the induced electric field in an individual human model, segmentation of anatomical imaging, such as magnetic resonance imaging (MRI) scans, of the corresponding body parts into tissues is required. Then, electrical properties associated with the different annotated tissues are assigned to the digital model to generate a volume conductor. An open question is how the segmentation accuracy of different tissues influences the distribution of the induced electric field. In this study, we applied parametric segmentation of different tissues to the available MRI to generate head models of different quality, using a deep learning neural network architecture named ForkNet. The induced electric fields are then compared to assess the effect of model segmentation variations. Computational results indicate that the influence of segmentation error is tissue-dependent. In the brain, sensitivity to segmentation accuracy is relatively high in the cerebrospinal fluid (CSF), moderate in gray matter (GM) and low in white matter, for both transcranial magnetic stimulation (TMS) and transcranial electrical stimulation (tES). A CSF segmentation accuracy reduction of 10% in terms of Dice coefficient (DC) leads to a decrease of up to 4% in the normalized induced electric field in both applications, whereas a GM segmentation accuracy reduction of 5.6% DC leads to an increase of up to 6% in the normalized induced electric field. Opposite trends of electric field variation were thus found between CSF and GM for both TMS and tES. The findings obtained here would be useful for quantifying the potential uncertainty of computational results.

32.G-SimCLR : Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling ⬇️

In the realm of computer vision, it is evident that deep neural networks perform better in a supervised setting with a large amount of labeled data. The representations learned with supervision are not only of high quality but also help the model enhance its accuracy. However, the collection and annotation of a large dataset are costly and time-consuming. To avoid this, there has been a lot of research in the field of unsupervised visual representation learning, especially in self-supervised settings. Among the recent advancements in self-supervised methods for visual recognition, SimCLR by Chen et al. shows that good quality representations can indeed be learned without explicit supervision. In SimCLR, the authors maximize the similarity between augmentations of the same image and minimize the similarity between augmentations of different images. A linear classifier trained with the representations learned using this approach yields 76.5% top-1 accuracy on the ImageNet ILSVRC-2012 dataset. In this work, we propose that, with the normalized temperature-scaled cross-entropy (NT-Xent) loss function (as used in SimCLR), it is beneficial not to have images of the same category in the same batch. In an unsupervised setting, the category information of images is missing. We use the latent space representations of a denoising autoencoder trained on the unlabeled dataset and cluster them with k-means to obtain pseudo labels. With this a priori information, we build batches in which no two images from the same pseudo category are to be found. We report comparable performance enhancements on the CIFAR10 dataset and a subset of the ImageNet dataset. We refer to our method as G-SimCLR.
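
A hedged sketch of the batching step, assuming precomputed denoising-autoencoder latents (the cluster count, batch construction and leftover handling below are simplifications):

```python
import numpy as np
from sklearn.cluster import KMeans

latents = np.random.randn(1000, 128)             # stand-in for DAE latents
pseudo = KMeans(n_clusters=10, n_init=10).fit_predict(latents)

# One image per cluster per batch, so no two images share a pseudo label.
per_cluster = [np.where(pseudo == c)[0].tolist() for c in range(10)]
rounds = min(len(c) for c in per_cluster)        # drop leftovers for simplicity
batches = [[per_cluster[c][r] for c in range(10)] for r in range(rounds)]
```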

33.Iterative regularization algorithms for image denoising with the TV-Stokes model ⬇️

We propose a set of iterative regularization algorithms for the TV-Stokes model to restore images from noisy images with Gaussian noise. They extend the iterative regularization algorithm proposed for the classical Rudin-Osher-Fatemi (ROF) model for image reconstruction, a single-step model involving a scalar field smoothing, to the TV-Stokes model for image reconstruction, a two-step model involving a vector field smoothing in the first step and a scalar field smoothing in the second. The iterative regularization algorithms proposed here are Richardson-iteration-like. Experimental results show an improvement over the original method in the quality of the restored image. A convergence analysis and numerical experiments are presented.
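
For context, the classical iterative regularization for the ROF model that these algorithms extend (following Osher et al.) can be written as

$$u_{k+1} = \operatorname*{arg\,min}_{u}\ \mathrm{TV}(u) + \frac{\lambda}{2}\,\lVert f + v_k - u\rVert_2^2, \qquad v_{k+1} = v_k + f - u_{k+1}, \qquad v_0 = 0,$$

where $f$ is the noisy image: the removed "noise" $f - u_{k+1}$ is accumulated in $v_k$ and added back at the next step. The algorithms here would apply analogous Richardson-like updates to the two smoothing steps of the TV-Stokes model.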

34.Alternating minimization for a single step TV-Stokes model for image denoising ⬇️

The paper presents a fully coupled TV-Stokes model and proposes an algorithm based on alternating minimization of the objective functional, whose first iteration is exactly the modified TV-Stokes model proposed earlier. The model is a generalization of the second-order Total Generalized Variation model. A convergence analysis is given.

35.Multidimensional TV-Stokes for image processing ⬇️

A complete multidimensional TV-Stokes model is proposed, based on smoothing a gradient field in the first step and reconstructing the multidimensional image from the gradient field in the second. It is the correct extension of the original two-dimensional TV-Stokes model to multiple dimensions. A numerical algorithm using Chambolle's semi-implicit dual formula is proposed. Numerical results for denoising 3D images and movies are presented. They show excellent performance in avoiding the staircase effect and preserving fine structures.