ArXiv cs.CV -- Thu, 22 Oct 2020

1.In-the-wild Drowsiness Detection from Facial Expressions ⬇️

Driving in a state of drowsiness is a major cause of road accidents, resulting in tremendous damage to life and property. Developing robust, automatic, real-time systems that can infer drowsiness states of drivers has the potential to make a life-saving impact. However, developing drowsiness detection systems that work well in real-world scenarios is challenging because of the difficulties associated with collecting high-volume realistic drowsy data and modeling the complex temporal dynamics of evolving drowsy states. In this paper, we propose a data collection protocol that involves outfitting vehicles of overnight shift workers with camera kits that record their faces while driving. We develop a drowsiness annotation guideline to enable humans to label the collected videos into 4 levels of drowsiness: 'alert', 'slightly drowsy', 'moderately drowsy' and 'extremely drowsy'. We experiment with different convolutional and temporal neural network architectures to predict drowsiness states from pose, expression and emotion-based representation of the input video of the driver's face. Our best performing model achieves a macro ROC-AUC of 0.78, compared to 0.72 for a baseline model.

2.3D Meta Point Signature: Learning to Learn 3D Point Signature for 3D Dense Shape Correspondence ⬇️

Point signature, a representation describing the structural neighborhood of a point in 3D shapes, can be applied to establish correspondences between points in 3D shapes. Conventional methods apply a weight-sharing network, e.g., some kind of graph neural network, across all neighborhoods to directly generate point signatures and gain the generalization ability by extensive training over a large amount of training samples from scratch. However, these methods lack the flexibility to rapidly adapt to unseen neighborhood structures and thus generalize poorly on new point sets. In this paper, we propose a novel meta-learning based 3D point signature model, named 3D meta point signature (MEPS) network, that is capable of learning robust point signatures in 3D shapes. By regarding each point signature learning process as a task, our method obtains a model optimized over the distribution of all tasks, generating reliable signatures for new tasks, i.e., signatures of unseen point neighborhoods. Specifically, the MEPS consists of two modules: a base signature learner and a meta signature learner. During training, the base-learner is trained to perform specific signature learning tasks. In the meantime, the meta-learner is trained to update the base-learner with optimal parameters. During testing, the meta-learner that is learned with the distribution of all tasks can adaptively change parameters of the base-learner, accommodating unseen local neighborhoods. We evaluate the MEPS model on two datasets, FAUST and TOSCA, for dense 3D shape correspondence. Experimental results demonstrate that our method not only gains significant improvements over the baseline model and achieves state-of-the-art results, but is also capable of handling unseen 3D shapes.

3.Black-Box Ripper: Copying black-box models using generative evolutionary algorithms ⬇️

We study the task of replicating the functionality of black-box neural models, for which we only know the output class probabilities provided for a set of input images. We assume back-propagation through the black-box model is not possible and its training images are not available, e.g. the model could be exposed only through an API. In this context, we present a teacher-student framework that can distill the black-box (teacher) model into a student model with minimal accuracy loss. To generate useful data samples for training the student, our framework (i) learns to generate images on a proxy data set (with images and classes different from those used to train the black-box) and (ii) applies an evolutionary strategy to make sure that each generated data sample exhibits a high response for a specific class when given as input to the black box. Our framework is compared with several baseline and state-of-the-art methods on three benchmark data sets. The empirical evidence indicates that our model is superior to the considered baselines. Although our method does not back-propagate through the black-box network, it generally surpasses state-of-the-art methods that regard the teacher as a glass-box model. Our code is available at: this https URL.
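
As an illustration of the evolutionary step (ii), here is a minimal, hypothetical sketch: both the "generator" and the "black box" below are toy stand-ins invented for the example, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, IMG_DIM, N_CLASSES = 16, 64, 10

# Toy stand-ins (assumptions for illustration, not the paper's components):
G = rng.standard_normal((LATENT_DIM, IMG_DIM))   # "proxy generator" weights
W = rng.standard_normal((IMG_DIM, N_CLASSES))    # "black-box teacher" weights

def generate(z):
    return np.tanh(z @ G)                        # latent code -> fake sample

def black_box(x):
    logits = x @ W                               # only probabilities are observable
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def evolve(target, pop=64, elite=8, steps=200, sigma=0.3):
    """Keep the latents whose generated samples receive the highest
    black-box probability for `target`, then mutate them."""
    Z = rng.standard_normal((pop, LATENT_DIM))
    for _ in range(steps):
        fitness = black_box(generate(Z))[:, target]
        parents = Z[np.argsort(fitness)[-elite:]]
        Z = np.repeat(parents, pop // elite, axis=0)
        Z += sigma * rng.standard_normal(Z.shape)  # mutation
    best = Z[np.argmax(black_box(generate(Z))[:, target])]
    return generate(best)

sample = evolve(target=3)
print(black_box(sample[None])[0, 3])  # high confidence for class 3
```

Samples evolved this way would then serve as training data for the student, with the black-box output probabilities acting as soft labels.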

4.One Model to Reconstruct Them All: A Novel Way to Use the Stochastic Noise in StyleGAN ⬇️

Generative Adversarial Networks (GANs) have achieved state-of-the-art performance for several image generation and manipulation tasks. Different works have improved the limited understanding of the latent space of GANs by embedding images into specific GAN architectures to reconstruct the original images. We present a novel StyleGAN-based autoencoder architecture, which can reconstruct images with very high quality across several data domains. We demonstrate a previously unknown grade of generalizability by training the encoder and decoder independently and on different datasets. Furthermore, we provide new insights about the significance and capabilities of noise inputs of the well-known StyleGAN architecture. Our proposed architecture can handle up to 40 images per second on a single GPU, which is approximately 28x faster than previous approaches. Finally, our model also shows promising results when compared to the state of the art on the image denoising task, although it was not explicitly designed for this task.

5.UAV LiDAR Point Cloud Segmentation of A Stack Interchange with Deep Neural Networks ⬇️

Stack interchanges are essential components of transportation systems. Mobile laser scanning (MLS) systems have been widely used in road infrastructure mapping, but accurate mapping of complicated multi-layer stack interchanges is still challenging. This study examined the point clouds collected by a new Unmanned Aerial Vehicle (UAV) Light Detection and Ranging (LiDAR) system to perform the semantic segmentation task of a stack interchange. An end-to-end supervised 3D deep learning framework was proposed to classify the point clouds. The proposed method has proven able to capture 3D features in complicated interchange scenarios with stacked convolution, and the result achieved over 93% classification accuracy. In addition, the new low-cost semi-solid-state LiDAR sensor Livox Mid-40, featuring an incommensurable rosette scanning pattern, has demonstrated its potential in high-definition urban mapping.

6.Representing Point Clouds with Generative Conditional Invertible Flow Networks ⬇️

In this paper, we propose a simple yet effective method to represent point clouds as sets of samples drawn from a cloud-specific probability distribution. This interpretation matches intrinsic characteristics of point clouds: the number of points and their ordering within a cloud are not important, as all points are drawn from the proximity of the object boundary. We postulate to represent each cloud as a parameterized probability distribution defined by a generative neural network. Once trained, such a model provides a natural framework for point cloud manipulation operations, such as aligning a new cloud into a default spatial orientation. To exploit similarities between same-class objects and to improve model performance, we turn to weight sharing: networks that model densities of points belonging to objects in the same family share all parameters with the exception of a small, object-specific embedding vector. We show that these embedding vectors capture semantic relationships between objects. Our method leverages generative invertible flow networks to learn embeddings as well as to generate point clouds. Thanks to this formulation and contrary to similar approaches, we are able to train our model in an end-to-end fashion. As a result, our model offers competitive or superior quantitative results on benchmark datasets, while enabling unprecedented capabilities to perform cloud manipulation tasks, such as point cloud registration and regeneration, by a generative network.

7.Adaptive Structured Sparse Network for Efficient CNNs with Feature Regularization ⬇️

Neural networks have made great progress in pixel-to-pixel image processing tasks, e.g., super-resolution, style transfer and image denoising. However, recent algorithms have a tendency to be too structurally complex to deploy on embedded systems. Traditional accelerating methods fix the options for pruning network weights to produce unstructured or structured sparsity. Many of them lack flexibility for different inputs. In this paper, we propose a Feature Regularization method that can generate input-dependent structured sparsity for hidden features. Our method can improve the sparsity level in intermediate features by 60% to over 95% through pruning along the channel dimension for each pixel, thus relieving the computational and memory burden. On the BSD100 dataset, the multiply-accumulate operations can be reduced by over 80% for super-resolution tasks. In addition, we propose a method to quantitatively control the level of sparsity and design a way to train one model that supports multiple sparsity levels. We verify the effectiveness of our method for pixel-to-pixel tasks by qualitative theoretical analysis and experiments.

8.What is Wrong with Continual Learning in Medical Image Segmentation? ⬇️

Continual learning protocols are attracting increasing attention from the medical imaging community. In a continual setup, data from different sources arrives sequentially and each batch is only available for a limited period. Given the inherent privacy risks associated with medical data, this setup reflects the reality of deployment for deep learning diagnostic radiology systems. Many techniques exist to learn continuously for classification tasks, and several have been adapted to semantic segmentation. Yet most have at least one of the following flaws: a) they rely too heavily on domain identity information during inference, or b) data as seen in early training stages does not profit from training with later data. In this work, we propose an evaluation framework that addresses both concerns, and introduce a fair multi-model benchmark. We show that the benchmark outperforms two popular continual learning methods for the task of T2-weighted MR prostate segmentation.

9.Synthetic Expressions are Better Than Real for Learning to Detect Facial Actions ⬇️

Critical obstacles in training classifiers to detect facial actions are the limited sizes of annotated video databases and the relatively low frequencies of occurrence of many actions. To address these problems, we propose an approach that makes use of facial expression generation. Our approach reconstructs the 3D shape of the face from each video frame, aligns the 3D mesh to a canonical view, and then trains a GAN-based network to synthesize novel images with facial action units of interest. To evaluate this approach, a deep neural network was trained on two separate datasets: One network was trained on video of synthesized facial expressions generated from FERA17; the other network was trained on unaltered video from the same database. Both networks used the same train and validation partitions and were tested on the test partition of actual video from FERA17. The network trained on synthesized facial expressions outperformed the one trained on actual facial expressions and surpassed current state-of-the-art approaches.

10.Progressive Batching for Efficient Non-linear Least Squares ⬇️

Non-linear least squares solvers are used across a broad range of offline and real-time model fitting problems. Most improvements of the basic Gauss-Newton algorithm tackle convergence guarantees or leverage the sparsity of the underlying problem structure for computational speedup. With the success of deep learning methods leveraging large datasets, stochastic optimization methods have recently received a lot of attention. Our work borrows ideas from both stochastic machine learning and statistics, and we present an approach for non-linear least squares that guarantees convergence while at the same time significantly reducing the required amount of computation. Empirical results show that our proposed method achieves competitive convergence rates compared to traditional second-order approaches on common computer vision problems, such as image alignment and essential matrix estimation, with very large numbers of residuals.
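
The core idea lends itself to a compact sketch: run each Gauss-Newton step on a random subset of residuals and grow the subset over the iterations. The toy curve-fitting problem and the doubling schedule below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 10_000)
y = np.exp(-1.7 * t) + 0.01 * rng.standard_normal(t.size)   # ground truth a = 1.7

def residual(a, idx):
    return np.exp(-a * t[idx]) - y[idx]

def jacobian(a, idx):
    return (-t[idx] * np.exp(-a * t[idx]))[:, None]          # d r / d a

a, batch = 0.5, 64
for it in range(12):
    idx = rng.choice(t.size, size=min(batch, t.size), replace=False)
    r, J = residual(a, idx), jacobian(a, idx)
    # Gauss-Newton step on the sampled residuals: solve (J^T J) da = -J^T r
    da = np.linalg.solve(J.T @ J, -J.T @ r)
    a += da.item()
    batch *= 2                                               # progressive batching
    print(f"iter {it:2d}  batch {len(idx):5d}  a = {a:.4f}")
```

Early iterations touch only a tiny fraction of the residuals, while the final iterations use the full problem, which is where the computational savings come from.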

11.Learning to Guide Local Feature Matches ⬇️

We tackle the problem of finding accurate and robust keypoint correspondences between images. We propose a learning-based approach to guide local feature matches via a learned approximate image matching. Our approach can boost the results of SIFT to a level similar to state-of-the-art deep descriptors, such as SuperPoint, ContextDesc or D2-Net, and can improve performance for these descriptors. We introduce and study different levels of supervision to learn coarse correspondences. In particular, we show that weak supervision from epipolar geometry leads to higher performance than the stronger but more biased point-level supervision and is a clear improvement over weak image-level supervision. We demonstrate the benefits of our approach in a variety of conditions by evaluating our guided keypoint correspondences for localization of internet images on the YFCC100M dataset and indoor images on the SUN3D dataset, for robust localization on the Aachen day-night benchmark and for 3D reconstruction in challenging conditions using the LTLL historical image data.

12.2nd Place Solution to Instance Segmentation of IJCAI 3D AI Challenge 2020 ⬇️

Compared with MS-COCO, the dataset for the competition has a larger proportion of large objects whose area is greater than 96x96 pixels. As getting fine boundaries is vitally important for large object segmentation, Mask R-CNN with PointRend is selected as the base segmentation framework to output high-quality object boundaries. Besides, a better engine that integrates ResNeSt, FPN and DCNv2, and a range of effective tricks including multi-scale training and test-time augmentation are applied to improve segmentation performance. Our best performance is an ensemble of four models (three PointRend-based models and SOLOv2), which won 2nd place in IJCAI-PRICAI 3D AI Challenge 2020: Instance Segmentation.

13.Deep learning based registration using spatial gradients and noisy segmentation labels ⬇️

Image registration is one of the most challenging problems in medical image analysis. In recent years, deep learning based approaches have become quite popular, providing fast and well-performing registration strategies. In this short paper, we summarise our work presented at the Learn2Reg 2020 challenge. The main contributions of our work rely on (i) a symmetric formulation, predicting the transformations from source to target and from target to source simultaneously, enforcing the trained representations to be similar, and (ii) integration of a variety of publicly available datasets used both for pretraining and for augmenting segmentation labels. Our method reports a mean Dice of $0.64$ for task 3 and $0.85$ for task 4 on the test sets, taking third place on the challenge. Our code and models are publicly available at this https URL and https://github.com/TheoEst/hippocampus_registration.

14.LCD -- Line Clustering and Description for Place Recognition ⬇️

Current research on visual place recognition mostly focuses on aggregating local visual features of an image into a single vector representation. Therefore, high-level information such as the geometric arrangement of the features is typically lost. In this paper, we introduce a novel learning-based approach to place recognition, using RGB-D cameras and line clusters as visual and geometric features. We state the place recognition problem as a problem of recognizing clusters of lines instead of individual patches, thus maintaining structural information. In our work, line clusters are defined as lines that make up individual objects, hence our place recognition approach can be understood as object recognition. 3D line segments are detected in RGB-D images using state-of-the-art techniques. We present a neural network architecture based on the attention mechanism for frame-wise line clustering. A similar neural network is used for the description of these clusters with a compact embedding of 128 floating point numbers, trained with triplet loss on training data obtained from the InteriorNet dataset. We show experiments on a large number of indoor scenes and compare our method with the bag-of-words image-retrieval approach using SIFT and SuperPoint features and the global descriptor NetVLAD. Trained only on synthetic data, our approach generalizes well to real-world data captured with Kinect sensors, while also providing information about the geometric arrangement of instances.

15.A Short Note on the Kinetics-700-2020 Human Action Dataset ⬇️

We describe the 2020 edition of the DeepMind Kinetics human action dataset, which replenishes and extends the Kinetics-700 dataset. In this new version, there are at least 700 video clips from different YouTube videos for each of the 700 classes. This paper details the changes introduced for this new release of the dataset and includes a comprehensive set of statistics as well as baseline results using the I3D network.

16.MonoComb: A Sparse-to-Dense Combination Approach for Monocular Scene Flow ⬇️

Contrary to the ongoing trend in automotive applications towards using more, and more diverse, sensors, this work tries to solve the complex scene flow problem under a monocular camera setup, i.e. using a single sensor. Towards this end, we exploit the latest achievements in single image depth estimation, optical flow and sparse-to-dense interpolation and propose a monocular combination approach (MonoComb) to compute dense scene flow. MonoComb uses optical flow to relate reconstructed 3D positions over time and interpolates occluded areas. This way, existing monocular methods are outperformed in dynamic foreground regions, which leads to the second best result among the competitors on the challenging KITTI 2015 scene flow benchmark.
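
A hedged sketch of the combination geometry (the camera intrinsics and numbers below are made up, and MonoComb's actual pipeline additionally interpolates occluded areas): back-project matched pixels in both frames using per-frame depth, then take the 3D difference as scene flow.

```python
import numpy as np

K = np.array([[721.5, 0.0, 609.5],
              [0.0, 721.5, 172.8],
              [0.0,   0.0,   1.0]])        # example KITTI-like intrinsics
K_inv = np.linalg.inv(K)

def backproject(u, v, depth):
    """Pixel (u, v) with metric depth -> 3D point in camera coordinates."""
    return depth * (K_inv @ np.array([u, v, 1.0]))

def scene_flow(u, v, depth_t, depth_t1, flow_uv):
    p_t = backproject(u, v, depth_t)
    u1, v1 = u + flow_uv[0], v + flow_uv[1]   # correspondence from optical flow
    p_t1 = backproject(u1, v1, depth_t1)
    return p_t1 - p_t                          # 3D motion of the point

# A point at 12 m that moved 5 px to the right and came 0.4 m closer:
print(scene_flow(500.0, 180.0, 12.0, 11.6, (5.0, 0.0)))
```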

17.UFO$^2$: A Unified Framework towards Omni-supervised Object Detection ⬇️

Existing work on object detection often relies on a single form of annotation: the model is trained using either accurate yet costly bounding boxes or cheaper but less expressive image-level tags. However, real-world annotations are often diverse in form, which challenges these existing works. In this paper, we present UFO$^2$, a unified object detection framework that can handle different forms of supervision simultaneously. Specifically, UFO$^2$ incorporates strong supervision (e.g., boxes), various forms of partial supervision (e.g., class tags, points, and scribbles), and unlabeled data. Through rigorous evaluations, we demonstrate that each form of label can be utilized to either train a model from scratch or to further improve a pre-trained model. We also use UFO$^2$ to investigate budget-aware omni-supervised learning, i.e., various annotation policies are studied under a fixed annotation budget: we show that competitive performance needs no strong labels for all data. Finally, we demonstrate the generalization of UFO$^2$, detecting more than 1,000 different objects without bounding box annotations.

18.Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies ⬇️

Many recent datasets contain a variety of different data modalities, for instance, image, question, and answer data in visual question answering (VQA). When training deep net classifiers on those multi-modal datasets, the modalities get exploited at different scales, i.e., some modalities can more easily contribute to the classification results than others. This is suboptimal because the classifier is inherently biased towards a subset of the modalities. To alleviate this shortcoming, we propose a novel regularization term based on the functional entropy. Intuitively, this term encourages balancing the contribution of each modality to the classification result. However, regularization with the functional entropy is challenging. To address this, we develop a method based on the log-Sobolev inequality, which bounds the functional entropy with the functional-Fisher-information. Intuitively, this maximizes the amount of information that the modalities contribute. On the two challenging multi-modal datasets VQA-CPv2 and SocialIQ, we obtain state-of-the-art results while more uniformly exploiting the modalities. In addition, we demonstrate the efficacy of our method on Colored MNIST.
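
For reference, the inequality underlying this bound, stated in its textbook form for the standard Gaussian measure $\mu$ and a positive function $f$ (the paper's precise multi-modal variant may differ in detail):

```latex
% Log-Sobolev bound: functional entropy <= (1/2) x functional Fisher information
\operatorname{Ent}_{\mu}(f)
  = \int f \log f \, d\mu
  - \left( \int f \, d\mu \right) \log \left( \int f \, d\mu \right)
  \le \frac{1}{2} \int \frac{\lVert \nabla f \rVert^{2}}{f} \, d\mu
  = \frac{1}{2}\, \mathcal{I}_{\mu}(f)
```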

19.Dense Dual-Path Network for Real-time Semantic Segmentation ⬇️

Semantic segmentation has achieved remarkable results with high computational cost and a large number of parameters. However, real-world applications require efficient inference speed on embedded devices. Most previous works address the challenge by reducing the depth, width and layer capacity of the network, which leads to poor performance. In this paper, we introduce a novel Dense Dual-Path Network (DDPNet) for real-time semantic segmentation under resource constraints. We design a light-weight and powerful backbone with dense connectivity to facilitate feature reuse throughout the whole network, and the proposed Dual-Path module (DPM) to sufficiently aggregate multi-scale contexts. Meanwhile, a simple and effective framework is built with a skip architecture utilizing the high-resolution feature maps to refine the segmentation output and an upsampling module leveraging context information from the feature maps to refine the heatmaps. The proposed DDPNet shows an obvious advantage in balancing accuracy and speed. Specifically, on the Cityscapes test dataset, DDPNet achieves 75.3% mIoU at 52.6 FPS for an input of 1024 x 2048 resolution on a single GTX 1080Ti card. Compared with other state-of-the-art methods, DDPNet achieves significantly better accuracy with a comparable speed and fewer parameters.

20.Semantics-Guided Representation Learning with Applications to Visual Synthesis ⬇️

Learning interpretable and interpolatable latent representations has been an emerging research direction, allowing researchers to understand and utilize the derived latent space for further applications such as visual synthesis or recognition. While most existing approaches derive an interpolatable latent space and induce smooth transitions in image appearance, it is still not clear how to obtain representations which contain the semantic information of interest. In this paper, we aim to learn meaningful representations and simultaneously perform semantic-oriented and visually-smooth interpolation. To this end, we propose an angular triplet-neighbor loss (ATNL) that enables learning a latent representation whose distribution matches the semantic information of interest. With the latent space guided by ATNL, we further utilize spherical semantic interpolation for generating semantic warping of images, allowing synthesis of desirable visual data. Experiments on the MNIST and CMU Multi-PIE datasets qualitatively and quantitatively verify the effectiveness of our method.

21.Towards Real-time Drowsiness Detection for Elderly Care ⬇️

The primary focus of this paper is to produce a proof of concept for extracting drowsiness information from videos to help elderly people living on their own. To quantify yawning, eyelid and head movement over time, we extracted 3000 images from captured videos for training and testing of deep learning models integrated with the OpenCV library. The achieved classification accuracies for eyelid and mouth open/close status were between 94.3% and 97.2%. Visual inspection of head movement from videos with generated 3D coordinate overlays indicated clear spatiotemporal patterns in the collected data (yaw, roll and pitch). The extraction methodology for the drowsiness information as time series is applicable to other contexts, including support for prior work in privacy-preserving augmented coaching, sport rehabilitation and integration with big data platforms in healthcare.

22.Reinforcement learning using Deep Q Networks and Q learning accurately localizes brain tumors on MRI with very small training sets ⬇️

Purpose: Supervised deep learning in radiology suffers from notorious inherent limitations: 1) it requires large, hand-annotated data sets; 2) it is non-generalizable; and 3) it lacks explainability and intuition. We have recently proposed reinforcement learning to address all three. However, we applied it to images with radiologist eye tracking points, which limits the state-action space. Here we generalize Deep Q Learning to a gridworld-based environment, so that only the images and image masks are required.
Materials and Methods: We trained a Deep Q network on 30 two-dimensional image slices from the BraTS brain tumor database. Each image contained one lesion. We then tested the trained Deep Q network on a separate set of 30 testing images. For comparison, we also trained and tested a keypoint detection supervised deep learning network on the same set of training / testing images.
Results: Whereas the supervised approach quickly overfit the training data and predictably performed poorly on the testing set (11% accuracy), the Deep Q learning approach showed progressively improved generalizability to the testing set over training time, reaching 70% accuracy.
Conclusion: We have shown a proof-of-principle application of reinforcement learning to radiological images, here using 2D contrast-enhanced MRI brain images with the goal of localizing brain tumors. This represents a generalization of recent work to a gridworld setting, naturally suitable for analyzing medical images.
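
To make the gridworld framing concrete, here is a toy tabular Q-learning analogue (the grid size, rewards and "lesion" cell are invented for illustration; the paper trains a Deep Q network on BraTS image slices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, GOAL = 8, (5, 2)                            # 8x8 grid, "lesion" at row 5, col 2
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
Q = np.zeros((N, N, 4))

for episode in range(2000):
    s = (rng.integers(N), rng.integers(N))
    for _ in range(50):
        # epsilon-greedy action selection
        a = rng.integers(4) if rng.random() < 0.1 else int(np.argmax(Q[s]))
        nxt = (min(max(s[0] + ACTIONS[a][0], 0), N - 1),
               min(max(s[1] + ACTIONS[a][1], 0), N - 1))
        r = 1.0 if nxt == GOAL else -0.01      # reward only on the lesion cell
        Q[s][a] += 0.1 * (r + 0.95 * Q[nxt].max() - Q[s][a])
        s = nxt
        if s == GOAL:
            break

# Greedy rollout from a corner should end on the lesion cell.
s = (0, 7)
for _ in range(2 * N):
    a = int(np.argmax(Q[s]))
    s = (min(max(s[0] + ACTIONS[a][0], 0), N - 1),
         min(max(s[1] + ACTIONS[a][1], 0), N - 1))
print("reached:", s, "goal:", GOAL)
```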

23.ApproxDet: Content and Contention-Aware Approximate Object Detection for Mobiles ⬇️

Advanced video analytic systems, including scene classification and object detection, have seen widespread success in various domains such as smart cities and autonomous transportation. With an ever-growing number of powerful client devices, there is incentive to move these heavy video analytics workloads from the cloud to mobile devices to achieve low latency and real-time processing and to preserve user privacy. However, most video analytic systems are heavyweight and are trained offline with some pre-defined latency or accuracy requirements. This makes them unable to adapt at runtime in the face of three types of dynamism -- the input video characteristics change, the amount of compute resources available on the node changes due to co-located applications, and the user's latency-accuracy requirements change. In this paper we introduce ApproxDet, an adaptive video object detection framework for mobile devices to meet accuracy-latency requirements in the face of changing content and resource contention scenarios. To achieve this, we introduce a multi-branch object detection kernel (layered on Faster R-CNN), which incorporates a data-driven modeling approach on the performance metrics, and a latency SLA-driven scheduler to pick the best execution branch at runtime. We couple this kernel with approximable video object tracking algorithms to create an end-to-end video object detection system. We evaluate ApproxDet on a large benchmark video dataset and compare quantitatively to AdaScale and YOLOv3. We find that ApproxDet is able to adapt to a wide variety of contention and content characteristics and outshines all baselines, e.g., it achieves 52% lower latency and 11.1% higher accuracy over YOLOv3.

24.Underwater Image Color Correction by Complementary Adaptation ⬇️

In this paper, we propose a novel approach for underwater image color correction based on a Tikhonov type optimization model in the CIELAB color space. It presents a new variational interpretation of the complementary adaptation theory in psychophysics, which establishes the connection between colorimetric notions and the color constancy of the human visual system (HVS). Understood as a long-term adaptive process, our method effectively removes the underwater color cast and yields a balanced color distribution. For visualization purposes, we enhance the image contrast by properly rescaling both lightness and chroma without trespassing the CIELAB gamut. The magnitude of the enhancement is hue-selective and image-based, thus our method is robust for different underwater imaging environments. To improve the uniformity of CIELAB, we include an approximate hue-linearization as pre-processing and an inverse transform of the Helmholtz-Kohlrausch effect as post-processing. We analyze and validate the proposed model by various numerical experiments. Based on image quality metrics designed for underwater conditions, we compare with some state-of-the-art approaches to show that the proposed method has consistently superior performance.

25.Mutual-Supervised Feature Modulation Network for Occluded Pedestrian Detection ⬇️

State-of-the-art pedestrian detectors have achieved significant progress on non-occluded pedestrians, yet they still struggle under heavy occlusion. The recent occlusion handling strategy of popular two-stage approaches is to build a two-branch architecture with the help of additional visible body annotations. Nonetheless, these methods still have some weaknesses: either the two branches are trained independently with only score-level fusion, which cannot guarantee that the detectors learn sufficiently robust pedestrian features, or attention mechanisms are exploited to emphasize only the visible body features. However, the visible body features of heavily occluded pedestrians are concentrated in a relatively small area, which easily causes missed detections. To address the above issues, we propose in this paper a novel Mutual-Supervised Feature Modulation (MSFM) network to better handle occluded pedestrian detection. The key MSFM module in our network calculates the similarity loss of full body boxes and visible body boxes corresponding to the same pedestrian, so that the full-body detector can learn more complete and robust pedestrian features with the assistance of contextual features from the occluding parts. To facilitate the MSFM module, we also propose a novel two-branch architecture, consisting of a standard full body detection branch and an extra visible body classification branch. These two branches are trained in a mutual-supervised way with full body annotations and visible body annotations, respectively. To verify the effectiveness of our proposed method, extensive experiments are conducted on two challenging pedestrian datasets: Caltech and CityPersons. Our approach achieves superior performance compared to other state-of-the-art methods on both datasets, especially in the heavy occlusion case.

26.SCOP: Scientific Control for Reliable Neural Network Pruning ⬇️

This paper proposes a reliable neural network pruning algorithm by setting up a scientific control. Existing pruning methods have developed various hypotheses to approximate the importance of filters to the network and then execute filter pruning accordingly. To increase the reliability of the results, we prefer to have a more rigorous research design by including a scientific control group as an essential part to minimize the effect of all factors except the association between the filter and expected network output. Acting as a control group, knockoff features are generated to mimic the feature maps produced by the network filters, but they are conditionally independent of the example label given the real feature maps. We theoretically suggest that the knockoff condition can be approximately preserved given the information propagation of network layers. Besides the real feature map on an intermediate layer, the corresponding knockoff feature is brought in as another auxiliary input signal for the subsequent layers. Redundant filters can be discovered in the adversarial process of different features. Through experiments, we demonstrate the superiority of the proposed algorithm over state-of-the-art methods. For example, our method can reduce 57.8% of the parameters and 60.2% of the FLOPs of ResNet-101 with only 0.01% top-1 accuracy loss on ImageNet.

27.High-Capacity Complex Convolutional Neural Networks For I/Q Modulation Classification ⬇️

I/Q modulation classification is a unique pattern recognition problem as the data for each class varies in quality, quantified by signal to noise ratio (SNR), and has structure in the complex-plane. Previous work shows treating these samples as complex-valued signals and computing complex-valued convolutions within deep learning frameworks significantly increases the performance over comparable shallow CNN architectures. In this work, we claim state of the art performance by enabling high-capacity architectures containing residual and/or dense connections to compute complex-valued convolutions, with peak classification accuracy of 92.4% on a benchmark classification problem, the RadioML 2016.10a dataset. We show statistically significant improvements in all networks with complex convolutions for I/Q modulation classification. Complexity and inference speed analyses show models with complex convolutions substantially outperform architectures with a comparable number of parameters and comparable speed by over 10% in each case.
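
A complex convolution of this kind is commonly realized with two real-valued convolutions via (x_r + i x_i) * (w_r + i w_i) = (x_r w_r - x_i w_i) + i (x_r w_i + x_i w_r). A minimal PyTorch sketch of this standard construction (layer sizes are arbitrary; this is not the authors' code):

```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, k, padding=k // 2)  # w_r
        self.conv_i = nn.Conv1d(in_ch, out_ch, k, padding=k // 2)  # w_i

    def forward(self, x_r, x_i):
        out_r = self.conv_r(x_r) - self.conv_i(x_i)   # real part
        out_i = self.conv_i(x_r) + self.conv_r(x_i)   # imaginary part
        return out_r, out_i

# I/Q samples: batch of 4, I and Q parts, one complex channel, 128 time steps.
iq = torch.randn(4, 2, 1, 128)
conv = ComplexConv1d(1, 16, 5)
out_r, out_i = conv(iq[:, 0], iq[:, 1])
print(out_r.shape, out_i.shape)   # torch.Size([4, 16, 128]) each
```

Residual or dense connections can then be added around such layers exactly as in their real-valued counterparts.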

28.TargetDrop: A Targeted Regularization Method for Convolutional Neural Networks ⬇️

Dropout regularization has been widely used in deep learning but performs less effectively for convolutional neural networks, since the spatially correlated features allow dropped information to still flow through the networks. Some structured forms of dropout have been proposed to address this but are prone to over- or under-regularization, as features are dropped randomly. In this paper, we propose a targeted regularization method named TargetDrop, which incorporates the attention mechanism to drop the discriminative feature units. Specifically, it masks out the target regions of the feature maps corresponding to the target channels. Experimental results compared with other methods, or when applied to different networks, demonstrate the regularization effect of our method.

29.Geometry-based Occlusion-Aware Unsupervised Stereo Matching for Autonomous Driving ⬇️

Recently, many unsupervised learning based stereo matching methods for autonomous driving have emerged. Most of them take advantage of reconstruction losses to remove the dependency on disparity ground truth. Occlusion handling is a challenging problem in stereo matching, especially for unsupervised methods. Previous unsupervised methods failed to take full advantage of geometric properties in occlusion handling. In this paper, we introduce an effective way to detect occlusion regions and propose a novel unsupervised training strategy to deal with occlusion that only uses the predicted left disparity map, by making use of its geometric features in an iterative way. In the training process, we regard the predicted left disparity map as pseudo ground truth and infer occluded regions using geometric features. The resulting occlusion mask is then used in training, post-processing, or both as guidance. Experiments show that our method deals with the occlusion problem effectively and significantly outperforms other unsupervised methods for stereo matching. Moreover, our occlusion-aware strategies can be conveniently extended to other stereo methods and improve their performance.
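
One classic geometric occlusion test of this flavor, sketched per scanline (a hedged illustration; the paper's exact rule and iterative strategy may differ): forward-warp the left disparity to the right view and mark the left pixels that lose the visibility contest.

```python
import numpy as np

def occlusion_mask_row(disp_row):
    """Occlusion mask for one scanline of a left disparity map."""
    w = disp_row.shape[0]
    best_disp = np.full(w, -np.inf)          # nearest surface per right-image pixel
    owner = np.full(w, -1, dtype=int)
    for x in range(w):
        xr = int(round(x - disp_row[x]))     # where this pixel lands in the right image
        if 0 <= xr < w and disp_row[x] > best_disp[xr]:
            best_disp[xr], owner[xr] = disp_row[x], x
    occluded = np.ones(w, dtype=bool)
    occluded[owner[owner >= 0]] = False      # visible pixels own a right-image cell
    return occluded

# A foreground band (disparity 10) in front of a background (disparity 2):
disp = np.full(64, 2.0)
disp[20:30] = 10.0
print(np.flatnonzero(occlusion_mask_row(disp)))
# -> image border pixels plus the background strip hidden behind the band
```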

30.Deep Learning Frameworks for Pavement Distress Classification: A Comparative Analysis ⬇️

Automatic detection and classification of pavement distresses is critical for timely maintenance and rehabilitation of pavement surfaces. With the evolution of deep learning and high performance computing, the feasibility of vision-based pavement defect assessments has significantly improved. In this study, the authors deploy state-of-the-art deep learning algorithms based on different network backbones to detect and characterize pavement distresses. The influence of different backbone models such as CSPDarknet53, Hourglass-104 and EfficientNet was studied to evaluate their classification performance. The models were trained using 21,041 images captured across urban and rural streets of Japan, the Czech Republic and India. Finally, the models were assessed based on their ability to predict and classify distresses, and tested using the F1 score obtained from the statistical precision and recall values. The best performing model achieved F1 scores of 0.58 and 0.57 on the two test datasets released by the IEEE Global Road Damage Detection Challenge. The source code, including the trained models, is made available at [1].

31.Mutual Information Regularized Identity-aware Facial Expression Recognition in Compressed Video ⬇️

This paper explores a facial expression representation in the compressed video domain from which inter-subject variations are eliminated. Most previous methods process the RGB images of a sequence, while valuable, off-the-shelf expression-related muscle movement is already embedded in the compression format. In the compressed domain, which is smaller by up to two orders of magnitude, we can explicitly infer the expression from the residual frames, and it is possible to extract identity factors from the I frame with a pre-trained face recognition network. By enforcing their marginal independence, the expression feature is expected to be purer for the expression and robust to identity shifts. Specifically, we propose a novel collaborative min-min game for mutual information (MI) minimization in latent space. We do not need the identity label or multiple expression samples from the same person for identity elimination. Moreover, when the apex frame is annotated in the dataset, a complementary constraint can be further added to regularize the feature-level game. In testing, only the compressed residual frames are required to achieve expression prediction. Our solution achieves comparable or better performance than recent decoded image-based methods on typical FER benchmarks, with about 3 times faster inference.

32.ENSURE: Ensemble Stein's Unbiased Risk Estimator for Unsupervised Learning ⬇️

Deep learning accelerates the MR image reconstruction process after offline training of a deep neural network from a large volume of clean and fully sampled data. Unfortunately, fully sampled images may not be available or are difficult to acquire in several application areas such as high-resolution imaging. Previous studies have utilized Stein's Unbiased Risk Estimator (SURE) as a mean square error (MSE) estimate for the image denoising problem. Unrolled reconstruction algorithms, where the denoiser at each iteration is trained using SURE, have also been introduced. Unfortunately, the end-to-end training of a network using SURE remains challenging since the projected SURE loss is a poor approximation to the MSE, especially in the heavily undersampled setting. We propose an ENsemble SURE (ENSURE) approach to train a deep network only from undersampled measurements. In particular, we show that training a network using an ensemble of images, each acquired with a different sampling pattern, can closely approximate the MSE. Our preliminary experimental results show that the proposed ENSURE approach gives comparable reconstruction quality to supervised learning and a recent unsupervised learning method.
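
SURE itself is compact enough to sketch. For a denoiser f and y = x + n with n ~ N(0, sigma^2 I), the estimate below uses the standard Monte-Carlo divergence trick; the toy shrinkage denoiser is an assumption for illustration, and ENSURE's ensemble weighting is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(y, lam=0.9):
    return lam * y            # toy linear shrinkage denoiser; divergence = lam * N

def mc_sure(y, sigma, f, eps=1e-3):
    """SURE = ||f(y) - y||^2 / N - sigma^2 + (2 sigma^2 / N) * div_y f(y),
    with the divergence estimated via a random probe b."""
    n = y.size
    b = rng.standard_normal(y.shape)
    div = b.ravel() @ ((f(y + eps * b) - f(y)).ravel()) / eps
    return np.sum((f(y) - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n

sigma = 0.1
x = np.zeros(4096)                           # unknown clean signal
y = x + sigma * rng.standard_normal(x.shape)
true_mse = np.mean((denoiser(y) - x) ** 2)
print(f"SURE {mc_sure(y, sigma, denoiser):.6f}  vs  true MSE {true_mse:.6f}")
```

The point of SURE is visible in the last line: the estimate tracks the true MSE without ever touching the clean signal x.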

33.Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans ⬇️

Structural magnetic resonance imaging (MRI) has been widely utilized for analysis and diagnosis of brain diseases. Automatic segmentation of brain tumors is a challenging task for computer-aided diagnosis due to low-tissue contrast in the tumor subregions. To overcome this, we devise a novel pixel-wise segmentation framework through a convolutional 3D to 2D MR patch conversion model to predict class labels of the central pixel in the input sliding patches. Precisely, we first extract 3D patches from each modality to calibrate slices through the squeeze and excitation (SE) block. Then, the output of the SE block is fed directly into subsequent bottleneck layers to reduce the number of channels. Finally, the calibrated 2D slices are concatenated to obtain multimodal features through a 2D convolutional neural network (CNN) for prediction of the central pixel. In our architecture, both local inter-slice and global intra-slice features are jointly exploited to predict class label of the central voxel in a given patch through the 2D CNN classifier. We implicitly apply all modalities through trainable parameters to assign weights to the contributions of each sequence for segmentation. Experimental results on the segmentation of brain tumors in multimodal MRI scans (BraTS'19) demonstrate that our proposed method can efficiently segment the tumor regions.

34.Cross-Modal Information Maximization for Medical Imaging: CMIM ⬇️

In hospitals, data are siloed to specific information systems that make the same information available under different modalities such as the different medical imaging exams the patient undergoes (CT scans, MRI, PET, Ultrasound, etc.) and their associated radiology reports. This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
In this paper, we propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time, using recent advances in mutual information maximization. By maximizing cross-modal information at train time, we are able to outperform several state-of-the-art baselines in two different settings, medical image classification, and segmentation. In particular, our method is shown to have a strong impact on the inference-time performance of weaker modalities.

35.American Sign Language Identification Using Hand Trackpoint Analysis ⬇️

Sign Language helps people with speaking and hearing disabilities communicate with others efficiently. Sign language recognition is a challenging area in the field of computer vision, and recent developments have been able to achieve near-perfect results for the task, though some challenges are yet to be solved. In this paper we propose a novel machine learning based pipeline for American Sign Language recognition using hand track points. We convert a hand gesture into a series of hand track point coordinates that serve as an input to our system. In order to make the solution more efficient, we experimented with 28 different combinations of pre-processing techniques, each run on three different machine learning algorithms, namely k-Nearest Neighbours, Random Forests and a Neural Network. Their performance was contrasted to determine the best pre-processing scheme and algorithm pair. Our system achieved an accuracy of 95.66% in recognizing American Sign Language gestures.
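
A hedged sketch of one such landmark pipeline (the synthetic landmarks, templates and normalization are invented stand-ins; 21-point hands in the style of common hand trackers are assumed):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def normalize(pts):
    """pts: (21, 2) landmarks -> centered, scale-normalized feature vector."""
    pts = pts - pts.mean(axis=0)
    return (pts / (np.linalg.norm(pts) + 1e-8)).ravel()

# Synthetic stand-in data: 3 "gestures", each a noisy landmark template.
templates = rng.standard_normal((3, 21, 2))
labels = rng.integers(0, 3, 600)
X = np.array([normalize(templates[c] + 0.05 * rng.standard_normal((21, 2)))
              for c in labels])
Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.25, random_state=0)

# k-Nearest Neighbours, one of the three compared algorithms.
knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
print(f"accuracy: {knn.score(Xte, yte):.3f}")
```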

36.A Survey on Deep Learning and Explainability for Automatic Image-based Medical Report Generation ⬇️

Every year physicians face an increasing demand for image-based diagnosis from patients, a problem that can be addressed with recent artificial intelligence methods. In this context, we survey works in the area of automatic report generation from medical images, with emphasis on methods using deep neural networks, with respect to: (1) Datasets, (2) Architecture Design, (3) Explainability and (4) Evaluation Metrics. Our survey identifies interesting developments, but also remaining challenges. Among them, the current evaluation of generated reports is especially weak, since it mostly relies on traditional Natural Language Processing (NLP) metrics, which do not accurately capture medical correctness.

37.Image-Driven Furniture Style for Interactive 3D Scene Modeling ⬇️

Creating realistic styled spaces is a complex task, which involves design know-how for what furniture pieces go well together. Interior style follows abstract rules involving color, geometry and other visual elements. Following such rules, users manually select similar-style items from large repositories of 3D furniture models, a process which is both laborious and time-consuming. We propose a method for fast-tracking style-similarity tasks, by learning a furniture's style-compatibility from interior scene images. Such images contain more style information than images depicting a single piece of furniture. To understand style, we train a deep learning network on a classification task. Based on image embeddings extracted from our network, we measure stylistic compatibility of furniture. We demonstrate our method with several 3D model style-compatibility results, and with an interactive system for modeling style-consistent scenes.

38.Study of star clusters in the M83 galaxy with a convolutional neural network ⬇️

We present a study of evolutionary and structural parameters of star cluster candidates in the spiral galaxy M83. For this we use a convolutional neural network trained on mock clusters and capable of fast identification and localization of star clusters, as well as inference of their parameters from multi-band images. We use this pipeline to detect 3,380 cluster candidates in Hubble Space Telescope observations. The sample of cluster candidates shows an age gradient across the galaxy's spiral arms, which is in good agreement with predictions of the density wave theory and other studies. As measured from the dust lanes of the spiral arms, the younger population of cluster candidates peaks at the distance of $\sim$0.4 kpc while the older candidates are more dispersed, but shifted towards $\gtrsim$0.7 kpc in the leading part of the spiral arms. We find high extinction cluster candidates positioned in the trailing part of the spiral arms, close to the dust lanes. We also find a large number of dense older clusters near the center of the galaxy and a slight increase of the typical cluster size further from the center.

39.Anatomically-Informed Deep Learning on Contrast-Enhanced Cardiac MRI for Scar Segmentation and Clinical Feature Extraction ⬇️

Many cardiac diseases are associated with structural remodeling of the myocardium. Cardiac magnetic resonance (CMR) imaging with contrast enhancement, such as late gadolinium enhancement (LGE), has unparalleled capability to visualize fibrotic tissue remodeling, allowing for direct characterization of the pathophysiological abnormalities leading to arrhythmias and sudden cardiac death (SCD). Automating segmentation of the ventricles with fibrosis distribution could dramatically enhance the utility of LGE-CMR in heart disease clinical research and in the management of patients with risk of arrhythmias and SCD. Here we describe an anatomically-informed deep learning (DL) approach to myocardium and scar segmentation and clinical feature extraction from LGE-CMR images. The technology enables clinical use by ensuring anatomical accuracy and complete automation. Algorithm performance is strong for both myocardium segmentation (98% accuracy and 0.79 Dice score in a hold-out test set) and evaluation measures shown to correlate with heart disease, such as scar amount (6.3% relative error). Our approach for clinical feature extraction, which satisfies highly complex geometric constraints without stunting the learning process, has the potential of broad applicability in computer vision beyond cardiology, and even outside of medicine.

40.Learning Curves for Analysis of Deep Networks ⬇️

A learning curve models a classifier's test error as a function of the number of training samples. Prior works show that learning curves can be used to select model parameters and extrapolate performance. We investigate how to use learning curves to analyze the impact of design choices, such as pre-training, architecture, and data augmentation. We propose a method to robustly estimate learning curves, abstract their parameters into error and data-reliance, and evaluate the effectiveness of different parameterizations. We also provide several interesting observations based on learning curves for a variety of image classification models.

41.A Wigner-Eckart Theorem for Group Equivariant Convolution Kernels ⬇️

Group equivariant convolutional networks (GCNNs) endow classical convolutional networks with additional symmetry priors, which can lead to a considerably improved performance. Recent advances in the theoretical description of GCNNs revealed that such models can generally be understood as performing convolutions with G-steerable kernels, that is, kernels that satisfy an equivariance constraint themselves. While the G-steerability constraint has been derived, it has to date only been solved for specific use cases - a general characterization of G-steerable kernel spaces is still missing. This work provides such a characterization for the practically relevant case of G being any compact group. Our investigation is motivated by a striking analogy between the constraints underlying steerable kernels on the one hand and spherical tensor operators from quantum mechanics on the other hand. By generalizing the famous Wigner-Eckart theorem for spherical tensor operators, we prove that steerable kernel spaces are fully understood and parameterized in terms of 1) generalized reduced matrix elements, 2) Clebsch-Gordan coefficients, and 3) harmonic basis functions on homogeneous spaces.

42.DiSCO: Differentiable Scan Context with Orientation ⬇️

Global localization is essential for robot navigation, of which the first step is to retrieve a query from the map database. This problem is called place recognition. In recent years, LiDAR scan based place recognition has drawn attention as it is robust against environmental change. In this paper, we propose a LiDAR-based place recognition method, named Differentiable Scan Context with Orientation (DiSCO), which simultaneously finds the scan at a similar place and estimates their relative orientation. The orientation can further be used as the initial value for the downstream local optimal metric pose estimation, improving the pose estimation especially when a large orientation difference exists between the current scan and the retrieved scan. Our key idea is to transform the feature learning into the frequency domain. We utilize the magnitude of the spectrum as the place signature, which is theoretically rotation-invariant. In addition, based on the differentiable phase correlation, we can efficiently estimate the globally optimal relative orientation using the spectrum. With such structural constraints, the network can be learned in an end-to-end manner, and the backbone is fully shared by the two tasks, achieving interpretability and light weight. Finally, DiSCO is validated on the NCLT and Oxford datasets with long-term outdoor conditions, showing better performance than the compared methods.
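
Both frequency-domain ideas can be demonstrated in a few lines on a 1D yaw signal (a simplification; DiSCO operates on a learned 2D representation): the DFT magnitude is invariant to circular shifts (rotations), and phase correlation recovers the shift.

```python
import numpy as np

rng = np.random.default_rng(0)
yaw_bins = 360
scan = rng.random(yaw_bins)                  # toy per-yaw-bin descriptor
rotated = np.roll(scan, 57)                  # the same place, rotated by 57 bins

F1, F2 = np.fft.fft(scan), np.fft.fft(rotated)

# 1) Rotation-invariant place signature: spectrum magnitude.
print(np.allclose(np.abs(F1), np.abs(F2)))   # True

# 2) Relative orientation via phase correlation.
cross = np.conj(F1) * F2
cross /= np.abs(cross) + 1e-12               # keep only the phase
shift = int(np.argmax(np.fft.ifft(cross).real))
print(shift)                                  # 57
```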

43.Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning ⬇️

Visual navigation is essential for many applications in robotics, from manipulation, through mobile robotics to automated driving. Deep reinforcement learning (DRL) provides an elegant map-free approach integrating image processing, localization, and planning in one module, which can be trained and therefore optimized for a given environment. However, to date, DRL-based visual navigation was validated exclusively in simulation, where the simulator provides information that is not available in the real world, e.g., the robot's position or image segmentation masks. This precludes the use of the learned policy on a real robot. Therefore, we propose a novel approach that enables a direct deployment of the trained policy on real robots. We have designed visual auxiliary tasks, a tailored reward scheme, and a new powerful simulator to facilitate domain randomization. The policy is fine-tuned on images collected from real-world environments. We have evaluated the method on a mobile robot in a real office environment. The training took ~30 hours on a single GPU. In 30 navigation experiments, the robot reached a 0.3-meter neighborhood of the goal in more than 86.7% of cases. This result makes the proposed method directly applicable to tasks like mobile manipulation.

44.Learning Integrodifferential Models for Image Denoising ⬇️

We introduce an integrodifferential extension of the edge-enhancing anisotropic diffusion model for image denoising. By accumulating weighted structural information on multiple scales, our model is the first to create anisotropy through multiscale integration. It follows the philosophy of combining the advantages of model-based and data-driven approaches within compact, insightful, and mathematically well-founded models with improved performance. We explore trained results of scale-adaptive weighting and contrast parameters to obtain an explicit modelling by smooth functions. This leads to a transparent model with only three parameters, without significantly decreasing its denoising performance. Experiments demonstrate that it outperforms its diffusion-based predecessors. We show that both multiscale information and anisotropy are crucial for its success.

45.Probabilistic Numeric Convolutional Neural Networks ⬇️

Continuous input signals like images and time series that are irregularly sampled or have missing values are challenging for existing deep learning methods. Coherently defined feature representations must depend on the values in unobserved regions of the input. Drawing from the work in probabilistic numerics, we propose Probabilistic Numeric Convolutional Neural Networks which represent features as Gaussian processes (GPs), providing a probabilistic description of discretization error. We then define a convolutional layer as the evolution of a PDE defined on this GP, followed by a nonlinearity. This approach also naturally admits steerable equivariant convolutions under e.g. the rotation group. In experiments we show that our approach yields a $3\times$ reduction of error from the previous state of the art on the SuperPixel-MNIST dataset and competitive performance on the medical time series dataset PhysioNet2012.

46.Recurrent neural network-based volumetric fluorescence microscopy ⬇️

Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields including the physical, medical and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images that are sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to increase the depth-of-field of a 63x/1.4NA objective lens by approximately 50-fold, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrate the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images, covering various axial permutations and unknown axial positioning errors. Recurrent-MZ demonstrates the first application of recurrent neural networks in microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.

47.Boosting Gradient for White-Box Adversarial Attacks ⬇️

Deep neural networks (DNNs) are playing key roles in various artificial intelligence applications such as image classification and object recognition. However, a growing number of studies have shown that there exist adversarial examples in DNNs, which are almost imperceptibly different from original samples, but can greatly change the network output. Existing white-box attack algorithms can generate powerful adversarial examples. Nevertheless, most of the algorithms concentrate on how to iteratively make the best use of gradients to improve adversarial performance. In contrast, in this paper, we focus on the properties of the widely-used ReLU activation function, and discover that there exist two phenomena (i.e., wrong blocking and over transmission) misleading the calculation of gradients in ReLU during the backpropagation. Both issues enlarge the difference between the changes of the loss function predicted from gradients and the corresponding actual changes, and mislead the gradients, which results in larger perturbations. Therefore, we propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient-based white-box attack algorithms. During the backpropagation of the network, our approach calculates the gradient of the loss function versus the network input, maps the values to scores, and selects a part of them to update the misleading gradients. Comprehensive experimental results on ImageNet demonstrate that our ADV-ReLU can be easily integrated into many state-of-the-art gradient-based white-box attack algorithms, as well as transferred to black-box attackers, to further decrease perturbations in the $\ell_2$-norm.
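
For context, here is a minimal FGSM-style white-box step of the generic kind ADV-ReLU is designed to plug into (the toy classifier and epsilon are invented; this is the baseline, not ADV-ReLU's gradient correction):

```python
import torch
import torch.nn as nn

# Toy classifier with a ReLU, the activation whose gradient ADV-ReLU adjusts.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64),
                      nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps=0.03):
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)     # gradient of loss w.r.t. the input
    return (x + eps * grad.sign()).clamp(0, 1).detach()

x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm(x, y)
print((x_adv - x).abs().max())   # perturbation bounded by eps
```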

48.A Coarse-To-Fine (C2F) Representation for End-To-End 6-DoF Grasp Detection ⬇️

We propose an end-to-end grasp detection network, the Grasp Detection Network (GDN), coupled with a novel coarse-to-fine (C2F) grasp representation design to detect diverse and accurate 6-DoF grasps based on point clouds. Compared to previous two-stage approaches which sample and evaluate multiple grasp candidates, our architecture is at least 20 times faster. It is also 8% and 40% more accurate in terms of the success rate in single object scenes and the completion rate in clutter scenes, respectively. Our method shows superior results among settings with different numbers of views and input points. Moreover, we propose a new AP-based metric which considers both rotation and translation errors, making it a more comprehensive evaluation tool for grasp detection models.

49.Exploring Overcomplete Representations for Single Image Deraining using CNNs ⬇️

Removal of rain streaks from a single image is an extremely challenging problem since the rainy images often contain rain streaks of different size, shape, direction and density. Most recent methods for deraining use a deep network following a generic "encoder-decoder" architecture which captures low-level features across the initial layers and high-level features in the deeper layers. For the task of deraining, the rain streaks which are to be removed are relatively small and focusing much on global features is not an efficient way to solve the problem. To this end, we propose using an overcomplete convolutional network architecture which gives special attention in learning local structures by restraining the receptive field of filters. We combine it with U-Net so that it does not lose out on the global structures as well while focusing more on low-level features, to compute the derained image. The proposed network called, Over-and-Under Complete Deraining Network (OUCD), consists of two branches: overcomplete branch which is confined to small receptive field size in order to focus on the local structures and an undercomplete branch that has larger receptive fields to primarily focus on global structures. Extensive experiments on synthetic and real datasets demonstrate that the proposed method achieves significant improvements over the recent state-of-the-art methods.

50.Towards End-to-End In-Image Neural Machine Translation ⬇️

In this paper, we offer a preliminary investigation into the task of in-image machine translation: transforming an image containing text in one language into an image containing the same text in another language. We propose an end-to-end neural model for this task inspired by recent approaches to neural machine translation, and demonstrate promising initial results based purely on pixel-level supervision. We then offer a quantitative and qualitative evaluation of our system outputs and discuss some common failure modes. Finally, we conclude with directions for future work.

51.Incandescent Bulb and LED Brake Lights: Novel Analysis of Reaction Times ⬇️

Rear-end collisions account for around 8% of all vehicle crashes in the UK, with the failure to notice or react to a brake light signal being a major contributory cause. Meanwhile, traditional incandescent brake light bulbs on vehicles are increasingly being replaced by a profusion of designs featuring LEDs. In this paper, we investigate the efficacy of brake light design using a novel approach to recording subject reaction times in a simulation setting with physical brake light assemblies. The reaction times of 22 subjects were measured for ten pairs of LED and incandescent bulb brake lights. Three events were investigated for each subject, namely the latency from brake light activation to accelerator release (BrakeAcc), the latency from accelerator release to brake pedal depression (AccPdl), and the cumulative time from light activation to brake pedal depression (BrakePdl). To our knowledge, this is the first study in which reaction times have been split into BrakeAcc and AccPdl. Results indicate that the two brake lights containing incandescent bulbs led to significantly slower reaction times compared to the eight tested LED lights. BrakeAcc results also show that experienced subjects were quicker to respond to the activation of brake lights by releasing the accelerator pedal. Interestingly, the analysis also revealed that the type of brake light influenced the AccPdl time, although experienced subjects did not always act more quickly than inexperienced subjects. Overall, the study found that different designs of brake light can significantly influence driver response times.