ArXiv cs.CV --Mon, 19 Nov 2018

1.GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism pdf

GPipe is a scalable pipeline parallelism library that enables the training of giant deep neural networks. It partitions network layers across accelerators and pipelines execution to achieve high hardware utilization, and it leverages recomputation to minimize activation memory usage. For example, using partitions over 8 accelerators, it is able to train networks that are 25x larger, demonstrating its scalability. It also guarantees that the computed gradients remain consistent regardless of the number of partitions, and it achieves an almost linear speedup without any change to the model parameters: with 4x more accelerators, training the same model is up to 3.5x faster. We train a 557-million-parameter AmoebaNet model and achieve a new state-of-the-art 84.3% top-1 / 97.0% top-5 accuracy on ImageNet. Finally, we use this learned model as an initialization for training on 7 different popular image classification datasets and obtain results that exceed the best published ones on 5 of them, including pushing CIFAR-10 accuracy to 99% and CIFAR-100 accuracy to 91.3%.
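
A minimal sketch of the pipelined micro-batch schedule described above. This is a conceptual, single-process simulation rather than GPipe's actual API: the partition count, stage functions, and micro-batch split below are placeholders, and recomputation is not shown.

```python
import numpy as np

def make_stage(in_dim, out_dim, rng):
    """A hypothetical pipeline stage: one dense layer with ReLU."""
    w = rng.standard_normal((in_dim, out_dim)) * 0.01
    return lambda x: np.maximum(x @ w, 0.0)

rng = np.random.default_rng(0)
stages = [make_stage(32, 32, rng) for _ in range(4)]    # 4 layer partitions

batch = rng.standard_normal((16, 32))
micro_batches = np.split(batch, 4)                      # split the mini-batch into micro-batches

# Pipelined schedule: at clock step t, partition p works on micro-batch t - p,
# so different partitions process different micro-batches at the same time.
num_steps = len(micro_batches) + len(stages) - 1
buffers = [list(micro_batches)] + [[] for _ in stages]
for t in range(num_steps):
    for p in reversed(range(len(stages))):
        if buffers[p]:
            buffers[p + 1].append(stages[p](buffers[p].pop(0)))

outputs = np.concatenate(buffers[-1], axis=0)           # same result as a plain forward pass
print(outputs.shape)                                    # (16, 32)
```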

2.Automatic Paper Summary Generation from Visual and Textual Information pdf

Due to the recent boom in artificial intelligence (AI) research, including computer vision (CV), it has become impossible for researchers in these fields to keep up with the exponentially increasing number of manuscripts. In response, this paper proposes the paper summary generation (PSG) task and a simple but effective method to automatically generate an academic paper summary from raw PDF data. We realize PSG by combining a vision-based supervised component detector with a language-based unsupervised important-sentence extractor, applicable to any manuscript format the detector has been trained on. We report a quantitative evaluation of the vision-based component extraction and a qualitative evaluation showing that our system extracts both visual items and sentences that are helpful for understanding. Summaries produced by our PSG for the 979 manuscripts accepted at the Conference on Computer Vision and Pattern Recognition (CVPR) 2018 are made available. We believe the proposed method will provide a better way for researchers to keep up with important academic papers.

3.Mode Variational LSTM Robust to Unseen Modes of Variation: Application to Facial Expression Recognition pdf

Spatio-temporal feature encoding is essential for capturing the dynamics in video sequences. Recurrent neural networks, particularly long short-term memory (LSTM) units, have been popular as an efficient tool for encoding spatio-temporal features in sequences. In this work, we investigate the effect of mode variations on the spatio-temporal features encoded by LSTMs. We show that the LSTM retains information related to the mode variation in the sequence, which is irrelevant to the task at hand (e.g., classifying facial expressions). In fact, the LSTM forget mechanism is not robust to mode variations and preserves information that can negatively affect the encoded spatio-temporal features. We propose the mode variational LSTM to encode spatio-temporal features that are robust to unseen modes of variation. The mode variational LSTM modifies the original LSTM structure by adding an additional cell state that focuses on encoding the mode variation in the input sequence, together with additional gating functionality that regulates which features are stored in that cell state. The effectiveness of the proposed mode variational LSTM is verified on the facial expression recognition task. Comparative experiments on publicly available datasets show that the proposed mode variational LSTM outperforms existing methods. Moreover, a new dynamic facial expression dataset with different modes of variation, including pose and illumination variations, was collected for a comprehensive evaluation. Experimental results verify that the proposed mode variational LSTM encodes spatio-temporal features robust to unseen modes of variation.
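
A numpy sketch of the structural idea above: a standard LSTM cell augmented with a second cell state and an extra gate intended to absorb mode (e.g., pose or illumination) information. The exact equations of the paper's mode variational LSTM may differ; this only mirrors the abstract's description.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ModeVariationalLSTMCell:
    """Sketch: an LSTM cell with an extra 'mode' cell state and gate (assumed form)."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        z = input_dim + hidden_dim
        # Standard LSTM gates: input, forget, output, candidate.
        self.W = {g: rng.standard_normal((z, hidden_dim)) * 0.1
                  for g in ("i", "f", "o", "g")}
        # Extra gate and candidate for the additional mode cell state.
        self.W["m"] = rng.standard_normal((z, hidden_dim)) * 0.1
        self.W["gm"] = rng.standard_normal((z, hidden_dim)) * 0.1

    def step(self, x, h, c, c_mode):
        z = np.concatenate([x, h], axis=-1)
        i, f, o = (sigmoid(z @ self.W[k]) for k in ("i", "f", "o"))
        g = np.tanh(z @ self.W["g"])
        # The mode gate decides what flows into the mode cell instead of the task cell.
        m = sigmoid(z @ self.W["m"])
        g_mode = np.tanh(z @ self.W["gm"])
        c_mode = f * c_mode + m * g_mode      # mode variation accumulates here
        c = f * c + i * (1.0 - m) * g         # task cell keeps mode-free content
        h = o * np.tanh(c)
        return h, c, c_mode

cell = ModeVariationalLSTMCell(8, 16)
h = c = c_mode = np.zeros(16)
for x in np.random.default_rng(1).standard_normal((5, 8)):   # a 5-step sequence
    h, c, c_mode = cell.step(x, h, c, c_mode)
print(h.shape)   # (16,)
```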

4.Image Pre-processing Using OpenCV Library on MORPH-II Face Database pdf

This paper outlines the steps taken toward pre-processing the 55,134 images of the MORPH-II non-commercial dataset. Following the introduction, section two gives an overview of each step in the pre-processing pipeline. Section three expands upon each stage of the process and includes details on all calculations made, along with the OpenCV functionality used at each step. The last portion of the paper discusses potential improvements to this pre-processing pipeline that became apparent in retrospect.
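
A hedged sketch of a typical OpenCV face pre-processing routine of the kind the paper pairs with each pipeline stage. The concrete MORPH-II steps, cascade file, output size, and equalization choice below are assumptions, not the paper's exact pipeline.

```python
import cv2

def preprocess_face(path, out_size=(200, 240)):
    """Illustrative pre-processing: detect, crop, resize, grayscale, equalize.
    The exact MORPH-II pipeline steps and parameters are assumptions here."""
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Haar cascade face detector shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                       # a real pipeline would log and skip this image
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detection

    face = gray[y:y + h, x:x + w]
    face = cv2.resize(face, out_size, interpolation=cv2.INTER_AREA)
    face = cv2.equalizeHist(face)         # normalize illumination
    return face

# Usage (hypothetical file name):
# out = preprocess_face("morph_example.jpg")
# if out is not None:
#     cv2.imwrite("morph_example_preprocessed.png", out)
```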

5.The Perfect Match: 3D Point Cloud Matching with Smoothed Densities pdf

We propose 3DSmoothNet, a full workflow to match 3D point clouds with a siamese deep learning architecture and fully convolutional layers using a voxelized smoothed density value (SDV) representation. The latter is computed per interest point and aligned to the local reference frame (LRF) to achieve rotation invariance. Our compact, learned, rotation-invariant 3D point cloud descriptor achieves 94.9% average recall on the 3DMatch benchmark data set, outperforming the state of the art by more than 20 percentage points with only 32 output dimensions. This very low output dimensionality allows for near real-time correspondence search, at 0.1 ms per feature point on a standard PC. Our approach is sensor- and scene-agnostic because of the SDV representation, the LRF, and the highly descriptive features learned with fully convolutional layers. We show that 3DSmoothNet trained only on RGB-D indoor scenes of buildings achieves 79.0% average recall on laser scans of outdoor vegetation, more than double the performance of our closest learning-based competitors.
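
A numpy sketch of the smoothed density value idea: a voxel grid centred on an interest point (neighbours assumed already expressed in the point's LRF) where each voxel stores a Gaussian-smoothed count of nearby points rather than a hard occupancy. Grid size, radius, and kernel width below are illustrative assumptions.

```python
import numpy as np

def sdv_grid(points, grid=16, radius=0.15, sigma=None):
    """Smoothed density value voxelization around an interest point.

    `points` are neighbours already expressed in the interest point's local
    reference frame (LRF). Grid size, radius and sigma are illustrative choices.
    """
    sigma = sigma or radius / grid             # smoothing roughly one voxel wide
    # Voxel centre coordinates of a grid x grid x grid cube of side 2 * radius.
    edges = np.linspace(-radius, radius, grid + 1)
    centres = (edges[:-1] + edges[1:]) / 2.0
    cx, cy, cz = np.meshgrid(centres, centres, centres, indexing="ij")
    voxel_centres = np.stack([cx, cy, cz], axis=-1).reshape(-1, 3)

    # Gaussian-smoothed point count per voxel instead of a hard 0/1 occupancy.
    d2 = ((voxel_centres[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2.0 * sigma ** 2)).sum(axis=1)

    density /= density.max() + 1e-12           # normalize for the network input
    return density.reshape(grid, grid, grid)

pts = np.random.default_rng(0).uniform(-0.15, 0.15, size=(500, 3))
print(sdv_grid(pts).shape)   # (16, 16, 16)
```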

6.Residual Convolutional Neural Network Revisited with Active Weighted Mapping pdf

In visual recognition, the key to ResNet's performance improvement is its success in building deep stacks of sequential convolutional layers with identity mappings via shortcut connections. This results in multiple paths of data flow within the network, and these paths are merged with equal weights. However, it is questionable whether it is correct to use fixed, predefined weights at the mapping units of all paths. In this paper, we introduce the active weighted mapping method, which infers proper weight values on the fly from the characteristics of the input data. The weight values of each mapping unit are not fixed but change with the input image, so that the most appropriate weights for each mapping unit are derived for every input. For this purpose, channel-wise information is embedded from both the shortcut connection and the convolutional block, and fully connected layers are then used to estimate the weight values for the mapping units. We train the backbone network and the proposed module alternately for more stable learning. Extensive experiments show that the proposed method works successfully on various backbone architectures, from ResNet to DenseNet. We also verify the superiority and generality of the proposed method on various datasets in comparison with the baselines.
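
A PyTorch sketch of the mechanism described above for one residual unit: channel-wise statistics are pooled from both the shortcut and the convolutional block, passed through fully connected layers, and turned into two input-dependent weights that replace the fixed equal-weight merge. The layer sizes, pooling, and sigmoid choice are assumptions based on the abstract.

```python
import torch
import torch.nn as nn

class ActiveWeightedMerge(nn.Module):
    """Sketch of active weighted mapping for one residual unit (assumed form)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2),
            nn.Sigmoid(),                       # one weight per path, per sample
        )

    def forward(self, shortcut, block_out):
        # Global average pooling -> channel-wise embeddings of both paths.
        s = shortcut.mean(dim=(2, 3))
        b = block_out.mean(dim=(2, 3))
        w = self.fc(torch.cat([s, b], dim=1))   # (N, 2)
        w_s = w[:, 0].view(-1, 1, 1, 1)
        w_b = w[:, 1].view(-1, 1, 1, 1)
        # Replace the fixed equal-weight merge with data-dependent weights.
        return w_s * shortcut + w_b * block_out

merge = ActiveWeightedMerge(channels=64)
x = torch.randn(2, 64, 8, 8)
print(merge(x, x).shape)   # torch.Size([2, 64, 8, 8])
```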

7.Learning Where to Fixate on Foveated Images pdf

Foveation, the ability to sequentially acquire high-acuity regions of a scene viewed initially at low-acuity, is a key property of biological vision systems. In a computer vision system, foveation is also desired to increase data efficiency and derive task-relevant features. Yet, most existing deep learning models lack the ability to foveate. In this paper, we propose a deep reinforcement learning-based foveation model, DRIFT, and apply it to challenging fine-grained classification tasks. Training of DRIFT requires only image-level category labels and encourages fixations to contain discriminative information while maintaining data efficiency. Specifically, we formulate foveation as a sequential decision-making process and train a foveation actor network with a novel Deep Deterministic Policy Gradient by Conditioned Critic and Coaching (DDPGC3) algorithm. In addition, we propose to shape the reward to provide informative feedback after each fixation to better guide the RL training. We demonstrate the effectiveness of our method on five fine-grained classification benchmark datasets, and show that the proposed approach achieves state-of-the-art performance using an order-of-magnitude fewer pixels.

8.Anomaly Detection using Deep Learning based Image Completion pdf

Automated surface inspection is an important task in many manufacturing industries and often requires machine learning driven solutions. Supervised approaches, however, can be challenging, since it is often difficult to obtain large amounts of labeled training data. In this work, we instead perform one-class unsupervised learning on fault-free samples by training a deep convolutional neural network to complete images whose center regions are cut out. Since the network is trained exclusively on fault-free data, it completes the image patches with a fault-free version of the missing image region. The pixel-wise reconstruction error within the cut-out region is an anomaly image which can be used for anomaly detection. Results on surface images of decorated plastic parts demonstrate that this approach is suitable for detecting visible anomalies and, moreover, surpasses all other tested methods.
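
A numpy sketch of the detection rule above: cut out the centre of a patch, let a completion model fill it in, and use the pixel-wise reconstruction error inside the cut-out as the anomaly map. The patch and cut-out sizes, the stub completion function, and the threshold are assumptions; the real method uses a trained completion CNN.

```python
import numpy as np

def anomaly_map(patch, complete_fn, cut=32):
    """Anomaly score = per-pixel reconstruction error inside the cut-out centre."""
    h, w = patch.shape
    y0, x0 = (h - cut) // 2, (w - cut) // 2
    masked = patch.copy()
    masked[y0:y0 + cut, x0:x0 + cut] = 0.0        # remove the centre region
    completed = complete_fn(masked)               # model trained only on fault-free data
    return np.abs(completed - patch)[y0:y0 + cut, x0:x0 + cut]

# Stand-in for the trained completion CNN (assumption): fill with the patch mean.
def dummy_complete(masked):
    return np.full_like(masked, masked.mean())

patch = np.random.default_rng(0).random((96, 96))
err = anomaly_map(patch, dummy_complete)
is_anomalous = err.mean() > 0.25                  # threshold chosen on validation data
print(err.shape, is_anomalous)
```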

9.Improving Fingerprint Pore Detection with a Small FCN pdf

In this work, we investigate whether previously proposed CNNs for fingerprint pore detection overestimate the number of model parameters required for this task. We show that this is indeed the case by proposing a fully convolutional neural network that has significantly fewer parameters. We evaluate this model using a rigorous and reproducible protocol, which was, prior to our work, not available to the community. Using our protocol, we show that the proposed model, when combined with post-processing, performs better than previous methods while being much more efficient. All our code is available at this https URL
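
For illustration, a tiny fully convolutional network of the kind implied above, sketched in PyTorch: a few convolutions that map a fingerprint patch to a same-sized per-pixel pore probability map. The layer widths and depth here are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Sketch of a small fully convolutional pore detector (layer sizes are assumptions):
# fingerprint patch -> per-pixel pore probability map of the same spatial size.
pore_fcn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 1),            # 1x1 conv -> one pore logit per pixel
)

patch = torch.randn(1, 1, 128, 128)
prob_map = torch.sigmoid(pore_fcn(patch))
print(prob_map.shape)               # torch.Size([1, 1, 128, 128])
# Post-processing (e.g. local maxima over the map) would follow to extract pore coordinates.
```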

10.DeRPN: Taking a further step toward more general object detection pdf

Most current detection methods have adopted anchor boxes as regression references. However, detection performance is sensitive to the setting of the anchor boxes, and a proper setting may vary significantly across datasets, which severely limits the universality of the detectors. To improve the adaptivity of detectors, in this paper we present a novel dimension-decomposition region proposal network (DeRPN) that can fully replace the traditional Region Proposal Network (RPN). DeRPN utilizes an anchor string mechanism to independently match object widths and heights, which is better suited to handling objects of varying shapes. In addition, a novel scale-sensitive loss is designed to address the imbalanced loss computation across objects of different scales, preventing small objects from being overwhelmed by larger ones. Comprehensive experiments on both general object detection datasets (Pascal VOC 2007, 2012 and MS COCO) and scene text detection datasets (ICDAR 2013 and COCO-Text) show that our DeRPN significantly outperforms RPN. It is worth mentioning that the proposed DeRPN can be employed directly on different models, tasks, and datasets without any modification of hyperparameters or specialized optimization, which further demonstrates its adaptivity. The code will be released at this https URL.

11.HSCS: Hierarchical Sparsity Based Co-saliency Detection for RGBD Images pdf

Co-saliency detection aims to discover common and salient objects in an image group containing more than two relevant images. Moreover, depth information has been demonstrated to be effective for many computer vision tasks. In this paper, we propose a novel co-saliency detection method for RGBD images based on hierarchical sparsity reconstruction and energy function refinement. With the assistance of the intra saliency map, the inter-image correspondence is formulated as a hierarchical sparsity reconstruction framework. The global sparsity reconstruction model with a ranking scheme focuses on capturing the global characteristics among the whole image group through a common foreground dictionary. The pairwise sparsity reconstruction model aims to explore the corresponding relationship between pairwise images through a set of pairwise dictionaries. In order to improve the intra-image smoothness and inter-image consistency, an energy function refinement model is proposed, which includes the unary data term, spatial smooth term, and holistic consistency term. Experiments on two RGBD co-saliency detection benchmarks demonstrate that the proposed method outperforms the state-of-the-art algorithms both qualitatively and quantitatively.

12.Ground Plane Polling for 6DoF Pose Estimation of Objects on the Road pdf

This paper introduces an approach to produce accurate 3D detection boxes for objects on the ground using single monocular images. We do so by merging 2D visual cues, 3D object dimensions, and ground plane constraints to produce boxes that are robust against small errors and incorrect predictions. First, we train a single-shot convolutional neural network (CNN) that produces multiple visual and geometric cues of interest: 2D bounding boxes, 2D keypoints of interest, coarse object orientations and object dimensions. Subsets of these cues are then used to poll probable ground planes from a pre-computed database of ground planes, to identify the "best fit" plane with highest consensus. Once identified, the "best fit" plane provides enough constraints to successfully construct the desired 3D detection box, without directly predicting the 6DoF pose of the object. The entire ground plane polling (GPP) procedure is constructed as a non-parametrized layer of the CNN that outputs the desired "best fit" plane and the corresponding 3D keypoints, which together define the final 3D bounding box. This single-stage, single-pass CNN results in superior localization compared to more complex and computationally expensive approaches.

13.Detecting The Objects on The Road Using Modular Lightweight Network pdf

This paper presents a modular lightweight network model for detecting road objects, such as cars, pedestrians, and cyclists, especially when they are far from the camera and therefore small. Great advances have been made in deep networks, but small-object detection remains a challenging task. To address this problem, the majority of existing methods use complicated networks or larger image sizes, which generally leads to higher computation cost. The proposed model, referred to as the modular feature fusion detector (MFFD), uses a fast and efficient network architecture for detecting small objects. The contributions are as follows: 1) Two base modules are designed for efficient computation: the Front module reduces information loss from raw input images, and the Tinier module decreases model size and computation cost while preserving detection accuracy. 2) By stacking the base modules, we design a context feature fusion framework for multi-scale object detection. 3) The proposed method is efficient in terms of model size and computation cost, making it applicable to resource-limited devices such as embedded systems for advanced driver assistance systems (ADAS). Comparisons with the state of the art on the challenging KITTI dataset reveal the superiority of the proposed method; in particular, 100 fps can be achieved on embedded GPUs such as the Jetson TX2.

14.Conditional GANs for Multi-Illuminant Color Constancy: Revolution or Yet Another Approach? pdf

Non-uniform and multi-illuminant color constancy are important tasks whose solution would allow discarding information about lighting conditions in an image. Non-uniform illumination and shadows distort the colors of real-world objects and mostly carry no valuable information, so many computer vision and image processing techniques would benefit from automatically discarding this information as a pre-processing step. In this work we propose a novel view on this classical problem via a generative end-to-end algorithm, namely an image-conditioned Generative Adversarial Network. We also demonstrate the potential of this approach for joint shadow detection and removal. Driven by the lack of training data, we render the largest existing shadow removal dataset and make it publicly available. It consists of approximately 6,000 pairs of wide-field-of-view synthetic images with and without shadows.

15.CAN: Composite Appearance Network and a Novel Evaluation Metric for Person Tracking pdf

Tracking multiple people across multiple cameras is an open problem. It is typically divided into two tasks: (i) single-camera tracking (SCT) - identify trajectories in the same scene, and (ii) inter-camera tracking (ICT) - identify trajectories across cameras for real surveillance scenes. Many of the existing methods cater to single camera person tracking, while inter-camera tracking still remains a challenge. In this paper, we propose a tracking method which uses motion cues and a feature aggregation network for template-based person re-identification by incorporating metadata such as person bounding box and camera information. We present an architecture called Composite Appearance Network (CAN) to address the above problem. The key structure of this architecture is a network called EvalNet that pays attention to each feature vector independently and learns to weight them based on gradients it receives for the overall template for optimal re-identification performance. We demonstrate the efficiency of our approach with experiments on the challenging and large-scale multi-camera tracking dataset, DukeMTMC, and by comparing results to their baseline approach. We also survey existing tracking measures and present an online error metric called "Inference Error" (IE) that provides a better estimate of tracking/re-identification error, by treating within-camera and inter-camera errors uniformly.

16.DARCCC: Detecting Adversaries by Reconstruction from Class Conditional Capsules pdf

We present a simple technique that allows capsule models to detect adversarial images. In addition to being trained to classify images, the capsule model is trained to reconstruct the images from the pose parameters and identity of the correct top-level capsule. Adversarial images do not look like a typical member of the predicted class and they have much larger reconstruction errors when the reconstruction is produced from the top-level capsule for that class. We show that setting a threshold on the $\ell_2$ distance between the input image and its reconstruction from the winning capsule is very effective at detecting adversarial images for three different datasets. The same technique works quite well for CNNs that have been trained to reconstruct the image from all or part of the last hidden layer before the softmax. We then explore a stronger, white-box attack that takes the reconstruction error into account. This attack is able to fool our detection technique but in order to make the model change its prediction to another class, the attack must typically make the "adversarial" image resemble images of the other class.
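
The detection rule itself is simple; here is a numpy sketch of it. The reconstruction would come from the winning capsule of a trained model, which is not shown, and the threshold value is an assumption to be tuned on validation data.

```python
import numpy as np

def is_adversarial(image, reconstruction, threshold):
    """DARCCC-style rule: flag the input if the L2 distance between it and the
    reconstruction from the winning capsule exceeds a validation-chosen threshold."""
    return np.linalg.norm(image.ravel() - reconstruction.ravel()) > threshold

rng = np.random.default_rng(0)
img = rng.random((28, 28))
recon_clean = img + 0.01 * rng.standard_normal((28, 28))   # faithful in-class reconstruction
recon_adv = rng.random((28, 28))                            # poor reconstruction of an adversarial input
print(is_adversarial(img, recon_clean, threshold=3.0))      # False
print(is_adversarial(img, recon_adv, threshold=3.0))        # True
```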

17.Grasp2Vec: Learning Object Representations from Self-Supervised Grasping pdf

Well structured visual representations can make robot learning faster and can improve generalization. In this paper, we study how we can acquire effective object-centric representations for robotic manipulation tasks without human labeling by using autonomous robot interaction with the environment. Such representation learning methods can benefit from continuous refinement of the representation as the robot collects more experience, allowing them to scale effectively without human intervention. Our representation learning approach is based on object persistence: when a robot removes an object from a scene, the representation of that scene should change according to the features of the object that was removed. We formulate an arithmetic relationship between feature vectors from this observation, and use it to learn a representation of scenes and objects that can then be used to identify object instances, localize them in the scene, and perform goal-directed grasping tasks where the robot must retrieve commanded objects from a bin. The same grasping procedure can also be used to automatically collect training data for our method, by recording images of scenes, grasping and removing an object, and recording the outcome. Our experiments demonstrate that this self-supervised approach for tasked grasping substantially outperforms direct reinforcement learning from images and prior representation learning methods.
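
A numpy sketch of the object-persistence arithmetic described above: the scene embedding before grasping minus the embedding after grasping should match the embedding of the removed object. The cosine-distance form of the consistency measure is an assumption; the paper's actual training objective may differ.

```python
import numpy as np

def grasp2vec_consistency(phi_pre, phi_post, phi_object):
    """0 when phi_pre - phi_post aligns exactly with phi_object (assumed cosine form)."""
    diff = phi_pre - phi_post
    cos = diff @ phi_object / (np.linalg.norm(diff) * np.linalg.norm(phi_object) + 1e-12)
    return 1.0 - cos

rng = np.random.default_rng(0)
obj = rng.standard_normal(128)            # embedding of the grasped object
scene_post = rng.standard_normal(128)     # scene embedding after the object is removed
scene_pre = scene_post + obj              # idealized object persistence
print(round(grasp2vec_consistency(scene_pre, scene_post, obj), 3))   # ~0.0
```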

18.Evaluating Uncertainty Quantification in End-to-End Autonomous Driving Control pdf

A rise in popularity of Deep Neural Networks (DNNs), attributed to more powerful GPUs and widely available datasets, has seen them being increasingly used within safety-critical domains. One such domain, self-driving, has benefited from significant performance improvements, with millions of miles having been driven with no human intervention. Despite this, crashes and erroneous behaviours still occur, in part due to the complexity of verifying the correctness of DNNs and a lack of safety guarantees.
In this paper, we demonstrate how quantitative measures of uncertainty can be extracted in real-time, and their quality evaluated in end-to-end controllers for self-driving cars. To this end we utilise a recent method for gathering approximate uncertainty information from DNNs without changing the network's architecture. We propose evaluation techniques for the uncertainty on two separate architectures which use the uncertainty to predict crashes up to five seconds in advance. We find that mutual information, a measure of uncertainty in classification networks, is a promising indicator of forthcoming crashes.
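
A numpy sketch of the mutual-information uncertainty measure mentioned above, computed from several stochastic forward passes (e.g. MC dropout). The sampling procedure and the crash-prediction pipeline are not shown; only the MI computation is.

```python
import numpy as np

def mutual_information(probs, eps=1e-12):
    """MI over T sampled softmax outputs of shape (T, C):
    MI = H[mean prediction] - mean[H of each prediction].
    High MI means the stochastic passes disagree about the class."""
    mean_p = probs.mean(axis=0)
    h_mean = -(mean_p * np.log(mean_p + eps)).sum()
    mean_h = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    return h_mean - mean_h

confident = np.tile([0.9, 0.05, 0.05], (20, 1))                      # samples agree -> MI ~ 0
disagreeing = np.array([[0.9, 0.05, 0.05], [0.05, 0.9, 0.05]] * 10)  # samples disagree -> MI > 0
print(mutual_information(confident))
print(mutual_information(disagreeing))
```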

19.DropFilter: A Novel Regularization Method for Learning Convolutional Neural Networks pdf

The past few years have witnessed the fast development of regularization methods for deep learning models such as fully-connected deep neural networks (DNNs) and convolutional neural networks (CNNs). Most previous methods drop features from the input data or hidden layers, such as Dropout, Cutout and DropBlock, while DropConnect drops connections between fully-connected layers. By randomly discarding features or connections, these methods control overfitting and improve the performance of neural networks. In this paper, we propose two novel regularization methods, DropFilter and DropFilter-PLUS, for learning CNNs. Unlike the previous methods, DropFilter and DropFilter-PLUS modify the convolution filters. For DropFilter-PLUS, we find a suitable way to accelerate the learning process based on theoretical analysis. Experimental results on MNIST show that DropFilter and DropFilter-PLUS can improve performance on image classification tasks.
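
A numpy sketch of the basic DropFilter idea: randomly suppress entire convolution filters during training (with inverted-dropout rescaling so the expected response matches inference). The drop probability and rescaling choice are assumptions, and DropFilter-PLUS is not reproduced.

```python
import numpy as np

def drop_filter(weights, drop_prob, rng, training=True):
    """Randomly zero whole convolution filters of shape (out_channels, in_channels, k, k)."""
    if not training or drop_prob == 0.0:
        return weights
    keep = rng.random(weights.shape[0]) >= drop_prob          # one decision per output filter
    mask = keep.astype(weights.dtype) / (1.0 - drop_prob)     # inverted-dropout scaling
    return weights * mask[:, None, None, None]

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32, 3, 3))
w_train = drop_filter(w, drop_prob=0.2, rng=rng)
print((np.abs(w_train).sum(axis=(1, 2, 3)) == 0).sum(), "filters dropped this step")
```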

20.Composite Binary Decomposition Networks pdf

Binary neural networks have great resource and computing efficiency, but suffer from long training procedures and non-negligible accuracy drops compared to their full-precision counterparts. In this paper, we propose composite binary decomposition networks (CBDNet), which first compose the real-valued tensor of each layer from a limited number of binary tensors, and then decompose some conditioned binary tensors into two low-rank binary tensors, so that the number of parameters and operations is greatly reduced compared to the original networks. Experiments demonstrate the effectiveness of the proposed method: CBDNet can approximate the image classification networks ResNet-18 with 5.25 bits, VGG-16 with 5.47 bits, and DenseNet-121 with 5.72 bits, the object detection network SSD300 with 4.38 bits, and the semantic segmentation network SegNet with 5.18 bits, all with minor accuracy drops.
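
For intuition, a numpy sketch of the first step only, approximating a real-valued tensor as a sum of scaled binary tensors, here via simple greedy residual binarization. This is an assumption for illustration: CBDNet's actual composition scheme and the low-rank decomposition of conditioned binary tensors are not reproduced.

```python
import numpy as np

def binary_compose(W, num_bits):
    """Greedy residual binarization: W ~ sum_i alpha_i * B_i with B_i in {-1, +1}."""
    alphas, binaries = [], []
    residual = W.copy()
    for _ in range(num_bits):
        B = np.sign(residual)
        B[B == 0] = 1.0
        alpha = np.abs(residual).mean()
        alphas.append(alpha)
        binaries.append(B)
        residual = residual - alpha * B
    return alphas, binaries

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
alphas, binaries = binary_compose(W, num_bits=5)
W_hat = sum(a * B for a, B in zip(alphas, binaries))
print(round(np.linalg.norm(W - W_hat) / np.linalg.norm(W), 3))   # relative error shrinks as bits increase
```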

21.Optical Flow Based Background Subtraction with a Moving Camera: Application to Autonomous Driving pdf

In this research we present a novel algorithm for background subtraction with a moving camera. Our algorithm relies purely on visual information from a camera mounted on an electric bus operating in downtown Reno, and it automatically detects moving objects of interest with a view toward a fully autonomous vehicle. We exploit the optical flow vectors generated by the motion of the camera while keeping parameter assumptions to a minimum. We first estimate the Focus of Expansion, which is used to model and simulate 3D points given the intrinsic parameters of the camera, then perform multiple linear regression to estimate the regression-equation parameters and apply them to the real data of every frame to identify moving objects. We validated our algorithm using data taken from a common bus route.
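
A numpy sketch of the Focus of Expansion step and a simple residual test for independently moving pixels. This is only an illustration of the geometric idea, not the paper's full regression model: for a purely translating camera, each flow vector at pixel (x, y) points along the ray from the FOE, so the FOE can be recovered by least squares and large deviations from that radial pattern flag candidate moving objects.

```python
import numpy as np

def estimate_foe(xs, ys, us, vs):
    """Least-squares Focus of Expansion from flow vectors (u, v) at pixels (x, y).

    The radial-flow constraint (x - x_f) * v - (y - y_f) * u = 0 per pixel gives a
    linear system in the unknown FOE (x_f, y_f)."""
    A = np.stack([-vs, us], axis=1)
    b = ys * us - xs * vs
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

def flow_residuals(xs, ys, us, vs, foe):
    """Deviation of each flow vector from the radial pattern implied by the FOE;
    large residuals are candidate independently moving objects."""
    return np.abs((xs - foe[0]) * vs - (ys - foe[1]) * us)

# Synthetic check: radial flow away from a known FOE, plus a few moving-object vectors.
rng = np.random.default_rng(0)
xs, ys = rng.uniform(0, 640, 200), rng.uniform(0, 480, 200)
true_foe = np.array([320.0, 240.0])
us, vs = 0.05 * (xs - true_foe[0]), 0.05 * (ys - true_foe[1])
us[:5] += 8.0                        # an object moving independently of the camera
foe = estimate_foe(xs, ys, us, vs)
res = flow_residuals(xs, ys, us, vs, foe)
print(np.round(foe), (res > res.mean() + 3 * res.std()).sum())
```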