ArXiv cs.CV -- Thu, 8 Nov 2018

1.Prototypical Clustering Networks for Dermatological Disease Diagnosis pdf

We consider the problem of image classification for the purpose of aiding doctors in dermatological diagnosis. Dermatological diagnosis poses two major challenges for standard off-the-shelf techniques: first, the data distribution is typically extremely long-tailed; second, intra-class variability is often large. To address the first issue, we formulate the problem as low-shot learning, where once deployed, a base classifier must rapidly generalize to diagnose novel conditions given very few labeled examples. To model diverse classes effectively, we propose Prototypical Clustering Networks (PCN), an extension of Prototypical Networks that learns a mixture of prototypes for each class. Prototypes are initialized for each class via clustering and refined via an online update scheme. Classification is performed by measuring similarity to a weighted combination of prototypes within a class, where the weights are the inferred cluster responsibilities. We demonstrate the strengths of our approach in effective diagnosis on a realistic dataset of dermatological conditions.
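
A minimal sketch of the scoring rule just described, assuming per-class prototype matrices and inferred responsibilities (names are illustrative, and whether similarity is taken to the combined prototype or combined across per-prototype similarities is a detail the abstract leaves open):

```python
import numpy as np

def pcn_scores(query, prototypes, responsibilities):
    """Score a query embedding against per-class prototype mixtures.

    query:            (d,) embedding of the test image
    prototypes:       dict class -> (k, d) array of k prototypes
    responsibilities: dict class -> (k,) cluster weights summing to 1
    """
    scores = {}
    for c, protos in prototypes.items():
        mix = responsibilities[c] @ protos       # weighted combination of prototypes
        scores[c] = -np.sum((query - mix) ** 2)  # negative squared distance as similarity
    return scores
```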

2.Instance Retrieval at Fine-grained Level Using Multi-Attribute Recognition pdf

In this paper, we present a method for instance ranking and retrieval at a fine-grained level based on global features extracted from a multi-attribute recognition model that does not depend on landmark information or part-based annotations. Further, we make this architecture suitable for mobile-device applications by adopting the bilinear CNN to make the multi-attribute recognition model smaller (in terms of the number of parameters). Experiments on the Dress category of the DeepFashion In-Shop Clothes Retrieval and CUB200 datasets show that the results of fine-grained instance retrieval are promising for these datasets, especially in terms of texture and color.

3.SurReal: enhancing Surgical simulation Realism using style transfer pdf

Surgical simulation is an increasingly important element of surgical education. Simulation can be a means to address some of the significant challenges of developing surgical skills with limited time and resources. The photo-realistic fidelity of simulations is a key feature that can improve the experience and transfer ratio of trainees. In this paper, we demonstrate how to enhance the visual fidelity of existing surgical simulation by performing style transfer of multi-class labels from real surgical video onto synthetic content. We demonstrate our approach on simulations of cataract surgery using real data labels from an existing public dataset. Our results highlight the feasibility of the approach and the possibility of extending this technique to incorporate additional temporal constraints and to apply it to other applications.

4.Multi-branch Convolutional Neural Network for Multiple Sclerosis Lesion Segmentation pdf

In this paper, we present an automated approach for segmenting multiple sclerosis (MS) lesions from multi-modal brain magnetic resonance images. Our method is based on a deep end-to-end 2D convolutional neural network (CNN) for slice-based segmentation of 3D volumetric data. The proposed CNN includes a multi-branch down-sampling path, which enables the network to encode slices from multiple modalities separately. Multi-scale feature fusion blocks are proposed to combine feature maps from different modalities at different stages of the network. Multi-scale feature up-sampling blocks are then proposed to upsample the combined feature maps at different resolutions, leveraging information about the lesion's shape and location. We trained and tested our model using orthogonal plane orientations of each 3D modality to exploit the contextual information in all directions. The proposed pipeline is evaluated on two different datasets: a private dataset including 37 MS patients and a publicly available dataset known as the ISBI 2015 longitudinal MS lesion segmentation challenge dataset, consisting of 14 MS patients. On the ISBI challenge, at the time of submission, our method was among the top-performing solutions. On the private dataset, using the same array of performance metrics as in the ISBI challenge, the proposed approach shows a substantial improvement in MS lesion segmentation compared with other publicly available tools.

5.Emerging Applications of Reversible Data Hiding pdf

Reversible data hiding (RDH) is a special type of information hiding by which both the host sequence and the embedded data can be restored from the marked sequence without loss. Besides media annotation and integrity authentication, some scholars have recently begun to apply RDH innovatively in other fields. In this paper, we summarize these emerging applications, including steganography, adversarial examples, visual transformation, and image processing, and present general frameworks for making these operations reversible. To the best of our knowledge, this is the first paper to summarize the extended applications of RDH.

6.DOD-CNN: Doubly-injecting Object Information for Event Recognition pdf

Recognizing an event in an image can be enhanced by detecting relevant objects in two ways: 1) indirectly utilizing object detection information within a unified architecture or 2) directly making use of the object detection output results. We introduce a novel approach, referred to as Doubly-injected Object Detection CNN (DOD-CNN), that exploits object information in both ways for event recognition. The structure of this network is inspired by the Integrated Object Detection CNN (IOD-CNN), in which object information is indirectly exploited by the event recognition module through the shared portion of the network. In the DOD-CNN architecture, the intermediate object detection outputs are directly injected into the event recognition network while keeping the indirect sharing structure inherited from IOD-CNN, hence "doubly-injected". We also introduce a batch pooling layer that constructs one representative feature map from multiple object hypotheses. We demonstrate the effectiveness of injecting object detection information in these two ways on the task of malicious event recognition.
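
The batch pooling layer is only named in the abstract; a minimal reading, assuming max-pooling across object hypotheses (the paper may use a different reduction), could look like:

```python
import torch

def batch_pool(feature_maps):
    """Collapse the feature maps of multiple object hypotheses into one
    representative map, as a batch pooling layer might.

    feature_maps: (n_hypotheses, C, H, W) tensor
    returns:      (1, C, H, W) tensor
    """
    # Max across the hypothesis dimension is an assumption, not the paper's spec.
    return feature_maps.max(dim=0, keepdim=True).values
```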

7.Neural Image Compression for Gigapixel Histopathology Image Analysis pdf

We present Neural Image Compression (NIC), a method to reduce the size of gigapixel images by mapping them to a compact latent space using neural networks. We show that this compression allows us to train convolutional neural networks on histopathology whole-slide images end-to-end using weak image-level labels.

8.PaDNet: Pan-Density Crowd Counting pdf

Crowd counting in varying density scenes is a challenging problem in artificial intelligence (AI) and pattern recognition. Recently, deep convolutional neural networks (CNNs) have been used to tackle this problem. However, single-column CNNs cannot achieve high accuracy and robustness in diverse density scenes, while multi-column CNNs lack an effective way to accurately learn features at different scales for estimating crowd density. To address these issues, we propose a novel pan-density deep learning model, named Pan-Density Network (PaDNet). Specifically, PaDNet learns multi-scale features in three steps. First, several sub-networks are pre-trained on crowd images with different density levels. Then, a Scale Reinforcement Net (SRN) is utilized to reinforce the scale features. Finally, a Fusion Net fuses all of the scale features to generate the final density map. Experiments on four crowd counting benchmark datasets, ShanghaiTech, UCF_CC_50, UCSD, and UCF-QNRF, indicate that PaDNet achieves the best performance and high robustness in pan-density crowd counting compared with other state-of-the-art algorithms.

9.Image Smoothing via Unsupervised Learning pdf

Image smoothing is a fundamental component of many disparate computer vision and graphics applications. In this paper, we present a unified unsupervised (label-free) learning framework that facilitates generating flexible and high-quality smoothing effects by directly learning from data using deep convolutional neural networks (CNNs). At the heart of the design is the training signal: a novel energy function that includes an edge-preserving regularizer, which helps maintain important yet potentially vulnerable image structures, and a spatially-adaptive Lp flattening criterion, which imposes different forms of regularization on different image regions for better smoothing quality. We implement a diverse set of image smoothing solutions employing the unified framework, targeting applications such as image abstraction, pencil sketching, detail enhancement, texture removal, and content-aware image manipulation, and obtain results comparable with or better than previous methods. Moreover, our method is extremely fast with a modern GPU (e.g., 200 fps for 1280x720 images). Our code and model are released at this https URL.
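
A schematic version of such a label-free training signal, with an assumed edge weight map and a per-pixel exponent standing in for the spatially-adaptive Lp criterion (the paper's exact energy differs):

```python
import torch

def smoothing_energy(smooth, input_img, edge_weight, p_map, lam=1.0):
    """Data term keeping the output near the input, plus a flattening term
    that is down-weighted at edges (edge_weight near 0) and applies a
    per-pixel exponent p_map. All weightings here are assumptions.
    """
    data = ((smooth - input_img) ** 2).mean()
    dx = (smooth[..., :, 1:] - smooth[..., :, :-1]).abs()
    dy = (smooth[..., 1:, :] - smooth[..., :-1, :]).abs()
    flat = (edge_weight[..., :, 1:] * dx ** p_map[..., :, 1:]).mean() + \
           (edge_weight[..., 1:, :] * dy ** p_map[..., 1:, :]).mean()
    return data + lam * flat
```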

10.Deep Neural Networks for ECG-free Cardiac Phase and End-Diastolic Frame Detection on Coronary Angiographies pdf

Invasive coronary angiography (ICA) is the gold standard in Coronary Artery Disease (CAD) imaging. Detection of the end-diastolic frame (EDF) and, in general, cardiac phase detection on each temporal frame of a coronary angiography acquisition is of significant importance for the anatomical and non-invasive functional assessment of CAD. This task is generally performed via manual frame selection or semi-automated selection based on simultaneously acquired ECG signals, thus introducing the requirement of simultaneous ECG recordings. We evaluate the performance of a purely image-based workflow built on deep neural networks for fully automated cardiac phase and EDF detection on coronary angiographies. A first deep neural network (DNN), trained to detect coronary arteries, is employed to preselect a subset of frames in which coronary arteries are well visible. A second DNN predicts cardiac phase labels for each frame. ECG signals are used only to provide ground-truth labels for each angiographic frame during training and evaluation of the second DNN. The networks were trained on 17,800 coronary angiographies from 3,900 patients and evaluated on 27,900 coronary angiographies from 6,250 patients. No exclusion criteria related to patient state, previous interventions, or pathology were formulated. Cardiac phase detection had an accuracy of 97.6%, a sensitivity of 97.6%, and a specificity of 97.5% on the evaluation set. EDF prediction had a precision of 97.4% and a recall of 96.9%. Several sub-group analyses were performed, indicating that cardiac phase detection performance is largely independent of acquisition angles and the patient's heart rate. Cardiac phase detection for one angiographic series took, on average, less than five seconds on a standard workstation.

11.Amalgamating Knowledge towards Comprehensive Classification pdf

With the rapid development of deep learning, an unprecedentedly large number of trained deep network models have become available online. Reusing such trained models can significantly reduce the cost of training new models from scratch, which may even be infeasible, as the annotations used to train the original networks are often unavailable to the public. We propose in this paper to study a new model-reusing task, which we term *knowledge amalgamation*. Given multiple trained teacher networks, each of which specializes in a different classification problem, the goal of knowledge amalgamation is to learn a lightweight student model capable of handling the comprehensive classification. We assume no annotations other than the outputs from the teacher models are available, and thus focus on extracting and amalgamating knowledge from the multiple teachers. To this end, we propose a pilot two-step strategy for the knowledge amalgamation task: first learning compact feature representations from the teachers, and then learning the network parameters in a layer-wise manner to build the student model. We apply this approach to four public datasets and obtain very encouraging results: even without any human annotation, the obtained student model is competent to handle the comprehensive classification task and in most cases outperforms the teachers in their individual sub-tasks.

12.GeoSay: A Geometric Saliency for Extracting Buildings in Remote Sensing Images pdf

Automatic extraction of buildings in remote sensing images is an important but challenging task with many applications in fields such as urban planning and navigation. This paper addresses building extraction in very high-spatial-resolution (VHSR) remote sensing (RS) images, whose spatial resolution is often up to half a meter and which provide rich information about buildings. Based on the observation that buildings in VHSR-RS images are always more distinguishable in geometry than in the texture or spectral domain, this paper proposes a geometric building index (GBI) for accurate building extraction, computed from the geometric saliency of VHSR-RS images. More precisely, given an image, the geometric saliency is derived from a mid-level geometric representation based on meaningful junctions that can locally describe the geometric structures of images. The resulting GBI is finally obtained by integrating the derived geometric saliency of buildings. Experiments on three public and commonly used datasets demonstrate that the proposed GBI achieves state-of-the-art performance and shows impressive generalization capability. Additionally, GBI preserves both the exact position and the accurate shape of single buildings compared to existing methods.

13.Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks pdf

The training of many existing end-to-end steering angle prediction models relies heavily on steering angles as the sole supervisory signal. Without learning from richer contexts, these methods are susceptible to sharp road curves, challenging traffic conditions, strong shadows, and severe lighting changes. In this paper, we considerably improve the accuracy and robustness of predictions through feature mimicking from heterogeneous auxiliary networks, a new and effective training method that provides much richer contextual signals than steering direction alone. Specifically, we train our steering angle prediction model by distilling multi-layer knowledge from multiple heterogeneous auxiliary networks that perform related but different tasks, e.g., image segmentation or optical flow estimation. As opposed to multi-task learning, our method does not require expensive annotations of related tasks on the target set. This is made possible by applying contemporary off-the-shelf networks to the target set and mimicking their features in different layers after transformation, as sketched below. The auxiliary networks are discarded after training without affecting the runtime efficiency of our model. Our approach achieves a new state of the art on Udacity and Comma.ai, outperforming the previous best by large margins of 12.8% and 52.1%, respectively. Encouraging results are also shown on the Berkeley Deep Drive (BDD) dataset.
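
A minimal sketch of multi-layer feature mimicking under stated assumptions (1x1-convolution adapters regressing student features onto frozen auxiliary-network features; the paper's transformation and layer choices may differ):

```python
import torch
import torch.nn as nn

class FeatureMimicLoss(nn.Module):
    """Regress transformed student features onto frozen auxiliary features."""

    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # One 1x1-conv adapter per mimicked layer (an assumed transformation).
        self.adapt = nn.ModuleList(
            nn.Conv2d(s, t, kernel_size=1)
            for s, t in zip(student_channels, teacher_channels)
        )

    def forward(self, student_feats, teacher_feats):
        loss = 0.0
        for head, fs, ft in zip(self.adapt, student_feats, teacher_feats):
            loss = loss + nn.functional.mse_loss(head(fs), ft.detach())
        return loss
```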

14.Component-based Attention for Large-scale Trademark Retrieval pdf

The demand for large-scale trademark retrieval (TR) systems has significantly increased to combat the rise in international trademark infringement. Unfortunately, the ranking accuracy of current approaches using either hand-crafted or pre-trained deep convolutional neural network (DCNN) features is inadequate for large-scale deployments. We show in this paper that the ranking accuracy of TR systems can be significantly improved by incorporating hard and soft attention mechanisms, which direct attention to critical information such as figurative elements and reduce attention given to distracting and uninformative elements such as text and background. Our proposed approach achieves state-of-the-art results on a challenging large-scale trademark dataset.

15.Y^2Seq2Seq: Cross-Modal Representation Learning for 3D Shape and Text by Joint Reconstruction and Prediction of View and Word Sequences pdf

A recent method employs 3D voxels to represent 3D shapes, but this limits the approach to low resolutions due to the computational cost caused by the cubic complexity of 3D voxels. Hence the method suffers from a lack of detailed geometry. To resolve this issue, we propose Y^2Seq2Seq, a view-based model, to learn cross-modal representations by joint reconstruction and prediction of view and word sequences. Specifically, the network architecture of Y^2Seq2Seq bridges the semantic meaning embedded in the two modalities by two coupled 'Y'-like sequence-to-sequence (Seq2Seq) structures. In addition, our novel hierarchical constraints further increase the discriminability of the cross-modal representations by employing more detailed discriminative information. Experimental results on cross-modal retrieval and 3D shape captioning show that Y^2Seq2Seq outperforms the state-of-the-art methods.

16.View Inter-Prediction GAN: Unsupervised Representation Learning for 3D Shapes by Learning Global Shape Memories to Support Local View Predictions pdf

In this paper we present a novel unsupervised representation learning approach for 3D shapes, which is an important research challenge as it avoids the manual effort required for collecting supervised data. Our method trains an RNN-based neural network architecture to solve multiple view inter-prediction tasks for each shape. Given several nearby views of a shape, we define view inter-prediction as the task of predicting the center view between the input views and reconstructing the input views in a low-level feature space. The key idea of our approach is to implement the shape representation as a shape-specific global memory that is shared between all local view inter-predictions for each shape. Intuitively, this memory enables the system to aggregate information that is useful to better solve the view inter-prediction tasks for each shape, and to leverage the memory as a view-independent shape representation. Our approach, which we call VIP-GAN, obtains its best results using a combination of L_2 and adversarial losses for the view inter-prediction task. We show that VIP-GAN outperforms state-of-the-art methods in unsupervised 3D feature learning on three large-scale 3D shape benchmarks.

17.Style Separation and Synthesis via Generative Adversarial Networks pdf

Style synthesis has attracted great interest recently, while few works focus on its dual problem, "style separation". In this paper, we propose the Style Separation and Synthesis Generative Adversarial Network (S3-GAN) to simultaneously perform style separation and style synthesis on object photographs of specific categories. Based on the assumption that the object photographs lie on a manifold and that contents and styles are independent, we employ S3-GAN to build mappings between the manifold and a latent vector space for separating and synthesizing the contents and styles. The S3-GAN consists of an encoder network, a generator network, and an adversarial network. The encoder network performs style separation by mapping an object photograph to a latent vector, in which two halves represent the content and style, respectively. The generator network performs style synthesis by taking a concatenated vector as input, containing the style half-vector of the style target image and the content half-vector of the content target image. An adversarial network is then imposed on the generator outputs to produce more photo-realistic images. Experiments on the CelebA and UT Zappos 50K datasets demonstrate that S3-GAN can perform style separation and synthesis simultaneously and capture various styles within a single model.
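
The half-and-half latent layout suggests a simple style swap at inference time; which half holds content and which holds style is an assumption in this sketch:

```python
import torch

def swap_style(encoder, generator, content_img, style_img):
    """Synthesize an image with the content of one photo and the style of another."""
    z_c = encoder(content_img)           # (B, d) latent of the content target
    z_s = encoder(style_img)             # (B, d) latent of the style target
    d = z_c.shape[1] // 2
    z = torch.cat([z_c[:, :d],           # content half from the content image
                   z_s[:, d:]], dim=1)   # style half from the style image
    return generator(z)
```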

18.Training Domain Specific Models for Energy-Efficient Object Detection pdf

We propose an end-to-end framework for training domain specific models (DSMs) to obtain both high accuracy and computational efficiency for object detection tasks. DSMs are trained with distillation (Hinton et al., 2015) and focus on achieving high accuracy on a limited domain (e.g., a fixed view of an intersection). We argue that DSMs can capture essential features well even with a small model size, enabling higher accuracy and efficiency than traditional techniques. In addition, we improve training efficiency by reducing the dataset size, culling easy-to-classify images from the training set. For the limited domain, we observed that compact DSMs significantly surpass the accuracy of COCO-trained models of the same size. By training on a compact dataset, we show that with an accuracy drop of only 3.6%, the training time can be reduced by 93%.

19.Automatic Assessment of Full Left Ventricular Coverage in Cardiac Cine Magnetic Resonance Imaging with Fisher-Discriminative 3D CNN pdf

Cardiac magnetic resonance (CMR) images play a growing role in the diagnostic imaging of cardiovascular diseases. Full coverage of the left ventricle (LV), from base to apex, is a basic criterion for CMR image quality and necessary for accurate measurement of cardiac volume and functional assessment. Incomplete coverage of the LV is identified through visual inspection, which is time-consuming and usually done retrospectively in the assessment of large imaging cohorts. This paper proposes a novel automatic method for determining LV coverage from CMR images by using Fisher-discriminative three-dimensional (FD3D) convolutional neural networks (CNNs). In contrast to our previous method employing 2D CNNs, this approach utilizes spatial contextual information in CMR volumes, extracts more representative high-level features and enhances the discriminative capacity of the baseline 2D CNN learning framework, thus achieving superior detection accuracy. A two-stage framework is proposed to identify missing basal and apical slices in measurements of CMR volume. First, the FD3D CNN extracts high-level features from the CMR stacks. These image representations are then used to detect the missing basal and apical slices. Compared to the traditional 3D CNN strategy, the proposed FD3D CNN minimizes within-class scatter and maximizes between-class scatter. We performed extensive experiments to validate the proposed method on more than 5,000 independent volumetric CMR scans from the UK Biobank study, achieving low error rates for missing basal/apical slice detection (4.9%/4.6%). The proposed method can also be adopted for assessing LV coverage for other types of CMR image data.
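
A batch-level Fisher-style criterion of the kind described (minimize within-class scatter, maximize between-class scatter); the ratio form below is an assumption, not the paper's exact loss:

```python
import torch

def fisher_loss(features, labels, eps=1e-8):
    """Small values favor compact classes that are far apart."""
    mean_all = features.mean(dim=0)
    s_w, s_b = 0.0, 0.0
    for c in labels.unique():
        fc = features[labels == c]
        mu_c = fc.mean(dim=0)
        s_w = s_w + ((fc - mu_c) ** 2).sum()                  # within-class scatter
        s_b = s_b + len(fc) * ((mu_c - mean_all) ** 2).sum()  # between-class scatter
    return s_w / (s_b + eps)
```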

20.Attention-Mechanism-based Tracking Method for Intelligent Internet of Vehicles pdf

Vehicle tracking plays an important role in the Internet of Vehicles and intelligent transportation systems. Beyond the traditional GPS sensor, an image sensor can capture different kinds of vehicles, analyze their driving situation, and interact with them. To address the problem that traditional convolutional neural networks are vulnerable to background interference, this paper proposes a vehicle tracking method based on a human attention mechanism for self-selection of deep features, with an inter-channel fully connected layer. It mainly includes the following: 1) a fully convolutional neural network fused with an attention mechanism for the selection of deep features for convolution; 2) a separation method for the template and semantic background region that adaptively separates target vehicles from the background in the initial frame; 3) a two-stage method for model training using our traffic dataset. The experimental results show that the proposed method improves tracking accuracy without an increase in tracking time and strengthens the robustness of the algorithm under complex background conditions. The success rate of the proposed method on the overall traffic datasets is higher than that of the Siamese network by about 10 percent, and the overall precision is higher by 8 percent.

21.Automated Diagnosis of Lymphoma with Digital Pathology Images Using Deep Learning pdf

Recent studies have shown promising results in using deep learning to detect malignancy in whole slide imaging. However, they were limited to predicting a positive or negative finding for a specific neoplasm. We attempted to use deep learning with a convolutional neural network algorithm to build a lymphoma diagnostic model for four diagnostic categories: benign lymph node, diffuse large B cell lymphoma, Burkitt lymphoma, and small lymphocytic lymphoma. Our software was written in Python. We obtained digital whole slide images of Hematoxylin and Eosin stained slides of 128 cases, including 32 cases for each diagnostic category. Four sets of 5 representative images, 40x40 pixels in dimension, were taken for each case. A total of 2,560 images were obtained, of which 1,856 were used for training, 464 for validation, and 240 for testing. For each test set, the predicted diagnosis was combined from the predictions of its 5 images. The test results showed excellent diagnostic accuracy at 95% for image-by-image prediction and at 100% for set-by-set prediction. This preliminary study provides a proof of concept for incorporating automated lymphoma diagnostic screening into the future pathology workflow to augment pathologists' productivity.

22.Band Selection from Hyperspectral Images Using Attention-based Convolutional Neural Networks pdf

This paper introduces new attention-based convolutional neural networks for selecting bands from hyperspectral images. The proposed approach re-uses convolutional activations at different depths, identifying the most informative regions of the spectrum with the help of gating mechanisms. Our attention techniques are modular and easy to implement, and they can be seamlessly trained end-to-end using gradient descent. Our rigorous experiments showed that deep models equipped with the attention mechanism deliver high-quality classification, and repeatedly identify significant bands in the training data, permitting the creation of refined and extremely compact sets that retain the most meaningful features.
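
One plausible reading of such a gating mechanism, with an assumed pooling step and two-layer gate, is a module that scores each spectral band so that low-scoring bands can be dropped:

```python
import torch
import torch.nn as nn

class BandAttention(nn.Module):
    """Score each of the B spectral bands of a hyperspectral cube."""

    def __init__(self, n_bands, hidden=32):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bands), nn.Sigmoid(),
        )

    def forward(self, x):                # x: (N, B, H, W)
        pooled = x.mean(dim=(2, 3))      # (N, B) per-band summary
        weights = self.gate(pooled)      # (N, B) soft band importances
        return x * weights[:, :, None, None], weights
```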

23.Similarity Learning with Higher-Order Proximity for Brain Network Analysis pdf

In recent years, the similarity learning problem has been widely studied. Most existing works focus on natural images, and few of them can be applied to learning similarity between neuroimages, such as fMRI and DTI images, which are important data sources for human brain analysis. In this paper, we focus on similarity learning for fMRI brain network analysis. We propose a general framework called "Multi-hop Siamese GCN" for similarity learning on graphs. This framework provides options for refining the graph representations with higher-order structure information and thus can be used for graph similarity learning on various brain network datasets. We apply the proposed Multi-hop Siamese GCN approach to four real fMRI brain network datasets for similarity learning with respect to brain health status and cognitive abilities. Our proposed method achieves an average AUC gain of 82.6% compared to PCA and an average AUC gain of 42% compared to S-GCN across a variety of datasets, indicating its promising learning ability for clinical investigation and brain disease diagnosis.

24.MAMMO: A Deep Learning Solution for Facilitating Radiologist-Machine Collaboration in Breast Cancer Diagnosis pdf

With an aging and growing population, the number of women requiring either screening or symptomatic mammograms is increasing. To reduce the number of mammograms that need to be read by a radiologist while keeping diagnostic accuracy the same as or better than current clinical practice, we develop the Man and Machine Mammography Oracle (MAMMO), a clinical decision support system capable of triaging mammograms into those that can be confidently classified by a machine and those that cannot and thus require reading by a radiologist. The first component of MAMMO is a novel multi-view convolutional neural network (CNN) with multi-task learning (MTL). MTL enables the CNN to learn the radiological assessments known to be associated with cancer, such as breast density, conspicuity, and suspicion, in addition to learning the primary task of cancer diagnosis. We show that MTL has two advantages: 1) learning refined feature representations associated with cancer improves the classification performance of the diagnosis task, and 2) issuing radiological assessments provides an additional layer of model interpretability that a radiologist can use to debug and scrutinize the diagnoses provided by the CNN. The second component of MAMMO is a triage network, which takes as input the first network's MTL outputs (radiological assessments and diagnostic predictions) and determines which mammograms can be correctly and confidently diagnosed by the CNN and which cannot, and thus need to be read by a radiologist. Results obtained on a private dataset of 8,162 patients show that MAMMO reduced the number of radiologist readings by 42.8% while improving overall diagnostic accuracy in comparison to readings done by radiologists alone. We analyze the patient triage decided by MAMMO to gain a better understanding of which mammogram characteristics uniquely require radiologists' expertise.

25.Machine Learning Algorithms for Classification of Microcirculation Images from Septic and Non-Septic Patients pdf

Sepsis is a life-threatening disease and one of the major causes of death in hospitals. Imaging of microcirculatory dysfunction is a promising approach for automated diagnosis of sepsis. We report a machine learning classifier capable of distinguishing non-septic and septic images from dark field microcirculation videos of patients. The classifier achieves an accuracy of 89.45%. The area under the receiver operating characteristic curve of the classifier was 0.92, the precision was 0.92, and the recall was 0.84. Codes representing the learned feature space of the trained classifier were visualized using t-SNE embedding and were separable, distinguishing images from critically ill patients from those of non-septic patients. Using an unsupervised convolutional autoencoder, independent of the clinical diagnosis, we also report clustering of learned features from a compressed representation associated with healthy images and those with microcirculatory dysfunction. The feature space used by our trained classifier to distinguish between images from septic and non-septic patients has potential diagnostic application.

26.When Not to Classify: Detection of Reverse Engineering Attacks on DNN Image Classifiers pdf

This paper addresses detection of a reverse engineering (RE) attack targeting a deep neural network (DNN) image classifier; by querying, RE's aim is to discover the classifier's decision rule. RE can enable test-time evasion attacks, which require knowledge of the classifier. Recently, we proposed a quite effective approach (ADA) to detect test-time evasion attacks. In this paper, we extend ADA to detect RE attacks (ADA-RE). We demonstrate our method is successful in detecting "stealthy" RE attacks before they learn enough to launch effective test-time evasion attacks.

27.Neural Rendering Model: Joint Generation and Prediction for Semi-Supervised Learning pdf

Unsupervised and semi-supervised learning are important problems that are especially challenging with complex data like natural images. Progress on these problems would accelerate if we had access to appropriate generative models under which to pose the associated inference tasks. Inspired by the success of Convolutional Neural Networks (CNNs) for supervised prediction in images, we design the Neural Rendering Model (NRM), a new probabilistic generative model whose inference calculations correspond to those in a given CNN architecture. The NRM uses the given CNN to design the prior distribution in the probabilistic model. Furthermore, the NRM generates images from coarse to finer scales. It introduces a small set of latent variables at each level and enforces dependencies among all the latent variables via a conjugate prior distribution. This conjugate prior yields a new regularizer for training CNNs based on paths rendered in the generative model: the Rendering Path Normalization (RPN). We demonstrate that this regularizer improves generalization, both in theory and in practice. In addition, likelihood estimation in the NRM yields training losses for CNNs, and inspired by this, we design a new loss termed the Max-Min cross-entropy, which outperforms the traditional cross-entropy loss for object classification. The Max-Min cross-entropy suggests a new deep network architecture, namely the Max-Min network, which can learn from less labeled data while maintaining good prediction performance. Our experiments demonstrate that the NRM with the RPN and the Max-Min architecture exceeds or matches the state of the art on benchmarks including SVHN, CIFAR10, and CIFAR100 for semi-supervised and supervised learning tasks.

28.Quaternion Convolutional Neural Networks for Heterogeneous Image Processing pdf

Convolutional neural networks (CNNs) have recently achieved state-of-the-art results in various applications. In the case of image recognition, an ideal model has to learn, independently of the training data, both the local dependencies between the three components (R, G, B) of a pixel and the global relations describing edges or shapes, making it efficient with small or heterogeneous datasets. Quaternion-valued convolutional neural networks (QCNNs) address this problem by introducing multidimensional algebra into CNNs. This paper explores the fundamental reason for the success of QCNNs over CNNs by investigating the impact of the Hamilton product on a color image reconstruction task performed from gray-scale-only training. By learning both internal and external relations independently, and with fewer parameters than a real-valued convolutional encoder-decoder (CAE), quaternion convolutional encoder-decoders (QCAEs) perfectly reconstructed unseen color images, while the CAE produced worse, gray-scale versions.
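
For reference, the Hamilton product at the core of QCNNs, which mixes all four quaternion components (and hence ties the color channels together) in every multiplication:

```python
import numpy as np

def hamilton_product(q1, q2):
    """Product of two quaternions given as (r, x, y, z) arrays."""
    r1, x1, y1, z1 = q1
    r2, x2, y2, z2 = q2
    return np.array([
        r1*r2 - x1*x2 - y1*y2 - z1*z2,
        r1*x2 + x1*r2 + y1*z2 - z1*y2,
        r1*y2 - x1*z2 + y1*r2 + z1*x2,
        r1*z2 + x1*y2 - y1*x2 + z1*r2,
    ])
```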

29.A Volumetric Convolutional Neural Network for Brain Tumor Segmentation pdf

Brain cancer is often fatal, but chances of survival increase through early detection and treatment. Doctors use Magnetic Resonance Imaging (MRI) to detect and locate tumors in the brain, and very carefully analyze scans to segment brain tumors. Manual segmentation is time-consuming and tiring for doctors, and it can be difficult for them to notice extremely small abnormalities. Automated segmentations performed by computers offer quicker diagnoses, the ability to notice small details, and more accurate segmentations. Advances in deep learning and computer hardware have allowed for high-performing automated segmentation approaches. However, several problems persist in practice: increased training time, class imbalance, and low performance. In this paper, I propose applying V-Net, a volumetric, fully convolutional neural network, to segment brain tumors in MRI scans from the BraTS Challenges. With this approach, I achieve a whole tumor dice score of 0.89 and train the network in a short time, while addressing class imbalance with a dice loss layer. I then propose applying an existing technique to improve automated segmentation performance in practice.
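
The dice loss layer mentioned above is standard; a common soft-Dice formulation (the smoothing constant below is a typical choice, not necessarily the paper's) is:

```python
import torch

def dice_loss(pred, target, smooth=1.0):
    """1 - Dice coefficient between a probability map and a binary mask."""
    pred, target = pred.flatten(), target.flatten()
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)
```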

30.Finding and Following of Honeycombing Regions in Computed Tomography Lung Images by Deep Learning pdf

In recent years, Computer Aided Diagnosis (CAD) systems, which can support the physician's decision making and detect disease at an early stage, have come into frequent use alongside conventional medical treatment. Diagnosing Idiopathic Pulmonary Fibrosis (IPF) with CAD systems is important because it allows doctors and radiologists to follow the disease. The development of high-resolution computed imaging scanners and increasing computational power have made it possible to diagnose and follow up the disease with the help of CAD systems. The purpose of this project is to design a tool that helps specialists diagnose and follow up IPF by identifying areas of honeycombing and ground-glass patterns in High Resolution Computed Tomography (HRCT) lung images. The main goals of this work are to create a program module that segments the lung pair and to train a deep learning model on Computed Tomography (CT) images with diseased regions annotated by doctors. Using the trained model, the program module can find these regions in new CT images. In this study, lung segmentation performance was evaluated with the Sørensen-Dice coefficient, yielding a mean of 90.7%; the created model was tested on data not used during CNN training, with average performance of 87.8% for healthy regions, 73.3% for ground-glass areas, and 69.1% for honeycombing zones.

31.Visual Attention is Beyond One Single Saliency Map pdf

In recent years, numerous bottom-up attention models have been proposed, built on different assumptions. However, the saliency maps they produce may differ from each other even for the same input image. We also observe that the human fixation map varies greatly across time: when people freely view an image, they tend to allocate attention first to salient regions at large scale, and then search more and more detailed regions. In this paper, we argue that, for one input image, visual attention cannot be described by a single saliency map, and that this mechanism should be modeled as a dynamic process. Under the frequency domain paradigm, we propose a global inhibition model that mimics this process by suppressing the *non-saliency* in the input image; we also show that the dynamic process is influenced by one parameter in the frequency domain. Experiments illustrate that the proposed model is capable of predicting the human dynamic fixation distribution.

32.DeepDPM: Dynamic Population Mapping via Deep Neural Network pdf

Dynamic, high-resolution data on human population distribution is of great importance for a wide spectrum of activities and real-life applications, but is too difficult and expensive to obtain directly. Therefore, generating fine-scaled population distributions from coarse population data is of great significance. However, there are three major challenges: 1) the complexity of the spatial relations between high- and low-resolution population; 2) the dependence of population distributions on other external information; 3) the difficulty of retrieving temporal distribution patterns. In this paper, we first propose to generate dynamic population distributions in full-time series, and then design dynamic population mapping via deep neural network (DeepDPM), a model that describes both spatial and temporal patterns using coarse data and point-of-interest information. In DeepDPM, we utilize a super-resolution convolutional neural network (SRCNN)-based model to directly map coarse data into higher-resolution data, and a time-embedded long short-term memory model to effectively capture periodicity and smooth the finer-scaled results from the static SRCNN model. We perform extensive experiments on a real-life mobile dataset collected from Shanghai. Our results demonstrate that DeepDPM outperforms previous state-of-the-art methods and a suite of frequent data-mining approaches. Moreover, DeepDPM breaks through the limitation of previous works in the time dimension, so that dynamic predictions in all-day time slots can be obtained.

33.Distilling Critical Paths in Convolutional Neural Networks pdf

Neural network compression and acceleration are in wide demand due to resource constraints on most deployment targets. In this paper, by analyzing filter activations and gradients and visualizing the filters' functionality in convolutional neural networks, we show that filters in higher layers learn extremely task-specific features, which are exclusive to only a small subset of the overall tasks, or even a single class. Based on these findings, we reveal the critical paths of information flow for different classes. Exploiting their intrinsic exclusiveness, we propose a critical path distillation method, which can effectively customize convolutional neural networks into small ones with much smaller model size and less computation.

34.Computational Histological Staining and Destaining of Prostate Core Biopsy RGB Images with Generative Adversarial Neural Networks pdf

Histopathology tissue samples are widely available in two states: paraffin-embedded unstained and non-paraffin-embedded stained whole slide RGB images (WSRI). Hematoxylin and eosin (H&E) is one of the principal stains in histology but suffers from several shortcomings related to tissue preparation, staining protocols, slowness, and human error. We report two novel approaches for training machine learning models for the computational H&E staining and destaining of prostate core biopsy RGB images. The staining model uses a conditional generative adversarial network that learns hierarchical non-linear mappings between whole slide RGB image (WSRI) pairs of prostate core biopsies before and after H&E staining. The trained staining model can then generate computationally H&E-stained prostate core WSRIs using previously unseen non-stained biopsy images as input. The destaining model, by learning mappings between an H&E-stained WSRI and a non-stained WSRI of the same biopsy, can computationally destain previously unseen H&E-stained images. Structural and anatomical details of prostate tissue and the colors, shapes, geometries, and locations of nuclei, stroma, vessels, glands, and other cellular components were generated by both models, with structural similarity indices of 0.68 (staining) and 0.84 (destaining). The proposed models enable computational H&E staining and destaining of biopsy WSRIs without additional equipment or devices.

35.Demystifying Neural Network Filter Pruning pdf

Conventional filter pruning methods for Convolutional Neural Networks (CNNs), based on filter magnitude ranking (e.g., the L1 norm), have proved highly effective at reducing computational load. Although effective, these methods are rarely analyzed from the perspective of filter functionality. In this work, we explore filter pruning and retraining through qualitative interpretation of filter functionality. We find that magnitude-based methods fail to eliminate filters with repetitive functionality, and that the retraining phase actually reconstructs the remaining filters to compensate for the functionality of wrongly-pruned critical filters. With a proposed functionality-oriented pruning method, we further verify that, by precisely addressing filter functionality redundancy, a CNN can be pruned without considerable accuracy drop, and the retraining phase becomes unnecessary.

36.A mixed signal architecture for convolutional neural networks pdf

Deep neural network (DNN) accelerators with improved energy and delay are desirable for meeting the requirements of hardware targeted for IoT and edge computing systems. Convolutional neural networks (CoNNs) are among the most popular DNN architectures. This paper presents the design and evaluation of an accelerator for CoNNs. The system-level architecture is based on mixed-signal, cellular neural networks (CeNNs). Specifically, we present (i) the implementation of different layers, including convolution, ReLU, and pooling, in a CoNN using CeNN, (ii) modified CoNN structures with CeNN-friendly layers to reduce computational overheads typically associated with a CoNN, (iii) a mixed-signal CeNN architecture that performs CoNN computations in the analog and mixed-signal domain, and (iv) design space exploration that identifies which CeNN-based algorithm and architectural features fare best compared to existing algorithms and architectures when evaluated over common datasets -- MNIST and CIFAR-10. Notably, the proposed approach can lead to 8.7$\times$ improvements in energy-delay product (EDP) per digit classification for the MNIST dataset at iso-accuracy when compared with the state-of-the-art DNN engine, while offering 4.3$\times$ improvements in EDP when compared to other network implementations for the CIFAR-10 dataset.

37.Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge pdf

Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients who underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that, apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.

38.Learning Bone Suppression from Dual Energy Chest X-rays using Adversarial Networks pdf

Suppressing bones such as ribs and the clavicle on chest X-rays is often expected to improve pathology classification. These bones can interfere with a broad range of diagnostic tasks for pulmonary disease, except those concerning the musculoskeletal system. The current conventional method for acquiring bone-suppressed X-rays is dual-energy imaging, which captures two radiographs at a very short interval with different energy levels; however, the patient is exposed to radiation twice, and artifacts arise due to heartbeats between the two shots. In this paper, we introduce a deep generative model trained to predict bone-suppressed images on single-energy chest X-rays by analyzing a finite set of previously acquired dual-energy chest X-rays. Since only a relatively small amount of data is available, such an approach relies on maximizing data utilization. Here we integrate the following two approaches. First, we use a conditional generative adversarial network that complements the traditional regression method of minimizing the pairwise image difference. Second, we use Haar 2D wavelet decomposition to offer a perceptual guideline in frequency details that allows the model to converge quickly and efficiently. As a result, we achieve state-of-the-art performance on bone suppression compared to existing approaches with dual-energy chest X-rays.
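
The Haar 2D wavelet step is well defined; with PyWavelets, a single-level decomposition into approximation and detail bands looks like this (how the paper weights each band in its loss is not specified here):

```python
import pywt  # PyWavelets

def haar_bands(image):
    """Return the approximation band and the horizontal, vertical, and
    diagonal detail bands of a single-level 2D Haar decomposition."""
    cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
    return cA, cH, cV, cD
```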

39.Vehicle Tracking Using Surveillance with Multimodal Data Fusion pdf

Vehicle location prediction, or vehicle tracking, is a significant topic within connected vehicles. This task, however, is difficult if only single-modality data are available, which can introduce bias and impede accuracy. With the development of sensor networks in connected vehicles, multimodal data are becoming accessible. Therefore, we propose a framework for vehicle tracking with multimodal data fusion. Specifically, we fuse the results of two modalities, images and velocity, in our vehicle-tracking task. Images, processed in the vehicle detection module, provide direct information about the features of vehicles, whereas velocity estimation further narrows down the possible locations of the target vehicles, which reduces the number of features to be compared and decreases time consumption and computational cost. Vehicle detection is designed with a color-faster R-CNN, which takes both the shape and color of the vehicles into consideration. Velocity estimation uses the Kalman filter, a classical method for tracking. Finally, a multimodal data fusion method is applied to integrate these outcomes so that vehicle-tracking tasks can be achieved. Experimental results suggest the efficiency of our methods, which can track vehicles using a series of surveillance cameras in urban areas.
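
The velocity-estimation module relies on a classical Kalman filter; a minimal constant-velocity filter for 2D position (the noise levels are placeholders) is sketched below:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],     # state transition; state is [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # we observe position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01             # process noise (assumed)
R = np.eye(2) * 1.0              # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle given state x, covariance P, measurement z."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```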

40.Embedded polarizing filters to separate diffuse and specular reflection pdf

Polarizing filters provide a powerful way to separate diffuse and specular reflection; however, traditional methods rely on several captures and require proper alignment of the filters. Recently, camera manufacturers have proposed to embed polarizing micro-filters in front of the sensor, creating a mosaic of pixels with different polarizations. In this paper, we investigate the advantages of such camera designs. In particular, we consider different design patterns for the filter arrays and propose an algorithm to demosaic an image generated by such cameras. This essentially allows us to separate the diffuse and specular components using a single image. The performance of our algorithm is compared with a color-based method using synthetic and real data. Finally, we demonstrate how we can recover the normals of a scene using the diffuse images estimated by our method.

41.Point2Sequence: Learning the Shape Representation of 3D Point Clouds with an Attention-based Sequence to Sequence Network pdf

Exploring contextual information in the local region is important for shape understanding and analysis. Existing studies often employ hand-crafted or explicit ways to encode contextual information of local regions. However, it is hard to capture fine-grained contextual information in hand-crafted or explicit manners, such as the correlation between different areas in a local region, which limits the discriminative ability of learned features. To resolve this issue, we propose a novel deep learning model for 3D point clouds, named Point2Sequence, to learn 3D shape features by capturing fine-grained contextual information in a novel implicit way. Point2Sequence employs a novel sequence learning model for point clouds to capture the correlations by aggregating multi-scale areas of each local region with attention. Specifically, Point2Sequence first learns the feature of each area scale in a local region. Then, it captures the correlation between area scales in the process of aggregating all area scales using a recurrent neural network (RNN) based encoder-decoder structure, where an attention mechanism is proposed to highlight the importance of different area scales. Experimental results show that Point2Sequence achieves state-of-the-art performance in shape classification and segmentation tasks.

42.Generative Adversarial Speaker Embedding Networks for Domain Robust End-to-End Speaker Verification pdf

This article presents a novel approach for learning domain-invariant speaker embeddings using Generative Adversarial Networks. The main idea is to confuse a domain discriminator so that it cannot tell whether embeddings come from the source or target domain. We train several GAN variants using our proposed framework and apply them to the speaker verification task. On the challenging NIST-SRE 2016 dataset, we are able to match the performance of a strong baseline x-vector system. In contrast to the baseline systems, which depend on dimensionality reduction (LDA) and an external classifier (PLDA), our proposed speaker embeddings can be scored using simple cosine distance. This is achieved by optimizing our models end-to-end with an angular margin loss function. Furthermore, we are able to significantly boost verification performance by averaging our different GAN models at the score level, achieving a relative improvement of 7.2% over the baseline.

43.FLOPs as a Direct Optimization Objective for Learning Sparse Neural Networks pdf

There exists a plethora of techniques for inducing structured sparsity in parametric models during the optimization process, with the final goal of resource-efficient inference. However, to the best of our knowledge, none target a specific number of floating-point operations (FLOPs) as part of a single end-to-end optimization objective, despite reporting FLOPs as part of the results. Furthermore, a one-size-fits-all approach ignores realistic system constraints, which differ significantly between, say, a GPU and a mobile phone -- FLOPs on the former incur less latency than on the latter; thus, it is important for practitioners to be able to specify a target number of FLOPs during model compression. In this work, we extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective and show that, given a desired FLOPs requirement, different neural networks can be successfully trained for image classification.
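
One way to make FLOPs part of a single end-to-end objective, shown here as a hedged sketch rather than the paper's method: give each prunable unit a gate probability and a known per-unit cost, then penalize the gap between expected FLOPs and the target:

```python
import torch

def flops_objective(task_loss, gate_logits, flops_per_unit, target_flops, lam=1e-2):
    """task_loss plus a penalty steering expected FLOPs toward target_flops.

    gate_logits:    (n_units,) learnable logits, one per prunable unit
    flops_per_unit: (n_units,) FLOPs contributed by each unit if kept
    """
    p = torch.sigmoid(gate_logits)               # keep-probabilities per unit
    expected_flops = (p * flops_per_unit).sum()  # differentiable FLOPs estimate
    return task_loss + lam * torch.abs(expected_flops - target_flops)
```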

44.Adapting End-to-End Neural Speaker Verification to New Languages and Recording Conditions with Adversarial Training pdf

In this article we propose a novel approach for adapting speaker embeddings to new domains based on adversarial training of neural networks. We apply our embeddings to the task of text-independent speaker verification, a challenging, real-world problem in biometric security. We further the development of end-to-end speaker embedding models by combining a novel 1-dimensional, self-attentive residual network, an angular margin loss function, and an adversarial training strategy. Our model is able to learn extremely compact, 64-dimensional speaker embeddings that deliver competitive performance on a number of popular datasets using simple cosine distance scoring. On the NIST-SRE 2016 task we are able to beat a strong i-vector baseline, while on the Speakers in the Wild task our model outperforms both i-vector and x-vector baselines, showing an absolute improvement of 2.19% over the latter. Additionally, we show that the integration of adversarial training consistently leads to a significant improvement over an unadapted model.

45.A Holistic Visual Place Recognition Approach using Lightweight CNNs for Severe ViewPoint and Appearance Changes pdf

Recently, deep and complex Convolutional Neural Network (CNN) architectures have achieved encouraging results for Visual Place Recognition under strong viewpoint and appearance changes. However, the significant computation and memory overhead of these CNNs limits their practical deployment on resource-constrained mobile robots that are usually battery-operated. Achieving state-of-the-art accuracy with lightweight CNN architectures is thus highly desirable but challenging. In this paper, a holistic approach is presented that combines novel region-based features from a lightweight CNN architecture, pretrained on a place-/scene-centric image database, with a Vector of Locally Aggregated Descriptors (VLAD) encoding methodology adapted specifically for the Visual Place Recognition problem. The proposed approach is evaluated on a number of challenging benchmark datasets (under strong viewpoint and appearance variations) and achieves an average performance boost of 10% over state-of-the-art algorithms in terms of Area Under the Curve (AUC) calculated from precision-recall curves.
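
For context, generic VLAD encoding, which the paper adapts to regional CNN features (treat this as the textbook form, not the adapted variant):

```python
import numpy as np

def vlad_encode(descriptors, centers):
    """Aggregate (n, d) local descriptors against (k, d) visual words:
    sum residuals to the nearest center, intra-normalize, then L2-normalize."""
    k, d = centers.shape
    assign = np.argmin(((descriptors[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    vlad = np.zeros((k, d))
    for i in range(k):
        if np.any(assign == i):
            vlad[i] = (descriptors[assign == i] - centers[i]).sum(axis=0)
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12  # intra-normalization
    v = vlad.ravel()
    return v / (np.linalg.norm(v) + 1e-12)
```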

46.Beyond the Leaderboard: Insight and Deployment Challenges to Address Research Problems pdf

In the medical image analysis field, organizing challenges with associated workshops at international conferences began in 2007 and has grown to include over 150 challenges. Several of these challenges have had a major impact in the field. However, whereas well-designed challenges have the potential to unite and focus the field on creating solutions to important problems, poorly designed and documented challenges can equally impede a field and lead to pursuing incremental improvements in metric scores with no theoretic or clinical significance. This is supported by a critical assessment of challenges at the international MICCAI conference. In this assessment the main observation was that small changes to the underlying challenge data can drastically change the ranking order on the leaderboard. Related to this is the practice of leaderboard climbing, which is characterized by participants focusing on incrementally improving metric results rather than advancing science or solving the driving problem of a challenge. In this abstract we look beyond the leaderboard of a challenge and instead look at the conclusions that can be drawn from a challenge with respect to the research problem that it is addressing. Research study design is well described in other research areas and can be translated to challenge design when viewing challenges as research studies on algorithm performance that address a research problem. Based on the two main types of scientific research study design, we propose two main challenge types, which we think would benefit other research areas as well: 1) an insight challenge that is based on a qualitative study design and 2) a deployment challenge that is based on a quantitative study design. In addition we briefly touch upon related considerations with respect to statistical significance versus practical significance, generalizability and data saturation.

47.Learning to Compose Topic-Aware Mixture of Experts for Zero-Shot Video Captioning pdf

Although promising results have been achieved in video captioning, existing models are limited to the fixed inventory of activities in the training corpus and do not generalize to open-vocabulary scenarios. Here we introduce a novel task, zero-shot video captioning, that aims at describing out-of-domain videos of unseen activities. Videos of different activities usually require different captioning strategies in many aspects, e.g., word selection, semantic construction, and style expression, which poses a great challenge to depicting novel activities without paired training data. Meanwhile, similar activities share some of these aspects. We therefore propose a principled Topic-Aware Mixture of Experts (TAMoE) model for zero-shot video captioning, which learns to compose different experts based on different topic embeddings, implicitly transferring the knowledge learned from seen activities to unseen ones. In addition, we leverage an external topic-related text corpus to construct the topic embedding for each activity, which embodies the most relevant semantic vectors within the topic. Empirical results not only validate the effectiveness of our method in utilizing semantic knowledge for video captioning, but also show its strong generalization ability when describing novel activities.