Paper Group ANR 94
SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence. Anomaly detection in chest radiographs with a weakly supervised flow-based deep learning method. Improving the Detection of Burnt Areas in Remote Sensing using Hyper-features Evolved by M3GP. Fast Adaptation to Super-Resolution Networks via Meta-Learning. Stochastic Proximal Gradient Algorithm with Minibatches. Application to Large Scale Learning Models. Agent-Based Proof Design via Lemma Flow Diagram. When Humans Aren’t Optimal: Robots that Collaborate with Risk-Aware Humans. FePh: An Annotated Facial Expression Dataset for the RWTH-PHOENIX-Weather 2014 Dataset. Emotion Recognition for In-the-wild Videos. Real Time Detection of Small Objects. More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models. Fully-hierarchical fine-grained prosody modeling for interpretable speech synthesis. Generalized Self-Adapting Particle Swarm Optimization algorithm with archive of samples. Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators. From Bit To Bedside: A Practical Framework For Artificial Intelligence Product Development In Healthcare.
SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence
Title | SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence |
Authors | Deng Yu, Lei Li, Youyi Zheng, Manfred Lau, Yi-Zhe Song, Chiew-Lan Tai, Hongbo Fu |
Abstract | In this paper, we study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object and predict semantic correspondence among the sketches. This problem is challenging, since visual features of corresponding points at different views can be very different. To this end, we take a deep learning approach and learn a novel local sketch descriptor from data. We contribute a training dataset by generating the pixel-level correspondence for the multi-view line drawings synthesized from 3D shapes. To handle the sparsity and ambiguity of sketches, we design a novel multi-branch neural network that integrates a patch-based representation and a multi-scale strategy to learn the pixel-level correspondence among multi-view sketches. We demonstrate the effectiveness of our proposed approach with extensive experiments on hand-drawn sketches, and multi-view line drawings rendered from multiple 3D shape datasets. |
Tasks | |
Published | 2020-01-16 |
URL | https://arxiv.org/abs/2001.05744v2 |
https://arxiv.org/pdf/2001.05744v2.pdf | |
PWC | https://paperswithcode.com/paper/sketchdesc-learning-local-sketch-descriptors |
Repo | |
Framework | |
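To make the multi-scale, patch-based descriptor idea concrete, here is a minimal PyTorch sketch. All layer sizes, the shared-encoder design, and the triplet loss are illustrative assumptions; SketchDesc's actual multi-branch architecture and training pipeline are not reproduced here.

```python
# Minimal sketch of a multi-scale patch descriptor network (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    """Shared CNN that embeds a 32x32 grayscale patch into a D-dim vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(128 * 4 * 4, dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class MultiScaleDescriptor(nn.Module):
    """One branch per scale: crops of growing context are resized to 32x32,
    encoded by a shared CNN, and fused into a single local descriptor."""
    def __init__(self, scales=(32, 64, 128), dim=128):
        super().__init__()
        self.scales = scales
        self.encoder = PatchEncoder(dim)
        self.fuse = nn.Linear(dim * len(scales), dim)

    def forward(self, patches):
        # patches: list of tensors (B,1,s,s), one per scale, centered on a point
        feats = [self.encoder(F.interpolate(p, size=32)) for p in patches]
        return F.normalize(self.fuse(torch.cat(feats, dim=1)), dim=1)

# Corresponding points across views are pulled together with a metric loss,
# e.g. a triplet loss over descriptor distances:
net = MultiScaleDescriptor()
a = [torch.randn(8, 1, s, s) for s in (32, 64, 128)]  # anchor patches
p = [torch.randn(8, 1, s, s) for s in (32, 64, 128)]  # matching patches
n = [torch.randn(8, 1, s, s) for s in (32, 64, 128)]  # non-matching patches
loss = F.triplet_margin_loss(net(a), net(p), net(n))
```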
Anomaly detection in chest radiographs with a weakly supervised flow-based deep learning method
Title | Anomaly detection in chest radiographs with a weakly supervised flow-based deep learning method |
Authors | H. Shibata, S. Hanaoka, Y. Nomura, T. Nakao, I. Sato, N. Hayashi, O. Abe |
Abstract | Preventing the oversight of anomalies in chest X-ray radiographs (CXRs) during diagnosis is a crucial issue. Deep learning (DL)-based anomaly detection methods are rapidly growing in popularity, and provide effective solutions to the problem, but the workload in labeling CXRs during the training procedure remains heavy. To reduce the workload, a novel anomaly detection method for CXRs based on weakly supervised DL is presented in this study. The DL is based on a flow-based deep neural network (DNN) framework with which two normality metrics (logarithm likelihood and logarithm likelihood ratio) can be calculated. With this method, only one set of normal CXRs requires labeling to train the DNN, after which the normality of any unknown CXR can be evaluated. The area under the receiver operating characteristic curve acquired with the logarithm likelihood ratio metric ($\approx0.783$) was greater than that obtained with the logarithm likelihood metric, and was a value comparable to those in previous studies where other weakly supervised DNNs were implemented. |
Tasks | Anomaly Detection |
Published | 2020-01-22 |
URL | https://arxiv.org/abs/2001.07847v1 |
https://arxiv.org/pdf/2001.07847v1.pdf | |
PWC | https://paperswithcode.com/paper/anomaly-detection-in-chest-radiographs-with-a |
Repo | |
Framework | |
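The core mechanism is that a normalizing flow gives exact log-likelihoods, so a model fit only to normal images can score any new image. Below is a toy affine-coupling flow in PyTorch as an illustration; the paper's actual DNN, CXR preprocessing, and likelihood-ratio setup are assumptions not shown here.

```python
# Illustrative flow-based anomaly scoring with a toy affine-coupling flow.
import torch
import torch.nn as nn

class Coupling(nn.Module):
    """One affine coupling layer: transforms half of x conditioned on the other."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d // 2, 64), nn.ReLU(),
                                 nn.Linear(64, d))  # outputs (log_scale, shift)

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                        # keep scales well-behaved
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], 1), s.sum(1)  # log|det J| = sum of log-scales

class Flow(nn.Module):
    def __init__(self, d, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(Coupling(d) for _ in range(n_layers))

    def log_prob(self, x):
        logdet = 0.0
        for layer in self.layers:
            x, ld = layer(x)
            x = x.flip(1)                        # reverse features so the
            logdet = logdet + ld                 # untouched half is next
        base = -0.5 * (x ** 2 + torch.log(torch.tensor(2 * torch.pi))).sum(1)
        return base + logdet

# Train by maximizing log_prob on normal CXR features only; at test time
# score = -log p(x) flags anomalies. The paper's second metric is a log-
# likelihood *ratio* between two such models.
flow = Flow(d=16)
x = torch.randn(32, 16)
nll = -flow.log_prob(x).mean()   # minimize this on normal data
```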
Improving the Detection of Burnt Areas in Remote Sensing using Hyper-features Evolved by M3GP
Title | Improving the Detection of Burnt Areas in Remote Sensing using Hyper-features Evolved by M3GP |
Authors | João E. Batista, Sara Silva |
Abstract | One problem found when working with satellite images is radiometric variation, both within a single image and across different images. Intending to improve remote sensing models for the classification of burnt areas, we set two objectives. The first is to understand the relationship between feature spaces and the predictive ability of the models, allowing us to explain the differences between learning and generalization when training and testing in different datasets. We find that training on datasets built from more than one image provides models that generalize better. These results are explained by visualizing the dispersion of values on the feature space. The second objective is to evolve hyper-features that improve the performance of different classifiers on a variety of test sets. We find the hyper-features to be beneficial, and obtain the best models with XGBoost, even if the hyper-features are optimized for a different method. |
Tasks | |
Published | 2020-01-31 |
URL | https://arxiv.org/abs/2002.00053v1 |
https://arxiv.org/pdf/2002.00053v1.pdf | |
PWC | https://paperswithcode.com/paper/improving-the-detection-of-burnt-areas-in |
Repo | |
Framework | |
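As a rough illustration of the hyper-feature idea, the sketch below searches random arithmetic expressions over the input bands and scores them by downstream classifier accuracy. This is a deliberately simplified random-search stand-in, not M3GP itself (which evolves multiple trees via genetic programming); the data, operators, and classifier are all assumptions.

```python
# Simplified stand-in for evolved hyper-features (NOT the M3GP algorithm).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                      # stand-in for spectral bands
y = (X[:, 0] * X[:, 1] - X[:, 2] > 0).astype(int)  # synthetic "burnt" label

OPS = [np.add, np.subtract, np.multiply]

def random_feature(n_cols):
    """A random binary-op expression over two input columns."""
    i, j = rng.integers(n_cols, size=2)
    op = OPS[rng.integers(len(OPS))]
    return lambda X: op(X[:, i], X[:, j])

def score(features):
    Z = np.column_stack([f(X) for f in features])
    clf = DecisionTreeClassifier(max_depth=4, random_state=0)
    return cross_val_score(clf, Z, y, cv=5).mean()

best, best_score = None, -1.0
for _ in range(200):                               # "evolution" by random search
    candidate = [random_feature(X.shape[1]) for _ in range(3)]
    s = score(candidate)
    if s > best_score:
        best, best_score = candidate, s
print(f"best hyper-feature set CV accuracy: {best_score:.3f}")
```

The evolved feature set can then be handed to any classifier (the paper finds XGBoost benefits even from hyper-features optimized for other methods).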
Fast Adaptation to Super-Resolution Networks via Meta-Learning
Title | Fast Adaptation to Super-Resolution Networks via Meta-Learning |
Authors | Seobin Park, Jinsu Yoo, Donghyeon Cho, Jiwon Kim, Tae Hyun Kim |
Abstract | Conventional supervised super-resolution (SR) approaches are trained with massive external SR datasets but fail to exploit desirable properties of the given test image. On the other hand, self-supervised SR approaches utilize the internal information within a test image but suffer from high computational cost at run time. In this work, we observe the opportunity for further improvement of the performance of single-image super-resolution (SISR) without changing the architecture of conventional SR networks by practically exploiting additional information given from the input image. In the training stage, we train the network via meta-learning; thus, the network can quickly adapt to any input image at test time. Then, in the test stage, parameters of this meta-learned network are rapidly fine-tuned with only a few iterations by only using the given low-resolution image. The adaptation at test time takes full advantage of the patch-recurrence property observed in natural images. Our method effectively handles unknown SR kernels and can be applied to any existing model. We demonstrate that the proposed model-agnostic approach consistently improves the performance of conventional SR networks on various benchmark SR datasets. |
Tasks | Meta-Learning, Super-Resolution |
Published | 2020-01-09 |
URL | https://arxiv.org/abs/2001.02905v2 |
https://arxiv.org/pdf/2001.02905v2.pdf | |
PWC | https://paperswithcode.com/paper/fast-adaptation-to-super-resolution-networks |
Repo | |
Framework | |
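The test-time step is the interesting part: the network fine-tunes on pairs built by further downscaling the given LR image, so no HR ground truth is needed. Here is a sketch of that inner loop with a toy network and bicubic downscaling; the paper's architecture, kernel handling, and meta-training schedule are assumptions omitted here.

```python
# Sketch of meta-learned SR with test-time self-supervised adaptation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    """Toy x2 SR network (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 3 * 4, 3, padding=1),
                                  nn.PixelShuffle(2))

    def forward(self, x):
        return self.body(x)

def downscale(img, factor=2):
    return F.interpolate(img, scale_factor=1 / factor, mode='bicubic',
                         align_corners=False)

def adapt(meta_net, lr_img, steps=5, alpha=1e-4):
    """Test time: fine-tune on (downscale(LR), LR) pairs built from the single
    input image, exploiting patch recurrence; no HR ground truth needed."""
    net = copy.deepcopy(meta_net)
    opt = torch.optim.SGD(net.parameters(), lr=alpha)
    son = downscale(lr_img)               # LR version of the LR image
    for _ in range(steps):
        opt.zero_grad()
        F.l1_loss(net(son), lr_img).backward()
        opt.step()
    return net

# Usage: meta-train `meta_net` (outer loop over many images with HR targets,
# MAML-style), then at test time run only a few inner steps on the LR input:
meta_net = TinySR()
lr_img = torch.rand(1, 3, 48, 48)
sr = adapt(meta_net, lr_img)(lr_img)      # (1, 3, 96, 96)
```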
Stochastic Proximal Gradient Algorithm with Minibatches. Application to Large Scale Learning Models
Title | Stochastic Proximal Gradient Algorithm with Minibatches. Application to Large Scale Learning Models |
Authors | Andrei Patrascu, Ciprian Paduraru, Paul Irofti |
Abstract | Stochastic optimization lies at the core of most statistical learning models. Recent development of stochastic algorithmic tools has focused significantly on proximal gradient iterations, in order to find an efficient approach for nonsmooth (composite) population risk functions. The complexity of finding optimal predictors by minimizing regularized risk is largely understood for simple regularizations such as $\ell_1/\ell_2$ norms. However, more complex properties desired for the predictor necessitate more difficult regularizers, as used in grouped lasso or graph trend filtering. In this chapter we develop and analyze minibatch variants of the stochastic proximal gradient algorithm for general composite objective functions with stochastic nonsmooth components. We provide iteration complexity for constant and variable stepsize policies, obtaining that, for minibatch size $N$, after $\mathcal{O}(\frac{1}{N\epsilon})$ iterations $\epsilon$-suboptimality is attained in expected quadratic distance to the optimal solution. Numerical tests on $\ell_2$-regularized SVMs and parametric sparse representation problems confirm the theoretical behaviour and show performance surpassing minibatch SGD. |
Tasks | Stochastic Optimization |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13332v1 |
https://arxiv.org/pdf/2003.13332v1.pdf | |
PWC | https://paperswithcode.com/paper/stochastic-proximal-gradient-algorithm-with |
Repo | |
Framework | |
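A minimal worked instance of the iteration: at each step, take a minibatch gradient of the smooth part and apply the proximal operator of the regularizer. The sketch below uses $\ell_1$-regularized least squares, a simple composite objective standing in for the more general stochastic nonsmooth components analyzed in the chapter; problem sizes and stepsizes are assumptions.

```python
# Minibatch stochastic proximal gradient on l1-regularized least squares.
import numpy as np

rng = np.random.default_rng(0)
m, d, lam = 1000, 50, 0.1
A = rng.normal(size=(m, d))
x_true = np.zeros(d)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=m)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sprox_gd(N=64, alpha=1e-3, iters=2000):
    x = np.zeros(d)
    for _ in range(iters):
        idx = rng.integers(m, size=N)               # minibatch of size N
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / N # gradient of smooth part
        x = soft_threshold(x - alpha * grad, alpha * lam)
    return x

# Larger minibatches reduce gradient variance; the chapter's rate says roughly
# O(1/(N * eps)) iterations suffice for eps-suboptimality.
x_hat = sprox_gd()
print("support recovered:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```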
Agent-Based Proof Design via Lemma Flow Diagram
Title | Agent-Based Proof Design via Lemma Flow Diagram |
Authors | Keehang Kwon, Daeseong Kang |
Abstract | We discuss an agent-based approach to proof design and implementation, which we call Lemma Flow Diagram (LFD). This approach is based on the multicut rule with shared cuts. It is modular and easy to use, read and automate. Thus, we consider LFD an appealing alternative to "flow proof", which is popular in mathematical education. Some examples are provided. |
Tasks | |
Published | 2020-02-03 |
URL | https://arxiv.org/abs/2002.00666v1 |
https://arxiv.org/pdf/2002.00666v1.pdf | |
PWC | https://paperswithcode.com/paper/agent-based-proof-design-via-lemma-flow |
Repo | |
Framework | |
When Humans Aren’t Optimal: Robots that Collaborate with Risk-Aware Humans
Title | When Humans Aren’t Optimal: Robots that Collaborate with Risk-Aware Humans |
Authors | Minae Kwon, Erdem Biyik, Aditi Talati, Karan Bhasin, Dylan P. Losey, Dorsa Sadigh |
Abstract | In order to collaborate safely and efficiently, robots need to anticipate how their human partners will behave. Some of today’s robots model humans as if they were also robots, and assume users are always optimal. Other robots account for human limitations, and relax this assumption so that the human is noisily rational. Both of these models make sense when the human receives deterministic rewards: i.e., gaining either $100 or $130 with certainty. But in real world scenarios, rewards are rarely deterministic. Instead, we must make choices subject to risk and uncertainty, and in these settings humans exhibit a cognitive bias towards suboptimal behavior. For example, when deciding between gaining $100 with certainty or $130 only 80% of the time, people tend to make the risk-averse choice, even though it leads to a lower expected gain! In this paper, we adopt a well-known Risk-Aware human model from behavioral economics called Cumulative Prospect Theory and enable robots to leverage this model during human-robot interaction (HRI). In our user studies, we offer supporting evidence that the Risk-Aware model more accurately predicts suboptimal human behavior. We find that this increased modeling accuracy results in safer and more efficient human-robot collaboration. Overall, we extend existing rational human models so that collaborative robots can anticipate and plan around suboptimal human behavior during HRI. |
Tasks | |
Published | 2020-01-13 |
URL | https://arxiv.org/abs/2001.04377v1 |
https://arxiv.org/pdf/2001.04377v1.pdf | |
PWC | https://paperswithcode.com/paper/when-humans-arent-optimal-robots-that |
Repo | |
Framework | |
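The abstract's $100-vs-$130 example can be worked through directly with the standard Cumulative Prospect Theory equations for gains. The sketch below uses the classic Tversky-Kahneman parameter values, which is an assumption; the paper fits its own parameters to user data.

```python
# Worked CPT example (gains only), standard Tversky-Kahneman parameters.
def v(x, alpha=0.88):
    """Value function for gains: diminishing sensitivity."""
    return x ** alpha

def w(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# The abstract's example: $100 for sure vs. $130 with probability 0.8.
certain = v(100)            # ~57.6
gamble = w(0.8) * v(130)    # ~0.607 * 72.5 ~ 44.0
print(f"CPT(certain $100) = {certain:.1f}, CPT($130 @ 80%) = {gamble:.1f}")
# CPT scores the certain option higher, matching the observed risk-averse
# choice even though the gamble's expected value ($104) is larger.
```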
FePh: An Annotated Facial Expression Dataset for the RWTH-PHOENIX-Weather 2014 Dataset
Title | FePh: An Annotated Facial Expression Dataset for the RWTH-PHOENIX-Weather 2014 Dataset |
Authors | Marie Alaghband, Niloofar Yousefi, Ivan Garibay |
Abstract | Facial expressions are important parts of both gesture and sign language recognition systems. Despite the recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce. In this manuscript, we introduce a continuous sign language facial expression dataset, comprising over $3000$ annotated images of the RWTH-PHOENIX-Weather 2014 development set. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of the seven basic emotions “sad”, “surprise”, “fear”, “angry”, “neutral”, “disgust”, and “happy”. We also consider the “None” class if the image’s facial expression could not be described by any of the aforementioned emotions. Although we provide FePh in the context of facial expression and sign language, it has a wider application in gesture recognition and Human-Computer Interaction (HCI) systems. The dataset will be publicly available. |
Tasks | Gesture Recognition, Sign Language Recognition |
Published | 2020-03-03 |
URL | https://arxiv.org/abs/2003.08759v1 |
https://arxiv.org/pdf/2003.08759v1.pdf | |
PWC | https://paperswithcode.com/paper/feph-an-annotated-facial-expression-dataset |
Repo | |
Framework | |
Emotion Recognition for In-the-wild Videos
Title | Emotion Recognition for In-the-wild Videos |
Authors | Hanyu Liu, Jiabei Zeng, Shiguang Shan, Xilin Chen |
Abstract | This paper is a brief introduction to our submission to the seven basic expression classification track of Affective Behavior Analysis in-the-wild Competition held in conjunction with the IEEE International Conference on Automatic Face and Gesture Recognition (FG) 2020. Our method combines Deep Residual Network (ResNet) and Bidirectional Long Short-Term Memory Network (BLSTM), achieving 64.3% accuracy and 43.4% final metric on the validation set. |
Tasks | Emotion Recognition, Gesture Recognition |
Published | 2020-02-13 |
URL | https://arxiv.org/abs/2002.05447v1 |
https://arxiv.org/pdf/2002.05447v1.pdf | |
PWC | https://paperswithcode.com/paper/emotion-recognition-for-in-the-wild-videos |
Repo | |
Framework | |
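The ResNet-plus-BLSTM combination described above is a standard pattern: per-frame CNN features fed through a bidirectional LSTM to produce per-frame expression logits. Below is a minimal PyTorch sketch; the choice of resnet18, the hidden size, and the pooling are illustrative assumptions, not the submission's exact configuration.

```python
# Minimal ResNet + BLSTM sketch for per-frame expression classification.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ResNetBLSTM(nn.Module):
    def __init__(self, n_classes=7, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()              # 512-dim frame features
        self.backbone = backbone
        self.blstm = nn.LSTM(512, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, clip):                     # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))    # (B*T, 512)
        seq, _ = self.blstm(feats.view(B, T, -1))    # (B, T, 2*hidden)
        return self.head(seq)                        # per-frame logits

logits = ResNetBLSTM()(torch.rand(2, 8, 3, 112, 112))   # (2, 8, 7)
```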
Real Time Detection of Small Objects
Title | Real Time Detection of Small Objects |
Authors | Al-Akhir Nayan, Joyeta Saha, Ahamad Nokib Mozumder, Khan Raqib Mahmud, Abul Kalam Al Azad |
Abstract | Existing real-time object detection algorithms based on deep convolutional neural networks need to perform multilevel convolution and pooling operations on the entire image to extract deep semantic features. These detection models perform better for large objects. However, they do not detect small objects with low resolution and noise well, because their features no longer fully represent the essential characteristics of small objects after repeated convolution operations. We introduce a novel real-time detection algorithm that employs upsampling and skip connections to extract multiscale features at different convolution levels, resulting in remarkable performance in detecting small objects. The model is shown to be more precise and faster than state-of-the-art models. |
Tasks | Object Detection, Real-Time Object Detection |
Published | 2020-03-17 |
URL | https://arxiv.org/abs/2003.07442v1 |
https://arxiv.org/pdf/2003.07442v1.pdf | |
PWC | https://paperswithcode.com/paper/real-time-detection-of-small-objects |
Repo | |
Framework | |
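The upsample-and-skip idea works by fusing deep, coarse features with shallow, high-resolution ones before prediction, so small objects still occupy several pixels in the feature map. A minimal PyTorch sketch follows; the channel counts and three-stage layout are illustrative assumptions, not the paper's network.

```python
# Sketch of upsampling + skip connections for small-object detection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())   # 1/2
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())  # 1/4
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU()) # 1/8
        # fuse upsampled deep features with the skip connection from stage2
        self.fuse = nn.Conv2d(64 + 128, 128, 3, padding=1)

    def forward(self, x):
        c1 = self.stage1(x)
        c2 = self.stage2(c1)                     # high-res, shallow semantics
        c3 = self.stage3(c2)                     # low-res, deep semantics
        up = F.interpolate(c3, size=c2.shape[-2:], mode='nearest')
        return self.fuse(torch.cat([c2, up], dim=1))  # small-object-friendly map

# A detection head would then predict boxes on this fused 1/4-resolution map.
fmap = MultiScaleBackbone()(torch.rand(1, 3, 256, 256))   # (1, 128, 64, 64)
```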
More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models
Title | More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models |
Authors | Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi |
Abstract | Despite remarkable success in practice, modern machine learning models have been found to be susceptible to adversarial attacks that make human-imperceptible perturbations to the data, but result in serious and potentially dangerous prediction errors. To address this issue, practitioners often use adversarial training to learn models that are robust against such attacks at the cost of weaker generalization accuracy on unperturbed test sets. The conventional wisdom is that more training data should shrink the generalization gap between adversarially-trained models and standard models. However, we study the training of robust classifiers for both Gaussian and Bernoulli models under $\ell_\infty$ attacks, and we prove that more data may actually increase this gap. Furthermore, our theoretical results identify if and when additional data will finally begin to shrink the gap. Lastly, we experimentally demonstrate that our results also hold for linear regression models, which may indicate that this phenomenon occurs more broadly. |
Tasks | |
Published | 2020-02-11 |
URL | https://arxiv.org/abs/2002.04725v1 |
https://arxiv.org/pdf/2002.04725v1.pdf | |
PWC | https://paperswithcode.com/paper/more-data-can-expand-the-generalization-gap |
Repo | |
Framework | |
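One can probe the paper's claim empirically on the Gaussian model it analyzes: train a standard and an adversarially trained linear classifier and compare their clean test accuracy as the sample size grows. The simulation below is purely illustrative; the data parameters, the logistic surrogate, and the subgradient loop are all assumptions, and the trend over n, not any particular number, is the point.

```python
# Illustrative simulation: clean-accuracy gap, standard vs. adversarially
# trained linear classifier, Gaussian model under an l_inf budget eps.
import numpy as np

rng = np.random.default_rng(0)
d, eps, sigma = 20, 0.3, 1.0
mu = np.full(d, 0.5)

def sample(n):
    y = rng.choice([-1.0, 1.0], size=n)
    return y[:, None] * mu + sigma * rng.normal(size=(n, d)), y

def train(X, y, robust, steps=500, lr=0.1):
    w = np.zeros(d)
    for _ in range(steps):
        # worst-case l_inf perturbation of a linear model shifts the margin
        # by eps * ||w||_1 (only in the robust objective)
        margin = y * (X @ w) - (eps * np.abs(w).sum() if robust else 0.0)
        p = 1 / (1 + np.exp(np.clip(margin, -30, 30)))   # logistic weights
        grad = -(p * y) @ X / len(y)
        if robust:                                # subgradient of eps*||w||_1
            grad += eps * p.mean() * np.sign(w)
        w -= lr * grad
    return w

def std_acc(w, n_test=20000):
    X, y = sample(n_test)
    return np.mean(np.sign(X @ w) == y)

for n in [20, 100, 1000]:
    X, y = sample(n)
    gap = std_acc(train(X, y, robust=False)) - std_acc(train(X, y, robust=True))
    print(f"n={n:5d}  standard-accuracy gap: {gap:+.3f}")
```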
Fully-hierarchical fine-grained prosody modeling for interpretable speech synthesis
Title | Fully-hierarchical fine-grained prosody modeling for interpretable speech synthesis |
Authors | Guangzhi Sun, Yu Zhang, Ron J. Weiss, Yuan Cao, Heiga Zen, Yonghui Wu |
Abstract | This paper proposes a hierarchical, fine-grained and interpretable latent variable model for prosody based on the Tacotron 2 text-to-speech model. It achieves multi-resolution modeling of prosody by conditioning finer level representations on coarser level ones. Additionally, it imposes hierarchical conditioning across all latent dimensions using a conditional variational auto-encoder (VAE) with an auto-regressive structure. Evaluation of reconstruction performance illustrates that the new structure does not degrade the model while allowing better interpretability. Interpretations of prosody attributes are provided together with the comparison between word-level and phone-level prosody representations. Moreover, both qualitative and quantitative evaluations are used to demonstrate the improvement in the disentanglement of the latent dimensions. |
Tasks | Speech Synthesis |
Published | 2020-02-06 |
URL | https://arxiv.org/abs/2002.03785v1 |
https://arxiv.org/pdf/2002.03785v1.pdf | |
PWC | https://paperswithcode.com/paper/fully-hierarchical-fine-grained-prosody |
Repo | |
Framework | |
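The key structural idea is that a fine (phone-level) latent is conditioned on a coarse (word-level) one, so the fine latent only encodes residual prosody. A highly simplified PyTorch sketch of that conditioning follows; the dimensions and Gaussian parameterization are assumptions, and Tacotron 2 itself is omitted.

```python
# Simplified sketch of hierarchical conditional latents (coarse -> fine).
import torch
import torch.nn as nn

class CondGaussian(nn.Module):
    """Encodes an input (plus conditioning) into mu/logvar and samples z."""
    def __init__(self, in_dim, cond_dim, z_dim):
        super().__init__()
        self.net = nn.Linear(in_dim + cond_dim, 2 * z_dim)

    def forward(self, x, cond):
        mu, logvar = self.net(torch.cat([x, cond], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(-1).mean()
        return z, kl

word_feat = torch.randn(4, 64)        # word-level reference features
phone_feat = torch.randn(4, 64)       # phone-level reference features

coarse = CondGaussian(64, 0, 16)      # word-level latent, unconditioned
fine = CondGaussian(64, 16, 16)       # phone-level latent conditioned on it

z_word, kl_w = coarse(word_feat, torch.zeros(4, 0))
z_phone, kl_p = fine(phone_feat, z_word)
# z_phone carries fine prosody residual to what z_word already explains; a
# decoder (not shown) would condition on both when predicting spectrograms.
```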
Generalized Self-Adapting Particle Swarm Optimization algorithm with archive of samples
Title | Generalized Self-Adapting Particle Swarm Optimization algorithm with archive of samples |
Authors | Michał Okulewicz, Mateusz Zaborski, Jacek Mańdziuk |
Abstract | In this paper we enhance the Generalized Self-Adapting Particle Swarm Optimization algorithm (GAPSO), initially introduced at the Parallel Problem Solving from Nature 2018 conference, and investigate its properties. The research on GAPSO is underlined by the two following assumptions: (1) it is possible to achieve good performance of an optimization algorithm through utilization of all of the gathered samples, (2) the best performance can be accomplished by means of a combination of specialized sampling behaviors (Particle Swarm Optimization, Differential Evolution, and locally fitted square functions). From a software engineering point of view, GAPSO considers a standard Particle Swarm Optimization algorithm as an ideal starting point for creating a general-purpose global optimization framework. Within this framework hybrid optimization algorithms are developed, and various additional techniques (like algorithm restart management or adaptation schemes) are tested. The paper introduces a new version of the algorithm, abbreviated as M-GAPSO. In comparison with the original GAPSO formulation it includes the following four features: a global restart management scheme, samples gathering within an R-Tree based index (archive/memory of samples), adaptation of a sampling behavior based on a global particle performance, and a specific approach to local search. The above-mentioned enhancements resulted in improved performance of M-GAPSO over GAPSO, observed on both the COCO BBOB testbed and in the black-box optimization competition BBComp. Also, for lower dimensionality functions (up to 5D) results of M-GAPSO are better than or comparable to the state-of-the-art version of CMA-ES (namely the KL-BIPOP-CMA-ES algorithm presented at the GECCO 2017 conference). |
Tasks | |
Published | 2020-02-28 |
URL | https://arxiv.org/abs/2002.12485v1 |
https://arxiv.org/pdf/2002.12485v1.pdf | |
PWC | https://paperswithcode.com/paper/generalized-self-adapting-particle-swarm |
Repo | |
Framework | |
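To ground assumption (1) above, here is a bare-bones PSO loop that keeps an archive of every evaluated sample, as a flat stand-in for M-GAPSO's R-Tree index; the behavior switching, restarts, and local search of the real algorithm are omitted, and the hyperparameters are conventional defaults.

```python
# Bare-bones PSO with an archive of all evaluated samples.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                        # toy objective
    return float((x ** 2).sum())

def pso(f, d=5, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    X = rng.uniform(-5, 5, size=(n, d))
    V = np.zeros((n, d))
    pbest, pbest_f = X.copy(), np.array([f(x) for x in X])
    archive = [(x.copy(), fx) for x, fx in zip(X, pbest_f)]  # all samples
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, d))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = X + V
        fx = np.array([f(x) for x in X])
        archive.extend((x.copy(), v) for x, v in zip(X, fx))
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = X[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    # The archive lets other behaviors (e.g. fitting a local quadratic model)
    # reuse every sample gathered so far, as M-GAPSO does via its R-Tree.
    return g, archive

best, archive = pso(sphere)
print("best:", best, "archive size:", len(archive))
```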
Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators
Title | Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators |
Authors | Noah Gamboa, Kais Kudrolli, Anand Dhoot, Ardavan Pedram |
Abstract | This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs. We simplify the structure of the enforced sparsity so that it reduces overhead caused by regularization. The proposed training methodology Campfire explores pruning at granularities within a convolutional kernel and filter. We study various tradeoffs with respect to pruning duration, level of sparsity, and learning rate configuration. We show that our method creates a sparse version of ResNet-50 and ResNet-50 v1.5 on full ImageNet while remaining within a negligible <1% margin of accuracy loss. To ensure that this type of sparse training does not harm the robustness of the network, we also demonstrate how the network behaves in the presence of adversarial attacks. Our results show that with 70% target sparsity, over 75% top-1 accuracy is achievable. |
Tasks | |
Published | 2020-01-09 |
URL | https://arxiv.org/abs/2001.03253v2 |
https://arxiv.org/pdf/2001.03253v2.pdf | |
PWC | https://paperswithcode.com/paper/campfire-compressable-regularization-free |
Repo | |
Framework | |
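The training pattern described above, sparsity that ramps up gradually and a mask that stops changing after a set epoch, can be sketched compactly. The cubic ramp and per-tensor magnitude criterion below are common-practice assumptions, not Campfire's exact intra-kernel/filter recipe.

```python
# Sketch of gradual magnitude pruning with a mask frozen after a set epoch.
import torch

def target_sparsity(epoch, final=0.7, ramp_end=20):
    """Cubic ramp from 0 to `final` sparsity over `ramp_end` epochs."""
    t = min(epoch / ramp_end, 1.0)
    return final * (1 - (1 - t) ** 3)

def magnitude_mask(weight, sparsity):
    """Keep the largest-magnitude fraction (1 - sparsity) of weights."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

w = torch.randn(64, 64, 3, 3)          # a conv weight tensor
mask = torch.ones_like(w)
freeze_epoch = 20
for epoch in range(30):
    if epoch <= freeze_epoch:          # mask may still change while ramping
        mask = magnitude_mask(w, target_sparsity(epoch))
    # ... one training epoch would go here; after each optimizer step the
    # pruned weights are re-zeroed so the fixed sparsity structure holds:
    w = w * mask

print(f"final sparsity: {(mask == 0).float().mean():.2f}")  # ~0.70
```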
From Bit To Bedside: A Practical Framework For Artificial Intelligence Product Development In Healthcare
Title | From Bit To Bedside: A Practical Framework For Artificial Intelligence Product Development In Healthcare |
Authors | David Higgins, Vince I. Madai |
Abstract | Artificial Intelligence (AI) in healthcare holds great potential to expand access to high-quality medical care, whilst reducing overall systemic costs. Despite regular headlines and many published proofs-of-concept, certified products are failing to break through to the clinic. AI in healthcare is a multi-party process with deep knowledge required in multiple individual domains. The lack of understanding of the specific challenges in the domain is, therefore, the major contributor to the failure to deliver on the big promises. Thus, we present a decision perspective framework, for the development of AI-driven biomedical products, from conception to market launch. Our framework highlights the risks, objectives and key results which are typically required to proceed through a three-phase process to the market launch of a validated medical AI product. We focus on issues related to clinical validation, regulatory affairs, data strategy and algorithmic development. The development process we propose for AI in healthcare software strongly diverges from modern consumer software development processes. We highlight the key time points to guide founders, investors and key stakeholders throughout their relevant part of the process. Our framework should be seen as a template for innovation frameworks, which can be used to coordinate team communications and responsibilities towards a reasonable product development roadmap, thus unlocking the potential of AI in medicine. |
Tasks | |
Published | 2020-03-23 |
URL | https://arxiv.org/abs/2003.10303v1 |
https://arxiv.org/pdf/2003.10303v1.pdf | |
PWC | https://paperswithcode.com/paper/from-bit-to-bedside-a-practical-framework-for |
Repo | |
Framework | |