Paper Group ANR 784
“I can assure you [$\ldots$] that it’s going to be all right” – A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships
Title | “I can assure you [$\ldots$] that it’s going to be all right” – A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships |
Authors | Brett W Israelsen |
Abstract | As technology becomes more advanced, those who design, use, and are otherwise affected by it want to know that it will perform correctly, understand why it does what it does, and know how to use it appropriately. In essence, they want to be able to trust the systems being designed. In this survey we present assurances, the means by which users can understand how to trust this technology. Trust between humans and autonomy is reviewed, and the implications for the design of assurances are highlighted. A survey of research that has been performed with respect to assurances is presented, and several key ideas are extracted in order to refine the definition of assurances. Several directions for future research are identified and discussed. |
Tasks | |
Published | 2017-08-01 |
URL | http://arxiv.org/abs/1708.00495v2 |
PDF | http://arxiv.org/pdf/1708.00495v2.pdf |
PWC | https://paperswithcode.com/paper/i-can-assure-you-ldots-that-its-going-to-be |
Repo | |
Framework | |
Binary adaptive embeddings from order statistics of random projections
Title | Binary adaptive embeddings from order statistics of random projections |
Authors | Diego Valsesia, Enrico Magli |
Abstract | We use some of the largest order statistics of the random projections of a reference signal to construct a binary embedding that is adapted to signals correlated with such signal. The embedding is characterized from the analytical standpoint and shown to provide improved performance on tasks such as classification in a reduced-dimensionality space. |
Tasks | |
Published | 2017-01-30 |
URL | http://arxiv.org/abs/1701.08511v1 |
PDF | http://arxiv.org/pdf/1701.08511v1.pdf |
PWC | https://paperswithcode.com/paper/binary-adaptive-embeddings-from-order |
Repo | |
Framework | |
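The construction described in the abstract can be sketched in a few lines: project a reference signal with a random Gaussian matrix, keep the rows where the reference's projections have the largest magnitude (its top order statistics), and binarize other signals against those rows. This is an illustrative reading of the abstract, not the authors' code; the matrix sizes and the toy correlation check below are invented.

```python
import numpy as np

def adaptive_binary_embedding(reference, signals, m=256, k=64, seed=0):
    """Sketch: project with a random Gaussian matrix, keep the k rows
    where the reference signal's projections are largest in magnitude
    (its top order statistics), and embed signals as the signs of
    those selected projections."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, reference.shape[0]))
    ref_proj = A @ reference
    top = np.argsort(np.abs(ref_proj))[-k:]     # top-k order statistics
    A_adapted = A[top]                          # rows adapted to the reference
    return (A_adapted @ signals.T > 0).astype(np.uint8).T  # k-bit codes

# signals correlated with the reference land closer in Hamming distance
rng = np.random.default_rng(1)
ref = rng.standard_normal(128)
near = ref + 0.1 * rng.standard_normal(128)     # correlated with ref
far = rng.standard_normal(128)                  # independent of ref
codes = adaptive_binary_embedding(ref, np.stack([ref, near, far]))
d_near = int(np.sum(codes[0] != codes[1]))
d_far = int(np.sum(codes[0] != codes[2]))
print(d_near, d_far)
```

Because the selected rows are exactly those where the reference projects strongly, small perturbations of the reference rarely flip the sign of those projections, which is what makes the code "adapted" to correlated signals.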
Slice-to-volume medical image registration: a survey
Title | Slice-to-volume medical image registration: a survey |
Authors | Enzo Ferrante, Nikos Paragios |
Abstract | During the last decades, the research community of medical imaging has witnessed continuous advances in image registration methods, which pushed the limits of the state-of-the-art and enabled the development of novel medical procedures. A particular type of image registration problem, known as slice-to-volume registration, played a fundamental role in areas like image guided surgeries and volumetric image reconstruction. However, to date, and despite the extensive literature available on this topic, no survey has been written to discuss this challenging problem. This paper introduces the first comprehensive survey of the literature about slice-to-volume registration, presenting a categorical study of the algorithms according to an ad-hoc taxonomy and analyzing advantages and disadvantages of every category. We draw some general conclusions from this analysis and present our perspectives on the future of the field. |
Tasks | Image Reconstruction, Image Registration, Medical Image Registration |
Published | 2017-02-06 |
URL | http://arxiv.org/abs/1702.01636v2 |
PDF | http://arxiv.org/pdf/1702.01636v2.pdf |
PWC | https://paperswithcode.com/paper/slice-to-volume-medical-image-registration-a |
Repo | |
Framework | |
Sentiment Predictability for Stocks
Title | Sentiment Predictability for Stocks |
Authors | Jordan Prosky, Xingyou Song, Andrew Tan, Michael Zhao |
Abstract | In this work, we present our findings and experiments for stock-market prediction using various textual sentiment analysis tools, such as mood analysis and event extraction, as well as prediction models, such as LSTMs and specific convolutional architectures. |
Tasks | Sentiment Analysis, Stock Market Prediction |
Published | 2017-12-15 |
URL | http://arxiv.org/abs/1712.05785v2 |
PDF | http://arxiv.org/pdf/1712.05785v2.pdf |
PWC | https://paperswithcode.com/paper/sentiment-predictability-for-stocks |
Repo | |
Framework | |
Opening the Black Box of Financial AI with CLEAR-Trade: A CLass-Enhanced Attentive Response Approach for Explaining and Visualizing Deep Learning-Driven Stock Market Prediction
Title | Opening the Black Box of Financial AI with CLEAR-Trade: A CLass-Enhanced Attentive Response Approach for Explaining and Visualizing Deep Learning-Driven Stock Market Prediction |
Authors | Devinder Kumar, Graham W Taylor, Alexander Wong |
Abstract | Deep learning has been shown to outperform traditional machine learning algorithms across a wide range of problem domains. However, current deep learning algorithms have been criticized as uninterpretable “black-boxes” which cannot explain their decision-making processes. This is a major shortcoming that prevents the widespread application of deep learning to domains with regulatory processes such as finance. As such, industries such as finance have to rely on traditional models like decision trees that are much more interpretable but less effective than deep learning for complex problems. In this paper, we propose CLEAR-Trade, a novel financial AI visualization framework for deep learning-driven stock market prediction that mitigates the interpretability issue of deep learning methods. In particular, CLEAR-Trade provides an effective way to visualize and explain decisions made by deep stock market prediction models. We show the efficacy of CLEAR-Trade in enhancing the interpretability of stock market prediction by conducting experiments based on S&P 500 stock index prediction. The results demonstrate that CLEAR-Trade can provide significant insight into the decision-making process of deep learning-driven financial models, particularly for regulatory processes, thus improving their potential uptake in the financial industry. |
Tasks | Decision Making, Stock Market Prediction |
Published | 2017-09-05 |
URL | http://arxiv.org/abs/1709.01574v1 |
PDF | http://arxiv.org/pdf/1709.01574v1.pdf |
PWC | https://paperswithcode.com/paper/opening-the-black-box-of-financial-ai-with |
Repo | |
Framework | |
Improving Search through A3C Reinforcement Learning based Conversational Agent
Title | Improving Search through A3C Reinforcement Learning based Conversational Agent |
Authors | Milan Aggarwal, Aarushi Arora, Shagun Sodhani, Balaji Krishnamurthy |
Abstract | We develop a reinforcement learning based search assistant which can assist users through a set of actions and a sequence of interactions to enable them to realize their intent. Our approach caters to subjective search, where the user is seeking digital assets such as images, which is fundamentally different from tasks that have objective and limited search modalities. Labeled conversational data is generally not available in such search tasks, and training the agent through human interactions can be time consuming. We propose a stochastic virtual user which impersonates a real user and can be used to sample user behavior efficiently to train the agent, which accelerates the bootstrapping of the agent. We develop an A3C-based context-preserving architecture which enables the agent to provide contextual assistance to the user. We compare the A3C agent with Q-learning and evaluate its performance on the average rewards and state values it obtains with the virtual user in validation episodes. Our experiments show that the agent learns to achieve higher rewards and better states. |
Tasks | Q-Learning |
Published | 2017-09-17 |
URL | http://arxiv.org/abs/1709.05638v2 |
PDF | http://arxiv.org/pdf/1709.05638v2.pdf |
PWC | https://paperswithcode.com/paper/improving-search-through-a3c-reinforcement |
Repo | |
Framework | |
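A minimal sketch of the training setup the abstract describes: a stochastic virtual user stands in for a real user so that a dialogue agent can be bootstrapped cheaply. The toy dialogue MDP, its rewards, and the tabular Q-learning loop below are all invented for illustration; the paper itself trains an A3C agent with a context-preserving architecture.

```python
import random

def virtual_user(state, action):
    """Stochastic virtual user standing in for a real user. `state` is
    how far the agent has narrowed down the intent (0..3). All of the
    dynamics and rewards here are invented for illustration."""
    if action == "probe":                    # ask a clarifying question
        if random.random() < 0.8:            # user usually answers
            return min(state + 1, 3), 0.0
        return state, -0.1                   # user ignores the question
    # "show_results": succeeds only once the intent is pinned down
    return state, (1.0 if state == 3 else -0.5)

def train_q(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1):
    actions = ["probe", "show_results"]
    Q = {(s, a): 0.0 for s in range(4) for a in actions}
    for _ in range(episodes):
        s = 0
        for _ in range(10):                  # cap the episode length
            if random.random() < eps:        # epsilon-greedy exploration
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2, r = virtual_user(s, a)
            done = a == "show_results"       # showing results ends the dialogue
            target = r if done else r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            if done:
                break
            s = s2
    return Q

random.seed(0)
Q = train_q()
best = lambda s: max(["probe", "show_results"], key=lambda act: Q[(s, act)])
print(best(0), best(3))  # probe while uncertain, show results when done
```

The point of the virtual user is that thousands of such episodes can be sampled in seconds, which is the bootstrapping speedup the abstract claims over training against human interactions.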
Evaluating Semantic Parsing against a Simple Web-based Question Answering Model
Title | Evaluating Semantic Parsing against a Simple Web-based Question Answering Model |
Authors | Alon Talmor, Mor Geva, Jonathan Berant |
Abstract | Semantic parsing shines at analyzing complex natural language that involves composition and computation over multiple pieces of evidence. However, datasets for semantic parsing contain many factoid questions that can be answered from a single web document. In this paper, we propose to evaluate semantic parsing-based question answering models by comparing them to a question answering baseline that queries the web and extracts the answer only from web snippets, without access to the target knowledge-base. We investigate this approach on COMPLEXQUESTIONS, a dataset designed to focus on compositional language, and find that our model obtains reasonable performance (35 F1 compared to 41 F1 of state-of-the-art). We find in our analysis that our model performs well on complex questions involving conjunctions, but struggles on questions that involve relation composition and superlatives. |
Tasks | Question Answering, Semantic Parsing |
Published | 2017-07-14 |
URL | http://arxiv.org/abs/1707.04412v1 |
PDF | http://arxiv.org/pdf/1707.04412v1.pdf |
PWC | https://paperswithcode.com/paper/evaluating-semantic-parsing-against-a-simple |
Repo | |
Framework | |
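The baseline's key idea, answering from web snippets alone without a knowledge base, can be caricatured in a few lines: score candidate answer tokens by their frequency across snippets, excluding stopwords and question words. The real model ranks candidate spans with learned features; this toy counter only illustrates the snippet-only setting.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "is", "was", "and", "to", "for", "not"}

def answer_from_snippets(question, snippets):
    """Score candidate answer tokens by how often they appear across
    web snippets, ignoring stopwords and words from the question."""
    q_words = set(re.findall(r"\w+", question.lower()))
    counts = Counter()
    for snippet in snippets:
        for tok in re.findall(r"\w+", snippet.lower()):
            if tok not in STOPWORDS and tok not in q_words:
                counts[tok] += 1
    return counts.most_common(1)[0][0] if counts else None

snippets = [
    "Canberra is the capital city of Australia.",
    "The capital of Australia is Canberra, not Sydney.",
    "Australia's capital, Canberra, was founded in 1913.",
]
print(answer_from_snippets("What is the capital of Australia?", snippets))
```

A baseline this crude works for factoid questions because the answer tends to co-occur with the question terms in many snippets, which is exactly why the paper finds it competitive on simple questions but weak on relation composition and superlatives.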
Chipmunk: A Systolically Scalable 0.9 mm${}^2$, 3.08 Gop/s/mW @ 1.2 mW Accelerator for Near-Sensor Recurrent Neural Network Inference
Title | Chipmunk: A Systolically Scalable 0.9 mm${}^2$, 3.08 Gop/s/mW @ 1.2 mW Accelerator for Near-Sensor Recurrent Neural Network Inference |
Authors | Francesco Conti, Lukas Cavigelli, Gianna Paulin, Igor Susmelj, Luca Benini |
Abstract | Recurrent neural networks (RNNs) are state-of-the-art in voice awareness/understanding and speech recognition. On-device computation of RNNs on low-power mobile and wearable devices would be key to applications such as zero-latency voice-based human-machine interfaces. Here we present Chipmunk, a small (<1 mm${}^2$) hardware accelerator for Long-Short Term Memory RNNs in UMC 65 nm technology, capable of operating at a measured peak efficiency of up to 3.08 Gop/s/mW at 1.24 mW peak power. To implement big RNN models without incurring huge memory transfer overhead, multiple Chipmunk engines can cooperate to form a single systolic array. In this way, the Chipmunk architecture in a 75-tile configuration can achieve real-time phoneme extraction on a demanding RNN topology proposed by Graves et al., consuming less than 13 mW of average power. |
Tasks | Speech Recognition |
Published | 2017-11-15 |
URL | http://arxiv.org/abs/1711.05734v2 |
PDF | http://arxiv.org/pdf/1711.05734v2.pdf |
PWC | https://paperswithcode.com/paper/chipmunk-a-systolically-scalable-09-mm2-308 |
Repo | |
Framework | |
Sample-level Deep Convolutional Neural Networks for Music Auto-tagging Using Raw Waveforms
Title | Sample-level Deep Convolutional Neural Networks for Music Auto-tagging Using Raw Waveforms |
Authors | Jongpil Lee, Jiyoung Park, Keunhyoung Luke Kim, Juhan Nam |
Abstract | Recently, the end-to-end approach that learns hierarchical representations from raw data using deep convolutional neural networks has been successfully explored in the image, text and speech domains. This approach has been applied to musical signals as well but has not yet been fully explored. To this end, we propose sample-level deep convolutional neural networks which learn representations from very small grains of waveforms (e.g. 2 or 3 samples), going beyond typical frame-level input representations. Our experiments show how deep architectures with sample-level filters improve accuracy in music auto-tagging, providing results comparable to previous state-of-the-art performance on the MagnaTagATune dataset and the Million Song Dataset. In addition, we visualize the filters learned in each layer of a sample-level DCNN to identify hierarchically learned features, and show that they are sensitive to log-scaled frequency along the layers, similar to the mel-frequency spectrogram widely used in music classification systems. |
Tasks | Music Auto-Tagging, Music Classification |
Published | 2017-03-06 |
URL | http://arxiv.org/abs/1703.01789v2 |
PDF | http://arxiv.org/pdf/1703.01789v2.pdf |
PWC | https://paperswithcode.com/paper/sample-level-deep-convolutional-neural |
Repo | |
Framework | |
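The "sample-level" idea is simply a stack of 1-D convolutions with very short filters applied directly to the waveform. The sketch below uses filter length 3 with stride 3, matching the smallest grain the abstract mentions; the channel widths and weight scales are arbitrary, and a real model would be trained with a deep-learning framework rather than hand-rolled numpy.

```python
import numpy as np

def conv1d(x, w, stride):
    """Valid 1-D convolution over a (channels_in, time) input with
    weights of shape (channels_out, channels_in, filter_len),
    followed by a ReLU."""
    c_out, c_in, k = w.shape
    t_out = (x.shape[1] - k) // stride + 1
    out = np.empty((c_out, t_out))
    for t in range(t_out):
        patch = x[:, t * stride : t * stride + k]       # (c_in, k)
        out[:, t] = np.tensordot(w, patch, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
waveform = rng.standard_normal((1, 729))                # raw audio, 3**6 samples

# a stack of sample-level layers: filter length 3, stride 3; each layer
# shrinks the time axis by 3x, building frame-like features from samples
x = waveform
for c_in, c_out in [(1, 8), (8, 16), (16, 16)]:
    w = rng.standard_normal((c_out, c_in, 3)) * 0.1
    x = conv1d(x, w, stride=3)

print(x.shape)  # time axis: 729 -> 243 -> 81 -> 27
```

Stacking many such layers is what replaces the fixed frame-plus-spectrogram front end: the receptive field grows multiplicatively (3, 9, 27, ... samples) while the filters themselves stay tiny.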
InclusiveFaceNet: Improving Face Attribute Detection with Race and Gender Diversity
Title | InclusiveFaceNet: Improving Face Attribute Detection with Race and Gender Diversity |
Authors | Hee Jung Ryu, Hartwig Adam, Margaret Mitchell |
Abstract | We demonstrate an approach to face attribute detection that retains or improves attribute detection accuracy across gender and race subgroups by learning demographic information prior to learning the attribute detection task. The system, which we call InclusiveFaceNet, detects face attributes by transferring race and gender representations learned from a held-out dataset of public race and gender identities. Leveraging learned demographic representations while withholding demographic inference from the downstream face attribute detection task preserves potential users’ demographic privacy while resulting in some of the best reported numbers to date on attribute detection in the Faces of the World and CelebA datasets. |
Tasks | |
Published | 2017-12-01 |
URL | http://arxiv.org/abs/1712.00193v3 |
PDF | http://arxiv.org/pdf/1712.00193v3.pdf |
PWC | https://paperswithcode.com/paper/inclusivefacenet-improving-face-attribute |
Repo | |
Framework | |
Unsupervised Basis Function Adaptation for Reinforcement Learning
Title | Unsupervised Basis Function Adaptation for Reinforcement Learning |
Authors | Edward W. Barker, Charl J. Ras |
Abstract | When using reinforcement learning (RL) algorithms to evaluate a policy it is common, given a large state space, to introduce some form of approximation architecture for the value function (VF). The exact form of this architecture can have a significant effect on the accuracy of the VF estimate, however, and determining a suitable approximation architecture can often be a highly complex task. Consequently there is a large amount of interest in the potential for allowing RL algorithms to adaptively generate approximation architectures. We investigate a method of adapting approximation architectures which uses feedback regarding the frequency with which an agent has visited certain states to guide which areas of the state space to approximate with greater detail. This method is “unsupervised” in the sense that it makes no direct reference to reward or the VF estimate. We introduce an algorithm based upon this idea which adapts a state aggregation approximation architecture on-line. A common method of scoring a VF estimate is to weight the squared Bellman error of each state-action by the probability of that state-action occurring. Adopting this scoring method, and assuming $S$ states, we demonstrate theoretically that - provided (1) the number of cells $X$ in the state aggregation architecture is of order $\sqrt{S}\log_2{S}\ln{S}$ or greater, (2) the policy and transition function are close to deterministic, and (3) the prior for the transition function is uniformly distributed - our algorithm, used in conjunction with a suitable RL algorithm, can guarantee a score which is arbitrarily close to zero as $S$ becomes large. It is able to do this despite having only $O(X \log_2S)$ space complexity and negligible time complexity. The results take advantage of certain properties of the stationary distributions of Markov chains. |
Tasks | |
Published | 2017-03-03 |
URL | http://arxiv.org/abs/1703.01026v1 |
PDF | http://arxiv.org/pdf/1703.01026v1.pdf |
PWC | https://paperswithcode.com/paper/unsupervised-basis-function-adaptation-for |
Repo | |
Framework | |
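The core idea, using visit frequencies alone to decide where the state space deserves a finer aggregation, can be sketched as a greedy cell-splitting loop. The interval representation and the halving rule below are invented simplifications; the paper's on-line algorithm and its $O(X \log_2 S)$ space guarantee are more involved.

```python
def adapt_aggregation(visits, cells, max_cells):
    """Sketch of visit-frequency-driven state aggregation: repeatedly
    split the most-visited cell so that frequently visited regions of
    the state space get a finer-grained value-function approximation.
    `cells` is a list of (lo, hi) state-index intervals."""
    counts = [sum(visits[s] for s in range(lo, hi)) for lo, hi in cells]
    while len(cells) < max_cells:
        i = max(range(len(cells)), key=lambda j: counts[j])
        lo, hi = cells[i]
        if hi - lo < 2:              # most-visited cell is a single state
            break
        mid = (lo + hi) // 2
        left = sum(visits[s] for s in range(lo, mid))
        c = counts[i]
        cells[i:i + 1] = [(lo, mid), (mid, hi)]
        counts[i:i + 1] = [left, c - left]
    return cells

# toy example: visits concentrated on states 0..7 drive refinement there
visits = {s: (100 if s < 8 else 1) for s in range(64)}
cells = adapt_aggregation(visits, [(0, 64)], max_cells=6)
print(cells)
```

Note that the splitting criterion never looks at rewards or value estimates, which is the sense in which the adaptation is "unsupervised": under the occupancy-weighted Bellman-error score, refining where the agent actually spends its time is what drives the score down.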
Weakly-supervised Visual Grounding of Phrases with Linguistic Structures
Title | Weakly-supervised Visual Grounding of Phrases with Linguistic Structures |
Authors | Fanyi Xiao, Leonid Sigal, Yong Jae Lee |
Abstract | We propose a weakly-supervised approach that takes image-sentence pairs as input and learns to visually ground (i.e., localize) arbitrary linguistic phrases, in the form of spatial attention masks. Specifically, the model is trained with images and their associated image-level captions, without any explicit region-to-phrase correspondence annotations. To this end, we introduce an end-to-end model which learns visual groundings of phrases with two types of carefully designed loss functions. In addition to the standard discriminative loss, which enforces that attended image regions and phrases are consistently encoded, we propose a novel structural loss which makes use of the parse tree structures induced by the sentences. In particular, we ensure complementarity among the attention masks that correspond to sibling noun phrases, and compositionality of attention masks among the children and parent phrases, as defined by the sentence parse tree. We validate the effectiveness of our approach on the Microsoft COCO and Visual Genome datasets. |
Tasks | |
Published | 2017-05-03 |
URL | http://arxiv.org/abs/1705.01371v1 |
PDF | http://arxiv.org/pdf/1705.01371v1.pdf |
PWC | https://paperswithcode.com/paper/weakly-supervised-visual-grounding-of-phrases |
Repo | |
Framework | |
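The two structural terms the abstract describes can be illustrated on toy attention masks: sibling phrases should attend to disjoint regions, and a parent phrase's mask should compose from its children's. The elementwise-max union and the squared-error form below are assumptions for illustration, not the paper's exact loss functions.

```python
import numpy as np

def structural_loss(parent, children):
    """Toy version of the two structural terms, on attention masks
    (2-D arrays with values in [0, 1]):
    - complementarity: sibling masks should not attend to the same pixels
    - compositionality: the parent phrase's mask should match the union
      of its children's masks (approximated by an elementwise max)."""
    overlap = 0.0
    for i in range(len(children)):
        for j in range(i + 1, len(children)):
            overlap += np.mean(children[i] * children[j])
    union = np.max(np.stack(children), axis=0)
    composition = np.mean((parent - union) ** 2)
    return overlap + composition

# disjoint children whose union matches the parent score lower than
# overlapping children that ignore the parent
a = np.zeros((4, 4))
a[:, :2] = 1.0                     # attends to the left half
b = np.zeros((4, 4))
b[:, 2:] = 1.0                     # attends to the right half
parent = np.ones((4, 4))           # parent phrase covers the whole image
good = structural_loss(parent, [a, b])
bad = structural_loss(parent, [a, a])
print(good, bad)
```

Because both terms are defined purely on the masks and the sentence's parse tree, they supply training signal without any region-to-phrase annotations, which is what makes the approach weakly supervised.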
The Incredible Shrinking Neural Network: New Perspectives on Learning Representations Through The Lens of Pruning
Title | The Incredible Shrinking Neural Network: New Perspectives on Learning Representations Through The Lens of Pruning |
Authors | Aditya Sharma, Nikolas Wolfe, Bhiksha Raj |
Abstract | How much can pruning algorithms teach us about the fundamentals of learning representations in neural networks? And how much can these fundamentals help while devising new pruning techniques? A lot, it turns out. Neural network pruning has become a topic of great interest in recent years, and many different techniques have been proposed to address this problem. The decision of what to prune and when to prune necessarily forces us to confront our assumptions about how neural networks actually learn to represent patterns in data. In this work, we set out to test several long-held hypotheses about neural network learning representations, approaches to pruning and the relevance of one in the context of the other. To accomplish this, we argue in favor of pruning whole neurons as opposed to the traditional method of pruning weights from optimally trained networks. We first review the historical literature, point out some common assumptions it makes, and propose methods to demonstrate the inherent flaws in these assumptions. We then propose our novel approach to pruning and set about analyzing the quality of the decisions it makes. Our analysis led us to question the validity of many widely-held assumptions behind pruning algorithms and the trade-offs we often make in the interest of reducing computational complexity. We discovered that there is a straightforward way, however expensive, to serially prune 40-70% of the neurons in a trained network with minimal effect on the learning representation and without any re-training. It is to be noted here that the motivation behind this work is not to propose an algorithm that would outperform all existing methods, but to shed light on what some inherent flaws in these methods can teach us about learning representations and how this can lead us to superior pruning techniques. |
Tasks | Network Pruning |
Published | 2017-01-16 |
URL | http://arxiv.org/abs/1701.04465v2 |
PDF | http://arxiv.org/pdf/1701.04465v2.pdf |
PWC | https://paperswithcode.com/paper/the-incredible-shrinking-neural-network-new |
Repo | |
Framework | |
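The paper's "straightforward way, however expensive" to prune whole neurons without retraining can be illustrated with a brute-force version: greedily remove the hidden neuron whose removal perturbs the network's output least on sample data. The two-layer net and this brute-force criterion are stand-ins for the networks and ranking rules the paper actually analyzes.

```python
import numpy as np

def forward(x, W1, W2, mask):
    """Two-layer ReLU network; `mask` zeroes out pruned hidden neurons."""
    return (np.maximum(x @ W1, 0.0) * mask) @ W2

def prune_neurons(x, W1, W2, n_prune):
    """Greedily remove the hidden neuron whose removal changes the
    network's output the least on sample data x, with no retraining."""
    mask = np.ones(W1.shape[1])
    base = forward(x, W1, W2, mask)          # output before any pruning
    for _ in range(n_prune):
        best, best_err = None, np.inf
        for i in np.nonzero(mask)[0]:        # try each surviving neuron
            trial = mask.copy()
            trial[i] = 0.0
            err = np.mean((forward(x, W1, W2, trial) - base) ** 2)
            if err < best_err:
                best, best_err = i, err
        mask[best] = 0.0
    return mask

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 5))
W1 = rng.standard_normal((5, 10))
W1[:, :3] *= 1e-3                            # three nearly dead neurons
W2 = rng.standard_normal((10, 2))
mask = prune_neurons(x, W1, W2, n_prune=3)
pruned = sorted(int(i) for i in np.nonzero(mask == 0)[0])
print(pruned)
```

Each pruning step costs a forward pass per surviving neuron, which is the "expensive" part; the paper's point is that such whole-neuron removal can reach 40-70% of neurons with minimal damage to the learned representation.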
Inference in Deep Networks in High Dimensions
Title | Inference in Deep Networks in High Dimensions |
Authors | Alyson K. Fletcher, Sundeep Rangan |
Abstract | Deep generative networks provide a powerful tool for modeling complex data in a wide range of applications. In inverse problems that use these networks as generative priors on data, one must often perform inference of the inputs of the networks from the outputs. Inference is also required for sampling during stochastic training of these generative models. This paper considers inference in a deep stochastic neural network where the parameters (e.g., weights, biases and activation functions) are known and the problem is to estimate the values of the input and hidden units from the output. While several approximate algorithms have been proposed for this task, there are few analytic tools that can provide rigorous guarantees on the reconstruction error. This work presents a novel and computationally tractable output-to-input inference method called Multi-Layer Vector Approximate Message Passing (ML-VAMP). The proposed algorithm, derived from expectation propagation, extends earlier AMP methods that are known to achieve the replica predictions for optimality in simple linear inverse problems. Our main contribution shows that the mean-squared error (MSE) of ML-VAMP can be exactly predicted in a certain large system limit (LSL) where the number of layers is fixed and the weight matrices are random and orthogonally-invariant with dimensions that grow to infinity. ML-VAMP is thus a principled method for output-to-input inference in deep networks with a rigorous and precise performance achievability result in high dimensions. |
Tasks | |
Published | 2017-06-20 |
URL | http://arxiv.org/abs/1706.06549v1 |
PDF | http://arxiv.org/pdf/1706.06549v1.pdf |
PWC | https://paperswithcode.com/paper/inference-in-deep-networks-in-high-dimensions |
Repo | |
Framework | |
List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians
Title | List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians |
Authors | Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart |
Abstract | We study the problem of list-decodable Gaussian mean estimation and the related problem of learning mixtures of separated spherical Gaussians. We develop a set of techniques that yield new efficient algorithms with significantly improved guarantees for these problems. {\bf List-Decodable Mean Estimation.} Fix any $d \in \mathbb{Z}_+$ and $0< \alpha <1/2$. We design an algorithm with runtime $O (\mathrm{poly}(n/\alpha)^{d})$ that outputs a list of $O(1/\alpha)$ many candidate vectors such that with high probability one of the candidates is within $\ell_2$-distance $O(\alpha^{-1/(2d)})$ from the true mean. The only previous algorithm for this problem achieved error $\tilde O(\alpha^{-1/2})$ under second moment conditions. For $d = O(1/\epsilon)$, our algorithm runs in polynomial time and achieves error $O(\alpha^{\epsilon})$. We also give a Statistical Query lower bound suggesting that the complexity of our algorithm is qualitatively close to best possible. {\bf Learning Mixtures of Spherical Gaussians.} We give a learning algorithm for mixtures of spherical Gaussians that succeeds under significantly weaker separation assumptions compared to prior work. For the prototypical case of a uniform mixture of $k$ identity covariance Gaussians we obtain: For any $\epsilon>0$, if the pairwise separation between the means is at least $\Omega(k^{\epsilon}+\sqrt{\log(1/\delta)})$, our algorithm learns the unknown parameters within accuracy $\delta$ with sample complexity and running time $\mathrm{poly} (n, 1/\delta, (k/\epsilon)^{1/\epsilon})$. The previously best known polynomial time algorithm required separation at least $k^{1/4} \mathrm{polylog}(k/\delta)$. Our main technical contribution is a new technique, using degree-$d$ multivariate polynomials, to remove outliers from high-dimensional datasets where the majority of the points are corrupted. |
Tasks | |
Published | 2017-11-20 |
URL | http://arxiv.org/abs/1711.07211v1 |
PDF | http://arxiv.org/pdf/1711.07211v1.pdf |
PWC | https://paperswithcode.com/paper/list-decodable-robust-mean-estimation-and |
Repo | |
Framework | |