Paper Group ANR 59
User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
Title | User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient |
Authors | Arnak S. Dalalyan, Avetik G. Karagulyan |
Abstract | In this paper, we study the problem of sampling from a given probability density function that is known to be smooth and strongly log-concave. We analyze several methods of approximate sampling based on discretizations of the (highly overdamped) Langevin diffusion and establish guarantees on their error measured in the Wasserstein-2 distance. Our guarantees improve or extend the state-of-the-art results in three directions. First, we provide an upper bound on the error of the first-order Langevin Monte Carlo (LMC) algorithm with optimized varying step-size. This result has the advantage of being horizon free (we do not need to know in advance the target precision) and of improving by a logarithmic factor the corresponding result for the constant step-size. Second, we study the case where accurate evaluations of the gradient of the log-density are unavailable, but one can have access to approximations of the aforementioned gradient. In such a situation, we consider both deterministic and stochastic approximations of the gradient and provide an upper bound on the sampling error of the first-order LMC that quantifies the impact of the gradient evaluation inaccuracies. Third, we establish upper bounds for two versions of the second-order LMC, which leverage the Hessian of the log-density. We provide nonasymptotic guarantees on the sampling error of these second-order LMCs. These guarantees reveal that the second-order LMC algorithms improve on the first-order LMC in ill-conditioned settings. |
Tasks | |
Published | 2017-09-29 |
URL | http://arxiv.org/abs/1710.00095v3 |
http://arxiv.org/pdf/1710.00095v3.pdf | |
PWC | https://paperswithcode.com/paper/user-friendly-guarantees-for-the-langevin |
Repo | |
Framework | |
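For readers who want a concrete picture of the algorithm analyzed above, here is a minimal NumPy sketch of the first-order LMC update with an inexact gradient oracle. The function names, the constant step size, and the additive Gaussian model for the gradient error are illustrative assumptions, not the authors' implementation (the paper also covers optimized varying step sizes, deterministic gradient errors, and second-order variants).

```python
import numpy as np

def inexact_lmc(grad_f, x0, step, n_iter, grad_noise=0.0, rng=None):
    """Langevin Monte Carlo with a possibly inaccurate gradient oracle.

    grad_f     : callable returning an approximation of the gradient of the
                 negative log-density f at a point x.
    grad_noise : std of Gaussian noise added to the gradient, a simple
                 stand-in for a stochastic/inexact oracle (illustrative only).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iter):
        g = grad_f(x) + grad_noise * rng.standard_normal(x.shape)
        # Discretized overdamped Langevin step:
        # x_{k+1} = x_k - h * grad f(x_k) + sqrt(2h) * xi_k
        x = x - step * g + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Example: sample from N(0, I_2), i.e. f(x) = ||x||^2 / 2, so grad f(x) = x.
draws = inexact_lmc(lambda x: x, x0=np.zeros(2), step=0.1,
                    n_iter=5000, grad_noise=0.05)
```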
Spiking Optical Flow for Event-based Sensors Using IBM’s TrueNorth Neurosynaptic System
Title | Spiking Optical Flow for Event-based Sensors Using IBM’s TrueNorth Neurosynaptic System |
Authors | Germain Haessig, Andrew Cassidy, Rodrigo Alvarez, Ryad Benosman, Garrick Orchard |
Abstract | This paper describes a fully spike-based neural network for optical flow estimation from Dynamic Vision Sensor data. A low-power embedded implementation of the method, which combines the Asynchronous Time-based Image Sensor with IBM’s TrueNorth Neurosynaptic System, is presented. The sensor generates spikes with sub-millisecond resolution in response to scene illumination changes. These spikes are processed by a spiking neural network running on TrueNorth with a 1 millisecond resolution to accurately determine the order and time difference of spikes from neighboring pixels, and therefore infer the velocity. The spiking neural network is a variant of the Barlow Levick method for optical flow estimation. The system is evaluated on two recordings for which ground truth motion is available, and achieves an Average Endpoint Error of 11% at an estimated power budget of under 80 mW for the sensor and computation. |
Tasks | Optical Flow Estimation |
Published | 2017-10-26 |
URL | http://arxiv.org/abs/1710.09820v1 |
http://arxiv.org/pdf/1710.09820v1.pdf | |
PWC | https://paperswithcode.com/paper/spiking-optical-flow-for-event-based-sensors |
Repo | |
Framework | |
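The core idea of the Barlow Levick variant described above is that the order and time difference of spikes at neighboring pixels encode motion direction and speed. The following is a toy, conventional-CPU sketch of that time-of-travel estimate; it is not the TrueNorth spiking network, and the pixel pitch and units are illustrative assumptions.

```python
def timing_based_velocity(t_first, t_second, pixel_pitch=1.0):
    """Toy Barlow-Levick-style velocity estimate along one axis.

    t_first, t_second : event timestamps (seconds) at two pixels that are
    `pixel_pitch` apart.  The sign of the time difference gives the motion
    direction, its magnitude the speed (pixels per second).
    """
    dt = t_second - t_first
    if dt == 0:
        return 0.0  # simultaneous events give no usable estimate
    return pixel_pitch / dt

# An edge sweeping left-to-right triggers pixel 0 at t = 0.010 s and its
# right neighbour at t = 0.012 s, giving +500 px/s.
print(timing_based_velocity(0.010, 0.012))
```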
Building Chatbots from Forum Data: Model Selection Using Question Answering Metrics
Title | Building Chatbots from Forum Data: Model Selection Using Question Answering Metrics |
Authors | Martin Boyanov, Ivan Koychev, Preslav Nakov, Alessandro Moschitti, Giovanni Da San Martino |
Abstract | We propose to use question answering (QA) data from Web forums to train chatbots from scratch, i.e., without dialog training data. First, we extract pairs of question and answer sentences from the typically much longer texts of questions and answers in a forum. We then use these shorter texts to train seq2seq models in a more efficient way. We further improve the parameter optimization using a new model selection strategy based on QA measures. Finally, we propose to use extrinsic evaluation with respect to a QA task as an automatic evaluation method for chatbots. The evaluation shows that the model achieves a MAP of 63.5% on the extrinsic task. Moreover, it can answer correctly 49.5% of the questions when they are similar to questions asked in the forum, and 47.3% of the questions when they are more conversational in style. |
Tasks | Model Selection, Question Answering |
Published | 2017-10-02 |
URL | http://arxiv.org/abs/1710.00689v1 |
http://arxiv.org/pdf/1710.00689v1.pdf | |
PWC | https://paperswithcode.com/paper/building-chatbots-from-forum-data-model |
Repo | |
Framework | |
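The extrinsic evaluation above reports Mean Average Precision (MAP) on a QA ranking task. For readers unfamiliar with the metric, here is a minimal, generic MAP computation over ranked 0/1 relevance labels; it follows the standard definition rather than the authors' evaluation scripts.

```python
def average_precision(ranked_labels):
    """Average precision for one question, given the 0/1 relevance flags of
    its candidate answers in ranked order."""
    hits, precision_sum = 0, 0.0
    for rank, relevant in enumerate(ranked_labels, start=1):
        if relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(per_question_rankings):
    """MAP over a set of questions, each with its own ranked 0/1 labels."""
    aps = [average_precision(r) for r in per_question_rankings]
    return sum(aps) / len(aps)

# Two toy questions: the first has relevant answers at ranks 1 and 3,
# the second at rank 2.
print(mean_average_precision([[1, 0, 1, 0], [0, 1, 0, 0]]))  # ~0.667
```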
Improving Robustness of Feature Representations to Image Deformations using Powered Convolution in CNNs
Title | Improving Robustness of Feature Representations to Image Deformations using Powered Convolution in CNNs |
Authors | Zhun Sun, Mete Ozay, Takayuki Okatani |
Abstract | In this work, we address the problem of improving the robustness of feature representations learned using convolutional neural networks (CNNs) to image deformations. We argue that higher moment statistics of feature distributions can be shifted by image deformations, and that this shift degrades performance and cannot be reduced by ordinary normalization methods, as observed in our experimental analyses. In order to attenuate this effect, we apply additional non-linearity in CNNs by combining power functions with learnable parameters into the convolution operation. In the experiments, we observe that CNNs which employ the proposed method obtain a remarkable boost in both generalization performance and robustness under various types of deformations on large-scale benchmark datasets. For instance, a model equipped with the proposed method obtains a 3.3% mAP boost on the Pascal VOC object detection task using deformed images, compared to the reference model, while both models provide the same performance using original images. To the best of our knowledge, this is the first work that studies the robustness of deep features learned using CNNs to a wide range of deformations for object recognition and detection. |
Tasks | Object Detection, Object Recognition |
Published | 2017-07-25 |
URL | http://arxiv.org/abs/1707.07830v1 |
http://arxiv.org/pdf/1707.07830v1.pdf | |
PWC | https://paperswithcode.com/paper/improving-robustness-of-feature |
Repo | |
Framework | |
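The abstract describes adding non-linearity by combining power functions with learnable parameters into the convolution operation. One plausible reading is a sign-preserving power transform applied to convolutional responses, with the exponent learned per channel; the sketch below fixes the exponent to a scalar for illustration and should not be taken as the paper's exact parameterization.

```python
import numpy as np

def powered_activation(x, p, eps=1e-6):
    """Sign-preserving power non-linearity: sign(x) * (|x| + eps)^p.

    In a trained CNN, p would be a learnable (e.g. per-channel) parameter
    updated by backpropagation; here it is a fixed scalar for illustration.
    """
    return np.sign(x) * np.power(np.abs(x) + eps, p)

feature_map = np.random.randn(8, 8)                  # toy convolution response
compressed = powered_activation(feature_map, p=0.5)  # p < 1 damps large responses
```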
ssEMnet: Serial-section Electron Microscopy Image Registration using a Spatial Transformer Network with Learned Features
Title | ssEMnet: Serial-section Electron Microscopy Image Registration using a Spatial Transformer Network with Learned Features |
Authors | Inwan Yoo, David G. C. Hildebrand, Willie F. Tobin, Wei-Chung Allen Lee, Won-Ki Jeong |
Abstract | The alignment of serial-section electron microscopy (ssEM) images is critical for efforts in neuroscience that seek to reconstruct neuronal circuits. However, each ssEM plane contains densely packed structures that vary from one section to the next, which makes matching features across images a challenge. Advances in deep learning have resulted in unprecedented performance in similar computer vision problems, but to our knowledge, they have not been successfully applied to ssEM image co-registration. In this paper, we introduce a novel deep network model that combines a spatial transformer for image deformation and a convolutional autoencoder for unsupervised feature learning for robust ssEM image alignment. This results in improved accuracy and robustness while requiring substantially less user intervention than conventional methods. We evaluate our method by comparing registration quality across several datasets. |
Tasks | Image Registration |
Published | 2017-07-25 |
URL | http://arxiv.org/abs/1707.07833v2 |
http://arxiv.org/pdf/1707.07833v2.pdf | |
PWC | https://paperswithcode.com/paper/ssemnet-serial-section-electron-microscopy |
Repo | |
Framework | |
Neural-Network Quantum States, String-Bond States, and Chiral Topological States
Title | Neural-Network Quantum States, String-Bond States, and Chiral Topological States |
Authors | Ivan Glasser, Nicola Pancotti, Moritz August, Ivan D. Rodriguez, J. Ignacio Cirac |
Abstract | Neural-Network Quantum States have been recently introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between Neural-Network Quantum States in the form of Restricted Boltzmann Machines and some classes of Tensor-Network states in arbitrary dimensions. In particular we demonstrate that short-range Restricted Boltzmann Machines are Entangled Plaquette States, while fully connected Restricted Boltzmann Machines are String-Bond States with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of Restricted Boltzmann Machines and their efficiency at representing many-body quantum states. String-Bond States also provide a generic way of enhancing the power of Neural-Network Quantum States and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of Tensor Networks and the efficiency of Neural-Network Quantum States into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional Tensor Networks, we show that Neural-Network Quantum States and their String-Bond States extension can describe a lattice Fractional Quantum Hall state exactly. In addition, we provide numerical evidence that Neural-Network Quantum States can approximate a chiral spin liquid with better accuracy than Entangled Plaquette States and local String-Bond States. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of String-Bond States as a tool in more traditional machine-learning applications. |
Tasks | Tensor Networks |
Published | 2017-10-11 |
URL | http://arxiv.org/abs/1710.04045v3 |
http://arxiv.org/pdf/1710.04045v3.pdf | |
PWC | https://paperswithcode.com/paper/neural-network-quantum-states-string-bond |
Repo | |
Framework | |
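For context, the Neural-Network Quantum States discussed above use a Restricted Boltzmann Machine with the hidden units traced out as a variational wave function. The sketch below evaluates that standard amplitude with real parameters for simplicity; the paper's chiral-state constructions use complex weights, and the String-Bond extensions are not shown here.

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Unnormalized RBM wave-function amplitude for spins s in {-1, +1}^n:

        psi(s) = exp(a . s) * prod_j 2 * cosh(b_j + sum_i W_ij * s_i)
    """
    theta = b + W.T @ s
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4
a = rng.normal(scale=0.1, size=n_visible)
b = rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
s = rng.choice([-1.0, 1.0], size=n_visible)
print(rbm_amplitude(s, a, b, W))
```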
Rookie: A unique approach for exploring news archives
Title | Rookie: A unique approach for exploring news archives |
Authors | Abram Handler, Brendan O’Connor |
Abstract | News archives are an invaluable primary source for placing current events in historical context. But current search engine tools do a poor job of uncovering broad themes and narratives across documents. We present Rookie: a practical software system which uses natural language processing (NLP) to help readers, reporters and editors uncover broad stories in news archives. Unlike prior work, Rookie’s design emerged from 18 months of iterative development in consultation with editors and computational journalists. This process led to a dramatically different approach from previous academic systems with similar goals. Our efforts offer a generalizable case study for others building real-world journalism software using NLP. |
Tasks | |
Published | 2017-08-06 |
URL | http://arxiv.org/abs/1708.01944v1 |
http://arxiv.org/pdf/1708.01944v1.pdf | |
PWC | https://paperswithcode.com/paper/rookie-a-unique-approach-for-exploring-news |
Repo | |
Framework | |
Automation of Feature Engineering for IoT Analytics
Title | Automation of Feature Engineering for IoT Analytics |
Authors | Snehasis Banerjee, Tanushyam Chattopadhyay, Arpan Pal, Utpal Garain |
Abstract | This paper presents an approach for automating interpretable feature selection for Internet of Things Analytics (IoTA) using machine learning (ML) techniques. The authors conducted a survey of people involved in different IoTA-based application development tasks. The survey reveals that feature selection is the most time-consuming and niche-skill-demanding part of the entire workflow. This paper shows how feature selection is successfully automated without sacrificing decision-making accuracy, thereby reducing project completion time and the cost of hiring expensive resources. Several pattern recognition principles and state-of-the-art (SoA) ML techniques are followed to design the overall approach for the proposed automation. Three data sets are considered to establish the proof-of-concept. Experimental results show that the proposed automation is able to reduce the time for feature selection to $2$ days instead of the $4-6$ months that would have been required in the absence of automation. This reduction in time is achieved without any sacrifice in the accuracy of the decision-making process. The proposed method is also compared against a Multi Layer Perceptron (MLP) model, as most state-of-the-art works on IoTA use MLP-based deep learning. Moreover, the feature selection method is compared against a SoA feature reduction technique, namely Principal Component Analysis (PCA) and its variants. The results obtained show that the proposed method is effective. |
Tasks | Decision Making, Feature Engineering, Feature Selection |
Published | 2017-07-13 |
URL | http://arxiv.org/abs/1707.04067v1 |
http://arxiv.org/pdf/1707.04067v1.pdf | |
PWC | https://paperswithcode.com/paper/automation-of-feature-engineering-for-iot |
Repo | |
Framework | |
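The paper above compares its automated, interpretable feature selection against an MLP model and against PCA-based feature reduction, but the abstract does not disclose the exact pipeline. The sketch below is only a generic illustration of that kind of comparison, using a filter-based selector and a PCA baseline on a stock dataset; none of it should be read as the authors' method.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Interpretable selection: keep the k most informative *original* features.
selected = SelectKBest(mutual_info_classif, k=2).fit_transform(X, y)

# PCA baseline: 2 linear combinations of all features (less interpretable).
projected = PCA(n_components=2).fit_transform(X)

clf = LogisticRegression(max_iter=1000)
print("selected features:", cross_val_score(clf, selected, y, cv=5).mean())
print("PCA components   :", cross_val_score(clf, projected, y, cv=5).mean())
```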
Sequential Plan Recognition
Title | Sequential Plan Recognition |
Authors | Reuth Mirsky, Roni Stern, Ya’akov Gal, Meir Kalech |
Abstract | Plan recognition algorithms infer agents’ plans from their observed actions. Due to imperfect knowledge about the agent’s behavior and the environment, it is often the case that there are multiple hypotheses about an agent’s plans that are consistent with the observations, though only one of these hypotheses is correct. This paper addresses the problem of how to disambiguate between hypotheses, by querying the acting agent about whether a candidate plan in one of the hypotheses matches its intentions. This process is performed sequentially and used to update the set of possible hypotheses during the recognition process. The paper defines the sequential plan recognition process (SPRP), which seeks to reduce the number of hypotheses using a minimal number of queries. We propose a number of policies for the SPRP which use maximum likelihood and information gain to choose which plan to query. We show this approach works well in practice on two domains from the literature, significantly reducing the number of hypotheses using fewer queries than a baseline approach. Our results can inform the design of future plan recognition systems that interleave the recognition process with intelligent interventions of their users. |
Tasks | |
Published | 2017-03-03 |
URL | http://arxiv.org/abs/1703.01083v1 |
http://arxiv.org/pdf/1703.01083v1.pdf | |
PWC | https://paperswithcode.com/paper/sequential-plan-recognition |
Repo | |
Framework | |
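The abstract names two query-selection policies, maximum likelihood and information gain. The toy sketch below illustrates both criteria on a flat set of weighted hypotheses; the data structures and names are assumptions for illustration, not the paper's plan-library formulation.

```python
import math

def most_likely_plan(hypotheses):
    """Maximum-likelihood policy: query the candidate plan carried by the
    largest total probability mass of hypotheses."""
    mass = {}
    for plans, prob in hypotheses:
        for plan in plans:
            mass[plan] = mass.get(plan, 0.0) + prob
    return max(mass, key=mass.get)

def expected_entropy_after_query(hypotheses, plan):
    """Expected posterior entropy if the agent is asked whether `plan` is part
    of its intentions; an information-gain policy queries the minimizer."""
    total = 0.0
    for branch in ([(p, pr) for p, pr in hypotheses if plan in p],
                   [(p, pr) for p, pr in hypotheses if plan not in p]):
        mass = sum(pr for _, pr in branch)
        if mass == 0:
            continue
        entropy = -sum((pr / mass) * math.log2(pr / mass) for _, pr in branch)
        total += mass * entropy
    return total

# Each hypothesis: (set of candidate plans it contains, prior probability).
hyps = [({"fix_pump", "open_valve"}, 0.5), ({"fix_pump"}, 0.3), ({"open_valve"}, 0.2)]
print(most_likely_plan(hyps))                                             # 'fix_pump'
plans = {pl for ps, _ in hyps for pl in ps}
print(min(plans, key=lambda pl: expected_entropy_after_query(hyps, pl)))  # 'open_valve'
```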
Unsupervised Segmentation of Action Segments in Egocentric Videos using Gaze
Title | Unsupervised Segmentation of Action Segments in Egocentric Videos using Gaze |
Authors | I. Hipiny, H. Ujir, J. L. Minoi, S. F. Samson Juan, M. A. Khairuddin, M. S. Sunar |
Abstract | Unsupervised segmentation of action segments in egocentric videos is a desirable feature in tasks such as activity recognition and content-based video retrieval. Reducing the search space to a finite set of action segments facilitates faster and less noisy matching. However, there exists a substantial gap in machine understanding of natural temporal cuts during a continuous human activity. This work reports on a novel gaze-based approach for segmenting action segments in videos captured using an egocentric camera. Gaze is used to locate the region of interest inside a frame. By tracking two simple motion-based parameters inside successive regions of interest, we discover a finite set of temporal cuts. We present several results using combinations (of the two parameters) on a dataset, i.e., BRISGAZE-ACTIONS. The dataset contains egocentric videos depicting several daily-living activities. The quality of the temporal cuts is further improved by implementing two entropy measures. |
Tasks | Activity Recognition, Video Retrieval |
Published | 2017-09-30 |
URL | http://arxiv.org/abs/1710.00187v1 |
http://arxiv.org/pdf/1710.00187v1.pdf | |
PWC | https://paperswithcode.com/paper/unsupervised-segmentation-of-action-segments |
Repo | |
Framework | |
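The abstract states that two simple motion-based parameters are tracked inside successive gaze-centred regions of interest and that abrupt changes yield temporal cuts, but it does not specify which parameters or what decision rule is used. The sketch below is therefore only an assumed instantiation (motion magnitude and direction, with fixed thresholds) of that general idea.

```python
import numpy as np

def temporal_cuts(motion_magnitude, motion_direction, mag_thresh=2.0, dir_thresh=1.0):
    """Flag frame indices where either per-frame ROI motion parameter changes
    abruptly between consecutive frames (candidate action boundaries).

    Both parameter choices and both thresholds are illustrative assumptions.
    """
    mag = np.asarray(motion_magnitude, dtype=float)
    ang = np.asarray(motion_direction, dtype=float)
    cuts = []
    for t in range(1, len(mag)):
        if abs(mag[t] - mag[t - 1]) > mag_thresh or abs(ang[t] - ang[t - 1]) > dir_thresh:
            cuts.append(t)
    return cuts

# Motion inside the gaze ROI jumps at frame 3: one candidate cut is reported.
print(temporal_cuts([0.5, 0.6, 0.4, 5.0, 5.1], [0.1, 0.1, 0.2, 0.2, 0.3]))  # [3]
```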
Variational Inference of Disentangled Latent Concepts from Unlabeled Observations
Title | Variational Inference of Disentangled Latent Concepts from Unlabeled Observations |
Authors | Abhishek Kumar, Prasanna Sattigeri, Avinash Balakrishnan |
Abstract | Disentangled representations, where the higher-level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, and interpretability. We consider the problem of unsupervised learning of disentangled representations from a large pool of unlabeled observations, and propose a variational inference based approach to infer disentangled latent factors. We introduce a regularizer on the expectation of the approximate posterior over observed data that encourages disentanglement. We also propose a new disentanglement metric which is better aligned with the qualitative disentanglement observed in the decoder’s output. We empirically observe significant improvement over existing methods in terms of both disentanglement and data likelihood (reconstruction quality). |
Tasks | |
Published | 2017-11-02 |
URL | http://arxiv.org/abs/1711.00848v3 |
http://arxiv.org/pdf/1711.00848v3.pdf | |
PWC | https://paperswithcode.com/paper/variational-inference-of-disentangled-latent |
Repo | |
Framework | |
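The regularizer described above acts on moments of the approximate posterior over the observed data. A common instantiation of this idea penalizes the deviation of the covariance of the encoder means (computed over a minibatch) from the identity matrix, which pushes the latent dimensions toward being decorrelated. The weighting split between diagonal and off-diagonal terms below is an illustrative assumption, not necessarily the paper's exact objective.

```python
import numpy as np

def disentanglement_penalty(mu_batch, lambda_offdiag=10.0, lambda_diag=5.0):
    """Penalize deviation of Cov[mu(x)] from the identity matrix.

    mu_batch : (batch_size, latent_dim) array of encoder means for a minibatch.
    """
    mu = np.asarray(mu_batch, dtype=float)
    centred = mu - mu.mean(axis=0, keepdims=True)
    cov = centred.T @ centred / (mu.shape[0] - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return (lambda_offdiag * np.sum(off_diag ** 2)
            + lambda_diag * np.sum((np.diag(cov) - 1.0) ** 2))

# Added to the usual VAE loss (reconstruction + KL) during training.
print(disentanglement_penalty(np.random.randn(128, 10)))
```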
CLAD: A Complex and Long Activities Dataset with Rich Crowdsourced Annotations
Title | CLAD: A Complex and Long Activities Dataset with Rich Crowdsourced Annotations |
Authors | Jawad Tayyub, Majd Hawasly, David C. Hogg, Anthony G. Cohn |
Abstract | This paper introduces a novel activity dataset which exhibits real-life and diverse scenarios of complex, temporally-extended human activities and actions. The dataset presents a set of videos of actors performing everyday activities in a natural and unscripted manner. The dataset was recorded using a static Kinect 2 sensor, which is commonly used on many robotic platforms. The dataset comprises RGB-D images, point cloud data, and automatically generated skeleton tracks, in addition to crowdsourced annotations. Furthermore, we also describe the methodology used to acquire annotations through crowdsourcing. Finally, some activity recognition benchmarks are presented using current state-of-the-art techniques. We believe that this dataset is particularly suitable as a testbed for activity recognition research, but it is also applicable to other common tasks in robotics/computer vision research such as object detection and human skeleton tracking. |
Tasks | Activity Recognition, Object Detection |
Published | 2017-09-11 |
URL | http://arxiv.org/abs/1709.03456v2 |
http://arxiv.org/pdf/1709.03456v2.pdf | |
PWC | https://paperswithcode.com/paper/clad-a-complex-and-long-activities-dataset |
Repo | |
Framework | |
Towards Semantic Query Segmentation
Title | Towards Semantic Query Segmentation |
Authors | Ajinkya Kale, Thrivikrama Taula, Sanjika Hewavitharana, Amit Srivastava |
Abstract | Query Segmentation is one of the critical components for understanding users’ search intent in Information Retrieval tasks. It involves grouping tokens in the search query into meaningful phrases which help downstream tasks like search relevance and query understanding. In this paper, we propose a novel approach to segment user queries using distributed query embeddings. Our key contribution is a supervised approach to the segmentation task using low-dimensional feature vectors for queries, getting rid of traditional hand-tuned and heuristic NLP features, which are quite expensive. We benchmark on a 50,000 human-annotated web search engine query corpus, achieving accuracy comparable to state-of-the-art techniques. The advantage of our technique is that it is fast and does not rely on an external knowledge base such as Wikipedia for score boosting. This helps us generalize our approach to other domains like eCommerce without any fine-tuning. We demonstrate the effectiveness of this method on another 50,000 human-annotated eCommerce query corpus from eBay search logs. Our approach is easy to implement and generalizes well across different search domains, proving the power of low-dimensional embeddings in the query segmentation task and opening up a new direction of research for this problem. |
Tasks | Information Retrieval |
Published | 2017-07-25 |
URL | http://arxiv.org/abs/1707.07835v1 |
http://arxiv.org/pdf/1707.07835v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-semantic-query-segmentation |
Repo | |
Framework | |
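The method above trains a supervised boundary classifier on low-dimensional query embeddings. As a simplified stand-in that still conveys how embeddings drive segmentation, the sketch below segments greedily wherever the cosine similarity between adjacent token embeddings drops below a threshold; the toy embeddings and the unsupervised threshold rule are assumptions, not the paper's supervised model.

```python
import numpy as np

def segment_query(tokens, embed, boundary_threshold=0.4):
    """Greedy segmentation: start a new phrase whenever adjacent token
    embeddings are dissimilar.  A supervised variant would instead feed such
    embedding-derived features to a trained boundary classifier."""
    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    segments, current = [], [tokens[0]]
    for prev, tok in zip(tokens, tokens[1:]):
        if cosine(embed[prev], embed[tok]) < boundary_threshold:
            segments.append(current)
            current = []
        current.append(tok)
    segments.append(current)
    return segments

# Toy 2-d "embeddings"; real usage would load distributed query embeddings.
emb = {"red": np.array([1.0, 0.1]), "dress": np.array([0.9, 0.2]),
       "free": np.array([0.0, 1.0]), "shipping": np.array([0.1, 0.9])}
print(segment_query(["red", "dress", "free", "shipping"], emb))
# [['red', 'dress'], ['free', 'shipping']]
```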
On Robustness in Multilayer Interdependent Network
Title | On Robustness in Multilayer Interdependent Network |
Authors | Joydeep Banerjee, Chenyang Zhou, Arunabha Sen |
Abstract | Critical Infrastructures like power and communication networks are highly interdependent on each other for their full functionality. Much significant research has been pursued to model the interdependency and failure analysis of these interdependent networks. However, most of these models fail to capture the complex interdependencies that might actually exist between the infrastructures. The \emph{Implicative Interdependency Model}, which utilizes Boolean logic to capture complex interdependencies, was recently proposed to overcome the limitations of the existing models. A number of problems have been studied based on this model. In this paper we study the \textit{Robustness} problem in interdependent power and communication networks. The robustness is defined with respect to two parameters $K \in I^{+} \cup \{0\}$ and $\rho \in (0,1]$. We utilize the \emph{Implicative Interdependency Model} to capture the complex interdependency between the two networks. The model classifies the interdependency relations into four cases. The computational complexity of the problem is analyzed for each of these cases. A polynomial-time algorithm is designed for the first case that outputs the optimal solution. All the other cases are proved to be NP-complete. An inapproximability bound is provided for the third case. For the general case we formulate an Integer Linear Program to obtain the optimal solution, along with a polynomial-time heuristic. The applicability of the heuristic is evaluated using power and communication network data of Maricopa County, Arizona. The experimental results show that the heuristic almost always produced a near-optimal value of the parameter $K$ for $\rho < 0.42$. |
Tasks | |
Published | 2017-01-24 |
URL | http://arxiv.org/abs/1702.01018v1 |
http://arxiv.org/pdf/1702.01018v1.pdf | |
PWC | https://paperswithcode.com/paper/on-robustness-in-multilayer-interdependent |
Repo | |
Framework | |
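To make the Implicative Interdependency Model concrete, the sketch below evaluates cascading failures to a fixed point when each entity's liveness is given by a Boolean dependency formula in disjunctive normal form over other entities. It is a toy illustration of the model's semantics only; the paper's four interdependency cases and the robustness parameters $K$ and $\rho$ are not modeled here.

```python
def steady_state_failures(entities, formulas, initially_failed):
    """Iterate to a fixed point: an entity fails if it is initially failed or
    if every minterm of its DNF dependency formula contains a failed entity.
    Entities with no formula are self-sufficient."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for entity in entities:
            minterms = formulas.get(entity, [])
            if entity in failed or not minterms:
                continue
            if all(any(dep in failed for dep in term) for term in minterms):
                failed.add(entity)
                changed = True
    return failed

# Toy example: power entities a, b; communication entities x, y.
# x is live iff a AND b are live; y is live iff a OR x is live.
formulas = {"x": [["a", "b"]], "y": [["a"], ["x"]]}
print(steady_state_failures(["a", "b", "x", "y"], formulas, {"b"}))  # {'b', 'x'}
```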
Deep Supervision for Pancreatic Cyst Segmentation in Abdominal CT Scans
Title | Deep Supervision for Pancreatic Cyst Segmentation in Abdominal CT Scans |
Authors | Yuyin Zhou, Lingxi Xie, Elliot K. Fishman, Alan L. Yuille |
Abstract | Automatic segmentation of an organ and its cystic region is a prerequisite of computer-aided diagnosis. In this paper, we focus on pancreatic cyst segmentation in abdominal CT scans. This task is important and very useful in clinical practice yet challenging due to the low boundary contrast, the variability in location and shape, and the different stages of pancreatic cancer. Inspired by the high relevance between the location of a pancreas and its cystic region, we introduce extra deep supervision into the segmentation network, so that cyst segmentation can be improved with the help of relatively easier pancreas segmentation. Under a reasonable transformation function, our approach can be factorized into two stages, and each stage can be efficiently optimized via gradient back-propagation throughout the deep networks. We collect a new dataset with 131 pathological samples, which, to the best of our knowledge, is the largest set for pancreatic cyst segmentation. Without human assistance, our approach reports a 63.44% average accuracy, measured by the Dice-S{\o}rensen coefficient (DSC), which is higher than the number (60.46%) without deep supervision. |
Tasks | Pancreas Segmentation |
Published | 2017-06-22 |
URL | http://arxiv.org/abs/1706.07346v1 |
http://arxiv.org/pdf/1706.07346v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-supervision-for-pancreatic-cyst |
Repo | |
Framework | |
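The accuracy figures above are Dice-Sørensen coefficients (DSC) between predicted and ground-truth cyst masks. For reference, here is the standard DSC computation for binary masks; it follows the usual definition rather than the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(prediction, ground_truth, eps=1e-8):
    """Dice-Sorensen coefficient between two binary masks:
    DSC = 2 * |P & G| / (|P| + |G|)."""
    p = np.asarray(prediction, dtype=bool)
    g = np.asarray(ground_truth, dtype=bool)
    intersection = np.logical_and(p, g).sum()
    return 2.0 * intersection / (p.sum() + g.sum() + eps)

pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True  # 4 voxels
gt = np.zeros((4, 4), dtype=bool);   gt[1:3, 1:4] = True    # 6 voxels
print(dice_coefficient(pred, gt))  # 2 * 4 / (4 + 6) = 0.8
```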