July 27, 2019

2836 words 14 mins read

Paper Group ANR 615

Visual Explanation by High-Level Abduction: On Answer-Set Programming Driven Reasoning about Moving Objects. Modelling collective motion based on the principle of agency. A General Memory-Bounded Learning Algorithm. Nonconvex penalties with analytical solutions for one-bit compressive sensing. Automatic Knot Adjustment Using Dolphin Echolocation Al …

Visual Explanation by High-Level Abduction: On Answer-Set Programming Driven Reasoning about Moving Objects

Title Visual Explanation by High-Level Abduction: On Answer-Set Programming Driven Reasoning about Moving Objects
Authors Jakob Suchan, Mehul Bhatt, Przemysław Wałęga, Carl Schultz
Abstract We propose a hybrid architecture for systematically computing robust visual explanation(s) encompassing hypothesis formation, belief revision, and default reasoning with video data. The architecture consists of two tightly integrated synergistic components: (1) (functional) answer set programming based abductive reasoning with space-time tracklets as native entities; and (2) a visual processing pipeline for detection based object tracking and motion analysis. We present the formal framework, its general implementation as a (declarative) method in answer set programming, and an example application and evaluation based on two diverse video datasets: the MOTChallenge benchmark developed by the vision community, and a recently developed Movie Dataset.
Tasks Object Tracking
Published 2017-12-03
URL http://arxiv.org/abs/1712.00840v1
PDF http://arxiv.org/pdf/1712.00840v1.pdf
PWC https://paperswithcode.com/paper/visual-explanation-by-high-level-abduction-on
Repo
Framework
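
The abductive reasoning layer is specified in answer set programming and is not reproduced here. As a loose, assumption-laden illustration of the detection-based tracking component only, the Python sketch below associates tracklet boxes with new detections by IoU via the Hungarian algorithm (scipy's linear_sum_assignment); it is a generic stand-in, not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracklets, detections, min_iou=0.3):
    """Match current tracklet boxes to new detections by maximizing total IoU."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracklets])
    rows, cols = linear_sum_assignment(cost)            # Hungarian assignment
    return [(int(r), int(c)) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]

tracklets = [(10, 10, 50, 50), (100, 100, 140, 160)]
detections = [(12, 11, 52, 49), (180, 30, 220, 70), (98, 102, 139, 158)]
print(associate(tracklets, detections))   # tracklet 0 -> detection 0, tracklet 1 -> detection 2
```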

Modelling collective motion based on the principle of agency

Title Modelling collective motion based on the principle of agency
Authors Katja Ried, Thomas Müller, Hans J. Briegel
Abstract Collective motion is an intriguing phenomenon, especially considering that it arises from a set of simple rules governing local interactions between individuals. In theoretical models, these rules are normally \emph{assumed} to take a particular form, possibly constrained by heuristic arguments. We propose a new class of models, which describe the individuals as \emph{agents}, capable of deciding for themselves how to act and learning from their experiences. The local interaction rules do not need to be postulated in this model, since they \emph{emerge} from the learning process. We apply this ansatz to a concrete scenario involving marching locusts, in order to model the phenomenon of density-dependent alignment. We show that our learning agent-based model can account for a Fokker-Planck equation that describes the collective motion and, most notably, that the agents can learn the appropriate local interactions, requiring no strong previous assumptions on their form. These results suggest that learning agent-based models are a powerful tool for studying a broader class of problems involving collective motion and animal agency in general.
Tasks
Published 2017-12-04
URL http://arxiv.org/abs/1712.01334v1
PDF http://arxiv.org/pdf/1712.01334v1.pdf
PWC https://paperswithcode.com/paper/modelling-collective-motion-based-on-the
Repo
Framework
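
For intuition only, here is a minimal agent-based sketch of density-dependent alignment on a ring, with a hard-coded local alignment rule standing in for the learned agent policy described in the abstract; all parameters are illustrative.

```python
import numpy as np

def simulate(n_agents=50, steps=200, radius=0.1, flip_prob=0.05, seed=0):
    """1-D ring model: agents move left/right and align with their local majority.

    A fixed heuristic rule stands in for the learned agent policy.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, n_agents)               # positions on the unit ring
    direction = rng.choice([-1, 1], n_agents)            # +1 right, -1 left
    speed = 0.005
    for _ in range(steps):
        for i in range(n_agents):
            dist = np.minimum(np.abs(pos - pos[i]), 1.0 - np.abs(pos - pos[i]))
            neighbours = direction[dist < radius]         # includes the agent itself
            if neighbours.sum() != 0:                     # align with the local majority
                direction[i] = np.sign(neighbours.sum())
            if rng.random() < flip_prob:                  # spontaneous direction flips
                direction[i] *= -1
        pos = (pos + speed * direction) % 1.0
    return direction.mean()                               # global alignment in [-1, 1]

print(simulate())
```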

A General Memory-Bounded Learning Algorithm

Title A General Memory-Bounded Learning Algorithm
Authors Michal Moshkovitz, Naftali Tishby
Abstract Designing bounded-memory algorithms is becoming increasingly important nowadays. Previous works studying bounded-memory algorithms focused on proving impossibility results, while the design of bounded-memory algorithms was left relatively unexplored. To remedy this situation, in this work we design a general bounded-memory learning algorithm, when the underlying distribution is known. The core idea of the algorithm is not to save the exact example received, but only a few important bits that give sufficient information. This algorithm applies to any hypothesis class that has an “anti-mixing” property. This paper complements previous works on unlearnability with bounded memory and provides a step towards a full characterization of bounded-memory learning.
Tasks
Published 2017-12-10
URL https://arxiv.org/abs/1712.03524v2
PDF https://arxiv.org/pdf/1712.03524v2.pdf
PWC https://paperswithcode.com/paper/a-general-memory-bounded-learning-algorithm
Repo
Framework
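
A toy illustration of the "keep only a few informative bits per example" idea: learning a one-dimensional threshold while storing only per-bucket label counts, so memory is independent of the number of examples. This is a hedged sketch, not the paper's algorithm, and it does not touch the anti-mixing condition.

```python
import numpy as np

def learn_threshold(stream, n_buckets=16):
    """Bounded-memory threshold learner on [0, 1].

    Each example (x, y) is compressed to a bucket index, so only 2 * n_buckets
    counters are kept, independent of the stream length.
    """
    counts = np.zeros((n_buckets, 2), dtype=np.int64)
    for x, y in stream:
        b = min(int(x * n_buckets), n_buckets - 1)   # the "few bits" kept per example
        counts[b, y] += 1
    # choose the bucket boundary minimizing empirical error of "predict 1 above threshold"
    best_t, best_err = 0.0, np.inf
    for k in range(n_buckets + 1):
        err = counts[:k, 1].sum() + counts[k:, 0].sum()
        if err < best_err:
            best_t, best_err = k / n_buckets, err
    return best_t

rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, 10_000)
stream = ((x, int(x > 0.37)) for x in xs)            # true threshold at 0.37
print(learn_threshold(stream))                       # close to 0.37
```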

Nonconvex penalties with analytical solutions for one-bit compressive sensing

Title Nonconvex penalties with analytical solutions for one-bit compressive sensing
Authors Xiaolin Huang, Ming Yan
Abstract One-bit measurements widely exist in the real world, and they can be used to recover sparse signals. This task is known as the problem of learning halfspaces in learning theory and one-bit compressive sensing (1bit-CS) in signal processing. In this paper, we propose novel algorithms based on both convex and nonconvex sparsity-inducing penalties for robust 1bit-CS. We provide a sufficient condition to verify whether a solution is globally optimal or not. Then we show that the globally optimal solution for positive homogeneous penalties can be obtained in two steps: a proximal operator and a normalization step. For several nonconvex penalties, including minimax concave penalty (MCP), $\ell_0$ norm, and sorted $\ell_1$ penalty, we provide fast algorithms for finding the analytical solutions by solving the dual problem. Specifically, our algorithm is more than $200$ times faster than the existing algorithm for MCP. Its efficiency is comparable in runtime to the algorithm for the $\ell_1$ penalty, while its performance is much better. Among these penalties, the sorted $\ell_1$ penalty is the most robust to noise across different settings.
Tasks Compressive Sensing
Published 2017-06-04
URL http://arxiv.org/abs/1706.01014v2
PDF http://arxiv.org/pdf/1706.01014v2.pdf
PWC https://paperswithcode.com/paper/nonconvex-penalties-with-analytical-solutions
Repo
Framework
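
A minimal sketch of the two-step recipe named in the abstract (a proximal step followed by normalization onto the unit sphere), with soft-thresholding standing in for the penalty-specific proximal operators derived in the paper.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of lam * ||.||_1 (used here as a stand-in penalty)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def one_bit_cs_two_step(A, y, lam):
    """Two-step recovery sketch: proximal step on A^T y, then normalization.

    A : (m, n) measurement matrix, y : (m,) vector of +/-1 sign measurements.
    """
    z = A.T @ y                          # correlate sign measurements with the dictionary
    x = soft_threshold(z, lam)           # proximal step for the chosen penalty
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x   # normalization step: project onto the unit sphere

# toy usage
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50); x_true[:5] = rng.standard_normal(5)
x_true /= np.linalg.norm(x_true)
y = np.sign(A @ x_true)
x_hat = one_bit_cs_two_step(A, y, lam=2.0)
```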

Automatic Knot Adjustment Using Dolphin Echolocation Algorithm for B-Spline Curve Approximation

Title Automatic Knot Adjustment Using Dolphin Echolocation Algorithm for B-Spline Curve Approximation
Authors Hasan Ali Akyürek, Erkan Ülker, Barış Koçer
Abstract In this paper, a new approach to solving the cubic B-spline curve fitting problem is presented, based on a meta-heuristic algorithm called "dolphin echolocation". The method minimizes the proximity error of the selected knots, measured using the least-squares method and the Euclidean distance to the new curve generated by reverse engineering. The results of the proposed method are compared with those of a genetic algorithm, and the new method appears to be successful.
Tasks
Published 2017-01-16
URL http://arxiv.org/abs/1701.04383v1
PDF http://arxiv.org/pdf/1701.04383v1.pdf
PWC https://paperswithcode.com/paper/automatic-knot-adjustment-using-dolphin
Repo
Framework
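
For illustration, the sketch below evaluates candidate interior-knot vectors by their least-squares fitting error and searches over them with plain random search standing in for the dolphin echolocation metaheuristic; scipy's LSQUnivariateSpline performs the cubic least-squares fit.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fitting_error(x, y, knots):
    """Sum of squared residuals of a cubic least-squares B-spline with the given interior knots."""
    spline = LSQUnivariateSpline(x, y, knots, k=3)
    return float(np.sum((spline(x) - y) ** 2))

def random_search_knots(x, y, n_knots=6, n_iter=200, seed=0):
    """Random search standing in for the dolphin echolocation metaheuristic."""
    rng = np.random.default_rng(seed)
    best_knots, best_err = None, np.inf
    for _ in range(n_iter):
        knots = np.sort(rng.uniform(x[1], x[-2], size=n_knots))
        try:
            err = fitting_error(x, y, knots)
        except ValueError:          # candidate violates the Schoenberg-Whitney conditions
            continue
        if err < best_err:
            best_knots, best_err = knots, err
    return best_knots, best_err

x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(3 * x) + 0.05 * np.random.default_rng(1).standard_normal(200)
knots, err = random_search_knots(x, y)
print(knots, err)
```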

Morphology Generation for Statistical Machine Translation

Title Morphology Generation for Statistical Machine Translation
Authors Sreelekha S, Pushpak Bhattacharyya
Abstract When translating into morphologically rich languages, statistical MT approaches face the problem of data sparsity. The sparsity problem is especially severe when the corpus of the morphologically richer language is small. Even though factored models can correctly generate morphological forms of words, data sparseness limits their performance. In this paper, we describe a simple and effective solution based on enriching the input corpora with various morphological forms of words. We apply this method in phrase-based and factor-based experiments on two morphologically rich languages, Hindi and Marathi, when translating from English. We evaluate the performance of our experiments both with automatic evaluation and with subjective evaluation such as adequacy and fluency. We observe that the morphology injection method helps improve translation quality, and our further analysis shows that it handles the data sparseness problem to a great extent.
Tasks Machine Translation
Published 2017-10-05
URL http://arxiv.org/abs/1710.02093v3
PDF http://arxiv.org/pdf/1710.02093v3.pdf
PWC https://paperswithcode.com/paper/morphology-generation-for-statistical-machine
Repo
Framework
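
A toy sketch of the corpus-enrichment idea: append extra parallel pairs that expose the SMT system to morphological variants of target-language words. The lexicon format and pairing scheme are illustrative assumptions, not the paper's actual pipeline.

```python
def enrich_corpus(parallel_corpus, morph_lexicon):
    """parallel_corpus : list of (english, target) sentence pairs.
    morph_lexicon : dict mapping an English phrase to a list of target-language
    morphological forms (a hypothetical resource for this sketch)."""
    enriched = list(parallel_corpus)
    for en_phrase, target_forms in morph_lexicon.items():
        for form in target_forms:
            enriched.append((en_phrase, form))   # extra evidence for rare surface forms
    return enriched

corpus = [("the boy goes home", "...")]          # placeholder target sentence
lexicon = {"to the boy": ["<target dative form>"], "of the boy": ["<target genitive form>"]}
print(len(enrich_corpus(corpus, lexicon)))       # 3
```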

Towards Evolutional Compression

Title Towards Evolutional Compression
Authors Yunhe Wang, Chang Xu, Jiayan Qiu, Chao Xu, Dacheng Tao
Abstract Compressing convolutional neural networks (CNNs) is essential for transferring the success of CNNs to a wide variety of applications on mobile devices. In contrast to directly recognizing subtle weights or filters as redundant in a given CNN, this paper presents an evolutionary method to automatically eliminate redundant convolution filters. We represent each compressed network as a binary individual of specific fitness. Then, the population is upgraded at each evolutionary iteration using genetic operations. As a result, an extremely compact CNN is generated using the fittest individual. In this approach, either large or small convolution filters can be redundant, and filters in the compressed network are more distinct. In addition, since the number of filters in each convolutional layer is reduced, the number of filter channels and the size of feature maps are also decreased, naturally improving both the compression and speed-up ratios. Experiments on benchmark deep CNN models suggest the superiority of the proposed algorithm over the state-of-the-art compression methods.
Tasks
Published 2017-07-25
URL http://arxiv.org/abs/1707.08005v1
PDF http://arxiv.org/pdf/1707.08005v1.pdf
PWC https://paperswithcode.com/paper/towards-evolutional-compression
Repo
Framework
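
A generic sketch of the evolutionary idea: a genetic algorithm over binary filter masks with one-point crossover and bit-flip mutation. The fitness here is a synthetic proxy (hidden importance scores minus a size penalty), since evaluating a real CNN is out of scope; it is not the authors' fitness function.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FILTERS = 64
importance = rng.uniform(0, 1, N_FILTERS)     # synthetic proxy for filter usefulness

def fitness(mask, penalty=0.4):
    """Reward keeping 'useful' filters while penalizing network size
    (a toy stand-in for accuracy-versus-compression of a real CNN)."""
    return importance @ mask - penalty * mask.sum()

def evolve(pop_size=40, generations=100, p_mut=0.02):
    pop = rng.integers(0, 2, (pop_size, N_FILTERS))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_FILTERS)                     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(N_FILTERS) < p_mut                 # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.array(children)
    return pop[np.argmax([fitness(ind) for ind in pop])]

mask = evolve()
print(mask.sum(), "filters kept out of", N_FILTERS)
```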

Inference of Personal Attributes from Tweets Using Machine Learning

Title Inference of Personal Attributes from Tweets Using Machine Learning
Authors Take Yo, Kazutoshi Sasahara
Abstract Using machine learning algorithms, including deep learning, we studied the prediction of personal attributes from the text of tweets, such as gender, occupation, and age groups. We applied word2vec to construct word vectors, which were then used to vectorize tweet blocks. The resulting tweet vectors were used as inputs for training models, and the prediction accuracy of those models was examined as a function of the dimension of the tweet vectors and the size of the tweet blocks. The results showed that the machine learning algorithms could predict the three personal attributes of interest with 60-70% accuracy.
Tasks
Published 2017-09-28
URL http://arxiv.org/abs/1709.09927v3
PDF http://arxiv.org/pdf/1709.09927v3.pdf
PWC https://paperswithcode.com/paper/inference-of-personal-attributes-from-tweets
Repo
Framework
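
A minimal sketch of the vectorize-then-classify pipeline: average the word vectors of a tweet block and feed the result to a classifier. Random vectors stand in for word2vec embeddings, and scikit-learn's LogisticRegression stands in for the models evaluated in the paper; the data is toy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

DIM = 50
rng = np.random.default_rng(0)
vocab = ["game", "soccer", "meeting", "deadline", "homework", "exam"]
word_vectors = {w: rng.standard_normal(DIM) for w in vocab}   # stand-in for word2vec

def block_vector(tweets):
    """Average the word vectors of all known words in a block of tweets."""
    vecs = [word_vectors[w] for t in tweets for w in t.split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

# toy training data: blocks of tweets labelled with an attribute group
blocks = [["soccer game tonight", "great game"], ["exam soon", "homework deadline"],
          ["meeting deadline", "another meeting"], ["soccer exam"]]
labels = [0, 1, 1, 0]
X = np.stack([block_vector(b) for b in blocks])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```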

A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints

Title A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
Authors Bin Hu, Peter Seiler, Anders Rantzer
Abstract We develop a simple routine unifying the analysis of several important recently-developed stochastic optimization methods including SAGA, Finito, and stochastic dual coordinate ascent (SDCA). First, we show an intrinsic connection between stochastic optimization methods and dynamic jump systems, and propose a general jump system model for stochastic optimization methods. Our proposed model recovers SAGA, SDCA, Finito, and SAG as special cases. Then we combine jump system theory with several simple quadratic inequalities to derive sufficient conditions for convergence rate certifications of the proposed jump system model under various assumptions (with or without individual convexity, etc). The derived conditions are linear matrix inequalities (LMIs) whose sizes roughly scale with the size of the training set. We make use of the symmetry in the stochastic optimization methods and reduce these LMIs to some equivalent small LMIs whose sizes are at most 3 by 3. We solve these small LMIs to provide analytical proofs of new convergence rates for SAGA, Finito and SDCA (with or without individual convexity). We also explain why our proposed LMI fails in analyzing SAG. We reveal a key difference between SAG and other methods, and briefly discuss how to extend our LMI analysis for SAG. An advantage of our approach is that the proposed analysis can be automated for a large class of stochastic methods under various assumptions (with or without individual convexity, etc).
Tasks Stochastic Optimization
Published 2017-06-25
URL http://arxiv.org/abs/1706.08141v1
PDF http://arxiv.org/pdf/1706.08141v1.pdf
PWC https://paperswithcode.com/paper/a-unified-analysis-of-stochastic-optimization
Repo
Framework
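
For flavor only, here is a generic jump-system model and a quadratic (Lyapunov/LMI) certificate of the kind this style of analysis relies on; the notation is generic and is not the paper's exact LMI conditions.

```latex
\[
  x_{k+1} = A_{w_k}\, x_k , \qquad w_k \sim \mathrm{Uniform}\{1,\dots,n\} \ \text{i.i.d.}
\]
\[
  \exists\, P \succ 0 :\quad \frac{1}{n}\sum_{i=1}^{n} A_i^{\top} P A_i \preceq \rho^{2} P
  \;\Longrightarrow\; \mathbb{E}\,\lVert x_k \rVert^{2} \le \operatorname{cond}(P)\,\rho^{2k}\,\lVert x_0 \rVert^{2} .
\]
```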

An efficient genetic algorithm for large-scale transmit power control of dense industrial wireless networks

Title An efficient genetic algorithm for large-scale transmit power control of dense industrial wireless networks
Authors Xu Gong, David Plets, Emmeric Tanghe, Toon De Pessemier, Luc Martens, Wout Joseph
Abstract The industrial wireless local area network (IWLAN) is increasingly dense, not only due to the penetration of wireless applications into factories and warehouses, but also because of the rising need of redundancy for robust wireless coverage. Instead of powering on all the nodes with the maximal transmit power, it becomes an unavoidable challenge to control the transmit power of all wireless nodes on a large scale, in order to reduce interference and adapt coverage to the latest shadowing effects in the environment. Therefore, this paper proposes an efficient genetic algorithm (GA) to solve this transmit power control (TPC) problem for dense IWLANs, named GATPC. Effective population initialization, crossover and mutation, parallel computing as well as dedicated speedup measures are introduced to tailor GATPC for the large-scale optimization that is intrinsically involved in this problem. In contrast to most coverage-related optimization algorithms, which cannot deal with the prevalent shadowing effects in harsh industrial indoor environments, an empirical one-slope path loss model considering three-dimensional obstacle shadowing effects is used in GATPC, in order to enable accurate yet simple coverage prediction. Experimental validation and numerical experiments in real industrial cases show the promising performance of GATPC in terms of scalability to a hyper-large scale, a speedup of up to 37 times in runtime, and solution quality that achieves adaptive coverage and minimizes interference.
Tasks
Published 2017-08-12
URL http://arxiv.org/abs/1709.04320v1
PDF http://arxiv.org/pdf/1709.04320v1.pdf
PWC https://paperswithcode.com/paper/an-efficient-genetic-algorithm-for-large
Repo
Framework
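
A sketch of the ingredients: the empirical one-slope path loss model and a toy coverage-versus-power fitness with which a GA individual (a vector of transmit powers) could be scored. The constants, threshold, and penalty weight are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def one_slope_path_loss(d, pl0=40.0, n=2.5, d0=1.0, shadow_db=0.0):
    """Empirical one-slope model: PL(d) = PL(d0) + 10 n log10(d/d0) + shadowing (dB)."""
    return pl0 + 10.0 * n * np.log10(np.maximum(d, d0) / d0) + shadow_db

def fitness(tx_power_dbm, ap_xy, grid_xy, rx_threshold=-70.0):
    """Toy objective: fraction of grid points covered minus a penalty on total power.

    tx_power_dbm : (n_aps,) candidate transmit powers (the GA individual)
    """
    d = np.linalg.norm(grid_xy[:, None, :] - ap_xy[None, :, :], axis=2)   # (points, aps)
    rx = tx_power_dbm[None, :] - one_slope_path_loss(d)                   # received power (dBm)
    covered = (rx.max(axis=1) >= rx_threshold).mean()
    return covered - 0.001 * tx_power_dbm.sum()

rng = np.random.default_rng(0)
ap_xy = rng.uniform(0, 100, (5, 2))        # access-point positions (m)
grid_xy = rng.uniform(0, 100, (400, 2))    # coverage evaluation points (m)
print(fitness(rng.uniform(0, 20, 5), ap_xy, grid_xy))
```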

Semi-supervised Classification: Cluster and Label Approach using Particle Swarm Optimization

Title Semi-supervised Classification: Cluster and Label Approach using Particle Swarm Optimization
Authors Shahira Shaaban Azab, Mohamed Farouk Abdel Hady, Hesham Ahmed Hefny
Abstract Classification predicts the classes of objects using knowledge learned during the training phase. This process requires learning from labeled samples. However, labeled samples are usually limited: the annotation process is tedious, expensive, and requires human experts, while unlabeled data is available and almost free. Semi-supervised learning approaches make use of both labeled and unlabeled data. This paper introduces a cluster-and-label approach using PSO for semi-supervised classification. PSO is competitive with traditional clustering algorithms. A new local best PSO is presented to cluster the unlabeled data, with the available labeled data guiding the learning process. The experiments are conducted on four state-of-the-art datasets from different domains. The results are compared with Label Propagation, a popular semi-supervised classifier, and with two state-of-the-art supervised classification models, namely k-nearest neighbors and decision trees. The experiments show the efficiency of the proposed model.
Tasks
Published 2017-06-03
URL http://arxiv.org/abs/1706.00996v1
PDF http://arxiv.org/pdf/1706.00996v1.pdf
PWC https://paperswithcode.com/paper/semi-supervised-classification-cluster-and
Repo
Framework
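
A hedged sketch of cluster-and-label: cluster all points, label each cluster by the majority label of the few labelled points it contains, then propagate that label to the rest. KMeans stands in for the local best PSO clustering proposed in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def cluster_and_label(X, y, n_clusters=3, labeled_frac=0.1, seed=0):
    """y holds the true labels; only a small labelled subset guides the labelling."""
    rng = np.random.default_rng(seed)
    labeled = rng.random(len(X)) < labeled_frac
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    cluster_label = {}
    for c in range(n_clusters):
        members = labeled & (km.labels_ == c)
        if members.any():                                 # majority label of labelled members
            vals, counts = np.unique(y[members], return_counts=True)
            cluster_label[c] = vals[np.argmax(counts)]
        else:
            cluster_label[c] = -1                         # no labelled point fell in this cluster
    pred = np.array([cluster_label[c] for c in km.labels_])
    return (pred == y).mean()                             # accuracy on all points

X, y = make_blobs(n_samples=600, centers=3, random_state=1)
print(cluster_and_label(X, y))
```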

Towards Full Automated Drive in Urban Environments: A Demonstration in GoMentum Station, California

Title Towards Full Automated Drive in Urban Environments: A Demonstration in GoMentum Station, California
Authors Akansel Cosgun, Lichao Ma, Jimmy Chiu, Jiawei Huang, Mahmut Demir, Alexandre Miranda Anon, Thang Lian, Hasan Tafish, Samir Al-Stouhi
Abstract Each year, millions of motor vehicle traffic accidents all over the world cause a large number of fatalities, injuries and significant material loss. Automated Driving (AD) has potential to drastically reduce such accidents. In this work, we focus on the technical challenges that arise from AD in urban environments. We present the overall architecture of an AD system and describe in detail the perception and planning modules. The AD system, built on a modified Acura RLX, was demonstrated on a course at GoMentum Station in California. We demonstrated autonomous handling of 4 scenarios: traffic lights, cross-traffic at intersections, construction zones and pedestrians. The AD vehicle displayed safe behavior and performed consistently in repeated demonstrations with slight variations in conditions. Overall, we completed 44 runs, encompassing 110km of automated driving, with only 3 cases where the driver intervened in the control of the vehicle, mostly due to errors in GPS positioning. Our demonstration showed that robust and consistent behavior in urban scenarios is possible, yet more investigation is necessary for full scale roll-out on public roads.
Tasks
Published 2017-05-02
URL http://arxiv.org/abs/1705.01187v1
PDF http://arxiv.org/pdf/1705.01187v1.pdf
PWC https://paperswithcode.com/paper/towards-full-automated-drive-in-urban
Repo
Framework

Clickbait detection using word embeddings

Title Clickbait detection using word embeddings
Authors Vijayasaradhi Indurthi, Subba Reddy Oota
Abstract Clickbait is a pejorative term describing web content that is aimed at generating online advertising revenue, especially at the expense of quality or accuracy, relying on sensationalist headlines or eye-catching thumbnail pictures to attract click-throughs and to encourage forwarding of the material over online social networks. We use distributed word representations of the words in the title as features to identify clickbaits in online news media. We train a machine learning model using linear regression to predict the clickbait score of a given tweet. Our methods achieve an F1-score of 64.98% and an MSE of 0.0791. Compared to other methods, our method is simple, fast to train, and does not require extensive feature engineering, yet it is moderately effective.
Tasks Clickbait Detection, Feature Engineering, Word Embeddings
Published 2017-10-08
URL http://arxiv.org/abs/1710.02861v1
PDF http://arxiv.org/pdf/1710.02861v1.pdf
PWC https://paperswithcode.com/paper/clickbait-detection-using-word-embeddings
Repo
Framework
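
A minimal sketch of the described setup: average the word vectors of the headline, fit a linear regression to the annotated clickbait score, and threshold the prediction for a hard label. Random vectors stand in for the pretrained embeddings, and the titles and scores below are toy data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

DIM = 25
rng = np.random.default_rng(0)
vocab = ["you", "wont", "believe", "shocking", "report", "budget", "parliament", "this"]
embeddings = {w: rng.standard_normal(DIM) for w in vocab}   # stand-in for pretrained vectors

def title_vector(title):
    vecs = [embeddings[w] for w in title.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

titles = ["You wont believe this shocking report", "Parliament debates budget report",
          "This shocking budget", "Parliament report"]
scores = [0.9, 0.1, 0.8, 0.2]                               # annotated clickbait scores
X = np.stack([title_vector(t) for t in titles])
reg = LinearRegression().fit(X, scores)
pred = reg.predict(X)
print((pred > 0.5).astype(int))                             # threshold for a hard label
```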

Getting Reliable Annotations for Sarcasm in Online Dialogues

Title Getting Reliable Annotations for Sarcasm in Online Dialogues
Authors Reid Swanson, Stephanie Lukin, Luke Eisenberg, Thomas Chase Corcoran, Marilyn A. Walker
Abstract The language used in online forums differs in many ways from that of traditional language resources such as news. One difference is the use and frequency of nonliteral, subjective dialogue acts such as sarcasm. Whether the aim is to develop a theory of sarcasm in dialogue, or engineer automatic methods for reliably detecting sarcasm, a major challenge is simply the difficulty of getting enough reliably labelled examples. In this paper we describe our work on methods for achieving highly reliable sarcasm annotations from untrained annotators on Mechanical Turk. We explore the use of a number of common statistical reliability measures, such as Kappa, Karger’s, Majority Class, and EM. We show that more sophisticated measures do not appear to yield better results for our data than simple measures such as assuming that the correct label is the one that a majority of Turkers apply.
Tasks
Published 2017-09-04
URL http://arxiv.org/abs/1709.01042v1
PDF http://arxiv.org/pdf/1709.01042v1.pdf
PWC https://paperswithcode.com/paper/getting-reliable-annotations-for-sarcasm-in
Repo
Framework
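
As a small illustration of the simplest aggregation the paper ends up favoring, the sketch below takes the majority label per item and also reports pairwise Cohen's kappa as a basic reliability check; the annotation matrix is toy data, and scikit-learn's cohen_kappa_score stands in for the fuller battery of measures.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# rows = items, columns = annotators; 1 = sarcastic, 0 = not sarcastic (toy data)
annotations = np.array([[1, 1, 0, 1],
                        [0, 0, 0, 1],
                        [1, 0, 1, 1],
                        [0, 0, 0, 0]])

# majority-class aggregation: the label most Turkers applied to each item
majority = (annotations.mean(axis=1) >= 0.5).astype(int)
print("majority labels:", majority)

# pairwise chance-corrected agreement between annotators
n = annotations.shape[1]
kappas = [cohen_kappa_score(annotations[:, i], annotations[:, j])
          for i in range(n) for j in range(i + 1, n)]
print("mean pairwise kappa:", float(np.mean(kappas)))
```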

Noise Level Estimation for Overcomplete Dictionary Learning Based on Tight Asymptotic Bounds

Title Noise Level Estimation for Overcomplete Dictionary Learning Based on Tight Asymptotic Bounds
Authors Rui Chen, Changshui Yang, Huizhu Jia, Xiaodong Xie
Abstract In this letter, we address the problem of estimating the Gaussian noise level from trained dictionaries in the update stage. We first provide a rigorous statistical analysis of the eigenvalue distribution of a sample covariance matrix. Then we propose an interval-bounded estimator for the noise variance in the high-dimensional setting. To this end, an effective estimation method for the noise level is devised based on the boundedness and asymptotic behavior of the noise eigenvalue spectrum. The estimation performance of our method is guaranteed both theoretically and empirically. The analysis and experimental results demonstrate that the proposed algorithm can reliably infer true noise levels and outperforms the relevant existing methods.
Tasks Dictionary Learning
Published 2017-12-09
URL http://arxiv.org/abs/1712.03381v1
PDF http://arxiv.org/pdf/1712.03381v1.pdf
PWC https://paperswithcode.com/paper/noise-level-estimation-for-overcomplete
Repo
Framework
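
A generic eigenvalue-based sketch (not the paper's interval-bounded estimator): form the sample covariance and estimate the noise variance from the lower tail of its eigenvalue spectrum, assuming a known signal-subspace rank.

```python
import numpy as np

def estimate_noise_variance(Y, signal_rank):
    """Estimate sigma^2 from the noise tail of the sample-covariance eigenvalues.

    Y : (n_features, n_samples) data assumed to be low-rank signal + white Gaussian noise.
    signal_rank : assumed dimension of the signal subspace.
    """
    Yc = Y - Y.mean(axis=1, keepdims=True)
    cov = Yc @ Yc.T / Y.shape[1]                        # sample covariance matrix
    eigvals = np.sort(np.linalg.eigvalsh(cov))          # ascending eigenvalues
    noise_tail = eigvals[: len(eigvals) - signal_rank]  # eigenvalues outside the signal subspace
    return float(np.median(noise_tail))

rng = np.random.default_rng(0)
n_feat, n_samp, rank, sigma = 64, 4000, 5, 0.2
basis = rng.standard_normal((n_feat, rank))
coeffs = rng.standard_normal((rank, n_samp))
Y = basis @ coeffs + sigma * rng.standard_normal((n_feat, n_samp))
print(estimate_noise_variance(Y, signal_rank=rank))     # close to sigma**2 = 0.04
```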