May 7, 2019

3165 words 15 mins read

Paper Group ANR 47

Cognitive Dynamic Systems: A Technical Review of Cognitive Radar. ML-based tactile sensor calibration: A universal approach. Differential Evolution for Efficient AUV Path Planning in Time Variant Uncertain Underwater Environment. Visual Place Recognition with Probabilistic Vertex Voting. Information-theoretical label embeddings for large-scale imag …

Cognitive Dynamic Systems: A Technical Review of Cognitive Radar

Title Cognitive Dynamic Systems: A Technical Review of Cognitive Radar
Authors Krishanth Krishnan, Taralyn Schwering, Saman Sarraf
Abstract We start with the history of cognitive radar, covering the origins of the perception-action cycle (PAC), Fuster's research on cognition, and the principles of cognition. Fuster describes five cognitive functions: perception, memory, attention, language, and intelligence. We describe the Perception-Action Cycle as it applies to cognitive radar, and then discuss long-term memory, memory storage, memory retrieval, and working memory, comparing memory in human cognition with memory in cognitive radar. Attention, another function described by Fuster, is likewise compared between human cognition and cognitive radar. We then cover the four functional blocks of the PAC: the Bayesian filter, feedback information, dynamic programming, and the state-space model of the radar environment. To show that the PAC improves the tracking accuracy of cognitive radar over traditional active radar, we provide simulation results comparing three nonlinear filters: the Cubature Kalman Filter (CKF), the Unscented Kalman Filter (UKF), and the Extended Kalman Filter (EKF). Based on the results, radars implemented with the CKF outperform those implemented with the UKF or the EKF; the EKF has the worst accuracy and the largest computational load because of the derivation and evaluation of Jacobian matrices. We suggest using the concept of risk management to better control parameters and improve performance in cognitive radar. We argue that spectrum sensing is of potential interest for cognitive radar and propose a new approach, probabilistic ICA, which is expected to reduce noise based on estimation error. Finally, since parallel computing is based on a divide-and-conquer mechanism, we suggest applying it to the complicated calculations in cognitive radar to reduce processing time.
Tasks
Published 2016-05-26
URL http://arxiv.org/abs/1605.08150v1
PDF http://arxiv.org/pdf/1605.08150v1.pdf
PWC https://paperswithcode.com/paper/cognitive-dynamic-systems-a-technical-review
Repo
Framework
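
To make the perception-action cycle concrete, here is a minimal Python sketch in which a linear Kalman filter plays the Bayesian perception block and the "action" step picks the measurement-noise level for the next dwell, a crude stand-in for waveform selection. The motion model, noise values, and selection rule are illustrative assumptions; the paper's simulations use nonlinear CKF/UKF/EKF trackers, not this toy linear filter.

```python
import numpy as np

# Minimal perception-action loop: a Kalman filter (perception) tracks a
# target while the "action" step picks the measurement-noise level for the
# next dwell, a crude stand-in for waveform selection. All models and
# numbers are illustrative; the paper's simulations use nonlinear
# CKF/UKF/EKF trackers.
rng = np.random.default_rng(0)

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])               # position-only measurement
Q = 0.01 * np.eye(2)                     # process noise covariance

x_true = np.array([0.0, 1.0])            # true [position, velocity]
x_est = np.zeros(2)                      # filter estimate
P = np.eye(2)                            # estimate covariance
waveforms = [0.5, 1.0, 2.0]              # candidate measurement variances

for k in range(20):
    # Action: choose the measurement variance minimising the predicted
    # posterior uncertainty (a real radar would trade accuracy against cost).
    P_pred = F @ P @ F.T + Q

    def predicted_trace(r):
        s = H @ P_pred @ H.T + r
        gain = P_pred @ H.T / s
        return np.trace((np.eye(2) - gain @ H) @ P_pred)

    r_sel = min(waveforms, key=predicted_trace)

    # Environment: the target moves and a noisy measurement is received.
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(r_sel))

    # Perception: standard Kalman predict/update with the chosen waveform.
    x_pred = F @ x_est
    S = H @ P_pred @ H.T + r_sel
    K = P_pred @ H.T / S
    x_est = x_pred + (K * (z - H @ x_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P_pred

    print(f"step {k:2d}  true pos {x_true[0]:7.2f}  estimated pos {x_est[0]:7.2f}")
```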

ML-based tactile sensor calibration: A universal approach

Title ML-based tactile sensor calibration: A universal approach
Authors Maximilian Karl, Artur Lohrer, Dhananjay Shah, Frederik Diehl, Max Fiedler, Saahil Ognawala, Justin Bayer, Patrick van der Smagt
Abstract We study the responses of two tactile sensors, the fingertip sensor from the iCub and the BioTac, under different external stimuli. The question of interest is to what degree both sensors i) allow the estimation of the force exerted on the sensor and ii) enable the recognition of differing degrees of curvature. Making use of a force-controlled linear motor affecting the tactile sensors, we acquire several high-quality data sets that allow the study of both sensors under exactly the same conditions. We also examine the structure of the representation of tactile stimuli in the recorded tactile sensor data using t-SNE embeddings. The experiments show that the iCub and the BioTac excel in different settings.
Tasks Calibration
Published 2016-06-21
URL http://arxiv.org/abs/1606.06588v1
PDF http://arxiv.org/pdf/1606.06588v1.pdf
PWC https://paperswithcode.com/paper/ml-based-tactile-sensor-calibration-a
Repo
Framework
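
As a hedged illustration of the calibration task (not the authors' pipeline), the sketch below fits a small regressor that maps synthetic multi-channel tactile readings to the applied force; the fake taxel model and the choice of MLPRegressor are assumptions standing in for the iCub/BioTac recordings and whatever learners the paper evaluates.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Sketch of the calibration task: learn a mapping from raw tactile-sensor
# channels to the force applied by the linear motor. The synthetic "taxel"
# data below (12 channels responding nonlinearly to a scalar force, plus
# noise) merely stands in for the iCub/BioTac recordings.
rng = np.random.default_rng(0)
force = rng.uniform(0.0, 10.0, size=2000)                    # ground-truth force
gains = rng.uniform(0.5, 2.0, size=12)                       # per-channel sensitivity
taxels = np.tanh(np.outer(force, gains) / 5.0) + 0.05 * rng.normal(size=(2000, 12))

X_train, X_test, y_train, y_test = train_test_split(taxels, force, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"held-out R^2 of force estimation: {model.score(X_test, y_test):.3f}")
```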

Differential Evolution for Efficient AUV Path Planning in Time Variant Uncertain Underwater Environment

Title Differential Evolution for Efficient AUV Path Planning in Time Variant Uncertain Underwater Environment
Authors S. Mahmoud Zadeh, D. M. W. Powers, A. Yazdani, K. Sammut, A Atyabi
Abstract Three-dimensional AUV path planning in a complex, turbulent underwater environment is investigated in this research, taking into account static current map data and uncertain static and moving time-variant obstacles. Making AUV path planning robust to this strong variability is a complex NP-hard problem and is considered a critical issue for ensuring safe vehicle deployment. Efficient evolutionary techniques have substantial potential for handling the NP-hard complexity of the path planning problem, being more powerful and faster than other approaches to this problem. In this research, the Differential Evolution (DE) technique is employed to solve the AUV path planning problem in a realistic underwater environment. The path planners designed in this paper are capable of extracting the feasible areas of a real map to determine the allowed spaces for deployment, taking coastal areas, islands, static/dynamic obstacles, and ocean currents into account, and they provide an efficient path with a small computation time. The experimental results demonstrate the inherent robustness and efficiency of the proposed scheme in enhancing the vehicle’s path planning capability: coping with undesired currents, exploiting useful current flows, and avoiding collision boundaries in a real-time manner. The proposed approach is also flexible and strictly respects the vehicle’s kinematic constraints while resisting current instabilities.
Tasks
Published 2016-04-09
URL http://arxiv.org/abs/1604.02523v4
PDF http://arxiv.org/pdf/1604.02523v4.pdf
PWC https://paperswithcode.com/paper/differential-evolution-for-efficient-auv-path
Repo
Framework
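
For intuition about how Differential Evolution can drive a path planner, here is a toy 2-D sketch that encodes a path as waypoint y-coordinates and lets SciPy's `differential_evolution` minimize path length plus obstacle-intrusion penalties. The obstacle layout, penalty weight, and encoding are invented for illustration and are much simpler than the paper's 3-D, current-aware planner.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy 2-D DE path planner: a path from start to goal is encoded as the
# y-coordinates of a fixed number of intermediate waypoints, and DE
# minimises path length plus penalties for entering circular "obstacle"
# regions. Obstacles, grid, and weights are made-up test values.
start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacles = [(3.0, 0.5, 1.0), (6.5, -0.8, 1.2)]   # (x, y, radius)
n_way = 6
xs = np.linspace(start[0], goal[0], n_way + 2)    # fixed x spacing

def cost(ys):
    pts = np.column_stack([xs, np.concatenate([[start[1]], ys, [goal[1]]])])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    penalty = 0.0
    for ox, oy, r in obstacles:
        d = np.linalg.norm(pts - np.array([ox, oy]), axis=1)
        penalty += np.sum(np.maximum(0.0, r - d))  # depth of intrusion
    return length + 50.0 * penalty

result = differential_evolution(cost, bounds=[(-4.0, 4.0)] * n_way, seed=1,
                                maxiter=300, tol=1e-6)
print("best waypoint y-coordinates:", np.round(result.x, 2))
print("path cost:", round(float(result.fun), 3))
```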

Visual Place Recognition with Probabilistic Vertex Voting

Title Visual Place Recognition with Probabilistic Vertex Voting
Authors Mathias Gehrig, Elena Stumm, Timo Hinzmann, Roland Siegwart
Abstract We propose a novel scoring concept for visual place recognition based on nearest neighbor descriptor voting and demonstrate how the algorithm naturally emerges from the problem formulation. Based on the observation that the number of votes for matching places can be evaluated using a binomial distribution model, loop closures can be detected with high precision. By casting the problem into a probabilistic framework, we not only remove the need for commonly employed heuristic parameters but also provide a powerful score to classify matching and non-matching places. We present methods for both a 2D-2D pose-graph vertex matching and a 2D-3D landmark matching based on the above scoring. The approach maintains accuracy while being efficient enough for online application through the use of compact (low dimensional) descriptors and fast nearest neighbor retrieval techniques. The proposed methods are evaluated on several challenging datasets in varied environments, showing state-of-the-art results with high precision and high recall.
Tasks Visual Place Recognition
Published 2016-10-11
URL http://arxiv.org/abs/1610.03548v2
PDF http://arxiv.org/pdf/1610.03548v2.pdf
PWC https://paperswithcode.com/paper/visual-place-recognition-with-probabilistic
Repo
Framework
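
The binomial voting idea can be sketched in a few lines: each query descriptor votes for the place owning its nearest neighbour, and a place's score is the probability of seeing at least that many votes under a chance model. The vote prior used below (proportional to each place's share of database descriptors) is an assumption; the paper's exact probabilistic formulation may differ.

```python
import numpy as np
from scipy.stats import binom

# Sketch of binomial vote scoring for place recognition: each query
# descriptor votes for the database place owning its nearest neighbour,
# and a place is scored by how unlikely its vote count is under a
# "random voting" binomial model. The vote prior below (proportional to
# how many descriptors a place contributes to the database) is an
# assumption; the paper's exact formulation may differ.
def score_places(votes, descriptors_per_place):
    n_query = votes.size                         # total votes cast
    counts = np.bincount(votes, minlength=len(descriptors_per_place))
    priors = descriptors_per_place / descriptors_per_place.sum()
    scores = {}
    for place, (n_votes, p) in enumerate(zip(counts, priors)):
        # Probability of at least this many votes by chance; smaller = stronger match.
        scores[place] = binom.sf(n_votes - 1, n_query, p)
    return scores

descriptors_per_place = np.array([120, 80, 300, 95])     # database statistics (made up)
votes = np.array([2, 2, 2, 0, 2, 2, 1, 2, 2, 3, 2, 2])   # NN votes from one query image
for place, s in sorted(score_places(votes, descriptors_per_place).items(), key=lambda kv: kv[1]):
    print(f"place {place}: P(votes >= observed | chance) = {s:.2e}")
```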

Information-theoretical label embeddings for large-scale image classification

Title Information-theoretical label embeddings for large-scale image classification
Authors François Chollet
Abstract We present a method for training multi-label, massively multi-class image classification models, that is faster and more accurate than supervision via a sigmoid cross-entropy loss (logistic regression). Our method consists in embedding high-dimensional sparse labels onto a lower-dimensional dense sphere of unit-normed vectors, and treating the classification problem as a cosine proximity regression problem on this sphere. We test our method on a dataset of 300 million high-resolution images with 17,000 labels, where it yields considerably faster convergence, as well as a 7% higher mean average precision compared to logistic regression.
Tasks Image Classification
Published 2016-07-19
URL http://arxiv.org/abs/1607.05691v1
PDF http://arxiv.org/pdf/1607.05691v1.pdf
PWC https://paperswithcode.com/paper/information-theoretical-label-embeddings-for
Repo
Framework
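
A minimal sketch of the training setup described above: labels are mapped to dense unit-norm vectors, a multi-label target is the normalised sum of its label vectors, and a small network regresses onto the sphere with a cosine loss. The random label embedding and tiny Keras model below are placeholders; the paper builds its embedding in an information-theoretic way rather than at random and trains on 300 million images.

```python
import numpy as np
import tensorflow as tf

# Sketch of the label-embedding idea: each class gets a dense unit-norm
# target vector, a multi-label example's target is the normalised sum of its
# label vectors, and the network regresses onto that sphere with a cosine
# loss instead of a sigmoid cross-entropy over all labels. The random label
# embedding and the tiny model are placeholders, not the paper's setup.
n_labels, embed_dim, n_features = 1000, 64, 256

label_embedding = np.random.randn(n_labels, embed_dim).astype("float32")
label_embedding /= np.linalg.norm(label_embedding, axis=1, keepdims=True)

def targets_from_labels(label_sets):
    """Map each example's set of label indices to a unit vector on the sphere."""
    out = np.zeros((len(label_sets), embed_dim), dtype="float32")
    for i, labels in enumerate(label_sets):
        v = label_embedding[list(labels)].sum(axis=0)
        out[i] = v / (np.linalg.norm(v) + 1e-8)
    return out

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(embed_dim),
])
model.compile(optimizer="adam", loss=tf.keras.losses.CosineSimilarity())

# Dummy features and label sets standing in for image data.
x = np.random.randn(512, n_features).astype("float32")
y = targets_from_labels([{i % n_labels, (3 * i) % n_labels} for i in range(512)])
model.fit(x, y, epochs=2, batch_size=64, verbose=0)

# At test time, predictions are matched back to the nearest label vectors.
pred = model.predict(x[:4], verbose=0)
print("nearest label per example:", np.argmax(pred @ label_embedding.T, axis=1))
```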

Online Optimization of Smoothed Piecewise Constant Functions

Title Online Optimization of Smoothed Piecewise Constant Functions
Authors Vincent Cohen-Addad, Varun Kanade
Abstract We study online optimization of smoothed piecewise constant functions over the domain [0, 1). This is motivated by the problem of adaptively picking parameters of learning algorithms, as in the framework recently introduced by Gupta and Roughgarden (2016). The majority of the machine learning literature has focused on Lipschitz-continuous functions or functions with bounded gradients. This is with good reason: any learning algorithm suffers linear regret even against piecewise constant functions that are chosen adversarially, arguably the simplest of non-Lipschitz-continuous functions. The smoothed setting we consider is inspired by the seminal work of Spielman and Teng (2004) and the recent work of Gupta and Roughgarden; in this setting, the sequence of functions may be chosen by an adversary, but with some uncertainty in the location of the discontinuities. We give algorithms that achieve sublinear regret in the full information and bandit settings.
Tasks
Published 2016-04-07
URL http://arxiv.org/abs/1604.01999v2
PDF http://arxiv.org/pdf/1604.01999v2.pdf
PWC https://paperswithcode.com/paper/online-optimization-of-smoothed-piecewise
Repo
Framework
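
To illustrate the smoothed full-information setting, the sketch below runs exponential weights over a grid of candidate points in [0, 1) against piecewise constant losses whose breakpoint is perturbed by uniform noise each round. The grid size, learning rate, and noise width are arbitrary illustrative choices, not the algorithms or parameters analysed in the paper.

```python
import numpy as np

# Sketch of the full-information setting: at each round the adversary picks
# a one-step piecewise constant loss on [0, 1), but the discontinuity
# location is smoothed by uniform noise; the learner runs exponential
# weights over a fine grid of candidate points. Grid size, noise width, and
# learning rate are arbitrary choices for illustration, not the paper's.
rng = np.random.default_rng(0)
T, n_grid, eta, sigma = 2000, 200, 0.1, 0.05
grid = (np.arange(n_grid) + 0.5) / n_grid
log_w = np.zeros(n_grid)
total_loss, best_total = 0.0, np.zeros(n_grid)

for t in range(T):
    # Adversarial breakpoint, perturbed by smoothing noise and kept in [0, 1).
    breakpoint_ = np.clip(0.3 + sigma * rng.uniform(-1, 1), 0.0, 1.0)
    losses = np.where(grid < breakpoint_, 1.0, 0.0)     # loss 1 left of the jump
    probs = np.exp(log_w - log_w.max())
    probs /= probs.sum()
    total_loss += probs @ losses                        # expected loss of the learner
    best_total += losses
    log_w -= eta * losses                               # exponential-weights update

regret = total_loss - best_total.min()
print(f"cumulative regret over {T} rounds: {regret:.1f}")
```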

Modeling Language Change in Historical Corpora: The Case of Portuguese

Title Modeling Language Change in Historical Corpora: The Case of Portuguese
Authors Marcos Zampieri, Shervin Malmasi, Mark Dras
Abstract This paper presents a number of experiments to model changes in a historical Portuguese corpus composed of literary texts for the purpose of temporal text classification. Algorithms were trained to classify texts with respect to their publication date taking into account lexical variation represented as word n-grams, and morphosyntactic variation represented by part-of-speech (POS) distribution. We report results of 99.8% accuracy using word unigram features with a Support Vector Machines classifier to predict the publication date of documents in time intervals of both one century and half a century. A feature analysis is performed to investigate the most informative features for this task and how they are linked to language change.
Tasks Text Classification
Published 2016-09-30
URL http://arxiv.org/abs/1610.00030v1
PDF http://arxiv.org/pdf/1610.00030v1.pdf
PWC https://paperswithcode.com/paper/modeling-language-change-in-historical
Repo
Framework
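
A minimal scikit-learn sketch of the reported setup, word unigram features feeding a linear SVM that predicts the period of publication; the four toy documents and their labels below are placeholders, not the historical Portuguese corpus used in the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Word unigram features feeding a linear SVM that predicts the century of
# publication. The four toy "documents" and their labels are placeholders,
# not the historical Portuguese corpus used in the paper.
texts = [
    "vossa merce ha de saber que o reino",        # pretend 17th-century prose
    "el rei mandou fazer grandes festas",
    "o senhor doutor chegou ontem de comboio",    # pretend 19th-century prose
    "a cidade moderna cresce com as fabricas",
]
centuries = ["1600", "1600", "1800", "1800"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 1)),   # word unigrams, as in the paper
    LinearSVC(),
)
model.fit(texts, centuries)
print(model.predict(["o rei e a corte partiram para o reino"]))
```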

Kissing Cuisines: Exploring Worldwide Culinary Habits on the Web

Title Kissing Cuisines: Exploring Worldwide Culinary Habits on the Web
Authors Sina Sajadmanesh, Sina Jafarzadeh, Seyed Ali Osia, Hamid R. Rabiee, Hamed Haddadi, Yelena Mejova, Mirco Musolesi, Emiliano De Cristofaro, Gianluca Stringhini
Abstract Food and nutrition occupy an increasingly prevalent space on the web, and dishes and recipes shared online provide an invaluable mirror into culinary cultures and attitudes around the world. More specifically, ingredients, flavors, and nutrition information become strong signals of the taste preferences of individuals and civilizations. However, there is little understanding of these palate varieties. In this paper, we present a large-scale study of recipes published on the web and their content, aiming to understand cuisines and culinary habits around the world. Using a database of more than 157K recipes from over 200 different cuisines, we analyze ingredients, flavors, and nutritional values which distinguish dishes from different regions, and use this knowledge to assess the predictability of recipes from different cuisines. We then use country health statistics to understand the relation between these factors and health indicators of different nations, such as obesity, diabetes, migration, and health expenditure. Our results confirm the strong effects of geographical and cultural similarities on recipes, health indicators, and culinary preferences across the globe.
Tasks
Published 2016-10-26
URL http://arxiv.org/abs/1610.08469v4
PDF http://arxiv.org/pdf/1610.08469v4.pdf
PWC https://paperswithcode.com/paper/kissing-cuisines-exploring-worldwide-culinary
Repo
Framework

Anytime Monte Carlo

Title Anytime Monte Carlo
Authors Lawrence M. Murray, Sumeetpal Singh, Pierre E. Jacob, Anthony Lee
Abstract A Monte Carlo algorithm typically simulates some prescribed number of samples, taking some random real time to complete the computations necessary. This work considers the converse: to impose a real-time budget on the computation, so that the number of samples simulated is random. To complicate matters, the real time taken for each simulation may depend on the sample produced, so that the samples themselves are not independent of their number, and a length bias with respect to compute time is apparent. This is especially problematic when a Markov chain Monte Carlo (MCMC) algorithm is used and the final state of the Markov chain—rather than an average over all states—is required. The length bias does not diminish with the compute budget in this case. It occurs, for example, in sequential Monte Carlo (SMC) algorithms. We propose an anytime framework to address the concern, using a continuous-time Markov jump process to study the progress of the computation in real time. We show that the length bias can be eliminated for any MCMC algorithm by using a multiple chain construction. The utility of this construction is demonstrated on a large-scale SMC-squared implementation, using four billion particles distributed across a cluster of 128 graphics processing units on the Amazon EC2 service. The anytime framework imposes a real-time budget on the MCMC move steps within SMC-squared, ensuring that all processors are simultaneously ready for the resampling step, demonstrably reducing wait times and providing substantial control over the total compute budget.
Tasks
Published 2016-12-10
URL http://arxiv.org/abs/1612.03319v2
PDF http://arxiv.org/pdf/1612.03319v2.pdf
PWC https://paperswithcode.com/paper/anytime-monte-carlo
Repo
Framework
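
The core "anytime" mechanic, a wall-clock budget instead of a fixed iteration count, can be sketched with a simple Metropolis-Hastings loop. Note the comment in the code: keeping the state at the deadline is length-biased when iteration time depends on the state, and the paper's multiple-chain construction (not reproduced here) is what removes that bias.

```python
import time
import numpy as np

# Sketch of the "anytime" idea: run a Metropolis-Hastings chain under a
# wall-clock budget rather than for a fixed number of iterations. Simply
# keeping the state at the deadline is length-biased when iteration time
# depends on the state; the paper's multiple-chain construction removes
# that bias, which this minimal sketch does not attempt to reproduce.
def log_target(x):
    return -0.5 * x ** 2          # standard normal, as a stand-in target

def anytime_mh(budget_seconds, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    n_iters = 0
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        proposal = x + step * rng.normal()
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        n_iters += 1
    return x, n_iters             # the number of iterations is random

state, iters = anytime_mh(budget_seconds=0.1)
print(f"finished {iters} iterations within the budget, final state {state:.3f}")
```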

The languages of actions, formal grammars and qualitative modeling of companies

Title The languages of actions, formal grammars and qualitative modeling of companies
Authors Vladislav B Kovchegov
Abstract In this paper we discuss methods of using the language of actions, formal languages, and grammars for qualitative conceptual linguistic modeling of companies as technological and human institutions. The main problem arising from this discussion is to find and describe a language structure for the external and internal flows of information in companies. We anticipate that the language structure of the external and internal base flows determines the structure of companies. In the structural modeling of an abstract industrial company, an internal base flow of information is constructed as a certain flow of words composed in the theoretical parts-processes-actions language. A language of procedures is found for the external base flow of information of an insurance company. A formal stochastic grammar for this language of procedures is found by statistical methods and is used to understand tendencies in the health care industry. We also present a model of human communications as a random walk on a semantic tree.
Tasks
Published 2016-08-19
URL http://arxiv.org/abs/1608.05694v1
PDF http://arxiv.org/pdf/1608.05694v1.pdf
PWC https://paperswithcode.com/paper/the-languages-of-actions-formal-grammars-and
Repo
Framework

Localized Lasso for High-Dimensional Regression

Title Localized Lasso for High-Dimensional Regression
Authors Makoto Yamada, Koh Takeuchi, Tomoharu Iwata, John Shawe-Taylor, Samuel Kaski
Abstract We introduce the localized Lasso, which is suited for learning models that are both interpretable and have a high predictive power in problems with high dimensionality $d$ and small sample size $n$. More specifically, we consider a function defined by local sparse models, one at each data point. We introduce sample-wise network regularization to borrow strength across the models, and sample-wise exclusive group sparsity (a.k.a., $\ell_{1,2}$ norm) to introduce diversity into the choice of feature sets in the local models. The local models are interpretable in terms of similarity of their sparsity patterns. The cost function is convex, and thus has a globally optimal solution. Moreover, we propose a simple yet efficient iterative least-squares based optimization procedure for the localized Lasso, which does not need a tuning parameter, and is guaranteed to converge to a globally optimal solution. The solution is empirically shown to outperform alternatives for both simulated and genomic personalized medicine data.
Tasks
Published 2016-03-22
URL http://arxiv.org/abs/1603.06743v3
PDF http://arxiv.org/pdf/1603.06743v3.pdf
PWC https://paperswithcode.com/paper/localized-lasso-for-high-dimensional
Repo
Framework
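
The abstract names three ingredients: a local model per sample, sample-wise network regularization tying neighbouring models together, and sample-wise exclusive group sparsity (the $\ell_{1,2}$ norm) encouraging diverse sparsity patterns. Assembled naively, and treated here as a hedged reconstruction rather than the paper's exact objective, these pieces give a convex cost of roughly the form

$$\min_{\boldsymbol{w}_1,\dots,\boldsymbol{w}_n} \; \sum_{i=1}^{n} \bigl(y_i - \boldsymbol{w}_i^\top \boldsymbol{x}_i\bigr)^2 \;+\; \lambda_1 \sum_{i,j} r_{ij}\,\lVert \boldsymbol{w}_i - \boldsymbol{w}_j \rVert_2 \;+\; \lambda_2 \sum_{i=1}^{n} \lVert \boldsymbol{w}_i \rVert_1^2,$$

where $\boldsymbol{w}_i \in \mathbb{R}^d$ is the local model at sample $i$, $r_{ij} \ge 0$ encodes the sample network, and the squared $\ell_1$ penalty on each $\boldsymbol{w}_i$ plays the role of the exclusive group term with samples as groups. Every term is convex, which is consistent with the abstract's claim of a globally optimal solution; the exact weights and norm definitions in the paper may differ.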

Photo Stylistic Brush: Robust Style Transfer via Superpixel-Based Bipartite Graph

Title Photo Stylistic Brush: Robust Style Transfer via Superpixel-Based Bipartite Graph
Authors Jiaying Liu, Wenhan Yang, Xiaoyan Sun, Wenjun Zeng
Abstract With the rapid development of social networks and multimedia technology, customized image and video stylization has been widely used in various social-media applications. In this paper, we explore the problem of exemplar-based photo style transfer, which provides a flexible and convenient way to create striking visual impressions. Rather than investigating fixed artistic patterns to represent certain styles as was done in some previous works, our work emphasizes styles related to a series of visual effects in the photograph, e.g., color, tone, and contrast. We propose a photo stylistic brush, an automatic, robust style transfer approach based on a Superpixel-based BIpartite Graph (SuperBIG). A two-step bipartite graph algorithm with different granularity levels is employed to aggregate pixels into superpixels and find their correspondences. In the first step, with the extracted hierarchical features, a bipartite graph is constructed to describe the content similarity for pixel partition to produce superpixels. In the second step, superpixels in the input/reference image are rematched to form a new superpixel-based bipartite graph, and superpixel-level correspondences are generated by a bipartite matching. Finally, the refined correspondence guides SuperBIG to perform the transformation in a decorrelated color space. Extensive experimental results demonstrate the effectiveness and robustness of the proposed method for transferring various styles of exemplar images, even in some challenging cases, such as night images.
Tasks Style Transfer
Published 2016-06-13
URL http://arxiv.org/abs/1606.03871v2
PDF http://arxiv.org/pdf/1606.03871v2.pdf
PWC https://paperswithcode.com/paper/photo-stylistic-brush-robust-style-transfer
Repo
Framework
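
A rough sketch of the superpixel-matching step: segment both images with SLIC, match superpixels one-to-one by mean-colour similarity via bipartite (Hungarian) assignment, and shift each content superpixel towards its match. The mean-colour feature, single matching stage, and RGB transfer below are simplifications of the paper's hierarchical features, two-step bipartite graph, and decorrelated colour space.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from skimage import data, img_as_float
from skimage.segmentation import slic

# Rough sketch of the superpixel-matching step: segment both images into
# superpixels, match them one-to-one by mean-colour similarity (a crude
# stand-in for the paper's hierarchical features and two-step bipartite
# graph), then shift each content superpixel's mean towards its matched
# reference superpixel. The real method also works in a decorrelated
# colour space, which is not reproduced here.
content = img_as_float(data.astronaut())
style = img_as_float(data.chelsea())

def superpixel_means(image, n_segments=200):
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    means = np.array([image[labels == s].mean(axis=0) for s in range(labels.max() + 1)])
    return labels, means

content_labels, content_means = superpixel_means(content)
_, style_means = superpixel_means(style)

# Bipartite matching between superpixels by mean-colour distance.
n = min(len(content_means), len(style_means))
cost = np.linalg.norm(content_means[:n, None, :] - style_means[None, :n, :], axis=2)
rows, cols = linear_sum_assignment(cost)

# Transfer: move each matched content superpixel's mean colour to its match.
output = content.copy()
for r, c in zip(rows, cols):
    mask = content_labels == r
    output[mask] += style_means[c] - content_means[r]
output = np.clip(output, 0.0, 1.0)
print("stylised image shape:", output.shape)
```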

A Markovian-based Approach for Daily Living Activities Recognition

Title A Markovian-based Approach for Daily Living Activities Recognition
Authors Zaineb Liouane, Tayeb Lemlouma, Philippe Roose, Frédéric Weis, Messaoud Hassani
Abstract Recognizing the activities of daily living plays an important role in healthcare. It is necessary to use an adapted model that simulates human behavior in a domestic space, so that the patient can be monitored harmoniously and an intervention made at the necessary time. In this paper, we tackle this problem using the hierarchical hidden Markov model for representing and recognizing complex indoor activities. We propose a new grammar, called “Home By Room Activities Language”, to manage the complexity of human scenarios and to account for abnormal activities.
Tasks
Published 2016-03-10
URL http://arxiv.org/abs/1603.03251v1
PDF http://arxiv.org/pdf/1603.03251v1.pdf
PWC https://paperswithcode.com/paper/a-markovian-based-approach-for-daily-living
Repo
Framework
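
For readers unfamiliar with the Markovian machinery, here is a minimal Viterbi decoding sketch in which hidden states are activities and observations are rooms reported by motion sensors; the two-activity model and all probabilities are invented, and the paper's hierarchical HMM and "Home By Room Activities Language" are considerably richer.

```python
import numpy as np

# Minimal Viterbi sketch for the Markovian view of daily-living activities:
# hidden states are activities, observations are the room reported by motion
# sensors. The two-activity, three-room model and all probabilities below
# are invented for illustration; the paper uses a hierarchical HMM.
activities = ["sleeping", "cooking"]
rooms = ["bedroom", "kitchen", "living_room"]

start_p = np.array([0.6, 0.4])
trans_p = np.array([[0.8, 0.2],          # P(next activity | current activity)
                    [0.3, 0.7]])
emit_p = np.array([[0.8, 0.1, 0.1],      # P(room | activity)
                   [0.05, 0.85, 0.1]])

def viterbi(obs):
    """Most likely activity sequence for a sequence of observed room indices."""
    n_states, T = len(start_p), len(obs)
    log_delta = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(trans_p)       # (from, to)
        back[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [activities[s] for s in reversed(path)]

observed = [rooms.index(r) for r in ["bedroom", "bedroom", "kitchen", "kitchen", "living_room"]]
print(viterbi(observed))
```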

From Community Detection to Community Deception

Title From Community Detection to Community Deception
Authors Valeria Fionda, Giuseppe Pirrò
Abstract The community deception problem is about how to hide a target community C from community detection algorithms. The need for deception emerges whenever a group of entities (e.g., activists, police forces) wants to cooperate while concealing its existence as a community. In this paper we introduce and formalize the community deception problem. To solve this problem, we describe algorithms that carefully rewire the connections of C’s members. We experimentally show how several existing community detection algorithms can be deceived, and quantify the level of deception by introducing a deception score. We believe that our study is intriguing since, while showing how deception can be realized, it raises awareness for the design of novel detection algorithms robust to deception techniques.
Tasks Community Detection
Published 2016-09-01
URL http://arxiv.org/abs/1609.00149v1
PDF http://arxiv.org/pdf/1609.00149v1.pdf
PWC https://paperswithcode.com/paper/from-community-detection-to-community-1
Repo
Framework
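
A small networkx sketch of the general idea (not the paper's algorithm or deception score): take a planted community C, delete some of its internal edges, add edges from its members to outsiders, and observe how a standard detection algorithm scatters C across the communities it finds.

```python
import random
import networkx as nx
from networkx.algorithms import community

# Sketch of the deception idea: delete a few of C's internal edges and add
# edges from C's members to outside nodes, then see how a detection
# algorithm splits C across the communities it finds. The rewiring rule and
# the "spread" measure below are simple illustrations, not the paper's
# deception algorithm or deception score.
random.seed(0)
G = nx.planted_partition_graph(4, 25, p_in=0.4, p_out=0.02, seed=0)
C = set(range(25))                      # hide the first planted block

def spread(graph, members):
    """Over how many detected communities are C's members scattered?"""
    detected = community.greedy_modularity_communities(graph)
    return sum(1 for block in detected if members & set(block))

print("communities touched by C before rewiring:", spread(G, C))

# Deceptive rewiring: drop internal edges of C, add external ones.
internal = [e for e in G.edges() if e[0] in C and e[1] in C]
for u, v in random.sample(internal, k=len(internal) // 2):
    G.remove_edge(u, v)
outside = [n for n in G.nodes() if n not in C]
for u in C:
    G.add_edge(u, random.choice(outside))

print("communities touched by C after rewiring:", spread(G, C))
```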

Learning from Untrusted Data

Title Learning from Untrusted Data
Authors Moses Charikar, Jacob Steinhardt, Gregory Valiant
Abstract The vast majority of theoretical results in machine learning and statistics assume that the available training data is a reasonably reliable reflection of the phenomena to be learned or estimated. Similarly, the majority of machine learning and statistical techniques used in practice are brittle to the presence of large amounts of biased or malicious data. In this work we consider two frameworks in which to study estimation, learning, and optimization in the presence of significant fractions of arbitrary data. The first framework, list-decodable learning, asks whether it is possible to return a list of answers, with the guarantee that at least one of them is accurate. For example, given a dataset of $n$ points for which an unknown subset of $\alpha n$ points are drawn from a distribution of interest, and no assumptions are made about the remaining $(1-\alpha)n$ points, is it possible to return a list of $\operatorname{poly}(1/\alpha)$ answers, one of which is correct? The second framework, which we term the semi-verified learning model, considers the extent to which a small dataset of trusted data (drawn from the distribution in question) can be leveraged to enable the accurate extraction of information from a much larger but untrusted dataset (of which only an $\alpha$-fraction is drawn from the distribution). We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This general result has immediate implications for robust estimation in a number of settings, including for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary.
Tasks Stochastic Optimization
Published 2016-11-07
URL http://arxiv.org/abs/1611.02315v2
PDF http://arxiv.org/pdf/1611.02315v2.pdf
PWC https://paperswithcode.com/paper/learning-from-untrusted-data
Repo
Framework
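
The semi-verified flavour can be sketched as follows: cluster the untrusted data into a short list of candidate means (the list-decodable step) and let a handful of trusted points select among them. K-means and the Gaussian corruption model below are illustrative stand-ins and carry none of the paper's guarantees.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of the semi-verified flavour: only an alpha-fraction of a large
# untrusted dataset comes from the real distribution; clustering produces a
# short list of candidate means (the list-decodable step), and a handful of
# trusted points selects among them. K-means and the Gaussian corruption
# model are illustrative stand-ins, not the paper's estimators.
rng = np.random.default_rng(0)
alpha, dim, n_untrusted = 0.2, 5, 5000
true_mean = np.full(dim, 3.0)

good = rng.normal(true_mean, 1.0, size=(int(alpha * n_untrusted), dim))
bad_centers = rng.uniform(-10, 10, size=(4, dim))                 # adversarial clusters
bad = rng.normal(bad_centers[rng.integers(0, 4, n_untrusted - len(good))], 1.0)
untrusted = np.vstack([good, bad])
trusted = rng.normal(true_mean, 1.0, size=(10, dim))              # tiny verified sample

kmeans = KMeans(n_clusters=int(2 / alpha), n_init=10, random_state=0).fit(untrusted)
candidates = kmeans.cluster_centers_                              # list of candidate means
chosen = candidates[np.argmin(np.linalg.norm(candidates - trusted.mean(axis=0), axis=1))]

print("naive mean error:        ", round(float(np.linalg.norm(untrusted.mean(axis=0) - true_mean)), 2))
print("semi-verified mean error:", round(float(np.linalg.norm(chosen - true_mean)), 2))
```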