July 27, 2019

2931 words 14 mins read

Paper Group ANR 739

Phase Transitions in Approximate Ranking. Low Precision Neural Networks using Subband Decomposition. Solving Uncalibrated Photometric Stereo Using Fewer Images by Jointly Optimizing Low-rank Matrix Completion and Integrability. Learning Deep Mean Field Games for Modeling Large Population Behavior. Joint Denoising / Compression of Image Contours via …

Phase Transitions in Approximate Ranking

Title Phase Transitions in Approximate Ranking
Authors Chao Gao
Abstract We study the problem of approximate ranking from observations of pairwise interactions. The goal is to estimate the underlying ranks of $n$ objects from data through interactions of comparison or collaboration. Under a general framework of approximate ranking models, we characterize the exact optimal statistical error rates of estimating the underlying ranks. We discover important phase transition boundaries of the optimal error rates. Depending on the value of the signal-to-noise ratio (SNR) parameter, the optimal rate, as a function of SNR, is either trivial, polynomial, exponential or zero. The four corresponding regimes thus have completely different error behaviors. To the best of our knowledge, this phenomenon, especially the phase transition between the polynomial and the exponential rates, has not been discovered before.
Tasks
Published 2017-11-30
URL https://arxiv.org/abs/1711.11189v2
PDF https://arxiv.org/pdf/1711.11189v2.pdf
PWC https://paperswithcode.com/paper/phase-transitions-in-approximate-ranking
Repo
Framework

Low Precision Neural Networks using Subband Decomposition

Title Low Precision Neural Networks using Subband Decomposition
Authors Sek Chai, Aswin Raghavan, David Zhang, Mohamed Amer, Tim Shields
Abstract Large-scale deep neural networks (DNN) have been successfully used in a number of tasks from image recognition to natural language processing. They are trained using large training sets on large models, making them computationally and memory intensive. As such, there is much interest in research on faster training and test time. In this paper, we present a unique approach that uses lower-precision weights for a more efficient and faster training phase. We separate imagery into different frequency bands (e.g. with different information content) so that the neural net can learn better using fewer bits. We present this approach as a complement to existing methods such as pruning network connections and encoding learned weights. We show results where this approach supports more stable learning with a 2-4X reduction in precision and a 17X reduction in DNN parameters.
Tasks
Published 2017-03-24
URL http://arxiv.org/abs/1703.08595v1
PDF http://arxiv.org/pdf/1703.08595v1.pdf
PWC https://paperswithcode.com/paper/low-precision-neural-networks-using-subband
Repo
Framework
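
The entry above hinges on two ingredients: splitting imagery into frequency subbands and training with fewer bits. The sketch below is a minimal illustration of those ingredients, not the authors' pipeline; it uses a one-level Haar-style split into four subbands plus a uniform weight quantizer, and the function names and the 4-bit choice are assumptions for the example.

```python
import numpy as np

def haar_subbands(img):
    """Split a grayscale image (even height/width) into four frequency subbands
    (approximation, horizontal, vertical, diagonal) via a one-level Haar transform."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-frequency content
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def quantize(w, bits):
    """Uniformly quantize an array of weights to the given bit width."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    step = (hi - lo) / levels if hi > lo else 1.0
    return np.round((w - lo) / step) * step + lo

img = np.random.rand(64, 64)
ll, lh, hl, hh = haar_subbands(img)             # each subband feeds its own sub-network
weights = np.random.randn(128, 64).astype(np.float32)
w4 = quantize(weights, bits=4)                  # train at reduced precision
```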

Solving Uncalibrated Photometric Stereo Using Fewer Images by Jointly Optimizing Low-rank Matrix Completion and Integrability

Title Solving Uncalibrated Photometric Stereo Using Fewer Images by Jointly Optimizing Low-rank Matrix Completion and Integrability
Authors Soumyadip Sengupta, Hao Zhou, Walter Forkel, Ronen Basri, Tom Goldstein, David W. Jacobs
Abstract We introduce a new, integrated approach to uncalibrated photometric stereo. We perform 3D reconstruction of Lambertian objects using multiple images produced by unknown, directional light sources. We show how to formulate a single optimization that includes rank and integrability constraints, while also allowing for missing data. We then solve this optimization using the Alternating Direction Method of Multipliers (ADMM). We conduct extensive experimental evaluation on real and synthetic data sets. Our integrated approach is particularly valuable when performing photometric stereo using as few as 4-6 images, since the integrability constraint is capable of improving estimation of the linear subspace of possible solutions. We show good improvements over prior work in these cases.
Tasks 3D Reconstruction, Low-Rank Matrix Completion, Matrix Completion
Published 2017-02-02
URL http://arxiv.org/abs/1702.00506v1
PDF http://arxiv.org/pdf/1702.00506v1.pdf
PWC https://paperswithcode.com/paper/solving-uncalibrated-photometric-stereo-using
Repo
Framework
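
As a rough illustration of the low-rank matrix completion component mentioned above, here is a Soft-Impute-style singular value thresholding loop. It is not the paper's ADMM solver and omits the integrability constraint entirely; the function names and parameters are assumptions for the sketch.

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete_low_rank(M, mask, tau=0.5, iters=200):
    """Soft-Impute-style low-rank completion of M on observed entries (mask == 1)."""
    X = np.zeros_like(M)
    for _ in range(iters):
        # keep observed entries from M, fill missing ones with the current estimate
        Z = mask * M + (1 - mask) * X
        X = svt(Z, tau)
    return X

# toy usage: recover a rank-2 matrix from roughly 60% of its entries
rng = np.random.default_rng(0)
M = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 30))
mask = (rng.random(M.shape) < 0.6).astype(float)
X = complete_low_rank(M, mask)
print(np.linalg.norm((X - M) * (1 - mask)) / np.linalg.norm(M * (1 - mask)))
```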

Learning Deep Mean Field Games for Modeling Large Population Behavior

Title Learning Deep Mean Field Games for Modeling Large Population Behavior
Authors Jiachen Yang, Xiaojing Ye, Rakshit Trivedi, Huan Xu, Hongyuan Zha
Abstract We consider the problem of representing collective behavior of large populations and predicting the evolution of a population distribution over a discrete state space. A discrete time mean field game (MFG) is motivated as an interpretable model founded on game theory for understanding the aggregate effect of individual actions and predicting the temporal evolution of population distributions. We achieve a synthesis of MFG and Markov decision processes (MDP) by showing that a special MFG is reducible to an MDP. This enables us to broaden the scope of mean field game theory and infer MFG models of large real-world systems via deep inverse reinforcement learning. Our method learns both the reward function and forward dynamics of an MFG from real data, and we report the first empirical test of a mean field game model of a real-world social media population.
Tasks
Published 2017-11-08
URL http://arxiv.org/abs/1711.03156v2
PDF http://arxiv.org/pdf/1711.03156v2.pdf
PWC https://paperswithcode.com/paper/learning-deep-mean-field-games-for-modeling
Repo
Framework
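
The discrete-time MFG described above predicts how a population distribution over a discrete state space evolves. The snippet below only illustrates the forward part of that picture, propagating a distribution under a fixed row-stochastic transition matrix; it says nothing about the reward learning or inverse RL in the paper, and all names are hypothetical.

```python
import numpy as np

def propagate(mu0, P, T):
    """Evolve a population distribution over discrete states for T steps.
    mu0: (S,) initial distribution; P: (S, S) row-stochastic transition matrix
    induced by the population's (fixed) policy."""
    mu = mu0.copy()
    trajectory = [mu]
    for _ in range(T):
        mu = mu @ P          # forward equation: mu_{t+1}(s') = sum_s mu_t(s) P(s' | s)
        trajectory.append(mu)
    return np.stack(trajectory)

S = 5
P = np.random.rand(S, S); P /= P.sum(axis=1, keepdims=True)
mu0 = np.ones(S) / S
print(propagate(mu0, P, T=10)[-1])   # distribution after 10 steps
```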

Joint Denoising / Compression of Image Contours via Shape Prior and Context Tree

Title Joint Denoising / Compression of Image Contours via Shape Prior and Context Tree
Authors Amin Zheng, Gene Cheung, Dinei Florencio
Abstract With the advent of depth sensing technologies, the extraction of object contours in images, a common and important pre-processing step for later higher-level computer vision tasks like object detection and human action recognition, has become easier. However, acquisition noise in captured depth images means that detected contours suffer from unavoidable errors. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out the aforementioned application-specific tasks using the decoded contours as input. We first prove theoretically that, in general, a joint denoising / compression approach can outperform a separate two-stage approach that first denoises and then encodes contours lossily. Adopting a joint approach, we first propose a burst error model that models typical errors encountered in an observed string y of directional edges. We then formulate a rate-constrained maximum a posteriori (MAP) problem that trades off the posterior probability p(x'|y) of an estimated string x' given y with its code rate R(x'). We design a dynamic programming (DP) algorithm that solves the posed problem optimally, and propose a compact context representation called the total suffix tree (TST) that can reduce the complexity of the algorithm dramatically. Experimental results show that our joint denoising / compression scheme noticeably outperforms a competing separate scheme in rate-distortion performance.
Tasks Denoising, Object Detection, Temporal Action Localization
Published 2017-04-30
URL http://arxiv.org/abs/1705.00268v1
PDF http://arxiv.org/pdf/1705.00268v1.pdf
PWC https://paperswithcode.com/paper/joint-denoising-compression-of-image-contours
Repo
Framework
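
To make the rate-constrained MAP formulation above concrete, here is a toy Viterbi-style DP over a three-symbol contour alphabet. It substitutes a first-order context model for the paper's context tree / TST and i.i.d. symbol errors for the burst error model, and it folds the code rate R(x) into the prior term via a Lagrangian weight; treat it as an illustration of the trade-off, not the paper's algorithm.

```python
import numpy as np

SYMBOLS = ['L', 'S', 'R']          # toy alphabet of directional edge symbols

def map_rate_dp(y, trans, err=0.1, lam=1.0):
    """Toy rate-constrained MAP decoding of a contour string.
    Maximizes log p(y|x) + log p(x) - lam * R(x); approximating R(x) by the
    self-information of x under `trans` gives the (1 + lam) * log p(x) term."""
    n, K = len(y), len(SYMBOLS)
    emit = lambda x, o: np.log(1 - err) if x == o else np.log(err / (K - 1))
    score = np.full((n, K), -np.inf)
    back = np.zeros((n, K), dtype=int)
    for k in range(K):
        score[0, k] = emit(SYMBOLS[k], y[0]) + (1 + lam) * np.log(1.0 / K)
    for t in range(1, n):
        for k in range(K):
            # transition term covers both the prior p(x_t | x_{t-1}) and the
            # approximate code rate, folded into one Lagrangian weight
            cand = score[t - 1] + (1 + lam) * np.log(trans[:, k])
            back[t, k] = int(np.argmax(cand))
            score[t, k] = cand[back[t, k]] + emit(SYMBOLS[k], y[t])
    # backtrack the best corrected string
    k = int(np.argmax(score[-1])); out = [SYMBOLS[k]]
    for t in range(n - 1, 0, -1):
        k = back[t, k]; out.append(SYMBOLS[k])
    return ''.join(reversed(out))

trans = np.array([[0.8, 0.15, 0.05], [0.1, 0.8, 0.1], [0.05, 0.15, 0.8]])
print(map_rate_dp('SSLSSRRS', trans))
```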

Scalable, Trie-based Approximate Entity Extraction for Real-Time Financial Transaction Screening

Title Scalable, Trie-based Approximate Entity Extraction for Real-Time Financial Transaction Screening
Authors Emrah Budur
Abstract Financial institutions have to screen their transactions to ensure that they are not affiliated with terrorism entities. Developing appropriate solutions to detect such affiliations precisely, while avoiding any kind of interruption to large amounts of legitimate transactions, is essential. In this paper, we present the building blocks of a scalable solution that may help financial institutions build their own software to extract terrorism entities from both structured and unstructured financial messages in real time, using an approximate similarity matching approach.
Tasks Entity Extraction
Published 2017-01-12
URL http://arxiv.org/abs/1701.03492v1
PDF http://arxiv.org/pdf/1701.03492v1.pdf
PWC https://paperswithcode.com/paper/scalable-trie-based-approximate-entity
Repo
Framework
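
The abstract above does not spell out its matching algorithm, so the following is a generic sketch of trie-based approximate entity lookup: entity names are stored in a trie and a Levenshtein DP row is threaded through a depth-first traversal, pruning whole subtrees once every cell exceeds the distance budget. The class names and the toy watchlist are hypothetical.

```python
class TrieNode:
    __slots__ = ("children", "entity")
    def __init__(self):
        self.children = {}
        self.entity = None          # set to the entity string at terminal nodes

def insert(root, entity):
    node = root
    for ch in entity:
        node = node.children.setdefault(ch, TrieNode())
    node.entity = entity

def fuzzy_lookup(root, query, max_dist=1):
    """Return (entity, distance) pairs within Levenshtein distance max_dist of query."""
    results = []
    first_row = list(range(len(query) + 1))
    def recurse(node, ch, prev_row):
        row = [prev_row[0] + 1]
        for i, qc in enumerate(query, start=1):
            row.append(min(row[i - 1] + 1,                  # insertion
                           prev_row[i] + 1,                 # deletion
                           prev_row[i - 1] + (qc != ch)))   # substitution / match
        if node.entity is not None and row[-1] <= max_dist:
            results.append((node.entity, row[-1]))
        if min(row) <= max_dist:                            # prune hopeless branches
            for c, child in node.children.items():
                recurse(child, c, row)
    for c, child in root.children.items():
        recurse(child, c, first_row)
    return results

root = TrieNode()
for name in ["acme trading", "acne tradings", "global partners"]:
    insert(root, name)
print(fuzzy_lookup(root, "acme tradings", max_dist=1))
```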

L$^3$-SVMs: Landmarks-based Linear Local Support Vectors Machines

Title L$^3$-SVMs: Landmarks-based Linear Local Support Vectors Machines
Authors Valentina Zantedeschi, Rémi Emonet, Marc Sebban
Abstract For their ability to capture non-linearities in the data and to scale to large training sets, local Support Vector Machines (SVMs) have received special attention during the past decade. In this paper, we introduce a new local SVM method, called L$^3$-SVMs, which clusters the input space, carries out dimensionality reduction by projecting the data on landmarks, and jointly learns a linear combination of local models. Simple and effective, our algorithm is also theoretically well-founded. Using the framework of Uniform Stability, we show that our SVM formulation comes with generalization guarantees on the true risk. Experiments based on the simplest configuration of our model (i.e. landmarks randomly selected, linear projection, linear kernel) show that L$^3$-SVMs is very competitive w.r.t. the state of the art and opens the door to new exciting lines of research.
Tasks Dimensionality Reduction
Published 2017-03-01
URL http://arxiv.org/abs/1703.00284v2
PDF http://arxiv.org/pdf/1703.00284v2.pdf
PWC https://paperswithcode.com/paper/l3-svms-landmarks-based-linear-local-support
Repo
Framework
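
A simplified sketch of the "simplest configuration" mentioned in the abstract (randomly selected landmarks, linear projection, linear kernel) follows. Unlike L$^3$-SVMs, it fits one linear SVM per cluster instead of jointly learning a linear combination of local models, so treat it as an approximation of the idea rather than the method itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# toy two-class data with a non-linear labeling rule
X = rng.normal(size=(600, 20))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# 1) cluster the input space
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
cluster = kmeans.labels_

# 2) project the data on randomly selected landmarks (linear "kernel" = dot product)
landmarks = X[rng.choice(len(X), size=30, replace=False)]
phi = X @ landmarks.T                          # (n_samples, n_landmarks)

# 3) fit one linear model per cluster (the paper learns the combination jointly)
models = {c: LinearSVC(dual=False).fit(phi[cluster == c], y[cluster == c])
          for c in np.unique(cluster)}

# predict: route each point to its cluster's local linear model
test_cluster = kmeans.predict(X)
pred = np.array([models[c].decision_function(phi[i:i + 1])[0] > 0
                 for i, c in enumerate(test_cluster)]).astype(int)
print("train accuracy:", (pred == y).mean())
```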

Improving Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms and Its Applications

Title Improving Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms and Its Applications
Authors Qinshi Wang, Wei Chen
Abstract We study combinatorial multi-armed bandit with probabilistically triggered arms (CMAB-T) and semi-bandit feedback. We resolve a serious issue in the prior CMAB-T studies where the regret bounds contain a possibly exponentially large factor of $1/p^*$, where $p^*$ is the minimum positive probability that an arm is triggered by any action. We address this issue by introducing a triggering probability modulated (TPM) bounded smoothness condition into the general CMAB-T framework, and show that many applications such as influence maximization bandit and combinatorial cascading bandit satisfy this TPM condition. As a result, we completely remove the factor of $1/p^*$ from the regret bounds, achieving significantly better regret bounds for influence maximization and cascading bandits than before. Finally, we provide lower bound results showing that the factor $1/p^*$ is unavoidable for general CMAB-T problems, suggesting that the TPM condition is crucial in removing this factor.
Tasks
Published 2017-03-05
URL http://arxiv.org/abs/1703.01610v4
PDF http://arxiv.org/pdf/1703.01610v4.pdf
PWC https://paperswithcode.com/paper/improving-regret-bounds-for-combinatorial
Repo
Framework

Generalized Self-Concordant Functions: A Recipe for Newton-Type Methods

Title Generalized Self-Concordant Functions: A Recipe for Newton-Type Methods
Authors Tianxiao Sun, Quoc Tran-Dinh
Abstract We study the smooth structure of convex functions by generalizing a powerful concept, so-called self-concordance, introduced by Nesterov and Nemirovskii in the early 1990s, to a broader class of convex functions, which we call generalized self-concordant functions. This notion allows us to develop a unified framework for designing Newton-type methods to solve convex optimization problems. The proposed theory provides a mathematical tool to analyze both local and global convergence of Newton-type methods without imposing unverifiable assumptions, as long as the underlying functionals fall into our generalized self-concordant function class. First, we introduce the class of generalized self-concordant functions, which covers standard self-concordant functions as a special case. Next, we establish several properties and key estimates of this function class, which can be used to design numerical methods. Then, we apply this theory to develop several Newton-type methods for solving a class of smooth convex optimization problems involving generalized self-concordant functions. We provide an explicit step-size for the damped-step Newton-type scheme which can guarantee global convergence without performing any globalization strategy. We also prove local quadratic convergence of this method and its full-step variant without requiring the Lipschitz continuity of the objective Hessian. Then, we extend our results to develop proximal Newton-type methods for a class of composite convex minimization problems involving generalized self-concordant functions. We also achieve both global and local convergence without additional assumptions. Finally, we verify our theoretical results via several numerical examples and compare them with existing methods.
Tasks
Published 2017-03-14
URL http://arxiv.org/abs/1703.04599v3
PDF http://arxiv.org/pdf/1703.04599v3.pdf
PWC https://paperswithcode.com/paper/generalized-self-concordant-functions-a
Repo
Framework
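
As a concrete instance of a damped-step Newton scheme with an explicit step size, the sketch below runs Newton's method on regularized logistic regression (a standard example of a generalized self-concordant objective) using the classical step 1/(1 + lambda) based on the Newton decrement. The paper derives its own step sizes tailored to its broader function class; this only illustrates the "no globalization strategy" idea.

```python
import numpy as np

def damped_newton_logreg(X, y, tol=1e-8, max_iter=50):
    """Damped-step Newton method for (lightly regularized) logistic regression.
    Uses the classical step size 1 / (1 + lambda), where lambda is the Newton
    decrement, so no line search or other globalization strategy is needed."""
    n, d = X.shape
    w = np.zeros(d)
    mu = 1e-6                                   # tiny ridge keeps the Hessian invertible
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / n + mu * w
        H = (X.T * (p * (1 - p))) @ X / n + mu * np.eye(d)
        step = np.linalg.solve(H, grad)
        lam = np.sqrt(grad @ step)              # Newton decrement lambda(w)
        if lam < tol:
            break
        w -= step / (1.0 + lam)                 # damped Newton update
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.3 * rng.normal(size=500) > 0).astype(float)
print(damped_newton_logreg(X, y))
```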

Random vector generation of a semantic space

Title Random vector generation of a semantic space
Authors Jean-François Delpech, Sabine Ploux
Abstract We show how random vectors and random projection can be implemented in the usual vector space model to construct a Euclidean semantic space from a French synonym dictionary. We evaluate theoretically the resulting noise and show the experimental distribution of the similarities of terms in a neighborhood according to the choice of parameters. We also show that the Schmidt orthogonalization process is applicable and can be used to separate homonyms with distinct semantic meanings. Neighboring terms are easily arranged into semantically significant clusters which are well suited to the generation of realistic lists of synonyms and to such applications as word selection for automatic text generation. This process, applicable to any language, can easily be extended to collocations, is extremely fast and can be updated in real time, whenever new synonyms are proposed.
Tasks Text Generation
Published 2017-03-05
URL http://arxiv.org/abs/1703.02031v1
PDF http://arxiv.org/pdf/1703.02031v1.pdf
PWC https://paperswithcode.com/paper/random-vector-generation-of-a-semantic-space
Repo
Framework
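
A minimal random-indexing sketch of the construction described above: each dictionary entry (synonym set) receives a random index vector, and a term's semantic vector is the sum of the index vectors of the entries containing it. The toy English synsets and the 300-dimensional choice are assumptions; the paper works from a French synonym dictionary and also covers orthogonalization and clustering, which are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 300

# toy "synonym dictionary": each entry lists terms considered synonymous
synsets = [
    ["car", "automobile", "vehicle"],
    ["vehicle", "truck", "lorry"],
    ["happy", "glad", "joyful"],
    ["glad", "pleased"],
]

# assign a random (approximately orthogonal) index vector to every entry
index_vectors = [rng.normal(size=DIM) / np.sqrt(DIM) for _ in synsets]

# a term's semantic vector is the sum of the index vectors of the entries it belongs to
terms = sorted({t for s in synsets for t in s})
vec = {t: np.zeros(DIM) for t in terms}
for iv, syns in zip(index_vectors, synsets):
    for t in syns:
        vec[t] += iv

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vec["car"], vec["vehicle"]))  # share an entry: clearly positive
print(cosine(vec["car"], vec["happy"]))    # unrelated: near zero in expectation
```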

An Empirical Study on Writer Identification & Verification from Intra-variable Individual Handwriting

Title An Empirical Study on Writer Identification & Verification from Intra-variable Individual Handwriting
Authors Chandranath Adak, Bidyut B. Chaudhuri, Michael Blumenstein
Abstract The handwriting of an individual may vary substantially with factors such as mood, time, space, writing speed, writing medium and tool, writing topic, etc. It becomes challenging to perform automated writer verification/identification on a particular set of handwritten patterns (e.g., speedy handwriting) of a person, especially when the system is trained using a different set of writing patterns (e.g., normal speed) of that same person. However, it would be interesting to experimentally analyze if there exists any implicit characteristic of individuality which is insensitive to high intra-variable handwriting. In this paper, we study some handcrafted features and auto-derived features extracted from intra-variable writing. Here, we work on writer identification/verification from offline Bengali handwriting of high intra-variability. To this end, we use various models mainly based on handcrafted features with SVM (Support Vector Machine) and features auto-derived by the convolutional network. For experimentation, we have generated two handwritten databases from two different sets of 100 writers and enlarged the dataset by a data-augmentation technique. We have obtained some interesting results.
Tasks Data Augmentation
Published 2017-08-10
URL http://arxiv.org/abs/1708.03361v3
PDF http://arxiv.org/pdf/1708.03361v3.pdf
PWC https://paperswithcode.com/paper/an-empirical-study-on-writer-identification
Repo
Framework

More is Less: A More Complicated Network with Less Inference Complexity

Title More is Less: A More Complicated Network with Less Inference Complexity
Authors Xuanyi Dong, Junshi Huang, Yi Yang, Shuicheng Yan
Abstract In this paper, we present a novel and general network structure for accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet has less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL), and the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps will remain zero after element-wise multiplication, and thus it is safe to skip the calculation of the corresponding high-cost convolution in the original convolutional layer; 2) the LCCL is very fast if it is implemented as a 1x1 convolution or only a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSVRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32% on average with a negligible performance drop.
Tasks
Published 2017-03-25
URL http://arxiv.org/abs/1703.08651v2
PDF http://arxiv.org/pdf/1703.08651v2.pdf
PWC https://paperswithcode.com/paper/more-is-less-a-more-complicated-network-with
Repo
Framework
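
The core construction above, pairing each convolution with a low-cost collaborative layer (LCCL) and multiplying the two ReLU outputs element-wise, can be sketched in a few lines of PyTorch. Note that the actual acceleration comes from skipping the expensive convolution wherever the cheap branch outputs zero; that skip logic is not implemented in this illustrative module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LCCLConv(nn.Module):
    """Original k x k convolution paired with a low-cost 1x1 collaborative layer.
    The output is the element-wise product of the two ReLU responses; zeros from
    the cheap branch mark positions where the expensive convolution could be
    skipped at inference time (the skipping itself is not shown here)."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.lccl = nn.Conv2d(in_ch, out_ch, kernel_size=1)   # low-cost collaborative layer
    def forward(self, x):
        return F.relu(self.conv(x)) * F.relu(self.lccl(x))

layer = LCCLConv(16, 32)
x = torch.randn(1, 16, 28, 28)
y = layer(x)
print(y.shape, "fraction of zeros:", (y == 0).float().mean().item())
```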

Imbalanced Malware Images Classification: a CNN based Approach

Title Imbalanced Malware Images Classification: a CNN based Approach
Authors Songqing Yue
Abstract Deep convolutional neural networks (CNNs) can be applied to malware binary detection through image classification. The performance, however, is degraded due to the imbalance of malware families (classes). To mitigate this issue, we propose a simple yet effective weighted softmax loss which can be employed as the final layer of deep CNNs. The original softmax loss is weighted, and the weight value can be determined according to class size. A scaling parameter is also included in computing the weight. Proper selection of this parameter has been studied and an empirical option is given. The weighted loss aims at alleviating the impact of data imbalance in an end-to-end learning fashion. To validate its efficacy, we deploy the proposed weighted loss in a pre-trained deep CNN model and fine-tune it to achieve promising results on malware image classification. Extensive experiments also indicate that the new loss function can fit other typical CNNs with improved classification performance.
Tasks
Published 2017-08-27
URL http://arxiv.org/abs/1708.08042v1
PDF http://arxiv.org/pdf/1708.08042v1.pdf
PWC https://paperswithcode.com/paper/imbalanced-malware-images-classification-a
Repo
Framework
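
A hedged sketch of a class-size-weighted softmax (cross-entropy) loss in the spirit of the abstract: weights shrink with class frequency, and a scaling parameter controls how aggressively. The specific rule (weight proportional to count to the power of -beta) is an assumption for illustration; the paper's exact weighting formula and its empirical choice of the scaling parameter may differ.

```python
import torch
import torch.nn.functional as F

def class_weights(labels, num_classes, beta=0.5):
    """Per-class weights that shrink with class size; `beta` is the scaling
    parameter (beta = 0 recovers the plain softmax loss). In practice the
    counts should come from the full training set, not a single batch."""
    counts = torch.bincount(labels, minlength=num_classes).float().clamp(min=1)
    w = counts.pow(-beta)
    return w * num_classes / w.sum()            # normalize so the average weight is 1

# toy imbalanced batch: logits from some CNN, labels skewed toward classes 0 and 1
num_classes = 5
logits = torch.randn(64, num_classes)
labels = torch.randint(0, 2, (64,))

w = class_weights(labels, num_classes)
loss = F.cross_entropy(logits, labels, weight=w)  # weighted softmax loss
print(w, loss.item())
```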

Topologically Robust 3D Shape Matching via Gradual Deflation and Inflation

Title Topologically Robust 3D Shape Matching via Gradual Deflation and Inflation
Authors Asli Genctav, Yusuf Sahillioglu, Sibel Tari
Abstract Despite being vastly ignored in the literature, coping with topological noise is an issue of increasing importance, especially as a consequence of the increasing number and diversity of 3D polygonal models that are captured by devices of different qualities or synthesized by algorithms of different stabilities. One approach for matching 3D shapes under topological noise is to replace the topology-sensitive geodesic distance with distances that are less sensitive to topological changes. We propose an alternative approach utilising gradual deflation (or inflation) of the shape volume, whose purpose is to bring the pair of shapes to be matched to a \emph{comparable} topology before the search for correspondences. Illustrative experiments using different datasets demonstrate that as the level of topological noise increases, our approach outperforms the other methods in the literature.
Tasks
Published 2017-04-30
URL http://arxiv.org/abs/1705.00274v2
PDF http://arxiv.org/pdf/1705.00274v2.pdf
PWC https://paperswithcode.com/paper/topologically-robust-3d-shape-matching-via
Repo
Framework

Random Sampling for Fast Face Sketch Synthesis

Title Random Sampling for Fast Face Sketch Synthesis
Authors Nannan Wang, Xinbo Gao, Jie Li
Abstract Exemplar-based face sketch synthesis plays an important role in both digital entertainment and law enforcement. It generally consists of two parts: neighbor selection and reconstruction weight representation. The main computational complexity of exemplar-based face sketch synthesis methods lies in the neighbor selection process. State-of-the-art face sketch synthesis methods perform neighbor selection online in a data-driven manner by $K$ nearest neighbor ($K$-NN) search. This online search increases the time required for synthesis. Moreover, since these methods need to traverse the whole training dataset for neighbor selection, the computational complexity increases with the scale of the training database, and hence these methods have limited scalability. In this paper, we propose a simple but effective offline random sampling scheme in place of online $K$-NN search to improve synthesis efficiency. Extensive experiments on public face sketch databases demonstrate the superiority of the proposed method in comparison to state-of-the-art methods, in terms of both synthesis quality and time consumption. The proposed method can be extended to other heterogeneous face image transformation problems such as face hallucination. We release the source code of our proposed method and the evaluation metrics for future study online: http://www.ihitworld.com/RSLCR.html.
Tasks Face Hallucination, Face Sketch Synthesis
Published 2017-01-08
URL http://arxiv.org/abs/1701.01911v2
PDF http://arxiv.org/pdf/1701.01911v2.pdf
PWC https://paperswithcode.com/paper/random-sampling-for-fast-face-sketch
Repo
Framework
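
To illustrate the offline-random-sampling idea in the abstract, the toy function below replaces online $K$-NN search with a random candidate set (in practice precomputed offline per patch location) and combines the corresponding sketch patches using ridge-regularized least-squares weights. The real RSLCR method involves locality constraints and patch blending that are not modeled here; all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_patch(test_patch, photo_patches, sketch_patches, n_candidates=50, eps=1e-3):
    """Combine training sketch patches using weights fit against the test photo
    patch over a randomly sampled candidate set, instead of an online K-NN search."""
    idx = rng.choice(len(photo_patches), size=n_candidates, replace=False)  # random candidates
    P = photo_patches[idx]                      # (n_candidates, patch_dim)
    G = P @ P.T + eps * np.eye(n_candidates)
    w = np.linalg.solve(G, P @ test_patch)      # least-squares reconstruction weights
    w = np.maximum(w, 0); w /= max(w.sum(), eps)
    return w @ sketch_patches[idx]

patch_dim, n_train = 100, 5000
photo_patches = rng.normal(size=(n_train, patch_dim))
sketch_patches = photo_patches * 0.5 + 0.1 * rng.normal(size=(n_train, patch_dim))
test_patch = rng.normal(size=patch_dim)
print(synthesize_patch(test_patch, photo_patches, sketch_patches)[:5])
```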