Paper Group ANR 11
In Nomine Function: Naming Functions in Stripped Binaries with Neural Networks. Expression of Fractals Through Neural Network Functions. Multi-Level Network for High-Speed Multi-Person Pose Estimation. Federated Learning for Emoji Prediction in a Mobile Keyboard. On The Classification-Distortion-Perception Tradeoff. On Local Optimizers of Acquisition Functions in Bayesian Optimization …
In Nomine Function: Naming Functions in Stripped Binaries with Neural Networks
Title | In Nomine Function: Naming Functions in Stripped Binaries with Neural Networks |
Authors | Fiorella Artuso, Giuseppe Antonio Di Luna, Luca Massarelli, Leonardo Querzoni |
Abstract | In this paper we investigate the problem of automatically naming pieces of assembly code, where by naming we mean assigning to an assembly function a string of words that a human reverse engineer would likely assign. We formally and precisely define the framework in which our investigation takes place: we define the problem and provide reasonable justifications for the choices made in designing the training and the tests. We performed an analysis on a large real-world corpus of nearly 9 million functions taken from more than 22k software packages. In this framework we test baselines from the field of Natural Language Processing (e.g., Seq2Seq networks and the Transformer). Interestingly, our evaluation shows promising results, beating the state-of-the-art and reaching good performance. We investigate the applicability of fine-tuning (i.e., taking a model already trained on a large generic corpus and retraining it for a specific task), a technique that is popular and well-known in the NLP field. Our results confirm that fine-tuning is effective even when neural networks are applied to binaries. We show that a model pre-trained on the aforementioned corpus achieves higher performance on specific domains (such as predicting names in system utilities, malware, etc.) when fine-tuned. (A toy sketch of the Seq2Seq baseline follows this entry.) |
Tasks | |
Published | 2019-12-17 |
URL | https://arxiv.org/abs/1912.07946v2 |
https://arxiv.org/pdf/1912.07946v2.pdf | |
PWC | https://paperswithcode.com/paper/function-naming-in-stripped-binaries-using |
Repo | |
Framework | |
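To make the Seq2Seq baseline concrete, here is a minimal sketch of an encoder-decoder that maps a tokenized assembly function to a sequence of name tokens. All vocabulary sizes, dimensions, and token ids below are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' model): a toy GRU encoder-decoder that
# maps a sequence of assembly tokens to a sequence of name tokens.
import torch
import torch.nn as nn

class Seq2SeqNamer(nn.Module):
    def __init__(self, asm_vocab=5000, name_vocab=2000, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(asm_vocab, dim)
        self.tgt_emb = nn.Embedding(name_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, name_vocab)

    def forward(self, asm_tokens, name_tokens):
        # Encode the assembly token sequence into a final hidden state.
        _, h = self.encoder(self.src_emb(asm_tokens))
        # Decode name tokens conditioned on that state (teacher forcing).
        dec_out, _ = self.decoder(self.tgt_emb(name_tokens), h)
        return self.out(dec_out)  # logits over name-token vocabulary

model = Seq2SeqNamer()
asm = torch.randint(0, 5000, (8, 120))   # batch of tokenized functions
name = torch.randint(0, 2000, (8, 4))    # e.g. ["read", "config", "file", <eos>]
logits = model(asm, name)                # (8, 4, 2000)
```

Training would minimize cross-entropy between these logits and the name tokens assigned by human reverse engineers.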
Expression of Fractals Through Neural Network Functions
Title | Expression of Fractals Through Neural Network Functions |
Authors | Nadav Dym, Barak Sober, Ingrid Daubechies |
Abstract | To help understand the underlying mechanisms of neural networks (NNs), several groups have, in recent years, studied the number of linear regions $\ell$ of piecewise linear functions generated by deep neural networks (DNN). In particular, they showed that $\ell$ can grow exponentially with the number of network parameters $p$, a property often used to explain the advantages of DNNs over shallow NNs in approximating complicated functions. Nonetheless, a simple dimension argument shows that DNNs cannot generate all piecewise linear functions with $\ell$ linear regions as soon as $\ell > p$. It is thus natural to seek to characterize specific families of functions with $\ell$ linear regions that can be constructed by DNNs. Iterated Function Systems (IFS) generate sequences of piecewise linear functions $F_k$ with a number of linear regions exponential in $k$. We show that, under mild assumptions, $F_k$ can be generated by an NN using only $\mathcal{O}(k)$ parameters. IFS are used extensively to generate, at low computational cost, natural-looking landscape textures in artificial images. They have also been proposed for compression of natural images, albeit with less commercial success. The surprisingly good performance of this fractal-based compression suggests that our visual system may lock in, to some extent, on self-similarities in images. The combination of this phenomenon with the capacity, demonstrated here, of DNNs to efficiently approximate IFS may contribute to the success of DNNs, particularly striking for image processing tasks, as well as suggest new algorithms for representing self-similarities in images based on the DNN mechanism. (A worked tent-map example follows this entry.) |
Tasks | |
Published | 2019-05-27 |
URL | https://arxiv.org/abs/1905.11345v1 |
https://arxiv.org/pdf/1905.11345v1.pdf | |
PWC | https://paperswithcode.com/paper/expression-of-fractals-through-neural-network |
Repo | |
Framework | |
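The $\mathcal{O}(k)$-parameter claim can be illustrated with the tent map, a classic example of exponentially many linear regions arising from composition (an illustrative instance in the spirit of the paper's construction, not its general IFS result):

```python
# The tent map T(x) = 1 - 2|x - 1/2| on [0, 1] equals
# T(x) = 2*relu(x) - 4*relu(x - 1/2), i.e. one two-unit ReLU layer
# (about six weights). Stacking k such layers computes F_k = T^k,
# which has 2^k linear regions, using only O(k) parameters.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent_layer(x):
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

x = np.linspace(0.0, 1.0, 10001)
f = x.copy()
k = 8
for _ in range(k):          # k layers -> 2^k linear pieces
    f = tent_layer(f)

# Count linear regions via sign changes of the sampled slope.
slope = np.diff(f) / np.diff(x)
regions = 1 + np.sum(slope[1:] * slope[:-1] < 0)
print(regions)              # approximately 2**k, grid resolution permitting
```

A generic piecewise linear function with $2^k$ regions needs on the order of $2^k$ parameters, which is exactly the gap the paper exploits.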
Multi-Level Network for High-Speed Multi-Person Pose Estimation
Title | Multi-Level Network for High-Speed Multi-Person Pose Estimation |
Authors | Ying Huang, Jiankai Zhuang, Zengchang Qin |
Abstract | In multi-person pose estimation, discriminating left/right joint types is always a hard problem because of their similar appearance. Traditionally, this is addressed by stacking multiple refinement modules to enlarge the network's receptive field and capture more global context, which also adds a great amount of computation. In this paper, we propose a Multi-level Network (MLN) that learns to aggregate features from the lower level (left/right information), upper level (localization information), joint-limb level (complementary information) and global level (context) for discriminating the joint type. Through feature reuse and the relations among these levels, MLN attains performance comparable to conventional methods while retaining a runtime speed of 42.2 FPS. (A generic feature-aggregation sketch follows this entry.) |
Tasks | Multi-Person Pose Estimation, Pose Estimation |
Published | 2019-11-26 |
URL | https://arxiv.org/abs/1911.11686v1 |
https://arxiv.org/pdf/1911.11686v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-level-network-for-high-speed-multi |
Repo | |
Framework | |
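The general pattern of aggregating feature maps from several levels can be sketched as concatenation followed by a 1x1 convolution. Channel counts and layout below are assumptions for illustration, not the exact MLN architecture.

```python
# A generic sketch of multi-level feature aggregation (channel counts are
# assumptions; this is not the exact MLN design).
import torch
import torch.nn as nn

class MultiLevelAggregate(nn.Module):
    def __init__(self, channels=(64, 64, 64, 64), out_channels=128):
        super().__init__()
        # Fuse lower-, upper-, joint-limb- and global-level maps by
        # concatenation followed by a 1x1 convolution.
        self.fuse = nn.Conv2d(sum(channels), out_channels, kernel_size=1)

    def forward(self, feats):           # list of (B, C_i, H, W) tensors
        return self.fuse(torch.cat(feats, dim=1))

agg = MultiLevelAggregate()
feats = [torch.randn(2, 64, 46, 46) for _ in range(4)]
fused = agg(feats)                      # (2, 128, 46, 46)
```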
Federated Learning for Emoji Prediction in a Mobile Keyboard
Title | Federated Learning for Emoji Prediction in a Mobile Keyboard |
Authors | Swaroop Ramaswamy, Rajiv Mathews, Kanishka Rao, Françoise Beaufays |
Abstract | We show that a word-level recurrent neural network can predict emoji from text typed on a mobile keyboard. We demonstrate the usefulness of transfer learning for predicting emoji by pretraining the model using a language modeling task. We also propose mechanisms to trigger emoji and tune the diversity of candidates. The model is trained using a distributed on-device learning framework called federated learning. The federated model is shown to achieve better performance than a server-trained model. This work demonstrates the feasibility of using federated learning to train production-quality models for natural language understanding tasks while keeping users' data on their devices. (A FedAvg sketch follows this entry.) |
Tasks | Language Modelling, Transfer Learning |
Published | 2019-06-11 |
URL | https://arxiv.org/abs/1906.04329v1 |
https://arxiv.org/pdf/1906.04329v1.pdf | |
PWC | https://paperswithcode.com/paper/federated-learning-for-emoji-prediction-in-a |
Repo | |
Framework | |
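The aggregation rule underlying this kind of on-device training is federated averaging (FedAvg). A minimal sketch, with the model reduced to a plain parameter vector and client data sizes as illustrative stand-ins:

```python
# Minimal FedAvg sketch: the server averages per-client model parameters,
# weighted by each client's number of training examples.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client model parameters."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)               # (num_clients, dim)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# One simulated round: each client takes a local gradient-like step,
# then the server averages the results in proportion to data size.
rng = np.random.default_rng(0)
global_model = np.zeros(10)
clients = [(rng.normal(size=10), n) for n in (120, 45, 300)]
updates = [global_model - 0.1 * g for g, _ in clients]   # local steps
global_model = federated_average(updates, [n for _, n in clients])
```

Only the updated parameters leave the device; the typed text itself never does.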
On The Classification-Distortion-Perception Tradeoff
Title | On The Classification-Distortion-Perception Tradeoff |
Authors | Dong Liu, Haochen Zhang, Zhiwei Xiong |
Abstract | Signal degradation is ubiquitous, and computational restoration of degraded signals has been investigated for many years. Recently, it was reported that the capability of signal restoration is fundamentally limited by the perception-distortion tradeoff, i.e. the distortion and the perceptual difference between the restored signal and the ideal 'original' signal cannot both be made minimal simultaneously. Distortion corresponds to signal fidelity and perceptual difference corresponds to perceptual naturalness, both of which are important metrics in practice. Besides, there is another dimension worth considering, namely the semantic quality, or the utility for recognition purposes, of the restored signal. In this paper, we extend the perception-distortion tradeoff to a classification-distortion-perception (CDP) tradeoff, introducing the classification error rate of the restored signal in addition to distortion and perceptual difference. Two versions of the CDP tradeoff are considered, one using a predefined classifier and the other dealing with the optimal classifier for the restored signal. For both versions, we rigorously prove the existence of the CDP tradeoff, i.e. the distortion, perceptual difference, and classification error rate cannot all be made minimal simultaneously. Our findings can be especially useful for computer vision research where low-level vision tasks (signal restoration) serve high-level vision tasks (visual understanding). (A schematic of the tradeoff follows this entry.) |
Tasks | |
Published | 2019-04-18 |
URL | http://arxiv.org/abs/1904.08816v1 |
http://arxiv.org/pdf/1904.08816v1.pdf | |
PWC | https://paperswithcode.com/paper/on-the-classification-distortion-perception |
Repo | |
Framework | |
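A paraphrased schematic of the tradeoff, following the structure of the original perception-distortion function (the notation below is ours, not verbatim from the paper):

```latex
% With degraded observation $Y$ of a signal $X$ and restoration $\hat{X}$
% drawn from an estimator $p_{\hat{X} \mid Y}$, the CDP function extends the
% perception-distortion function with a classification-error objective:
\[
  C(D, P) \;=\; \min_{p_{\hat{X} \mid Y}} \ \varepsilon\!\left(\hat{X}\right)
  \quad \text{s.t.} \quad
  \mathbb{E}\!\left[\Delta\!\left(X, \hat{X}\right)\right] \le D, \qquad
  d\!\left(p_X, p_{\hat{X}}\right) \le P,
\]
% where $\Delta$ is a distortion measure, $d$ a divergence between
% distributions (perceptual difference), and $\varepsilon(\hat{X})$ the
% classification error rate on the restored signal. The tradeoff states
% that $D$, $P$, and $\varepsilon$ cannot all be made minimal at once.
```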
On Local Optimizers of Acquisition Functions in Bayesian Optimization
Title | On Local Optimizers of Acquisition Functions in Bayesian Optimization |
Authors | Jungtaek Kim, Seungjin Choi |
Abstract | Bayesian optimization is a sample-efficient method for finding a global optimum of an expensive-to-evaluate black-box function. A global solution is found by accumulating pairs of query points and corresponding function values, repeating two procedures: (i) learning a surrogate model of the objective function from the data observed so far; (ii) maximizing an acquisition function to determine where next to query the objective function. Convergence guarantees are only valid when the global optimizer of the acquisition function is found at each round and selected as the next query point. In practice, however, local optimizers of an acquisition function are also used, since finding its global optimizer is often a non-trivial or time-consuming task. In this paper we consider three popular acquisition functions, PI, EI, and GP-UCB, induced by a GP regression surrogate model. We then present an analysis of the behavior of local optimizers of those acquisition functions, in terms of instantaneous regrets over global optimizers. We also present a performance analysis for the case where a maximum of the acquisition function is sought by running a local optimization method from multiple initial conditions. Numerical experiments confirm the validity of our theoretical analysis. (Closed forms for the three acquisition functions are sketched after this entry.) |
Tasks | |
Published | 2019-01-24 |
URL | https://arxiv.org/abs/1901.08350v3 |
https://arxiv.org/pdf/1901.08350v3.pdf | |
PWC | https://paperswithcode.com/paper/on-local-optimizers-of-acquisition-functions |
Repo | |
Framework | |
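The three acquisition functions named above have standard closed forms given a GP surrogate's posterior mean mu(x) and standard deviation sigma(x). A sketch under the minimization convention (the hyperparameters xi and beta are illustrative defaults, not values from the paper):

```python
import numpy as np
from scipy.stats import norm

def probability_of_improvement(mu, sigma, f_best, xi=0.01):
    # PI: probability the new point improves on the incumbent f_best.
    z = (f_best - mu - xi) / sigma
    return norm.cdf(z)

def expected_improvement(mu, sigma, f_best, xi=0.01):
    # EI: expected amount of improvement over f_best.
    z = (f_best - mu - xi) / sigma
    return (f_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def gp_lcb(mu, sigma, beta=4.0):
    # GP-UCB for minimization is the lower confidence bound; the next
    # query is its minimizer (or the maximizer of PI / EI).
    return mu - np.sqrt(beta) * sigma

mu, sigma = np.array([0.2, -0.1]), np.array([0.5, 0.3])
print(expected_improvement(mu, sigma, f_best=0.0))
```

In practice these surfaces are multimodal, which is precisely why local optimizers (possibly multi-started) are used instead of an exact global search.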
Fine-grained robust prosody transfer for single-speaker neural text-to-speech
Title | Fine-grained robust prosody transfer for single-speaker neural text-to-speech |
Authors | Viacheslav Klimkov, Srikanth Ronanki, Jonas Rohnke, Thomas Drugman |
Abstract | We present a neural text-to-speech system for fine-grained prosody transfer from one speaker to another. Conventional approaches to end-to-end prosody transfer typically use either a fixed-dimensional or a variable-length prosody embedding via a secondary attention to encode the reference signal. However, when trained on a single-speaker dataset, conventional prosody transfer systems are not robust enough to speaker variability, especially when the reference signal comes from an unseen speaker. Therefore, we propose decoupling the reference-signal alignment from the overall system. For this purpose, we pre-compute phoneme-level time stamps and use them to aggregate prosodic features per phoneme, injecting them into a sequence-to-sequence text-to-speech system. We incorporate a variational auto-encoder to further enhance the latent representation of prosody embeddings. We show that our proposed approach is significantly more stable and achieves reliable prosody transplantation from an unseen speaker. We also propose a solution for the use case in which the transcription of the reference signal is absent. We evaluate all our proposed methods using both objective and subjective listening tests. (A per-phoneme aggregation sketch follows this entry.) |
Tasks | |
Published | 2019-07-04 |
URL | https://arxiv.org/abs/1907.02479v1 |
https://arxiv.org/pdf/1907.02479v1.pdf | |
PWC | https://paperswithcode.com/paper/fine-grained-robust-prosody-transfer-for |
Repo | |
Framework | |
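The decoupled-alignment idea can be sketched directly: given precomputed phoneme time stamps, average frame-level prosodic features within each phoneme before injecting them into the TTS model. The frame shift and feature layout below are assumptions.

```python
# Sketch: pool frame-level prosodic features (e.g. log-F0, intensity)
# per phoneme using precomputed time stamps.
import numpy as np

def aggregate_per_phoneme(frames, boundaries, frame_shift=0.0125):
    """frames: (num_frames, num_feats); boundaries: list of (start_s, end_s)."""
    pooled = []
    for start, end in boundaries:
        lo, hi = int(start / frame_shift), int(end / frame_shift)
        pooled.append(frames[lo:hi].mean(axis=0))
    return np.stack(pooled)                 # (num_phonemes, num_feats)

frames = np.random.randn(400, 2)            # toy [log-F0, intensity] per frame
phones = [(0.00, 0.08), (0.08, 0.21), (0.21, 0.30)]
print(aggregate_per_phoneme(frames, phones).shape)   # (3, 2)
```

Because the alignment comes from the time stamps rather than a learned attention over the reference, a mismatched (unseen) reference speaker cannot destabilize it.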
Query-oriented text summarization based on hypergraph transversals
Title | Query-oriented text summarization based on hypergraph transversals |
Authors | Hadrien Van Lierde, Tommy W. S. Chow |
Abstract | Existing graph- and hypergraph-based algorithms for document summarization represent the sentences of a corpus as the nodes of a graph or hypergraph whose edges capture lexical similarity between sentences. Each sentence of the corpus is then scored individually, using popular node-ranking algorithms, and a summary is produced by extracting highly scored sentences. This approach fails to select a subset of jointly relevant sentences and may produce redundant summaries that miss important topics of the corpus. To alleviate this issue, a new hypergraph-based summarizer is proposed in this paper, in which each node is a sentence and each hyperedge is a theme, namely a group of sentences sharing a topic. Themes are weighted in terms of their prominence in the corpus and their relevance to a user-defined query. It is further shown that the problem of identifying a subset of sentences covering the relevant themes of the corpus is equivalent to finding a hypergraph transversal in our theme-based hypergraph. Two extensions of the notion of hypergraph transversal are proposed for the purpose of summarization, and polynomial-time algorithms building on the theory of submodular functions are proposed for solving the associated discrete optimization problems. The worst-case time complexity of the proposed algorithms is quadratic in the number of terms, which makes them cheaper than existing hypergraph-based methods. A thorough comparative analysis with related models on DUC benchmark datasets demonstrates the effectiveness of our approach, which outperforms existing graph- or hypergraph-based methods by at least 6% in ROUGE-SU4 score. (A greedy covering sketch follows this entry.) |
Tasks | Document Summarization, Text Summarization |
Published | 2019-02-02 |
URL | http://arxiv.org/abs/1902.00672v1 |
http://arxiv.org/pdf/1902.00672v1.pdf | |
PWC | https://paperswithcode.com/paper/query-oriented-text-summarization-based-on |
Repo | |
Framework | |
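The transversal idea reduces, in its simplest form, to greedy weighted set cover over themes. This sketch shows the generic greedy routine that the paper's submodular algorithms refine; the data structures are illustrative.

```python
# Greedily pick sentences until every relevant theme (hyperedge) is
# covered, scoring candidates by the weight of still-uncovered themes.
def greedy_transversal(sentences, themes, theme_weight):
    """themes: dict theme_id -> set of sentence ids; returns summary ids."""
    uncovered = set(themes)
    summary = []
    while uncovered:
        def gain(s):
            return sum(theme_weight[t] for t in uncovered if s in themes[t])
        best = max(sentences, key=gain)
        if gain(best) == 0:
            break                      # remaining themes are unreachable
        summary.append(best)
        uncovered -= {t for t in uncovered if best in themes[t]}
    return summary

themes = {"t1": {0, 2}, "t2": {1}, "t3": {2}}
weights = {"t1": 0.9, "t2": 0.5, "t3": 0.7}
print(greedy_transversal(range(3), themes, weights))   # [2, 1]
```

Because the covered weight is a monotone submodular function, greedy selection of this kind carries the usual (1 - 1/e)-style approximation guarantees.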
Using Ontologies To Improve Performance In Massively Multi-label Prediction Models
Title | Using Ontologies To Improve Performance In Massively Multi-label Prediction Models |
Authors | Ethan Steinberg, Peter J. Liu |
Abstract | Massively multi-label prediction/classification problems arise in environments like healthcare or biology where very precise predictions are useful. One challenge with massively multi-label problems is that the labels often have a long-tailed frequency distribution, which leaves few positive examples for the rare labels. We propose a solution to this problem by modifying the output layer of a neural network to create a Bayesian network of sigmoids which takes advantage of ontology relationships between the labels to share information between the rare and the more common labels. We apply this method to the two massively multi-label tasks of disease prediction (ICD-9 codes) and protein function prediction (Gene Ontology terms) and obtain significant improvements in per-label AUROC and average precision for less common labels. (A sketch of the sigmoid output layer follows this entry.) |
Tasks | Disease Prediction, Protein Function Prediction |
Published | 2019-05-28 |
URL | https://arxiv.org/abs/1905.12126v1 |
https://arxiv.org/pdf/1905.12126v1.pdf | |
PWC | https://paperswithcode.com/paper/using-ontologies-to-improve-performance-in-1 |
Repo | |
Framework | |
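A minimal sketch of the "Bayesian network of sigmoids" idea: each label's probability is a product of conditional sigmoids along its path from the ontology root, so rare leaves share parameters with common ancestors. The ontology, feature dimension, and edge names below are hypothetical.

```python
# P(label | h) = prod over ontology edges on the label's path of
# sigmoid(w_e . h + b_e), i.e. P(child active | parent active).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def label_probability(h, path, edge_weights, edge_biases):
    p = 1.0
    for e in path:
        p *= sigmoid(edge_weights[e] @ h + edge_biases[e])
    return p

rng = np.random.default_rng(0)
h = rng.normal(size=32)                         # penultimate-layer features
edges = ["root->disease", "disease->diabetes"]  # hypothetical ICD-style path
W = {e: rng.normal(size=32) for e in edges}
b = {e: 0.0 for e in edges}
print(label_probability(h, edges, W, b))
```

A rare leaf still benefits from the many training examples that update the conditional sigmoids of its ancestors.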
Beyond the Chinese Restaurant and Pitman-Yor processes: Statistical Models with Double Power-law Behavior
Title | Beyond the Chinese Restaurant and Pitman-Yor processes: Statistical Models with Double Power-law Behavior |
Authors | Fadhel Ayed, Juho Lee, François Caron |
Abstract | Bayesian nonparametric approaches, in particular the Pitman-Yor process and the associated two-parameter Chinese Restaurant process, have been successfully used in applications where the data exhibit a power-law behavior. Examples include natural language processing, natural images or networks. There is also growing empirical evidence that some datasets exhibit a two-regime power-law behavior: one regime for small frequencies, and a second regime, with a different exponent, for high frequencies. In this paper, we introduce a class of completely random measures which are doubly regularly-varying. Contrary to the Pitman-Yor process, we show that when completely random measures in this class are normalized to obtain random probability measures and associated random partitions, such partitions exhibit a double power-law behavior. We discuss in particular three models within this class: the beta prime process (Broderick et al., 2015, 2018), a novel process called the generalized BFRY process, and a mixture construction. We derive efficient Markov chain Monte Carlo algorithms to estimate the parameters of these models. Finally, we show that the proposed models provide a better fit than the Pitman-Yor process on various datasets. (A simulation of the baseline two-parameter CRP follows this entry.) |
Tasks | |
Published | 2019-02-13 |
URL | https://arxiv.org/abs/1902.04714v2 |
https://arxiv.org/pdf/1902.04714v2.pdf | |
PWC | https://paperswithcode.com/paper/beyond-the-chinese-restaurant-and-pitman-yor |
Repo | |
Framework | |
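For context, here is a simulation of the baseline the paper generalizes: the two-parameter Chinese Restaurant process, whose cluster sizes follow a single power law (the paper's doubly regularly-varying models produce two regimes instead).

```python
# Two-parameter CRP with discount d and concentration theta: customer n+1
# joins table k with prob (n_k - d)/(n + theta) and opens a new table
# with prob (theta + d*K)/(n + theta).
import numpy as np

def crp_two_param(n_customers, d=0.5, theta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    counts = []                         # customers per table
    for n in range(n_customers):
        k = len(counts)
        probs = np.array([c - d for c in counts] + [theta + d * k])
        probs /= n + theta
        choice = rng.choice(k + 1, p=probs)
        if choice == k:
            counts.append(1)
        else:
            counts[choice] += 1
    return np.array(counts)

sizes = crp_two_param(20000)
print(len(sizes), sizes.max())          # many tables, heavy-tailed sizes
```

Plotting the table-size frequencies on log-log axes shows one straight-line regime; the datasets motivating the paper show two.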
Multi-Stage HRNet: Multiple Stage High-Resolution Network for Human Pose Estimation
Title | Multi-Stage HRNet: Multiple Stage High-Resolution Network for Human Pose Estimation |
Authors | Junjie Huang, Zheng Zhu, Guan Huang |
Abstract | Human pose estimation is important for visual understanding tasks such as action recognition and human-computer interaction. In this work, we present a Multiple Stage High-Resolution Network (Multi-Stage HRNet) to tackle the problem of multi-person pose estimation in images. Specifically, we follow the top-down pipeline, and high-resolution representations are maintained during single-person pose estimation. In addition, a multi-stage network and cross-stage feature aggregation are adopted to further refine the keypoint positions. The resulting approach achieves promising results on the COCO dataset. Our single-model, single-scale test configuration obtains a 77.1 AP score on test-dev using publicly available training data. |
Tasks | Multi-Person Pose Estimation, Pose Estimation |
Published | 2019-10-14 |
URL | https://arxiv.org/abs/1910.05901v1 |
https://arxiv.org/pdf/1910.05901v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-stage-hrnet-multiple-stage-high |
Repo | |
Framework | |
Sentence transition matrix: An efficient approach that preserves sentence semantics
Title | Sentence transition matrix: An efficient approach that preserves sentence semantics |
Authors | Myeongjun Jang, Pilsung Kang |
Abstract | Sentence embedding is a significant research topic in the field of natural language processing (NLP). Generating sentence embedding vectors that reflect the intrinsic meaning of a sentence is a key factor in achieving enhanced performance on various NLP tasks such as sentence classification and document summarization. Accordingly, various sentence embedding models based on supervised and unsupervised learning have been proposed following research on the distributed representation of words. They have been evaluated through semantic textual similarity (STS) tasks, which measure the degree to which a sentence's semantics are preserved, and neural network-based supervised embedding models have generally yielded state-of-the-art performance. However, these models are limited in that they have many parameters to update, thereby requiring a tremendous amount of labeled training data. In this study, we propose an efficient approach that learns a transition matrix refining a sentence embedding vector to reflect the latent semantic meaning of a sentence. The proposed method has two practical advantages: (1) it can be applied to any sentence embedding method, and (2) it achieves robust performance on STS tasks irrespective of the number of training examples. (A ridge-regression sketch of the transition matrix follows this entry.) |
Tasks | Document Summarization, Semantic Textual Similarity, Sentence Classification, Sentence Embedding |
Published | 2019-01-16 |
URL | http://arxiv.org/abs/1901.05219v1 |
http://arxiv.org/pdf/1901.05219v1.pdf | |
PWC | https://paperswithcode.com/paper/sentence-transition-matrix-an-efficient |
Repo | |
Framework | |
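A linear transition matrix of this kind can be fit in closed form. This sketch uses ridge regression with random stand-ins for real embedding pairs; the pairing scheme (source embedding, semantically close target embedding) is an assumption for illustration.

```python
# Fit W so that W @ x approximates a semantics-preserving target y,
# minimizing ||X W^T - Y||^2 + lam ||W||^2 (closed-form ridge solution).
import numpy as np

def fit_transition_matrix(X, Y, lam=1e-2):
    """X, Y: (n_pairs, dim)."""
    dim = X.shape[1]
    return Y.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(dim))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))         # source sentence embeddings
Y = rng.normal(size=(500, 64))         # target (semantically close) embeddings
W = fit_transition_matrix(X, Y)
refined = W @ X[0]                     # refined embedding, shape (64,)
```

With only dim x dim parameters, the fit stays stable even with few training pairs, which is the paper's second claimed advantage.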
Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT
Title | Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT |
Authors | Shijie Wu, Mark Dredze |
Abstract | Pretrained contextual representation models (Peters et al., 2018; Devlin et al., 2018) have pushed forward the state-of-the-art on many NLP tasks. A new release of BERT (Devlin, 2018) includes a model simultaneously pretrained on 104 languages with impressive performance for zero-shot cross-lingual transfer on a natural language inference task. This paper explores the broader cross-lingual potential of mBERT (multilingual BERT) as a zero-shot language transfer model on 5 NLP tasks covering a total of 39 languages from various language families: NLI, document classification, NER, POS tagging, and dependency parsing. We compare mBERT with the best published methods for zero-shot cross-lingual transfer and find mBERT competitive on each task. Additionally, we investigate the most effective strategy for utilizing mBERT in this manner, determine to what extent mBERT generalizes away from language-specific features, and measure factors that influence cross-lingual transfer. (A sketch of the zero-shot protocol follows this entry.) |
Tasks | Cross-Lingual Transfer, Dependency Parsing, Document Classification, Natural Language Inference |
Published | 2019-04-19 |
URL | https://arxiv.org/abs/1904.09077v2 |
https://arxiv.org/pdf/1904.09077v2.pdf | |
PWC | https://paperswithcode.com/paper/beto-bentz-becas-the-surprising-cross-lingual |
Repo | |
Framework | |
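The zero-shot protocol is: fine-tune mBERT on English task data only, then evaluate directly on other languages. A sketch using the public Hugging Face checkpoint (the dataset plumbing and training loop are omitted; label count is an NLI-style assumption):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)   # e.g. NLI labels

# 1) Fine-tune `model` on English examples only (standard training loop).
# 2) Zero-shot transfer: evaluate on, say, Spanish with no Spanish
#    fine-tuning; the shared multilingual representation does the transfer.
batch = tok(["Una oración de prueba."], return_tensors="pt")
logits = model(**batch).logits                       # (1, 3)
```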
Data-driven Evolutions of Critical Points
Title | Data-driven Evolutions of Critical Points |
Authors | Stefano Almi, Massimo Fornasier, Richard Huber |
Abstract | In this paper we are concerned with the learnability of energies from data obtained by observing time evolutions of their critical points starting at random initial equilibria. As a byproduct of our theoretical framework, we introduce the novel concept of the mean-field limit of critical point evolutions and of their energy balance as a new form of transport. We formulate energy learning as a variational problem, minimizing the discrepancy of energy competitors from fulfilling the equilibrium condition along any trajectory of critical points originating at random initial equilibria. By Gamma-convergence arguments we prove the convergence of minimal solutions obtained from a finite number of observations to the exact energy in a suitable sense. The abstract framework is fully constructive and numerically implementable. Hence, approximating the energy from a finite number of observations of past evolutions makes it possible to simulate further evolutions that are fully data-driven. As we aim at a precise quantitative analysis, and to provide concrete examples of tractable solutions, we present analytic and numerical results on the reconstruction of an elastic energy for a one-dimensional model of a thin nonlinear-elastic rod. (A schematic of the variational problem follows this entry.) |
Tasks | |
Published | 2019-11-01 |
URL | https://arxiv.org/abs/1911.00298v1 |
https://arxiv.org/pdf/1911.00298v1.pdf | |
PWC | https://paperswithcode.com/paper/data-driven-evolutions-of-critical-points |
Repo | |
Framework | |
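A schematic of the variational problem in our own notation (a hedged paraphrase of the abstract, not the paper's exact formulation):

```latex
% Among energy competitors $E$ in a class $\mathcal{E}$, pick the one whose
% equilibrium condition is least violated along the observed trajectories
% of critical points $x_i(t)$:
\[
  \widehat{E} \in \arg\min_{E \in \mathcal{E}}
  \; \sum_{i=1}^{N} \int_{0}^{T}
  \bigl\| \nabla_x E\bigl(x_i(t)\bigr) \bigr\|^{2} \, dt ,
\]
% motivated by the fact that a trajectory of critical points of the true
% energy satisfies $\nabla_x E\bigl(x_i(t)\bigr) = 0$ for all $t$.
```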
Robust Autocalibrated Structured Low-Rank EPI Ghost Correction
Title | Robust Autocalibrated Structured Low-Rank EPI Ghost Correction |
Authors | Rodrigo A. Lobos, W. Scott Hoge, Ahsan Javed, Congyu Liao, Kawin Setsompop, Krishna S. Nayak, Justin P. Haldar |
Abstract | Purpose: We propose and evaluate a new structured low-rank method for EPI ghost correction called Robust Autocalibrated LORAKS (RAC-LORAKS). The method can be used to suppress EPI ghosts arising from differences between readout gradient polarities and/or differences between shots. It does not require conventional EPI navigator signals and is robust to imperfect autocalibration data. Methods: Autocalibrated LORAKS is a previous structured low-rank method for EPI ghost correction that uses GRAPPA-type autocalibration data to enable high-quality ghost correction. This method works well when the autocalibration data is pristine, but performance degrades substantially when the autocalibration information is imperfect. RAC-LORAKS generalizes Autocalibrated LORAKS in two ways. First, it does not completely trust the information from autocalibration data, and instead considers the autocalibration and EPI data simultaneously when estimating low-rank matrix structure. Second, it uses complementary information from the autocalibration data to improve EPI reconstruction in a multi-contrast joint reconstruction framework. RAC-LORAKS is evaluated using simulations and in vivo data, and compared to state-of-the-art methods. Results: RAC-LORAKS is demonstrated to have good ghost elimination performance compared to state-of-the-art methods in several complicated acquisition scenarios (including gradient-echo brain imaging, diffusion-encoded brain imaging, and cardiac imaging). Conclusion: RAC-LORAKS provides effective suppression of EPI ghosts and is robust to imperfect autocalibration data. (A toy structured low-rank sketch follows this entry.) |
Tasks | |
Published | 2019-07-30 |
URL | https://arxiv.org/abs/1907.13261v1 |
https://arxiv.org/pdf/1907.13261v1.pdf | |
PWC | https://paperswithcode.com/paper/robust-autocalibrated-structured-low-rank-epi |
Repo | |
Framework | |
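The structured low-rank principle behind LORAKS-type methods can be sketched in one dimension (this illustrates the general principle, not RAC-LORAKS itself): k-space samples are arranged into a structured matrix that is low-rank for artifact-free data, and rank truncation enforces that model. Sizes and rank below are illustrative.

```python
# Toy 1-D structured low-rank denoising: a single complex exponential gives
# a rank-1 Hankel matrix, so truncating the SVD suppresses perturbations.
import numpy as np

def hankel(signal, window):
    n = len(signal) - window + 1
    return np.stack([signal[i:i + window] for i in range(n)])   # (n, window)

def low_rank_project(H, rank):
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

rng = np.random.default_rng(0)
kspace = np.exp(2j * np.pi * 0.05 * np.arange(128))   # toy "k-space" line
kspace += 0.05 * (rng.normal(size=128) + 1j * rng.normal(size=128))
H = hankel(kspace, window=16)
H_clean = low_rank_project(H, rank=1)    # single exponential -> rank 1
```

In the EPI setting, the structured matrix is built jointly from multiple polarities/shots, and ghosts manifest as violations of the low-rank model.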