Paper Group ANR 1753
Exploiting Operation Importance for Differentiable Neural Architecture Search
Title | Exploiting Operation Importance for Differentiable Neural Architecture Search |
Authors | Xukai Xie, Yuan Zhou, Sun-Yuan Kung |
Abstract | Recently, differentiable neural architecture search methods have significantly reduced the search cost by constructing a super network and relaxing the architecture representation through architecture weights assigned to the candidate operations. All existing methods determine the importance of each operation directly from these architecture weights. However, architecture weights cannot accurately reflect the importance of each operation; that is, the operation with the highest weight might not be related to the best performance. To alleviate this deficiency, we propose a simple yet effective solution to neural architecture search, termed exploiting operation importance for effective neural architecture search (EoiNAS), in which a new indicator is proposed to fully exploit the operation importance and guide the model search. Based on this new indicator, we propose a gradual operation pruning strategy to further improve the search efficiency and accuracy. Experimental results have demonstrated the effectiveness of the proposed method. Specifically, we achieve an error rate of 2.50% on CIFAR-10, which significantly outperforms state-of-the-art methods. When transferred to ImageNet, it achieves a top-1 error of 25.6%, comparable to the state-of-the-art performance under the mobile setting. |
Tasks | Neural Architecture Search |
Published | 2019-11-24 |
URL | https://arxiv.org/abs/1911.10511v1 |
https://arxiv.org/pdf/1911.10511v1.pdf | |
PWC | https://paperswithcode.com/paper/exploiting-operation-importance-for |
Repo | |
Framework | |
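Like other differentiable NAS methods, the search above starts from a super network in which every edge computes a softmax-weighted mixture of candidate operations, and EoiNAS additionally prunes weak candidates gradually during the search. The sketch below illustrates only that mechanism in numpy: the paper's actual operation-importance indicator is not detailed in the abstract, so the plain architecture-weight softmax stands in for it, and the toy lambdas stand in for real convolution and pooling ops.

```python
import numpy as np

# Candidate operations on one edge of the super network; toy stand-ins for
# real convolutions and poolings.
CANDIDATE_OPS = {
    "skip":     lambda x: x,
    "avg_pool": lambda x: 0.9 * x,
    "conv3x3":  lambda x: np.tanh(x),
    "conv5x5":  lambda x: np.tanh(0.5 * x),
}

def mixed_op(x, alphas, active):
    """Softmax-weighted sum over the still-active candidate operations."""
    w = np.exp(np.array([alphas[n] for n in active]))
    w /= w.sum()
    return sum(wi * CANDIDATE_OPS[n](x) for wi, n in zip(w, active))

def prune_weakest(alphas, active):
    """Gradual operation pruning: drop the single weakest active operation."""
    if len(active) <= 1:
        return active
    worst = min(active, key=lambda n: alphas[n])
    return [n for n in active if n != worst]

alphas = {"skip": 0.1, "avg_pool": -0.3, "conv3x3": 0.8, "conv5x5": 0.2}
active = list(CANDIDATE_OPS)
x = np.ones(4)
for _ in range(3):                       # one pruning round after each search phase
    _ = mixed_op(x, alphas, active)      # forward pass of the shrinking super network
    active = prune_weakest(alphas, active)
print("surviving operation:", active)    # ['conv3x3']
```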
Autolabeling 3D Objects with Differentiable Rendering of SDF Shape Priors
Title | Autolabeling 3D Objects with Differentiable Rendering of SDF Shape Priors |
Authors | Sergey Zakharov, Wadim Kehl, Arjun Bhargava, Adrien Gaidon |
Abstract | We present an automatic annotation pipeline to recover 9D cuboids and 3D shape from pre-trained off-the-shelf 2D detectors and sparse LIDAR data. Our autolabeling method solves this challenging ill-posed inverse problem by relying on learned shape priors and optimization of geometric and physical parameters. To that end, we propose a novel differentiable shape renderer over signed distance fields (SDF), which we leverage in combination with normalized object coordinate spaces (NOCS). Initially trained on synthetic data to predict shape and coordinates, our method uses these predictions for projective and geometrical alignment over real samples. We also propose a curriculum learning strategy, iteratively retraining on samples of increasing difficulty for subsequent self-improving annotation rounds. Our experiments on the KITTI3D dataset show that we can recover a substantial amount of accurate cuboids, and that these autolabels can be used to train 3D vehicle detectors with state-of-the-art results. We will make the code publicly available soon. |
Tasks | |
Published | 2019-11-26 |
URL | https://arxiv.org/abs/1911.11288v1 |
https://arxiv.org/pdf/1911.11288v1.pdf | |
PWC | https://paperswithcode.com/paper/autolabeling-3d-objects-with-differentiable |
Repo | |
Framework | |
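The pivotal component above is a renderer over signed distance fields that stays differentiable, so a 2D/3D alignment loss can be back-propagated to shape and pose parameters. Below is a toy numpy sketch of one common way to obtain that differentiability, soft occupancy as a sigmoid of the SDF; an analytic sphere SDF stands in for the learned shape prior, and this is an illustration rather than the paper's actual renderer.

```python
import numpy as np

def sphere_sdf(points, center, radius=1.0):
    """Analytic SDF of a sphere; a learned latent-code SDF would replace this."""
    return np.linalg.norm(points - center, axis=-1) - radius

def soft_occupancy(points, center, tau=0.05):
    """Differentiable occupancy: a sigmoid of the negative SDF, so a mask loss
    stays differentiable w.r.t. the pose/shape parameters inside the SDF."""
    return 1.0 / (1.0 + np.exp(sphere_sdf(points, center) / tau))

# Render a z = 0 slice of the soft silhouette on a 64x64 grid.
xs, ys = np.meshgrid(np.linspace(-2, 2, 64), np.linspace(-2, 2, 64), indexing="ij")
grid = np.stack([xs, ys, np.zeros_like(xs)], axis=-1)
silhouette = soft_occupancy(grid, center=np.array([0.3, -0.2, 0.0]))
print(silhouette.shape, float(silhouette.min()), float(silhouette.max()))
```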
Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing
Title | Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing |
Authors | Vishal Monga, Yuelong Li, Yonina C. Eldar |
Abstract | Deep neural networks provide unprecedented performance gains in many real world problems in signal and image processing. Despite these gains, the future development and practical deployment of deep networks are hindered by their black-box nature, i.e., lack of interpretability, and by the need for very large training sets. An emerging technique called algorithm unrolling or unfolding offers promise in eliminating these issues by providing a concrete and systematic connection between iterative algorithms that are used widely in signal processing and deep neural networks. Unrolling methods were first proposed to develop fast neural network approximations for sparse coding. More recently, this direction has attracted enormous attention and is rapidly growing in both theoretical investigations and practical applications. The growing popularity of unrolled deep networks is due in part to their potential for developing efficient, high-performance and yet interpretable network architectures from reasonably sized training sets. In this article, we review algorithm unrolling for signal and image processing. We extensively cover popular techniques for algorithm unrolling in various domains of signal and image processing including imaging, vision and recognition, and speech processing. By reviewing previous works, we reveal the connections between iterative algorithms and neural networks and present recent theoretical results. Finally, we discuss current limitations of unrolling and suggest possible future research directions. |
Tasks | |
Published | 2019-12-22 |
URL | https://arxiv.org/abs/1912.10557v1 |
https://arxiv.org/pdf/1912.10557v1.pdf | |
PWC | https://paperswithcode.com/paper/algorithm-unrolling-interpretable-efficient |
Repo | |
Framework | |
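Unrolling is easiest to see in the sparse-coding setting that the article traces the idea back to: each network layer reproduces one iteration of ISTA, and LISTA then makes the matrices and threshold trainable per layer. A minimal numpy forward pass with the classical (untrained) ISTA initialization:

```python
import numpy as np

def soft_threshold(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unrolled_ista(y, W_e, S, theta, n_layers=30):
    """Each 'layer' is one ISTA step; in LISTA, W_e, S and theta are trained."""
    x = np.zeros(S.shape[0])
    for _ in range(n_layers):
        x = soft_threshold(W_e @ y + S @ x, theta)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))               # overcomplete dictionary
L = np.linalg.norm(D, 2) ** 2                   # Lipschitz constant of D^T D
W_e, S = D.T / L, np.eye(50) - (D.T @ D) / L    # classical ISTA initialization
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.0, -0.5, 2.0]          # sparse ground truth
y = D @ x_true                                  # observed measurements
x_hat = unrolled_ista(y, W_e, S, theta=0.01)
print(np.round(x_hat[[3, 17, 42]], 2))          # estimates of the nonzero coefficients
```

In the learned variant, `W_e`, `S` and `theta` are optimized end-to-end on (y, x) pairs, which is what lets a network with only a handful of such layers approach the accuracy of many plain ISTA iterations.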
Uncovering Download Fraud Activities in Mobile App Markets
Title | Uncovering Download Fraud Activities in Mobile App Markets |
Authors | Yingtong Dou, Weijian Li, Zhirong Liu, Zhenhua Dong, Jiebo Luo, Philip S. Yu |
Abstract | Download fraud is a prevalent threat in mobile App markets, where fraudsters manipulate the number of downloads of Apps via various cheating approaches. Purchased fake downloads can mislead recommendation and search algorithms and further lead to a bad user experience in App markets. In this paper, we investigate the download fraud problem based on a company's App Market, which is one of the most popular Android App markets. We release a honeypot App on the App Market and purchase fake downloads from fraudster agents to track fraud activities in the wild. Based on our interaction with the fraudsters, we categorize download fraud activities into three types according to their intentions: boosting front-end downloads, optimizing App search ranking, and enhancing user acquisition and retention rates. For the download fraud aimed at optimizing App search ranking, we select, evaluate, and validate several features for identifying fake downloads based on billions of download records. To get a comprehensive understanding of download fraud, we further gather the stances of App marketers, fraudster agencies, and market operators on download fraud. The analysis and suggestions that follow shed light on ways to mitigate download fraud in App markets and other social platforms. To the best of our knowledge, this is the first work that investigates the download fraud problem in mobile App markets. |
Tasks | |
Published | 2019-07-05 |
URL | https://arxiv.org/abs/1907.03048v2 |
https://arxiv.org/pdf/1907.03048v2.pdf | |
PWC | https://paperswithcode.com/paper/uncovering-download-fraud-activities-in |
Repo | |
Framework | |
Improving CAT Tools in the Translation Workflow: New Approaches and Evaluation
Title | Improving CAT Tools in the Translation Workflow: New Approaches and Evaluation |
Authors | Mihaela Vela, Santanu Pal, Marcos Zampieri, Sudip Kumar Naskar, Josef van Genabith |
Abstract | This paper describes strategies to improve an existing web-based computer-aided translation (CAT) tool called CATaLog Online. CATaLog Online provides a post-editing environment with simple yet helpful project management tools. It offers translation suggestions from translation memories (TM), machine translation (MT), and automatic post-editing (APE), and it records detailed logs of post-editing activities. To test the new approaches proposed in this paper, we carried out a user study on an English–German translation task using CATaLog Online. User feedback revealed that the users preferred CATaLog Online over existing CAT tools in some respects, especially when selecting the output of the MT system and when taking advantage of the color scheme for TM suggestions. |
Tasks | Automatic Post-Editing, Machine Translation |
Published | 2019-08-16 |
URL | https://arxiv.org/abs/1908.06140v1 |
https://arxiv.org/pdf/1908.06140v1.pdf | |
PWC | https://paperswithcode.com/paper/improving-cat-tools-in-the-translation |
Repo | |
Framework | |
An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction
Title | An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction |
Authors | Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, Kentaro Inui |
Abstract | The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models. However, consensus is lacking on experimental configurations, namely, choosing how the pseudo data should be generated or used. In this study, these choices are investigated through extensive experiments, and state-of-the-art performance is achieved on the CoNLL-2014 test set ($F_{0.5}=65.0$) and the official test set of the BEA-2019 shared task ($F_{0.5}=70.2$) without making any modifications to the model architecture. |
Tasks | Grammatical Error Correction |
Published | 2019-09-02 |
URL | https://arxiv.org/abs/1909.00502v1 |
https://arxiv.org/pdf/1909.00502v1.pdf | |
PWC | https://paperswithcode.com/paper/an-empirical-study-of-incorporating-pseudo |
Repo | |
Framework | |
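Pseudo data here means synthetic (ungrammatical source, clean target) pairs built from monolingual text. One common generation recipe, direct noise injection, is sketched below; the probabilities, edit operations, and the backtranslation-style alternatives compared in the paper are assumptions of this illustration, not the paper's exact settings.

```python
import random

def inject_noise(tokens, p_drop=0.1, p_swap=0.1, p_mask=0.1, seed=0):
    """Turn a clean target sentence into a synthetic 'ungrammatical' source."""
    rng = random.Random(seed)
    out, i = [], 0
    while i < len(tokens):
        r = rng.random()
        if r < p_drop:                                      # delete the token
            i += 1
        elif r < p_drop + p_swap and i + 1 < len(tokens):   # swap with the next token
            out.extend([tokens[i + 1], tokens[i]])
            i += 2
        elif r < p_drop + p_swap + p_mask:                  # substitute a placeholder
            out.append("<unk>")
            i += 1
        else:                                               # keep the token
            out.append(tokens[i])
            i += 1
    return out

clean = "the quick brown fox jumps over the lazy dog".split()
pseudo_src = inject_noise(clean)
print(pseudo_src, "->", clean)          # one synthetic (source, target) training pair
```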
Global Minima of DNNs: The Plenty Pantry
Title | Global Minima of DNNs: The Plenty Pantry |
Authors | Nicole Mücke, Ingo Steinwart |
Abstract | A common strategy for training deep neural networks (DNNs) is to use very large architectures and to train them until they (almost) achieve zero training error. The empirically observed good generalization performance on test data, even in the presence of substantial label noise, corroborates such a procedure. On the other hand, it is known in statistical learning theory that over-fitting models may lead to poor generalization properties, occurring, e.g., in empirical risk minimization (ERM) over hypothesis classes that are too large. Inspired by this contradictory behavior, so-called interpolation methods have recently received much attention, leading to consistent and optimally learning methods for some local averaging schemes with zero training error. However, there is no theoretical analysis of interpolating ERM-like methods so far. We take a step in this direction by showing that, for certain large hypothesis classes, some interpolating ERMs enjoy very good statistical guarantees while others fail in the worst sense. Moreover, we show that the same phenomenon occurs for DNNs with zero training error and sufficiently large architectures. |
Tasks | |
Published | 2019-05-25 |
URL | https://arxiv.org/abs/1905.10686v1 |
https://arxiv.org/pdf/1905.10686v1.pdf | |
PWC | https://paperswithcode.com/paper/global-minima-of-dnns-the-plenty-pantry |
Repo | |
Framework | |
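In the terminology used above, an interpolating ERM is an empirical risk minimizer that also fits the training data exactly; written out (notation is ours, not the paper's):

$$
\hat f_n \in \arg\min_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} L\bigl(f(x_i), y_i\bigr)
\quad \text{with} \quad \hat f_n(x_i) = y_i \ \text{ for all } i = 1, \dots, n,
$$

which is precisely the regime of large DNNs trained until the training error reaches zero.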
Deep Decentralized Reinforcement Learning for Cooperative Control
Title | Deep Decentralized Reinforcement Learning for Cooperative Control |
Authors | Florian Köpf, Samuel Tesfazgi, Michael Flad, Sören Hohmann |
Abstract | In order to collaborate efficiently with unknown partners in cooperative control settings, adaptation of the partners based on online experience is required. The rather general and widely applicable control setting, where each cooperation partner might strive for individual goals while the control laws and objectives of the partners are unknown, entails various challenges such as the non-stationarity of the environment, the multi-agent credit assignment problem, the alter-exploration problem and the coordination problem. We propose new, modular deep decentralized Multi-Agent Reinforcement Learning mechanisms to account for these challenges. To this end, our method uses a time-dependent prioritization of samples, incorporates a model of the system dynamics and utilizes variable, accountability-driven learning rates and simulated, artificial experiences in order to guide the learning process. The effectiveness of our method is demonstrated by means of a simulated, nonlinear cooperative control task. |
Tasks | Multi-agent Reinforcement Learning |
Published | 2019-10-29 |
URL | https://arxiv.org/abs/1910.13196v1 |
https://arxiv.org/pdf/1910.13196v1.pdf | |
PWC | https://paperswithcode.com/paper/191013196 |
Repo | |
Framework | |
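Among the mechanisms listed above, the time-dependent prioritization of samples is the most self-contained: in a non-stationary multi-agent environment, older transitions were generated against partner behaviour that has since changed, so recent experience should dominate replay. The abstract does not give the exact weighting, so the exponential decay in sample age below is only a plausible stand-in.

```python
import numpy as np

def time_weighted_sample(insert_steps, batch_size, decay=0.995, seed=0):
    """Sample replay-buffer indices with probability decaying in sample age, so
    transitions collected against outdated partner behaviour fade out."""
    rng = np.random.default_rng(seed)
    age = insert_steps.max() - insert_steps
    p = decay ** age
    p /= p.sum()
    return rng.choice(len(insert_steps), size=batch_size, p=p)

steps = np.arange(5000)                   # insertion step of each stored transition
batch = time_weighted_sample(steps, batch_size=32)
print(batch.min(), batch.max())           # indices cluster near the newest samples
```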
Cheetah: Mixed Low-Precision Hardware & Software Co-Design Framework for DNNs on the Edge
Title | Cheetah: Mixed Low-Precision Hardware & Software Co-Design Framework for DNNs on the Edge |
Authors | Hamed F. Langroudi, Zachariah Carmichael, David Pastuch, Dhireesha Kudithipudi |
Abstract | Low-precision DNNs have been extensively explored in order to reduce the size of DNN models for edge devices. Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision of [5..8] bits. However, previous studies were limited to studying posits for DNN inference only. In this paper, we propose the Cheetah framework, which supports both DNN training and inference using posits, as well as other commonly used formats. Additionally, the framework is amenable to different quantization approaches and supports mixed-precision floating-point and fixed-point numerical formats. Cheetah is evaluated on three datasets: MNIST, Fashion MNIST, and CIFAR-10. Results indicate that 16-bit posits outperform 16-bit floating point in DNN training. Furthermore, performing inference with [5..8]-bit posits improves the trade-off between performance and energy-delay product over both [5..8]-bit float and fixed-point. |
Tasks | Quantization |
Published | 2019-08-06 |
URL | https://arxiv.org/abs/1908.02386v1 |
https://arxiv.org/pdf/1908.02386v1.pdf | |
PWC | https://paperswithcode.com/paper/cheetah-mixed-low-precision-hardware-software |
Repo | |
Framework | |
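Posit arithmetic itself (tapered precision with a run-length-coded regime field) is too involved to reproduce here, but the fixed-point baseline the framework is compared against is easy to sketch. The uniform symmetric quantizer below is one of the commonly used schemes such a framework has to support; the bit widths mirror the [5..8]-bit range studied in the paper, and nothing here is Cheetah's actual implementation.

```python
import numpy as np

def quantize_symmetric(x, n_bits):
    """Uniform symmetric quantization to n_bits, returned in dequantized form."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

w = np.random.default_rng(0).standard_normal(256)   # e.g. one weight tensor
for bits in (8, 5):
    err = np.max(np.abs(w - quantize_symmetric(w, bits)))
    print(f"{bits}-bit fixed point, max abs error: {err:.4f}")
```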
Travel Time Estimation without Road Networks: An Urban Morphological Layout Representation Approach
Title | Travel Time Estimation without Road Networks: An Urban Morphological Layout Representation Approach |
Authors | Wuwei Lan, Yanyan Xu, Bin Zhao |
Abstract | Travel time estimation is a crucial task not only for personal travel scheduling but also for city planning. Previous methods focus on modeling road segments or sub-paths and then summing up for a final prediction; they have recently been replaced by deep neural models with end-to-end training. Usually, these methods are based on explicit feature representations, including spatio-temporal features, traffic states, etc. Here, we argue that the local traffic condition is closely tied to the land use and built environment, e.g., metro stations, arterial roads, intersections, commercial areas, and residential areas, yet the relation is time-varying and too complicated to model explicitly and efficiently. Thus, this paper proposes an end-to-end multi-task deep neural model, named Deep Image to Time (DeepI2T), that learns travel time mainly from the built environment images, a.k.a. the morphological layout images, and achieves new state-of-the-art performance on real-world datasets in two cities. Moreover, our model is designed to tackle both path-aware and path-blind scenarios in the testing phase. This work opens up new opportunities for using publicly available morphological layout images as a valuable source of information in multiple geography-related smart city applications. |
Tasks | |
Published | 2019-07-08 |
URL | https://arxiv.org/abs/1907.03381v1 |
https://arxiv.org/pdf/1907.03381v1.pdf | |
PWC | https://paperswithcode.com/paper/travel-time-estimation-without-road-networks |
Repo | |
Framework | |
Walking with MIND: Mental Imagery eNhanceD Embodied QA
Title | Walking with MIND: Mental Imagery eNhanceD Embodied QA |
Authors | Juncheng Li, Siliang Tang, Fei Wu, Yueting Zhuang |
Abstract | EmbodiedQA is the task of training an embodied agent to intelligently navigate a simulated environment and gather visual information to answer questions. Existing approaches fail to explicitly model the mental imagery function of the agent, yet mental imagery is crucial to embodied cognition and is closely related to many high-level meta-skills such as generalization and interpretation. In this paper, we propose a novel Mental Imagery eNhanceD (MIND) module for the embodied agent, as well as a relevant deep reinforcement framework for training. The MIND module can not only model the dynamics of the environment (e.g. ‘what might happen if the agent passes through a door’) but also help the agent to create a better understanding of the environment (e.g. ‘the refrigerator is usually in the kitchen’). Such knowledge makes the agent a faster and better learner in locating a feasible policy with only a few trials. Furthermore, the MIND module can generate mental images that are treated as short-term subgoals by our proposed deep reinforcement framework. These mental images facilitate policy learning since short-term subgoals are easy to achieve and reusable. This yields better planning efficiency than other algorithms that learn a policy directly from primitive actions. Finally, the mental images visualize the agent’s intentions in a way that humans can understand, and this endows our agent’s actions with more interpretability. The experimental results and further analysis prove that the agent with the MIND module is superior to its counterparts not only in EQA performance but also in many other aspects such as route planning, behavioral interpretation, and the ability to generalize from a few examples. |
Tasks | |
Published | 2019-08-05 |
URL | https://arxiv.org/abs/1908.01482v1 |
https://arxiv.org/pdf/1908.01482v1.pdf | |
PWC | https://paperswithcode.com/paper/walking-with-mind-mental-imagery-enhanced |
Repo | |
Framework | |
CANZSL: Cycle-Consistent Adversarial Networks for Zero-Shot Learning from Natural Language
Title | CANZSL: Cycle-Consistent Adversarial Networks for Zero-Shot Learning from Natural Language |
Authors | Zhi Chen, Jingjing Li, Yadan Luo, Zi Huang, Yang Yang |
Abstract | Existing methods using generative adversarial approaches for Zero-Shot Learning (ZSL) aim to generate realistic visual features from class semantics with a single generative network, which is highly under-constrained. As a result, the previous methods cannot guarantee that the generated visual features truthfully reflect the corresponding semantics. To address this issue, we propose a novel method named Cycle-consistent Adversarial Networks for Zero-Shot Learning (CANZSL). It encourages a visual feature generator to synthesize realistic visual features from semantics and then inversely translates the synthesized visual features back to the corresponding semantic space with a semantic feature generator. Furthermore, in this paper a more challenging and practical ZSL problem is considered, where the original semantics come from natural language with irrelevant words instead of the clean semantics that are widely used in previous work. Specifically, a multi-modal consistent bidirectional generative adversarial network is trained to handle unseen instances by leveraging noise in the natural language. A forward one-to-many mapping from one text description to multiple visual features is coupled with an inverse many-to-one mapping from the visual space to the semantic space. Thus, a multi-modal cycle-consistency loss between the synthesized semantic representations and the ground truth can be learned and leveraged to enforce the generated semantic features to approximate the real distribution in semantic space. Extensive experiments demonstrate that our method consistently outperforms state-of-the-art approaches on natural language-based zero-shot learning tasks. |
Tasks | Zero-Shot Learning |
Published | 2019-09-21 |
URL | https://arxiv.org/abs/1909.09822v1 |
https://arxiv.org/pdf/1909.09822v1.pdf | |
PWC | https://paperswithcode.com/paper/190909822 |
Repo | |
Framework | |
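The core constraint in the abstract above is cycle consistency: semantics mapped to visual features by one generator must map back to the original semantics through the inverse generator. A toy numpy sketch of that loss follows, with linear maps standing in for the two adversarially trained generators; in the actual model this term is combined with GAN and classification losses.

```python
import numpy as np

def cycle_consistency_loss(semantics, G_v, G_s):
    """L1 cycle loss for one direction: semantics -> visual -> semantics."""
    visual = G_v(semantics)          # synthesized visual features
    reconstructed = G_s(visual)      # translated back by the inverse generator
    return np.abs(reconstructed - semantics).mean()

# Toy linear "generators" standing in for the adversarially trained networks.
rng = np.random.default_rng(0)
W_v = 0.01 * rng.standard_normal((300, 2048))   # semantics -> visual features
W_s = np.linalg.pinv(W_v)                       # visual features -> semantics
s = rng.standard_normal((8, 300))               # a batch of noisy text embeddings
loss = cycle_consistency_loss(s, lambda z: z @ W_v, lambda v: v @ W_s)
print(loss)   # ~0 for this exactly invertible toy pair; nonzero during real training
```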
Towards a Precipitation Bias Corrector against Noise and Maldistribution
Title | Towards a Precipitation Bias Corrector against Noise and Maldistribution |
Authors | Xiaoyang Xu, Yiqun Liu, Hanqing Chao, Youcheng Luo, Hai Chu, Lei Chen, Junping Zhang, Leiming Ma |
Abstract | With broad applications in various public services such as aviation management and urban disaster warning, numerical precipitation prediction plays a crucial role in weather forecasting. However, constrained by the limitations of observation and conventional meteorological models, numerical precipitation predictions are often highly biased. To correct this bias, classical correction methods depend heavily on experts with profound knowledge of aerodynamics, thermodynamics and meteorology. As precipitation can be influenced by countless factors, however, the performance of these expert-driven methods can drop drastically when some un-modeled factors change. To address this issue, this paper presents a data-driven deep learning model which mainly includes two blocks, i.e. a Denoising Autoencoder Block and an Ordinal Regression Block. To the best of our knowledge, it is the first expert-free model for bias correction. The proposed model can effectively correct the numerical precipitation prediction based on 37 basic meteorological variables from the European Centre for Medium-Range Weather Forecasts (ECMWF). Experiments indicate that, compared with several classical machine learning algorithms and deep learning models, our method achieves the best correction performance on the standard meteorological index, the threat score (TS), and produces satisfactory visualizations. |
Tasks | Denoising |
Published | 2019-10-15 |
URL | https://arxiv.org/abs/1910.07633v1 |
https://arxiv.org/pdf/1910.07633v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-a-precipitation-bias-corrector |
Repo | |
Framework | |
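The second block mentioned above, ordinal regression, exploits the fact that precipitation intensity classes are ordered: instead of one softmax over K bins, the model answers K-1 cumulative "does the intensity exceed bin edge k?" questions. A small sketch of that target encoding and its decoding follows; the bin edges are hypothetical, since the paper's actual discretisation is not given in the abstract.

```python
import numpy as np

BIN_EDGES_MM = np.array([0.1, 1.0, 5.0, 15.0])   # hypothetical intensity thresholds

def ordinal_encode(rain_mm):
    """K-1 cumulative binary targets: 1 where the intensity exceeds the edge."""
    return (np.asarray(rain_mm)[:, None] > BIN_EDGES_MM).astype(float)

def ordinal_decode(cum_probs, threshold=0.5):
    """Predicted bin = number of 'exceeds edge' questions answered yes."""
    return (cum_probs > threshold).sum(axis=1)

rain = [0.0, 0.4, 3.2, 22.0]          # corrected precipitation values in mm
targets = ordinal_encode(rain)
print(targets)                        # e.g. 3.2 mm -> [1, 1, 0, 0]
print(ordinal_decode(targets))        # predicted bins 0, 1, 2 and 4
```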
Intra-Variable Handwriting Inspection Reinforced with Idiosyncrasy Analysis
Title | Intra-Variable Handwriting Inspection Reinforced with Idiosyncrasy Analysis |
Authors | Chandranath Adak, Bidyut B. Chaudhuri, Chin-Teng Lin, Michael Blumenstein |
Abstract | In this paper, we work on intra-variable handwriting, where the writing samples of an individual can vary significantly. Such within-writer variation poses a challenge for automatic writer inspection, where state-of-the-art methods do not perform well. To deal with intra-variability, we analyze the idiosyncrasy in individual handwriting. We identify/verify the writer from highly idiosyncratic text patches. Such patches are detected using a deep recurrent reinforcement learning-based architecture. An idiosyncratic score, predicted by deep regression analysis, is assigned to every patch. For writer identification, we propose a deep neural architecture that makes the final decision through an idiosyncratic-score-induced weighted sum of patch-based decisions. For writer verification, we propose two algorithms for deep feature aggregation, which assist in authentication using a triplet network. The experiments were performed on two databases, where we obtained encouraging results. |
Tasks | |
Published | 2019-12-19 |
URL | https://arxiv.org/abs/1912.12168v1 |
https://arxiv.org/pdf/1912.12168v1.pdf | |
PWC | https://paperswithcode.com/paper/intra-variable-handwriting-inspection |
Repo | |
Framework | |
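The identification decision described above is a weighted vote: each detected text patch produces its own writer prediction, and patches with higher idiosyncratic scores count more. A minimal numpy sketch of that aggregation follows; the patch detector, the score regressor, and the per-patch classifier are all assumed to exist upstream.

```python
import numpy as np

def aggregate_patch_decisions(patch_logits, idiosyncrasy_scores):
    """Writer posterior as the score-weighted sum of per-patch softmaxes."""
    w = np.asarray(idiosyncrasy_scores, dtype=float)
    w /= w.sum()
    p = np.exp(patch_logits - patch_logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return w @ p                          # shape: (n_candidate_writers,)

logits = np.array([[2.0, 0.1, -1.0],      # 3 detected patches, 3 candidate writers
                   [0.3, 0.2,  0.1],
                   [1.5, -0.5, 0.0]])
scores = [0.9, 0.2, 0.7]                  # predicted idiosyncratic score per patch
posterior = aggregate_patch_decisions(logits, scores)
print(posterior.argmax())                 # index of the identified writer
```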
Transformer-based Automatic Post-Editing with a Context-Aware Encoding Approach for Multi-Source Inputs
Title | Transformer-based Automatic Post-Editing with a Context-Aware Encoding Approach for Multi-Source Inputs |
Authors | WonKee Lee, Junsu Park, Byung-Hyun Go, Jong-Hyeok Lee |
Abstract | Recent approaches to Automatic Post-Editing (APE) research have shown that better results are obtained by multi-source models, which jointly encode both the source (src) and the machine translation output (mt) to produce the post-edited sentence (pe). Following this trend, we present a new multi-source APE model based on the Transformer. To construct effective joint representations, our model internally learns to incorporate src context into the mt representation. With this approach, we achieve a significant improvement over baseline systems, as well as over the state-of-the-art multi-source APE model. Moreover, to demonstrate the capability of our model to incorporate src context, we show that the word alignment of the unknown MT system is successfully captured in our encoding results. |
Tasks | Automatic Post-Editing, Machine Translation, Word Alignment |
Published | 2019-08-15 |
URL | https://arxiv.org/abs/1908.05679v1 |
https://arxiv.org/pdf/1908.05679v1.pdf | |
PWC | https://paperswithcode.com/paper/transformer-based-automatic-post-editing-with |
Repo | |
Framework | |
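The abstract above does not spell out how src context is folded into the mt representation, but one simple realisation of such a context-aware joint encoding is a cross-attention from the mt encoder states to the src encoder states, sketched here in numpy as an illustration rather than the paper's exact architecture.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention (single head, no masking)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Hypothetical joint encoding: mt token states query the src token states and
# the attended src context is added back into the mt representation.
rng = np.random.default_rng(0)
src = rng.standard_normal((12, 64))   # encoder states of the source sentence (src)
mt  = rng.standard_normal((10, 64))   # encoder states of the MT output (mt)
joint = mt + attention(mt, src, src)  # src-aware mt representation
print(joint.shape)                    # (10, 64); a decoder would generate pe from this
```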