Paper Group AWR 156
Analogical Reasoning on Chinese Morphological and Semantic Relations. Learning to Understand Goal Specifications by Modelling Reward. Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding. Sampling-Free Variational Inference of Bayesian Neural Networks by Variance Backpropagation. Adaptive Temporal Encoding Network for Video Instance-level Human Parsing. …
Analogical Reasoning on Chinese Morphological and Semantic Relations
Title | Analogical Reasoning on Chinese Morphological and Semantic Relations |
Authors | Shen Li, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, Xiaoyong Du |
Abstract | Analogical reasoning is effective in capturing linguistic regularities. This paper proposes an analogical reasoning task on Chinese. After delving into Chinese lexical knowledge, we sketch 68 implicit morphological relations and 28 explicit semantic relations. A large and balanced dataset, CA8, is then built for this task, comprising 17,813 questions. Furthermore, we systematically explore the influences of vector representations, context features, and corpora on analogical reasoning. The experiments show that CA8 is a reliable benchmark for evaluating Chinese word embeddings. |
Tasks | Word Embeddings |
Published | 2018-05-12 |
URL | http://arxiv.org/abs/1805.06504v1 |
PDF | http://arxiv.org/pdf/1805.06504v1.pdf |
PWC | https://paperswithcode.com/paper/analogical-reasoning-on-chinese-morphological |
Repo | https://github.com/Embedding/Chinese-Word-Vectors |
Framework | none |
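As a pointer to how such analogy benchmarks are typically scored, here is a minimal sketch of the standard 3CosAdd protocol ("a is to b as c is to ?"): the answer is the vocabulary word whose vector is most cosine-similar to b - a + c. The toy vocabulary and random vectors are hypothetical stand-ins for trained Chinese embeddings.

```python
import numpy as np

def solve_analogy(a, b, c, vectors):
    """Answer 'a : b :: c : ?' with the nearest word to b - a + c (3CosAdd)."""
    target = vectors[b] - vectors[a] + vectors[c]
    target = target / np.linalg.norm(target)
    best_word, best_sim = None, -np.inf
    for word, vec in vectors.items():
        if word in (a, b, c):            # the three query words are excluded
            continue
        sim = vec @ target / np.linalg.norm(vec)
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

rng = np.random.default_rng(0)
vocab = ["国王", "王后", "男人", "女人"]           # hypothetical toy vocabulary
vectors = {w: rng.normal(size=50) for w in vocab}  # random stand-in embeddings
print(solve_analogy("国王", "王后", "男人", vectors))
```

With real embeddings, accuracy over a question set like CA8's 17,813 analogies is simply the fraction answered correctly by this argmax.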
Learning to Understand Goal Specifications by Modelling Reward
Title | Learning to Understand Goal Specifications by Modelling Reward |
Authors | Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, Edward Grefenstette |
Abstract | Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards. However, this places on environment designers the onus of designing language-conditional reward functions which may not be easily or tractably implemented as the complexity of the environment and the language scales. To overcome this limitation, we present a framework within which instruction-conditional RL agents are trained using rewards obtained not from the environment, but from reward models which are jointly trained from expert examples. As reward models improve, they learn to accurately reward agents for completing tasks for environment configurations—and for instructions—not present amongst the expert data. This framework effectively separates the representation of what instructions require from how they can be executed. In a simple grid world, it enables an agent to learn a range of commands requiring interaction with blocks and understanding of spatial relations and underspecified abstract arrangements. We further show the method allows our agent to adapt to changes in the environment without requiring new expert examples. |
Tasks | |
Published | 2018-06-05 |
URL | https://arxiv.org/abs/1806.01946v4 |
PDF | https://arxiv.org/pdf/1806.01946v4.pdf |
PWC | https://paperswithcode.com/paper/learning-to-understand-goal-specifications-by |
Repo | https://github.com/AMDonati/RL_Deep-RL_AM_Leandro |
Framework | tf |
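A hedged sketch of the central mechanism, under assumed shapes and a stand-in encoder: a reward model is trained as a binary classifier on (state, instruction) pairs derived from expert examples, and its output then replaces the environment reward during RL. None of the names below come from the paper's codebase.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores whether a state satisfies an instruction (logit of success)."""
    def __init__(self, state_dim=64, instr_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + instr_dim, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, state, instr):
        return self.net(torch.cat([state, instr], dim=-1)).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Positives: expert goal states with their instructions; negatives could be
# the same states paired with mismatched instructions. All tensors are dummies.
states, instrs = torch.randn(32, 64), torch.randn(32, 32)
labels = torch.randint(0, 2, (32,)).float()
loss = nn.functional.binary_cross_entropy_with_logits(
    model(states, instrs), labels)
opt.zero_grad(); loss.backward(); opt.step()

# During RL, the agent is rewarded by the model rather than the environment:
reward = torch.sigmoid(model(states[:1], instrs[:1]))    # success probability
```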
Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding
Title | Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding |
Authors | Jiaxi Tang, Ke Wang |
Abstract | Top-$N$ sequential recommendation models each user as a sequence of items interacted in the past and aims to predict top-$N$ ranked items that a user will likely interact with in a 'near future'. The order of interaction implies that sequential patterns play an important role where more recent items in a sequence have a larger impact on the next item. In this paper, we propose a Convolutional Sequence Embedding Recommendation Model (\emph{Caser}) as a solution to address this requirement. The idea is to embed a sequence of recent items into an 'image' in the time and latent spaces and learn sequential patterns as local features of the image using convolutional filters. This approach provides a unified and flexible network structure for capturing both general preferences and sequential patterns. The experiments on public datasets demonstrated that Caser consistently outperforms state-of-the-art sequential recommendation methods on a variety of common evaluation metrics. |
Tasks | |
Published | 2018-09-19 |
URL | http://arxiv.org/abs/1809.07426v1 |
PDF | http://arxiv.org/pdf/1809.07426v1.pdf |
PWC | https://paperswithcode.com/paper/personalized-top-n-sequential-recommendation |
Repo | https://github.com/graytowne/caser_pytorch |
Framework | pytorch |
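The convolutional sequence embedding can be sketched directly from the abstract: treat the last L item embeddings as a 1-channel L x d image and apply horizontal and vertical convolutions. This is a simplified reading (it omits, e.g., the user embedding and the paper's exact prediction layer); hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CaserSketch(nn.Module):
    """Horizontal filters capture union-level sequential patterns; the
    vertical filter captures point-level (weighted-sum) patterns."""
    def __init__(self, n_items=1000, d=32, L=5, n_h=16, n_v=4):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d)
        # horizontal filters of heights 1..L spanning the embedding width
        self.h_convs = nn.ModuleList(
            [nn.Conv2d(1, n_h, (h, d)) for h in range(1, L + 1)])
        self.v_conv = nn.Conv2d(1, n_v, (L, 1))      # vertical filter
        self.out = nn.Linear(n_h * L + n_v * d, n_items)

    def forward(self, seq):                          # seq: (batch, L) item ids
        x = self.item_emb(seq).unsqueeze(1)          # (batch, 1, L, d) "image"
        h_feats = [F.relu(conv(x)).squeeze(3).max(dim=2).values
                   for conv in self.h_convs]         # max-pool over time
        v_feat = self.v_conv(x).flatten(1)           # (batch, n_v * d)
        z = torch.cat(h_feats + [v_feat], dim=1)
        return self.out(z)                           # scores over all items

scores = CaserSketch()(torch.randint(0, 1000, (8, 5)))   # (8, 1000)
```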
Sampling-Free Variational Inference of Bayesian Neural Networks by Variance Backpropagation
Title | Sampling-Free Variational Inference of Bayesian Neural Networks by Variance Backpropagation |
Authors | Manuel Haussmann, Fred A. Hamprecht, Melih Kandemir |
Abstract | We propose a new Bayesian Neural Net formulation that affords variational inference for which the evidence lower bound is analytically tractable subject to a tight approximation. We achieve this tractability by (i) decomposing ReLU nonlinearities into the product of an identity and a Heaviside step function, (ii) introducing a separate path that decomposes the neural net expectation from its variance. We demonstrate formally that introducing separate latent binary variables to the activations allows representing the neural network likelihood as a chain of linear operations. Performing variational inference on this construction enables a sampling-free computation of the evidence lower bound which is a more effective approximation than the widely applied Monte Carlo sampling and CLT related techniques. We evaluate the model on a range of regression and classification tasks against BNN inference alternatives, showing competitive or improved performance over the current state-of-the-art. |
Tasks | |
Published | 2018-05-19 |
URL | https://arxiv.org/abs/1805.07654v2 |
PDF | https://arxiv.org/pdf/1805.07654v2.pdf |
PWC | https://paperswithcode.com/paper/sampling-free-variational-inference-of |
Repo | https://github.com/manuelhaussmann/vbp |
Framework | none |
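To make the sampling-free idea concrete, here is a minimal sketch of moment propagation through a single mean-field Bayesian linear layer, assuming independent inputs; the paper's Heaviside decomposition of ReLUs, which extends this through nonlinearities, is omitted.

```python
import numpy as np

def linear_moments(M, S, m, v):
    """Mean/variance of y = W x for W_ij ~ N(M_ij, S_ij), with x independent,
    elementwise mean m and variance v:
        E[y]   = M m
        Var[y] = (M**2) v + S (v + m**2)     (squares are elementwise)"""
    return M @ m, (M ** 2) @ v + S @ (v + m ** 2)

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 4))                  # posterior weight means
S = rng.uniform(0.01, 0.1, size=(3, 4))      # posterior weight variances
m, v = rng.normal(size=4), np.full(4, 0.05)  # input moments

mean, var = linear_moments(M, S, m, v)

# Sanity check against the Monte Carlo estimate the method avoids:
draws = np.array([(M + rng.normal(size=M.shape) * np.sqrt(S))
                  @ (m + rng.normal(size=4) * np.sqrt(v))
                  for _ in range(50_000)])
print(np.allclose(mean, draws.mean(axis=0), atol=0.02),
      np.allclose(var, draws.var(axis=0), atol=0.02))
```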
Adaptive Temporal Encoding Network for Video Instance-level Human Parsing
Title | Adaptive Temporal Encoding Network for Video Instance-level Human Parsing |
Authors | Qixian Zhou, Xiaodan Liang, Ke Gong, Liang Lin |
Abstract | Beyond the existing single-person and multiple-person human parsing tasks in static images, this paper makes the first attempt to investigate a more realistic video instance-level human parsing that simultaneously segments out each person instance and parses each instance into more fine-grained parts (e.g., head, leg, dress). We introduce a novel Adaptive Temporal Encoding Network (ATEN) that alternately performs temporal encoding among key frames and flow-guided feature propagation from other consecutive frames between two key frames. Specifically, ATEN first incorporates a Parsing-RCNN to produce the instance-level parsing result for each key frame, which integrates both the global human parsing and instance-level human segmentation into a unified model. To balance accuracy and efficiency, the flow-guided feature propagation is used to directly parse consecutive frames according to their identified temporal consistency with key frames. On the other hand, ATEN leverages convolutional gated recurrent units (convGRU) to exploit temporal changes over a series of key frames, which are further used to facilitate frame-level instance-level parsing. By alternately performing direct feature propagation between consistent frames and temporal encoding among key frames, our ATEN achieves a good balance between frame-level accuracy and time efficiency, which is a common crucial problem in video object segmentation research. To demonstrate the superiority of our ATEN, extensive experiments are conducted on the most popular video segmentation benchmark (DAVIS) and a newly collected Video Instance-level Parsing (VIP) dataset, which is the first video instance-level human parsing dataset, comprising 404 sequences and over 20k frames with instance-level and pixel-wise annotations. |
Tasks | Human Parsing, Semantic Segmentation, Video Object Segmentation, Video Semantic Segmentation |
Published | 2018-08-02 |
URL | http://arxiv.org/abs/1808.00661v2 |
PDF | http://arxiv.org/pdf/1808.00661v2.pdf |
PWC | https://paperswithcode.com/paper/adaptive-temporal-encoding-network-for-video |
Repo | https://github.com/HCPLab-SYSU/ATEN |
Framework | tf |
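The flow-guided feature propagation between a key frame and a consecutive frame can be sketched as bilinear warping of the key frame's feature map along a given optical flow field; the flow estimator itself and ATEN's convGRU encoder are out of scope here, and all tensors below are dummies.

```python
import torch
import torch.nn.functional as F

def warp_features(feat, flow):
    """feat: (B, C, H, W) key-frame features; flow: (B, 2, H, W) in pixels,
    mapping each target-frame pixel to its source location in the key frame."""
    B, _, H, W = feat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float()        # (2, H, W) pixel grid
    coords = base.unsqueeze(0) + flow                  # displaced coordinates
    coords[:, 0] = 2 * coords[:, 0] / (W - 1) - 1      # normalize to [-1, 1]
    coords[:, 1] = 2 * coords[:, 1] / (H - 1) - 1
    grid = coords.permute(0, 2, 3, 1)                  # (B, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)

feat = torch.randn(1, 64, 32, 32)
flow = torch.zeros(1, 2, 32, 32)   # zero flow: warping is the identity
assert torch.allclose(warp_features(feat, flow), feat, atol=1e-5)
```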
Manifold Mixup: Better Representations by Interpolating Hidden States
Title | Manifold Mixup: Better Representations by Interpolating Hidden States |
Authors | Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, Aaron Courville, David Lopez-Paz, Yoshua Bengio |
Abstract | Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples. This includes distribution shifts, outliers, and adversarial examples. To address these issues, we propose Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations. Manifold Mixup leverages semantic interpolations as additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance. We prove theory on why this flattening happens under ideal conditions, validate it on practical situations, and connect it to previous works on information theory and generalization. In spite of incurring no significant computation and being implemented in a few lines of code, Manifold Mixup improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood. |
Tasks | Image Classification |
Published | 2018-06-13 |
URL | https://arxiv.org/abs/1806.05236v7 |
PDF | https://arxiv.org/pdf/1806.05236v7.pdf |
PWC | https://paperswithcode.com/paper/manifold-mixup-better-representations-by |
Repo | https://github.com/makeyourownmaker/mixup |
Framework | pytorch |
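Manifold Mixup is famously a few lines of code; below is a minimal sketch on a small MLP, assuming one-hot targets: pick a random layer (including the input), mix two examples' hidden states with lambda ~ Beta(alpha, alpha), and mix their labels the same way.

```python
import numpy as np
import torch
import torch.nn as nn

class MixupMLP(nn.Module):
    def __init__(self, d_in=20, d_h=64, n_cls=10):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Linear(d_in, d_h), nn.ReLU()),
            nn.Sequential(nn.Linear(d_h, d_h), nn.ReLU()),
            nn.Linear(d_h, n_cls)])

    def forward(self, x, y_onehot, alpha=2.0):
        k = np.random.randint(len(self.layers))   # mix before layer k (0=input)
        lam = np.random.beta(alpha, alpha)
        perm = torch.randperm(x.size(0))          # partner examples to mix with
        y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
        for i, layer in enumerate(self.layers):
            if i == k:
                x = lam * x + (1 - lam) * x[perm]  # interpolate hidden states
            x = layer(x)
        return x, y_mix

model = MixupMLP()
x = torch.randn(16, 20)
y = torch.eye(10)[torch.randint(0, 10, (16,))]     # one-hot labels
logits, y_mix = model(x, y)
loss = -(y_mix * torch.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

At evaluation time one simply skips the mixing (k is never reached), i.e. runs the layers in sequence on unmixed inputs.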
Learning Deep Gradient Descent Optimization for Image Deconvolution
Title | Learning Deep Gradient Descent Optimization for Image Deconvolution |
Authors | Dong Gong, Zhen Zhang, Qinfeng Shi, Anton van den Hengel, Chunhua Shen, Yanning Zhang |
Abstract | As an integral component of blind image deblurring, non-blind deconvolution removes image blur with a given blur kernel, which is essential but difficult due to the ill-posed nature of the inverse problem. The predominant approach is based on optimization subject to regularization functions that are either manually designed or learned from examples. Existing learning-based methods have shown superior restoration quality but are not practical enough due to their restricted and static model design: they focus solely on learning a prior and require the noise level to be known for deconvolution. We address the gap between the optimization-based and learning-based approaches by learning a universal gradient descent optimizer. We propose a Recurrent Gradient Descent Network (RGDN) by systematically incorporating deep neural networks into a fully parameterized gradient descent scheme. A hyper-parameter-free update unit shared across steps is used to generate updates from the current estimates, based on a convolutional neural network. By training on diverse examples, the Recurrent Gradient Descent Network learns an implicit image prior and a universal update rule through recursive supervision. The learned optimizer can be repeatedly applied to improve the quality of diverse degenerated observations. The proposed method possesses strong interpretability and high generalization. Extensive experiments on synthetic benchmarks and challenging real-world images demonstrate that the proposed deep-optimization method is effective, robust in producing favorable results, and practical for real-world image deblurring applications. |
Tasks | Blind Image Deblurring, Deblurring, Image Deconvolution |
Published | 2018-04-10 |
URL | https://arxiv.org/abs/1804.03368v2 |
PDF | https://arxiv.org/pdf/1804.03368v2.pdf |
PWC | https://paperswithcode.com/paper/learning-an-optimizer-for-image-deconvolution |
Repo | https://github.com/donggong1/learn-optimizer-rgdn |
Framework | pytorch |
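A hedged sketch of one learned gradient-descent step for non-blind deconvolution: the data-fidelity gradient is computed with the known blur kernel, and a small CNN shared across steps maps it to an update. The update network here is an illustrative stand-in, not the paper's unit, and the adjoint convolution is exact only up to boundary effects.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fidelity_grad(x, y, kernel):
    """Gradient of 0.5 * ||k * x - y||^2 w.r.t. x, using 'same' padding."""
    pad = kernel.shape[-1] // 2
    residual = F.conv2d(x, kernel, padding=pad) - y
    flipped = torch.flip(kernel, dims=(-2, -1))   # adjoint of correlation
    return F.conv2d(residual, flipped, padding=pad)

update_net = nn.Sequential(                       # one unit shared across steps
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))

kernel = torch.full((1, 1, 5, 5), 1 / 25.0)       # known blur kernel (box blur)
y = torch.randn(1, 1, 32, 32)                     # blurry observation (dummy)
x = y.clone()                                     # initialize with observation
for _ in range(10):                               # recurrent descent steps
    x = x - update_net(fidelity_grad(x, y, kernel))
```

Training would supervise the iterates against sharp ground-truth images (the paper's "recursive supervision"); the loop above only shows inference.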
FD-MobileNet: Improved MobileNet with a Fast Downsampling Strategy
Title | FD-MobileNet: Improved MobileNet with a Fast Downsampling Strategy |
Authors | Zheng Qin, Zhaoning Zhang, Xiaotao Chen, Yuxing Peng |
Abstract | We present Fast-Downsampling MobileNet (FD-MobileNet), an efficient and accurate network for very limited computational budgets (e.g., 10-140 MFLOPs). Our key idea is to apply an aggressive downsampling strategy to the MobileNet framework. In FD-MobileNet, we perform 32$\times$ downsampling within 12 layers, only half the layers of the original MobileNet. This design brings three advantages: (i) It remarkably reduces the computational cost. (ii) It increases the information capacity and achieves significant performance improvements. (iii) It is engineering-friendly and provides fast actual inference speed. Experiments on the ILSVRC 2012 and PASCAL VOC 2007 datasets demonstrate that FD-MobileNet consistently outperforms MobileNet and achieves comparable results with ShuffleNet under different computational budgets, for instance, surpassing MobileNet by 5.5% on ILSVRC 2012 top-1 accuracy and by 3.6% on VOC 2007 mAP under a complexity of 12 MFLOPs. On an ARM-based device, FD-MobileNet achieves a 1.11$\times$ inference speedup over MobileNet and 1.82$\times$ over ShuffleNet under the same complexity. |
Tasks | |
Published | 2018-02-11 |
URL | http://arxiv.org/abs/1802.03750v1 |
PDF | http://arxiv.org/pdf/1802.03750v1.pdf |
PWC | https://paperswithcode.com/paper/fd-mobilenet-improved-mobilenet-with-a-fast |
Repo | https://github.com/osmr/imgclsmob |
Framework | mxnet |
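The fast-downsampling strategy amounts to scheduling strides early: reach 32x downsampling in the first blocks, then spend the remaining FLOPs on wider layers at 7x7 resolution. The sketch below uses illustrative channel widths and far fewer layers than the real network.

```python
import torch
import torch.nn as nn

def dw_sep(c_in, c_out, stride):
    """MobileNet-style depthwise-separable convolution block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, stride, 1, groups=c_in, bias=False),
        nn.BatchNorm2d(c_in), nn.ReLU(inplace=True),
        nn.Conv2d(c_in, c_out, 1, bias=False),
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

net = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1, bias=False),  # 224 -> 112
    dw_sep(8, 16, 2),      # 112 -> 56   aggressive early strides
    dw_sep(16, 32, 2),     # 56  -> 28
    dw_sep(32, 64, 2),     # 28  -> 14
    dw_sep(64, 256, 2),    # 14  -> 7    32x downsampling reached early
    dw_sep(256, 256, 1),   # cheap 7x7 layers carry most of the capacity
    dw_sep(256, 512, 1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 1000))

print(net(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```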
Market Making via Reinforcement Learning
Title | Market Making via Reinforcement Learning |
Authors | Thomas Spooner, John Fearnley, Rahul Savani, Andreas Koukorinis |
Abstract | Market making is a fundamental trading problem in which an agent provides liquidity by continually offering to buy and sell a security. The problem is challenging due to inventory risk, the risk of accumulating an unfavourable position and ultimately losing money. In this paper, we develop a high-fidelity simulation of limit order book markets, and use it to design a market making agent using temporal-difference reinforcement learning. We use a linear combination of tile codings as a value function approximator, and design a custom reward function that controls inventory risk. We demonstrate the effectiveness of our approach by showing that our agent outperforms both simple benchmark strategies and a recent online learning approach from the literature. |
Tasks | |
Published | 2018-04-11 |
URL | http://arxiv.org/abs/1804.04216v1 |
PDF | http://arxiv.org/pdf/1804.04216v1.pdf |
PWC | https://paperswithcode.com/paper/market-making-via-reinforcement-learning |
Repo | https://github.com/tspooner/rl_markets |
Framework | none |
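The value-function approximator named in the abstract, a linear combination of tile codings, is easy to sketch for one state dimension: several randomly offset grids each activate one tile, and the value is the sum of the active tiles' weights. The hyperparameters below are arbitrary.

```python
import numpy as np

class TileCoder:
    def __init__(self, n_tilings=8, n_tiles=10, low=0.0, high=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.n_tilings, self.n_tiles = n_tilings, n_tiles
        self.width = (high - low) / n_tiles
        self.offsets = rng.uniform(0, self.width, n_tilings)  # shifted grids
        self.low = low

    def features(self, x):
        """Index of the one active tile per tiling (sparse binary features)."""
        idx = ((x - self.low + self.offsets) / self.width).astype(int)
        idx = np.clip(idx, 0, self.n_tiles - 1)
        return idx + np.arange(self.n_tilings) * self.n_tiles

coder = TileCoder()
w = np.zeros(coder.n_tilings * coder.n_tiles)   # one weight per tile

def value(x):
    return w[coder.features(x)].sum()           # linear in the binary features

# One temporal-difference update toward a target, spread over active tiles:
x, target, lr = 0.37, 1.0, 0.1
w[coder.features(x)] += lr / coder.n_tilings * (target - value(x))
```

The overlapping offsets give coarse generalization between nearby states while each tiling stays cheap to index, which is why tile coding suits online TD learning.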
On the Complexity of Exploration in Goal-Driven Navigation
Title | On the Complexity of Exploration in Goal-Driven Navigation |
Authors | Maruan Al-Shedivat, Lisa Lee, Ruslan Salakhutdinov, Eric Xing |
Abstract | Building agents that can explore their environments intelligently is a challenging open problem. In this paper, we take a step towards understanding how a hierarchical design of the agent's policy can affect its exploration capabilities. First, we design EscapeRoom environments, where the agent must figure out how to navigate to the exit by accomplishing a number of intermediate tasks (\emph{subgoals}), such as finding keys or opening doors. Our environments are procedurally generated and vary in complexity, which can be controlled by the number of subgoals and the relationships between them. Next, we propose to measure the complexity of each environment by constructing dependency graphs between the goals and analytically computing \emph{hitting times} of a random walk in the graph. We empirically evaluate Proximal Policy Optimization (PPO) with sparse and shaped rewards, a variation of policy sketches, and a hierarchical version of PPO (called HiPPO) akin to h-DQN. We show that the analytically estimated \emph{hitting time} in goal dependency graphs is an informative metric of environment complexity. We conjecture that the result should hold for environments other than navigation. Finally, we show that solving environments beyond a certain level of complexity requires hierarchical approaches. |
Tasks | |
Published | 2018-11-16 |
URL | http://arxiv.org/abs/1811.06889v1 |
PDF | http://arxiv.org/pdf/1811.06889v1.pdf |
PWC | https://paperswithcode.com/paper/on-the-complexity-of-exploration-in-goal |
Repo | https://github.com/maximecb/gym-minigrid |
Framework | pytorch |
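The complexity metric can be reproduced in a few lines: make the exit absorbing and solve the standard linear system for the expected hitting times of a random walk on the goal dependency graph. The toy transition matrix below is a hypothetical start -> key -> door -> exit chain with backtracking.

```python
import numpy as np

def hitting_time(P, start, exit_node):
    """Expected steps for a random walk with transition matrix P to first
    reach exit_node, via h(exit) = 0 and h(i) = 1 + sum_j P[i, j] h(j)."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != exit_node]
    Q = P[np.ix_(keep, keep)]            # walk restricted to non-exit nodes
    h = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return h[keep.index(start)]

# Toy dependency graph: start -> key -> door -> exit, with backtracking.
P = np.array([[0.0, 1.0, 0.0, 0.0],      # start: must find the key first
              [0.5, 0.0, 0.5, 0.5 * 0],  # key:  may wander back to start
              [0.0, 0.5, 0.0, 0.5],      # door: may wander back to the key
              [0.0, 0.0, 0.0, 1.0]])     # exit: absorbing
print(hitting_time(P, start=0, exit_node=3))   # 9.0 expected steps
```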
Learning Actionable Representations with Goal-Conditioned Policies
Title | Learning Actionable Representations with Goal-Conditioned Policies |
Authors | Dibya Ghosh, Abhishek Gupta, Sergey Levine |
Abstract | Representation learning is a central challenge across a range of machine learning areas. In reinforcement learning, effective and functional representations have the potential to tremendously accelerate learning progress and solve more challenging problems. Most prior work on representation learning has focused on generative approaches, learning representations that capture all underlying factors of variation in the observation space in a more disentangled or well-ordered manner. In this paper, we instead aim to learn functionally salient representations: representations that are not necessarily complete in terms of capturing all factors of variation in the observation space, but rather aim to capture those factors of variation that are important for decision making – that are “actionable.” These representations are aware of the dynamics of the environment, and capture only the elements of the observation that are necessary for decision making rather than all factors of variation, without explicit reconstruction of the observation. We show how these representations can be useful to improve exploration for sparse reward problems, to enable long horizon hierarchical reinforcement learning, and as a state representation for learning policies for downstream tasks. We evaluate our method on a number of simulated environments, and compare it to prior methods for representation learning, exploration, and hierarchical reinforcement learning. |
Tasks | Decision Making, Hierarchical Reinforcement Learning, Representation Learning |
Published | 2018-11-19 |
URL | http://arxiv.org/abs/1811.07819v2 |
PDF | http://arxiv.org/pdf/1811.07819v2.pdf |
PWC | https://paperswithcode.com/paper/learning-actionable-representations-with-goal |
Repo | https://github.com/elitalobo/Hierarchical-RL-Algorithms |
Framework | pytorch |
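One hedged reading of "actionable" representations, with all networks below as untrained stand-ins: train an encoder so that distances between state embeddings track how differently a goal-conditioned policy would act from those states, averaged over sampled goals. This sketches the spirit of the objective, not the paper's exact loss.

```python
import torch
import torch.nn as nn

state_dim, goal_dim, act_dim = 8, 8, 2
pi = nn.Sequential(nn.Linear(state_dim + goal_dim, 64), nn.Tanh(),
                   nn.Linear(64, act_dim)).requires_grad_(False)  # frozen policy
phi = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                    nn.Linear(64, 16))                            # encoder
opt = torch.optim.Adam(phi.parameters(), lr=1e-3)

def action_distance(s1, s2, n_goals=32):
    """How differently the policy acts from s1 vs s2, averaged over goals."""
    g = torch.randn(n_goals, goal_dim)
    a1 = pi(torch.cat([s1.expand(n_goals, -1), g], dim=1))
    a2 = pi(torch.cat([s2.expand(n_goals, -1), g], dim=1))
    return (a1 - a2).abs().mean()

for _ in range(100):                      # toy training loop on random states
    s1, s2 = torch.randn(state_dim), torch.randn(state_dim)
    d_emb = (phi(s1) - phi(s2)).norm()
    loss = (d_emb - action_distance(s1, s2)) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
```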
SCRDet: Towards More Robust Detection for Small, Cluttered and Rotated Objects
Title | SCRDet: Towards More Robust Detection for Small, Cluttered and Rotated Objects |
Authors | Xue Yang, Jirui Yang, Junchi Yan, Yue Zhang, Tengfei Zhang, Zhi Guo, Sun Xian, Kun Fu |
Abstract | Object detection has been a building block in computer vision. Though considerable progress has been made, there still exist challenges for objects with small size, arbitrary direction, and dense distribution. Apart from natural images, such issues are especially pronounced for aerial images of great importance. This paper presents a novel multi-category rotation detector for small, cluttered and rotated objects, namely SCRDet. Specifically, a sampling fusion network is devised which fuses multi-layer features with effective anchor sampling to improve sensitivity to small objects. Meanwhile, the supervised pixel attention network and the channel attention network are jointly explored for small and cluttered object detection by suppressing the noise and highlighting the object features. For more accurate rotation estimation, an IoU constant factor is added to the smooth L1 loss to address the boundary problem of the rotated bounding box. Extensive experiments on the public remote sensing datasets DOTA and NWPU VHR-10, the natural image datasets COCO and VOC2007, and the scene text dataset ICDAR2015 show the state-of-the-art performance of our detector. The code and models will be available at https://github.com/DetectionTeamUCAS. |
Tasks | Object Detection, Object Detection In Aerial Images |
Published | 2018-11-17 |
URL | https://arxiv.org/abs/1811.07126v4 |
PDF | https://arxiv.org/pdf/1811.07126v4.pdf |
PWC | https://paperswithcode.com/paper/r2cnn-multi-dimensional-attention-based |
Repo | https://github.com/Thinklab-SJTU/R3Det_Tensorflow |
Framework | tf |
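The IoU constant factor on the smooth L1 loss admits a compact sketch: keep the gradient direction of smooth L1 over the box parameters, but take the loss magnitude from -log(IoU), so the angular boundary discontinuity does not dominate nearly correct rotated boxes. The rotated IoU itself is assumed to be precomputed elsewhere.

```python
import torch
import torch.nn.functional as F

def iou_smooth_l1(pred, target, iou):
    """pred, target: (N, 5) rotated boxes (x, y, w, h, angle); iou: (N,)
    rotated IoU from whatever routine the detector uses (not implemented)."""
    sl1 = F.smooth_l1_loss(pred, target, reduction="none").sum(dim=1)
    # Gradient *direction* comes from smooth L1; loss *magnitude* from IoU.
    factor = (-torch.log(iou.clamp(min=1e-6))).detach() / (sl1.detach() + 1e-6)
    return (sl1 * factor).mean()

pred = torch.randn(4, 5, requires_grad=True)
target = torch.randn(4, 5)
iou = torch.rand(4)                    # stand-in for a rotated-IoU computation
iou_smooth_l1(pred, target, iou).backward()
```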
Constructing Financial Sentimental Factors in Chinese Market Using Natural Language Processing
Title | Constructing Financial Sentimental Factors in Chinese Market Using Natural Language Processing |
Authors | Junfeng Jiang, Jiahao Li |
Abstract | In this paper, we design an integrated algorithm to evaluate the sentiment of the Chinese market. Firstly, with the help of web browser automation, we automatically crawl a large number of news articles and comments from several influential financial websites. Secondly, we use techniques of Natural Language Processing (NLP) in the Chinese context, including tokenization, Word2vec word embedding and the semantic database WordNet, to compute senti-scores for these news articles and comments, and then construct the sentimental factor. Here, we build a finance-specific sentimental lexicon so that the sentimental factor reflects the sentiment of the financial market rather than general sentiments such as happiness or sadness. Thirdly, we also implement an adjustment of the standard sentimental factor. Our experiments show that there is a significant correlation between our standard sentimental factor and the Chinese market, and that the adjusted factor is even more informative, having a stronger correlation with the Chinese market. Therefore, our sentimental factors can be important references when making investment decisions. In particular, during the Chinese market crash in 2015, the Pearson correlation coefficient of the adjusted sentimental factor with the SSE is 0.5844, which suggests that our model can provide solid guidance, especially in periods when the market is heavily influenced by public sentiment. |
Tasks | Tokenization |
Published | 2018-09-22 |
URL | http://arxiv.org/abs/1809.08390v1 |
PDF | http://arxiv.org/pdf/1809.08390v1.pdf |
PWC | https://paperswithcode.com/paper/constructing-financial-sentimental-factors-in |
Repo | https://github.com/Coldog2333/Financial-NLP |
Framework | none |
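A minimal sketch of the lexicon-based senti-score at the heart of the pipeline, with a toy hand-made lexicon and pre-tokenized documents standing in for the paper's finance-specific lexicon and proper Chinese tokenization:

```python
# Toy finance lexicons; the paper builds these from Word2vec/WordNet instead.
POSITIVE = {"上涨", "利好", "增长"}   # e.g. "rise", "bullish", "growth"
NEGATIVE = {"下跌", "利空", "亏损"}   # e.g. "fall", "bearish", "loss"

def senti_score(tokens):
    """Normalized polarity of one document: (pos - neg) / (pos + neg)."""
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / (pos + neg) if pos + neg else 0.0

# Documents are assumed already tokenized (the paper tokenizes Chinese text).
docs = [["今日", "股市", "上涨", "利好"],
        ["公司", "亏损", "股价", "下跌"]]
daily_factor = sum(senti_score(d) for d in docs) / len(docs)
print(daily_factor)   # in [-1, 1]; aggregated per day it becomes the factor
```

The daily factor series can then be correlated against an index such as the SSE (e.g. with a Pearson coefficient) to check its informativeness.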
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
Title | Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks |
Authors | Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, Yi Yang |
Abstract | This paper proposes a Soft Filter Pruning (SFP) method to accelerate the inference procedure of deep Convolutional Neural Networks (CNNs). Specifically, the proposed SFP enables the pruned filters to be updated when training the model after pruning. SFP has two advantages over previous works: (1) Larger model capacity. Updating previously pruned filters provides our approach with a larger optimization space than fixing the filters to zero; therefore, the network trained by our method has a larger model capacity to learn from the training data. (2) Less dependence on the pre-trained model. The large capacity enables SFP to train from scratch and prune the model simultaneously. In contrast, previous filter pruning methods must be conducted on the basis of a pre-trained model to guarantee their performance. Empirically, SFP from scratch outperforms the previous filter pruning methods. Moreover, our approach has been demonstrated to be effective for many advanced CNN architectures. Notably, on ILSVRC-2012, SFP reduces more than 42% of the FLOPs on ResNet-101, with even a 0.2% top-5 accuracy improvement, advancing the state of the art. Code is publicly available on GitHub: https://github.com/he-y/soft-filter-pruning |
Tasks | |
Published | 2018-08-21 |
URL | http://arxiv.org/abs/1808.06866v1 |
PDF | http://arxiv.org/pdf/1808.06866v1.pdf |
PWC | https://paperswithcode.com/paper/soft-filter-pruning-for-accelerating-deep |
Repo | https://github.com/he-y/soft-filter-pruning |
Framework | pytorch |
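Soft filter pruning reduces to a short routine run after every training epoch: zero the filters with the smallest L2 norms but leave them trainable so they can recover, deferring the hard prune to the end. A minimal sketch for a single conv layer:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def soft_prune(conv: nn.Conv2d, prune_rate: float = 0.3):
    norms = conv.weight.flatten(1).norm(dim=1)   # L2 norm per output filter
    n_prune = int(prune_rate * norms.numel())
    idx = norms.argsort()[:n_prune]              # weakest filters this epoch
    conv.weight[idx] = 0.0                       # zeroed, but still trainable
    if conv.bias is not None:
        conv.bias[idx] = 0.0

conv = nn.Conv2d(16, 32, 3)
for epoch in range(3):
    # ... one epoch of ordinary training on `conv` would go here ...
    soft_prune(conv, prune_rate=0.3)             # re-selected every epoch
```

Because the zeroed filters keep receiving gradients, a filter pruned in one epoch can grow back if it turns out to be useful, which is the "soft" part that removes the dependence on a pre-trained model.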
Iris Presentation Attack Detection Based on Photometric Stereo Features
Title | Iris Presentation Attack Detection Based on Photometric Stereo Features |
Authors | Adam Czajka, Zhaoyuan Fang, Kevin W. Bowyer |
Abstract | We propose a new iris presentation attack detection method using three-dimensional features of an observed iris region estimated by photometric stereo. Our implementation uses a pair of iris images acquired by a common commercial iris sensor (LG 4000). No hardware modifications of any kind are required. Our approach should be applicable to any iris sensor that can illuminate the eye from two different directions. Each iris image in the pair is captured under near-infrared illumination at a different angle relative to the eye. Photometric stereo is used to estimate surface normal vectors in the non-occluded portions of the iris region. The variability of the normal vectors is used as the presentation attack detection score. This score is larger for a texture that is irregularly opaque and printed on a convex contact lens, and is smaller for an authentic iris texture. Thus the problem is formulated as binary classification into (a) an eye wearing textured contact lens and (b) the texture of an actual iris surface (possibly seen through a clear contact lens). Experiments were carried out on a database of approx. 2,900 iris image pairs acquired from approx. 100 subjects. Our method was able to correctly classify over 95% of samples when tested on contact lens brands unseen in training, and over 98% of samples when the contact lens brand was seen during training. The source codes of the method are made available to other researchers. |
Tasks | |
Published | 2018-11-18 |
URL | http://arxiv.org/abs/1811.07252v1 |
PDF | http://arxiv.org/pdf/1811.07252v1.pdf |
PWC | https://paperswithcode.com/paper/iris-presentation-attack-detection-based-on |
Repo | https://github.com/CVRL/PhotometricStereoIrisPAD |
Framework | none |
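A hedged sketch of the scoring idea using classic Lambertian photometric stereo: recover per-pixel surface normals by least squares from images lit from known directions, then score the sample by the variability of those normals (scattered normals suggest an irregular printed contact-lens surface). Classic least squares needs at least three lights, whereas the paper works from an image pair, so this is an approximation of the setup.

```python
import numpy as np

def normals_from_images(images, lights):
    """images: (k, H, W) intensities; lights: (k, 3) unit light directions.
    Solves lights @ G = I per pixel and normalizes G to unit normals."""
    k, H, W = images.shape
    I = images.reshape(k, -1)                        # (k, H*W)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, H*W) albedo-scaled
    n = G / (np.linalg.norm(G, axis=0, keepdims=True) + 1e-8)
    return n.reshape(3, H, W)

def pad_score(normals, mask):
    """Std of normal components over non-occluded pixels; larger => attack."""
    return normals[:, mask].std(axis=1).sum()

rng = np.random.default_rng(0)
lights = np.array([[0.3, 0.0, 0.95],                 # near-frontal NIR lights
                   [-0.3, 0.0, 0.95],
                   [0.0, 0.3, 0.95]])
images = rng.uniform(0, 1, (3, 16, 16))              # dummy iris crops
mask = np.ones((16, 16), dtype=bool)                 # non-occluded iris pixels
print(pad_score(normals_from_images(images, lights), mask))
```

Thresholding this score then yields the binary live-vs-textured-lens decision described in the abstract.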