Paper Group ANR 438
A Formulation of Recursive Self-Improvement and Its Possible Efficiency
Title | A Formulation of Recursive Self-Improvement and Its Possible Efficiency |
Authors | Wenyi Wang |
Abstract | Recursive self-improving (RSI) systems have been dreamed of since the early days of computer science and artificial intelligence. However, many existing studies on RSI systems remain philosophical and lack a clear formulation and results. In this paper, we provide a formal definition for one class of RSI systems, and then demonstrate the existence of computable and efficient RSI systems for a restricted version of the problem. We use simulation to empirically show that we achieve logarithmic runtime complexity with respect to the size of the search space, and these results suggest that efficient recursive self-improvement is achievable. |
Tasks | |
Published | 2018-05-17 |
URL | http://arxiv.org/abs/1805.06610v1 |
http://arxiv.org/pdf/1805.06610v1.pdf | |
PWC | https://paperswithcode.com/paper/a-formulation-of-recursive-self-improvement |
Repo | |
Framework | |
Tensor Embedding: A Supervised Framework for Human Behavioral Data Mining and Prediction
Title | Tensor Embedding: A Supervised Framework for Human Behavioral Data Mining and Prediction |
Authors | Homa Hosseinmardi, Amir Ghasemian, Shrikanth Narayanan, Kristina Lerman, Emilio Ferrara |
Abstract | Today’s densely instrumented world offers tremendous opportunities for continuous acquisition and analysis of multimodal sensor data, providing temporal characterization of an individual’s behaviors. Is it possible to efficiently couple such rich sensor data with predictive modeling techniques to provide contextual and insightful assessments of individual performance and wellbeing? Prediction of different aspects of human behavior from these noisy, incomplete, and heterogeneous bio-behavioral temporal data is a challenging problem, beyond unsupervised discovery of latent structures. We propose a Supervised Tensor Embedding (STE) algorithm for high-dimensional multimodal data with joint decomposition of the input and target variables. Furthermore, we show that feature selection helps to reduce contamination in the prediction and increase performance. The efficacy of the method was tested on two different real-world datasets. |
Tasks | |
Published | 2018-08-31 |
URL | http://arxiv.org/abs/1808.10867v1 |
http://arxiv.org/pdf/1808.10867v1.pdf | |
PWC | https://paperswithcode.com/paper/tensor-embedding-a-supervised-framework-for |
Repo | |
Framework | |
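The abstract does not spell out the decomposition itself, so the following is only a rough sketch of the general idea of a supervised joint factorization: a CP-style decomposition of a subjects x sensors x time tensor whose subject factor is simultaneously regressed onto the target, with reconstruction and prediction errors minimized jointly by plain gradient descent. All sizes, the rank R, the weight alpha, and the optimizer are illustrative assumptions, not the paper's STE algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tensor: subjects x sensors x time, plus a per-subject target.
I, J, K, R = 40, 8, 30, 4                 # sizes and CP rank (illustrative)
X = rng.normal(size=(I, J, K))
y = rng.normal(size=I)
alpha, lr = 1.0, 1e-4                      # supervision weight, step size (assumptions)

A = rng.normal(size=(I, R)); B = rng.normal(size=(J, R))
C = rng.normal(size=(K, R)); w = rng.normal(size=R)

for _ in range(300):
    Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)   # CP reconstruction
    E = X - Xhat                                  # reconstruction residual
    r = y - A @ w                                 # prediction residual
    # Gradients of ||E||^2 + alpha * ||r||^2; the shared factor A couples both terms.
    gA = -2 * np.einsum('ijk,jr,kr->ir', E, B, C) - 2 * alpha * np.outer(r, w)
    gB = -2 * np.einsum('ijk,ir,kr->jr', E, A, C)
    gC = -2 * np.einsum('ijk,ir,jr->kr', E, A, B)
    gw = -2 * alpha * (A.T @ r)
    A -= lr * gA; B -= lr * gB; C -= lr * gC; w -= lr * gw

print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)), np.linalg.norm(y - A @ w))
```

A real implementation would typically use alternating least squares or a dedicated tensor library rather than raw gradient descent; the point here is only the shared factor A that ties reconstruction and prediction together.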
Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning
Title | Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning |
Authors | Shuai Tang, Virginia R. de Sa |
Abstract | Encoder-decoder models for unsupervised sentence representation learning tend to discard the decoder after being trained on a large unlabelled corpus, since only the encoder is needed to map the input sentence into a vector representation. However, the parameters learnt in the decoder also contain useful information about language. In order to utilise the decoder after learning, we present two types of decoding functions whose inverses can be easily derived without expensive inverse calculation. Therefore, the inverse of the decoding function serves as another encoder that produces sentence representations. We show that, with careful design of the decoding functions, the model learns good sentence representations, and the ensemble of the representations produced from the encoder and the inverse of the decoder demonstrates even better generalisation ability and solid transferability. |
Tasks | Representation Learning |
Published | 2018-09-08 |
URL | https://arxiv.org/abs/1809.02731v3 |
https://arxiv.org/pdf/1809.02731v3.pdf | |
PWC | https://paperswithcode.com/paper/exploiting-invertible-decoders-for |
Repo | |
Framework | |
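As a hedged illustration of the inverse-decoder idea (not the authors' training setup), suppose the decoder is a single linear map from the sentence vector to a reconstruction target; its pseudo-inverse can then be applied to new inputs as a second encoder, and the two representations can be concatenated into an ensemble. All data and sizes below are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: sentence vectors from some pretrained encoder (z) and the
# targets the decoder is trained to reconstruct (x).
n, d_z, d_x = 500, 64, 300            # illustrative sizes
z = rng.normal(size=(n, d_z))          # encoder outputs
x = rng.normal(size=(n, d_x))          # decoder targets (placeholder data)

# Linear decoder x ~= z @ W, fit by least squares as a proxy for training.
W, *_ = np.linalg.lstsq(z, x, rcond=None)       # shape (d_z, d_x)

# The decoder's pseudo-inverse acts as a second encoder mapping x back to z-space.
W_pinv = np.linalg.pinv(W)                       # shape (d_x, d_z)
z_from_decoder = x @ W_pinv

# Ensemble representation: concatenate the two views of each sentence.
representation = np.concatenate([z, z_from_decoder], axis=1)
print(representation.shape)   # (500, 128)
```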
Nearly-tight bounds on linear regions of piecewise linear neural networks
Title | Nearly-tight bounds on linear regions of piecewise linear neural networks |
Authors | Qiang Hu, Hao Zhang |
Abstract | The developments of deep neural networks (DNNs) in recent years have ushered in a brand new era of artificial intelligence. DNNs have proved to be excellent at solving very complex problems, e.g., visual recognition and text understanding, to the extent of competing with or even surpassing people. Despite the inspiring and encouraging success of DNNs, thorough theoretical analyses are still lacking to unravel the mystery of their inner workings. The design of DNN structures is dominated by empirical results in terms of network depth, number of neurons, and activations. A few remarkable works published recently in an attempt to interpret DNNs have offered first glimpses of their internal mechanisms. Nevertheless, research on how DNNs operate is still at an initial stage with plenty of room for refinement. In this paper, we extend prior research on neural networks with piecewise linear activations (PLNNs) concerning bounds on the number of linear regions. We present (i) the exact maximal number of linear regions for single-layer PLNNs; (ii) an upper bound for multi-layer PLNNs; and (iii) a tighter upper bound on the maximal number of linear regions of rectifier networks. The derived bounds also indirectly explain why deep models are more powerful than shallow counterparts, and how the non-linearity of activation functions affects the expressiveness of networks. |
Tasks | |
Published | 2018-10-31 |
URL | http://arxiv.org/abs/1810.13192v4 |
http://arxiv.org/pdf/1810.13192v4.pdf | |
PWC | https://paperswithcode.com/paper/nearly-tight-bounds-on-linear-regions-of |
Repo | |
Framework | |
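The abstract does not reproduce the bounds themselves, but the exact single-layer count for ReLU-type activations usually coincides with the classical hyperplane-arrangement formula: n hyperplanes in general position split R^d into sum_{i=0}^{d} C(n, i) regions. A minimal sketch of that classical quantity (the paper's multi-layer bounds are not reproduced here):

```python
from math import comb

def max_linear_regions_single_layer(n_neurons: int, input_dim: int) -> int:
    """Classical hyperplane-arrangement count: n hyperplanes in general position
    split R^d into sum_{i=0}^{d} C(n, i) regions. A single ReLU layer with
    n_neurons units on an input_dim-dimensional input induces such an
    arrangement, so this is the standard exact maximum for the one-layer case."""
    return sum(comb(n_neurons, i) for i in range(input_dim + 1))

# Example: 20 ReLU units on a 3-dimensional input.
print(max_linear_regions_single_layer(20, 3))  # 1 + 20 + 190 + 1140 = 1351
```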
Geometry-Based Multiple Camera Head Detection in Dense Crowds
Title | Geometry-Based Multiple Camera Head Detection in Dense Crowds |
Authors | Nicola Pellicanò, Emanuel Aldea, Sylvie Le Hégarat-Mascle |
Abstract | This paper addresses the problem of head detection in crowded environments. Our detection is based entirely on the geometric consistency across cameras with overlapping fields of view, and no additional learning process is required. We propose a fully unsupervised method for inferring scene and camera geometry, in contrast to existing algorithms which require specific calibration procedures. Moreover, we avoid relying on the presence of body parts other than heads or on background subtraction, which have limited effectiveness under heavy clutter. We cast the head detection problem as a stereo MRF-based optimization of a dense pedestrian height map, and we introduce a constraint which aligns the height gradient according to the vertical vanishing point direction. We validate the method in an outdoor setting with varying pedestrian density levels. With only three views, our approach is able to simultaneously detect tens of heavily occluded pedestrians across a large, homogeneous area. |
Tasks | Calibration, Head Detection |
Published | 2018-08-02 |
URL | http://arxiv.org/abs/1808.00856v1 |
http://arxiv.org/pdf/1808.00856v1.pdf | |
PWC | https://paperswithcode.com/paper/geometry-based-multiple-camera-head-detection |
Repo | |
Framework | |
DVQA: Understanding Data Visualizations via Question Answering
Title | DVQA: Understanding Data Visualizations via Question Answering |
Authors | Kushal Kafle, Brian Price, Scott Cohen, Christopher Kanan |
Abstract | Bar charts are an effective way to convey numeric information, but today’s algorithms cannot parse them. Existing methods fail when faced with even minor variations in appearance. Here, we present DVQA, a dataset that tests many aspects of bar chart understanding in a question answering framework. Unlike visual question answering (VQA), DVQA requires processing words and answers that are unique to a particular bar chart. State-of-the-art VQA algorithms perform poorly on DVQA, and we propose two strong baselines that perform considerably better. Our work will enable algorithms to automatically extract numeric and semantic information from vast quantities of bar charts found in scientific publications, Internet articles, business reports, and many other areas. |
Tasks | Question Answering, Visual Question Answering |
Published | 2018-01-24 |
URL | http://arxiv.org/abs/1801.08163v2 |
http://arxiv.org/pdf/1801.08163v2.pdf | |
PWC | https://paperswithcode.com/paper/dvqa-understanding-data-visualizations-via |
Repo | |
Framework | |
A Bandit Approach to Maximum Inner Product Search
Title | A Bandit Approach to Maximum Inner Product Search |
Authors | Rui Liu, Tianyi Wu, Barzan Mozafari |
Abstract | There has been substantial research on sub-linear time approximate algorithms for Maximum Inner Product Search (MIPS). To achieve fast query time, state-of-the-art techniques require significant preprocessing, which can be a burden when the number of subsequent queries is not sufficiently large to amortize the cost. Furthermore, existing methods do not have the ability to directly control the suboptimality of their approximate results with theoretical guarantees. In this paper, we propose the first approximate algorithm for MIPS that does not require any preprocessing, and allows users to control and bound the suboptimality of the results. We cast MIPS as a Best Arm Identification problem, and introduce a new bandit setting that can fully exploit the special structure of MIPS. Our approach outperforms state-of-the-art methods on both synthetic and real-world datasets. |
Tasks | |
Published | 2018-12-15 |
URL | http://arxiv.org/abs/1812.06360v1 |
http://arxiv.org/pdf/1812.06360v1.pdf | |
PWC | https://paperswithcode.com/paper/a-bandit-approach-to-maximum-inner-product |
Repo | |
Framework | |
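A hedged sketch of the general framing of MIPS as best-arm identification: each candidate vector is an arm, pulling an arm samples a few coordinates of its inner product with the query, and arms whose confidence interval falls below the current best are eliminated. This illustrates the casting only; the confidence radius, batch size, and stopping rule below are crude assumptions, not the paper's algorithm or its guarantees.

```python
import numpy as np

def bandit_mips(query, candidates, batch=8, delta=0.05, seed=0):
    """Approximate argmax_i <query, candidates[i]> via successive elimination.

    Each candidate is an 'arm'; a pull samples `batch` random coordinates to
    estimate the inner product. Arms whose upper confidence bound falls below
    the best lower bound are eliminated. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n, d = candidates.shape
    alive = np.arange(n)
    sums = np.zeros(n)
    pulls = np.zeros(n)
    scale = np.abs(query).max() * np.abs(candidates).max() * d  # crude range bound

    while len(alive) > 1 and pulls[alive[0]] < d:   # crude cap on rounds
        idx = rng.integers(0, d, size=batch)                     # sampled coordinates
        est = d * candidates[alive][:, idx] @ query[idx] / batch  # unbiased estimate
        sums[alive] += est
        pulls[alive] += 1
        mean = sums[alive] / pulls[alive]
        rad = scale * np.sqrt(np.log(2 * n / delta) / (2 * pulls[alive]))
        best_lcb = np.max(mean - rad)
        alive = alive[mean + rad >= best_lcb]        # eliminate hopeless arms
    return alive[np.argmax(sums[alive] / pulls[alive])]

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 128))
q = rng.normal(size=128)
print(bandit_mips(q, X), np.argmax(X @ q))   # approximate vs. exact answer
```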
DART: Domain-Adversarial Residual-Transfer Networks for Unsupervised Cross-Domain Image Classification
Title | DART: Domain-Adversarial Residual-Transfer Networks for Unsupervised Cross-Domain Image Classification |
Authors | Xianghong Fang, Haoli Bai, Ziyi Guo, Bin Shen, Steven Hoi, Zenglin Xu |
Abstract | The accuracy of deep learning (e.g., convolutional neural networks) for an image classification task critically relies on the amount of labeled training data. Aiming to solve an image classification task on a new domain that lacks labeled data but has access to cheaply available unlabeled data, unsupervised domain adaptation is a promising technique to boost the performance without incurring extra labeling cost, by assuming images from different domains share some invariant characteristics. In this paper, we propose a new unsupervised domain adaptation method named Domain-Adversarial Residual-Transfer (DART) learning of Deep Neural Networks to tackle cross-domain image classification tasks. In contrast to existing unsupervised domain adaptation approaches, the proposed DART not only learns domain-invariant features via adversarial training, but also achieves robust domain-adaptive classification via a residual-transfer strategy, all in an end-to-end training framework. We evaluate the performance of the proposed method for cross-domain image classification tasks on several well-known benchmark data sets, in which our method clearly outperforms the state-of-the-art approaches. |
Tasks | Domain Adaptation, Image Classification, Unsupervised Domain Adaptation |
Published | 2018-12-30 |
URL | http://arxiv.org/abs/1812.11478v1 |
http://arxiv.org/pdf/1812.11478v1.pdf | |
PWC | https://paperswithcode.com/paper/dart-domain-adversarial-residual-transfer |
Repo | |
Framework | |
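The abstract names two ingredients, adversarial learning of domain-invariant features and a residual-transfer classification strategy. Below is a minimal PyTorch-style sketch of how such pieces are commonly wired together: a gradient-reversal layer feeding a domain discriminator, and a target classifier formed as the source classifier plus a small residual branch. Module names, layer sizes, and the loss wiring are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class DARTSketch(nn.Module):
    def __init__(self, feat_dim=256, n_classes=31, lam=1.0):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(2048, feat_dim), nn.ReLU())
        self.src_classifier = nn.Linear(feat_dim, n_classes)
        # Residual-transfer branch: target classifier = source classifier + residual.
        self.residual = nn.Sequential(nn.Linear(n_classes, n_classes), nn.ReLU(),
                                      nn.Linear(n_classes, n_classes))
        self.domain_disc = nn.Linear(feat_dim, 2)
        self.lam = lam

    def forward(self, x):
        f = self.features(x)
        logits_src = self.src_classifier(f)                   # for labeled source data
        logits_tgt = logits_src + self.residual(logits_src)   # for target data
        dom = self.domain_disc(GradReverse.apply(f, self.lam))
        return logits_src, logits_tgt, dom
```

Training (not shown) would combine a classification loss on the source logits, a domain-classification loss on `dom` for both domains, and whatever target-side regularization the method prescribes.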
TWINs: Two Weighted Inconsistency-reduced Networks for Partial Domain Adaptation
Title | TWINs: Two Weighted Inconsistency-reduced Networks for Partial Domain Adaptation |
Authors | Toshihiko Matsuura, Kuniaki Saito, Tatsuya Harada |
Abstract | The task of unsupervised domain adaptation is proposed to transfer the knowledge of a label-rich domain (source domain) to a label-scarce domain (target domain). Matching feature distributions between different domains is a widely applied method for the aforementioned task. However, the method does not perform well when classes in the two domains are not identical. Specifically, when the classes of the target correspond to a subset of those of the source, target samples can be incorrectly aligned with the classes that exist only in the source. This problem setting is termed partial domain adaptation (PDA). In this study, we propose a novel method called Two Weighted Inconsistency-reduced Networks (TWINs) for PDA. We utilize two classification networks to estimate the ratio of the target samples in each class, with which the classification loss is weighted to adapt to the classes present in the target domain. Furthermore, to extract discriminative features for the target, we propose to minimize the divergence between domains measured by the classifiers’ inconsistency on target samples. We empirically demonstrate that reducing the inconsistency between two networks is effective for PDA and that our method outperforms other existing methods by a large margin on several datasets. |
Tasks | Domain Adaptation, Partial Domain Adaptation, Unsupervised Domain Adaptation |
Published | 2018-12-18 |
URL | http://arxiv.org/abs/1812.07405v1 |
http://arxiv.org/pdf/1812.07405v1.pdf | |
PWC | https://paperswithcode.com/paper/twins-two-weighted-inconsistency-reduced |
Repo | |
Framework | |
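A hedged sketch of the two weighted-loss ingredients the abstract describes: class weights estimated from the two classifiers' average predictions on target samples, used to reweight the source classification loss, plus an inconsistency (discrepancy) term between the two classifiers on target data. Tensor names and the exact weighting scheme are illustrative, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def twins_losses(logits1_src, logits2_src, y_src, logits1_tgt, logits2_tgt):
    """Sketch of TWINs-style objectives (illustrative, not the paper's exact form).

    - class_weights: estimated share of each class in the target domain, taken
      as the mean softmax of both classifiers over target samples;
    - weighted source loss: down-weights classes that seem absent in the target;
    - inconsistency: L1 discrepancy between the two classifiers on target data."""
    with torch.no_grad():
        probs_tgt = 0.5 * (F.softmax(logits1_tgt, dim=1) + F.softmax(logits2_tgt, dim=1))
        class_weights = probs_tgt.mean(dim=0)
        class_weights = class_weights / class_weights.max()   # normalize to (0, 1]

    cls_loss = (F.cross_entropy(logits1_src, y_src, weight=class_weights)
                + F.cross_entropy(logits2_src, y_src, weight=class_weights))

    inconsistency = (F.softmax(logits1_tgt, dim=1)
                     - F.softmax(logits2_tgt, dim=1)).abs().mean()

    return cls_loss, inconsistency
```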
InverSynth: Deep Estimation of Synthesizer Parameter Configurations from Audio Signals
Title | InverSynth: Deep Estimation of Synthesizer Parameter Configurations from Audio Signals |
Authors | Oren Barkan, David Tsiris, Ori Katz, Noam Koenigstein |
Abstract | Sound synthesis is a complex field that requires domain expertise. Manual tuning of synthesizer parameters to match a specific sound can be an exhausting task, even for experienced sound engineers. In this paper, we introduce InverSynth, an automatic method for synthesizer parameter tuning to match a given input sound. InverSynth is based on strided convolutional neural networks and is capable of inferring the synthesizer parameter configuration from the input spectrogram and even from the raw audio. The effectiveness of InverSynth is demonstrated on a subtractive synthesizer with four frequency-modulated oscillators, an envelope generator, and a gater effect. We present extensive quantitative and qualitative results that showcase the superiority of InverSynth over several baselines. Furthermore, we show that network depth is an important factor that contributes to prediction accuracy. |
Tasks | |
Published | 2018-12-15 |
URL | https://arxiv.org/abs/1812.06349v2 |
https://arxiv.org/pdf/1812.06349v2.pdf | |
PWC | https://paperswithcode.com/paper/deep-synthesizer-parameter-estimation |
Repo | |
Framework | |
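A minimal sketch of the spectrogram-to-parameters setup described above: a small strided CNN that maps a spectrogram to a vector of synthesizer parameter predictions. The layer sizes, the number of parameters, and treating parameters as continuous values in [0, 1] are assumptions for illustration, not the InverSynth architecture.

```python
import torch
import torch.nn as nn

class SpectrogramToParams(nn.Module):
    """Strided CNN: (batch, 1, freq, time) spectrogram -> synth parameter vector."""
    def __init__(self, n_params=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(128, n_params), nn.Sigmoid())

    def forward(self, spec):
        return self.head(self.conv(spec))   # parameters assumed normalized to [0, 1]

model = SpectrogramToParams(n_params=16)
dummy_spec = torch.randn(4, 1, 128, 128)      # batch of 4 toy spectrograms
print(model(dummy_spec).shape)                # torch.Size([4, 16])
```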
Deep Recurrent Neural Networks for ECG Signal Denoising
Title | Deep Recurrent Neural Networks for ECG Signal Denoising |
Authors | Karol Antczak |
Abstract | Electrocardiographic signals are subject to multiple sources of noise caused by various factors. It is therefore standard practice to denoise such signals before further analysis. With advances in a new branch of machine learning called deep learning, new methods have become available that promise state-of-the-art performance for this task. We present a novel approach to denoising electrocardiographic signals with deep recurrent denoising neural networks. We utilize a transfer learning technique by pretraining the network using synthetic data, generated by a dynamic ECG model, and fine-tuning it with real data. We also investigate the impact of the synthetic training data on the network's performance on real signals. The proposed method was tested on a real dataset with varying amounts of noise. The results indicate that a four-layer deep recurrent neural network can outperform reference methods on heavily noised signals. Moreover, networks pretrained with synthetic data seem to achieve better results than networks trained with real data only. We show that it is possible to create a state-of-the-art denoising neural network that, pretrained on artificial data, can perform exceptionally well on real ECG signals after proper fine-tuning. |
Tasks | Denoising, Transfer Learning |
Published | 2018-07-30 |
URL | http://arxiv.org/abs/1807.11551v3 |
http://arxiv.org/pdf/1807.11551v3.pdf | |
PWC | https://paperswithcode.com/paper/deep-recurrent-neural-networks-for-ecg-signal |
Repo | |
Framework | |
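A hedged sketch of the recurrent denoising setup: pretrain a small LSTM denoiser on synthetic signals, then fine-tune on real ones. The synthetic generator here is a crude sine-plus-noise stand-in for a dynamic ECG model, and all hyperparameters and layer counts are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class RecurrentDenoiser(nn.Module):
    """LSTM that maps a noisy 1-D signal to a denoised signal, sample by sample."""
    def __init__(self, hidden=64, layers=2):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden,
                           num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, time, 1)
        h, _ = self.rnn(x)
        return self.out(h)

def synthetic_batch(batch=32, length=256):
    """Crude stand-in for a dynamic ECG generator: periodic signal plus Gaussian noise."""
    t = torch.linspace(0, 4 * torch.pi, length)
    clean = (torch.sin(t) + 0.3 * torch.sin(3 * t)).repeat(batch, 1).unsqueeze(-1)
    noisy = clean + 0.3 * torch.randn_like(clean)
    return noisy, clean

model = RecurrentDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):                          # pretraining on synthetic data
    noisy, clean = synthetic_batch()
    loss = loss_fn(model(noisy), clean)
    opt.zero_grad(); loss.backward(); opt.step()
# Fine-tuning would repeat the same loop on (noisy, clean) pairs of real ECG data.
```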
An agent-based evaluation of impacts of transport developments on the modal shift in Tehran, Iran
Title | An agent-based evaluation of impacts of transport developments on the modal shift in Tehran, Iran |
Authors | A. Shirzadi Babakan, A. Alimohammadi, M. Taleai |
Abstract | Changes in the travel modes used by people, particularly a reduction in private car use, are an important determinant of the effectiveness of transportation plans. Because of the dependencies between the choices of residential location and travel mode, integrated modelling of these choices has been proposed by some researchers. In this paper, an agent-based microsimulation model has been developed to evaluate the impacts of different transport development plans on the choices of residential location and commuting mode of tenant households in Tehran, the capital of Iran. In the proposed model, households are considered agents who select their desired residential location using a constrained NSGA-II algorithm and in competition with other households. In addition, they choose their commuting mode by applying a multi-criteria decision making method. Afterwards, the effects of the development of a new highway, subway, and bus rapid transit (BRT) line on their residential location and commuting mode choices are evaluated. Results show that despite residential self-selection effects, these plans result in considerable changes in the commuting mode of different socio-economic categories of households. Development of the new subway line shows promising results, reducing private car use among all socio-economic categories of households. In contrast, the new highway development unsatisfactorily results in an increase in private car use. In addition, development of the new BRT line does not show significant effects on commuting mode change, particularly on decreasing private car use. |
Tasks | Decision Making |
Published | 2018-03-13 |
URL | http://arxiv.org/abs/1803.04934v1 |
http://arxiv.org/pdf/1803.04934v1.pdf | |
PWC | https://paperswithcode.com/paper/an-agent-based-evaluation-of-impacts-of |
Repo | |
Framework | |
MASA: Motif-Aware State Assignment in Noisy Time Series Data
Title | MASA: Motif-Aware State Assignment in Noisy Time Series Data |
Authors | Saachi Jain, David Hallac, Rok Sosic, Jure Leskovec |
Abstract | Complex systems, such as airplanes, cars, or financial markets, produce multivariate time series data consisting of a large number of system measurements over a period of time. Such data can be interpreted as a sequence of states, where each state represents a prototype of system behavior. An important problem in this domain is to identify repeated sequences of states, known as motifs. Such motifs correspond to complex behaviors that capture common sequences of state transitions. For example, in automotive data, a motif of “making a turn” might manifest as a sequence of states: slowing down, turning the wheel, and then speeding back up. However, discovering these motifs is challenging, because the individual states and state assignments are unknown, have different durations, and need to be jointly learned from the noisy time series. Here we develop motif-aware state assignment (MASA), a method to discover common motifs in noisy time series data and leverage those motifs to more robustly assign states to measurements. We formulate the problem of motif discovery as a large optimization problem, which we solve using an expectation-maximization type approach. MASA performs well in the presence of noise in the input data and is scalable to very large datasets. Experiments on synthetic data show that MASA outperforms state-of-the-art baselines by up to 38.2%, and two case studies demonstrate how our approach discovers insightful motifs in the presence of noise in real-world time series data. |
Tasks | Time Series |
Published | 2018-09-06 |
URL | https://arxiv.org/abs/1809.01819v2 |
https://arxiv.org/pdf/1809.01819v2.pdf | |
PWC | https://paperswithcode.com/paper/casc-context-aware-segmentation-and |
Repo | |
Framework | |
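The abstract describes an EM-style alternation between state assignment and motif discovery. The toy sketch below captures only that alternation in a much-simplified form: nearest-centroid states, frequent state bigrams as "motifs", and a bonus for assignments that continue a frequent motif. It is not the MASA optimization problem itself; every detail here is an assumption for illustration.

```python
import numpy as np
from collections import Counter

def toy_masa(x, k=3, iters=5, bonus=0.5, seed=0):
    """Very simplified motif-aware state assignment for a 1-D series x.

    Alternates between (a) assigning each point to the nearest state centroid,
    with a bonus for continuing a frequent state bigram ('motif'), and
    (b) re-estimating the centroids. Illustrative only."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(x, size=k, replace=False)
    states = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)

    for _ in range(iters):
        # Motif discovery: the most frequent state bigrams seen so far.
        bigrams = Counter(zip(states[:-1], states[1:]))
        frequent = {bg for bg, _ in bigrams.most_common(k)}

        # State assignment with a motif bonus.
        for t in range(1, len(x)):
            cost = np.abs(x[t] - centroids)
            for s in range(k):
                if (states[t - 1], s) in frequent:
                    cost[s] -= bonus              # prefer continuing a known motif
            states[t] = np.argmin(cost)

        # Centroid update.
        for s in range(k):
            if np.any(states == s):
                centroids[s] = x[states == s].mean()
    return states, centroids

x = np.concatenate([np.zeros(30), np.ones(30) * 5, np.ones(30) * 2]) + \
    np.random.default_rng(1).normal(0, 0.3, 90)
print(toy_masa(x)[0])
```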
On the Relative Succinctness of Sentential Decision Diagrams
Title | On the Relative Succinctness of Sentential Decision Diagrams |
Authors | Beate Bollig, Matthias Buttkus |
Abstract | Sentential decision diagrams (SDDs), introduced by Darwiche in 2011, are a promising representation type used in knowledge compilation. The relative succinctness of representation types is an important subject in this area. The aim of the paper is to identify which kinds of Boolean functions can be represented by SDDs of small size with respect to the number of variables the functions are defined on. For this reason, the sets of Boolean functions representable by different representation types in polynomial size are investigated, and SDDs are compared with representation types from the classical knowledge compilation map of Darwiche and Marquis. Ordered binary decision diagrams (OBDDs), which are a popular data structure for Boolean functions, are one of these representation types. SDDs are more general than OBDDs by definition, but only recently a Boolean function was presented with polynomial SDD size but exponential OBDD size. This result is strengthened in several ways. The main result is a quasipolynomial simulation of SDDs by equivalent unambiguous nondeterministic OBDDs, a nondeterministic variant where there exists exactly one accepting computation for each satisfying input. As a side effect, an open problem about the relative succinctness between SDDs and free binary decision diagrams (FBDDs), which are more general than OBDDs, is answered. |
Tasks | |
Published | 2018-02-13 |
URL | http://arxiv.org/abs/1802.04544v1 |
http://arxiv.org/pdf/1802.04544v1.pdf | |
PWC | https://paperswithcode.com/paper/on-the-relative-succinctness-of-sentential |
Repo | |
Framework | |
Few-Shot Self Reminder to Overcome Catastrophic Forgetting
Title | Few-Shot Self Reminder to Overcome Catastrophic Forgetting |
Authors | Junfeng Wen, Yanshuai Cao, Ruitong Huang |
Abstract | Deep neural networks are known to suffer from the catastrophic forgetting problem, where they tend to forget the knowledge from previous tasks when sequentially learning new tasks. Such failure hinders the application of deep learning based vision systems in continual learning settings. In this work, we present a simple yet surprisingly effective way of preventing catastrophic forgetting. Our method, called Few-shot Self Reminder (FSR), regularizes the neural net against changing its learned behaviour by performing logit matching on selected samples kept in episodic memory from the old tasks. Surprisingly, this simple approach only requires retraining on a small amount of data in order to outperform previous methods in knowledge retention. We demonstrate the superiority of our method over previous ones in two different continual learning settings on popular benchmarks, as well as on a new continual learning problem where tasks are designed to be more dissimilar. |
Tasks | Continual Learning |
Published | 2018-12-03 |
URL | http://arxiv.org/abs/1812.00543v1 |
http://arxiv.org/pdf/1812.00543v1.pdf | |
PWC | https://paperswithcode.com/paper/few-shot-self-reminder-to-overcome |
Repo | |
Framework | |
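A minimal sketch of the logit-matching idea described above: keep a small episodic memory of old-task examples together with the logits the network produced for them, and add a matching penalty when training on a new task. The loss weight and memory handling are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def fsr_loss(model, new_x, new_y, memory_x, memory_logits, lam=1.0):
    """Few-shot self-reminder style objective (illustrative sketch).

    - standard cross-entropy on the current task's batch;
    - logit matching (MSE) between the model's current outputs on the stored
      episodic-memory samples and the logits recorded when those tasks were learned."""
    task_loss = F.cross_entropy(model(new_x), new_y)
    reminder = F.mse_loss(model(memory_x), memory_logits)
    return task_loss + lam * reminder

# Usage sketch: after finishing a task, store a few (x, logits) pairs, e.g.
#   memory_x, memory_logits = x_sample, model(x_sample).detach()
# and pass them to fsr_loss while training on subsequent tasks.
```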