January 30, 2020

3258 words 16 mins read

Paper Group ANR 205

A new CP-approach for a parallel machine scheduling problem with time constraints on machine qualifications. Deep Divergence-Based Approach to Clustering. The Power of Comparisons for Actively Learning Linear Classifiers. Wi2Vi: Generating Video Frames from WiFi CSI Samples. Domain-Invariant Feature Distillation for Cross-Domain Sentiment Classific …

A new CP-approach for a parallel machine scheduling problem with time constraints on machine qualifications

Title A new CP-approach for a parallel machine scheduling problem with time constraints on machine qualifications
Authors Arnaud Malapert, Margaux Nattaf
Abstract This paper considers the scheduling of job families on parallel machines with time constraints on machine qualifications. In this problem, each job belongs to a family and a family can only be executed on a subset of qualified machines. In addition, machines can lose their qualifications during the schedule: if no job of a family is scheduled on a machine for a given amount of time, the machine loses its qualification for this family. The goal is to minimize the sum of job completion times, i.e. the flow time, while maximizing the number of qualifications remaining at the end of the schedule. The paper presents a new Constraint Programming (CP) model that takes greater advantage of CP features to model machine disqualifications. This model is compared with two existing models: an Integer Linear Programming (ILP) model and a Constraint Programming model. The experiments show that the new CP model outperforms the existing models when priority is given to the disqualification objective, and that it is competitive with them when the flow time objective is prioritized.
Tasks
Published 2019-10-16
URL https://arxiv.org/abs/1910.07203v1
PDF https://arxiv.org/pdf/1910.07203v1.pdf
PWC https://paperswithcode.com/paper/a-new-cp-approach-for-a-parallel-machine
Repo
Framework
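
A minimal sketch of the assignment-and-flow-time core of this problem using Google OR-Tools CP-SAT. It is not the paper's model: the solver, the toy instance data, and the decision to omit the time-window disqualification constraints are all assumptions made for illustration.

```python
from ortools.sat.python import cp_model

# Toy instance (hypothetical data): processing time per family, family of each
# job, and which machines are initially qualified for each family.
durations = {0: 3, 1: 2}            # family -> processing time
job_family = [0, 0, 1, 1, 1]        # job -> family
qualified = {0: {0, 1}, 1: {1}}     # machine -> set of qualified families
machines = list(qualified)
horizon = sum(durations[job_family[j]] for j in range(len(job_family)))

model = cp_model.CpModel()
starts, ends, intervals = [], [], {m: [] for m in machines}
for j, f in enumerate(job_family):
    s = model.NewIntVar(0, horizon, f"start_{j}")
    e = model.NewIntVar(0, horizon, f"end_{j}")
    starts.append(s); ends.append(e)
    presences = []
    for m in machines:
        if f not in qualified[m]:
            continue                 # a family can only run on qualified machines
        p = model.NewBoolVar(f"on_{j}_{m}")
        iv = model.NewOptionalIntervalVar(s, durations[f], e, p, f"iv_{j}_{m}")
        intervals[m].append(iv)
        presences.append(p)
    model.Add(sum(presences) == 1)   # each job runs on exactly one machine

for m in machines:
    model.AddNoOverlap(intervals[m])  # a machine processes one job at a time

# The paper's dynamic disqualification constraints (a machine loses a family's
# qualification after a gap with no job of that family) are omitted here.
model.Minimize(sum(ends))             # flow time = sum of completion times
solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print([solver.Value(s) for s in starts])
```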

Deep Divergence-Based Approach to Clustering

Title Deep Divergence-Based Approach to Clustering
Authors Michael Kampffmeyer, Sigurd Løkse, Filippo M. Bianchi, Lorenzo Livi, Arnt-Børre Salberg, Robert Jenssen
Abstract A promising direction in deep learning research consists in learning representations and simultaneously discovering cluster structure in unlabeled data by optimizing a discriminative loss function. As opposed to supervised deep learning, this line of research is in its infancy, and how to design and optimize suitable loss functions to train deep neural networks for clustering is still an open question. Our contribution to this emerging field is a new deep clustering network that leverages the discriminative power of information-theoretic divergence measures, which have been shown to be effective in traditional clustering. We propose a novel loss function that incorporates geometric regularization constraints, thus avoiding degenerate structures of the resulting clustering partition. Experiments on synthetic benchmarks and real datasets show that the proposed network achieves competitive performance with respect to other state-of-the-art methods, scales well to large datasets, and does not require pre-training steps.
Tasks
Published 2019-02-13
URL http://arxiv.org/abs/1902.04981v1
PDF http://arxiv.org/pdf/1902.04981v1.pdf
PWC https://paperswithcode.com/paper/deep-divergence-based-approach-to-clustering
Repo
Framework
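
To make the idea of a divergence-driven clustering loss concrete, here is a toy PyTorch sketch of a Cauchy-Schwarz-style penalty on the kernel overlap between soft cluster assignments. It is a simplification for illustration only, not the paper's exact objective or its geometric regularization terms.

```python
import torch

def cs_divergence_loss(assign, feats, sigma=1.0):
    """Toy divergence-style clustering loss: encourages soft cluster
    assignments whose kernel-weighted overlap between different clusters
    is small (a simplified stand-in, not the paper's exact loss)."""
    # Gaussian kernel between hidden features
    d2 = ((feats[:, None] - feats[None, :]) ** 2).sum(-1)
    K = torch.exp(-d2 / (2 * sigma ** 2))              # (n, n)
    M = assign.t() @ K @ assign                        # (k, k) cluster overlaps
    diag = torch.sqrt(torch.clamp(torch.diag(M), min=1e-9))
    cos = M / (diag[:, None] * diag[None, :])          # normalized overlap
    k = assign.shape[1]
    off_diag = cos - torch.diag(torch.diag(cos))
    return off_diag.sum() / (k * (k - 1))              # mean between-cluster similarity

# Usage: assign = softmax output of any encoder head, feats = its hidden features.
feats = torch.randn(64, 16)
assign = torch.softmax(torch.randn(64, 5), dim=1)
print(cs_divergence_loss(assign, feats))
```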

The Power of Comparisons for Actively Learning Linear Classifiers

Title The Power of Comparisons for Actively Learning Linear Classifiers
Authors Max Hopkins, Daniel M. Kane, Shachar Lovett
Abstract In the world of big data, large but costly to label datasets dominate many fields. Active learning, an unsupervised alternative to the standard PAC-learning model, was introduced to explore whether adaptive labeling could learn concepts with exponentially fewer labeled samples. While previous results show that active learning performs no better than its supervised alternative for important concept classes such as linear separators, we show that by adding weak distributional assumptions and allowing comparison queries, active learning requires exponentially fewer samples. Further, we show that these results hold as well for a stronger model of learning called Reliable and Probably Useful (RPU) learning. In this model, our learner is not allowed to make mistakes, but may instead answer “I don’t know.” While previous negative results showed this model to have intractably large sample complexity for label queries, we show that comparison queries make RPU-learning at worst logarithmically more expensive in the passive case, and quadratically more expensive in the active case.
Tasks Active Learning
Published 2019-07-08
URL https://arxiv.org/abs/1907.03816v1
PDF https://arxiv.org/pdf/1907.03816v1.pdf
PWC https://paperswithcode.com/paper/the-power-of-comparisons-for-actively
Repo
Framework
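
A toy sketch of the general mechanism behind comparison queries, not the paper's construction or its RPU analysis: comparisons alone suffice to sort points along the unknown normal direction, after which a binary search needs only logarithmically many label queries. The data and oracles below are synthetic.

```python
import functools
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.2               # hidden halfspace (unknown to the learner)
X = rng.normal(size=(200, 3))

def compare(i, j):
    """Comparison oracle: which point lies further along the normal direction w?"""
    return float(np.sign((X[i] - X[j]) @ w))

def label(i):
    """Label oracle: side of the hyperplane w.x + b = 0."""
    return 1 if X[i] @ w + b > 0 else -1

# Step 1: sort the points along the (unknown) normal using only comparisons.
order = sorted(range(len(X)), key=functools.cmp_to_key(compare))

# Step 2: binary-search for the decision boundary with O(log n) label queries.
lo, hi, queries = 0, len(order) - 1, 0
while lo < hi:
    mid = (lo + hi) // 2
    queries += 1
    if label(order[mid]) < 0:
        lo = mid + 1
    else:
        hi = mid
print("label queries used:", queries, "for", len(X), "points")
```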

Wi2Vi: Generating Video Frames from WiFi CSI Samples

Title Wi2Vi: Generating Video Frames from WiFi CSI Samples
Authors Mohammad Hadi Kefayati, Vahid Pourahmadi, Hassan Aghaeinia
Abstract Objects in an environment affect electromagnetic waves. While this effect varies across frequencies, there exists a correlation between them, and a model with enough capacity can capture the correlation between measurements in different frequencies. In this paper, we propose the Wi2Vi model for associating variations in the WiFi channel state information (CSI) with video frames. The proposed Wi2Vi system can generate video frames entirely from CSI measurements. The video frames produced by Wi2Vi provide auxiliary information to conventional surveillance systems in critical circumstances. Our implementation of the Wi2Vi system confirms the feasibility of constructing a system capable of deriving the correlations between measurements in different frequency spectrums.
Tasks
Published 2019-12-30
URL https://arxiv.org/abs/2001.05842v1
PDF https://arxiv.org/pdf/2001.05842v1.pdf
PWC https://paperswithcode.com/paper/wi2vi-generating-video-frames-from-wifi-csi
Repo
Framework
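
A minimal PyTorch sketch of a CSI-to-frame encoder-decoder, to make the idea concrete. The layer sizes, the CSI dimensionality (270) and the output resolution are assumptions and do not reflect the actual Wi2Vi architecture.

```python
import torch
import torch.nn as nn

class CsiToFrame(nn.Module):
    """Toy encoder-decoder mapping a CSI sample to a 64x64 grayscale frame
    (illustrative architecture; the real Wi2Vi network differs)."""
    def __init__(self, csi_dim=270):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(csi_dim, 512), nn.ReLU(),
            nn.Linear(512, 128 * 8 * 8), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 64x64
        )

    def forward(self, csi):
        h = self.encoder(csi).view(-1, 128, 8, 8)
        return self.decoder(h)

model = CsiToFrame()
frames = model(torch.randn(4, 270))                   # 4 CSI samples -> (4, 1, 64, 64)
loss = nn.functional.mse_loss(frames, torch.rand(4, 1, 64, 64))  # vs. reference frames
```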

Domain-Invariant Feature Distillation for Cross-Domain Sentiment Classification

Title Domain-Invariant Feature Distillation for Cross-Domain Sentiment Classification
Authors Mengting Hu, Yike Wu, Shiwan Zhao, Honglei Guo, Renhong Cheng, Zhong Su
Abstract Cross-domain sentiment classification has drawn much attention in recent years. Most existing approaches focus on learning domain-invariant representations in both the source and target domains, while few of them pay attention to domain-specific information. Despite the non-transferability of domain-specific information, simultaneously learning domain-dependent representations can facilitate the learning of domain-invariant representations. In this paper, we focus on aspect-level cross-domain sentiment classification and propose to distill domain-invariant sentiment features with the help of an orthogonal domain-dependent task, namely aspect detection, which is built on aspects that vary widely across domains. We conduct extensive experiments on three public datasets, and the results demonstrate the effectiveness of our method.
Tasks Sentiment Analysis
Published 2019-08-24
URL https://arxiv.org/abs/1908.09122v1
PDF https://arxiv.org/pdf/1908.09122v1.pdf
PWC https://paperswithcode.com/paper/domain-invariant-feature-distillation-for
Repo
Framework
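
A hedged PyTorch sketch of the shared idea: a domain-invariant branch feeding a sentiment head, a domain-specific branch feeding an aspect-detection head, and an orthogonality penalty pushing the two feature spaces apart. The layer sizes, heads, and loss weights are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class TwoBranchModel(nn.Module):
    """Sketch: one branch for (domain-invariant) sentiment features, one for
    (domain-dependent) aspect-detection features; sizes are assumptions."""
    def __init__(self, in_dim=300, hid=128, n_sent=3, n_aspect=10):
        super().__init__()
        self.invariant = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.specific = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.sent_head = nn.Linear(hid, n_sent)
        self.aspect_head = nn.Linear(hid, n_aspect)

    def forward(self, x):
        h_inv, h_spec = self.invariant(x), self.specific(x)
        return self.sent_head(h_inv), self.aspect_head(h_spec), h_inv, h_spec

def orthogonality_penalty(h_inv, h_spec):
    # Push the two feature spaces apart: squared correlation between branches.
    return (h_inv.t() @ h_spec).pow(2).mean()

model = TwoBranchModel()
x = torch.randn(8, 300)                               # e.g. averaged word embeddings
sent_logits, aspect_logits, h_inv, h_spec = model(x)
loss = (nn.functional.cross_entropy(sent_logits, torch.randint(0, 3, (8,)))
        + nn.functional.binary_cross_entropy_with_logits(aspect_logits, torch.rand(8, 10).round())
        + 0.1 * orthogonality_penalty(h_inv, h_spec))
```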

MOBA: A multi-objective bounded-abstention model for two-class cost-sensitive problems

Title MOBA: A multi-objective bounded-abstention model for two-class cost-sensitive problems
Authors Hongjiao Guan
Abstract Abstaining classifiers have been widely used in cost-sensitive applications to avoid ambiguous classification and reduce the cost of misclassification. Previous abstaining classification models rely on cost information, such as a cost matrix or cost ratio. However, it is difficult to obtain or estimate costs in practical applications. Furthermore, these abstention models are typically restricted to a single optimization metric, which may not be the expected indicator when evaluating classification performance. To overcome these problems, a multi-objective bounded-abstention (MOBA) model is proposed to optimize essential metrics. Specifically, the MOBA model minimizes the error rate of each class under class-dependent abstention constraints. The MOBA model is then solved using the non-dominated sorting genetic algorithm II (NSGA-II), a popular evolutionary multi-objective optimization algorithm. A set of Pareto-optimal solutions is generated, and the best one can be selected according to the available conditions (whether costs are known) or performance demands (e.g., high accuracy or F-measure). Hence, the MOBA model is robust to variations in conditions and requirements. Compared to state-of-the-art abstention models, MOBA achieves lower expected costs when cost information is considered, and better performance-abstention trade-offs when it is not.
Tasks
Published 2019-05-17
URL https://arxiv.org/abs/1905.07297v1
PDF https://arxiv.org/pdf/1905.07297v1.pdf
PWC https://paperswithcode.com/paper/moba-a-multi-objective-bounded-abstention
Repo
Framework
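
To make the two objectives and the abstention constraint tangible, here is a small NumPy sketch that evaluates per-class error rates for a reject band on toy scores and keeps the non-dominated threshold pairs. The paper searches this space with NSGA-II; the plain grid sweep below is only a stand-in for that evolutionary step, and the data is synthetic.

```python
import numpy as np

def objectives(scores, y, t_low, t_high, max_abstain=0.3):
    """Per-class error rates for an abstaining classifier that rejects inputs
    whose positive-class score falls inside the band [t_low, t_high]."""
    predict_pos = scores >= t_high
    predict_neg = scores <= t_low
    abstain = ~(predict_pos | predict_neg)
    err_pos = np.mean(predict_neg[y == 1])              # positives called negative
    err_neg = np.mean(predict_pos[y == 0])              # negatives called positive
    feasible = abstain.mean() <= max_abstain             # bounded-abstention constraint
    return err_pos, err_neg, feasible

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=500)
scores = np.clip(0.5 * y + rng.normal(0, 0.3, size=500), 0, 1)   # toy classifier scores

candidates = [(lo, hi) for lo in np.linspace(0.1, 0.5, 9)
              for hi in np.linspace(0.5, 0.9, 9) if lo < hi]
front = []                                               # non-dominated (err_pos, err_neg) pairs
for lo, hi in candidates:
    e1, e0, ok = objectives(scores, y, lo, hi)
    if ok and not any(f1 <= e1 and f0 <= e0 for f1, f0, _ in front):
        front = [f for f in front if not (e1 <= f[0] and e0 <= f[1])] + [(e1, e0, (lo, hi))]
print(sorted(front, key=lambda f: f[0]))
```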

Passive TCP Identification for Wired and Wireless Networks: A Long-Short Term Memory Approach

Title Passive TCP Identification for Wired and Wireless Networks: A Long-Short Term Memory Approach
Authors Xiaoyu Chen, Shugong Xu, Xudong Chen, Shan Cao, Shunqing Zhang, Yanzan Sun
Abstract Transmission control protocol (TCP) congestion control is one of the key techniques to improve network performance. TCP congestion control algorithm identification (TCP identification) can be used to significantly improve network efficiency. Existing TCP identification methods can only be applied to a limited number of TCP congestion control algorithms and focus on wired networks. In this paper, we propose a machine learning based passive TCP identification method for wired and wireless networks. After comparing three typical machine learning models, we conclude that a 4-layer Long Short-Term Memory (LSTM) model achieves the best identification accuracy. Our approach achieves better than 98% accuracy in wired and wireless networks and works for newly proposed TCP congestion control algorithms.
Tasks
Published 2019-04-09
URL http://arxiv.org/abs/1904.04430v1
PDF http://arxiv.org/pdf/1904.04430v1.pdf
PWC https://paperswithcode.com/paper/passive-tcp-identification-for-wired-and
Repo
Framework
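
A brief PyTorch sketch of a 4-layer LSTM classifier over per-flow time series, in the spirit of the model described above; the feature count, hidden size, and number of candidate congestion-control algorithms are assumptions.

```python
import torch
import torch.nn as nn

class TcpIdentifier(nn.Module):
    """Sketch of a 4-layer LSTM classifier over per-interval flow statistics
    (the feature set and dimensions here are assumptions, not the paper's)."""
    def __init__(self, n_features=8, hidden=64, n_algorithms=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=4, batch_first=True)
        self.head = nn.Linear(hidden, n_algorithms)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # classify from the last time step

model = TcpIdentifier()
logits = model(torch.randn(16, 50, 8))    # 16 flows, 50 time steps each
loss = nn.functional.cross_entropy(logits, torch.randint(0, 5, (16,)))
```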

Elastic registration based on compliance analysis and biomechanical graph matching

Title Elastic registration based on compliance analysis and biomechanical graph matching
Authors Jaime Garcia Guevara, Igor Peterlik, Marie-Odile Berger, Stéphane Cotin
Abstract An automatic elastic registration method suited for vascularized organs is proposed. The vasculature in both the preoperative and intra-operative images is represented as a graph. A typical application of this method is the fusion of pre-operative information onto the organ during surgery, to compensate for the limited details provided by the intra-operative imaging modality (e.g. CBCT) and to cope with changes in the shape of the organ. Due to differences in image modalities and organ deformation, each graph has a different topology and shape. The Adaptive Compliance Graph Matching (ACGM) method presented does not require any manual initialization, handles intra-operative nonrigid deformations of up to 65 mm, and computes a complete displacement field over the organ from only the matched vasculature. ACGM improves on the previous Biomechanical Graph Matching (BGM) method because it uses an efficient biomechanical vascularized liver model to compute the organ's transformation and the compliance of the vessel bifurcations. This makes it possible to efficiently find the best graph matches with a novel compliance-based adaptive search. These contributions are evaluated on ten realistic synthetic datasets and two automatically segmented real porcine datasets. ACGM obtains a better target registration error (TRE) than BGM, with an average TRE on the real datasets of 4.2 mm compared to 6.5 mm. It is also up to one order of magnitude faster, less dependent on the parameters used, and more robust to noise.
Tasks Graph Matching
Published 2019-12-13
URL https://arxiv.org/abs/1912.06353v1
PDF https://arxiv.org/pdf/1912.06353v1.pdf
PWC https://paperswithcode.com/paper/elastic-registration-based-on-compliance
Repo
Framework
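
A rough NumPy/SciPy sketch of the node-matching step on toy bifurcation graphs: nodes are matched by a cost that mixes geometric distance with a degree-mismatch penalty. ACGM instead scores candidate matches with a biomechanical compliance model and an adaptive search; the plain linear assignment below is only a stand-in, and all coordinates are invented.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Toy bifurcation graphs: node positions (mm) and degrees for the preoperative
# and intra-operative vessel trees (values are made up for illustration).
pre_pos = np.array([[0, 0, 0], [10, 2, 1], [20, -3, 4], [30, 5, 2]], float)
intra_pos = pre_pos + np.array([2.0, -1.0, 0.5]) \
    + np.random.default_rng(0).normal(0, 1.5, pre_pos.shape)
pre_deg = np.array([1, 3, 3, 1])
intra_deg = np.array([1, 3, 3, 1])

# Cost mixes geometric distance with a topology-mismatch term. ACGM replaces
# this with compliance-based scoring from a biomechanical liver model.
cost = cdist(pre_pos, intra_pos) + 5.0 * (pre_deg[:, None] != intra_deg[None, :])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))          # matched bifurcation pairs
```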

Bandit Learning for Diversified Interactive Recommendation

Title Bandit Learning for Diversified Interactive Recommendation
Authors Yong Liu, Yingtai Xiao, Qiong Wu, Chunyan Miao, Juyong Zhang
Abstract Interactive recommender systems, which enable interactions between users and the recommender system, have attracted increasing research attention. Previous methods mainly focus on optimizing recommendation accuracy. However, they usually ignore the diversity of the recommendation results, which often leads to unsatisfying user experiences. In this paper, we propose a novel diversified recommendation model, named Diversified Contextual Combinatorial Bandit (DC$^2$B), for interactive recommendation with users’ implicit feedback. Specifically, DC$^2$B employs a determinantal point process in the recommendation procedure to promote diversity of the recommendation results. To learn the model parameters, a Thompson sampling-type algorithm based on variational Bayesian inference is proposed. In addition, a theoretical regret analysis is provided to guarantee the performance of DC$^2$B. Extensive experiments on real datasets demonstrate the effectiveness of the proposed method.
Tasks Bayesian Inference, Recommendation Systems
Published 2019-07-01
URL https://arxiv.org/abs/1907.01647v1
PDF https://arxiv.org/pdf/1907.01647v1.pdf
PWC https://paperswithcode.com/paper/bandit-learning-for-diversified-interactive
Repo
Framework
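
A compact NumPy sketch of the overall interaction loop: Thompson sampling from a Gaussian posterior over a linear reward model, plus a greedy similarity penalty so each recommended slate stays diverse. DC$^2$B instead promotes diversity with a determinantal point process and learns by variational Bayesian inference; this is only a simplified stand-in with synthetic items and feedback.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_items, slate = 5, 30, 4
items = rng.normal(size=(n_items, d))       # item feature vectors
theta_true = rng.normal(size=d)             # hidden user preference vector

# Conjugate Gaussian posterior for a linear reward model, updated online.
A, b = np.eye(d), np.zeros(d)
for _ in range(200):
    theta = rng.multivariate_normal(np.linalg.solve(A, b), np.linalg.inv(A))  # Thompson sample
    chosen, scores = [], items @ theta
    for _ in range(slate):                  # greedy diversity-aware slate selection
        penalty = np.zeros(n_items)
        for j in chosen:
            penalty += np.abs(items @ items[j])   # discourage items similar to earlier picks
        blocked = np.where(np.isin(np.arange(n_items), chosen), 1e9, 0.0)
        chosen.append(int(np.argmax(scores - 0.3 * penalty - blocked)))
    for i in chosen:                        # observe (implicit) feedback and update posterior
        r = items[i] @ theta_true + rng.normal(0, 0.1)
        A += np.outer(items[i], items[i])
        b += r * items[i]
```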

Interpretable Structure-aware Document Encoders with Hierarchical Attention

Title Interpretable Structure-aware Document Encoders with Hierarchical Attention
Authors Khalil Mrini, Claudiu Musat, Michael Baeriswyl, Martin Jaggi
Abstract We propose a method to create document representations that reflect their internal structure. We modify Tree-LSTMs to hierarchically merge basic elements such as words and sentences into blocks of increasing complexity. Our Structure Tree-LSTM implements a hierarchical attention mechanism over individual components and combinations thereof. We thus emphasize the usefulness of Tree-LSTMs for texts larger than a sentence. We show that structure-aware encoders can be used to improve the performance of document classification. We demonstrate that our method is resilient to changes in the basic building blocks, as it performs well with both sentence and word embeddings. The Structure Tree-LSTM outperforms all the baselines on two datasets by leveraging structural clues. We show our model’s interpretability by visualizing how it distributes attention inside a document. On a third dataset from the medical domain, our model achieves competitive performance with the state of the art. This result shows that the Structure Tree-LSTM can leverage dependency relations other than text structure, such as a set of reports on the same patient.
Tasks Document Classification, Word Embeddings
Published 2019-02-26
URL https://arxiv.org/abs/1902.09713v2
PDF https://arxiv.org/pdf/1902.09713v2.pdf
PWC https://paperswithcode.com/paper/structure-tree-lstm-structure-aware
Repo
Framework
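
A small PyTorch sketch of the hierarchical idea: attention-pool word vectors into sentence vectors, then attention-pool sentences into a document vector. The real model composes nodes with learned Tree-LSTM cells rather than the plain attention averages used here, and all sizes below are assumptions.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Attention-weighted average of child representations."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, children):                 # (n_children, dim)
        weights = torch.softmax(self.score(children), dim=0)
        return (weights * children).sum(dim=0)

# Hierarchical composition: words -> sentence vectors -> document vector.
# (A simplified stand-in for the Structure Tree-LSTM's learned merge cells.)
dim = 64
word_pool, sent_pool = AttentionPool(dim), AttentionPool(dim)
doc = [torch.randn(7, dim), torch.randn(12, dim), torch.randn(5, dim)]  # 3 sentences of word embeddings
sentence_vecs = torch.stack([word_pool(words) for words in doc])
doc_vec = sent_pool(sentence_vecs)            # document representation
logits = nn.Linear(dim, 4)(doc_vec)           # e.g. 4 document classes
```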

Dynamic Graph Representation for Partially Occluded Biometrics

Title Dynamic Graph Representation for Partially Occluded Biometrics
Authors Min Ren, Yunlong Wang, Zhenan Sun, Tieniu Tan
Abstract The generalization ability of convolutional neural networks (CNNs) for biometrics drops greatly due to the adverse effects of various occlusions. To this end, we propose a novel unified framework that integrates the merits of both CNNs and graphical models to learn dynamic graph representations for occlusion problems in biometrics, called Dynamic Graph Representation (DGR). Convolutional features over certain regions are re-crafted by a graph generator to establish the connections among the spatial parts of the biometric sample and to build Feature Graphs based on these node representations. Each node of a Feature Graph corresponds to a specific part of the input image, and the edges express the spatial relationships between parts. By analyzing the similarities between the nodes, the framework is able to adaptively remove the nodes representing the occluded parts. During dynamic graph matching, we propose a novel strategy to measure the distances of both nodes and adjacency matrices. In this way, the proposed method is more convincing than CNN-based methods because the dynamic graph method implies a more illustrative and reasonable inference of the biometric decision. Experiments conducted on iris and face datasets demonstrate the superiority of the proposed framework, which boosts the accuracy of occluded biometrics recognition by a large margin compared with baseline methods.
Tasks Graph Matching
Published 2019-12-01
URL https://arxiv.org/abs/1912.00377v1
PDF https://arxiv.org/pdf/1912.00377v1.pdf
PWC https://paperswithcode.com/paper/dynamic-graph-representation-for-partially
Repo
Framework
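
A toy PyTorch sketch of the graph-building step: each spatial position of a CNN feature map becomes a node, cosine similarities form the edges, and weakly connected nodes are dropped as a crude proxy for occluded parts. The paper's node-removal criterion, graph generator, and matching distance are more elaborate; this only illustrates the data structure.

```python
import torch

def build_dynamic_graph(feat_map, keep_ratio=0.75):
    """Sketch: turn a CNN feature map into a node-feature matrix and adjacency
    matrix, dropping the least-connected nodes (a crude occlusion proxy)."""
    c, h, w = feat_map.shape
    nodes = feat_map.reshape(c, h * w).t()                     # (h*w, c) node features
    nodes = torch.nn.functional.normalize(nodes, dim=1)
    adj = nodes @ nodes.t()                                    # cosine-similarity edges
    connectivity = adj.sum(dim=1)
    k = int(keep_ratio * nodes.shape[0])
    keep = connectivity.topk(k).indices                        # discard weakly connected nodes
    return nodes[keep], adj[keep][:, keep]

nodes, adj = build_dynamic_graph(torch.randn(256, 8, 8))
print(nodes.shape, adj.shape)                                  # (48, 256), (48, 48)
```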

Ensuring Responsible Outcomes from Technology

Title Ensuring Responsible Outcomes from Technology
Authors Aaditeshwar Seth
Abstract We attempt to make two arguments in this essay. First, through a case study of a mobile phone based voice-media service we have been running in rural central India for more than six years, we describe several implementation complexities we had to navigate towards realizing our intended vision of bringing social development through technology. Most of these complexities arose in the interface of our technology with society, and we argue that even other technology providers can create similar processes to manage this socio-technological interface and ensure intended outcomes from their technology use. We then build our second argument about how to ensure that the organizations behind both market driven technologies and those technologies that are adopted by the state, pay due attention towards responsibly managing the socio-technological interface of their innovations. We advocate for the technology engineers and researchers who work within these organizations, to take up the responsibility and ensure that their labour leads to making the world a better place especially for the poor and marginalized. We outline possible governance structures that can give more voice to the technology developers to push their organizations towards ensuring that responsible outcomes emerge from their technology. We note that the examples we use to build our arguments are limited to contemporary information and communication technology (ICT) platforms used directly by end-users to share content with one another, and hence our argument may not generalize to other ICTs in a straightforward manner.
Tasks
Published 2019-07-07
URL https://arxiv.org/abs/1907.03263v1
PDF https://arxiv.org/pdf/1907.03263v1.pdf
PWC https://paperswithcode.com/paper/ensuring-responsible-outcomes-from-technology
Repo
Framework

Learning Sparse Representations Incrementally in Deep Reinforcement Learning

Title Learning Sparse Representations Incrementally in Deep Reinforcement Learning
Authors J. Fernando Hernandez-Garcia, Richard S. Sutton
Abstract Sparse representations have been shown to be useful in deep reinforcement learning for mitigating catastrophic interference and improving the performance of agents in terms of cumulative reward. Previous results were based on a two-step process where the representation was learned offline and the action-value function was learned online afterwards. In this paper, we investigate whether it is possible to learn a sparse representation and the action-value function simultaneously and incrementally. We investigate this question by employing several regularization techniques and observing how they affect the sparsity of the representation learned by a DQN agent in two different benchmark domains. Our results show that, with appropriate regularization, it is possible to increase the sparsity of the representations learned by DQN agents. Moreover, we found that learning sparse representations also resulted in improved performance in terms of cumulative reward. Finally, we found that the performance of the agents that learned a sparse representation was more robust to the size of the experience replay buffer. This last finding supports the long-standing hypothesis that the overlap in representations learned by deep neural networks is the leading cause of catastrophic interference.
Tasks
Published 2019-12-09
URL https://arxiv.org/abs/1912.04002v1
PDF https://arxiv.org/pdf/1912.04002v1.pdf
PWC https://paperswithcode.com/paper/learning-sparse-representations-incrementally
Repo
Framework
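
A short PyTorch sketch of one of the regularization strategies the paper investigates: adding an activation penalty (here a plain L1 term) on the representation layer to the standard DQN TD loss. The network sizes, penalty weight, and choice of regularizer are assumptions; the paper compares several such techniques.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, obs_dim=4, n_actions=2, hid=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hid), nn.ReLU(),
                                  nn.Linear(hid, hid), nn.ReLU())
        self.head = nn.Linear(hid, n_actions)

    def forward(self, obs):
        phi = self.body(obs)               # representation whose sparsity is regularized
        return self.head(phi), phi

q_net, target_net = QNetwork(), QNetwork()
obs, actions = torch.randn(32, 4), torch.randint(0, 2, (32, 1))
rewards, next_obs = torch.randn(32, 1), torch.randn(32, 4)

q_values, phi = q_net(obs)
with torch.no_grad():
    next_q, _ = target_net(next_obs)
    target = rewards + 0.99 * next_q.max(dim=1, keepdim=True).values
td_loss = nn.functional.smooth_l1_loss(q_values.gather(1, actions), target)
loss = td_loss + 1e-3 * phi.abs().mean()   # L1 activation penalty encourages sparsity
```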

Toward a Knowledge-based Personalised Recommender System for Mobile App Development

Title Toward a Knowledge-based Personalised Recommender System for Mobile App Development
Authors Bilal Abu-Salih
Abstract Over the last few years, the arena of mobile application development has expanded considerably beyond the balance of the world's software markets. With the growing number of mobile software companies and the mounting sophistication of smartphone technology, developers have been building several categories of applications on dissimilar platforms. However, developers confront several challenges through the implementation of mobile application projects. In particular, there is a lack of consolidated systems that are competent to provide developers with personalised services promptly and efficiently. Hence, it is essential to develop tailored systems which can recommend appropriate tools, IDEs, platforms, software components and other correlated artifacts to mobile application developers. This paper proposes a new recommender system framework comprising a fortified set of techniques that are designed to provide mobile app developers with a distinctive platform to browse and search for personalised artifacts. The proposed system makes use of ontology and semantic web technology as well as machine learning techniques. In particular, the new RS framework comprises the following components: (i) a domain knowledge inference module, including various semantic web technologies and lightweight ontologies; (ii) profiling and preferencing, a newly proposed time-aware multidimensional user modelling approach; (iii) query expansion, to improve and enhance the retrieved results by semantically augmenting users' queries; and (iv) recommendation and information filtration, to make use of the aforementioned components to provide personalised services to the designated users and to answer a user's query with the minimum of mismatches.
Tasks Recommendation Systems
Published 2019-09-09
URL https://arxiv.org/abs/1909.03733v1
PDF https://arxiv.org/pdf/1909.03733v1.pdf
PWC https://paperswithcode.com/paper/toward-a-knowledge-based-personalised
Repo
Framework
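
A very small Python sketch of the query-expansion and filtering components only; the "ontology", tags, and artifact names below are invented for illustration, and the real framework relies on semantic web technologies and a richer time-aware user model.

```python
# Hypothetical domain knowledge: terms mapped to semantically related terms.
ontology = {
    "android": {"kotlin", "gradle", "jetpack"},
    "ios": {"swift", "xcode"},
    "testing": {"espresso", "xctest"},
}
artifacts = [
    {"name": "Android Studio", "tags": {"android", "gradle"}},
    {"name": "Xcode", "tags": {"ios", "swift"}},
    {"name": "Espresso", "tags": {"android", "testing", "espresso"}},
]

def expand_query(terms):
    expanded = set(terms)
    for t in terms:
        expanded |= ontology.get(t, set())        # semantically augment the query
    return expanded

def recommend(terms, user_prefs):
    query = expand_query(terms) | user_prefs      # blend query with the user profile
    scored = [(len(query & a["tags"]), a["name"]) for a in artifacts]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(recommend({"android"}, user_prefs={"testing"}))   # ['Espresso', 'Android Studio']
```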

Bayesian Inference for Large Scale Image Classification

Title Bayesian Inference for Large Scale Image Classification
Authors Jonathan Heek, Nal Kalchbrenner
Abstract Bayesian inference promises to ground and improve the performance of deep neural networks. It promises to be robust to overfitting, to simplify the training procedure and the space of hyperparameters, and to provide a calibrated measure of uncertainty that can enhance decision making, agent exploration and prediction fairness. Markov Chain Monte Carlo (MCMC) methods enable Bayesian inference by generating samples from the posterior distribution over model parameters. Despite the theoretical advantages of Bayesian inference and the similarity between MCMC and optimization methods, the performance of sampling methods has so far lagged behind optimization methods for large scale deep learning tasks. We aim to fill this gap and introduce ATMC, an adaptive noise MCMC algorithm that estimates and is able to sample from the posterior of a neural network. ATMC dynamically adjusts the amount of momentum and noise applied to each parameter update in order to compensate for the use of stochastic gradients. We use a ResNet architecture without batch normalization to test ATMC on the Cifar10 benchmark and the large scale ImageNet benchmark and show that, despite the absence of batch normalization, ATMC outperforms a strong optimization baseline in terms of both classification accuracy and test log-likelihood. We show that ATMC is intrinsically robust to overfitting on the training data and that ATMC provides a better calibrated measure of uncertainty compared to the optimization baseline.
Tasks Bayesian Inference, Decision Making, Image Classification
Published 2019-08-09
URL https://arxiv.org/abs/1908.03491v1
PDF https://arxiv.org/pdf/1908.03491v1.pdf
PWC https://paperswithcode.com/paper/bayesian-inference-for-large-scale-image
Repo
Framework
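
For a concrete sense of the SG-MCMC family that ATMC belongs to, here is a minimal sketch of stochastic gradient Langevin dynamics on a toy linear model. ATMC's adaptive per-parameter noise and momentum are not reproduced here, and the step size and model are arbitrary.

```python
import torch

def sgld_step(params, grad_log_post, lr=1e-4):
    """One step of stochastic gradient Langevin dynamics, a simple member of
    the SG-MCMC family (ATMC's adaptive noise/momentum are not shown)."""
    for p, g in zip(params, grad_log_post):
        noise = torch.randn_like(p) * (2 * lr) ** 0.5
        p.add_(lr * g + noise)          # ascend the log posterior + injected noise

# Usage sketch: sample from the posterior over a linear model's weights.
w = torch.zeros(3)
X, y = torch.randn(100, 3), torch.randn(100)
samples = []
for step in range(1000):
    resid = y - X @ w
    grad = X.t() @ resid - w            # Gaussian likelihood + N(0, 1) prior, up to scaling
    sgld_step([w], [grad])
    if step % 10 == 0:
        samples.append(w.clone())       # posterior samples for uncertainty estimates
```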