Paper Group ANR 882
Effective Strategies for Using Hashtags in Online Communication. Decision-making and Fuzzy Temporal Logic. Phonetic-enriched Text Representation for Chinese Sentiment Analysis with Reinforcement Learning. The Oracle of DLphi. Large Scale Model Predictive Control with Neural Networks and Primal Active Sets. Object Detection and 3D Estimation via an …
Effective Strategies for Using Hashtags in Online Communication
Title | Effective Strategies for Using Hashtags in Online Communication |
Authors | Solomiia Fedushko, Sofia Kolos |
Abstract | The features of hashtag use among students of Lviv were investigated. A list of optimal strategies for using these communicative tools for personal branding is determined, and an effective strategy for using hashtags in online communication for personal and company branding is considered. The effectiveness of hashtags related to #education is calculated, and reports on the use of the hashtag #education in social networks are presented. |
Tasks | |
Published | 2019-09-03 |
URL | https://arxiv.org/abs/1909.01474v1 |
PDF | https://arxiv.org/pdf/1909.01474v1.pdf |
PWC | https://paperswithcode.com/paper/effective-strategies-for-using-hashtags-in |
Repo | |
Framework | |
Decision-making and Fuzzy Temporal Logic
Title | Decision-making and Fuzzy Temporal Logic |
Authors | José Cláudio do Nascimento |
Abstract | This paper shows that fuzzy temporal logic can model figures of thought to describe decision-making behaviors. To exemplify this, some economic behaviors observed experimentally were modeled from problems of choice involving time, uncertainty and fuzziness. Regarding time preference, it is noted that subadditive discounting is mandatory in positive-reward situations and, consequently, results in the magnitude effect and the time effect, where the latter exhibits stronger discounting for earlier delay periods (such as one hour or one day) but weaker discounting for longer delay periods (for instance, six months, one year, ten years). In addition, it is possible to explain preference reversal (the change of preference when two rewards proposed on different dates are shifted in time). Regarding Prospect Theory, it is shown that risk seeking and risk aversion are magnitude dependent, and that risk seeking may disappear when the values to be lost are very high. |
Tasks | Decision Making |
Published | 2019-01-07 |
URL | http://arxiv.org/abs/1901.01970v2 |
PDF | http://arxiv.org/pdf/1901.01970v2.pdf |
PWC | https://paperswithcode.com/paper/decision-making-and-fuzzy-temporal-logic |
Repo | |
Framework | |
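The subadditive discounting described in the abstract above behaves much like hyperbolic discounting: steep for short delays, shallow for long ones, which is what produces preference reversal. The snippet below is a minimal illustration using a generic hyperbolic discount function; the function form, the parameter `k`, and the reward values are illustrative assumptions, not the paper's fuzzy temporal logic model.

```python
# Illustrative hyperbolic discounting sketch, not the paper's fuzzy temporal logic.
def discounted_value(reward, delay, k=0.5):
    """Hyperbolic discount: strong discounting for short delays, weak for long ones."""
    return reward / (1.0 + k * delay)

small_soon = (50.0, 1.0)   # e.g. 50 units in 1 month  (hypothetical values)
large_late = (70.0, 6.0)   # e.g. 70 units in 6 months (hypothetical values)

def preferred(offset):
    """Which option wins when both delays are pushed `offset` months into the future?"""
    v_ss = discounted_value(small_soon[0], small_soon[1] + offset)
    v_ll = discounted_value(large_late[0], large_late[1] + offset)
    return "smaller-sooner" if v_ss > v_ll else "larger-later"

print(preferred(0.0))    # near the present the smaller-sooner reward is preferred
print(preferred(12.0))   # shifted a year out, the preference reverses to larger-later
```

With these toy numbers the smaller-sooner reward wins near the present but loses once both rewards are pushed a year into the future, which is exactly the reversal the abstract refers to.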
Phonetic-enriched Text Representation for Chinese Sentiment Analysis with Reinforcement Learning
Title | Phonetic-enriched Text Representation for Chinese Sentiment Analysis with Reinforcement Learning |
Authors | Haiyun Peng, Yukun Ma, Soujanya Poria, Yang Li, Erik Cambria |
Abstract | The Chinese pronunciation system offers two characteristics that distinguish it from other languages: deep phonemic orthography and intonation variations. We are the first to argue that these two important properties can play a major role in Chinese sentiment analysis. In particular, we propose two effective features to encode phonetic information. Next, we develop a Disambiguate Intonation for Sentiment Analysis (DISA) network based on reinforcement learning, which disambiguates the intonation of each Chinese character (pinyin) so that a precise phonetic representation of Chinese is learned. Furthermore, we fuse phonetic features with textual and visual features in order to mimic the way humans read and understand Chinese text. Experimental results on five different Chinese sentiment analysis datasets show that the inclusion of phonetic features significantly and consistently improves the performance of textual and visual representations and outshines state-of-the-art Chinese character-level representations. |
Tasks | Sentiment Analysis |
Published | 2019-01-23 |
URL | http://arxiv.org/abs/1901.07880v1 |
PDF | http://arxiv.org/pdf/1901.07880v1.pdf |
PWC | https://paperswithcode.com/paper/phonetic-enriched-text-representation-for |
Repo | |
Framework | |
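The abstract above fuses phonetic (pinyin) features with textual features before classification. The sketch below only shows the general fusion-by-concatenation idea with made-up vocabulary sizes and dimensions; the actual DISA network additionally uses reinforcement learning to select one intonation per character, which is not reproduced here.

```python
import torch
import torch.nn as nn

class PhoneticTextFusion(nn.Module):
    """Toy fusion of character and pinyin embeddings; all sizes are illustrative."""

    def __init__(self, vocab_size=5000, pinyin_size=1500, embed_dim=128, num_classes=2):
        super().__init__()
        self.char_embed = nn.Embedding(vocab_size, embed_dim)
        self.pinyin_embed = nn.Embedding(pinyin_size, embed_dim)
        self.encoder = nn.LSTM(2 * embed_dim, embed_dim, batch_first=True)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, char_ids, pinyin_ids):
        # Concatenate textual and phonetic features per token, then encode the sequence.
        fused = torch.cat([self.char_embed(char_ids), self.pinyin_embed(pinyin_ids)], dim=-1)
        _, (hidden, _) = self.encoder(fused)
        return self.classifier(hidden[-1])

model = PhoneticTextFusion()
logits = model(torch.randint(0, 5000, (4, 20)), torch.randint(0, 1500, (4, 20)))
print(logits.shape)  # torch.Size([4, 2])
```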
The Oracle of DLphi
Title | The Oracle of DLphi |
Authors | Dominik Alfke, Weston Baines, Jan Blechschmidt, Mauricio J. del Razo Sarmina, Amnon Drory, Dennis Elbrächter, Nando Farchmin, Matteo Gambara, Silke Glas, Philipp Grohs, Peter Hinz, Danijel Kivaranovic, Christian Kümmerle, Gitta Kutyniok, Sebastian Lunz, Jan Macdonald, Ryan Malthaner, Gregory Naisat, Ariel Neufeld, Philipp Christian Petersen, Rafael Reisenhofer, Jun-Da Sheng, Laura Thesing, Philipp Trunschke, Johannes von Lindheim, David Weber, Melanie Weber |
Abstract | We present a novel technique based on deep learning and set theory which yields exceptional classification and prediction results. Having access to a sufficiently large amount of labelled training data, our methodology is capable of predicting the labels of the test data almost always even if the training data is entirely unrelated to the test data. In other words, we prove in a specific setting that as long as one has access to enough data points, the quality of the data is irrelevant. |
Tasks | |
Published | 2019-01-17 |
URL | http://arxiv.org/abs/1901.05744v2 |
PDF | http://arxiv.org/pdf/1901.05744v2.pdf |
PWC | https://paperswithcode.com/paper/the-oracle-of-dlphi |
Repo | |
Framework | |
Large Scale Model Predictive Control with Neural Networks and Primal Active Sets
Title | Large Scale Model Predictive Control with Neural Networks and Primal Active Sets |
Authors | Steven W. Chen, Tianyu Wang, Nikolay Atanasov, Vijay Kumar, Manfred Morari |
Abstract | This work presents an explicit-implicit procedure that combines an offline trained neural network with an online primal active set solver to compute a model predictive control (MPC) law with guarantees on recursive feasibility and asymptotic stability. The neural network reduces the suboptimality of the controller and accelerates online inference for large systems, while the primal active set method provides corrective steps to ensure feasibility and stability. We highlight the connections between MPC and neural networks and introduce a primal-dual loss function to train a neural network to initialize the online controller. We then demonstrate online computation of the primal feasibility and suboptimality criteria to provide the desired guarantees. Next, we use this neural network and these criteria to accelerate an online primal active set method through warm starts and early termination. Finally, we present a data set generation algorithm that is critical for successfully applying our approach to high dimensional systems. The primary motivation is developing an algorithm that scales to systems that are challenging for current approaches, with state and input dimensions as well as planning horizons on the order of tens to hundreds. |
Tasks | |
Published | 2019-10-23 |
URL | https://arxiv.org/abs/1910.10835v1 |
PDF | https://arxiv.org/pdf/1910.10835v1.pdf |
PWC | https://paperswithcode.com/paper/large-scale-model-predictive-control-with |
Repo | |
Framework | |
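A hedged sketch of the explicit-implicit control logic described above: query the offline-trained network, verify primal feasibility and suboptimality online, and only fall back to the warm-started primal active set solver when the checks fail. The names `net`, `qp.residuals`, and `qp.solve` are hypothetical placeholders standing in for the paper's components, not a real API.

```python
def mpc_step(x, net, qp, eps_feas=1e-6, eps_subopt=1e-2):
    """One MPC step: try the learned warm start, fall back to the active set solver.

    `net`, `qp.residuals`, and `qp.solve` are hypothetical interfaces sketching the
    explicit-implicit structure described in the abstract; they are not a real API.
    """
    u_candidate = net(x)                               # offline-trained NN prediction
    primal_res, subopt = qp.residuals(x, u_candidate)  # online feasibility / suboptimality check
    if primal_res <= eps_feas and subopt <= eps_subopt:
        return u_candidate                             # certified: use the learned (explicit) law
    # Otherwise run the implicit solver, warm-started at the network output,
    # terminating early once the same criteria are satisfied.
    return qp.solve(x, warm_start=u_candidate, tol_feas=eps_feas, tol_subopt=eps_subopt)
```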
Object Detection and 3D Estimation via an FMCW Radar Using a Fully Convolutional Network
Title | Object Detection and 3D Estimation via an FMCW Radar Using a Fully Convolutional Network |
Authors | Guoqiang Zhang, Haopeng Li, Fabian Wenger |
Abstract | This paper considers object detection and 3D estimation using an FMCW radar. A state-of-the-art deep learning framework is employed instead of traditional signal processing. In preparing the radar training data, the ground truth of an object's orientation in 3D space is obtained by analysing images from a camera coupled to the radar device. To ensure successful training of a fully convolutional network (FCN), we propose a normalization method that we find essential to apply to the radar signal before feeding it into the neural network. After proper training, the system first detects the presence of an object in the environment; if one is detected, it then estimates the object's 3D position. Experimental results show that the proposed system can be successfully trained and employed for detecting a car and estimating its 3D position in a noisy environment. |
Tasks | Object Detection |
Published | 2019-02-04 |
URL | http://arxiv.org/abs/1902.05394v1 |
PDF | http://arxiv.org/pdf/1902.05394v1.pdf |
PWC | https://paperswithcode.com/paper/object-detection-and-3d-estimation-via-an |
Repo | |
Framework | |
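The abstract stresses that a normalization step applied to the radar signal before the FCN is essential, without the details being repeated here. The sketch below shows one common choice, per-sample log-magnitude compression followed by standardization of a range-Doppler map; this is an assumption for illustration, not necessarily the paper's normalization.

```python
import numpy as np

def normalize_radar_frame(frame: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Per-sample normalization of a complex range-Doppler map (illustrative only)."""
    magnitude_db = 20.0 * np.log10(np.abs(frame) + eps)   # compress the dynamic range
    return (magnitude_db - magnitude_db.mean()) / (magnitude_db.std() + eps)

frame = np.random.randn(256, 64) + 1j * np.random.randn(256, 64)  # fake FMCW frame
print(normalize_radar_frame(frame).shape)  # (256, 64), zero mean, unit variance
```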
Object Discovery with a Copy-Pasting GAN
Title | Object Discovery with a Copy-Pasting GAN |
Authors | Relja Arandjelović, Andrew Zisserman |
Abstract | We tackle the problem of object discovery, where objects are segmented for a given input image, and the system is trained without using any direct supervision whatsoever. A novel copy-pasting GAN framework is proposed, where the generator learns to discover an object in one image by compositing it into another image such that the discriminator cannot tell that the resulting image is fake. After carefully addressing subtle issues, such as preventing the generator from 'cheating', this game results in the generator learning to select objects, as copy-pasting objects is most likely to fool the discriminator. The system is shown to work well on four very different datasets, including large object appearance variations in challenging cluttered backgrounds. |
Tasks | |
Published | 2019-05-27 |
URL | https://arxiv.org/abs/1905.11369v1 |
PDF | https://arxiv.org/pdf/1905.11369v1.pdf |
PWC | https://paperswithcode.com/paper/object-discovery-with-a-copy-pasting-gan |
Repo | |
Framework | |
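The compositing operation at the heart of the copy-pasting generator can be written as a mask-weighted blend of a source and a destination image, which the discriminator is then trained to flag as fake. A minimal sketch of that blend is shown below; the mask-predicting generator, the anti-'cheating' measures, and the adversarial training loop are omitted.

```python
import torch

def copy_paste(src, dst, mask):
    """Composite masked pixels from `src` onto `dst`.

    src, dst: (B, 3, H, W) images; mask: (B, 1, H, W) values in [0, 1],
    e.g. the sigmoid output of a mask-predicting generator.
    """
    return mask * src + (1.0 - mask) * dst

src = torch.rand(2, 3, 64, 64)
dst = torch.rand(2, 3, 64, 64)
mask = torch.rand(2, 1, 64, 64)     # stand-in for a generator-predicted object mask
fake = copy_paste(src, dst, mask)   # the discriminator is trained to call this composite fake
print(fake.shape)                   # torch.Size([2, 3, 64, 64])
```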
A Self-Adaptive Synthetic Over-Sampling Technique for Imbalanced Classification
Title | A Self-Adaptive Synthetic Over-Sampling Technique for Imbalanced Classification |
Authors | Xiaowei Gu, Plamen P Angelov, Eduardo Almeida Soares |
Abstract | Traditionally, in supervised machine learning, a significant part of the available data (usually 50% to 80%) is used for training and the rest for validation. In many problems, however, the data is highly imbalanced across classes or does not have good coverage of the feasible data space, which, in turn, creates problems in the validation and usage phases. In this paper, we propose a technique for synthesising feasible and likely data to help balance the classes as well as to boost performance, both overall and in terms of the confusion matrix. The idea, in a nutshell, is to synthesise data samples in close vicinity to the actual data samples, specifically for the less represented (minority) classes. This also has implications for the so-called fairness of machine learning. Concretely, we propose a specific method for synthesising data in a way that balances the classes and boosts performance, especially for the minority classes. It is generic and can be applied with different base algorithms, e.g. support vector machines, k-nearest neighbours, deep networks, rule-based classifiers, decision trees, etc. The results demonstrate that: i) significantly more balanced (and fair) classification results can be achieved; and ii) the overall performance as well as the per-class performance measured by the confusion matrix can be boosted. In addition, this approach can be very valuable in cases where the amount of available labelled data is small, which is itself one of the problems of contemporary machine learning. |
Tasks | |
Published | 2019-11-25 |
URL | https://arxiv.org/abs/1911.11018v1 |
PDF | https://arxiv.org/pdf/1911.11018v1.pdf |
PWC | https://paperswithcode.com/paper/a-self-adaptive-synthetic-over-sampling |
Repo | |
Framework | |
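The abstract describes synthesising samples in close vicinity to actual minority-class samples. The paper's self-adaptive scheme is not reproduced here; the sketch below uses plain SMOTE-style interpolation between a minority sample and one of its minority-class neighbours, only to illustrate the same "close vicinity" idea.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_like_oversample(X_min, n_new, k=5, seed=0):
    """Generate `n_new` synthetic minority samples by interpolating towards neighbours.

    Plain SMOTE-style interpolation, shown only to illustrate the 'close vicinity'
    idea; it is not the paper's self-adaptive technique.
    """
    rng = np.random.default_rng(seed)
    neighbours = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = neighbours.kneighbors(X_min)         # idx[:, 0] is the point itself
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = idx[i, rng.integers(1, k + 1)]        # a random minority-class neighbour
        lam = rng.random()
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

X_min = np.random.randn(30, 4)                       # toy minority-class samples
print(smote_like_oversample(X_min, n_new=50).shape)  # (50, 4)
```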
Bounds in Query Learning
Title | Bounds in Query Learning |
Authors | Hunter Chase, James Freitag |
Abstract | We introduce new combinatorial quantities for concept classes, and prove lower and upper bounds for learning complexity in several models of query learning in terms of various combinatorial quantities. Our approach is flexible and powerful enough to give new and very short proofs of the efficient learnability of several prominent examples (e.g. regular languages and regular $\omega$-languages), in some cases also producing new bounds on the number of queries. In the setting of equivalence plus membership queries, we give an algorithm which learns a class in polynomially many queries whenever any such algorithm exists. We also study equivalence query learning in a randomized model, producing new bounds on the expected number of queries required to learn an arbitrary concept. Many of the techniques and notions of dimension draw inspiration from or are related to notions from model theory, and these connections are explained. We also use techniques from query learning to mildly improve a result of Laskowski regarding compression schemes. |
Tasks | |
Published | 2019-04-23 |
URL | http://arxiv.org/abs/1904.10122v1 |
PDF | http://arxiv.org/pdf/1904.10122v1.pdf |
PWC | https://paperswithcode.com/paper/bounds-in-query-learning |
Repo | |
Framework | |
The SpectACl of Nonconvex Clustering: A Spectral Approach to Density-Based Clustering
Title | The SpectACl of Nonconvex Clustering: A Spectral Approach to Density-Based Clustering |
Authors | Sibylle Hess, Wouter Duivesteijn, Philipp Honysz, Katharina Morik |
Abstract | When it comes to clustering nonconvex shapes, two paradigms are used to find the most suitable clustering: minimum cut and maximum density. The most popular algorithms incorporating these paradigms are Spectral Clustering and DBSCAN. Both paradigms have their pros and cons. While minimum cut clusterings are sensitive to noise, density-based clusterings have trouble handling clusters with varying densities. In this paper, we propose SpectACl: a method that combines the advantages of both approaches while addressing both of these drawbacks. Our method is as easy to implement as spectral clustering, and is theoretically founded, optimizing a proposed density criterion of clusterings. Through experiments on synthetic and real-world data, we demonstrate that our approach provides robust and reliable clusterings. |
Tasks | |
Published | 2019-07-01 |
URL | https://arxiv.org/abs/1907.00680v1 |
PDF | https://arxiv.org/pdf/1907.00680v1.pdf |
PWC | https://paperswithcode.com/paper/the-spectacl-of-nonconvex-clustering-a |
Repo | |
Framework | |
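SpectACl marries the minimum-cut (spectral) and maximum-density views. The sketch below is not the authors' algorithm; it only illustrates the shared ingredient of spectrally embedding an ε-neighbourhood (density-style) adjacency graph and clustering the embedding, with all parameters chosen for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import KMeans
from sklearn.neighbors import radius_neighbors_graph

def density_spectral_clustering(X, eps=1.0, n_clusters=2):
    """Spectral embedding of an epsilon-neighbourhood graph, then k-means (illustrative)."""
    adjacency = radius_neighbors_graph(X, radius=eps, mode="connectivity", include_self=False)
    L = laplacian(adjacency, normed=True).toarray()
    _, eigvecs = np.linalg.eigh(L)                       # eigenvalues in ascending order
    embedding = eigvecs[:, :n_clusters]                  # smallest eigenvectors
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedding)

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
print(density_spectral_clustering(X)[:10])
```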
Fast Spatio-Temporal Residual Network for Video Super-Resolution
Title | Fast Spatio-Temporal Residual Network for Video Super-Resolution |
Authors | Sheng Li, Fengxiang He, Bo Du, Lefei Zhang, Yonghao Xu, Dacheng Tao |
Abstract | Recently, deep learning based video super-resolution (SR) methods have achieved promising performance. To simultaneously exploit the spatial and temporal information of videos, employing 3-dimensional (3D) convolutions is a natural approach. However, directly utilizing 3D convolutions may lead to excessively high computational complexity, which restricts the depth of video SR models and thus undermines performance. In this paper, we present a novel fast spatio-temporal residual network (FSTRN) to adopt 3D convolutions for the video SR task in order to enhance performance while maintaining a low computational load. Specifically, we propose a fast spatio-temporal residual block (FRB) that divides each 3D filter into the product of two 3D filters of considerably lower dimensions. Furthermore, we design a cross-space residual learning scheme that directly links the low-resolution space and the high-resolution space, which can greatly relieve the computational burden on the feature fusion and up-scaling parts. Extensive evaluations and comparisons on benchmark datasets validate the strengths of the proposed approach and demonstrate that the proposed network significantly outperforms current state-of-the-art methods. |
Tasks | Super-Resolution, Video Super-Resolution |
Published | 2019-04-05 |
URL | http://arxiv.org/abs/1904.02870v1 |
PDF | http://arxiv.org/pdf/1904.02870v1.pdf |
PWC | https://paperswithcode.com/paper/fast-spatio-temporal-residual-network-for |
Repo | |
Framework | |
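The fast spatio-temporal residual block (FRB) factorizes each 3D filter into the product of two lower-dimensional 3D filters. A common realization of this idea, assumed here for illustration, splits a k×k×k convolution into a 1×k×k spatial convolution followed by a k×1×1 temporal one; the exact shapes and the cross-space residual connections of FSTRN may differ.

```python
import torch
import torch.nn as nn

class FactorizedResidualBlock(nn.Module):
    """Sketch of an FRB-style factorized spatio-temporal residual block (illustrative)."""

    def __init__(self, channels=64, k=3):
        super().__init__()
        # One full k x k x k 3D convolution is replaced by the product of a
        # 1 x k x k spatial filter and a k x 1 x 1 temporal filter.
        self.spatial = nn.Conv3d(channels, channels, (1, k, k), padding=(0, k // 2, k // 2))
        self.temporal = nn.Conv3d(channels, channels, (k, 1, 1), padding=(k // 2, 0, 0))
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.temporal(self.act(self.spatial(x)))

x = torch.rand(1, 64, 5, 32, 32)            # (batch, channels, frames, height, width)
print(FactorizedResidualBlock()(x).shape)   # torch.Size([1, 64, 5, 32, 32])
```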
The Potential of Restarts for ProbSAT
Title | The Potential of Restarts for ProbSAT |
Authors | Jan-Hendrik Lorenz, Julian Nickerl |
Abstract | This work analyses the potential of restarts for probSAT, a quite successful algorithm for k-SAT, by estimating its runtime distributions on random 3-SAT instances that are close to the phase transition. We estimate an optimal restart time from empirical data, reaching a potential speedup factor of 1.39. Calculating restart times from fitted probability distributions reduces this factor to a maximum of 1.30. A spin-off result is that the Weibull distribution approximates the runtime distribution well for over 93% of the instances used. A machine learning pipeline is presented to compute a restart time for a fixed-cutoff strategy to exploit this potential. The main components of the pipeline are a random forest for determining the distribution type and a neural network for the distribution's parameters. Using the presented approach, probSAT performs statistically significantly better than with Luby's restart strategy or with no restarts at all. The strategy is particularly advantageous on hard problems. |
Tasks | |
Published | 2019-04-26 |
URL | http://arxiv.org/abs/1904.11757v1 |
PDF | http://arxiv.org/pdf/1904.11757v1.pdf |
PWC | https://paperswithcode.com/paper/the-potential-of-restarts-for-probsat |
Repo | |
Framework | |
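For a fixed-cutoff restart strategy with cutoff $t$ and runtime CDF $F$, the expected total runtime is $E[T_t] = (t - \int_0^t F(s)\,ds)/F(t)$, so the optimal cutoff minimizes this quantity. The sketch below estimates it from an empirical runtime sample and searches over candidate cutoffs; the Weibull fitting and the random-forest/neural-network pipeline from the paper are not reproduced, and the lognormal toy data is purely illustrative.

```python
import numpy as np

def expected_runtime_with_cutoff(runtimes, t):
    """Expected total runtime of a fixed-cutoff restart strategy, from an empirical sample.

    Uses E[T_t] = (t - integral_0^t F(s) ds) / F(t), with F the empirical runtime CDF.
    """
    runtimes = np.asarray(runtimes, dtype=float)
    F_t = np.mean(runtimes <= t)
    if F_t == 0.0:
        return np.inf                                      # no run ever finishes before the cutoff
    integral = np.mean(np.clip(t - runtimes, 0.0, None))   # = integral_0^t F(s) ds under the empirical CDF
    return (t - integral) / F_t

# Toy heavy-tailed runtimes: restarts pay off when a few runs are extremely long.
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=3.0, sigma=1.5, size=10_000)
cutoffs = np.quantile(samples, np.linspace(0.05, 0.99, 50))
best_t = min(cutoffs, key=lambda t: expected_runtime_with_cutoff(samples, t))
print(f"no restarts: {samples.mean():.1f}   "
      f"cutoff {best_t:.1f}: {expected_runtime_with_cutoff(samples, best_t):.1f}")
```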
CESMA: Centralized Expert Supervises Multi-Agents
Title | CESMA: Centralized Expert Supervises Multi-Agents |
Authors | Alex Tong Lin, Mark J. Debord, Katia Estabridis, Gary Hewer, Stanley Osher |
Abstract | We consider the reinforcement learning problem of training multiple agents in order to maximize a shared reward. In this multi-agent system, each agent seeks to maximize the reward while interacting with other agents, and they may or may not be able to communicate. Typically the agents do not have access to other agents' policies, and thus each agent observes a non-stationary and partially-observable environment. In order to obtain agents that act in a decentralized manner, we introduce a novel algorithm under the framework of centralized learning, but decentralized execution. This training framework first obtains solutions to a multi-agent problem with a single centralized joint-space learner. This centralized expert is then used to guide imitation learning for independent decentralized agents. The framework has the flexibility to use any reinforcement learning algorithm to obtain the expert as well as any imitation learning algorithm to obtain the decentralized agents. This is in contrast to other multi-agent learning algorithms that, for example, can require more specific structures. We present some theoretical error bounds for our method, and we show that one can obtain decentralized solutions to a multi-agent problem through imitation learning. |
Tasks | Imitation Learning |
Published | 2019-02-06 |
URL | https://arxiv.org/abs/1902.02311v3 |
PDF | https://arxiv.org/pdf/1902.02311v3.pdf |
PWC | https://paperswithcode.com/paper/cesma-centralized-expert-supervises-multi |
Repo | |
Framework | |
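The distillation step, where decentralized agents imitate the centralized joint-space expert, can be sketched as plain behaviour cloning. In the snippet below `centralized_expert` and `dataset` are hypothetical stand-ins, each agent only sees its own observation slice, and any DAgger-style corrections or the expert's own training are omitted.

```python
import torch
import torch.nn as nn

def distill_decentralized_agents(centralized_expert, dataset, obs_dim, act_dim, n_agents,
                                 epochs=10, lr=1e-3):
    """Behaviour-clone decentralized agents from a centralized expert (illustrative sketch).

    Assumes `centralized_expert(joint_obs)` returns joint actions of shape
    (batch, n_agents, act_dim) and `dataset` yields joint observations of shape
    (batch, n_agents, obs_dim); both are hypothetical stand-ins.
    """
    agents = [nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
              for _ in range(n_agents)]
    optimizers = [torch.optim.Adam(agent.parameters(), lr=lr) for agent in agents]
    for _ in range(epochs):
        for joint_obs in dataset:
            with torch.no_grad():
                expert_actions = centralized_expert(joint_obs)
            for i, (agent, opt) in enumerate(zip(agents, optimizers)):
                # Each decentralized agent is trained on its own observation slice only.
                loss = nn.functional.mse_loss(agent(joint_obs[:, i, :]), expert_actions[:, i, :])
                opt.zero_grad()
                loss.backward()
                opt.step()
    return agents
```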
Learning to Benchmark: Determining Best Achievable Misclassification Error from Training Data
Title | Learning to Benchmark: Determining Best Achievable Misclassification Error from Training Data |
Authors | Morteza Noshad, Li Xu, Alfred Hero |
Abstract | We address the problem of learning to benchmark the best achievable classifier performance. In this problem the objective is to establish statistically consistent estimates of the Bayes misclassification error rate without having to learn a Bayes-optimal classifier. Our learning to benchmark framework improves on previous work on learning bounds on Bayes misclassification rate since it learns the *exact* Bayes error rate instead of a bound on error rate. We propose a benchmark learner based on an ensemble of $\epsilon$-ball estimators and Chebyshev approximation. Under a smoothness assumption on the class densities we show that our estimator achieves an optimal (parametric) mean squared error (MSE) rate of $O(N^{-1})$, where $N$ is the number of samples. Experiments on both simulated and real datasets establish that our proposed benchmark learning algorithm produces estimates of the Bayes error that are more accurate than previous approaches for learning bounds on Bayes error probability. |
Tasks | |
Published | 2019-09-16 |
URL | https://arxiv.org/abs/1909.07192v1 |
PDF | https://arxiv.org/pdf/1909.07192v1.pdf |
PWC | https://paperswithcode.com/paper/learning-to-benchmark-determining-best |
Repo | |
Framework | |
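As a crude point of reference for the benchmark-learning goal above, the classical Cover-Hart result lets the 1-nearest-neighbour error $R$ bracket the Bayes error $R^*$ (asymptotically $R/2 \le R^* \le R$ for binary problems) without training a strong classifier. The sketch below computes that simple baseline; it is not the paper's $\epsilon$-ball/Chebyshev ensemble estimator.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def nn_bayes_error_bracket(X, y, folds=5):
    """Crude Bayes-error bracket from the 1-NN error (Cover-Hart); not the paper's estimator."""
    accuracy = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=folds).mean()
    r = 1.0 - accuracy                 # cross-validated 1-NN error
    return r / 2.0, r                  # asymptotically: R/2 <= Bayes error <= R

# Toy binary problem with overlapping Gaussian classes (the true Bayes error is nonzero).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(1.5, 1.0, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
lo, hi = nn_bayes_error_bracket(X, y)
print(f"Bayes error is roughly between {lo:.3f} and {hi:.3f}")
```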
Identification of Effective Connectivity Subregions
Title | Identification of Effective Connectivity Subregions |
Authors | Ruben Sanchez-Romero, Joseph D. Ramsey, Kun Zhang, Clark Glymour |
Abstract | Standard fMRI connectivity analyses depend on aggregating the time series of individual voxels within regions of interest (ROIs). In certain cases, this spatial aggregation implies a loss of valuable functional and anatomical information about smaller subsets of voxels that drive the ROI-level connectivity. We use two recently published graphical search methods to identify subsets of voxels that are highly responsible for the connectivity between larger ROIs. To illustrate the procedure, we apply both methods to longitudinal high-resolution resting-state fMRI data from regions in the medial temporal lobe of a single individual. Both methods recovered similar subsets of voxels within larger ROIs of entorhinal cortex and hippocampus subfields that also show spatial consistency across different scanning sessions and across hemispheres. In contrast to standard functional connectivity methods, both algorithms applied here are robust against false positive connections produced by common causes and indirect paths (in contrast to Pearson's correlation) and common effect conditioning (in contrast to partial correlation based approaches). These algorithms allow for the identification of subregions of voxels driving the connectivity between regions of interest, recovering valuable anatomical and functional information that is lost when ROIs are aggregated. Both methods are especially suited for voxelwise connectivity research, given their running times and scalability to big data problems. |
Tasks | Time Series |
Published | 2019-08-08 |
URL | https://arxiv.org/abs/1908.03264v1 |
PDF | https://arxiv.org/pdf/1908.03264v1.pdf |
PWC | https://paperswithcode.com/paper/identification-of-effective-connectivity |
Repo | |
Framework | |