Paper Group ANR 302
Why is it Difficult to Detect Sudden and Unexpected Epidemic Outbreaks in Twitter?
Title | Why is it Difficult to Detect Sudden and Unexpected Epidemic Outbreaks in Twitter? |
Authors | Avaré Stewart, Sara Romano, Nattiya Kanhabua, Sergio Di Martino, Wolf Siberski, Antonino Mazzeo, Wolfgang Nejdl, Ernesto Diaz-Aviles |
Abstract | Social media services such as Twitter are a valuable source of information for decision support systems. Many studies have shown that this also holds for the medical domain, where Twitter is considered a viable tool for public health officials to sift through relevant information for the early detection, management, and control of epidemic outbreaks. This is possible due to the inherent capability of social media services to transmit information faster than traditional channels. However, the majority of current studies have limited their scope to the detection of common and seasonal health recurring events (e.g., Influenza-like Illness), partially due to the noisy nature of Twitter data, which makes outbreak detection and management very challenging. Within the European project M-Eco, we developed a Twitter-based Epidemic Intelligence (EI) system, which is designed to also handle a more general class of unexpected and aperiodic outbreaks. In particular, we faced three main research challenges in this endeavor: 1) dynamic classification to manage terminology evolution of Twitter messages, 2) alert generation to produce reliable outbreak alerts analyzing the (noisy) tweet time series, and 3) ranking and recommendation to support domain experts for better assessment of the generated alerts. In this paper, we empirically evaluate our proposed approach to these challenges using real-world outbreak datasets and a large collection of tweets. We validate our solution with domain experts, describe our experiences, and give a more realistic view on the benefits and issues of analyzing social media for public health. |
Tasks | Time Series |
Published | 2016-11-10 |
URL | http://arxiv.org/abs/1611.03426v1 |
http://arxiv.org/pdf/1611.03426v1.pdf | |
PWC | https://paperswithcode.com/paper/why-is-it-difficult-to-detect-sudden-and |
Repo | |
Framework | |
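The alert-generation challenge described in the abstract above can be illustrated with a minimal surveillance baseline: flag any day whose tweet count rises more than a few trailing standard deviations above its recent mean. This is a generic sketch of threshold-based outbreak alerting, not the M-Eco system's actual algorithm; the function name and parameters are illustrative.

```python
import statistics

def alerts(counts, window=7, z=2.0):
    """Flag time steps whose tweet count exceeds the trailing mean by
    more than z trailing standard deviations -- a simple surveillance
    baseline, not the M-Eco system's actual alert generator."""
    out = []
    for t in range(window, len(counts)):
        hist = counts[t - window:t]
        mu = statistics.mean(hist)
        sd = statistics.pstdev(hist) or 1.0   # guard against zero variance
        if counts[t] > mu + z * sd:
            out.append(t)
    return out
```

On a toy series with a sudden spike, e.g. `alerts([5, 6, 5, 4, 6, 5, 5, 30])`, only the spike day is flagged; the noisy-tweet problem the abstract highlights is exactly that real counts are far less well-behaved than this.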
Can Machine Generate Traditional Chinese Poetry? A Feigenbaum Test
Title | Can Machine Generate Traditional Chinese Poetry? A Feigenbaum Test |
Authors | Qixin Wang, Tianyi Luo, Dong Wang |
Abstract | Recent progress in neural learning has demonstrated that machines can do well in regularized tasks, e.g., the game of Go. However, artistic activities such as poem generation are still widely regarded as a special human capability. In this paper, we demonstrate that a simple neural model can imitate humans in some art-generation tasks. We focus on traditional Chinese poetry and show that machines can do as well as many contemporary poets and weakly pass the Feigenbaum Test, a variant of the Turing Test for professional domains. Our method is based on an attention-based recurrent neural network, which accepts a set of keywords as the theme and attends to each keyword during generation. A number of techniques are proposed to improve the model, including character vector initialization, attention to input, and hybrid-style training. Compared to existing poetry generation methods, our model generates much more theme-consistent and semantically rich poems. |
Tasks | Game of Go |
Published | 2016-06-19 |
URL | http://arxiv.org/abs/1606.05829v1 |
http://arxiv.org/pdf/1606.05829v1.pdf | |
PWC | https://paperswithcode.com/paper/can-machine-generate-traditional-chinese |
Repo | |
Framework | |
Human vs. Computer Go: Review and Prospect
Title | Human vs. Computer Go: Review and Prospect |
Authors | Chang-Shing Lee, Mei-Hui Wang, Shi-Jim Yen, Ting-Han Wei, I-Chen Wu, Ping-Chiang Chou, Chun-Hsun Chou, Ming-Wan Wang, Tai-Hsiung Yang |
Abstract | The Google DeepMind challenge match in March 2016 was a historic achievement for computer Go development. This article discusses the development of computational intelligence (CI) and its relative strength in comparison with human intelligence for the game of Go. We first summarize the milestones achieved for computer Go from 1998 to 2016. Then, the computer Go programs that have participated in previous IEEE CIS competitions as well as methods and techniques used in AlphaGo are briefly introduced. Commentaries from three high-level professional Go players on the five AlphaGo versus Lee Sedol games are also included. We conclude that AlphaGo beating Lee Sedol is a huge achievement in artificial intelligence (AI) based largely on CI methods. In the future, powerful computer Go programs such as AlphaGo are expected to be instrumental in promoting Go education and AI real-world applications. |
Tasks | Game of Go |
Published | 2016-06-07 |
URL | http://arxiv.org/abs/1606.02032v1 |
http://arxiv.org/pdf/1606.02032v1.pdf | |
PWC | https://paperswithcode.com/paper/human-vs-computer-go-review-and-prospect |
Repo | |
Framework | |
Word Segmentation on Micro-blog Texts with External Lexicon and Heterogeneous Data
Title | Word Segmentation on Micro-blog Texts with External Lexicon and Heterogeneous Data |
Authors | Qingrong Xia, Zhenghua Li, Jiayuan Chao, Min Zhang |
Abstract | This paper describes our system designed for the NLPCC 2016 shared task on word segmentation on micro-blog texts. |
Tasks | |
Published | 2016-08-04 |
URL | http://arxiv.org/abs/1608.01448v2 |
http://arxiv.org/pdf/1608.01448v2.pdf | |
PWC | https://paperswithcode.com/paper/word-segmentation-on-micro-blog-texts-with |
Repo | |
Framework | |
Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision
Title | Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision |
Authors | Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, Christian Theobalt |
Abstract | We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation. |
Tasks | 3D Human Pose Estimation, Motion Capture, Pose Estimation, Transfer Learning |
Published | 2016-11-29 |
URL | http://arxiv.org/abs/1611.09813v5 |
http://arxiv.org/pdf/1611.09813v5.pdf | |
PWC | https://paperswithcode.com/paper/monocular-3d-human-pose-estimation-in-the |
Repo | |
Framework | |
Particle Smoothing for Hidden Diffusion Processes: Adaptive Path Integral Smoother
Title | Particle Smoothing for Hidden Diffusion Processes: Adaptive Path Integral Smoother |
Authors | H. -Ch. Ruiz, H. J. Kappen |
Abstract | Particle smoothing methods are used for inference of stochastic processes based on noisy observations. Typically, estimating the marginal posterior distribution given all observations is cumbersome and computationally intensive. In this paper, we propose a simple algorithm based on path integral control theory to estimate the smoothing distribution of continuous-time diffusion processes with partial observations. In particular, we use an adaptive importance sampling method to improve both the effective sample size of the posterior over processes given the observations and the reliability of the marginal estimates. This is achieved by estimating a feedback controller that samples efficiently from the joint smoothing distribution. We compare the results with estimates obtained from the standard Forward Filter/Backward Simulator (FFBSi) for two diffusion processes of different complexity. We show that the proposed method gives more reliable estimates than the standard FFBSi when the smoothing distribution is poorly represented by the filter distribution. |
Tasks | |
Published | 2016-05-01 |
URL | http://arxiv.org/abs/1605.00278v2 |
http://arxiv.org/pdf/1605.00278v2.pdf | |
PWC | https://paperswithcode.com/paper/particle-smoothing-for-hidden-diffusion |
Repo | |
Framework | |
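The "effective sample size" mentioned in the abstract above — the quantity the adaptive importance sampler is designed to improve — has a standard estimate from normalized importance weights. A minimal sketch (the function name is ours; the paper's path-integral smoother itself is not reproduced here):

```python
import numpy as np

def ess(logw):
    """Kish effective sample size of a set of log importance weights:
    equals N for uniform weights and approaches 1 when one particle
    carries essentially all the weight."""
    w = np.exp(logw - np.max(logw))   # subtract max for numerical stability
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)
```

A smoother whose weights are nearly uniform (`ess` close to the particle count) gives reliable marginal estimates; a collapsed weight set (`ess` near 1) signals exactly the degeneracy that adaptive importance sampling is meant to avoid.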
Obstacle evasion using fuzzy logic in a sliding blades problem environment
Title | Obstacle evasion using fuzzy logic in a sliding blades problem environment |
Authors | Shadrack Kimutai |
Abstract | This paper discusses obstacle avoidance using fuzzy logic and a shortest-path algorithm. It also introduces the sliding blades problem and illustrates how a drone can navigate through the swinging-blade obstacles while tracing a semi-optimal path and maintaining constant velocity. |
Tasks | |
Published | 2016-05-03 |
URL | http://arxiv.org/abs/1605.00787v1 |
http://arxiv.org/pdf/1605.00787v1.pdf | |
PWC | https://paperswithcode.com/paper/obstacle-evasion-using-fuzzy-logic-in-a |
Repo | |
Framework | |
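As a toy illustration of the fuzzy-logic steering the abstract above refers to, consider two rules — "IF the obstacle is near THEN turn sharply" and "IF it is far THEN go straight" — combined by weighted-average defuzzification. The membership shapes and thresholds below are invented for illustration and are not taken from the paper:

```python
def near(d):
    """Membership of distance d (in metres) in the fuzzy set 'near'."""
    return max(0.0, min(1.0, (5.0 - d) / 5.0))

def far(d):
    """Membership of distance d in the fuzzy set 'far'."""
    return max(0.0, min(1.0, (d - 3.0) / 5.0))

def turn_rate(d, sharp=45.0, straight=0.0):
    """Weighted-average defuzzification of two rules:
    IF obstacle near THEN turn sharp; IF obstacle far THEN go straight."""
    w1, w2 = near(d), far(d)
    return (w1 * sharp + w2 * straight) / (w1 + w2)
```

The two sets overlap on 3–5 m, so the denominator never vanishes and the commanded turn blends smoothly from 45° at contact range down to 0° in open space.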
Augur: Mining Human Behaviors from Fiction to Power Interactive Systems
Title | Augur: Mining Human Behaviors from Fiction to Power Interactive Systems |
Authors | Ethan Fast, William McGrath, Pranav Rajpurkar, Michael Bernstein |
Abstract | From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this paper, we instead demonstrate it is possible to mine a broad knowledge base of human behavior by analyzing more than one billion words of modern fiction. Our resulting knowledge base, Augur, trains vector models that can predict many thousands of user activities from surrounding objects in modern contexts: for example, whether a user may be eating food, meeting with a friend, or taking a selfie. Augur uses these predictions to identify actions that people commonly take on objects in the world and estimate a user’s future activities given their current situation. We demonstrate Augur-powered, activity-based systems such as a phone that silences itself when the odds of you answering it are low, and a dynamic music player that adjusts to your present activity. A field deployment of an Augur-powered wearable camera resulted in 96% recall and 71% precision on its unsupervised predictions of common daily activities. A second evaluation where human judges rated the system’s predictions over a broad set of input images found that 94% were rated sensible. |
Tasks | |
Published | 2016-02-22 |
URL | http://arxiv.org/abs/1602.06977v2 |
http://arxiv.org/pdf/1602.06977v2.pdf | |
PWC | https://paperswithcode.com/paper/augur-mining-human-behaviors-from-fiction-to |
Repo | |
Framework | |
Fuzzy-Klassen Model for Development Disparities Analysis based on Gross Regional Domestic Product Sector of a Region
Title | Fuzzy-Klassen Model for Development Disparities Analysis based on Gross Regional Domestic Product Sector of a Region |
Authors | Tb. Ai Munandar, Retantyo Wardoyo |
Abstract | Quadrant analysis of regional development imbalances is important for assessing how far the development of a given region has progressed and how it differs from other regions. Development inequality can be measured by comparing the average growth and the development contribution of each Gross Regional Domestic Product (GRDP) sector between the analyzed region and a reference region. This study develops a model for determining regional development imbalances using a fuzzy system approach combined with the rules of the Klassen typology; the model is called fuzzy-Klassen. A Mamdani product-implication fuzzy system is used as the inference engine, generating output after a defuzzification step, and MATLAB is used as the analysis tool. Test results for Cilegon City show significant differences between the traditional Klassen typology analysis and the proposed model. The fuzzy-Klassen model places most of Cilegon City's GRDP sectors in Quadrant I (K1), i.e., advanced and rapidly growing sectors, whereas under the traditional Klassen typology half of the GRDP sectors fall into Quadrant IV (K4), the relatively lagging status. |
Tasks | |
Published | 2016-06-10 |
URL | http://arxiv.org/abs/1606.03191v1 |
http://arxiv.org/pdf/1606.03191v1.pdf | |
PWC | https://paperswithcode.com/paper/fuzzy-klassen-model-for-development |
Repo | |
Framework | |
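For contrast with the fuzzy model above, the traditional (crisp) Klassen typology reduces to two threshold comparisons per sector: its growth rate and its GRDP contribution share against the reference region. A sketch, with the quadrant interpretations taken from the common convention rather than from the paper itself:

```python
def klassen_quadrant(growth, share, ref_growth, ref_share):
    """Classify a GRDP sector by the traditional (crisp) Klassen typology.

    growth/share: average growth rate and GRDP contribution of the sector
    in the analyzed region; ref_*: the same quantities for the reference
    region. Quadrant labels follow the common convention (an assumption
    here, not taken from the paper itself)."""
    if growth >= ref_growth and share >= ref_share:
        return "Quadrant I"    # advanced and rapidly growing
    if growth < ref_growth and share >= ref_share:
        return "Quadrant II"   # advanced but depressed
    if growth >= ref_growth and share < ref_share:
        return "Quadrant III"  # developing
    return "Quadrant IV"       # relatively lagging

# toy sector: grows faster than the reference but contributes less
label = klassen_quadrant(growth=6.1, share=3.2, ref_growth=5.0, ref_share=4.0)
```

The fuzzy-Klassen model replaces these hard thresholds with overlapping membership functions, so sectors near a boundary are not forced into a single quadrant.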
GOGMA: Globally-Optimal Gaussian Mixture Alignment
Title | GOGMA: Globally-Optimal Gaussian Mixture Alignment |
Authors | Dylan Campbell, Lars Petersson |
Abstract | Gaussian mixture alignment is a family of approaches that are frequently used for robustly solving the point-set registration problem. However, since they use local optimisation, they are susceptible to local minima and can only guarantee local optimality. Consequently, their accuracy is strongly dependent on the quality of the initialisation. This paper presents the first globally-optimal solution to the 3D rigid Gaussian mixture alignment problem under the L2 distance between mixtures. The algorithm, named GOGMA, employs a branch-and-bound approach to search the space of 3D rigid motions SE(3), guaranteeing global optimality regardless of the initialisation. The geometry of SE(3) was used to find novel upper and lower bounds for the objective function and local optimisation was integrated into the scheme to accelerate convergence without voiding the optimality guarantee. The evaluation empirically supported the optimality proof and showed that the method performed much more robustly on two challenging datasets than an existing globally-optimal registration solution. |
Tasks | |
Published | 2016-03-01 |
URL | http://arxiv.org/abs/1603.00150v1 |
http://arxiv.org/pdf/1603.00150v1.pdf | |
PWC | https://paperswithcode.com/paper/gogma-globally-optimal-gaussian-mixture |
Repo | |
Framework | |
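The L2 objective named in the abstract above has a closed form for Gaussian mixtures, because the integral of a product of two Gaussians is itself a Gaussian density. A sketch for isotropic components (function names are ours; GOGMA's branch-and-bound search over SE(3) is not reproduced here):

```python
import numpy as np

def gauss_overlap(mu1, mu2, s):
    """Pairwise integrals of products of isotropic Gaussians:
    the integral of N(x; m1, s1*I) * N(x; m2, s2*I) over R^d equals
    the density N(m1; m2, (s1+s2)*I); here s = s1 + s2."""
    d = mu1.shape[-1]
    sq = np.sum((mu1[:, None, :] - mu2[None, :, :]) ** 2, axis=-1)
    return (2 * np.pi * s) ** (-d / 2) * np.exp(-sq / (2 * s))

def gmm_l2_sq(w1, mu1, s1, w2, mu2, s2):
    """Squared L2 distance between two isotropic GMMs:
    ||f - g||^2 = <f, f> - 2 <f, g> + <g, g>, each term closed-form."""
    t11 = w1 @ gauss_overlap(mu1, mu1, s1 + s1) @ w1
    t12 = w1 @ gauss_overlap(mu1, mu2, s1 + s2) @ w2
    t22 = w2 @ gauss_overlap(mu2, mu2, s2 + s2) @ w2
    return t11 - 2 * t12 + t22
```

GOGMA minimizes this objective over rigid motions of one mixture; the closed form is what makes bounding the objective over regions of SE(3) tractable.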
Learning Concept Hierarchies through Probabilistic Topic Modeling
Title | Learning Concept Hierarchies through Probabilistic Topic Modeling |
Authors | V. S. Anoop, S. Asharaf, P. Deepak |
Abstract | With the advent of the semantic web, various tools and techniques have been introduced for presenting and organizing knowledge. Concept hierarchies are one such technique, and they have gained significant attention due to their usefulness in creating domain ontologies, which are an integral part of the semantic web. Automated concept hierarchy learning algorithms focus on extracting relevant concepts from an unstructured text corpus and connecting them by identifying potential relations between them. In this paper, we propose a novel approach that identifies relevant concepts from plain text and then learns a hierarchy of concepts by exploiting the subsumption relation between them. We first model topics using a probabilistic topic model and then apply lightweight linguistic processing to extract semantically rich concepts. We then connect concepts by identifying “is-a” relationships between pairs of concepts. The proposed method is completely unsupervised and requires no domain-specific training corpus for concept extraction and learning. Experiments on large, real-world text corpora such as the BBC News dataset and the Reuters News corpus show that the proposed method outperforms several existing methods for concept extraction, and that efficient concept hierarchy learning is possible when the overall task is guided by a probabilistic topic modeling algorithm. |
Tasks | |
Published | 2016-11-29 |
URL | http://arxiv.org/abs/1611.09573v1 |
http://arxiv.org/pdf/1611.09573v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-concept-hierarchies-through |
Repo | |
Framework | |
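The “is-a” identification step described above can be approximated with the classic document-co-occurrence subsumption test: concept x subsumes concept y when x appears in (almost) all documents containing y, but not vice versa. This Sanderson–Croft-style criterion is a common choice in the hierarchy-learning literature; the paper's exact test may differ:

```python
def subsumes(x, y, docs, t=0.8):
    """Return True if concept x subsumes concept y under the
    co-occurrence criterion P(x|y) >= t and P(y|x) < t.
    docs: iterable of sets of concepts (one set per document)."""
    dx = sum(1 for d in docs if x in d)
    dy = sum(1 for d in docs if y in d)
    both = sum(1 for d in docs if x in d and y in d)
    if dx == 0 or dy == 0:
        return False
    return both / dy >= t and both / dx < t

# toy corpus: "animal" co-occurs with every "dog", but not conversely
docs = [{"animal", "dog"}, {"animal", "cat"},
        {"animal"}, {"animal", "dog"}]
```

Here `subsumes("animal", "dog", docs)` holds while the reverse does not, yielding the "animal is-a parent of dog" edge a hierarchy learner would keep.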
Semantic Properties of Customer Sentiment in Tweets
Title | Semantic Properties of Customer Sentiment in Tweets |
Authors | Eun Hee Ko, Diego Klabjan |
Abstract | An increasing number of people are using online social networking services (SNSs), and a significant amount of information related to consumption experiences is shared in this new media form. Text mining is an emerging technique for mining useful information from the web. We aim to discover semantic patterns in consumers’ discussions on social media, in particular in tweets. Specifically, the purposes of this study are twofold: 1) finding similarity and dissimilarity between two sets of textual documents that carry consumers’ sentiment polarities, i.e., positive vs. negative opinions, and 2) deriving actual content with a semantic trend from the textual data. The considered tweets include consumers’ opinions on US retail companies (e.g., Amazon, Walmart). Cosine similarity and K-means clustering are used to achieve the former goal, and Latent Dirichlet Allocation (LDA), a popular topic modeling algorithm, is used for the latter. This is the first study to discover semantic properties of textual data in a consumption context beyond sentiment analysis. In addition to the major findings, we apply LDA to the same data and derive latent topics that represent consumers’ positive and negative opinions on social media. |
Tasks | Sentiment Analysis |
Published | 2016-03-24 |
URL | http://arxiv.org/abs/1603.07624v1 |
http://arxiv.org/pdf/1603.07624v1.pdf | |
PWC | https://paperswithcode.com/paper/semantic-properties-of-customer-sentiment-in |
Repo | |
Framework | |
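The two-part pipeline the abstract above describes — cosine similarity with K-means for grouping documents, LDA for latent topics — maps directly onto standard scikit-learn components. A toy sketch on four invented tweets (the data and parameter choices are ours, not the paper's):

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "love the fast delivery from amazon",
    "great prices and friendly staff at walmart",
    "terrible customer service at walmart today",
    "my amazon order arrived broken and late",
]

# goal 1: pairwise similarity and clustering on tf-idf vectors
X = TfidfVectorizer().fit_transform(tweets)
sim = cosine_similarity(X)                  # 4x4 document-similarity matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# goal 2: latent topics from raw term counts (LDA expects counts, not tf-idf)
counts = CountVectorizer().fit_transform(tweets)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
```

With realistic data, the clusters and topic-word distributions are what the study inspects for positive- vs. negative-opinion structure.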
A Tutorial about Random Neural Networks in Supervised Learning
Title | A Tutorial about Random Neural Networks in Supervised Learning |
Authors | Sebastián Basterrech, Gerardo Rubino |
Abstract | Random Neural Networks (RNNs) are a class of Neural Networks (NNs) that can also be seen as a specific type of queuing network. They have been successfully used in several domains during the last 25 years: as queuing networks to analyze the performance of resource sharing in many engineering areas, as learning tools and in combinatorial optimization, where they are seen as neural systems, and also as models of neurological aspects of living beings. In this article we focus on their learning capabilities and, more specifically, present a practical guide for using RNNs to solve supervised learning problems. We give a general description of these models, using the terminology of queuing theory and that of neural networks almost interchangeably. We present the standard learning procedures used by RNNs, adapted from well-established techniques in the standard NN field. We describe in particular a set of learning algorithms covering techniques based on first-order and then second-order derivatives. We also discuss some issues related to these models and present new perspectives on their use in supervised learning problems. The tutorial describes their most relevant applications and also provides an extensive bibliography. |
Tasks | Combinatorial Optimization |
Published | 2016-09-15 |
URL | http://arxiv.org/abs/1609.04846v1 |
http://arxiv.org/pdf/1609.04846v1.pdf | |
PWC | https://paperswithcode.com/paper/a-tutorial-about-random-neural-networks-in |
Repo | |
Framework | |
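The "neural system as queuing network" view described above has a concrete computational core: the steady-state excitation probability q_i of each neuron solves a nonlinear signal-flow fixed point, which can be found by simple iteration. A sketch of the standard Gelenbe-style RNN equations (function name and the exact parameterization are our choices; tutorials vary in notation):

```python
import numpy as np

def rnn_steady_state(Lam, lam, r, Wp, Wn, iters=200):
    """Fixed-point iteration for the steady-state excitation
    probabilities q of a random neural network.

    Lam, lam: exogenous positive/negative arrival rates per neuron
    r: firing rates; Wp[j, i] = r[j] * p+(j, i), Wn[j, i] = r[j] * p-(j, i)
    (the usual 'weight' parameterization of the RNN)."""
    q = np.zeros(len(r))
    for _ in range(iters):
        lp = Lam + q @ Wp          # total positive signal rate into each neuron
        ln = lam + q @ Wn          # total negative signal rate into each neuron
        q = np.minimum(lp / (r + ln), 1.0)   # clip: stable nets keep q < 1
    return q
```

Supervised learning in this model means adjusting Wp and Wn (by first- or second-order methods, as the tutorial covers) so that the fixed-point q at designated output neurons matches the targets.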
Decoy Bandits Dueling on a Poset
Title | Decoy Bandits Dueling on a Poset |
Authors | Julien Audiffren, Ralaivola Liva |
Abstract | We address the problem of dueling bandits defined on partially ordered sets, or posets. In this setting, arms may not be comparable, and there may be several (incomparable) optimal arms. We propose an algorithm, UnchainedBandits, that efficiently finds the set of optimal arms of any poset, even when pairs of comparable arms cannot be distinguished from pairs of incomparable arms, under a minimal set of assumptions. The algorithm relies on the concept of decoys, which stems from social psychology. For the easier case where incomparability information may be accessible, we propose a second algorithm, SlicingBandits, which takes advantage of this information and achieves a significant performance gain over UnchainedBandits. We provide theoretical guarantees and experimental evaluations for both algorithms. |
Tasks | |
Published | 2016-02-08 |
URL | http://arxiv.org/abs/1602.02706v2 |
http://arxiv.org/pdf/1602.02706v2.pdf | |
PWC | https://paperswithcode.com/paper/decoy-bandits-dueling-on-a-poset |
Repo | |
Framework | |
Kernel Cross-View Collaborative Representation based Classification for Person Re-Identification
Title | Kernel Cross-View Collaborative Representation based Classification for Person Re-Identification |
Authors | Raphael Prates, William Robson Schwartz |
Abstract | Person re-identification aims at maintaining a global identity as a person moves among non-overlapping surveillance cameras. It is a hard task due to different illumination conditions, viewpoints, and the small number of annotated individuals from each pair of cameras (the small-sample-size problem). Collaborative Representation based Classification (CRC) has been employed successfully to address the small-sample-size problem in computer vision. However, the original CRC formulation is not well-suited for person re-identification, since it does not consider that probe and gallery samples come from different cameras. Furthermore, it is a linear model, while appearance changes caused by different camera conditions indicate a strongly nonlinear transition between cameras. To overcome these limitations, we propose Kernel Cross-View Collaborative Representation based Classification (Kernel X-CRC), which represents probe and gallery images by balancing representativeness and similarity nonlinearly. It assumes that a probe and its corresponding gallery image are represented by similar coding vectors over the individuals in the training set. Experimental results demonstrate that this assumption holds when using a high-dimensional feature vector and becomes even more compelling with a low-dimensional, discriminative representation computed by a common subspace learning method. We achieve state-of-the-art rank-1 matching rates on two person re-identification datasets (PRID450S and GRID) and the second-best results on the VIPeR and CUHK01 datasets. |
Tasks | Person Re-Identification |
Published | 2016-11-21 |
URL | http://arxiv.org/abs/1611.06969v1 |
http://arxiv.org/pdf/1611.06969v1.pdf | |
PWC | https://paperswithcode.com/paper/kernel-cross-view-collaborative |
Repo | |
Framework | |
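The linear CRC baseline that the abstract above builds on has a compact closed form: code the probe over all gallery samples with ridge-regularized least squares, then assign the class with the smallest class-wise reconstruction residual. A sketch of that standard decision rule (the paper's kernel cross-view extension, which couples probe and gallery codings nonlinearly, is not reproduced here):

```python
import numpy as np

def crc_code(D, y, lam=0.1):
    """Collaborative-representation coding vector for probe y over
    dictionary D (columns = training samples): ridge least squares
    alpha = (D^T D + lam I)^-1 D^T y."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ y)

def crc_classify(D, labels, y, lam=0.1):
    """Assign y to the class with the smallest class-wise reconstruction
    residual ||y - D_c alpha_c||, the standard CRC decision rule."""
    alpha = crc_code(D, y, lam)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        residuals[c] = np.linalg.norm(y - D[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)
```

In the re-identification setting, the dictionary columns would be gallery-camera features and the probe a feature from the other camera, which is exactly the cross-view mismatch the kernel extension targets.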