Paper Group ANR 1544
Citizens’ Emotion on GST: A Spatio-Temporal Analysis over Twitter Data. Diagnosing Reinforcement Learning for Traffic Signal Control. A Bayesian Approach for Accurate Classification-Based Aggregates. OutdoorSent: Sentiment Analysis of Urban Outdoor Images by Using Semantic and Deep Features. Identity-Aware Deep Face Hallucination via Adversarial Fa …
Citizens’ Emotion on GST: A Spatio-Temporal Analysis over Twitter Data
Title | Citizens’ Emotion on GST: A Spatio-Temporal Analysis over Twitter Data |
Authors | Deepak Uniyal, Ankit Rai |
Abstract | People might not be close at hand, but they still are - by virtue of the social network. The social network has transformed lives in many ways. People can express their views, opinions and life experiences on various platforms, be it Twitter, Facebook or any other medium. Such events include reviewing a product or service, conveying views on political banter, predicting share prices or giving feedback on government policies like Demonetization or GST. These social platforms can be used to gain insight into the emotions that the general public is expressing. This kind of analysis can help make a product better, predict future prospects and also implement public policies in a better way. Research on sentiment analysis of this kind is increasing rapidly. In this research paper, we have performed temporal analysis and spatial analysis on 142,508 and 58,613 tweets respectively; these tweets were posted during the post-GST implementation period from July 04, 2017 to July 25, 2017, and were collected using the Twitter streaming API. A well-known lexicon, the National Research Council Canada (NRC) Emotion Lexicon, is used for opinion mining; it assigns a blend of eight basic emotions, i.e. joy, trust, anticipation, surprise, fear, sadness, anger and disgust, and two sentiments, i.e. positive and negative, to 6,554 words. |
Tasks | Opinion Mining, Sentiment Analysis |
Published | 2019-06-20 |
URL | https://arxiv.org/abs/1906.08693v1 |
https://arxiv.org/pdf/1906.08693v1.pdf | |
PWC | https://paperswithcode.com/paper/citizens-emotion-on-gst-a-spatio-temporal |
Repo | |
Framework | |
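The lexicon-based opinion mining the abstract describes can be sketched roughly as follows: look each tweet token up in an NRC-style word-emotion lexicon and tally the associated emotion and sentiment labels. The lexicon entries below are illustrative stand-ins, not the real NRC EmoLex data, and `score_tweet` is a hypothetical name.

```python
from collections import Counter

# Illustrative stand-in entries; the real NRC EmoLex maps 6,554 words
# to subsets of eight emotions and two sentiments.
EMOLEX = {
    "happy":   {"joy", "positive"},
    "angry":   {"anger", "negative"},
    "worried": {"fear", "anticipation", "negative"},
}

def score_tweet(text, lexicon=EMOLEX):
    """Count emotion/sentiment labels over all lexicon words in a tweet."""
    counts = Counter()
    for token in text.lower().split():
        counts.update(lexicon.get(token, ()))
    return counts

scores = score_tweet("happy with GST but worried about prices")
```

Aggregating such per-tweet counts over time windows (temporal) or over geotagged regions (spatial) would yield the kind of emotion curves the paper analyzes.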
Diagnosing Reinforcement Learning for Traffic Signal Control
Title | Diagnosing Reinforcement Learning for Traffic Signal Control |
Authors | Guanjie Zheng, Xinshi Zang, Nan Xu, Hua Wei, Zhengyao Yu, Vikash Gayah, Kai Xu, Zhenhui Li |
Abstract | With the increasing availability of traffic data and advance of deep reinforcement learning techniques, there is an emerging trend of employing reinforcement learning (RL) for traffic signal control. A key question for applying RL to traffic signal control is how to define the reward and state. The ultimate objective in traffic signal control is to minimize the travel time, which is difficult to reach directly. Hence, existing studies often define reward as an ad-hoc weighted linear combination of several traffic measures. However, there is no guarantee that the travel time will be optimized with the reward. In addition, recent RL approaches use more complicated state (e.g., image) in order to describe the full traffic situation. However, none of the existing studies has discussed whether such a complex state representation is necessary. This extra complexity may lead to significantly slower learning process but may not necessarily bring significant performance gain. In this paper, we propose to re-examine the RL approaches through the lens of classic transportation theory. We ask the following questions: (1) How should we design the reward so that one can guarantee to minimize the travel time? (2) How to design a state representation which is concise yet sufficient to obtain the optimal solution? Our proposed method LIT is theoretically supported by the classic traffic signal control methods in transportation field. LIT has a very simple state and reward design, thus can serve as a building block for future RL approaches to traffic signal control. Extensive experiments on both synthetic and real datasets show that our method significantly outperforms the state-of-the-art traffic signal control methods. |
Tasks | |
Published | 2019-05-12 |
URL | https://arxiv.org/abs/1905.04716v1 |
https://arxiv.org/pdf/1905.04716v1.pdf | |
PWC | https://paperswithcode.com/paper/diagnosing-reinforcement-learning-for-traffic |
Repo | |
Framework | |
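A minimal sketch of the kind of concise state and reward design the abstract argues for: state as per-lane queue lengths plus the current phase, and reward as the negative total queue length. The names and exact structure are illustrative assumptions, not the paper's code.

```python
def lit_state(queue_lengths, phase):
    """Concise state: queue length per incoming lane plus current signal phase."""
    return tuple(queue_lengths) + (phase,)

def lit_reward(queue_lengths):
    """Reward: negative sum of queue lengths at the intersection."""
    return -sum(queue_lengths)

s = lit_state([3, 0, 5, 2], phase=1)
r = lit_reward([3, 0, 5, 2])
```

The contrast with prior work is that this avoids both ad-hoc weighted combinations of traffic measures and heavy image-based states.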
A Bayesian Approach for Accurate Classification-Based Aggregates
Title | A Bayesian Approach for Accurate Classification-Based Aggregates |
Authors | Q. A. Meertens, C. G. H. Diks, H. J. van den Herik, F. W. Takes |
Abstract | In this paper, we study the accuracy of values aggregated over classes predicted by a classification algorithm. The problem is that the resulting aggregates (e.g., sums of a variable) are known to be biased. The bias can be large even for highly accurate classification algorithms, in particular when dealing with class-imbalanced data. To correct this bias, the algorithm’s classification error rates have to be estimated. In this estimation, two issues arise when applying existing bias correction methods. First, inaccuracies in estimating classification error rates have to be taken into account. Second, impermissible estimates, such as a negative estimate for a positive value, have to be dismissed. We show that both issues are relevant in applications where the true labels are known only for a small set of data points. We propose a novel bias correction method using Bayesian inference. The novelty of our method is that it imposes constraints on the model parameters. We show that our method solves the problem of biased classification-based aggregates as well as the two issues above, in the general setting of multi-class classification. In the empirical evaluation, using a binary classifier on a real-world dataset of company tax returns, we show that our method outperforms existing methods in terms of mean squared error. |
Tasks | Bayesian Inference |
Published | 2019-02-06 |
URL | http://arxiv.org/abs/1902.02412v1 |
http://arxiv.org/pdf/1902.02412v1.pdf | |
PWC | https://paperswithcode.com/paper/a-bayesian-approach-for-accurate |
Repo | |
Framework | |
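To make the bias problem concrete, here is a hedged frequentist analogue of the correction the paper generalizes: invert the expected relation between predicted-positive counts and true-positive counts using estimated error rates. The Bayesian method in the paper additionally models uncertainty in those rates and imposes constraints that rule out impermissible estimates; this sketch only clips, and `corrected_positive_count` is an illustrative name.

```python
def corrected_positive_count(n_pred_pos, n_total, tpr, fpr):
    """Invert E[n_pred_pos] = tpr*n_pos + fpr*(n_total - n_pos) for n_pos."""
    n_pos = (n_pred_pos - fpr * n_total) / (tpr - fpr)
    # Crude constraint handling; the paper's Bayesian prior does this properly.
    return min(max(n_pos, 0.0), float(n_total))

# 300 predicted positives out of 1000 records, with tpr=0.9 and fpr=0.1:
est = corrected_positive_count(300, 1000, tpr=0.9, fpr=0.1)
```

Note that even a 90%-accurate classifier here overstates the positive class by 50 records, which is exactly the bias the abstract describes.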
OutdoorSent: Sentiment Analysis of Urban Outdoor Images by Using Semantic and Deep Features
Title | OutdoorSent: Sentiment Analysis of Urban Outdoor Images by Using Semantic and Deep Features |
Authors | Wyverson B. de Oliveira, Leyza B. Dorini, Rodrigo Minetto, Thiago H. Silva |
Abstract | Opinion mining in outdoor images posted by users during different activities can provide valuable information to better understand urban areas. In this regard, we propose a framework to classify the sentiment of outdoor images shared by users on social networks. We compare the performance of state-of-the-art ConvNet architectures, and one specifically designed for sentiment analysis. We also evaluate how the merging of deep features and semantic information derived from the scene attributes can improve classification and cross-dataset generalization performance. The evaluation explores a novel dataset, namely OutdoorSent, and other datasets publicly available. We observe that the incorporation of knowledge about semantic attributes improves the accuracy of all ConvNet architectures studied. Besides, we found that exploring only images related to the context of the study, outdoor in our case, is recommended, i.e., indoor images were not significantly helpful. Furthermore, we demonstrated the applicability of our results in the city of Chicago, USA, showing that they can help to improve the knowledge of subjective characteristics of different areas of the city. For instance, particular areas of the city tend to concentrate more images of a specific class of sentiment, which are also correlated with median income, opening up opportunities in different fields. |
Tasks | Opinion Mining, Sentiment Analysis |
Published | 2019-06-05 |
URL | https://arxiv.org/abs/1906.02331v4 |
https://arxiv.org/pdf/1906.02331v4.pdf | |
PWC | https://paperswithcode.com/paper/outdoorsent-can-semantic-features-help-deep |
Repo | |
Framework | |
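The fusion step the abstract describes can be sketched as a late concatenation of ConvNet features with scene-attribute scores before the sentiment classifier. The vectors and values below are made up for illustration.

```python
def fuse_features(deep_features, attribute_scores):
    """Late fusion: concatenate deep and semantic feature vectors."""
    return list(deep_features) + list(attribute_scores)

deep = [0.12, 0.80, 0.33]           # e.g. penultimate-layer activations
attrs = [1.0, 0.0, 0.7]             # e.g. scene-attribute scores
fused = fuse_features(deep, attrs)  # input to the sentiment classifier
```

The paper's finding is that adding the semantic half of this vector improved accuracy for every ConvNet architecture studied.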
Identity-Aware Deep Face Hallucination via Adversarial Face Verification
Title | Identity-Aware Deep Face Hallucination via Adversarial Face Verification |
Authors | Hadi Kazemi, Fariborz Taherkhani, Nasser M. Nasrabadi |
Abstract | In this paper, we address the problem of face hallucination by proposing a novel multi-scale generative adversarial network (GAN) architecture optimized for face verification. First, we propose a multi-scale generator architecture for face hallucination with a high up-scaling ratio factor, which has multiple intermediate outputs at different resolutions. The intermediate outputs progressively synthesize images from small to large scales. Second, we incorporate a face verifier with the original GAN discriminator and propose a novel discriminator which learns to discriminate different identities while distinguishing fake generated HR face images from their ground-truth images. In particular, the learned generator attends not only to the visual quality of hallucinated face images but also to preserving discriminative features in the hallucination process. In addition, to capture perceptually relevant differences we employ a perceptual similarity loss instead of similarity in pixel space. We perform a quantitative and qualitative evaluation of our framework on the LFW and CelebA datasets. The experimental results show the advantages of our proposed method against the state-of-the-art methods on the 8x downsampled testing dataset. |
Tasks | Face Hallucination, Face Verification |
Published | 2019-09-17 |
URL | https://arxiv.org/abs/1909.08130v1 |
https://arxiv.org/pdf/1909.08130v1.pdf | |
PWC | https://paperswithcode.com/paper/identity-aware-deep-face-hallucination-via |
Repo | |
Framework | |
Elementary Iterated Revision and the Levi Identity
Title | Elementary Iterated Revision and the Levi Identity |
Authors | Jake Chandler, Richard Booth |
Abstract | Recent work has considered the problem of extending to the case of iterated belief change the so-called 'Harper Identity' (HI), which defines single-shot contraction in terms of single-shot revision. The present paper considers the prospects of providing a similar extension of the Levi Identity (LI), in which the direction of definition runs the other way. We restrict our attention here to the three classic iterated revision operators--natural, restrained and lexicographic--for which we provide here the first collective characterisation in the literature, under the appellation of 'elementary' operators. We consider two prima facie plausible ways of extending (LI). The first proposal involves the use of the rational closure operator to offer a 'reductive' account of iterated revision in terms of iterated contraction. The second, which doesn't commit to reductionism, was put forward some years ago by Nayak et al. We establish that, for elementary revision operators and under mild assumptions regarding contraction, Nayak's proposal is equivalent to a new set of postulates formalising the claim that contraction by $\neg A$ should be considered to be a kind of 'mild' revision by $A$. We then show that these, in turn, under slightly weaker assumptions, jointly amount to the conjunction of a pair of constraints on the extension of (HI) that were recently proposed in the literature. Finally, we consider the consequences of endorsing both suggestions and show that this would yield an identification of rational revision with natural revision. We close the paper by discussing the general prospects for defining iterated revision in terms of iterated contraction. |
Tasks | |
Published | 2019-07-02 |
URL | https://arxiv.org/abs/1907.01224v1 |
https://arxiv.org/pdf/1907.01224v1.pdf | |
PWC | https://paperswithcode.com/paper/elementary-iterated-revision-and-the-levi |
Repo | |
Framework | |
Deep Poisoning Functions: Towards Robust Privacy-safe Image Data Sharing
Title | Deep Poisoning Functions: Towards Robust Privacy-safe Image Data Sharing |
Authors | Hao Guo, Brian Dolhansky, Eric Hsin, Phong Dinh, Song Wang, Cristian Canton Ferrer |
Abstract | As deep networks are applied to an ever-expanding set of computer vision tasks, protecting general privacy in image data has become a critically important goal. This paper presents a new framework for privacy-preserving data sharing that is robust to adversarial attacks and overcomes the known issues existing in previous approaches. We introduce the concept of a Deep Poisoning Function (DPF), which is a module inserted into a pre-trained deep network designed to perform a specific vision task. The DPF is optimized to deliberately poison image data to prevent known adversarial attacks, while ensuring that the altered image data is functionally equivalent to the non-poisoned data for the original task. Given this equivalence, both poisoned and non-poisoned data can be used for further retraining or fine-tuning. Experimental results on image classification and face recognition tasks prove the efficacy of the proposed method. |
Tasks | Face Recognition, Image Classification |
Published | 2019-12-14 |
URL | https://arxiv.org/abs/1912.06895v1 |
https://arxiv.org/pdf/1912.06895v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-poisoning-functions-towards-robust |
Repo | |
Framework | |
RED-NET: A Recursive Encoder-Decoder Network for Edge Detection
Title | RED-NET: A Recursive Encoder-Decoder Network for Edge Detection |
Authors | Truc Le, Yuyan Li, Ye Duan |
Abstract | In this paper, we introduce RED-NET: A Recursive Encoder-Decoder Network with Skip-Connections for edge detection in natural images. The proposed network is a novel integration of a Recursive Neural Network with an Encoder-Decoder architecture. The recursive network enables us to increase the network depth without increasing the number of parameters. Adding skip-connections between encoder and decoder helps the gradients reach all the layers of a network more easily and allows information related to finer details in the early stage of the encoder to be fully utilized in the decoder. Based on our extensive experiments on popular boundary detection datasets including BSDS500 \cite{Arbelaez2011}, NYUD \cite{Silberman2012} and Pascal Context \cite{Mottaghi2014}, RED-NET significantly advances the state-of-the-art on edge detection regarding standard evaluation metrics such as Optimal Dataset Scale (ODS) F-measure, Optimal Image Scale (OIS) F-measure, and Average Precision (AP). |
Tasks | Boundary Detection, Edge Detection |
Published | 2019-12-05 |
URL | https://arxiv.org/abs/1912.02914v1 |
https://arxiv.org/pdf/1912.02914v1.pdf | |
PWC | https://paperswithcode.com/paper/red-net-a-recursive-encoder-decoder-network |
Repo | |
Framework | |
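The recursion idea in the abstract can be sketched in a toy form: one encoder-decoder block with tied weights is applied repeatedly, so effective depth grows with no new parameters, and a skip connection feeds early information forward. The transform below is a stand-in smoothing function, not the paper's network.

```python
def encoder_decoder(x, w=0.5):
    """One shared encoder-decoder pass; `w` stands in for tied weights."""
    return [w * v for v in x]  # toy transform in place of conv layers

def red_net(x, steps=3):
    """Apply the shared block `steps` times with a skip from the input."""
    out = x
    for _ in range(steps):
        out = [a + b for a, b in zip(encoder_decoder(out), x)]
    return out

y = red_net([1.0], steps=3)
```

The point is only structural: `red_net` is three blocks deep but owns exactly one block's worth of parameters, mirroring the recursive weight sharing described above.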
Accelerating Proposal Generation Network for Fast Face Detection on Mobile Devices
Title | Accelerating Proposal Generation Network for Fast Face Detection on Mobile Devices |
Authors | Heming Zhang, Xiaolong Wang, Jingwen Zhu, C. -C. Jay Kuo |
Abstract | Face detection is a widely studied problem over the past few decades. Recently, significant improvements have been achieved via deep neural networks; however, it is still challenging to directly apply these techniques to mobile devices due to their limited computational power and memory. In this work, we present a proposal generation acceleration framework for real-time face detection. More specifically, we adopt a popular cascaded convolutional neural network (CNN) as the basis, then apply our acceleration approach on the basic framework to speed up the model inference time. We are motivated by the observation that the computation bottleneck of this framework arises from the proposal generation stage, where each level of the dense image pyramid has to go through the network. In this work, we reduce the number of image pyramid levels by utilizing both global and local facial characteristics (i.e., global face and facial parts). Experimental results on the public benchmarks WIDER-face and FDDB demonstrate satisfactory performance and faster speed compared to the state-of-the-art. |
Tasks | Face Detection |
Published | 2019-04-27 |
URL | http://arxiv.org/abs/1904.12094v1 |
http://arxiv.org/pdf/1904.12094v1.pdf | |
PWC | https://paperswithcode.com/paper/accelerating-proposal-generation-network-for |
Repo | |
Framework | |
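The bottleneck the abstract identifies can be made concrete with a toy pyramid-scale computation: proposal generation runs the network once per pyramid level, so dropping levels cuts inference cost roughly proportionally. The face sizes, step factor, and the idea of skipping alternate levels are illustrative assumptions, not the paper's exact scheme.

```python
def pyramid_scales(min_face, max_face, step=1.4):
    """Scales at which a `min_face`-sized face reaches the detector's input size."""
    scales, s = [], 1.0
    while min_face * s <= max_face:
        scales.append(round(s, 3))
        s *= step
    return scales

full = pyramid_scales(12, 120)  # dense pyramid: one network pass per scale
reduced = full[::2]             # e.g. levels covered instead by facial-part cues
```

Halving the number of levels here halves the number of proposal-stage network passes, which is the kind of saving the framework targets.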
See the World through Network Cameras
Title | See the World through Network Cameras |
Authors | Yung-Hsiang Lu, George K. Thiruvathukal, Ahmed S. Kaseb, Kent Gauen, Damini Rijhwani, Ryan Dailey, Deeptanshu Malik, Yutong Huang, Sarah Aghajanzadeh, Minghao Guo |
Abstract | Millions of network cameras have been deployed worldwide. Real-time data from many network cameras can offer instant views of multiple locations with applications in public safety, transportation management, urban planning, agriculture, forestry, social sciences, atmospheric information, and more. First, this paper describes the real-time data available from worldwide network cameras and potential applications. Second, this paper outlines the CAM2 System available to users at https://www.cam2project.net/. This information includes strategies to discover network cameras and create the camera database, user interface, and computing platforms. Third, this paper describes many opportunities provided by data from network cameras and challenges to be addressed. |
Tasks | |
Published | 2019-04-14 |
URL | http://arxiv.org/abs/1904.06775v1 |
http://arxiv.org/pdf/1904.06775v1.pdf | |
PWC | https://paperswithcode.com/paper/see-the-world-through-network-cameras |
Repo | |
Framework | |
Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks
Title | Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks |
Authors | Zhong Qiu Lin, Alexander Wong |
Abstract | Much of the focus in the area of knowledge distillation has been on distilling knowledge from a larger teacher network to a smaller student network. However, there has been little research on how the concept of distillation can be leveraged to distill the knowledge encapsulated in the training data itself into a reduced form. In this study, we explore the concept of progressive label distillation, where we leverage a series of teacher-student network pairs to progressively generate distilled training data for learning deep neural networks with greatly reduced input dimensions. To investigate the efficacy of the proposed progressive label distillation approach, we experimented with learning a deep limited vocabulary speech recognition network based on generated 500ms input utterances distilled progressively from 1000ms source training data, and demonstrated a significant increase in test accuracy of almost 78% compared to direct learning. |
Tasks | Speech Recognition |
Published | 2019-01-26 |
URL | http://arxiv.org/abs/1901.09135v1 |
http://arxiv.org/pdf/1901.09135v1.pdf | |
PWC | https://paperswithcode.com/paper/progressive-label-distillation-learning-input |
Repo | |
Framework | |
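The progressive pipeline described above can be sketched as a chain of stages in which teacher labels are carried over to inputs of reduced dimension (e.g. 1000ms utterances toward 500ms). `shrink` and `distill_stage` are hypothetical stand-ins; a real stage would train a student network on the teacher's soft labels rather than just re-pairing data.

```python
def shrink(utterance, factor=2):
    """Reduce input dimension, here by keeping every `factor`-th sample."""
    return utterance[::factor]

def distill_stage(teacher_labels, inputs):
    """Pair the teacher's labels with the dimension-reduced inputs."""
    return [(shrink(x), y) for x, y in zip(inputs, teacher_labels)]

inputs = [list(range(8))]  # toy 8-sample "utterance"
labels = ["yes"]           # label distilled from the teacher
stage1 = distill_stage(labels, inputs)
```

Chaining several such stages, each with its own teacher-student pair, yields training data whose input dimension shrinks progressively, as in the paper's 1000ms-to-500ms experiment.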
FEED: Feature-level Ensemble for Knowledge Distillation
Title | FEED: Feature-level Ensemble for Knowledge Distillation |
Authors | SeongUk Park, Nojun Kwak |
Abstract | Knowledge Distillation (KD) aims to transfer knowledge in a teacher-student framework, by providing the predictions of the teacher network to the student network in the training stage to help the student network generalize better. It can use either a teacher with high capacity or an ensemble of multiple teachers. However, the latter is not convenient when one wants to use feature-map-based distillation methods. As a solution, this paper proposes a versatile and powerful training algorithm named FEature-level Ensemble for knowledge Distillation (FEED), which aims to transfer the ensemble knowledge using multiple teacher networks. We introduce a couple of training algorithms that transfer ensemble knowledge to the student at the feature-map level. Among the feature-map-based distillation methods, using several non-linear transformations in parallel for transferring the knowledge of the multiple teachers helps the student find more generalized solutions. We name this method parallel FEED, and experimental results on CIFAR-100 and ImageNet show that our method yields clear performance enhancements, without introducing any additional parameters or computations at test time. We also show the experimental results of sequentially feeding the teacher's information to the student, hence the name sequential FEED, and discuss the lessons obtained. Additionally, the empirical results of measuring the reconstruction errors at the feature-map level give hints for the enhancements. |
Tasks | |
Published | 2019-09-24 |
URL | https://arxiv.org/abs/1909.10754v1 |
https://arxiv.org/pdf/1909.10754v1.pdf | |
PWC | https://paperswithcode.com/paper/feed-feature-level-ensemble-for-knowledge |
Repo | |
Framework | |
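A hedged sketch of a parallel-FEED-style objective: the student's feature map is passed through one transformation per teacher and matched to that teacher's feature map, with the per-teacher losses summed. Real FEED uses learned non-linear transformations on feature maps of trained networks; here the transforms are toy element-wise functions and all names are illustrative.

```python
def l2(a, b):
    """Squared L2 distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def feed_loss(student_feat, teacher_feats, transforms):
    """Sum over teachers of the distance between the transformed student
    feature and that teacher's feature."""
    return sum(l2([t(v) for v in student_feat], tf)
               for t, tf in zip(transforms, teacher_feats))

student = [1.0, 2.0]
teachers = [[2.0, 4.0], [0.5, 1.0]]
transforms = [lambda v: 2 * v, lambda v: 0.5 * v]
loss = feed_loss(student, teachers, transforms)
```

Because the transformations exist only at training time, dropping them after training leaves the student with no extra parameters or computation at test time, as the abstract claims.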
UG$^{2+}$ Track 2: A Collective Benchmark Effort for Evaluating and Advancing Image Understanding in Poor Visibility Environments
Title | UG$^{2+}$ Track 2: A Collective Benchmark Effort for Evaluating and Advancing Image Understanding in Poor Visibility Environments |
Authors | Ye Yuan, Wenhan Yang, Wenqi Ren, Jiaying Liu, Walter J. Scheirer, Zhangyang Wang |
Abstract | The UG$^{2+}$ challenge in IEEE CVPR 2019 aims to evoke a comprehensive discussion and exploration about how low-level vision techniques can benefit high-level automatic visual recognition in various scenarios. In its second track, we focus on object or face detection under poor visibility caused by bad weather (haze, rain) and low-light conditions. While existing enhancement methods are empirically expected to help the high-level end task, that is observed to not always be the case in practice. To provide a more thorough examination and fair comparison, we introduce three benchmark sets collected in real-world hazy, rainy, and low-light conditions, respectively, with objects/faces annotated. To the best of our knowledge, this is the first and currently largest effort of its kind. Baseline results obtained by cascading existing enhancement and detection models are reported, indicating the highly challenging nature of our new data as well as the large room for further technical innovations. We expect large participation from the broad research community to address these challenges together. |
Tasks | Face Detection |
Published | 2019-04-09 |
URL | https://arxiv.org/abs/1904.04474v4 |
https://arxiv.org/pdf/1904.04474v4.pdf | |
PWC | https://paperswithcode.com/paper/ug2-track-2-a-collective-benchmark-effort-for |
Repo | |
Framework | |
Sequential Bayesian Detection of Spike Activities from Fluorescence Observations
Title | Sequential Bayesian Detection of Spike Activities from Fluorescence Observations |
Authors | Zhuangkun Wei, Bin Li, Weisi Guo, Wenxiu Hu, Chenglin Zhao |
Abstract | Extracting and detecting spike activities from fluorescence observations is an important step in understanding how neuron systems work. The main challenge lies in that the combination of ambient noise with dynamic baseline fluctuation often contaminates the observations, thereby deteriorating the reliability of spike detection. This may be even worse in the face of the nonlinear biological process, the coupling interactions between spikes and baseline, and the unknown critical parameters of an underlying physiological model, in which erroneous estimations of parameters will affect the detection of spikes, causing further error propagation. In this paper, we propose a random finite set (RFS) based Bayesian approach. The dynamic behaviors of the spike sequence, fluctuated baseline and unknown parameters are formulated as one RFS. This RFS state is capable of distinguishing the hidden active/silent states induced by spike and non-spike activities respectively, thereby \emph{negating the interaction role} played by spikes and other factors. Then, premised on the RFS states, a Bayesian inference scheme is designed to simultaneously estimate the model parameters, baseline, and crucial spike activities. Our results demonstrate that the proposed scheme can gain an extra 12% detection accuracy in comparison with the state-of-the-art MLSpike method. |
Tasks | Bayesian Inference |
Published | 2019-01-31 |
URL | http://arxiv.org/abs/1901.11418v1 |
http://arxiv.org/pdf/1901.11418v1.pdf | |
PWC | https://paperswithcode.com/paper/sequential-bayesian-detection-of-spike |
Repo | |
Framework | |
Combining Geometric and Topological Information in Image Segmentation
Title | Combining Geometric and Topological Information in Image Segmentation |
Authors | Hengrui Luo, Justin Strait |
Abstract | A fundamental problem in computer vision is image segmentation, where the goal is to delineate the boundary of an object in the image. The focus of this work is on the segmentation of grayscale images and its purpose is two-fold. First, we conduct an in-depth study comparing active contour and topology-based methods in a statistical framework, two popular approaches for boundary detection of 2-dimensional images. Certain properties of the image dataset may favor one method over the other, both from an interpretability perspective as well as through evaluation of performance measures. Second, we propose the use of topological knowledge to assist an active contour method, which can potentially incorporate prior shape information. The latter is known to be extremely sensitive to algorithm initialization, and thus, we use a topological model to provide an automatic initialization. In addition, our proposed model can handle objects in images with more complex topological structures, including objects with holes and multiple objects within one image. We demonstrate this on artificially-constructed image datasets from computer vision, as well as real medical image data. |
Tasks | Boundary Detection, Semantic Segmentation |
Published | 2019-10-10 |
URL | https://arxiv.org/abs/1910.04778v2 |
https://arxiv.org/pdf/1910.04778v2.pdf | |
PWC | https://paperswithcode.com/paper/combining-geometric-and-topological |
Repo | |
Framework | |